
Should we get in a lather over advent of AI?

Ian Ritchie Business HQ / The Herald June 29th, 2023


Artificial Intelligence could transform the economy but investors should be wary... bubbles do have a habit of bursting, writes Scottish Entrepreneur Ian Ritchie


I’ve been watching the recent coverage of powerful new Artificial Intelligence (AI) technology with particular interest, as I was responsible for the development and launch of the world’s first AI-powered program for personal computers forty years ago.


A few universities around the world started to explore AI back in the 1960s and one of the most prominent was the University of Edinburgh.


Edinburgh had its own name for this discipline – they called it Machine Intelligence – and two pioneers in the field, Donald Michie and Richard Gregory, set up a laboratory behind the Meadows in Edinburgh in 1963.


During the Second World War Donald Michie had worked with Alan Turing and Max Newman at the Bletchley Park codebreaking centre where they developed phenomenally complex early computational techniques to break German coded messages.


This effort was spectacularly successful, and the secret intelligence they obtained about German military activity is widely thought to have shortened the war by two years. Michie went on to become a pioneer in applying computing techniques to a variety of challenging tasks.


Regrettably, this field of research was crippled somewhat in 1973 by the publication of a report by Sir James Lighthill, which poured cold water on AI research and led the British Science Research Council to end all support for AI work in British universities. To its credit, though, Edinburgh University continued to support its breakthrough work.


I first came across Professor Michie in 1982 when he introduced me to a technology which his laboratory had been exploring based on an algorithm, ID3, which could automatically develop new rules consistent with a variety of factors.


The IBM personal computer (PC) had just been launched in 1981 and was widely recognised as the birth of the modern PC industry, opening the opportunity for innovative new applications that had been impractical on earlier computing systems.


In 1983 my company developed a program called Expert-Ease, based on the work from Professor Michie’s laboratory. You could enter a variety of relevant factors and their outcomes, and it would automatically generate the shortest possible set of rules consistent with all of these examples. You could choose as many factors as you liked, and it would always calculate an answer that fitted them all.


You could then put in new factors, and it would tell you what the result would be if these rules continued to be followed; it could be used to derive answers for any decision task, such as weather prediction, financial market behaviour, or medical diagnosis.
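The technique behind Expert-Ease was Ross Quinlan’s ID3 algorithm, which repeatedly splits the examples on the most informative factor to build the smallest decision tree consistent with them. The sketch below, in modern Python, is a hypothetical reconstruction for illustration only – the function names and toy weather data are my own assumptions, not the original program’s code.

```python
from collections import Counter
import math

def entropy(labels):
    """Shannon entropy of a list of outcome labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def build_tree(rows, labels, attrs):
    """Recursively split on the attribute with the largest information
    gain, yielding a small tree consistent with all the examples."""
    if len(set(labels)) == 1:          # all examples agree: make a leaf
        return labels[0]
    if not attrs:                      # no factors left: majority vote
        return Counter(labels).most_common(1)[0][0]

    def gain(a):
        split = {}
        for row, lab in zip(rows, labels):
            split.setdefault(row[a], []).append(lab)
        return entropy(labels) - sum(
            len(s) / len(labels) * entropy(s) for s in split.values())

    best = max(attrs, key=gain)
    branches = {}
    for value in set(row[best] for row in rows):
        sub = [(r, l) for r, l in zip(rows, labels) if r[best] == value]
        branches[value] = build_tree(
            [r for r, _ in sub], [l for _, l in sub],
            [a for a in attrs if a != best])
    return (best, branches)

def classify(tree, row):
    """Follow the induced rules to predict the outcome for a new case."""
    while isinstance(tree, tuple):
        attr, branches = tree
        tree = branches[row[attr]]
    return tree

# Toy examples: observed factors and their outcomes.
rows = [
    {"outlook": "sunny", "windy": "no"},
    {"outlook": "sunny", "windy": "yes"},
    {"outlook": "rainy", "windy": "no"},
    {"outlook": "rainy", "windy": "yes"},
]
labels = ["play", "play", "play", "stay"]
tree = build_tree(rows, labels, ["outlook", "windy"])
print(classify(tree, {"outlook": "rainy", "windy": "yes"}))  # stay
```

Given a new combination of factors, `classify` simply follows the induced rules down the tree – which is exactly the “put in new factors and it tells you the result” behaviour described above.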


I demonstrated this program to several people, including Bill Gates in his office at Microsoft – at that time based in a single three-storey office block in a Seattle lakeside suburb. He told me that it was ‘neat’ and that we would probably sell about 1500 copies of our product.


He was quite right – there were about 1500 people in the world – known as ‘early adopters’ – who wanted to experiment with our technology because, although it somehow seemed useful, it was ultimately a bit of a toy. You had to input all the data by hand which limited how complex a model you could build, but even at that level it still came out with results that, although correct, were often difficult to understand.


Today, of course, you don’t need to input all the source information yourself: the whole internet, with its enormous quantity of evidence, is available to today’s AI systems, which can collect it and use it to perform predictive analytics far faster, and often far more accurately, than any human can.

However, if it wasn’t always possible to understand the relatively simple rules derived by our modest program back in 1983, there is little hope of understanding the hugely complex algorithms calculated by today’s massive AI systems.


The recent arrival of GPT-4-powered ChatGPT has stunned the world, leading many observers to predict the loss of hordes of creative and administrative jobs in the future. The education system has been particularly challenged by its ability to automatically generate complex essays on request, and GPT-4 has even scored in the 90th percentile on the US Bar exam for aspiring lawyers.


This has caused serious anxiety among governments and authorities. Italy was so spooked that it actually banned the use of ChatGPT.


But all this has led to the latest new ‘bubble’ in technology investment.


The ideal circumstances for a technology bubble arise when it is becoming obvious that a new technology is likely to transform huge parts of the economy, but it is initially difficult to identify which propositions will be the winners and which the losers.


In the dotcom bubble around the turn of the millennium, many companies attracted investment at hugely inflated valuations. Many of these – for example pets.com, which sold pet food online – failed to find a sustainable business model, leaving would-be entrepreneurs bruised and beaten. Of course, the internet did go on to transform the economy: the five largest companies in the world today are all interactive technology businesses, and the few firms that did succeed have created massive returns for their investors.


Last year, the technology sector had a downturn with companies such as Meta and Alphabet sacking thousands of employees, so the arrival of AI is being welcomed enthusiastically by the technology investment industry to pep up their businesses. Once again it seems clear that AI technology will transform huge parts of the future economy, although once more it is unclear which will win, and which will lose.


Never mind; venture capitalists are now investing heavily in anything which claims to use AI, and anything which can claim to be AI-based can achieve huge valuations.

Would-be entrepreneurs will have to be very careful to avoid the collateral damage that happens when such a bubble bursts, as it almost certainly will.

