I first came into ‘contact’ with AI during my AI and Databases Master’s degree back in 1986–87. At that time it was all about Lisp, Prolog and ‘rules’-based encapsulation of knowledge, very much a ‘glacial’ period in the history of AI.
Apart from both being rather ‘difficult’ languages to work with (prefix ‘Polish notation’, anyone?), I also recall thinking: ‘How can you create an AI system capable of exhibiting real intelligence if you have to tell it explicitly what to do using rules? Surely you can never think of all the rules up front, and even if you were creating rule patterns, you are liable to miss some out.’ At that time AI certainly did not excite me, and I focused instead on the ‘Databases’ side of the course.
After 30-odd years and a career that ranged from RDBMS technology consulting to CRM, Telco BSS / OSS, Enterprise Architecture and Public Cloud, I decided to give AI another look. That was back in 2018, shortly after the seminal paper ‘Attention Is All You Need’ was published, and what grabbed my attention was all the talk about how far neural networks had come.
What I found blew my mind. The whole AI/ML scene had totally transformed since the ‘dark ages’ of my Master’s. For a start, the theory and resulting algorithms had advanced by leaps and bounds. This advancement was reflected in the ecosystem of languages, libraries and tools that brought the algorithms to life and within easy reach of AI practitioners.
Python, as an object-oriented language, fits the needs of prototyping well, supported by some excellent libraries such as NumPy, Pandas, scikit-learn, PyTorch and Transformers, to name but a few. These libraries help AI practitioners immensely by providing a high level of abstraction over what would otherwise be rather complex algorithmic coding. To a large degree, such libraries have shifted the focus from coding to data preparation and parameter tuning. The ‘difficult’ part of AI coding has been reduced to little more than calling the right library function and tuning hyper-parameters.
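To illustrate the point, here is a minimal sketch of what ‘calling the right library function and tuning hyper-parameters’ looks like in practice, using scikit-learn. The dataset (the bundled Iris toy set) and the hyper-parameter values are purely illustrative choices of mine, not a recommendation:

```python
# A whole ML workflow reduced to a few library calls.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Load a toy dataset; in real projects, data preparation is the bulk of the work.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# The algorithm itself is one constructor call; the practitioner's job is
# choosing hyper-parameters such as n_estimators and max_depth.
clf = RandomForestClassifier(n_estimators=100, max_depth=3, random_state=42)
clf.fit(X_train, y_train)

acc = clf.score(X_test, y_test)  # held-out accuracy, a value between 0 and 1
print(f"test accuracy: {acc:.2f}")
```

Compare this with the 1980s, where implementing even a simple learning algorithm meant writing it from scratch.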
The field of AI has now become ‘alive’, with ‘machines’ that learn from the world around them, in the form of data. Long gone are the days when AI behaviour was based on defining ‘rules’. The AI of today, to a large degree, mimics human learning behaviour: it learns from the world around it and ‘adjusts’ its behaviour (‘outputs’).
This pattern is exemplified in the case of Large Models (Language, Audio, Video), where the ‘Attention’ mechanism and other relevant developments (e.g. Diffusion) have created AI systems that follow the human learning paradigm even more closely: learning from ‘reading’ publicly available documents on the web, down to needing to be told what is ‘good’ and ‘bad’ for them. What is even more exciting is that the behaviour we get out of them functionally resembles a reasonably intelligent and trained ‘school’ graduate, with capabilities increasing with the size of the model. You could almost think of the size of the model as representing the level of ‘education’ the model has attained!
What is even more interesting for me personally is that these models need to be guided (‘prompting’) as if they were human, using language you would use to guide a school graduate on their first day at a clerical job.
All this made me question what has driven this ‘explosion’ of AI intelligence and capabilities. Why now, and what happened between my postgraduate venture into the field and my re-engagement some 30 years later?
I suggest that several developments played a part:
- First and foremost, the wide deployment and adoption of the internet and the World Wide Web as the ubiquitous medium for connecting the globe and sharing information. This has truly enabled information sharing, and we all know that a ‘problem shared is a problem halved’ (the problem here being how to make better AI). Suddenly, manuals, white papers, how-tos, ideas and so on are available to all, mostly for free. The plethora of excellent free training videos on YouTube and free books never ceases to amaze me (and was a key factor in helping me pick up rapidly from where I left off).
- The adoption of open source, accelerated by the adoption of the internet and the World Wide Web. Open source tools, languages and products existed before the internet, but were very much lonely, small-time efforts. The internet helped explode both the development and the acceptance of open source products and solutions, from operating systems to AI/ML ideas (as encapsulated in white papers), libraries and frameworks. The enthusiasts amongst us who had great ideas did not have to fork out large sums of money to buy the essential building blocks to realise them – benefiting all of us in the process. Why would people dedicate their free time to open source projects? I believe that the excellent book ‘Drive’, by Daniel Pink, explains what drives people to offer their knowledge and effort for free.
- Cloud Computing. The utility-based pricing model of the public cloud contributed significantly to the development of LxMs (with their significant, yet temporary, demands for training capacity), as well as to the development of the modern AI/ML libraries. Through the Public Cloud, teams or individuals can have (almost) instantaneous access to significant amounts of infrastructure and pay only for what they use, for as long as they need it. They do not have to make significant upfront investments to acquire or expand capacity (or be resource-constrained when extra temporary capacity is needed). They certainly do not have to worry about hosting it or having the skills to operate it. In this context, focusing on the need at hand (learning or researching) becomes a lot more attainable.
These key factors have enabled AI to flourish to levels that would have been unthinkable back in 1986 and have made its application a reality today. We have only started scratching the surface of use cases across all industries and functions, from Sales to Procurement and HR.
However, concerns around the adoption of AI have already started surfacing, mainly centred on societal and environmental impacts, as well as costs (despite the Public Cloud utility model). I will try to visit these in the next blog post.
What have your journey into AI and your experiences in this field been so far? Do you agree on the key enablers that helped AI flourish? What is your view of the concerns and risks? I’d love to hear what you think.
Nikos





