Generative AI
Generative AI has become a hot topic for technologists, investors, and policymakers since the release of AI models such as Stable Diffusion and ChatGPT. It is not a new concept, however; the machine-learning techniques behind the technology have been evolving over the past decade (think chatbots).
Generative AI refers to a type of artificial intelligence that can generate “synthetic” data – information that is artificially manufactured. It uses deep learning models, such as Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), or Transformer-based language models (a good example is OpenAI’s GPT-3), to produce new data based on patterns learned from large datasets. For example, a generative AI model trained on a dataset of images of cats could be used to generate new, realistic images of cats that it has never seen before. Similarly, a generative AI model trained on a large corpus of text could be used to generate new, coherent text that matches the style and content of the training data.
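To make this concrete, here is a minimal sketch of generating new text with a pretrained Transformer language model via the Hugging Face transformers library. The model choice (GPT-2, a freely downloadable relative of GPT-3) and the prompt are illustrative assumptions, not part of the original article:

```python
# A minimal sketch: text generation with a pretrained Transformer LM.
# Requires: pip install transformers torch
from transformers import pipeline

# GPT-2 stands in here because GPT-3 is only available via API.
generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt with text that matches the style
# and patterns it learned from its training corpus.
result = generator("Generative AI refers to", max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```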
Generative AI & Transformer Architecture
Two recent advances played a critical part in generative AI going mainstream: Transformers and the breakthrough language models built on them. Transformers make it possible to train large AI models without having to label all of the training data in advance. Transformer models can be trained on billions of pages of text, and the resulting models can provide significantly more in-depth analysis of the source data. The Transformer architecture is the basis for AI models such as GPT-3 and BERT.
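The “no advance labelling” point is worth illustrating: in self-supervised language-model training, the labels come from the text itself, since the target at each position is simply the next token. A minimal sketch (tokenisation is simplified to word splitting here; real models use subword tokenisers):

```python
# How Transformer language models get training "labels" for free:
# the target at each position is just the next token in the text.
text = "transformers can be trained on billions of pages of text"
tokens = text.split()  # simplification; real models use subword tokenisers

# Input/target pairs come from shifting the sequence by one position;
# no human annotation of the corpus is needed.
inputs = tokens[:-1]   # ["transformers", "can", ..., "pages", "of"]
targets = tokens[1:]   # ["can", "be", ..., "of", "text"]

for x, y in zip(inputs, targets):
    print(f"given '{x}' -> predict '{y}'")
```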
Attention is All You Need
Transformers unlocked a new paradigm called “attention”, introduced in the paper “Attention Is All You Need” (Vaswani et al.). Attention enables AI models to track relationships between words across pages, chapters and books, rather than just within individual sentences. Transformer-based models can be used to analyse not only text but also code, DNA, complex chemical structures and much more. Transformers underpin ChatGPT, whose name derives from Generative Pre-trained Transformer: “Generative” because ChatGPT can generate new text; “Pre-trained” because models such as ChatGPT are trained on a large corpus of text before being fine-tuned for specific tasks; and “Transformer” because ChatGPT uses a Transformer-based neural network architecture to process input text and generate output text.
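At its core, the attention mechanism of Vaswani et al. is a small formula: each position scores its relevance to every other position and takes a weighted average of their values, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. A minimal NumPy sketch of scaled dot-product attention (the shapes and random values are purely illustrative):

```python
# Scaled dot-product attention from "Attention Is All You Need":
#   Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)           # relevance of every token to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over key positions
    return weights @ V                        # weighted average of the values

rng = np.random.default_rng(0)                # illustrative: 4 tokens, dimension 8
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```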
Predecessors
Several earlier model families attempted to tackle the challenges of generative AI. The most popular of these were Recurrent Neural Networks (RNNs), Long Short-Term Memory networks (LSTMs), and Gated Recurrent Units (GRUs). Transformers, however, outshine RNNs, LSTMs and GRUs in multiple aspects, making them the new gold standard in AI.
The table below compares the Transformer architecture to RNNs, LSTMs and GRUs, and shows why it performs significantly better in terms of efficiency, training speed and performance.

| Aspect | RNNs / LSTMs / GRUs | Transformers |
| --- | --- | --- |
| Efficiency | Process tokens sequentially; each step depends on the previous one | Self-attention processes all positions in parallel |
| Training speed | Slow; little parallelism within a sequence | Fast; highly parallelisable on modern hardware |
| Performance | Long-range dependencies fade as sequences grow | Direct attention between any two positions, however far apart |

(Source: Vaswani et al., “Attention Is All You Need”)
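The training-speed gap in the table comes from sequential dependence: an RNN must finish step t before it can start step t+1, while self-attention relates all positions in one matrix operation. A toy sketch of that difference (illustrative only, not a benchmark; the weights and sizes are made up):

```python
import numpy as np

n, d = 6, 4  # illustrative: 6 tokens, hidden size 4
rng = np.random.default_rng(1)
x = rng.normal(size=(n, d))

# RNN-style: an inherently sequential loop; step t depends on step t-1,
# so the work cannot be spread across positions.
W = rng.normal(size=(d, d))
h = np.zeros(d)
for t in range(n):
    h = np.tanh(x[t] + h @ W)

# Attention-style: one matrix product relates every position to every
# other at once, which is why Transformers train so much faster on GPUs.
scores = x @ x.T / np.sqrt(d)
weights = np.exp(scores); weights /= weights.sum(axis=-1, keepdims=True)
out = weights @ x  # all n positions computed in parallel
```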
The Road Ahead
Generative AI has the potential to affect human productivity on a scale even greater than the agricultural and industrial revolutions. ChatGPT’s surge in popularity over the past few months has caused ripples amongst technologists for this very reason. The capabilities of Generative AI models such as ChatGPT are so striking that experts, including Geoffrey Hinton, have warned that we might be building Generative AI systems we could someday struggle to control. AI language models can fabricate facts, express biases, and generate text that is unpleasant or disturbing.
Some of the biggest challenges for Generative AI on the road ahead are:
Hallucinations – However advanced, AI models depend on their training data, and can confidently produce answers that are plausible but false. Worse, text created by Generative AI can find its way back into the source data on which future versions of Generative AI models are trained, compounding the problem.
Deepfakes – Fake content (such as videos and audio of celebrities) can lead to the spread of misinformation, which is a serious societal problem.
Copyright – Copyright is a major area of concern for generative AI models trained on data scraped from the internet. Content that creators never explicitly shared can end up being used to generate new material, making copyright a thorny issue for Generative AI.