17 Aug

It’s over halfway through 2024 and AI disruption is in full swing. It’s still overhyped and pretty much mislabelled. “AI” is the buzzword that has replaced “Online” in the mouths of “everyone with a mouth”.

There are lots of amazing use cases and experiments, with some obvious productivity uplifts. We want to ask, though: where’s the grand revolution at, halfway through the year?

I’m going to cover some of the AI trends and topics that might give us a clue. These are the challenges and potential business opportunities. 

Here we go.

Synthetic Data

It turns out that there’s not enough training data to keep training AI models. Having eaten the internet, AI is hungry for more. This is where synthetic data comes in.

Synthetic data (generated from GPTs) mimics real-world data sets. This is a cost-effective and scalable alternative to collecting and processing real data. You’d use it when you don’t have enough real data. It’s used to train, fine-tune and validate machine learning models. Synthetic data avoids issues related to privacy and data scarcity.
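As a rough sketch of the idea (in Python; the call_llm function below is a hypothetical stand-in for whichever GPT-style model or API you’d actually use), generating synthetic data is mostly prompting in a loop:

```python
# Minimal sketch: prompt an LLM for labelled synthetic examples.
# `call_llm` is a hypothetical placeholder, not a real vendor API;
# here it returns a canned string so the loop runs end to end.
import json

def call_llm(prompt: str) -> str:
    # A real implementation would call a GPT-style model here.
    return '{"review": "Arrived late and the box was damaged.", "sentiment": "negative"}'

TEMPLATE = (
    "Write one short, realistic customer review of an online order and label its "
    "sentiment. Reply as JSON with keys 'review' and 'sentiment'."
)

synthetic_rows = [json.loads(call_llm(TEMPLATE)) for _ in range(3)]  # scale the range up
print(synthetic_rows)  # rows that mimic real data without touching a real customer
```

Scale the loop up, filter out the junk, and you have a training set that never involved a real customer.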

There are two potential revenue streams from this need. The first is creating and modelling synthetic data that mimics the real world. The second is the value of real-world data itself. You can make it, or you can buy it.

For example, Reddit has shut off the pipe to bots and scrapers using its data for free. Many other sites are doing the same. They’re starting to charge a premium for data that used to be scraped for nothing.

Synthetic data has a few downsides. There’s a potential lack of realism and accuracy: it might not capture all the nuances of real-world data, making it less effective. It could introduce bias, or amplify bias already present. Models could perform well on synthetic data and poorly in the real world. It can also lack the variability and unpredictability of real data.

Either way, data has value, and more of it is needed.

Energy

The feasibility of AI operations at all levels is reliant on energy. High-performance GPUs like NVIDIA chips consume significant amounts of it. This poses a substantial challenge for traditional data centres. They may not be able to currently support the power and cooling requirements of AI platforms.

Where is this power going to come from? What’s it going to cost? Many of the same questions apply as for the feasibility of electric vehicles. Can a country generate and distribute that energy? What are the environmental and infrastructure impacts?

The business opportunities may be in cooling solutions, energy production and environmental offsets, alongside new server infrastructure.

Infrastructure

The infrastructure layer is seeing significant activity, partly due to the rise of startups focused on vector databases. These are essential for managing and querying the high-dimensional data used in AI applications.

AI infrastructure is crucial to support scalability, efficiency, and performance of AI systems. Specialised hardware and data storage handle the increasing demands of AI workloads.
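For a feel of what a vector database does, here’s a toy nearest-neighbour search in Python, with random vectors standing in for real embeddings; actual products add indexing, persistence and scale on top of exactly this kind of query:

```python
# Toy vector search: store embeddings, find the closest matches to a query.
# The vectors are random placeholders for real document embeddings.
import numpy as np

rng = np.random.default_rng(0)
doc_vectors = rng.normal(size=(1000, 384))                      # 1000 "documents"
doc_vectors /= np.linalg.norm(doc_vectors, axis=1, keepdims=True)

query = rng.normal(size=384)
query /= np.linalg.norm(query)

scores = doc_vectors @ query                                    # cosine similarity
top_5 = np.argsort(scores)[::-1][:5]                            # five closest documents
print(top_5, scores[top_5])
```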

Not enough tokens

We need more AI tokens (the units of text a model reads and writes) for more complex tasks. Current context limits restrict the application of AI to things like extended conversations, detailed summarisation and legal document analysis.
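To see why this bites, here’s a minimal token count in Python, assuming the tiktoken package (different models use different tokenisers, so treat the numbers as indicative):

```python
# Count tokens in a long document and compare against a context window.
# Assumes the `tiktoken` package; the "contract" text is a stand-in.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
contract = "The parties agree to the following terms. " * 5000   # stand-in legal text

tokens = enc.encode(contract)
print(f"{len(tokens)} tokens")
if len(tokens) > 8192:
    print("Too long for an 8k context window; it has to be chunked or summarised.")
```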

More tokens could enable real language translation with context retention. They could enable research spanning multiple documents. They could support tutoring systems capable of holding long-term, context-aware educational interactions.

We could start seeing more contextual and personalised customer support: a long-term advisor rather than short answers. An example might be medical analysis of full patient histories, with more accurate diagnostic suggestions.

Native AI Hardware

The Humane AI Pin is getting a few negative reviews, though it’s early days for these concepts. The AI Pin is a ‘wearable’ that acts as a personal assistant. It has the usual features: making calls, sending messages and capturing moments. The difference is that you wear it on your shirt. You interact by voice, touchpad and hand gestures when a tiny monochrome screen is active.

Devices that incorporate AI with simpler interaction methods are sure to come. Some of them might actually be good. There’s space for opportunity. Apple has been touting AI on the iPhone, so Siri is looking nervous.

On the iPhone it’s Apple’s Neural Engine (machine learning, Face ID, Animoji and augmented reality). On Android it’s Google’s Tensor SoC (photography, voice recognition and real-time translation).

There’s also NVIDIA’s Jetson (robotics, IoT, smart cities), Oura Ring (fitness wearable), and Amazon Echo Frames (smart glasses with Alexa). 

Open Source Large Language Models

Workin’ for the Man – Roy Orbison

Don’t like “The Man” when it comes to Large Language Models? You can always roll your own.

Proprietary Large Language Models (LLMs) are those owned by specific organisations and companies. Examples are OpenAI’s GPT-4, Google’s PaLM & Gemini, Anthropic’s Claude, Meta’s LLaMA and Mistral.

Open source Large Language Models (LLMs) allow unrestricted access, modification and redistribution. Some examples are GPT-Neo and GPT-J by EleutherAI, BLOOM by BigScience, T5 by Google, OPT by Meta, BERT by Google, and others. Looking at that list, it seems that rolling your own still has origins with the big providers.

Open source models allow for a broader range of applications and customisation. The tuning data could be more specific to a business or industry: medical or legal texts, for example. Deploying solutions on company cloud architecture could control performance, cost and data security, right down to specific chips or hardware.

Developers could make their own APIs (Application Programming Interfaces). These would connect to business programs specific to the company or industry. They could connect to other APIs and program libraries. They could also connect to multi-modal (video, audio, text, image) applications.
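As a minimal sketch of that kind of in-house API (assuming the transformers, fastapi and pydantic packages; ‘gpt2’ is just a small stand-in for whichever open-source model a business would actually fine-tune and host):

```python
# A tiny company-internal text-generation API around an open-source model.
# Run with: uvicorn app:app   (assuming this file is app.py)
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
generator = pipeline("text-generation", model="gpt2")   # local open-source model

class Query(BaseModel):
    prompt: str
    max_new_tokens: int = 50

@app.post("/generate")
def generate(query: Query):
    # The model runs on infrastructure the company controls,
    # so prompts and data never leave its own cloud or hardware.
    result = generator(query.prompt, max_new_tokens=query.max_new_tokens)
    return {"completion": result[0]["generated_text"]}
```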

Can you roll your own, then? Yes, with a massive caveat. You are most likely to start with an open source model; starting from first principles would mean taking on advanced expertise, infrastructure, compute costs and data acquisition.

There are business opportunities here in the open source models. Leveraging and customising existing open-source models seems to strike a balance: a viable development path without getting overwhelmed building from scratch.

AI Mind Reading

It’s possible that AI mind reading is a thing. It’s called neural decoding. It refers to interpreting brain signals to understand thoughts and intentions. For example, Elon Musk’s company Neuralink.

Machine learning algorithms analyse neural activity captured by EEG, MRI, or other techniques. AI algorithms decode mental states, words, and even visual images.
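Stripped right back, decoding is a classification problem. A heavily simplified Python sketch, where the ‘EEG’ features are random placeholders rather than real recordings:

```python
# Toy neural decoding: classify intended actions from (placeholder) EEG features.
# Real pipelines extract features from EEG/MRI signals before a step like this.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 32))        # 200 trials x 32 channel features (random stand-ins)
y = rng.integers(0, 2, size=200)      # intended action: 0 = "left", 1 = "right"

decoder = LogisticRegression(max_iter=1000).fit(X, y)
print(decoder.predict(X[:5]))         # decoded intentions (meaningless here, real with real data)
```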

The business opportunities are in assistive technologies, such as helping people who are paralysed or who can’t speak. There are opportunities for device control and games influenced by how you feel. There are opportunities in sleep improvement, mental health and stress management.

There could be tools that analyse responses to ads and products. These would provide deeper insights into consumer preferences and decisions, increasing conversion rates and engagement in real time.

There are possibilities for neuro-biometric locks or lie detection. There are opportunities for music, art and interactive media driven by responses. As with cybersecurity, there is an industry yet to be born to govern the ethics and privacy here, especially around the personal preferences mentioned above.

Small Molecule Drugs

AI is being used in the development of small molecule drugs. There are a lot of gains to be made in the pharmaceutical industry. AI platforms help identify potential drug candidates, predicting efficacy and toxicity. They can help optimise chemical structures.
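A toy version of the idea in Python, with random bits standing in for molecular fingerprints and random labels standing in for assay results:

```python
# Toy property prediction: screen candidate compounds for "toxicity".
# Fingerprints and labels are random placeholders, not real chemistry.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
fingerprints = rng.integers(0, 2, size=(500, 128))    # placeholder molecular fingerprints
toxic = rng.integers(0, 2, size=500)                  # placeholder assay labels

model = RandomForestClassifier(n_estimators=100).fit(fingerprints, toxic)
candidate = rng.integers(0, 2, size=(1, 128))
print(model.predict_proba(candidate))                 # predicted toxicity probability
```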

Companies like Atomwise, Exscientia, and Insilico Medicine use machine learning. They sift data, accelerating the discovery process and reducing costs. This aids in the identification of novel compounds and repurposing of existing drugs.

Data quality can be an issue, and of course regulatory approval is key. Still, streamlining research and bringing effective treatments to market faster is attractive.

Middle Tier Fragmentation

At the very top of AI there are clear offerings aligned with big tech companies. In the middle tier the choices are not as clear. That fragmentation does foster innovation and competition. The challenge is to create leading products and standards that implement AI well, so organisations can choose the right tools in a crowded market.

As with LLMs, we may see this continue in the hands of incumbent big tech, even when customising solutions. 

Beyond Large Language Models

As AI research progresses, companies are exploring other methods beyond Large Language Models. Technologies such as Graph Neural Networks (GNNs) are gaining traction. They can process data with relational structures. This makes them ideal for social networks and molecular modelling.

There are reinforcement learning models, such as DeepMind’s AlphaGo, focused on areas requiring sequential decision-making, and related specialised systems like AlphaFold.

Neuromorphic computing, inspired by the human brain, promises efficient and adaptive AI systems.

There are hybrid models that combine transformers (like GPT) with other architectures, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), for multimodal tasks.

Smaller LLM Models

The development of smaller, less costly Large Language Models (LLMs) is happening. Tailored to perform specific tasks, they provide strong performance where it counts. Narrowing the scope requires less computational power and fewer resources.

Gaming and World Building

AI is enhancing non-playable characters (NPCs). This makes them more intelligent and responsive to player actions. The world(s) will feel more real. This may lead to greater personalisation opportunities.

Machine learning is also being used to test and optimise games, improving game quality.

It makes me think of the work NVIDIA is doing modelling factories. It makes me think of other worlds, workplaces and situations to model.

The Built World

These are some of the challenges that business and society face as more of the genie escapes the bottle. With a bit of thinking, they are also some of the greater opportunities in a changing landscape.

Virgil Reality…
