In the fast-paced landscape of 2023, organizations embraced artificial intelligence (AI) and its related technologies, and diverse AI applications surged. According to McKinsey, AI adoption across global industries reached 55%. However, as we step into 2024, organizations recognize that while AI is critical for competitiveness and operational efficiency, its practical, everyday integration demands nuanced consideration. The coming year will see AI technologies enter a new phase of advancement and expansion, but the focus will shift toward the specific components and practical applications crucial for incorporating AI seamlessly into daily business operations.
Here are three predictions for the new year as AI continues to advance and gain prevalence within organizations’ everyday operations.
Smaller, specialized large multimodal models (LMMs) will become more popular
In 2024, LMMs and text-based interfaces will become integral components of nearly every software product. Interactive text, voice, and image-based interfaces, driven by these models, will be woven into a wide range of applications and used for everything from controlling an application to answering users' questions about it via chatbots. LMMs will soon redefine how users engage with and extract value from the digital landscape. This convergence of data control and conversational capability will fundamentally alter the user experience, transforming interfaces into intuitive, interactive platforms that serve diverse user needs seamlessly and intelligently.
In the next year specifically, organizations will shift from large language models (LLMs) toward multimodal models that accept a combination of input types beyond text. These models will enable new kinds of interactions that broaden and simplify the use of generative AI across more business use cases. That is not to say LLMs will stop playing a large role in innovation. Apple recently introduced local execution of LLMs: an approach that uses flash memory efficiently to run large language models in environments with limited memory capacity. By windowing and bundling data more efficiently, their technique makes it practical to run LLMs locally on mobile devices. As more devices become capable of locally running LLMs, and eventually LMMs, techniques like these will let innovation and broad usage skyrocket.
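The core idea behind such memory-constrained inference, keeping only a hot working set of weights resident in fast memory and loading the rest from flash on demand, can be loosely illustrated with a fixed-budget LRU cache. This is a deliberately simplified sketch, not Apple's actual method (their paper describes more sophisticated windowing and row-column bundling); the `WeightCache` class and block names here are hypothetical:

```python
from collections import OrderedDict

class WeightCache:
    """Toy LRU cache: keeps at most `budget` weight blocks in 'fast memory',
    loading the rest on demand from 'flash' (here, a plain dict)."""

    def __init__(self, flash_store: dict, budget: int):
        self.flash = flash_store       # slow storage holding all weights
        self.budget = budget           # max blocks resident in fast memory
        self.resident = OrderedDict()  # fast memory, ordered by recency
        self.flash_loads = 0           # count of slow loads (the cost to minimize)

    def get(self, block_id: str):
        if block_id in self.resident:
            self.resident.move_to_end(block_id)  # mark as recently used
            return self.resident[block_id]
        self.flash_loads += 1                    # simulate a slow flash read
        if len(self.resident) >= self.budget:
            self.resident.popitem(last=False)    # evict least recently used
        self.resident[block_id] = self.flash[block_id]
        return self.resident[block_id]

# Simulate repeated access to a small working set of layers.
flash = {f"layer{i}": f"weights-{i}" for i in range(10)}
cache = WeightCache(flash, budget=3)
for _ in range(5):
    for layer in ("layer0", "layer1", "layer2"):  # hot set fits in the budget
        cache.get(layer)
print(cache.flash_loads)  # → 3: only the initial loads; later accesses are hits
```

The payoff is the same one the research targets: when the access pattern is predictable and the working set fits the budget, expensive flash reads happen only once per block rather than on every access.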
In addition, smaller, purpose-driven generative models will take on more business focus. This shift reduces the large data requirements of model training while allowing for increased privacy, security, and customization. With the broader push toward collaborative development, notably open-source technology, building these specialized LMMs becomes easier, allowing teams to reap the technology's full benefits. LMMs designed for specific domains like healthcare, education, or sustainability serve those fields with tailored, domain-specific expertise and capabilities. Open-source solutions, for their part, advocate transparency, accessibility, and collective contribution to software development. Where these two concepts intersect, purpose-driven initiatives gain the collaborative momentum of open source.
Integrating purpose-driven LMMs into open-source frameworks, or open-sourcing the models themselves, gives broader access to specialized tools and knowledge. This combination fosters innovation and community-driven development in areas that require domain expertise, enabling solutions that are more accessible, adaptable, and ethically aligned. The most popular models will be those that combine this openness with a well-documented lineage of information sources. That emphasis on sourcing ensures a higher level of trust and reliability, fostering a culture of transparency and accountability within the realm of AI-driven solutions.
Privacy regulations will be top of mind for businesses
The data privacy landscape, both in the U.S. and internationally, is becoming increasingly intricate and difficult to manage. With privacy regulations now set at the state level in Florida, Delaware, and Texas, and President Biden's executive order governing AI security and privacy, navigating data privacy rules will be a major challenge for most organizations. Internationally, the picture is even more complex: the EU's December provisional agreement on the Artificial Intelligence Act set out a detailed framework that countries around the world are likely to copy.
Even beyond President Biden’s executive order and the EU Artificial Intelligence Act, the combination of state-specific laws and the data subject rights regimes enforced by various countries means companies planning to integrate AI into their operations have a lot to consider. Many are grappling with abundant data and content scattered across multiple systems and find themselves at a loss when complying with regulations such as GDPR and CCPA. In the new year, these organizations will be slow to fully comply with privacy regulations as they struggle to manage their wealth of data, a risky position that will leave more businesses falling behind and dealing with the consequences of non-compliance.
AI will transform value from unstructured data
This year, the real value of AI will lie in its capacity to help people get more from unstructured information in internal use cases: parsing extensive volumes of documents, generating concise yet informative summaries, and enabling Q&A interactions with those documents. AI handles unstructured data by applying techniques and algorithms that extract valuable insights from seemingly chaotic information. The transformative power of these tools is their ability to comprehend material deeply and present concise yet comprehensive overviews. The result: important content like contracts, HR policy documents, product schematics, and physical supply chain documents can be easily queried and understood by everyday employees, without requiring experts from HR, Legal, or Compliance to translate them.
LLMs are at the heart of AI’s prowess with unstructured data, deciphering the nuances and context of human language. They can sift through unstructured text to extract key ideas, data, and themes, which enables categorization, sentiment analysis, and summarization, transforming raw text into structured, actionable insights. AI’s document-understanding capability can further dissect information, identify patterns, and extract crucial data points for rapid retrieval. Harnessed together, these capabilities turn unstructured data into insights that drive informed decision-making, automate processes, enhance customer experiences, and foster innovation across industries.
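The pipeline described above, ingest unstructured text, extract structure, surface the key points, can be sketched without any model at all. The toy below uses naive frequency-based extractive summarization in plain Python; production systems use LLMs, but the shape of the transformation (raw text in, ranked structured output) is the same. The `summarize` function, stopword list, and sample contract text are all illustrative assumptions:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "are",
             "for", "on", "with", "that", "this", "by", "be", "as", "it"}

def summarize(text: str, max_sentences: int = 2) -> list[str]:
    """Naive extractive summary: rank sentences by how many frequent
    (non-stopword) words they contain, then return the top ones in
    original document order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence: str) -> int:
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower())
                   if w not in STOPWORDS)

    top = set(sorted(sentences, key=score, reverse=True)[:max_sentences])
    return [s for s in sentences if s in top]  # keep document order

doc = ("The contract renews annually. Either party may terminate the contract "
       "with 30 days notice. Payment is due within 15 days of invoice. "
       "The contract covers software support services.")
print(summarize(doc, max_sentences=2))
```

Swapping the frequency scorer for an LLM call turns this same skeleton into the contract-summarization and document Q&A use cases described above; the surrounding structure of splitting, scoring, and re-assembling content is what makes the output queryable rather than chaotic.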
As the AI whirlwind continues, the stage is set for an even more transformative year. The wave of advancements poised to define 2024 not only signals AI’s continued evolution but heralds an era of responsible integration, transformative capability, and ethical consideration that will redefine the technological landscape in ways yet unimagined.