
Multi-Cloud in the World of AI

By Matt Wallace

ChatGPT has emerged as a groundbreaking innovation, taking the collective imagination of technologists by storm and promising to revolutionize how humans and machines interact. Companies and developers worldwide are harnessing its power: many applications are in heavy development, and it has quickly become an essential component of the artificial intelligence (AI) ecosystem. The tool is powerful and useful in its own right, and it has sparked enthusiasm for bigger and better things.

The recent release of GPT-4 has further solidified its importance as a cutting-edge tool for various industries. It is essential to understand the capabilities and constraints of ChatGPT and how it may impact your IT strategy, including your multi-cloud strategy.

GPT-4 Diversifies How Business Is Done 

The sheer scale of GPT-4, widely reported to be far larger than its predecessors, has led to unprecedented levels of fluency and versatility. GPT-4 is already being deployed in a wide range of real-world applications, from improving customer service interactions to generating high-quality content for marketing purposes. Its flexibility and powerful capabilities have attracted the attention of companies across various industries eager to harness its potential to streamline processes, reduce costs, and drive innovation.

The announcement of a plugin architecture is also significant: it allows ChatGPT to query third-party services such as Expedia for travel or Wolfram Alpha for computation. This feature promises to become a key component of an AI-driven ecosystem that empowers people across a much wider variety of activities.
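For a concrete sense of the pattern, the sketch below approximates a plugin-style call using the OpenAI Python SDK's function-calling interface. The real plugin mechanism is driven by a hosted manifest and an OpenAPI spec, and the lookup_flights function here is purely hypothetical:

```python
# Sketch only: approximates the plugin pattern with the OpenAI Python SDK's
# function-calling interface (openai 0.27.x era). The real plugin mechanism
# is driven by a hosted ai-plugin.json manifest and an OpenAPI spec.
import json
import openai

openai.api_key = "sk-..."  # placeholder

# Hypothetical third-party "tool" the model is allowed to call.
def lookup_flights(origin: str, destination: str) -> dict:
    # In a real plugin this would be an HTTP call to the provider's API.
    return {"origin": origin, "destination": destination, "fares": ["$312", "$389"]}

functions = [{
    "name": "lookup_flights",
    "description": "Find flight fares between two cities",
    "parameters": {
        "type": "object",
        "properties": {
            "origin": {"type": "string"},
            "destination": {"type": "string"},
        },
        "required": ["origin", "destination"],
    },
}]

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Find me a flight from SFO to JFK"}],
    functions=functions,
    function_call="auto",
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    args = json.loads(message["function_call"]["arguments"])
    print(lookup_flights(**args))
```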

In the realm of chatbots, companies like Zendesk have integrated ChatGPT to provide instant, contextually relevant responses to customer inquiries, significantly improving customer satisfaction and reducing response times.

ChatGPT Forces Companies to Reexamine Their Technology 

ChatGPT has changed the way content is created. Businesses can now generate unique content for blogs, social media, and marketing materials with minimal human intervention, a shift made possible by GPT-4's improved coherence and fluency. ChatGPT is also used for language translation, enabling companies to communicate effectively with international audiences and break down language barriers. Perhaps even more intriguingly, ChatGPT has shown an incredible facility with code, and many organizations are working to leverage it with increasing independence through tools like LangChain and AutoGPT.
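As a rough illustration of the translation use case, here is a minimal sketch built on the 2023-era LangChain API; class and method names may differ in newer releases, and the model name and sample text are placeholders:

```python
# Minimal translation chain, assuming the 2023-era LangChain API
# (the langchain and openai packages); newer releases rename these classes.
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = ChatOpenAI(model_name="gpt-4", temperature=0)

prompt = PromptTemplate(
    input_variables=["text", "language"],
    template="Translate the following marketing copy into {language}:\n\n{text}",
)

chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run(text="Our new product launches next week!", language="Japanese"))
```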

Within days of the initial launch, I found myself writing a speech-to-text and text-to-speech front-end interface for ChatGPT, purely for fun and empowered by ChatGPT code generation. I quickly ran into challenges linking services together: I wanted to use Google's Text-to-Speech and Twilio's voice calling (running inside AWS), but Azure is the only public cloud provider offering the OpenAI APIs.
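The friction shows up even at the SDK level. The sketch below uses the 2023-era openai Python library (v0.27.x); the resource name, deployment name, and API version are placeholders:

```python
# Same library, different plumbing: pointing the 2023-era openai SDK (0.27.x)
# at an Azure OpenAI resource instead of the standard OpenAI endpoint.
import openai

# Standard OpenAI endpoint would be:
#   openai.api_key = "sk-..."
#   openai.ChatCompletion.create(model="gpt-4", messages=[...])

# Azure OpenAI endpoint:
openai.api_type = "azure"
openai.api_base = "https://my-resource.openai.azure.com/"  # placeholder resource
openai.api_version = "2023-05-15"
openai.api_key = "AZURE_OPENAI_KEY"  # placeholder

response = openai.ChatCompletion.create(
    engine="my-gpt4-deployment",  # Azure uses a deployment name, not a model name
    messages=[{"role": "user", "content": "Transcribed caller audio goes here"}],
)
print(response["choices"][0]["message"]["content"])
```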

The partnership between OpenAI and Microsoft helps ease the adoption pain of larger enterprises, which can now access OpenAI models securely; large enterprises often have stringent policies and controls to manage their security and compliance requirements. But the exclusivity of that partnership has forced enterprises to look again at their assumptions about multi-cloud.

The OpenAI/Microsoft partnership may be a bellwether for exclusive AI partnerships that increase the diversity of cloud services, a trend that was already on the rise. If so, the era of every major public cloud being substantially similar may be over. Microsoft's exclusive inference APIs for ChatGPT may promise enterprises a range of benefits, including enhanced performance, scalability, and security; but if your cloud strategy didn't include Azure, what now? And what happens when there are 20 AI-powered services scattered across four clouds that you want to use in your application portfolio?

ChatGPT Speeds Up Multi-Cloud Adoption Strategies 

By adopting a multi-cloud strategy, businesses can leverage the strengths of different cloud providers to optimize their use of ChatGPT and other AI tools, ensuring seamless integration with existing infrastructure and services. This approach also mitigates the risk of vendor lock-in, promotes flexibility, and allows companies to take advantage of the best features and pricing options from different providers.
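One way to keep that flexibility is to put a thin abstraction between application code and the provider-specific SDKs. The sketch below is hypothetical (the class names and routing table are illustrative, not a real library), but it shows the shape of the idea:

```python
# Hypothetical provider-agnostic layer: route each model to whichever cloud
# hosts it, so application code is not welded to a single provider.
from abc import ABC, abstractmethod

class ChatProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class AzureOpenAIProvider(ChatProvider):
    def complete(self, prompt: str) -> str:
        # In practice: call an Azure-hosted OpenAI deployment here.
        return f"[azure-openai] response to: {prompt}"

class BedrockProvider(ChatProvider):
    def complete(self, prompt: str) -> str:
        # In practice: call an AWS-hosted model here.
        return f"[bedrock] response to: {prompt}"

# Route by capability, cost, or compliance needs rather than hard-coding one cloud.
ROUTES = {"gpt-4": AzureOpenAIProvider(), "claude": BedrockProvider()}

def complete(model: str, prompt: str) -> str:
    return ROUTES[model].complete(prompt)

print(complete("gpt-4", "Summarize our Q2 cloud spend."))
```

The design choice matters less than the separation itself: when the next exclusive model lands on a different cloud, only the routing layer changes, not the application portfolio.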

Despite its many strengths, ChatGPT does have limitations when it comes to fine-tuning. For certain enterprise use cases, such as sentiment analysis, personalized marketing, or chatbots that can credibly represent a business, fine-tuning is critical to achieving optimal results, and the inability to fine-tune the model can hold businesses back. Not only is fine-tuning not yet available, but even when it arrives, will the right data be available in Azure to fine-tune an enterprise ChatGPT model? Data locality may be critical, given that some fine-tuning processes will involve sensitive data that needs to remain in a carefully architected environment that can guarantee compliance and security.

ChatGPT has captured the enthusiasm, the excitement, and, in many cases, the fear and uncertainty of much of the technology ecosystem. One thing we can be sure of is that massive investments in AI will yield incredible innovations across the board. If those innovations are tied to specific cloud providers, as many likely will be, then architects have a whole new reason to reconsider their multi-cloud approach.