Speaking at the event, Andreas Braun, Chief Technology Officer (CTO) at Microsoft Germany and Lead Data & AI STU, said that the release of GPT-4 is imminent and that the model would be “multimodal”. “We will introduce GPT-4 next week, there we will have multimodal models that will offer completely different possibilities – for example videos,” Braun was cited as saying.

GPT-4 is the next iteration of OpenAI’s Large Language Model (LLM), which the CTO called a “game changer”. Currently, the AI-powered ChatGPT and other GPT-3.5-powered technologies, as well as Bing Chat, accept only text input from users and display all answers and results as text. That could soon change with the Microsoft-backed multimodal versions of the LLM: the release of GPT-4 could pave the way for users to interact through various means, such as text, images, sound, and perhaps even video.

“In the meantime, the technology has come so far that it basically ‘works in all languages’: you can ask a question in German and get an answer in Italian. With multimodality, Microsoft(-OpenAI) wants to ‘make the models comprehensive’,” Braun added.

Earlier this week, Microsoft announced that OpenAI’s ChatGPT is now available in preview in the Azure OpenAI Service. “Now with ChatGPT in preview in Azure OpenAI Service, developers can integrate custom AI-powered experiences directly into their own applications, including enhancing existing bots to handle unexpected questions, recapping call center conversations to enable faster customer support resolutions, creating new ad copy with personalized offers, automating claims processing, and more,” Microsoft said in a statement. “Cognitive services can be combined with Azure OpenAI to create compelling use cases for enterprises. For example, see how Azure OpenAI and Azure Cognitive Search can be combined to use conversational language for knowledge base retrieval on enterprise data.”
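To give a sense of what that integration looks like in practice, the sketch below assembles a request for the Azure OpenAI Service chat completions REST endpoint. This is a minimal illustration, not Microsoft’s reference code: the resource name, deployment name, and API version are hypothetical placeholders, and a real application would also send its `api-key` header (omitted here) when posting the request.

```python
import json

# Placeholder values -- a real application would substitute details from
# its own Azure OpenAI resource and deployment.
RESOURCE = "my-resource"            # hypothetical Azure OpenAI resource name
DEPLOYMENT = "my-chatgpt"           # hypothetical model deployment name
API_VERSION = "2023-03-15-preview"  # assumed preview API version

def build_chat_request(user_message: str) -> tuple[str, str]:
    """Assemble the endpoint URL and JSON body for a chat completion call."""
    url = (
        f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
        f"{DEPLOYMENT}/chat/completions?api-version={API_VERSION}"
    )
    # ChatGPT-style models take a list of role-tagged messages; the system
    # message frames the scenario (here, the call-center use case Microsoft
    # mentions) and the user message carries the actual request.
    body = json.dumps({
        "messages": [
            {"role": "system", "content": "You are a call-center assistant."},
            {"role": "user", "content": user_message},
        ]
    })
    return url, body

url, body = build_chat_request("Summarize this support call for me.")
print(url)
print(body)
```

Building the URL and body separately from sending them keeps the example runnable offline; in production the pair would be posted with any HTTP client, and the assistant’s reply read from the JSON response.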