OpenAI has introduced GPT-4 Turbo

OpenAI has introduced GPT-4 Turbo - more powerful and significantly cheaper than GPT-4.

At its first developer conference today, OpenAI introduced GPT-4 Turbo, an improved version of its flagship large language model. According to OpenAI, the new model is both more powerful and more cost-effective than GPT-4.

The GPT-4 Turbo language model will be available in two versions: one handles text only, while the other understands both text and images. The text-only model is available in preview through the API starting today. Both versions will become publicly accessible "in the coming weeks."

Using GPT-4 Turbo costs $0.01 per 1,000 input tokens (roughly 750 words) and $0.03 per 1,000 output tokens. Tokens are segments of raw text; for example, the word "fantastic" is broken down into the tokens "fan," "tas," and "tic." Image processing is priced by image size: processing a 1080 x 1080 pixel image, for instance, costs $0.00765.
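As a rough illustration of how these per-token prices add up, here is a minimal Python sketch that counts tokens with OpenAI's open-source tiktoken tokenizer and estimates the cost of a single text request. The prompt, the assumed output length, and the use of the "cl100k_base" encoding are illustrative assumptions, not part of the announcement.

```python
# Rough cost estimate for a GPT-4 Turbo text request, using the prices quoted
# above ($0.01 per 1,000 input tokens, $0.03 per 1,000 output tokens).
import tiktoken

INPUT_PRICE_PER_1K = 0.01   # USD per 1,000 input tokens
OUTPUT_PRICE_PER_1K = 0.03  # USD per 1,000 output tokens

# "cl100k_base" is the tiktoken encoding used by the GPT-4 model family.
enc = tiktoken.get_encoding("cl100k_base")

prompt = "Summarize the key announcements from OpenAI's first developer conference."
prompt_tokens = len(enc.encode(prompt))
expected_output_tokens = 500  # assumption made purely for this estimate

cost = (prompt_tokens / 1000) * INPUT_PRICE_PER_1K \
     + (expected_output_tokens / 1000) * OUTPUT_PRICE_PER_1K

print(f"{prompt_tokens} input tokens, ~{expected_output_tokens} output tokens "
      f"-> estimated cost ${cost:.5f}")
```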

"We've optimized performance, allowing us to offer GPT-4 Turbo at one-third the price for input tokens and half the price for output tokens compared to GPT-4," OpenAI stated in its blog.

GPT-4 Turbo's knowledge base has been updated and now extends to April 2023, so the model can answer queries about more recent events; GPT-4, by contrast, was trained on web data only up to September 2021. Trained on vast numbers of examples from the internet, GPT-4 Turbo has learned to predict how likely certain words are to appear given the patterns and semantic context of the surrounding text. For instance, if a typical email ends with "Looking forward to...," GPT-4 Turbo can complete it with "…your response."

Additionally, GPT-4 Turbo has an expanded context window (the amount of text considered during generation). A larger context window allows the model to better understand the meaning of queries and provide more relevant responses without straying off-topic. GPT-4 Turbo has a context window of 128,000 tokens, which is four times larger than that of GPT-4. This is the largest context window among all commercially available AI models and surpasses the context window of Anthropic's Claude 2 model, which supports up to 100,000 tokens. Anthropic claims to be experimenting with a context window of 200,000 tokens but has not made these changes publicly available yet. A context window of 128,000 tokens is roughly equivalent to 100,000 words or 300 pages of text, similar in size to novels like "Wuthering Heights" by Emily Brontë, "Gulliver's Travels" by Jonathan Swift, or "Harry Potter and the Prisoner of Azkaban" by J.K. Rowling.
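As a small illustration of what a 128,000-token window means in practice, the following sketch counts a document's tokens with tiktoken and checks whether it would fit in a single request. The file name and the amount of space reserved for the model's reply are hypothetical.

```python
# Sketch: check whether a long document fits in GPT-4 Turbo's 128,000-token
# context window before sending it in one request.
import tiktoken

CONTEXT_WINDOW = 128_000      # GPT-4 Turbo context window, per the announcement
RESERVED_FOR_OUTPUT = 4_000   # assumption: leave room for the model's reply

enc = tiktoken.get_encoding("cl100k_base")

def fits_in_context(document: str) -> bool:
    """Return True if the document plus reserved output space fits in the window."""
    return len(enc.encode(document)) + RESERVED_FOR_OUTPUT <= CONTEXT_WINDOW

with open("wuthering_heights.txt", encoding="utf-8") as f:  # hypothetical file
    novel = f.read()

print("Fits in one request:", fits_in_context(novel))
```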

GPT-4 Turbo can be instructed to return valid JSON. According to OpenAI, this is convenient for web applications that exchange data, such as those sending data from a server to be displayed on a web page. More generally, GPT-4 Turbo offers more flexible settings that will be useful for developers; more details can be found in OpenAI's blog.
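A minimal sketch of requesting JSON output through the Chat Completions API with the openai Python SDK (v1.x) might look like the following; the preview model name "gpt-4-1106-preview" and the prompts are assumptions for illustration.

```python
# Sketch: asking GPT-4 Turbo for guaranteed-parseable JSON output.
# response_format={"type": "json_object"} tells the model to return valid JSON only.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # assumed preview model name
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "You are an API that replies in JSON."},
        {"role": "user", "content": "List three facts about GPT-4 Turbo as a JSON object."},
    ],
)

data = json.loads(response.choices[0].message.content)  # valid JSON by construction
print(data)
```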

"GPT-4 Turbo performs better than our previous models when it comes to tasks that require strict adherence to instructions, such as generating specific formats (e.g., 'always respond in XML'). Additionally, GPT-4 Turbo is more likely to return correct function parameters," the company reports.

GPT-4 Turbo can also be combined with DALL-E 3, text-to-speech, and vision capabilities, expanding the range of tasks it can handle.
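For the vision side, a sketch of sending an image to the vision-capable preview model might look like this; the model name "gpt-4-vision-preview" and the image URL are placeholders for illustration.

```python
# Sketch: passing an image URL alongside a text prompt to the vision-capable variant.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # assumed preview model name
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe what is in this image."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/photo.jpg"}},  # placeholder URL
        ],
    }],
    max_tokens=300,
)

print(response.choices[0].message.content)
```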

OpenAI has also announced that it will provide copyright protection guarantees for corporate users through the Copyright Shield program. "We will now protect our customers and cover the costs if they face legal claims related to copyright infringement," the company stated in its blog. Microsoft and Google have previously implemented similar measures for users of their AI models. Copyright Shield will cover the publicly available functions of ChatGPT Enterprise and OpenAI's developer platforms.

For GPT-4, the company has launched a fine-tuning program, offering developers more tools for customizing AI for specific tasks. According to the company, unlike the GPT-3.5 fine-tuning program, the GPT-4 fine-tuning program will require more control and guidance from OpenAI, mainly due to technical challenges.

The company has also doubled the tokens-per-minute rate limit for all paying GPT-4 customers. Pricing remains the same: $0.03 per 1,000 input tokens and $0.06 per 1,000 output tokens for the GPT-4 model with an 8,000-token context window, or $0.06 per 1,000 input tokens and $0.12 per 1,000 output tokens for the GPT-4 model with a 32,000-token context window.

by ELDEVELOP
