OpenAI has presented its flagship neural network GPT-4 Turbo, which turns out to be more capable and several times cheaper than GPT-4 (the model behind the premium version of the popular chatbot ChatGPT, whose audience has reached 100 million active users per week). The company also introduced a program to protect business customers from copyright claims.
The GPT-4 Turbo language model will be available in two versions: one that works with text only and another that handles both text and images. The text version is already accessible through the API, and the company promises that both versions will become publicly available "in the coming weeks."
OpenAI says that GPT-4 Turbo costs significantly less to use than GPT-4: $0.01 per 1,000 input tokens (about 750 words) and $0.03 per 1,000 output tokens. Input tokens are pieces of raw text: the word "fantastic", for example, might be split into the tokens "fan", "tas" and "tic". The model generates output tokens based on the input tokens.
“We have optimized performance so we can offer GPT-4 Turbo at a third of the price for input tokens and half the price for output tokens compared to GPT-4,” OpenAI said in a blog post.
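The per-1,000-token rates above translate into request costs as in the following sketch. The helper function is hypothetical, but the rates are the ones quoted in the announcement (GPT-4's own rates are given later in this article).

```python
def request_cost(input_tokens, output_tokens, input_price, output_price):
    """Cost in dollars of one request, given prices per 1,000 tokens."""
    return input_tokens / 1000 * input_price + output_tokens / 1000 * output_price

# GPT-4 Turbo rates: $0.01 per 1K input tokens, $0.03 per 1K output tokens.
turbo = request_cost(1000, 1000, 0.01, 0.03)
# GPT-4 (8K context) rates for comparison: $0.03 in, $0.06 out.
gpt4 = request_cost(1000, 1000, 0.03, 0.06)
print(f"Turbo: ${turbo:.2f}  GPT-4: ${gpt4:.2f}")
```

For a request with 1,000 tokens each way, that works out to $0.04 on GPT-4 Turbo versus $0.09 on GPT-4.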
As for the image-processing version of the model, pricing will depend on the size of the image: processing a 1080 × 1080 pixel image with GPT-4 Turbo, for example, will cost $0.00765.
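The $0.00765 figure is consistent with the image being billed as 765 tokens at the $0.01-per-1,000-token input rate. The sketch below reproduces that number using OpenAI's published vision pricing scheme (a fixed base cost plus a cost per 512-pixel tile after rescaling); the constants here are illustrative, not quoted in this announcement.

```python
import math

def image_tokens(width, height, base=85, per_tile=170):
    """Approximate vision token count for an image, following OpenAI's
    published tile-based pricing: fit within 2048x2048, scale the shorter
    side down to at most 768 px, then count 512x512 tiles."""
    scale = min(1.0, 2048 / max(width, height))   # fit in a 2048px square
    width, height = width * scale, height * scale
    scale = min(1.0, 768 / min(width, height))    # shorter side <= 768px
    width, height = width * scale, height * scale
    tiles = math.ceil(width / 512) * math.ceil(height / 512)
    return base + per_tile * tiles

# 1080x1080 -> scaled to 768x768 -> 2x2 tiles -> 85 + 4*170 = 765 tokens
tokens = image_tokens(1080, 1080)
print(tokens, tokens / 1000 * 0.01)  # 765 tokens, $0.00765 at the input rate
```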
The model's knowledge base, from which it draws when generating answers, has also been updated: GPT-4 was trained on web data up to September 2021, while GPT-4 Turbo's knowledge cutoff is April 2023.
In addition, the new model received an expanded context window of 128,000 tokens (the amount of text the model can take into account during generation, corresponding to roughly 100,000 words or 300 pages of text), four times larger than GPT-4's, allowing it to better understand queries and generate more relevant responses while staying on topic. This is believed to be the largest context window of any AI model available today.
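The word and page figures above follow from the article's own rule of thumb (1,000 tokens is about 750 words); the words-per-page figure used below is an assumption for illustration.

```python
# Back-of-the-envelope check of the context-window figures quoted above.
tokens = 128_000
words = tokens * 750 / 1000   # 96,000 words, i.e. roughly 100,000
pages = words / 320           # assuming ~320 words per page -> ~300 pages
print(int(words), round(pages))
```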
As reported on the OpenAI blog, the new GPT-4 Turbo model is also capable of producing valid JSON output, offers more flexible settings, and generally performs better, especially on tasks that require carefully following instructions, such as generating responses in a specific format (for example, "always respond in XML").
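A request enabling the new JSON mode might look like the following sketch. The payload shape follows OpenAI's Chat Completions API; the model identifier and prompt are illustrative placeholders, and in practice the body would be POSTed over HTTP with an API key.

```python
import json

# Sketch of a Chat Completions request body with JSON mode enabled.
# The "response_format" field is the switch described above.
payload = {
    "model": "gpt-4-1106-preview",  # GPT-4 Turbo preview identifier
    "response_format": {"type": "json_object"},
    "messages": [
        {"role": "system", "content": "Respond only with valid JSON."},
        {"role": "user", "content": "List three primary colors."},
    ],
}

body = json.dumps(payload)
# The serialized body is what an HTTP client would send to /v1/chat/completions.
print(body)
```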
The company has also doubled the tokens-per-minute rate limit for all paid GPT-4 users. Prices, however, remain unchanged: $0.03 per 1,000 input tokens and $0.06 per 1,000 output tokens for the GPT-4 model with an 8,000-token context window, or $0.06 per 1,000 input tokens and $0.12 per 1,000 output tokens for the GPT-4 model with a 32,000-token context window.
OpenAI has also launched a Copyright Shield program aimed at protecting business customers from potential claims related to copyright infringement when using OpenAI products. Under this program, OpenAI will pay legal fees for customers who use its developer platform's publicly available tools and the commercial version of its ChatGPT Enterprise chatbot.
The protection, however, does not apply to the free and Plus versions of ChatGPT. It is also unclear whether the program provides protection against claims related to training data used in the company's generative AI models.
AI models like ChatGPT, GPT-4 and DALL-E 3 are trained on datasets (books, artwork, etc.) that may contain copyrighted material or carry restrictive licenses. This raises questions about the extent to which such models can reproduce exact portions of their training data, which could lead to copyright violations.
Many large companies, such as IBM, Microsoft and Amazon, as well as the Getty Images, Shutterstock and Adobe platforms, have already expressed their willingness to protect clients from potential intellectual property claims. OpenAI has now joined their ranks, calling for the development of appropriate protection mechanisms for users.
The popular chatbot ChatGPT has reached the impressive milestone of 100 million weekly active users. This was announced by OpenAI CEO Sam Altman during the company's first developer conference, held in San Francisco.
According to him, ChatGPT, launched only about a year ago, quickly captured the audience's attention and reached 100 million monthly active users within two months; that figure now reflects weekly activity on the service.
In addition, more than 2 million developers use the platform for their projects, including representatives from more than 92% of Fortune 500 companies.