OpenAI Releases GPT-4.1 With Improved Coding

OpenAI says the GPT-4.1 model family can understand prompts of up to 1 million tokens and features improved coding over GPT-4o

OpenAI logo displayed on a smartphone. Image credit: Unsplash

OpenAI has released its GPT-4.1 model, the successor to its flagship GPT-4o, which it said improves upon the earlier model in “just about every dimension”, including coding and instruction-following.

The model and two smaller versions, GPT-4.1 Mini and GPT-4.1 Nano, can all process up to one million tokens of context, referring to the text, images or videos included in a prompt, far more than the 128,000-token limit for GPT-4o.

The company said the model is able to attend to information across the full 1-million-token context limit, and is more reliable than GPT-4o at noticing relevant text and ignoring distractors across both long and short context lengths.
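For readers unfamiliar with tokens, the short sketch below estimates how much of a context window a given prompt consumes using the open-source tiktoken library; the choice of the o200k_base encoding (the one used by GPT-4o) is an assumption, as the article does not specify GPT-4.1's tokenizer.

    # Sketch: estimating how much of a one-million-token context window a prompt uses.
    # Uses the open-source tiktoken library; o200k_base is the GPT-4o encoding and
    # is assumed here, since GPT-4.1's exact tokenizer is not stated in the article.
    import tiktoken

    encoding = tiktoken.get_encoding("o200k_base")

    prompt = "Summarise this repository and list every TODO comment."
    tokens_used = len(encoding.encode(prompt))

    CONTEXT_LIMIT = 1_000_000  # reported GPT-4.1 context window
    print(f"{tokens_used} tokens used, {CONTEXT_LIMIT - tokens_used} remaining")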

A computer screen displays lines of code. Image credit: Unsplash

Improved abilities

The new model also consumes 26 percent less computing power than GPT-4o, a consideration that came to prominence after the success of DeepSeek’s high-performance and ultra-efficient model in January.

The three new models, which are available to developers via OpenAI’s application programming interface (API), also have a refreshed knowledge cutoff of June 2024.
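As a rough illustration of what that API access looks like, the sketch below shows one way a developer might call the new models through OpenAI’s Python SDK; the specific model identifier and the assumption that an OPENAI_API_KEY environment variable is set are illustrative details, not taken from the article.

    # Sketch: calling one of the new GPT-4.1 models through OpenAI's Python SDK
    # (pip install openai). Assumes an OPENAI_API_KEY environment variable and
    # that the model is exposed under the identifier "gpt-4.1-mini".
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4.1-mini",
        messages=[
            {"role": "system", "content": "You are a concise coding assistant."},
            {"role": "user", "content": "Write a function that reverses a singly linked list."},
        ],
    )

    print(response.choices[0].message.content)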

GPT-4.1 scored about 21 percentage points higher than GPT-4o and 27 points higher than GPT-4.5 on coding benchmarks, OpenAI said.

It said improved instruction-following and long context comprehension make the model family more effective for the AI agents it has begun launching.

Chief executive Sam Altman said “developers seem very happy” with the model’s coding abilities.

The company said it would turn off the GPT-4.5 preview available via the API in July, as the new model offers “improved or similar performance on many key capabilities at much lower cost and latency”.

Shift to GPT-5

The GPT-4.5 preview, announced earlier this year, used a process called post-training, in which human feedback is incorporated to improve responses and refine the nuances of how the model interacts with users.

With GPT-5, expected later this year, OpenAI has said it plans to take a different approach by combining its GPT-series models with its o-series “reasoning” models, with the ChatGPT chatbot itself deciding which model to use.

ChatGPT currently offers users a choice of which model they would like to use, something OpenAI has said is overly complex.

OpenAI is in the process of shifting from a non-profit to a for-profit model, a requirement for it to receive some of its funding from investors, but is being sued by co-founder and competitor Elon Musk over the move.