OpenAI appears to be adopting a facade of openness by offering access to its models through Hugging Face Skills. Where Meta and Mistral openly share their developments, Codex offers a scenario in which you fine-tune and deploy models yourself, but under close supervision. The mechanics are as follows: via AGENTS.md you issue a command to Codex, such as 'fine-tune Qwen3-0.6B.' The system proposes a hardware configuration (a t4-small is suited to a demonstration run rather than serious workloads), generates launch scripts, submits the task to Hugging Face Jobs, estimates the budget, and produces a report. Training runs on Hugging Face GPUs, and the finished model is pushed to your Hugging Face Hub. The service claims support for production methods such as supervised fine-tuning, direct preference optimization, and RL alignment for models from 0.5B to 7B parameters, with conversion to GGUF. Delegating ML experiments to Codex looks attractive: monitoring, evaluation, and reporting can all be offloaded. Codex itself, we are assured, will make 'more independent decisions,' which in practice boils down to following an algorithm.
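Concretely, the delegation happens through a plain-text instruction file that the agent reads before acting. A hypothetical AGENTS.md entry for the workflow described above might look like the following; the headings and parameter names are illustrative assumptions, not taken from OpenAI's or Hugging Face's documentation:

```markdown
# AGENTS.md — ML fine-tuning delegation (illustrative sketch)

## Task: fine-tune Qwen3-0.6B
- Method: supervised fine-tuning (SFT) on our instruction dataset
- Hardware: propose a GPU flavor (t4-small only for a smoke test;
  pick a larger flavor for real runs) and estimate cost before submitting
- Execution: generate the training script and submit it to Hugging Face Jobs
- Output: push the resulting checkpoint to our Hugging Face Hub repo,
  optionally converted to GGUF
- Reporting: return loss curves, evaluation metrics, and total GPU spend
```

The format itself is trivial; the significance is that each bullet becomes a billable action on someone else's infrastructure, which is exactly where the cost and control questions arise.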

Stripping away the public-relations wrapper, the picture becomes clearer. Full access to these capabilities requires a paid Hugging Face account (Pro, Team, or Enterprise) and a corresponding token; Codex itself comes as part of a paid OpenAI subscription. In this way OpenAI is pushing the market towards its proprietary solutions while presenting them as 'openness,' a move that forces other major players with closed-system business models to reconsider their positions. The base models, however, remain under Codex's control. This raises security questions: who is responsible for bugs in 'open' models, and how free are you really when your infrastructure and, essentially, your models depend on paid services from OpenAI and Hugging Face? Could this lead to data leakage when running third-party scripts, or to uncontrolled cost escalation during scaling?

This release of Codex is an attempt to occupy a new niche: automation of ML processes under the guise of openness. For businesses it offers a potential way to accelerate R&D, but at the cost of a new form of dependency on proprietary platforms and of hidden expenses. When deciding on integration, CEOs must soberly assess whether this 'openness' is turning into yet another expensive subscription with non-obvious risks to operations and to control over their own AI developments. If you are aiming for real independence in AI development, be skeptical of offers where 'freedom' means choosing from a limited set of paid tools with a hidden tie-in to the vendor's infrastructure.

Why this matters: OpenAI's Codex offering on Hugging Face presents businesses with a seemingly convenient path to automate ML workflows, but it introduces a new layer of dependency on proprietary services. Executives must carefully weigh the potential for accelerated R&D against the risks of hidden costs and reduced control over their AI initiatives. True AI independence may lie in solutions that offer genuine transparency and avoid vendor lock-in.

Tags: OpenAI, Hugging Face, AI Tools, AI in Business, Fine-tuning