Latest AI News
We continue our series on alternatives to transformers. In our AI paper of the week, we dive into Anthropic’s groundbreaking paper on natural language autoencoders.
The proximate cause of today’s op-ed is OpenAI’s deprecation of their finetuning APIs. For years, OpenAI stood out among the big labs for their finetuning support, and countless talks, content pieces, and AI engineers promoted some variant of “get o1 performance at 4o prices,” insisting that finetuning was an important part of the toolkit.
The artificial intelligence coding revolution comes with a catch: it's expensive. Claude Code, Anthropic's terminal-based AI agent that can write, debug, and deploy code autonomously, has captured the imagination of software developers worldwide.
Building Blocks for Foundation Model Training and Inference on AWS
For a long time, "scaling" in foundation models mostly meant one thing: spend more compute on pre-training and capabilities rise. That intuition was supported by empirical work such as Kaplan et al.