The year 2025 has marked a turning point in software development, with AI's role in engineering coming under significant scrutiny. Early in the year, the tech world witnessed what many called a real-time experiment pitting AI's coding capabilities against the skills of human developers. While AI initially seemed to have the upper hand, the shift from what's been dubbed "vibe coding" to "context engineering" has underscored the continuing importance of human insight in software creation. The latest Thoughtworks Technology Radar highlights this trend, showcasing new tools and methods designed to help teams manage context more effectively when working with large language models (LLMs) and AI agents. This change signals a broader shift in the industry's understanding of AI progress: it is not just about scaling computation or speeding up processes, but about mastering the nuances of context.

Back in February 2025, Andrej Karpathy introduced the term "vibe coding," which quickly stirred debate. The term describes a looser, less precise way of coding with AI assistance, relying heavily on intuition, or "vibes," rather than structured logic. Thoughtworks' own internal discussions showed skepticism about vibe coding's long-term viability. By April, concerns about its imprecision and the proliferation of anti-patterns (inefficient or problematic coding habits) had become apparent. Users' growing demands pushed AI models to their limits, revealing reliability issues as prompts expanded and complexity grew.

This experience has driven the industry to focus on engineering context carefully. Tools like Claude Code and Augment Code illustrate this push by emphasizing "knowledge priming": providing AI with the right background information to produce more consistent, reliable outputs. This approach not only reduces the need for extensive rewrites but also boosts overall productivity.
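To make the "knowledge priming" idea concrete, here is a minimal sketch of front-loading a prompt with curated background material so the model works from the same ground truth on every call. The `prime_context` function, its parameters, and the sample documents are illustrative assumptions, not the actual mechanism of Claude Code or Augment Code.

```python
# Sketch: "knowledge priming" assembles relevant background material into the
# prompt before the task itself. All names here are hypothetical.

def prime_context(task: str, knowledge: dict[str, str], budget: int = 4000) -> str:
    """Build a primed prompt: background documents first, the task last."""
    sections = []
    used = 0
    for title, doc in knowledge.items():
        snippet = doc[: max(0, budget - used)]  # stay within a rough size budget
        if not snippet:
            break  # budget exhausted; remaining documents are dropped
        sections.append(f"## {title}\n{snippet}")
        used += len(snippet)
    sections.append(f"## Task\n{task}")
    return "\n\n".join(sections)

prompt = prime_context(
    "Add input validation to the signup endpoint.",
    {
        "Architecture notes": "The service is a Flask app behind nginx.",
        "Team conventions": "All handlers return JSON; errors use RFC 7807.",
    },
)
```

The point of the sketch is the ordering and curation: the stable background comes first and the task last, so repeated calls differ only in the task, which tends to make outputs more consistent.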
One surprising lesson from using generative AI on legacy codebases is that less specific context sometimes helps. Abstracting the AI's view away from the messy details of old systems opens a wider solution space, enabling the AI to be more creative and generative.

The rise of agentic systems, AI agents designed to act autonomously, has further complicated the context challenge. Unlike simple scripted bots, these agents demand ongoing human involvement to navigate complex and dynamic environments. Several emerging technologies, including agents.md, Context7, and Mem0, aim to tackle these issues by anchoring agents to a reliable "reference application" or ground truth. Experimenting with teams of coding agents has also shown promise: rather than overloading a single agent with large amounts of context, distributing tasks across agents can reduce complexity and improve performance.

As these practices evolve, industry standards like the Model Context Protocol (MCP) and the Agent2Agent (A2A) protocol are gaining traction as ways to unify how AI models and agents access and share context. Whether these protocols become universal remains uncertain, but they highlight the need for structured collaboration in complex AI ecosystems. On a human level, simple practices like curated shared instructions for software teams remain surprisingly effective for aligning efforts.

Looking ahead, the software development landscape in 2025 is rich with both opportunity and challenge. Agile methodologies may need to adapt to balance flexibility with the solid contextual foundations that AI systems require. Despite ongoing fears about AI replacing jobs, the renewed focus on context places software engineers firmly at the center of innovation. Their ability to experiment, collaborate, and learn will be crucial in shaping the future of software engineering.
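The team-of-agents pattern described above, where each agent receives only the context relevant to its own task rather than the whole picture, can be sketched roughly as follows. The `Agent` class, the `dispatch` function, and the task names are all hypothetical illustrations, not the API of any real agent framework.

```python
# Sketch: distributing tasks across a small team of agents, each holding only
# the narrow context its task needs. Names and structure are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    context: list[str] = field(default_factory=list)

    def run(self, task: str) -> str:
        # A real agent would call an LLM here; this stub records the pairing.
        return f"{self.name} handled '{task}' with {len(self.context)} context items"

def dispatch(tasks: dict[str, list[str]], pool_size: int = 3) -> list[str]:
    """Assign each task, with only its own context, to an agent round-robin."""
    agents = [Agent(f"agent-{i}") for i in range(pool_size)]
    results = []
    for i, (task, ctx) in enumerate(tasks.items()):
        agent = agents[i % pool_size]
        agent.context = ctx  # narrow, task-specific context, not the whole codebase
        results.append(agent.run(task))
    return results

results = dispatch({
    "migrate auth module": ["auth module notes"],
    "update CI config": ["ci.yaml", "runner notes"],
}, pool_size=2)
```

The design choice worth noting is that context is attached per task, not per conversation: no single agent ever carries the combined context of every task, which is the complexity-reduction the article describes.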