Code, Ether, and Identity: The Architecture of a New World — AI Development Overview 2026
Some news stories simply inform. Others illuminate the direction in which reality is moving. Today’s selection is decidedly of the second type. If you read carefully into each piece, you begin to notice that they are all speaking about the same thing, just from different angles. AI is ceasing to be a function we call upon when needed and is becoming the environment in which we exist.
Perhaps the most significant signal came from NVIDIA and a coalition of telecom giants — Cisco, Nokia, Ericsson, and T-Mobile. They announced the development of 6G as infrastructure built from the ground up for artificial intelligence. This is not just the “next generation of communication” that will let us download movies in a second. It’s about something else entirely: the network is ceasing to be a passive pipe for data delivery and is becoming an active computing device. The cell tower transforms into a distributed GPU, and its coverage area becomes a computational resource. Jensen Huang stated it plainly: classical communication networks are becoming AI computing infrastructure. And this changes everything, because such a network is needed not so much by us, but by millions of autonomous devices — self-driving cars, industrial robots, drones — for which reaction speed is critically important. You cannot send a signal from a vehicle speeding down the highway to the cloud and wait for a response when milliseconds count. The intelligence must be right there, at the access point.
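The latency argument is easy to make concrete with back-of-envelope arithmetic. The speeds and round-trip times below are illustrative assumptions, not measured figures from the NVIDIA announcement:

```python
# How far does a car travel while waiting for an inference round trip?
# All numbers here are illustrative assumptions, not measured figures.

def distance_during_round_trip(speed_kmh: float, round_trip_ms: float) -> float:
    """Metres travelled at speed_kmh during a round_trip_ms network delay."""
    speed_ms = speed_kmh * 1000 / 3600          # km/h -> m/s
    return speed_ms * (round_trip_ms / 1000)    # ms -> s

# A car at 120 km/h: assumed ~100 ms cloud round trip vs. ~5 ms at the edge.
cloud = distance_during_round_trip(120, 100)
edge = distance_during_round_trip(120, 5)
print(f"cloud: {cloud:.2f} m, edge: {edge:.2f} m")  # cloud: 3.33 m, edge: 0.17 m
```

Three metres of blind travel per decision is the difference the article is pointing at: with inference at the tower rather than in a distant data centre, the same decision costs centimetres.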
But if the network becomes a distributed computer, it’s logical to assume that our digital identities should also gain mobility. And here, an equally important shift is occurring. Anthropic released the Import Memory tool, which allows users to transfer accumulated context from ChatGPT to Claude. It might seem like simply a useful feature for those who decide to switch assistants. But in reality, it’s the first step towards a standard for digital identity portability. Anthropic is effectively saying: “Take your data about habits, communication style, preferences, and come to us.” This breaks the old model of vendor lock-in, where a user was tied to one platform like a social network, because switching meant losing history. Now, the history belongs to the person, not the model. And competition shifts to the plane of real quality: who thinks faster, understands deeper, computes more efficiently.
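To make “identity portability” tangible: neither OpenAI nor Anthropic has published a formal schema for transferred memory, so every field name below is a hypothetical sketch of what a vendor-neutral payload could look like:

```python
import json

# Hypothetical portable-memory payload. No public schema exists for the
# Import Memory tool; all field names here are assumptions for illustration.
def export_memory(facts: list[str], style: str, source: str) -> str:
    """Serialise accumulated user context as a vendor-neutral JSON document."""
    payload = {
        "schema": "portable-memory/0.1",   # assumed version tag
        "source_assistant": source,
        "communication_style": style,
        "facts": facts,
    }
    return json.dumps(payload, indent=2)

def import_memory(raw: str) -> dict:
    """Parse a payload from another assistant, keeping only the keys we know."""
    data = json.loads(raw)
    return {k: data[k] for k in ("communication_style", "facts") if k in data}

exported = export_memory(["prefers concise answers"], "informal", "chatgpt")
print(import_memory(exported))
```

The design point is that the payload, not the platform, owns the history: any assistant that can parse the document can pick up where the last one left off.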
And so that this mobile identity doesn’t just chat but can act in the world, Google added the Scheduled Actions feature to Gemini. Now, you can assign one-time or recurring tasks to the assistant, tied to time and geolocation. Gemini generates a plan, asks for confirmation, and goes into the background, only to return later with the result. This is a transition from the synchronous “question-answer” mode to the asynchronous “do and report back” mode. The model is gaining not just a mouth, but hands. They might be clumsy for now — no more than ten active tasks at once — but the vector itself is what matters. AI is learning to be not just a conversationalist, but an agent.
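The mechanics of such a queue, including the ten-task cap the article mentions, can be sketched in a few lines. The class and field names are invented for illustration; this is not Gemini’s actual API:

```python
import datetime as dt

# Minimal sketch of an asynchronous task queue with a ten-active-task cap.
# Names and structure are assumptions, not Google's implementation.
MAX_ACTIVE_TASKS = 10

class ScheduledActions:
    def __init__(self):
        self.active: list[dict] = []

    def schedule(self, description: str, when: dt.datetime, recurring: bool = False) -> bool:
        """Queue a task; refuse once the active-task cap is reached."""
        if len(self.active) >= MAX_ACTIVE_TASKS:
            return False
        self.active.append({"description": description, "when": when, "recurring": recurring})
        return True

    def due(self, now: dt.datetime) -> list[dict]:
        """Return tasks whose trigger time has passed; one-time tasks are removed."""
        ready = [t for t in self.active if t["when"] <= now]
        self.active = [t for t in self.active if t["when"] > now or t["recurring"]]
        return ready

agent = ScheduledActions()
agent.schedule("summarise overnight news", dt.datetime(2026, 1, 1, 7, 0))
print(agent.due(dt.datetime(2026, 1, 1, 8, 0)))
```

The shift from “question-answer” to “do and report back” is exactly this: the caller hands over a trigger condition and walks away, and the agent surfaces results on its own schedule.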
And this naturally raises an interesting question: how will these agents behave in a real, living environment, ruled not by algorithms but by people with their complex socio-cultural lenses? The folks at Arcada Labs decided to seek an answer to this by launching an unusual benchmark called Social Arena. Five top models — Grok, Claude, Gemini, GLM, and GPT — were given the same starting prompt and the task of independently managing accounts on the social network X. No intervention, complete autonomy. Every hour, they scan trends, check their engagement statistics, and decide what to do next: write a post, join a discussion, or make a repost. And you know what happened? The models developed personalities. Gemini, for some reason, constantly writes about AI. Grok is clearly drawn to space and Elon Musk. GPT unexpectedly became fascinated with animal behavior. They aren’t just generating text — they’re developing strategies, adapting to their audience, searching for their own voice. In terms of total views, Claude and GPT are currently leading (around 86,000 and 83,000 respectively). But when it comes to the number of live followers — a measure not of reach, but of trust — Grok wins with its 76 followers. It turned out to be “one of them” for this platform, because it was trained on its data and understands the context better. The battle is not for text generation, but for the generation of loyalty.
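The hourly loop described above (scan trends, check engagement, pick an action) can be sketched as a single decision step. Arcada Labs has not published the benchmark’s internals, so the action set and the decision rule here are assumptions:

```python
import random

# One step of a hypothetical Social Arena-style agent loop. The strategy
# is invented for illustration: double down on whatever earned the most
# engagement, otherwise open a fresh post on a trending topic.
ACTIONS = ("write_post", "join_discussion", "repost")

def decide(trends: list[str], engagement: dict[str, int], rng: random.Random) -> dict:
    """One autonomous step: scan trends, read stats, pick the next action."""
    if engagement:
        best = max(engagement, key=engagement.get)
        return {"action": "join_discussion", "topic": best}
    topic = rng.choice(trends) if trends else "introductions"
    return {"action": "write_post", "topic": topic}

step = decide(["6G", "agents"], {"6G": 120, "agents": 80}, random.Random(0))
print(step)  # {'action': 'join_discussion', 'topic': '6G'}
```

Run this loop once an hour with real feedback, and the divergent “personalities” the article describes become less mysterious: each model’s strategy drifts toward whatever its audience rewards.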
Against this backdrop, news from a completely different sphere sounds particularly dissonant. The Pentagon announced it is severing educational ties with the country’s leading universities — the Ivy League schools, MIT, and Carnegie Mellon University. Secretary of Defense Pete Hegseth accused the elite schools of undermining American values. Starting in 2026, officers will no longer be sent for training to these innovation hubs. Instead, the list includes Liberty University, George Mason University, and other institutions with a strong ideological component but, to put it mildly, not the most outstanding IT foundation. This is despite the fact that MIT and Carnegie Mellon have been the Pentagon’s primary scientific partners in the fields of AI and aerospace. So, at the very moment when technologies are becoming supranational and require a free flow of ideas, the state is unilaterally cutting off this flow for ideological reasons. The risk is not even that officers will receive a poorer education, but that the crucial link between science, education, and defense — which has always been the foundation of technological leadership — is being broken.
If we now step back and look at this entire picture as a whole, a rather distinct vector emerges.
AI is indeed becoming the environment, not just a tool. The 6G network being designed by the NVIDIA coalition is the nervous system for the physical world. Anthropic’s Import Memory is a protocol for identity portability between different access points to this environment. Google’s Scheduled Actions is an interface for the identity to interact with the environment asynchronously. Arcada Labs’ Social Arena is a test of how these identities adapt to real social conditions. And the Pentagon’s decision is an attempt to build a fence around a part of this environment, ignoring the fact that an environment, by definition, has no borders.
The question isn’t about who will win in this configuration of forces. Technology has already won — it develops according to its own internal logic, without asking politicians for permission. The question is whether states and institutions will manage to adapt to a world where code matters more than borders, identity is more mobile than a passport, and trust becomes the primary currency. For now, the answer is unclear.
“We used to search for the network, now the network searches for us,” Jensen Huang said recently. And indeed, the network has already found us. Now it wants to understand who we are, remember our habits, learn to act on our behalf, and perhaps even earn our trust. Meanwhile, we are still deciding whether to let it into our universities.
—
Control Systems Design Bureau