Damocles’s Switchboard w/ Meicen Sun
- Interviews
- Nick Zeller, Chris Mao
- 12/16/2024
Meicen Sun is an assistant professor in the School of Information Sciences at the University of Illinois Urbana-Champaign and an affiliated faculty member at MIT FutureTech. An expert on data politics and digital trade, she served as a Fellow on the World Economic Forum’s Global Future Council on China, and previously conducted research at the Center for Strategic and International Studies in Washington, DC and at the UN Regional Centre for Peace and Disarmament in Africa. She has published in outlets including International Organization, Foreign Policy Analysis, Harvard Business Review, the World Economic Forum, and the Asian Development Bank Institute. She has also written stories, plays, and music and staged many of her works, in both languages, in China, Singapore, and the U.S. Sun holds a PhD from the Massachusetts Institute of Technology and completed a postdoctoral fellowship at Stanford University.
Her doctoral research, recently published in International Organization as “Damocles’s Switchboard,” finds that China’s internet control, the “Great Firewall,” has benefited Chinese data-heavy firms (+26% revenue) but hurt Chinese research quality (-10%) in knowledge-driven fields, widening the research gap with the U.S. by 22%.
Nick Zeller: For most people, what we call AI is a tool for cheating on college papers or making bad art. Yet, governments around the world are treating it as a matter of strategic importance – dedicating enormous resources toward it and expressing (ingenuously or not) the need to develop plans for AI governance. Why do you think AI governance has become such a pressing concern at the national and international levels?
Meicen Sun: There are two reasons, and they are opposite sides of the same coin. First, AI has proven to be a formidable force multiplier for both the economy and the military: effective leveraging of AI can amplify economic growth and military capabilities in unprecedented ways. The flip side is that in seeking ever greater gains to get ahead, we may neglect the risks – be they algorithmic biases, AI-driven unemployment, or the more catastrophic events that some have projected. That doesn’t mean, however, that the current approach to AI governance is sound. The anxiety associated with a disruptive technology like AI has given rise to policy proposals that are at best half-assed and at worst regulatory capture in disguise.
NZ: That gets to our next question: To what extent do you think concerns about AI in the geopolitical realm are driven by market concerns? Many firms stand to make a lot of money if Washington and Beijing remain convinced of AI’s strategic importance and untold potential.
MS: No doubt, but the less obvious and more urgent question is who will benefit from the regulation of AI. Tech companies have long been throwing their weight behind lobbying for favorable regulation. Standard-essential patents and fights over opt-in/opt-out default privacy settings are reprises of the same motif. AI is the latest arena. Given how much regulatory capture is achieved through technical specifications that elude public scrutiny, algorithmic black boxes are about the wildest dream come true.
Here’s another way of looking at this: It’s easy to spot companies when they recklessly deploy AI in pursuit of profit, but much harder when they strategically call for certain restrictions in the name of “AI safety.” As AI companies, they know the technology better than the regulators or the public do, so they would only advocate for restrictions that benefit them and drive out competitors. This first-mover advantage is a gift that keeps on giving because it compounds over time. We must be vigilant about whose values and visions are being embedded in early AI regulatory frameworks, which will become harder to resist and reset down the road. It’s encouraging to see prominent figures in AI come out against blunt – even if well-meaning – regulatory instruments, which should help counterbalance any one-sided call for more regulation.
Chris Mao: In your article published by the World Economic Forum, you mentioned that cyberspace governance could become “a virtual domain in which borders are drawn in an even more authoritarian fashion than those in the physical territory.” What triggered China to view AI as such a critical piece in its national strategy? With what overarching strategy has China pursued AI and data governance?
MS: China’s strategy toward AI must be understood as an extension of its strategy toward data. The key to decoding it lies in a series of policy documents from about a decade ago, when “big data,” not “AI,” was the buzzword. These include the 13th Five-Year Plan and the Big Data Guidelines, where data was accorded the status of a “fundamental strategic resource.”
The extraction of data isn’t depletive, as it is with oil and gas. Nor does it entail costly R&D. Most importantly, data scales with population. This is crucial for China as a country low in capital and skilled labor: the burden of a large, low-skilled population would now be transformed into an asset. The high internet penetration rate, loose privacy rules, and strict internet control all serve to hoard data even more closely within China’s borders.
Data has been front and center for all major powers, but it’s been nothing short of an obsession for China. For the state, big data was not just another critical juncture but a moment of providence. Getting data right wasn’t simply about boosting comprehensive national power. It’s about redefining the rules of engagement and winning the next big race – AI – on China’s turf. The catchphrase popular at the time, wandao chaoche, or to get ahead on a bend, makes no sense except as an attempt at skipping stages, where the so-called “data dividend” might spare China a head-on competition with the U.S. given the technological gap. Xi’s articulation that informatization is China’s “once-in-a-thousand-year opportunity” would likewise sound generic unless one grappled with the significance of data in particular and not modernization in general.
This obsession has fueled what I call a “Data for X” strategy: whatever we lack in X, we’ll make up for with data. In AI, the “X” has been compute or algorithms. More broadly, it has been anything from capital to talent to technology – practically any ingredient for growth. China’s aggressive turn to data accelerated countermeasures from the U.S. through restrictions on hardware and talent, which only pressed China to double down on its bet on data.
CM: On data, we’d like to ask you about your recently published paper, “Damocles’s Switchboard.” You note in the paper how digital measures like the Great Firewall can function as a protectionist tariff in international trade. U.S. policymakers have criticized such measures as unfairly advantaging Chinese firms while limiting U.S. market access. Could you elaborate on how this works? How do restrictive policies like internet control, data localization, and export controls impact data-intensive companies? How do you see these dynamics playing out under a Trump 2.0 presidency?
MS: Information affects economic growth in several distinct ways. The most apparent is through knowledge: internet control slows down innovation by making it harder to access knowledge. This is the crux of the so-called “dictator’s dilemma,” which I talked about in a podcast. A less apparent one is through data. Data-driven firms leverage user data to train the algorithms behind their core products, such as ride history for route optimization. In blocking foreign digital goods and services, internet control pushes users toward domestic “substitutes.” On this score, my paper finds a strongly positive effect of internet control on these domestic firms – a revenue boost of close to 30% on average, and over 50% for the most data-driven firms.
Shouldn’t the government be happy that their national champions are winning? Not quite, because these companies now not only have money – they also have domestic data that could be exploited by foreign actors should the companies expand overseas, which they do. The “tech crackdown” that began in 2020 was part and parcel of the government’s reaction to such an expansion.
That said, digital trade liberalization won’t be a high priority for the next U.S. administration. Even the current administration walked it back last year. That had less to do with pushback from abroad than with competing interests at home. As the policy space gets crowded, especially with AI now in the mix, digital trade will need to be balanced against other, more pressing concerns.
CM: In the same paper, you discuss the dilemma faced by autocratic states like China, which must weigh imminent political threats against immediate economic costs. In non-technical terms, could you elaborate on the concept of “Damocles’s Switchboard”? How does it influence China’s decisions in designing its AI governance?
MS: I’ve been asked: if internet control really exerts such a drag on innovation – a 22% drop in research quality relative to the U.S., as I’ve found – why doesn’t China seem bothered? To be sure, China has adjusted its approach. The control measures are now more precise in both directions, meaning they filter undesirable content more effectively while letting through more innocuous content to reduce collateral damage. Internet control in China isn’t as blunt as it used to be.
But ultimately, the state simply doesn’t grasp the degree of the damage. The aggregate numbers they typically look at – patents, papers, and even high-impact patents and papers – all suggest that China has been catching up to if not overtaking the U.S. in some areas. How can there be a 22% marginal decline in innovation when so many indicators are showing the exact opposite?
The keyword here is “marginal,” which is the effect on innovation from internet control alone. Imagine a runner who’s been training with an improved regimen and gear and is finally on a par with their rival despite wearing weights around the ankles. Anyone can see that the runner would be even faster without the weights. Internet control is like the weights, whose drag on the runner is masked by favorable factors like the better regimen and gear – an increase in research funding, for instance. Parsing out the drag from just the weights requires some data science acrobatics, which is what I did in my paper. The problem with the “dictator’s dilemma” story is that there would only be a dilemma for dictators who’ve run the numbers the way I have, but dictators rarely have the time or humility for this.
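The masking effect in the runner analogy can be sketched numerically. The toy simulation below is purely illustrative and not the paper’s actual method or data: a “control” treatment drags an outcome down, but a correlated favorable factor (here labeled “funding”) masks the drag in raw comparisons, and regressing on both recovers the negative marginal effect.

```python
# Toy illustration of a masked marginal effect (hypothetical data, not the paper's method).
import numpy as np

rng = np.random.default_rng(0)
n = 5000
control = rng.binomial(1, 0.5, n)                      # the "ankle weights" (e.g., internet control)
funding = 1.0 + 0.8 * control + rng.normal(0, 0.3, n)  # favorable factor, higher where control is on
quality = 2.0 * funding - 0.5 * control + rng.normal(0, 0.2, n)

# Naive comparison: controlled units look BETTER, because funding co-moves with control.
naive_gap = quality[control == 1].mean() - quality[control == 0].mean()

# Regressing on both variables isolates the marginal drag of control alone.
X = np.column_stack([np.ones(n), funding, control])
beta, *_ = np.linalg.lstsq(X, quality, rcond=None)

print(f"naive gap: {naive_gap:+.2f}")      # positive: the drag is masked
print(f"marginal effect: {beta[2]:+.2f}")  # close to the true -0.5
```

The aggregate indicators the state watches are like the naive gap: they bundle the drag together with everything that improved at the same time.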
Let’s take the runner analogy one step further to address another issue in the U.S.-China AI race. Some believe that U.S. sanctions have only forced China to innovate harder in a bid for self-reliance, much like the weights a runner would wear for training. The latest Huawei phone wouldn’t have come so soon had the hardware supply not snapped. But from a technical standpoint, these examples really aren’t equivalents of state-of-the-art (SOTA) offerings from the likes of Nvidia on the compute front or OpenAI on the algorithm front.
A more intriguing possibility, though, is that these constraints have nudged China and the U.S. apart not in the quality but in the kind of AI being developed. Besides racing Chinese and U.S. AI models against uniform benchmarks, we should watch for how the two countries may be working on increasingly divergent topics using increasingly different methods. This poses a conundrum for AI policy: if it turns out that censorship and sanctions have been fostering diversity in the global AI landscape, should we just let AI balkanize? Technology policy has primarily dealt with bad consequences of well-intentioned measures, far less with the reverse.
CM: Beyond U.S.-China competition, is China’s strategy for catching up in AI something that other developing countries in the Global South should consider emulating? What aspects of its approach are relevant, and what challenges might arise in adapting these strategies?
MS: An immediate lesson to borrow from China concerns AI applications. China has been innovative in applying AI to domains from education to healthcare. For most countries, these are more relevant than developing the next SOTA model. A caveat is, again, the scale factor that has underlain China’s success, whether it’s massive data or the associated labeling; what has worked for China may not yield the same results elsewhere. Meanwhile, countries in the Global South have been exploited for data labeling by AI companies in the Global North, including work involving psychologically disturbing content. This issue should be much higher on any global AI safety agenda.
NZ: What are the most prominent domains where AI-driven, data-intensive innovation showcases the U.S.-China rivalry for technological leadership? How does the current landscape reflect each country’s comparative advantages?
MS: I’ve covered the U.S.-China AI race with respect to SOTA and China’s potential comparative advantage in data. Here’s some bad news about that “Data for X” strategy: Research by my colleagues at MIT FutureTech shows that with data often relatively abundant, capital will be more – not less – important. If sufficiently backed by capital, AI-augmented R&D will speed up technological change, leading to more and faster “moonshots.” Those familiar with the Cobb-Douglas production function can think of it as AI spurring growth through “A” – total factor productivity, rather than through “L” – labor. We’ve been so singularly focused on the labor aspect: How does AI automate production? How many jobs will disappear? How does it impact wages? We should continue thinking about these, but we’ll be missing the bigger piece of the puzzle if we don’t pay proper attention to AI’s impact on growth through innovation.
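The Cobb-Douglas framing can be written out explicitly. In its standard textbook form (a general formulation, not one specific to the research discussed here):

```latex
Y = A \, K^{\alpha} L^{1-\alpha}
```

where \(Y\) is output, \(K\) capital, \(L\) labor, and \(A\) total factor productivity. AI can raise growth by automating or displacing work inside \(L\), but also by raising \(A\) itself – speeding up the rate at which new ideas and techniques are produced – and the second channel is the one the labor-focused debate tends to miss.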
NZ: How likely is it for the United States and China to sit down together and set global AI norms? What conditions or incentives might make such cooperation feasible?
MS: Global AI norms won’t just be “set” by two delegations of bureaucrats and that’s a good thing on balance. State actors don’t keep tabs on data and algorithms the same way they do with nukes. Private actors will be pivotal in shaping global AI norms because of the sheer market incentives. The technical expertise is also vastly distributed, with the bulk of frontier AI research coming from industry. Governments will have to accept an increasingly auxiliary role at the AI governance negotiation table.
This mirrors the complex web of interdependent resources across states in the evolution of AI as a technology. Mutual assured destruction during the Cold War hinged on the simplicity of bipolarity and the certainty in annihilation. In contrast, AI is uncertain by design, its very power derived in large part from its inscrutability. This doesn’t lend itself to two-by-two tables or fault trees when it comes to risk assessment, and it should unnerve us all a little. But for all the lives that AI stands to enrich and enliven, it’s still what I’d rather wake up to every morning.