4 Comments

> Given this, a platform or government with a desire to censor could do it using another LLM to "review" the output of the first model and modify it according to the desired guidelines.

And even though people could use prompt engineering / ‘jailbreaks’ to circumvent this (getting the LLM to phrase its response so that the censor won’t censor it), most people simply won’t take the trouble.

But what TC suggests is that this hobbles the usefulness of the responses to such an extent that China would be at too severe a disadvantage, losing too much relative economic growth, to keep it up.

author

I guess part of the question is how LLMs contribute to economic growth. Is it via "hard technical" skills like programming / making data processing systems more efficient / (eventually) advanced robotics etc directly contributing to productivity? Or is it by making society as a whole function better in some qualitatively new way (eg by re-organising governance somehow, in which case you can obviously see the motive for censorship)?

It just seems to me that the former is much more likely, at least at the current and near-future level of capability. And topics around the technical details of programming aren't sensitive, so the part of LLM output that could contribute to growth won't be affected by censorship.

But maybe I am wrong; curious to know in what way you think it could be otherwise?


Great to see you ported your content here and are talking about AI + China. It's one of my favorite thought experiments these days. Wild times.

author
Apr 8, 2023 · edited

haha thanks Nathan. just a shame I did the port on the day of the substack apocalypse...
