3 Comments

Deepseek's political nature makes me wonder how we'll motivate 'ethical' AI as more than just a principle and as a tangible advantage. Or asking differently, do you think this AI is at a market disadvantage due to its political overtones?

Author:

Yeah, I do think it is at a disadvantage. I think this is less the case from a user perspective. I'm not sure you necessarily care if all you need to do is write cover letters for decidedly non-political jobs. It might get awkward in certain cases though.

The real big issue for Chinese AI is from the developers. How do you rein in a non-deterministic model (LLM or otherwise) that is just getting more complex (... now with multi-modal, you have to monitor _all_ mediums)? I think DeepSeek shows that the answer isn't, "you can't"—you apparently can, but it probably took as much or more effort than developing the rest of the model itself.

Ethical AI, in general, is a whole 'nother can of worms though. We might not have political AI, but we certainly have biases, unfairnesses, etc. that we don't want propagated to AI, yet are basically impossible not to propagate.


Your last point reflects some of my feelings about this. I suspect 'ethical AI' is a currently unattainable feel-good phrase that sounds universal but is completely subjective. It's the equivalent of the emperor's new clothes, and China's AI model is the little kid pointing out that he's naked. After all, an AI that supports CCP censorship is ethical in the eyes of the CCP.
