Should we be concerned about OpenAI's content governance?
As most people already know, text-to-image generation from Baidu 文心一言 performs quite poorly when the input is a Chinese idiom. Over the weekend I tested the same kind of input against the OpenAI text-to-image API. Unfortunately, I ran into a different problem: text content governance. That is, your input text is safe, but OpenAI judges it unsafe and refuses to provide the service.
For example, if the input query is 肉丝 (shredded pork), the API rejects the request and returns an error. In other words, OpenAI's governance system classifies "肉丝" as unsafe, even though the phrase is perfectly ordinary in Chinese.
Is the governance system at least robust? No. If you add a space between the Chinese characters, so the input query becomes "肉 丝", text-to-image generation works. The generated image looks like the left one (not what was expected), while the right one is actually what the phrase means.
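The experiment above can be sketched as follows. The helper name `space_out` is my own, and the commented-out API call follows the general shape of the OpenAI Python SDK's images interface; both are illustrative assumptions, not the exact code used in the test.

```python
def space_out(prompt: str) -> str:
    """Insert a space between every character, e.g. "肉丝" -> "肉 丝"."""
    return " ".join(prompt)


if __name__ == "__main__":
    original = "肉丝"             # shredded pork; judged "unsafe" and rejected
    bypass = space_out(original)  # "肉 丝"; the same phrase passes the filter
    print(repr(bypass))

    # Hedged sketch of the actual request (requires an API key; the call
    # below follows the OpenAI Python SDK and is not verified verbatim):
    # from openai import OpenAI
    # client = OpenAI()
    # client.images.generate(prompt=original)  # raises a content-policy error
    # client.images.generate(prompt=bypass)    # succeeds, but wrong image
```

The point is how shallow the check is: a single whitespace character changes the moderation verdict while leaving the phrase's meaning intact.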
- Do we need such a deep governance system? On the good side, it can guarantee that generated content is safe (as defined by OpenAI, not by users). On the bad side, we can only get the information OpenAI wants us to find. Is that terrible? Maybe it would be an ideal world. Do we really need it? Many movies have already explored this question.
- At the current stage, the governance system is not robust. It is very easy to bypass.