Yeah, there are certain topics where the bot suddenly switches into lecture mode, which is very obvious. I wouldn't really call it a liberal bias, more of a programmer-culture bias. They obviously messed with the default product to push certain topics and avoid others. That's why I consider it a useless product. When someone publishes the first LLM that is truly unbiased by design, that's certainly going to change some things.
The title of the article is clickbait. The actual text says something I think is accurate: these bots produce answers through a pretty inscrutable process, and it's very difficult to get them to behave in any particular way (whether that means being "unbiased" politically, being accurate, refusing to do illegal things, or what have you).