During the interview, Zhang Xiaojun asked Yang Zhilin a question:
Your ultimate goal is to develop a general-purpose large-scale model rather than a coding model, right?
This question points to a broader fact: for a new AI product, whether a large model, an agent, or any other product form, the boundaries between how users adopt it, migrate to it, and form habits around it are actually quite blurred.
Often, standout performance in one specific vertical is what attracts users, while general capability that is only about 60% there is what keeps them engaged.
This is particularly evident when I use Perplexity and GPT:
Perplexity outperforms ChatGPT in market research tasks (especially speed);
and for practical work in Gmail, such as searching and drafting, it does better than Google's own Gemini.
It's precisely because Perplexity beats the other general-purpose large models in these occasional head-to-head tasks that I've come to use it more and more.
The same is true for other AI products:
The model-versus-no-model debate feels a bit outdated to me,
or at least, it's not what front-line builders are actually weighing.
The key considerations have to be the entry point and the stickiness point. Small problems and small scenarios can be solved one by one, encircling the cities from the countryside, and Perplexity can end up more useful than ChatGPT and Gemini.