Thoughts on Accessibility, Serving, and the ‘AI for Everyone’ Vision

#6
by lesj0610 - opened

First of all, I sincerely wish success to the K-AI–related project and the team behind HyperCLOVAX.

At the same time, I would like to share some concerns and disappointments from the perspective of an independent researcher and general user.

President Lee Jae-myung has repeatedly emphasized the vision of “AI for everyone.”
While it is completely understandable that companies must pursue sustainability and profit, this project is ultimately funded by public resources. From that standpoint, the release of a 32B model is genuinely appreciated and welcomed: it sits near the practical upper bound of what a motivated individual can still attempt to run on relatively modest hardware.

However, several aspects make the current ecosystem feel unnecessarily restrictive.

First, multimodal input appears to be practically usable only through a tightly coupled Docker-based environment with a Qwen-2.5 vision encoder. This closed setup, combined with delayed integration with widely adopted serving frameworks such as vLLM or llama.cpp, significantly limits accessibility. The reliance on an older vLLM version (0.6.0) and the current vision encoder choice also seem to contribute to this lag.
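
For concreteness, this is roughly what day-one accessibility looks like when a model is supported by a current vLLM release. The snippet below is only a sketch of the standard offline-inference API, and the model id is a placeholder rather than a confirmed repository name:

```python
# Sketch only: standard vLLM offline inference, assuming upstream support existed.
# The model id is a placeholder and must be replaced with the actual repository name.
from vllm import LLM, SamplingParams

llm = LLM(model="<hyperclovax-32b-repo-id>")          # placeholder, not a real id
params = SamplingParams(temperature=0.7, max_tokens=256)

outputs = llm.generate(["Briefly introduce HyperCLOVAX."], params)
print(outputs[0].outputs[0].text)
```

When a model works like this out of the box, no tightly coupled Docker image is needed for basic text inference, which is exactly the kind of accessibility gap I am pointing at.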

Second, based on testing with the 14B variant, long-standing issues commonly reported for Qwen-based models, such as runaway repetition and a failure to exit the “thinking” (inference) loop, still appear to persist. In multiple cases, the model struggled to reach a stable, coherent final answer, which was particularly disappointing.
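
To make the symptom concrete, below is a minimal sketch of the kind of probe I mean. The model id and the `</think>` closing tag are assumptions about how the reasoning block is delimited, not confirmed details of HyperCLOVAX:

```python
# Rough probe for a non-terminating "thinking" loop: cap the generation length
# and check whether the reasoning block ever closes. The model id and the </think>
# delimiter are assumptions, not confirmed HyperCLOVAX specifics.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<hyperclovax-14b-repo-id>"  # placeholder
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Explain in two sentences why the sky is blue."}]
input_ids = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(input_ids, max_new_tokens=2048, do_sample=False)
completion = tok.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=False)

# If the closing tag never appears before the token budget runs out, the model
# has likely looped inside its reasoning phase instead of emitting a final answer.
print("thinking block closed:", "</think>" in completion)
```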

Finally, the omni-serve Docker system raises the biggest question. A setup that effectively assumes access to hardware on the level of dual A100 80GB GPUs places it far beyond the reach of ordinary users. Framing this as an “agent” system does not fully address the core concern: in practice, this environment is usable only by well-funded research labs or institutions.
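
For context, the hardware ceiling is not arbitrary. A quick back-of-the-envelope calculation (my own rough numbers, not figures published by the project) shows why a 32B model plus a vision encoder leaves little or no headroom on a single 80GB card without quantization:

```python
# Back-of-the-envelope VRAM estimate for a 32B model in bf16. These are my own
# rough numbers, not official requirements from the project.
params = 32e9                 # 32B parameters
bytes_per_param = 2           # bf16 / fp16
weights_gb = params * bytes_per_param / 1e9

# ~64 GB for the weights alone, before KV cache, the vision encoder,
# or any serving overhead.
print(f"weights alone: ~{weights_gb:.0f} GB")
```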

Given these constraints, it becomes difficult not to ask: who is this AI truly for?

I share these thoughts not to dismiss the effort or ambition behind the project, but in the hope that future iterations move closer to the stated goal of broader accessibility—both in terms of software openness and realistic hardware requirements.
