Large language models (LLMs) have attracted broad interest from both academia and industry. They rely on a single, unified training objective (next-word prediction) to power virtually all downstream tasks in NLP, and they have recently been shown to be effective on vision and other modalities as well. For these reasons, LLMs are widely viewed as a promising path toward Artificial General Intelligence (AGI).
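
To make the next-word prediction objective concrete, a standard way to write it is the autoregressive cross-entropy loss below; the notation (tokens $x_t$, model parameters $\theta$) is generic and not tied to any particular model or paper.

$$
\mathcal{L}(\theta) \;=\; -\sum_{t=1}^{T} \log p_\theta\!\left(x_t \mid x_{<t}\right)
$$

Minimizing this single loss over large text corpora is what yields the general-purpose capabilities that downstream tasks then build on.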

However, their tendency to hallucinate and their large resource demands limit deployment in decision-making applications (e.g., autonomous driving, medical diagnosis).

In this blog, we will focus on Medical LLMs and efficient inference/training techniques for LLMs.

Table of contents

Conclusion

xxx

References

xxx
