From Code Generation Towards Software Engineering - Yangruibo Ding (Columbia University)
AI Summary
In this talk, presented on April 7, 2025, Yangruibo Ding of Columbia University discusses how to advance large language models (LLMs) beyond code generation toward more robust software engineering. Ding examines the limitations of current LLMs in reasoning comprehensively about programs and in handling complex engineering tasks such as debugging and understanding dependencies. He argues that integrating symbolic reasoning and global context into LLM training is necessary to strengthen these capabilities. The research aims to improve code intelligence by enriching code with program semantics, thereby bridging the gap between code generation and broader software engineering tasks. Ding's findings point toward full-stack automation of software development that remains reliable and secure, and the work offers insight into the future capabilities of LLMs in AI-driven software engineering.
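To make the idea of enriching code with program semantics more concrete, here is a minimal, hypothetical sketch in Python: it pairs a code snippet with a recorded execution trace to form a single training example. The pairing scheme and the helper names (trace_execution, build_training_example) are illustrative assumptions for exposition, not the specific pipeline described in the talk.

```python
import sys

def trace_execution(func, *args):
    """Record (line number, local variables) at each executed line of `func`.

    A toy stand-in for the richer symbolic/runtime information one might
    attach to source code before using it as model training data.
    """
    events = []

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            events.append((frame.f_lineno, dict(frame.f_locals)))
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)
    return result, events


def build_training_example(source: str, events) -> str:
    """Serialize code plus its execution trace into one training string."""
    trace_text = "\n".join(f"# line {ln}: {locals_}" for ln, locals_ in events)
    return f"{source}\n\n# --- execution trace ---\n{trace_text}"


if __name__ == "__main__":
    code = (
        "def accumulate(xs):\n"
        "    total = 0\n"
        "    for x in xs:\n"
        "        total += x\n"
        "    return total"
    )
    namespace = {}
    exec(code, namespace)  # define the function from its source text
    _, events = trace_execution(namespace["accumulate"], [1, 2, 3])
    print(build_training_example(code, events))
```

The resulting string interleaves the program with the variable states it produces at runtime, giving a flavor of how semantic signals beyond raw tokens could be exposed to a model during training.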