<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <title>Home on LumiGallerys: Your Source for the Latest in AI Tech News</title>
        <link>https://lumigallerys.com/</link>
        <description>Recent content in Home on LumiGallerys: Your Source for the Latest in AI Tech News</description>
        <generator>Hugo -- gohugo.io</generator>
        <language>en-us</language>
        <lastBuildDate>Thu, 30 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://lumigallerys.com/index.xml" rel="self" type="application/rss+xml" /><item>
            <title>Empowering Education in Ethnic Regions with Artificial Intelligence</title>
            <link>https://lumigallerys.com/posts/note-c37d1985ba/</link>
            <pubDate>Thu, 30 Apr 2026 00:00:00 +0000</pubDate>
            <guid>https://lumigallerys.com/posts/note-c37d1985ba/</guid>
            <description>&lt;h2 id=&#34;introduction&#34;&gt;Introduction&#xA;&lt;/h2&gt;&lt;p&gt;The Central Committee of the Communist Party of China and the State Council highly value the profound impact of artificial intelligence (AI) on education. General Secretary Xi Jinping has emphasized the need to implement the national education digital strategy, strengthen the national smart education public service platform, explore effective ways to empower personalized and innovative teaching through digital means, expand the reach of quality educational resources, and leverage AI to facilitate educational transformation. In April 2026, the Ministry of Education and four other departments jointly issued the &amp;ldquo;AI + Education Action Plan,&amp;rdquo; providing a historic opportunity for quality and balanced educational development in ethnic regions.&lt;/p&gt;&#xA;&lt;h2 id=&#34;focusing-on-unique-needs-deepening-ai-empowerment-in-all-aspects-of-education&#34;&gt;Focusing on Unique Needs: Deepening AI Empowerment in All Aspects of Education&#xA;&lt;/h2&gt;&lt;p&gt;Students in ethnic regions have unique cognitive foundations, language environments, and learning habits, with significant differences in learning conditions. It is crucial to integrate AI into the entire educational process and empower all aspects of education to respond more precisely to the personalized and differentiated needs of teachers and students. 
In terms of value guidance, it is essential to develop and utilize ideological models and scenario-based intelligent applications that weave core content such as fostering a strong sense of community for the Chinese nation, the inheritance and development of excellent traditional Chinese culture, and the promotion of the national common language into immersive intelligent educational products, making abstract theories tangible.&lt;/p&gt;&#xA;&lt;p&gt;By combining revolutionary &amp;ldquo;red&amp;rdquo; heritage resources and examples of national unity and progress, a specialized ideological education resource library can be established to align knowledge literacy with value shaping, creating a shared spiritual home for the Chinese nation. For precise educational assistance, intelligent learning companions equipped with situational guidance and cultural adaptation functions can be employed to accurately capture students&amp;rsquo; cognitive characteristics using technologies such as knowledge graphs and affective computing. This allows for real-time monitoring of knowledge consolidation points and weaknesses, tailoring personalized and progressive learning paths to implement large-scale differentiated instruction. For students learning the national common language, features such as speech assessment, intelligent pronunciation correction, and engaging dialogues can enhance language skills. In terms of teaching empowerment, an intelligent teaching system can be established to create a closed-loop process of precise lesson preparation before class, dynamic optimization during class, and evidence-based research after class. 
Before class, intelligent recommendations for suitable teaching resources can facilitate efficient lesson preparation; during class, real-time awareness of student dynamics allows for flexible adjustments to teaching strategies; after class, in-depth analysis of teaching behaviors drives reflection and improvement, significantly enhancing classroom quality, especially in schools with weak teaching resources.&lt;/p&gt;&#xA;&lt;h2 id=&#34;enhancing-adaptability-promoting-comprehensive-optimization-of-ai-enabled-educational-resources&#34;&gt;Enhancing Adaptability: Promoting Comprehensive Optimization of AI-Enabled Educational Resources&#xA;&lt;/h2&gt;&lt;p&gt;The construction of educational resources in ethnic regions has shifted from merely supplementing quantity to enhancing effectiveness. The key lies in bridging the transformation chain from supply to application and improving the adaptability of resources to teaching scenarios. In terms of resource supply, specialized, localized, and multimodal digital resources should be developed around the key educational needs of ethnic regions. Local governments are encouraged to build regional educational corpora, utilizing the national smart education platform for content adaptation, localized case transformation, and dynamic updates, ensuring precise matching between educational resources and teaching scenarios.&lt;/p&gt;&#xA;&lt;p&gt;In resource allocation, priority should be given to deploying high-speed networks and edge computing nodes in border pastoral areas, national border schools, remote teaching points, and boarding schools to solidify the foundation for resource circulation. Leveraging provincial intelligent bases to break down data barriers across platforms enhances resource integration and scheduling, ensuring that quality resources are accessible, operational, and comprehensive. 
An intelligent channel for educational resource support between eastern and western regions should be established to facilitate targeted delivery and localized adaptation of quality resources. In resource application, a dynamic monitoring and feedback mechanism for resource operation and usage should be established based on the national smart education platform. This should involve layered analysis based on teacher application data, resource usage preferences, and student engagement, continuously optimizing intelligent recommendation and push strategies to enhance the effectiveness of resource application in teaching scenarios. To address the difficulties some teachers face in utilizing digital resources, expert guidance teams should conduct case promotions and on-site support, ensuring that quality resources are truly understandable, usable, and effective.&lt;/p&gt;&#xA;&lt;h2 id=&#34;focusing-on-skill-enhancement-strengthening-support-for-teachers-through-ai-empowerment&#34;&gt;Focusing on Skill Enhancement: Strengthening Support for Teachers through AI Empowerment&#xA;&lt;/h2&gt;&lt;p&gt;Teachers are the primary resource for high-quality educational development. Improving educational quality in ethnic regions hinges on enhancing teachers&amp;rsquo; intelligent literacy and teaching competence. In the training system, differentiated training should be implemented, with key teachers focusing on the development and application of intelligent teaching tools, young teachers strengthening data-driven learning analysis and precise teaching, and other teachers emphasizing foundational applications and concept updates. Building strong county-level &amp;ldquo;smart education teacher studios&amp;rdquo; can play a demonstrative role, encouraging young teachers to lead older colleagues, promoting a shift from &amp;ldquo;being able to use&amp;rdquo; to &amp;ldquo;willing to use and good at using&amp;rdquo;. 
An integrated online and offline training platform should be established, combining school-based cases for practical exercises, promoting the &amp;ldquo;National Training Program&amp;rdquo; for targeted support in building the teacher workforce in ethnic regions, and incorporating AI into the curriculum of teacher training colleges in these areas to strengthen the foundation of the workforce from the source.&lt;/p&gt;&#xA;&lt;p&gt;In terms of the research mechanism, an intelligent platform for professional development of teachers in ethnic regions should be constructed. By analyzing classroom teaching behavior data, personalized research suggestions can be generated, forming an integrated research model of &amp;ldquo;teaching, learning, research, and evaluation&amp;rdquo;. Support for cross-school and cross-regional online research communities should gradually narrow the regional research gap. Regular workshops and teaching competitions on AI application in teaching should be organized, with award-winning lessons promoted through the national smart education platform. 
In terms of incentive evaluation, intelligent literacy and teaching application effectiveness should be included in the teacher assessment and evaluation system, with special incentives and project funding established for teachers who excel in AI education and teaching, ensuring they receive preferential treatment in title evaluation and recognition, fostering a positive atmosphere of &amp;ldquo;promoting learning through use and promoting excellence through evaluation&amp;rdquo;.&lt;/p&gt;&#xA;&lt;h2 id=&#34;promoting-continuity-across-all-educational-stages-building-an-ai-enabled-talent-cultivation-system-in-ethnic-regions&#34;&gt;Promoting Continuity Across All Educational Stages: Building an AI-Enabled Talent Cultivation System in Ethnic Regions&#xA;&lt;/h2&gt;&lt;p&gt;The cultivation of AI literacy needs to permeate the entire talent development process, establishing a vertically integrated and horizontally connected AI education and general education system across all educational stages. In terms of vertical integration, a &amp;ldquo;General Education Guide for AI in Primary and Secondary Schools&amp;rdquo; that adapts to the realities of ethnic regions can be established during the basic education stage, setting gradient goals by educational stage and stimulating students&amp;rsquo; AI literacy through project-based learning and gamified courses. In higher education, efforts should be made to promote AI as a public foundational course in colleges in ethnic regions, facilitating the interdisciplinary integration of AI with specialized advantageous disciplines. In vocational education, traditional programs should be upgraded with AI, implementing order-based training. A comprehensive cultivation approach integrating kindergartens, primary, secondary, and higher education should be promoted, effectively utilizing student digital files to provide personalized learning path planning. 
AI should also be incorporated into lifelong learning systems, creating a ubiquitous learning environment that combines online and offline elements.&lt;/p&gt;&#xA;&lt;p&gt;In terms of horizontal integration, the collaborative education mechanism among families, schools, and communities should be deepened, extending AI literacy education into early family education and community spaces. General AI courses for parents should be developed, expanding coverage through community learning centers and universities for the elderly. Ethnic region colleges should open quality educational resources to society, promoting deep integration of education among schools, families, and communities. Collaborative education among industry, academia, and research should be advanced, focusing on the local industrial needs of ethnic regions such as smart agriculture and cultural tourism, establishing AI training bases that integrate industry and education, and supporting leading enterprises to co-build industrial colleges with ethnic region institutions, using the industry-education integration model to create an &amp;ldquo;industry-position-course&amp;rdquo; map that effectively connects talent cultivation with industrial development.&lt;/p&gt;&#xA;&lt;h2 id=&#34;strengthening-all-factor-interaction-promoting-systemic-reform-in-educational-governance-through-ai-empowerment&#34;&gt;Strengthening All-Factor Interaction: Promoting Systemic Reform in Educational Governance through AI Empowerment&#xA;&lt;/h2&gt;&lt;p&gt;The modernization level of educational governance in ethnic regions directly affects the overall effectiveness of AI empowerment in education. It is necessary to enhance policy coordination, resource adaptation, and condition guarantees while emphasizing the construction of intelligent hubs, monitoring and early warning systems, and collaborative safety guarantees. 
In terms of intelligent hub construction, relying on the national education big data center, a regional educational intelligent brain should be built, integrating data aggregation, decision support, policy push, and demand response. A cross-departmental and cross-level data sharing mechanism should be established to achieve precise policy transmission and timely feedback on execution, enhancing the responsiveness and effectiveness of educational policies in ethnic regions. Regions with favorable conditions should be supported to take the lead, prioritizing the deployment of intelligent data collection terminals in boarding schools and central schools in towns, exploring a smart service model of &amp;ldquo;one screen overview, one network for all services&amp;rdquo;.&lt;/p&gt;&#xA;&lt;p&gt;In monitoring and early warning, big data intelligent monitoring technologies should be utilized to dynamically perceive risks such as ideological security, campus safety, and school dropout rates, constructing a multidimensional early warning indicator system covering teaching quality, teacher mobility, resource allocation, and student development. An intelligent early warning and closed-loop feedback system should be established to achieve early detection, prevention, and assistance for risks, providing scientific basis for precise governance. 
In terms of safety guarantees, adhering to the principle of &amp;ldquo;intelligence for good,&amp;rdquo; it is essential to ensure the safety of content, data, and algorithms, improve assessment and filing, technical monitoring, risk warning, and emergency response mechanisms, and strengthen the security protection of educational data throughout its lifecycle, effectively preventing issues such as algorithm discrimination, privacy leakage, and exam-oriented pressure, ensuring that AI applications always operate within a regulated, trustworthy, and benevolent framework.&lt;/p&gt;&#xA;&lt;p&gt;Empowering education in ethnic regions with AI is a long-term systematic project that requires a unified national approach. Only by adhering to a problem-oriented and application-driven strategy, promoting the coordinated efforts of technology, resources, talent, and governance through innovative practices, and implementing precise policies over the long term can AI be transformed from a &amp;ldquo;key variable&amp;rdquo; into the &amp;ldquo;largest increment&amp;rdquo; for quality and balanced educational development in ethnic regions, laying a solid foundation for building a strong educational nation and advancing national unity and progress.&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>Understanding Vibe Coding: A 2026 Perspective</title>
            <link>https://lumigallerys.com/posts/note-e86987097d/</link>
            <pubDate>Thu, 30 Apr 2026 00:00:00 +0000</pubDate>
            <guid>https://lumigallerys.com/posts/note-e86987097d/</guid>
            <description>&lt;h2 id=&#34;vibe-coding-is-not-what-you-think&#34;&gt;Vibe Coding Is Not What You Think&#xA;&lt;/h2&gt;&lt;p&gt;In February 2025, Andrej Karpathy tweeted: &lt;strong&gt;&amp;ldquo;I just vibe code.&amp;rdquo;&lt;/strong&gt; He mentioned that he no longer writes code line by line but describes his intentions in natural language, allowing AI to generate the code while he focuses on reviewing and adjusting the direction.&lt;/p&gt;&#xA;&lt;p&gt;This tweet ignited a firestorm in the tech community. Collins Dictionary named Vibe Coding the word of the year for 2025. However, many misunderstood it—thinking Vibe Coding meant &amp;ldquo;letting AI write code while doing nothing.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;By 2026, Vibe Coding is no longer like that.&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;It has evolved into a structured AI-assisted development methodology: you write precise natural language specifications, the AI generates code, and you deeply participate in architectural design and quality control. The engineering discipline has shifted from &amp;ldquo;handwritten implementation&amp;rdquo; to &amp;ldquo;designing task systems and review mechanisms.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;When used well, efficiency can increase by 5-10 times; when used poorly, technical debt can become overwhelming. 
The difference lies in the methodology.&lt;/p&gt;&#xA;&lt;h2 id=&#34;2026-vibe-coding-tool-landscape&#34;&gt;2026 Vibe Coding Tool Landscape&#xA;&lt;/h2&gt;&lt;h3 id=&#34;two-major-camps&#34;&gt;Two Major Camps&#xA;&lt;/h3&gt;&lt;p&gt;Currently, Vibe Coding tools are divided into two categories:&lt;/p&gt;&#xA;&lt;table&gt;&#xA;  &lt;thead&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;th&gt;Type&lt;/th&gt;&#xA;          &lt;th&gt;Representative Tools&lt;/th&gt;&#xA;          &lt;th&gt;Features&lt;/th&gt;&#xA;          &lt;th&gt;Suitable Audience&lt;/th&gt;&#xA;      &lt;/tr&gt;&#xA;  &lt;/thead&gt;&#xA;  &lt;tbody&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;AI App Builder&lt;/td&gt;&#xA;          &lt;td&gt;Bolt, Lovable, Replit, v0&lt;/td&gt;&#xA;          &lt;td&gt;Browser-based, from description to deployment&lt;/td&gt;&#xA;          &lt;td&gt;Non-technical backgrounds, rapid prototyping&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;AI Coding Assistant&lt;/td&gt;&#xA;          &lt;td&gt;Cursor, Claude Code, Windsurf, Trae&lt;/td&gt;&#xA;          &lt;td&gt;Integrated into IDE/terminal for deep operations on existing codebases&lt;/td&gt;&#xA;          &lt;td&gt;Experienced developers&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;  &lt;/tbody&gt;&#xA;&lt;/table&gt;&#xA;&lt;h2 id=&#34;mainstream-tool-comparison&#34;&gt;Mainstream Tool Comparison&#xA;&lt;/h2&gt;&lt;table&gt;&#xA;  &lt;thead&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;th&gt;Tool&lt;/th&gt;&#xA;          &lt;th&gt;Company&lt;/th&gt;&#xA;          &lt;th&gt;Core Advantages&lt;/th&gt;&#xA;          &lt;th&gt;Price&lt;/th&gt;&#xA;          &lt;th&gt;Suitable Scenarios&lt;/th&gt;&#xA;      &lt;/tr&gt;&#xA;  &lt;/thead&gt;&#xA;  &lt;tbody&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Cursor&lt;/td&gt;&#xA;          &lt;td&gt;Anysphere&lt;/td&gt;&#xA;          &lt;td&gt;Most mature AI editor, multi-file refactoring in Agent mode&lt;/td&gt;&#xA;          
&lt;td&gt;$20/month&lt;/td&gt;&#xA;          &lt;td&gt;Main development&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Claude Code&lt;/td&gt;&#xA;          &lt;td&gt;Anthropic&lt;/td&gt;&#xA;          &lt;td&gt;Native Agent in terminal, 1M token context&lt;/td&gt;&#xA;          &lt;td&gt;Pay-as-you-go/Max package&lt;/td&gt;&#xA;          &lt;td&gt;Complex refactoring, automation&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Trae&lt;/td&gt;&#xA;          &lt;td&gt;ByteDance&lt;/td&gt;&#xA;          &lt;td&gt;Full Chinese-language support, generous free tier, end-to-end SOLO mode&lt;/td&gt;&#xA;          &lt;td&gt;Free / $25/month&lt;/td&gt;&#xA;          &lt;td&gt;Entry point for users in China&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;CodeBuddy&lt;/td&gt;&#xA;          &lt;td&gt;Tencent&lt;/td&gt;&#xA;          &lt;td&gt;Completely free, plugin + IDE + CLI in one&lt;/td&gt;&#xA;          &lt;td&gt;Free&lt;/td&gt;&#xA;          &lt;td&gt;No-cost option for users in China&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Bolt.new&lt;/td&gt;&#xA;          &lt;td&gt;StackBlitz&lt;/td&gt;&#xA;          &lt;td&gt;Instant generation in browser, full stack&lt;/td&gt;&#xA;          &lt;td&gt;$25/month&lt;/td&gt;&#xA;          &lt;td&gt;Rapid prototype validation&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Lovable&lt;/td&gt;&#xA;          &lt;td&gt;—&lt;/td&gt;&#xA;          &lt;td&gt;Most polished experience, full-stack generation + one-click deployment&lt;/td&gt;&#xA;          &lt;td&gt;$25/month&lt;/td&gt;&#xA;          &lt;td&gt;Non-technical founders&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Replit&lt;/td&gt;&#xA;          &lt;td&gt;—&lt;/td&gt;&#xA;          &lt;td&gt;Cloud-based IDE + hosting + AI, zero configuration&lt;/td&gt;&#xA;          &lt;td&gt;$20/month&lt;/td&gt;&#xA;          &lt;td&gt;Learning + 
deployment integrated&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Windsurf&lt;/td&gt;&#xA;          &lt;td&gt;Cognition&lt;/td&gt;&#xA;          &lt;td&gt;Cascade chained execution, Arena mode for model comparison&lt;/td&gt;&#xA;          &lt;td&gt;Free tier available&lt;/td&gt;&#xA;          &lt;td&gt;Model comparison evaluation&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;v0.dev&lt;/td&gt;&#xA;          &lt;td&gt;Vercel&lt;/td&gt;&#xA;          &lt;td&gt;Strongest in React/Next.js component generation&lt;/td&gt;&#xA;          &lt;td&gt;$20/month&lt;/td&gt;&#xA;          &lt;td&gt;Frontend component development&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;  &lt;/tbody&gt;&#xA;&lt;/table&gt;&#xA;&lt;h2 id=&#34;choosing-for-domestic-users&#34;&gt;Choosing for Domestic Users&#xA;&lt;/h2&gt;&lt;p&gt;If you are in China, network access and payment options are the primary considerations:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Zero-cost entry&lt;/strong&gt;: Trae (generous free tier) or CodeBuddy (completely free)&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Limited budget&lt;/strong&gt;: Trae paid version at ¥25/month, with native Chinese support&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Sufficient budget&lt;/strong&gt;: the Cursor + Claude Code combination, standard among international developers&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;No coding experience&lt;/strong&gt;: Bolt.new or Lovable, usable directly in the browser&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;&lt;strong&gt;2026 Best Practices&lt;/strong&gt;: Most successful Vibe Coding projects adopt a two-phase workflow: rapid prototyping in Bolt/Lovable to validate the idea, then migration to Cursor for long-term development.&lt;/p&gt;&#xA;&lt;h2 id=&#34;prompt-engineering-the-core-technology-of-vibe-coding&#34;&gt;Prompt Engineering: The Core Technology of Vibe Coding&#xA;&lt;/h2&gt;&lt;p&gt;Roughly 80% of your effectiveness with Vibe Coding comes down to the quality of your 
prompts.&lt;/p&gt;&#xA;&lt;h3 id=&#34;golden-rule&#34;&gt;Golden Rule&#xA;&lt;/h3&gt;&lt;p&gt;&lt;strong&gt;Treat AI as a &amp;ldquo;technically strong but completely unaware senior engineer for your project.&amp;rdquo;&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;Poor Prompt:&lt;/p&gt;&#xA;&lt;p&gt;&lt;code&gt;Help me write a login function&lt;/code&gt;&lt;/p&gt;&#xA;&lt;p&gt;Good Prompt:&lt;/p&gt;&#xA;&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Create login.ts in the src/auth/ directory.&#xA;Use the existing axios wrapper in the project (refer to src/utils/request.ts).&#xA;API endpoint: POST /api/auth/login, parameters: { email, password }.&#xA;On success, store the JWT token in localStorage with the key &amp;#34;auth_token&amp;#34;.&#xA;On failure, throw an Error object containing a message field.&#xA;Please tell me your implementation thought process before writing the code.&#xA;&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;five-elements-of-a-good-prompt&#34;&gt;Five Elements of a Good Prompt&#xA;&lt;/h3&gt;&lt;table&gt;&#xA;  &lt;thead&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;th&gt;Element&lt;/th&gt;&#xA;          &lt;th&gt;Function&lt;/th&gt;&#xA;          &lt;th&gt;Example&lt;/th&gt;&#xA;      &lt;/tr&gt;&#xA;  &lt;/thead&gt;&#xA;  &lt;tbody&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Location&lt;/td&gt;&#xA;          &lt;td&gt;Tell AI where to place the file&lt;/td&gt;&#xA;          &lt;td&gt;Create under src/components/&amp;hellip;&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Dependencies&lt;/td&gt;&#xA;          &lt;td&gt;Specify existing components to use&lt;/td&gt;&#xA;          &lt;td&gt;Use the existing useApi hook from the project&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Specifications&lt;/td&gt;&#xA;          &lt;td&gt;Constrain implementation methods&lt;/td&gt;&#xA;          &lt;td&gt;Use TypeScript, avoid any&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          
&lt;td&gt;Boundaries&lt;/td&gt;&#xA;          &lt;td&gt;Clearly state what to do and what not to do&lt;/td&gt;&#xA;          &lt;td&gt;Only need login, no registration&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Thought Process&lt;/td&gt;&#xA;          &lt;td&gt;Require planning before execution&lt;/td&gt;&#xA;          &lt;td&gt;First discuss the implementation thought process, then write code&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;  &lt;/tbody&gt;&#xA;&lt;/table&gt;&#xA;&lt;h3 id=&#34;the-strongest-four-word-prompt-redgreen-tdd&#34;&gt;The Strongest Four-Word Prompt: Red/Green TDD&#xA;&lt;/h3&gt;&lt;p&gt;In Simon Willison&amp;rsquo;s summary of Agentic Engineering Patterns, he believes the most valuable four-word prompt is:&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Use red/green TDD&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;This instructs AI to first write failing tests (Red), then implement to pass the tests (Green), and finally refactor. This cycle can elevate the quality of AI-generated code significantly.&lt;/p&gt;&#xA;&lt;p&gt;SAS&amp;rsquo;s VibeTDD hackathon validated this method: engineers did not write implementation code by hand but guided AI through a strict TDD process to construct a complete application. The conclusion—&lt;strong&gt;&amp;ldquo;Prompting and test writing may become the new programming.&amp;rdquo;&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;project-memory-files-helping-ai-understand-your-project&#34;&gt;Project Memory Files: Helping AI Understand Your Project&#xA;&lt;/h2&gt;&lt;p&gt;AI has &amp;ldquo;amnesia&amp;rdquo; with each conversation. 
Project memory files provide AI with persistent context.&lt;/p&gt;&#xA;&lt;h3 id=&#34;cursor-with-cursorrules&#34;&gt;Cursor with .cursorrules&#xA;&lt;/h3&gt;&lt;p&gt;Create a .cursorrules file in the project root:&lt;/p&gt;&#xA;&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# Project Conventions&#xA;&#xA;## Tech Stack&#xA;- Backend: FastAPI + Python 3.12&#xA;- Frontend: React 18 + TypeScript&#xA;- Database: PostgreSQL, ORM using SQLAlchemy 2.0&#xA;&#xA;## Coding Standards&#xA;- Function Naming: snake_case (Python)/camelCase (TypeScript)&#xA;- Comment Language: Chinese&#xA;- Error Handling: Unified use of the AppError class in the project&#xA;&#xA;## Prohibitions&#xA;- Do not use any type (TypeScript)&#xA;- Do not directly manipulate the DOM, use React state management&#xA;- Do not write fetch directly in components, use src/hooks/useApi.ts&#xA;&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;claude-code-with-claudemd&#34;&gt;Claude Code with CLAUDE.md&#xA;&lt;/h3&gt;&lt;p&gt;Similarly, create a CLAUDE.md file in the project root with the same format. 
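&lt;/p&gt;&#xA;&lt;p&gt;For instance, a pared-down CLAUDE.md mirroring the .cursorrules conventions above might look like this (the project details are illustrative, not prescriptive):&lt;/p&gt;&#xA;&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# Project Conventions&#xA;&#xA;## Tech Stack&#xA;- Backend: FastAPI + Python 3.12&#xA;- Frontend: React 18 + TypeScript&#xA;&#xA;## Prohibitions&#xA;- Do not use the any type (TypeScript)&#xA;- Route all requests through src/hooks/useApi.ts&#xA;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;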
Claude Code will automatically read it at the start of each conversation.&lt;/p&gt;&#xA;&lt;h3 id=&#34;general-principles&#34;&gt;General Principles&#xA;&lt;/h3&gt;&lt;table&gt;&#xA;  &lt;thead&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;th&gt;Rule Type&lt;/th&gt;&#xA;          &lt;th&gt;What to Write&lt;/th&gt;&#xA;          &lt;th&gt;Why It Matters&lt;/th&gt;&#xA;      &lt;/tr&gt;&#xA;  &lt;/thead&gt;&#xA;  &lt;tbody&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Tech Stack&lt;/td&gt;&#xA;          &lt;td&gt;Language, framework, library versions&lt;/td&gt;&#xA;          &lt;td&gt;Avoid generating outdated or incompatible code&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Coding Standards&lt;/td&gt;&#xA;          &lt;td&gt;Naming, comments, error handling&lt;/td&gt;&#xA;          &lt;td&gt;Maintain consistent code style&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Prohibitions&lt;/td&gt;&#xA;          &lt;td&gt;Clearly state what cannot be done&lt;/td&gt;&#xA;          &lt;td&gt;Prevent AI from introducing anti-patterns&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Directory Structure&lt;/td&gt;&#xA;          &lt;td&gt;Key file paths&lt;/td&gt;&#xA;          &lt;td&gt;Help AI locate and reference&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;  &lt;/tbody&gt;&#xA;&lt;/table&gt;&#xA;&lt;p&gt;&lt;strong&gt;Without memory files, Vibe Coding is like asking a new employee to understand the project from scratch every time.&lt;/strong&gt; The efficiency gap is exponential.&lt;/p&gt;&#xA;&lt;h2 id=&#34;four-levels-of-vibe-coding&#34;&gt;Four Levels of Vibe Coding&#xA;&lt;/h2&gt;&lt;table&gt;&#xA;  &lt;thead&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;th&gt;Level&lt;/th&gt;&#xA;          &lt;th&gt;Name&lt;/th&gt;&#xA;          &lt;th&gt;Core Ability&lt;/th&gt;&#xA;          &lt;th&gt;Typical Performance&lt;/th&gt;&#xA;      &lt;/tr&gt;&#xA;  &lt;/thead&gt;&#xA;  &lt;tbody&gt;&#xA;      
&lt;tr&gt;&#xA;          &lt;td&gt;1&lt;/td&gt;&#xA;          &lt;td&gt;Prompt Craft&lt;/td&gt;&#xA;          &lt;td&gt;Write clear instructions&lt;/td&gt;&#xA;          &lt;td&gt;&amp;ldquo;Help me write an XX function&amp;rdquo;&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;2&lt;/td&gt;&#xA;          &lt;td&gt;Context Engineering&lt;/td&gt;&#xA;          &lt;td&gt;Manage the information environment&lt;/td&gt;&#xA;          &lt;td&gt;Configure .cursorrules / CLAUDE.md&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;3&lt;/td&gt;&#xA;          &lt;td&gt;Intent Engineering&lt;/td&gt;&#xA;          &lt;td&gt;Define coding goals and boundaries&lt;/td&gt;&#xA;          &lt;td&gt;Tell the AI what you want, not just what to do&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;4&lt;/td&gt;&#xA;          &lt;td&gt;Specification Engineering&lt;/td&gt;&#xA;          &lt;td&gt;Write long-lived executable specifications&lt;/td&gt;&#xA;          &lt;td&gt;AI can execute autonomously for 8 hours without intervention&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;  &lt;/tbody&gt;&#xA;&lt;/table&gt;&#xA;&lt;p&gt;Most people remain at the first level. Efficiency improves significantly at the second level. The third and fourth levels are the realm of true experts, and also the prerequisite for GLM-5.1&amp;rsquo;s 8-hour long-term autonomous work.&lt;/p&gt;&#xA;&lt;h2 id=&#34;pitfall-guide-five-traps-of-vibe-coding&#34;&gt;Pitfall Guide: Five Traps of Vibe Coding&#xA;&lt;/h2&gt;&lt;h3 id=&#34;trap-1-using-directly-without-review&#34;&gt;Trap 1: Using Code Directly Without Review&#xA;&lt;/h3&gt;&lt;p&gt;AI-generated code may run, but that doesn&amp;rsquo;t mean it&amp;rsquo;s flawless. 
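&lt;/p&gt;&#xA;&lt;p&gt;As a sketch of what review needs to catch, consider a plausible AI-generated take on the login example from earlier (hypothetical code):&lt;/p&gt;&#xA;&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;// Works on the happy path, but review should flag two problems:&#xA;// 1. No res.ok check, so a 401 response still falls through to the next line.&#xA;// 2. data.token may be undefined and gets stored in localStorage anyway.&#xA;async function login(email: string, password: string) {&#xA;  const res = await fetch(&amp;#34;/api/auth/login&amp;#34;, {&#xA;    method: &amp;#34;POST&amp;#34;,&#xA;    headers: { &amp;#34;Content-Type&amp;#34;: &amp;#34;application/json&amp;#34; },&#xA;    body: JSON.stringify({ email, password }),&#xA;  });&#xA;  const data = await res.json();&#xA;  localStorage.setItem(&amp;#34;auth_token&amp;#34;, data.token);&#xA;}&#xA;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;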
Security vulnerabilities, performance issues, and unhandled edge cases are easy to miss. You must review every line of code.&lt;/p&gt;&#xA;&lt;h3 id=&#34;trap-2-no-version-control&#34;&gt;Trap 2: No Version Control&#xA;&lt;/h3&gt;&lt;p&gt;Vibe Coding involves rapid and frequent changes. Without Git as a safety net, one bad change can be unrecoverable. &lt;strong&gt;Commit after completing each feature point.&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;h3 id=&#34;trap-3-tool-lock-in&#34;&gt;Trap 3: Tool Lock-in&#xA;&lt;/h3&gt;&lt;p&gt;Code generated by Lovable is deeply tied to React + Supabase, and migrating projects hosted on Replit is a significant undertaking. That is fine during the prototyping phase, but have an exit strategy before formal development.&lt;/p&gt;&#xA;&lt;table&gt;&#xA;  &lt;thead&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;th&gt;Tool&lt;/th&gt;&#xA;          &lt;th&gt;Lock-in Level&lt;/th&gt;&#xA;          &lt;th&gt;Migration Difficulty&lt;/th&gt;&#xA;      &lt;/tr&gt;&#xA;  &lt;/thead&gt;&#xA;  &lt;tbody&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Lovable&lt;/td&gt;&#xA;          &lt;td&gt;High&lt;/td&gt;&#xA;          &lt;td&gt;Hard-coded to React + Supabase, large data migration workload&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Bolt&lt;/td&gt;&#xA;          &lt;td&gt;Medium&lt;/td&gt;&#xA;          &lt;td&gt;Framework flexible, but deployment tied to Bolt Cloud&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Replit&lt;/td&gt;&#xA;          &lt;td&gt;High&lt;/td&gt;&#xA;          &lt;td&gt;Code + database + hosting all on Replit, full migration is a project&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Cursor&lt;/td&gt;&#xA;          &lt;td&gt;None&lt;/td&gt;&#xA;          &lt;td&gt;You choose the tech stack, can switch anytime&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;  &lt;/tbody&gt;&#xA;&lt;/table&gt;&#xA;&lt;h3 
id=&#34;trap-4-misusing-in-complex-systems&#34;&gt;Trap 4: Misusing in Complex Systems&#xA;&lt;/h3&gt;&lt;p&gt;Vibe Coding increases risks in algorithm-intensive, high-concurrency, and strong-consistency systems (like trading engines, database kernels). It is best suited for rapidly iterating business applications.&lt;/p&gt;&#xA;&lt;h3 id=&#34;trap-5-skipping-tests&#34;&gt;Trap 5: Skipping Tests&#xA;&lt;/h3&gt;&lt;p&gt;Testing AI-generated code is even more critical than testing handwritten code. Your understanding of implementation details is shallower, and testing is your only means to verify that the &amp;ldquo;AI understood correctly.&amp;rdquo;&lt;/p&gt;&#xA;&lt;h2 id=&#34;getting-started-from-scratch-5-minute-vibe-coding&#34;&gt;Getting Started from Scratch: 5-Minute Vibe Coding&#xA;&lt;/h2&gt;&lt;p&gt;The fastest entry path requires no installation:&lt;/p&gt;&#xA;&lt;h3 id=&#34;option-a-boltnew-no-threshold&#34;&gt;Option A: Bolt.new (No Setup Required)&#xA;&lt;/h3&gt;&lt;ul&gt;&#xA;&lt;li&gt;Open bolt.new&lt;/li&gt;&#xA;&lt;li&gt;Input: &lt;strong&gt;&amp;ldquo;Help me create a Pomodoro timer app, 25 minutes work + 5 minutes break, with a warm color scheme&amp;rdquo;&lt;/strong&gt;&lt;/li&gt;&#xA;&lt;li&gt;Wait a few seconds, and a complete app will be generated&lt;/li&gt;&#xA;&lt;li&gt;Preview and modify directly in the browser&lt;/li&gt;&#xA;&lt;li&gt;Deploy with one click when satisfied&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;option-b-trae-recommended-for-domestic-users&#34;&gt;Option B: Trae (Recommended for Users in China)&#xA;&lt;/h3&gt;&lt;ul&gt;&#xA;&lt;li&gt;Download Trae (trae.ai)&lt;/li&gt;&#xA;&lt;li&gt;Enable SOLO mode&lt;/li&gt;&#xA;&lt;li&gt;Input: &lt;strong&gt;&amp;ldquo;Create a Markdown note-taking app that supports creating, editing, and deleting, with data stored in localStorage&amp;rdquo;&lt;/strong&gt;&lt;/li&gt;&#xA;&lt;li&gt;Trae will autonomously complete the entire process from architecture to code to testing&lt;/li&gt;&#xA;&lt;li&gt;Preview and run 
directly in the IDE&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;option-c-cursor--claude-code-professional-route&#34;&gt;Option C: Cursor + Claude Code (Professional Route)&#xA;&lt;/h3&gt;&lt;ul&gt;&#xA;&lt;li&gt;Download Cursor (cursor.com)&lt;/li&gt;&#xA;&lt;li&gt;Open the project folder&lt;/li&gt;&#xA;&lt;li&gt;Use Cmd+I to invoke Agent mode&lt;/li&gt;&#xA;&lt;li&gt;Describe requirements, review generated code, and iterate for optimization&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h2 id=&#34;data-speaks-vibe-coding-in-2026&#34;&gt;Data Speaks: Vibe Coding in 2026&#xA;&lt;/h2&gt;&lt;p&gt;Several key figures help you understand this is not a bubble:&lt;/p&gt;&#xA;&lt;table&gt;&#xA;  &lt;thead&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;th&gt;Metric&lt;/th&gt;&#xA;          &lt;th&gt;Data&lt;/th&gt;&#xA;      &lt;/tr&gt;&#xA;  &lt;/thead&gt;&#xA;  &lt;tbody&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Developer Adoption Rate&lt;/td&gt;&#xA;          &lt;td&gt;84% have used or plan to use AI coding tools&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Engineering Team Adoption Rate&lt;/td&gt;&#xA;          &lt;td&gt;91% have adopted Vibe Coding workflows&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;YC Startups&lt;/td&gt;&#xA;          &lt;td&gt;25% of YC companies have 95% of their code generated by AI&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Cursor ARR&lt;/td&gt;&#xA;          &lt;td&gt;$2 billion in 24 months, the fastest-growing SaaS in history&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Lovable ARR&lt;/td&gt;&#xA;          &lt;td&gt;Over $300 million, valued at $6.6 billion&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Replit Growth&lt;/td&gt;&#xA;          &lt;td&gt;From $10 million to $100 million ARR in 9 months&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Walmart 
Effect&lt;/td&gt;&#xA;          &lt;td&gt;Saved 4 million developer hours&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;  &lt;/tbody&gt;&#xA;&lt;/table&gt;&#xA;&lt;h2 id=&#34;final-thoughts&#34;&gt;Final Thoughts&#xA;&lt;/h2&gt;&lt;p&gt;Vibe Coding is not a &amp;ldquo;lazy&amp;rdquo; tool. It liberates engineers from repetitive code writing, allowing them to focus on architectural design, business logic, and code quality control.&lt;/p&gt;&#xA;&lt;p&gt;In 2026, not knowing Vibe Coding may not get you eliminated, but those who master it will be at least three times as productive as those who do not.&lt;/p&gt;&#xA;&lt;p&gt;My advice: &lt;strong&gt;Start with the simplest option.&lt;/strong&gt; Open Bolt.new or Trae, spend 5 minutes generating a small application, and experience the &amp;ldquo;describe to develop&amp;rdquo; approach. Then gradually learn prompt engineering and project memory files, evolving from the first level to the fourth.&lt;/p&gt;&#xA;&lt;p&gt;The golden age of Vibe Coding is just beginning.&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>Alex Imas on AI&#39;s Impact on the Economy and Labor Market</title>
            <link>https://lumigallerys.com/posts/note-1988cb8000/</link>
            <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
            <guid>https://lumigallerys.com/posts/note-1988cb8000/</guid>
<description>&lt;p&gt;&lt;img alt=&#34;Image 1&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;253px&#34; data-flex-grow=&#34;105&#34; height=&#34;450&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-1988cb8000/img-5caa758660.jpeg&#34; width=&#34;476&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Alex Imas, an economist at the University of Chicago and author of &amp;ldquo;The Power Ghost,&amp;rdquo; has developed an optimistic view of artificial intelligence (AI). He occupies a unique position in academia as both a leading scholar studying AI&amp;rsquo;s impact on the labor market and an active practitioner of the technology.&lt;/p&gt;&#xA;&lt;p&gt;Unlike many of his peers, Imas takes doomsday predictions seriously, particularly the discussions around &amp;ldquo;ghost GDP&amp;rdquo; and a deflationary spiral raised by the independent research organization Citrini Research. This theory posits that if automation replaces most jobs and labor&amp;rsquo;s share of income declines significantly, wealthy capital holders will reach a saturation point in consumption, while unemployed workers will lack the means to consume.&lt;/p&gt;&#xA;&lt;p&gt;Under this hypothesis, demand would collapse, leading the economy into a recession. Although Imas has written that the likelihood of negative economic growth is low, he emphasizes the need to take seriously high unemployment and its potential to drag down the economy.&lt;/p&gt;&#xA;&lt;p&gt;&amp;ldquo;My first reaction was one of fear,&amp;rdquo; Imas told Fortune. &amp;ldquo;I needed to overcome that fear through rigorous reasoning, which is not self-comforting but rather a conclusion that integrates various factors based on historical patterns and human preferences.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;Wall Street also values Imas&amp;rsquo;s warnings. 
Morgan Stanley included him in a recent research report as a primary resource for investors to understand AI&amp;rsquo;s impact on employment, calling him a valuable third-party expert in the field.&lt;/p&gt;&#xA;&lt;p&gt;Imas is not just a theorist. His research has been published in the American Economic Review, the Quarterly Journal of Economics, and the Proceedings of the National Academy of Sciences, and he collaborated with Nobel laureate Richard Thaler to update the classic work on behavioral economics, &amp;ldquo;The Winner&amp;rsquo;s Curse.&amp;rdquo; Perhaps most notably, he runs a widely read Substack column titled &amp;ldquo;The Power Ghost.&amp;rdquo; Upon learning he was featured in a Wall Street report, he joked, &amp;ldquo;That&amp;rsquo;s interesting&amp;hellip; I hadn&amp;rsquo;t even noticed.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;The influence of &amp;ldquo;The Power Ghost&amp;rdquo; has exceeded his expectations. When starting the column, Imas aimed to write with the rigor of academic papers for a broader audience than journal editors, reaching economists, AI researchers, tech experts, and policymakers. He noted that the response has far surpassed his expectations, with even his mother-in-law&amp;rsquo;s friends providing feedback. Recently, he helped a neighbor install the AI agent Claude on her computer and witnessed her develop an application from scratch in just one afternoon. &amp;ldquo;These ideas need to be widely disseminated and reach more people,&amp;rdquo; he said.&lt;/p&gt;&#xA;&lt;h2 id=&#34;the-signals-from-starbucks&#34;&gt;The Signals from Starbucks&#xA;&lt;/h2&gt;&lt;p&gt;After months of writing and revising, Imas offers a thought-provoking perspective for those who adhere to doomsday theories, suggesting that an AI-driven economy may not necessarily lead to a grim outcome. 
He uses Starbucks as an example.&lt;/p&gt;&#xA;&lt;p&gt;With a market capitalization of $112 billion and a focus on highly standardized products, Starbucks has long had plans to reduce labor through technology. However, despite years of layoffs and automation, the company has only managed to maintain thin profits. Recently, CEO Brian Niccol has reversed course, re-emphasizing handwritten notes on cups, ceramic mugs, and comfortable seating: details that hold more value for customers than efficiency. Starbucks is now hiring more baristas and slowing down its automation efforts.&lt;/p&gt;&#xA;&lt;p&gt;In Imas&amp;rsquo;s view, Starbucks&amp;rsquo;s transformation is highly instructive. He asked in a recent Substack article: as AI makes the production of goods cheaper and more abundant, &amp;ldquo;what becomes truly scarce?&amp;rdquo; In the age of AI, some things are destined to remain non-commodifiable. Niccol seems to understand this; the human presence, social connections, and the provenance of products will become increasingly scarce and thus more valuable.&lt;/p&gt;&#xA;&lt;p&gt;While Starbucks is testing ChatGPT for drink recommendations, that experiment remains separate from its core operating strategy. Starbucks stated to Fortune that its approach to AI is &amp;ldquo;pragmatic and grounded,&amp;rdquo; referencing previous public information. 
The company added, &amp;ldquo;If AI can help employees showcase their skills, deepen connections with customers, and optimize café operations effectively, it will be widely adopted; if not, it will be abandoned.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 2&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;359px&#34; data-flex-grow=&#34;149&#34; height=&#34;721&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-1988cb8000/img-df82812a3b.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-1988cb8000/img-df82812a3b_hu_a73f0da573b28924.jpeg 800w, https://lumigallerys.com/posts/note-1988cb8000/img-df82812a3b.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;from-farms-to-relationship-industries&#34;&gt;From Farms to Relationship Industries&#xA;&lt;/h2&gt;&lt;p&gt;The theoretical support for this viewpoint is structural change theory, which examines how the economy evolves when technology significantly increases productivity in a sector. A classic case, praised by Fundstrat analyst Tom Lee, is that around 1900, 40% of the U.S. labor force was engaged in agriculture, a figure that has now dropped below 2%.&lt;/p&gt;&#xA;&lt;p&gt;People did not stop eating; rather, as food became commoditized and inexpensive, they no longer spent most of their time on food production. The economy did not collapse but transformed, with labor gradually shifting from agriculture to manufacturing and then to services as workers&amp;rsquo; incomes rose. 
Imas believes AI will bring about a similar dynamic: &amp;ldquo;The economic logic of scarcity will not disappear; it will merely shift.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;He cited a significant paper published in 2021 by Diego Comin, Daniel Lashkari, and Martti Mestieri in Econometrica, which pointed out that historically, over 75% of labor reallocation across industries has been driven by the &amp;ldquo;income effect&amp;rdquo; rather than the &amp;ldquo;price effect.&amp;rdquo; In other words, as people become wealthier, they do not simply buy more of the same inexpensive goods but seek goods and services with higher &amp;ldquo;income elasticity,&amp;rdquo; meaning demand for these goods and services grows faster than income.&lt;/p&gt;&#xA;&lt;p&gt;Imas&amp;rsquo;s behavioral economics perspective is rooted in the &amp;ldquo;mimetic desire&amp;rdquo; theory proposed by French philosopher René Girard, which suggests that sometimes people desire something not merely for its practical value but because others want it and cannot easily have it. Research has shown that when subjects learn that a random subset of people is restricted from purchasing a certain item, the price they are willing to pay nearly doubles. When AI is involved in producing goods, the premium on those goods diminishes significantly, as people perceive AI-produced items as essentially infinitely replicable, weakening the scarcity that drives desire.&lt;/p&gt;&#xA;&lt;p&gt;This means that as AI drives more areas of the economy toward commodification, consumption and employment will gradually shift toward what Imas terms &amp;ldquo;relationship industries.&amp;rdquo; This circles back to his analogy with Starbucks: people are willing to pay for things that have a strong human touch. The future middle class&amp;rsquo;s consumption patterns will resemble those of today&amp;rsquo;s affluent classes. 
Many billionaires, despite complete financial freedom, now spend considerable time on podcasts, live performances, and social platforms, consuming and creating interpersonal interaction alike.&lt;/p&gt;&#xA;&lt;p&gt;&amp;ldquo;(The billionaires) could easily stay on an island, enjoying all the movies, playing all the games, and using all the tech products,&amp;rdquo; Imas said. &amp;ldquo;But most of the time, these billionaires are recording podcasts, interacting with others on X, and attending performances, essentially consuming relationship goods or striving to provide them, such as socializing and being among people.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;He believes that the demand for interpersonal connections has no natural limit, as it is fundamentally a comparative need that can never be fully satisfied.&lt;/p&gt;&#xA;&lt;h2 id=&#34;not-artists-but-nurses-teachers-and-baristas&#34;&gt;Not Artists, but Nurses, Teachers, and Baristas&#xA;&lt;/h2&gt;&lt;p&gt;Imas emphasizes that he is not envisioning a romantic world filled with artists and performers. He stated, &amp;ldquo;Starbucks employees are just ordinary people; they are not performers or artists. People value their interactions with them, not from a highbrow, artistic, or entertainment perspective, but simply from a basic social need.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;In his theoretical framework, relationship industries include nurses, doctors, teachers, therapists, childcare workers, private chefs, and service personnel, among others. These sectors employ nearly 50 million people in the U.S. Many existing jobs will not disappear entirely but will transform. As AI takes over the repetitive tasks of teachers or doctors, the core of the job will shift to emotional support, care, and interpersonal connections, becoming the true source of economic value. 
Imas points out that one overlooked aspect is how these roles will evolve toward being more relationship-oriented as AI develops.&lt;/p&gt;&#xA;&lt;p&gt;&amp;ldquo;Manufacturing workers and truck drivers may disappear because those jobs do not involve relational interaction,&amp;rdquo; Imas explained. &amp;ldquo;But many existing jobs with relational components will evolve into relationship-oriented roles.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 3&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;358px&#34; data-flex-grow=&#34;149&#34; height=&#34;636&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-1988cb8000/img-0ff18efaf0.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-1988cb8000/img-0ff18efaf0_hu_8167adaefbd0df9e.jpeg 800w, https://lumigallerys.com/posts/note-1988cb8000/img-0ff18efaf0.jpeg 949w&#34; width=&#34;949&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;the-useless-sports-car&#34;&gt;The Useless Sports Car&#xA;&lt;/h2&gt;&lt;p&gt;Imas&amp;rsquo;s theory has been tested in a large nonprofit healthcare organization. A senior data scientist, who wished to remain anonymous, told Fortune that despite management&amp;rsquo;s strong promotion of the enterprise version of ChatGPT over the past six months, employees found almost no applications beyond writing emails and summarizing reports.&lt;/p&gt;&#xA;&lt;p&gt;This data scientist stated that their actual work involved statistical analysis of cancer patient data for one of the largest medical databases in the U.S., but because privacy laws strictly protect that information, current AI tools have no access to it.&lt;/p&gt;&#xA;&lt;p&gt;He agrees with Fortune&amp;rsquo;s analogy of AI as a &amp;ldquo;sports car,&amp;rdquo; but says that for most jobs, the reality resembles driving one through Manhattan traffic. 
Years ago, shortly after ChatGPT was launched, he developed a cancer survival risk calculator using the tool in less than a month. However, given the sensitivity of the data involved, the tool has been stuck in legal review and paperwork, unable to be deployed.&lt;/p&gt;&#xA;&lt;p&gt;He is not a Luddite. He acknowledges that AI can help him convert statistical code between different programming languages faster than building prototypes alone. However, he believes that compared to regression analysis, his most irreplaceable value lies in handling interpersonal collaboration, including communicating with international surgical oncologists from Yale, MD Anderson Cancer Center, and the University of Toronto, covering various cancers from thymic carcinoma to orbital sarcoma, bridging the gap between clinical intuition and strict statistical requirements.&lt;/p&gt;&#xA;&lt;p&gt;&amp;ldquo;Experts have packed schedules, and being able to communicate with them for 15 minutes in a day is considered very lucky. Therefore, I must make everything precise and concise,&amp;rdquo; he added. Currently, no AI can replicate the communicative context required for such relationships. This complex, irreducible human judgment is key to maintaining the operation of complex institutions.&lt;/p&gt;&#xA;&lt;h2 id=&#34;the-speed-issue&#34;&gt;The Speed Issue&#xA;&lt;/h2&gt;&lt;p&gt;Imas has not completely dispelled his concerns; the optimistic scenario he describes depends on the speed of transformation. If the shift from a commodity economy to a relationship economy occurs gradually, historical experience suggests that the labor market can absorb and adapt. However, if the pace of AI automation far exceeds the speed at which workers and institutions can retrain and reallocate, the demand collapse he warned about may still occur.&lt;/p&gt;&#xA;&lt;p&gt;&amp;ldquo;The speed of change is crucial,&amp;rdquo; he said. 
&amp;ldquo;It determines whether humanity ultimately moves toward a hopeful or more worrying future.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;Imas warns that those who still treat AI as an overhyped phenomenon are deceiving themselves, possibly because they are still using chatbots from years ago rather than cutting-edge models. He pointed out that current AI remains &amp;ldquo;jagged.&amp;rdquo; While this term is widely used to describe the probabilistic nature of AI and its tendency to hallucinate, Imas emphasizes that &amp;ldquo;one day, even the lowest points will be at an extremely high level&amp;hellip; even the worst performance will be quite impressive.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;Morgan Stanley warned in its March report that as the capabilities of large language models improve faster than expected, the impacts of AI could become &amp;ldquo;increasingly severe,&amp;rdquo; with large-scale layoffs possible across industries. On one side are dire predictions, while on the other, a cancer statistician quietly waits for corporate enthusiasm for ChatGPT to wane. This stark contrast reflects the uncertainty that Imas, after much struggle, still cannot completely resolve despite his optimistic judgment.&lt;/p&gt;&#xA;&lt;p&gt;Imas remains concerned about those who adopt an ostrich mentality toward AI, stating that the current priority is to provide one-on-one coaching to help people master cutting-edge technologies. He believes that his theory of relationship industries is both reasonable and positive, but he also admits, &amp;ldquo;It took me a long time to come to this realization.&amp;rdquo;&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>The Mutual Empowerment of Artificial Intelligence and Humanities</title>
            <link>https://lumigallerys.com/posts/note-23ccbafa29/</link>
            <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
            <guid>https://lumigallerys.com/posts/note-23ccbafa29/</guid>
            <description>&lt;h2 id=&#34;the-mutual-empowerment-of-artificial-intelligence-and-humanities&#34;&gt;The Mutual Empowerment of Artificial Intelligence and Humanities&#xA;&lt;/h2&gt;&lt;p&gt;Generative artificial intelligence is profoundly changing various fields such as education, employment, entertainment, healthcare, transportation, and elder care, becoming a hot topic of discussion. The relationship between the humanities and generative AI is complex and symbiotic. AI is reshaping the forms and future development paths of the humanities, while the demands of AI development highlight the value and functions of the humanities. In this sense, the development of the humanities will fundamentally influence the cognitive heights and social acceptance that AI can achieve.&lt;/p&gt;&#xA;&lt;h2 id=&#34;bridging-humanities-scholars-to-multidisciplinary-fields&#34;&gt;Bridging Humanities Scholars to Multidisciplinary Fields&#xA;&lt;/h2&gt;&lt;p&gt;As modern disciplines become increasingly specialized, the barriers between the humanities and natural sciences, as well as between the humanities and social sciences, are widening, potentially leading to a &amp;ldquo;knowledge dilemma.&amp;rdquo; Within the humanities, it is difficult to find scholars who can bridge literature, art, philosophy, history, and language, resulting in a limitation of &amp;ldquo;partial profundity&amp;rdquo; in contemporary humanities. The emergence of AI can provide new solutions to this issue.&lt;/p&gt;&#xA;&lt;p&gt;Large language models, constructed through deep learning on vast amounts of text, represent a distributed system of language and knowledge, highly condensing human written knowledge. They utilize neural network architectures and algorithm-driven probabilistic predictions, achieving context awareness through deep learning and performing human-like logical reasoning under specific prompts to produce knowledge outputs. 
In this sense, AI can become a powerful assistant for humanities scholars, building a bridge to multidisciplinary fields and empowering the production of humanistic knowledge in areas such as information search, literature screening, semantic analysis, and cross-domain integration.&lt;/p&gt;&#xA;&lt;p&gt;Currently influential &amp;ldquo;distant reading&amp;rdquo; methods leverage AI models to establish interdisciplinary literary criticism and research models based on the overall framework of world literature. Unlike traditional literary studies that advocate close reading of a few classics, distant reading involves data mining and quantitative analysis of large text collections to systematically reveal themes, emotional tendencies, plot structures, and linguistic features, providing a macro description of the overall development of human literature. This effectively addresses the technical challenges of processing vast texts and the cross-cultural, cross-disciplinary knowledge dilemmas that traditional literary history and world literature studies cannot solve.&lt;/p&gt;&#xA;&lt;h2 id=&#34;updating-methods-and-paradigms-in-the-humanities&#34;&gt;Updating Methods and Paradigms in the Humanities&#xA;&lt;/h2&gt;&lt;p&gt;China has a long and rich tradition of humanities scholarship, but the term &amp;ldquo;humanities&amp;rdquo; emerged in the twentieth century. During the Enlightenment in the West, humanities scholars sought to identify their unique nature and methods outside of natural sciences. 
They viewed the humanities as a &amp;ldquo;new science&amp;rdquo; concerning human thoughts and behaviors, distinct from natural sciences, emphasizing an individualized approach linked to values, aiming to construct epistemology and methodology for the humanities.&lt;/p&gt;&#xA;&lt;p&gt;Overall, this logic, criticized by later generations as a &amp;ldquo;spiritual-natural dichotomy,&amp;rdquo; emphasizes &amp;ldquo;thought of existence&amp;rdquo; in the humanities, with research objects existing in symbolic forms such as language, text, images, and rituals, involving faith, conscience, emotion, aesthetics, values, and ideals—elements of spiritual culture that are difficult to quantify. This encompasses deep individual psychology, instincts, consciousness, and the unconscious, carrying intrinsic characteristics of value, culture, individuality, spirituality, emotion, thought, and symbolism that are inseparable from humanity. Methodologically, the humanities focus on internalized approaches such as empathetic understanding, contemplative experience, and intuitive insight, aiming to reveal unique individual experiences, complex spiritual worlds, and deep cultural meanings that cannot be captured by the replicable, quantifiable, and verifiable technical means of natural sciences.&lt;/p&gt;&#xA;&lt;p&gt;As disciplines develop, this binary thinking model is continually being reexamined. Marx stated, &amp;ldquo;Natural science will include the science of man, just as the science of man includes natural science: this will be a science.&amp;rdquo; Emerging digital humanities research not only deeply examines the humanistic concerns and governance challenges brought by digital technology but also actively explores new research methods and paradigms from digital technology, reshaping the landscape of humanities research. Various literary laboratories and quantitative humanities research initiatives are continuously emerging. 
AI is evolving from an auxiliary tool to a key force driving paradigm innovation, providing humanities scholars with new interdisciplinary research perspectives and theoretical innovation support, greatly expanding the breadth and depth of humanistic research experiences.&lt;/p&gt;&#xA;&lt;h2 id=&#34;human-machine-collaboration-enhances-critical-thinking-and-writing-skills&#34;&gt;Human-Machine Collaboration Enhances Critical Thinking and Writing Skills&#xA;&lt;/h2&gt;&lt;p&gt;A unique aspect of the humanities is that its knowledge forms often manifest as narrative or speculative texts, expressing researchers&amp;rsquo; unique insights and profound reflections on human existence, values, and meanings through written language. This contrasts with the natural sciences, which utilize formulaic deductions, data charts, and repeatable experimental validations, and with social sciences that heavily rely on survey data and statistical models. Humanistic writing is not only an expression of thoughts and emotions but also a comprehensive cognitive process that integrates creativity, criticality, and reflection—&amp;ldquo;writing is thinking,&amp;rdquo; a process of generating and deepening thoughts and emotions. Writing can stimulate creative vitality, enhance self-reflection, and expand expressive boundaries, where linguistic sensitivity, intellectual penetration, and cultural insight converge. Scholars have noted that writing styles can also carry unique emotional colors, academic judgments, and value positions of the researchers. In this sense, humanistic academic writing is a core aspect of academic research; writing is not only a mode of knowledge production in the humanities but also a reflection of its thinking modes and disciplinary characteristics, serving as a fundamental medium for maintaining the discipline&amp;rsquo;s existence and promoting academic exchange, as well as a vital source of the discipline&amp;rsquo;s vitality. 
Whether expressing philosophical thoughts and probing ultimate meanings, narrating historical contexts and events, or constructing values and poetic insights in literary criticism and research, the organization and integration of materials, logical reasoning and argumentation, and the deepening of thoughts and condensation of spiritual experiences are all accomplished through the creative writing process.&lt;/p&gt;&#xA;&lt;p&gt;Current AI models can transfer the language structures, argumentative patterns, and disciplinary terminology learned from large-scale corpora into specific fields of humanistic knowledge production, promoting human-machine collaboration and achieving a holistic leap in humanistic writing. On one hand, in academic writing, researchers can leverage AI&amp;rsquo;s powerful data processing capabilities to efficiently gather, systematically organize, and deeply analyze literature before writing. During the writing process, through human-machine collaboration and dialogue, they can organically integrate dispersed knowledge, build new knowledge graphs and cognitive frameworks, helping researchers break through existing theoretical and cognitive limitations, unearth deep thoughts and internal logical structures from complex texts, reveal developmental laws, distill core concepts, and ultimately give birth to new knowledge outcomes. This process is not merely a simple accumulation of knowledge but an innovative mechanism capable of generating specific theoretical results, opening new paths for academic research and knowledge innovation. On the other hand, AI can refine and optimize professional academic expressions, correcting and enhancing the knowledge, normative, logical, and systematic aspects of humanistic academic expressions, even compelling low-quality academic research to exit relevant fields. 
Sometimes, certain academic debates in the humanities suffer from insufficient materials, unclear concepts, and logical inconsistencies; AI assistance can significantly improve the quality of academic discourse and enhance its value.&lt;/p&gt;&#xA;&lt;p&gt;The involvement of AI is not a simple process of machine-assisted writing but a continuous deepening of thought, inspiration, and expression optimization through human-machine interaction and back-and-forth dialogue. This process places high demands on researchers&amp;rsquo; AI literacy regarding human-machine collaboration, particularly in correctly inputting commands, providing high-level prompts, and deeply interpreting output results. These capabilities determine the effectiveness of AI tool usage. Here, the ability to pose genuine, good, and new questions becomes extremely important, returning to the essence of academic research. Moreover, as some studies have pointed out, AI excels in knowledge inheritance but falls short in creative thinking, unable to replace human depth in theoretical construction, critical reflection, value selection, and aesthetic judgment. The subtle connections discovered by humans based on intuitive judgments among vast amounts of information, strategic choices made based on value positions, and unique expressions arising from aesthetic tastes all hold significant importance. 
Without human verification, modification, and deepening, the content generated by AI will carry a strong &amp;ldquo;machine flavor,&amp;rdquo; presenting as bland and homogenized expressions.&lt;/p&gt;&#xA;&lt;p&gt;To ensure the independent thinking character, unique insights, and distinct academic style of scholarly work, humanities researchers&amp;rsquo; personal characteristics—&amp;ldquo;talent, courage, insight, and ability&amp;rdquo;—should not be diminished by machine assistance, preventing dependency thinking and intellectual inertia; otherwise, their research outcomes may lose the dynamism inherent in humanistic inquiry. Humanities research must always reflect &amp;ldquo;the human&amp;rdquo; and integrate personal life experiences into academic exploration, responding to contemporary issues with keen perception, unique creativity, and a critical spirit in pursuit of truth. People should be able to sense the emotional investment and value care of researchers, with both depth of thought and warmth of emotion.&lt;/p&gt;&#xA;&lt;h2 id=&#34;the-development-of-ai-depends-on-humanities-understanding-of-human&#34;&gt;The Development of AI Depends on Humanities&amp;rsquo; Understanding of &amp;ldquo;Human&amp;rdquo;&#xA;&lt;/h2&gt;&lt;p&gt;As a mirror of human intelligence, artificial intelligence can help humanity understand the essence of &amp;ldquo;what it means to be human&amp;rdquo; more profoundly. At the same time, humanity&amp;rsquo;s understanding of itself becomes the fundamental basis for the future development and governance of AI technology. 
Marx pointed out that &amp;ldquo;conscious life activities directly distinguish humans from animal life activities.&amp;rdquo; Thus, humanity&amp;rsquo;s strength lies in its possession of intellect, practical creativity, and the ability to continuously acquire knowledge and skills through learning to achieve goals.&lt;/p&gt;&#xA;&lt;p&gt;At this stage, AI is still an imitation of human intelligence, exhibiting human-like behavior. Its developmental goal should gradually align with the internal mental structures and creative mechanisms of humans, rather than merely replicating external behaviors. The emergence of generative AI is not accidental; it is a product of human creativity and self-awareness reaching a certain stage. Although current specialized vertical models demonstrate execution efficiency and precision that surpass humans in specific tasks and fields, they remain tools created by humans. To date, so-called &amp;ldquo;general models&amp;rdquo; designed to adapt autonomously to different environments and needs still often perform worse than human infants when faced with new situations, counterfactual problems, or tasks requiring common-sense reasoning. Fundamentally, current AI knows what to do but may not understand the underlying principles and logic; the AI black box has yet to be opened, and it cannot evolve from imitator to understander. Questions about the generative mechanisms and operational modes of human intellect are particularly significant in this context. 
Humanity&amp;rsquo;s contemplation of AI is also a re-examination and reflection on itself as a complex intelligent entity, making a groundbreaking effort to uncover the deep essence of humanity and understand &amp;ldquo;what it means to be human&amp;rdquo; by comparing it with non-human intelligent agents.&lt;/p&gt;&#xA;&lt;p&gt;Whether in natural sciences or humanities and social sciences, there exists an alternating cycle of &amp;ldquo;disenchantment&amp;rdquo; and &amp;ldquo;enchantment&amp;rdquo; regarding humans, with the core of &amp;ldquo;enchantment&amp;rdquo; always being the mystery of humanity itself. Without a profound understanding of their own intellect, a true &amp;ldquo;general model&amp;rdquo; cannot emerge, as Marx stated, &amp;ldquo;anatomy of the human body is the key to the anatomy of the ape.&amp;rdquo; The signs of higher animals displayed in lower animals can only be understood after higher animals themselves have been recognized. Understanding humans and comprehending humanity is the fundamental nature and basic value goal of the humanities. Today, the many &amp;ldquo;unexplainabilities&amp;rdquo; of AI are largely due to humanity&amp;rsquo;s insufficient understanding of its own intellect. Breakthroughs in AI creation, technological governance, and value alignment require a prerequisite understanding of humanity&amp;rsquo;s essence, and the level of development in the humanities determines the future possibilities for the development of &amp;ldquo;general models.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;From the perspective of the relationship between the humanities and social life, the humanities cannot be replaced by AI, as they possess reflexivity. 
Every emergence and change of humanistic cognition and understanding intervenes in the development of social life and the construction of public sentiment, embodying the quality of &amp;ldquo;establishing a heart for heaven and earth, establishing a destiny for the people.&amp;rdquo; In this sense, the development of the humanities is not a linear progression; various humanistic thoughts cannot simply be stacked and merged into a single ultimate truth but coexist in a pluralistic manner, collectively shaping the rich spiritual world of society and individuals. It can be said that the progress of humanistic scholarship alters both humanity and its understanding of the world, thereby significantly impacting generative AI. Simultaneously, the influence of new technologies like AI on society and humanity itself also constitutes a focus of humanistic scholarship, and related reflections become part of the human spiritual world. The humanities and AI are always in a dynamically intertwined state of coexistence and mutual promotion. It is essential to remember that AI is created by humans, and humanity must possess the ability to truly understand and effectively harness its creations. In this sense, we are fully confident that humanistic thought can illuminate the future path of AI.&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>DeepSeek to Launch Next-Gen AI Model V4, Competing with OpenAI and Anthropic</title>
            <link>https://lumigallerys.com/posts/note-cf64d3b0a0/</link>
            <pubDate>Mon, 27 Apr 2026 00:00:00 +0000</pubDate>
            <guid>https://lumigallerys.com/posts/note-cf64d3b0a0/</guid>
            <description>&lt;h2 id=&#34;deepseeks-upcoming-ai-model-v4&#34;&gt;DeepSeek&amp;rsquo;s Upcoming AI Model V4&#xA;&lt;/h2&gt;&lt;p&gt;According to recent reports from Reuters, Chinese AI startup DeepSeek is set to launch its next-generation AI model V4 in mid-February. This model boasts strong coding capabilities and may outperform competitors such as Anthropic&amp;rsquo;s Claude and OpenAI&amp;rsquo;s GPT series. A year ago, DeepSeek released its large model R1, which the BBC described as showcasing China&amp;rsquo;s competitiveness in the AI field, just two years after OpenAI launched ChatGPT.&lt;/p&gt;&#xA;&lt;p&gt;Experts interviewed by the Global Times indicated that in just one year, China has narrowed the gap with the United States in AI, using the one-year-old DeepSeek and three-year-old ChatGPT as benchmarks to illustrate the differing paths of the two nations.&lt;/p&gt;&#xA;&lt;h2 id=&#34;diverging-paths-in-ai-development&#34;&gt;Diverging Paths in AI Development&#xA;&lt;/h2&gt;&lt;p&gt;A year ago, Chen Yan, Executive Director of the Japan Institute (China), noticed the rising prominence of DeepSeek in Zhongguancun. The elevator no longer stopped at DeepSeek&amp;rsquo;s floor, and media reporters gathered downstairs for interviews. Chen received numerous inquiries from Japanese companies wanting to invest in DeepSeek but remarked that they had missed the optimal investment window. Previously, a $10 million investment was astonishing for such startups, but now even $1 billion may not guarantee entry.&lt;/p&gt;&#xA;&lt;p&gt;Foreign media, including the Wall Street Journal, described the launch of DeepSeek&amp;rsquo;s R1 model as shocking to the world. Reports indicated that R1 completed training in just two months at a fraction of the cost incurred by American companies like OpenAI, yet its performance rivaled that of ChatGPT and Meta&amp;rsquo;s Llama model. 
By 2025, more Chinese large model companies are expected to keep pace with the latest developments in AI, joining the global first tier of large models.&lt;/p&gt;&#xA;&lt;h2 id=&#34;chinas-growing-influence-in-open-source-ai&#34;&gt;China&amp;rsquo;s Growing Influence in Open Source AI&#xA;&lt;/h2&gt;&lt;p&gt;The South China Morning Post reported that according to a recent report from third-party AI model aggregator OpenRouter and venture capital firm Andreessen Horowitz, Chinese open-source AI models account for nearly 30% of the global AI technology usage. China&amp;rsquo;s open-source model is gaining the trust of developers worldwide, with U.S. companies like Airbnb and even Meta utilizing Alibaba&amp;rsquo;s Qwen large model. AI researcher and author Sebastian Raschka noted that Alibaba&amp;rsquo;s Qwen3 series models, like DeepSeek&amp;rsquo;s R1, are among the most noteworthy open-source models to watch in 2025.&lt;/p&gt;&#xA;&lt;p&gt;Alibaba reflected on the timeline, noting that OpenAI released ChatGPT on November 30, 2022, and by April 2023, Qwen series models were launched. Alibaba began its AI large model research as early as 2018 and has since introduced various models, including the multi-modal M6 and language model PLUG, solidifying its position as a major player in the global AI landscape. To date, Alibaba has open-sourced nearly 400 models, with over 180,000 global derivative models and downloads surpassing 700 million.&lt;/p&gt;&#xA;&lt;h2 id=&#34;different-approaches-to-ai&#34;&gt;Different Approaches to AI&#xA;&lt;/h2&gt;&lt;p&gt;&amp;ldquo;In the past year, the U.S. and China have developed two very different main pathways for large models,&amp;rdquo; said Shen Yang, a dual-appointed professor at Tsinghua University&amp;rsquo;s School of Journalism and Communication and School of Artificial Intelligence. The U.S. 
has pursued a path of &amp;ldquo;continuous enhancement of cutting-edge capabilities + closed-source models + platform products,&amp;rdquo; encapsulating the strongest models into super interfaces like ChatGPT, while China&amp;rsquo;s approach emphasizes &amp;ldquo;open-source weights + extreme engineering efficiency + rapid industrial diffusion.&amp;rdquo; China does not aim for long-term monopolization of the strongest models but seeks to quickly translate &amp;ldquo;sufficiently strong capabilities&amp;rdquo; into replicable and applicable engineering assets, enabling swift integration into real business systems.&lt;/p&gt;&#xA;&lt;p&gt;Shen further analyzed that while the U.S. still leads in the &amp;ldquo;strongest model&amp;rsquo;s cutting-edge capabilities,&amp;rdquo; the gap is no longer generational but rather measured in months to a year. In terms of &amp;ldquo;engineering efficiency, cost, and deployment speed,&amp;rdquo; China has nearly no time lag, with some areas even faster. However, in terms of &amp;ldquo;product platforms, ecosystems, and rule-making,&amp;rdquo; the U.S. remains one to two years ahead.&lt;/p&gt;&#xA;&lt;h2 id=&#34;the-future-of-ai-competition&#34;&gt;The Future of AI Competition&#xA;&lt;/h2&gt;&lt;p&gt;AI blogger Li Shanglong, who recently attended the CES in Las Vegas, described the U.S. as having two rivers: one fully in the AI era and the other slowly being permeated. He noted that in Silicon Valley, many people are actively discussing AI, ChatGPT, and related products, while outside Silicon Valley, many ordinary lives are not as AI-integrated. Returning to China to start a business, Li expressed that AI won&amp;rsquo;t change the U.S. 
overnight but will gradually alter the lifestyles of some individuals.&lt;/p&gt;&#xA;&lt;p&gt;Professor Li Xiangming from Northeastern University, who has long monitored AI developments in China and the U.S., observed that while AI is deeply embedded in the everyday lives of Americans, it is primarily in the &amp;ldquo;soft&amp;rdquo; aspects. AI has become infrastructure, influencing streaming recommendations, insurance pricing, navigation predictions, and office integration with models like ChatGPT. However, in terms of widespread adoption in &amp;ldquo;hard&amp;rdquo; aspects (physical hardware), the U.S. is still on the verge of a breakout.&lt;/p&gt;&#xA;&lt;p&gt;At CES, Li noted the impressive &amp;ldquo;engineering deployment speed&amp;rdquo; and &amp;ldquo;supply chain completeness&amp;rdquo; of Chinese products. Chinese companies dominate in areas such as lidar, high-energy-density batteries, and cost-effective motor components. Chinese robots not only iterate quickly but also possess significant mass production potential and cost advantages, which are crucial for integrating robots into global households. In the U.S., AGI (Artificial General Intelligence) equips robots with cognitive capabilities, while Chinese manufacturing is creating robust and accessible AI bodies, especially for humanoid robots.&lt;/p&gt;&#xA;&lt;h2 id=&#34;the-next-big-breakthroughs-in-ai&#34;&gt;The Next Big Breakthroughs in AI&#xA;&lt;/h2&gt;&lt;p&gt;&amp;ldquo;Pursuing model performance enhancement is the goal of all foundational model companies,&amp;rdquo; Alibaba stated. 
In China, the rapid development and rich application of models represent a unique advantage in AI.&lt;/p&gt;&#xA;&lt;p&gt;A leader from a large model startup shared that their team is focusing on researching large models with capabilities in &amp;ldquo;long reasoning, coding, and multi-modality.&amp;rdquo; They believe that by 2025, the most significant change AI will bring is in coding, with AI increasingly replacing information reception, creation, and processing tasks. The team is investing considerable time in training AI for coding, treating it like a new intern who needs clear instructions. The key is to convert tasks into detailed prompts, ensuring clarity in requirements.&lt;/p&gt;&#xA;&lt;p&gt;Alibaba also mentioned that they categorize AI development into three stages: learning from humans, assisting humans, and surpassing humans. They believe we are still in the early stages of the second phase, with the endpoint not necessarily being AGI but potentially leading to true superintelligence (ASI). &amp;ldquo;Of course, this is a grand and distant goal that will require a long time to achieve.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;Recently, Tesla CEO Elon Musk revealed in a nearly three-hour podcast that AGI could emerge as early as 2026, with AI capabilities surpassing human intelligence by 2030. This statement has sparked extensive discussion.&lt;/p&gt;&#xA;&lt;p&gt;Shen Yang noted that from a technical perspective, Musk&amp;rsquo;s prediction is not overly aggressive, but AGI is not solely an event declared by engineers. The question of which country achieves AGI first depends on technology, with the U.S. likely leading due to its computational power, engineering, and cutting-edge exploration advantages. 
However, China is better positioned to rapidly deploy AI in real-world settings, integrating it into industries, governance, and public services, allowing AI to operate in real systems, correct errors, and accumulate advantages over time.&lt;/p&gt;&#xA;&lt;p&gt;In summary, Shen stated that while AGI may technically be realized first in the U.S., its true validation will depend on whether it can gain widespread trust and acceptance within society.&lt;/p&gt;&#xA;&lt;h2 id=&#34;anticipating-the-next-deepseek-moment&#34;&gt;Anticipating the Next &amp;ldquo;DeepSeek Moment&amp;rdquo;&#xA;&lt;/h2&gt;&lt;p&gt;Professor Li Xiangming from Northeastern University suggested that the next &amp;ldquo;DeepSeek moment&amp;rdquo; is unlikely to occur in the realm of &amp;ldquo;pure general chat models&amp;rdquo; but may emerge in several directions: first, humanoid robots + large models, where the integration of large models into humanoid robot control, perception, and planning could exponentially amplify China&amp;rsquo;s engineering and manufacturing advantages; second, industrial/energy/supply chain large models, where Chinese companies have inherent advantages in complex processes, dense regulations, and highly structured data; third, breakthroughs in low-cost inference and edge models, similar to DeepSeek&amp;rsquo;s &amp;ldquo;efficiency revolution,&amp;rdquo; will likely occur in edge inference, edge computing, and domestic chip adaptation. In summary: the U.S. excels in &amp;ldquo;intelligent limits,&amp;rdquo; while China leads in &amp;ldquo;intelligent deployment.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;Robopoet&amp;rsquo;s Chief Marketing Officer Zhu Liang stated that AI hardware may experience a &amp;ldquo;DeepSeek moment&amp;rdquo; in 2026, as three conditions are now met: mature large model technology, controllable supply chain costs, and enhanced consumer awareness. 
The combination of these factors could lead to significant large-scale deployment, with their goal set at selling 1 million AI toys this year.&lt;/p&gt;&#xA;&lt;p&gt;The milestone of &amp;ldquo;1 million units&amp;rdquo; in the AI toy industry signifies that once activated devices reach this number, daily interactions will generate token consumption in the millions. A vast user base will provide massive, high-quality interaction data, significantly accelerating the model&amp;rsquo;s &amp;ldquo;data flywheel&amp;rdquo; and exponentially enhancing the product&amp;rsquo;s AI capabilities, personalization, and emotional engagement. This creates a positive feedback loop: the more people use it, the better it becomes, and the better it becomes, the more people use it.&lt;/p&gt;&#xA;&lt;p&gt;Furthermore, reaching &amp;ldquo;1 million units&amp;rdquo; indicates that the market&amp;rsquo;s overall understanding of the industry has matured. It demonstrates to the industry and consumers that AI toys are no longer niche products or trends but essential items that can genuinely integrate into daily life and provide emotional value.&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>Cursor&#39;s Radical Culture: From Start-Up to Billion-Dollar Valuation</title>
            <link>https://lumigallerys.com/posts/note-9ce0b23b1c/</link>
            <pubDate>Sun, 26 Apr 2026 00:00:00 +0000</pubDate>
            <guid>https://lumigallerys.com/posts/note-9ce0b23b1c/</guid>
            <description>&lt;p&gt;In an office in North Beach, San Francisco, a programmer raises his hand in a meeting, not to discuss feature development, but to address a bug he just discovered. When he joined, the company gave him the title of &amp;ldquo;co-founder&amp;rdquo;—he is the 37th employee.&lt;/p&gt;&#xA;&lt;p&gt;This is not an exception. At AI programming company Cursor, &lt;strong&gt;the first 50 employees were all given the title of co-founder&lt;/strong&gt;. They spent a long time meticulously hiring the initial 10 people, but once on board, everyone is expected to think and act like founders. The office itself resembles a &amp;ldquo;public lounge and cafeteria of a university.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;This is the first layer of Cursor&amp;rsquo;s brand culture: &lt;strong&gt;redefining identity to flatten hierarchies and foster a sense of belonging&lt;/strong&gt;. It feels less like a corporation and more like an elite campus. Most employees are in their mid-twenties, they take off their shoes when entering the office, often work late into the night, and shower at the office, living just a few blocks away.&lt;/p&gt;&#xA;&lt;p&gt;CEO Michael Truell believes this system allows &amp;ldquo;every employee to be responsible for product direction.&amp;rdquo; The result is a record-setting growth in B2B SaaS, achieving &amp;ldquo;the fastest growth from zero to a billion dollars in annual revenue in just 17 months.&amp;rdquo;&lt;/p&gt;&#xA;&lt;h2 id=&#34;university-like-collaboration-driving-radical-iteration&#34;&gt;University-like Collaboration: Driving Radical Iteration&#xA;&lt;/h2&gt;&lt;p&gt;This &amp;ldquo;university-like&amp;rdquo; atmosphere has led to a complete transformation in work methods. 
Its collaborative model is unconventional:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Flat collaboration&lt;/strong&gt;: Teams have no strict reporting relationships, and employees self-assign tasks.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Meetings focus on bug fixes&lt;/strong&gt;: Rather than lengthy process reports.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;All-hands recruiting&lt;/strong&gt;: Employees recommend talent on the side, even scouting potential candidates from active Twitter users at night.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;Loose? Quite the opposite; it has resulted in astonishing decision-making and iteration speed. The core output of this culture is a straightforward internal guideline: &lt;strong&gt;&amp;ldquo;overthrow the product.&amp;rdquo;&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;The release of Cursor 3 (codenamed Glass) epitomizes this philosophy. It is not just a feature update but a paradigm shift built from the ground up. &lt;strong&gt;It completely restructured the IDE interface that has been in use for 40 years&lt;/strong&gt;: the traditional file tree display was replaced by an agent command input box; the conventional code editor was relegated to a secondary position, with the main interface becoming an agent management console.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 1&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;751px&#34; data-flex-grow=&#34;312&#34; height=&#34;232&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-9ce0b23b1c/img-c292cc76d9.jpeg&#34; width=&#34;726&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;This means that developers&amp;rsquo; core work has shifted from &amp;ldquo;writing code line by line&amp;rdquo; to &amp;ldquo;orchestrating agents and reviewing outputs.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;Why such radical changes? Because competitive pressure is closing in. 
Giants like OpenAI and Anthropic have launched similar products like Claude Code, aggressively competing for users with substantial subsidies. Cursor realized that its business model as an external large model &amp;ldquo;purchaser&amp;rdquo; was losing its moat.&lt;/p&gt;&#xA;&lt;p&gt;Thus, they completed a strategic shift from &amp;ldquo;auxiliary tools&amp;rdquo; to a &amp;ldquo;multi-agent operating system&amp;rdquo; in just six months.&lt;/p&gt;&#xA;&lt;h2 id=&#34;young-teams-pragmatism-balancing-innovation-speed-with-commercial-sustainability&#34;&gt;Young Team&amp;rsquo;s Pragmatism: Balancing Innovation Speed with Commercial Sustainability&#xA;&lt;/h2&gt;&lt;p&gt;Founded by four MIT dropouts born in the 2000s, the team makes technical decisions that reflect generational traits and pragmatism.&lt;/p&gt;&#xA;&lt;p&gt;They did not pour money blindly into a &amp;ldquo;GPU arms race&amp;rdquo; but took a flexible approach:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Early stage&lt;/strong&gt;: Directly utilized top external models like Claude and GPT to quickly validate product-market fit.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Later stage&lt;/strong&gt;: Initiated in-house development, fine-tuning and applying reinforcement learning to powerful Chinese open-source models (like Kimi) to create the Composer series of proprietary models.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Results&lt;/strong&gt;: Their in-house developed Composer 2 model scored 61.3 in internal testing, even surpassing Anthropic&amp;rsquo;s top model Claude Opus 4.6 (58.2 points).&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;This pragmatism is also evident in their growth strategy: accumulating millions of developer users through free tools to build reputation and network effects; then achieving profitability through enterprise versions (which cover 64% of Fortune 1000 companies), whose revenue supports strategic losses in the personal-user business to sustain growth.&lt;/p&gt;&#xA;&lt;h2 
id=&#34;halo-and-shadows-cultural-challenges-amidst-rapid-growth&#34;&gt;Halo and Shadows: Cultural Challenges Amidst Rapid Growth&#xA;&lt;/h2&gt;&lt;p&gt;This extreme culture has shaped the myth of Cursor as &amp;ldquo;the fastest-growing startup in history,&amp;rdquo; but it also brings unique challenges.&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Positive feedback&lt;/strong&gt;: The developer community praises its iteration speed, noting &amp;ldquo;substantial updates almost every two weeks&amp;rdquo; and the smooth multi-file editing experience.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Negative feedback&lt;/strong&gt;: The aggressive iterations have raised concerns about stability. Some enterprise users have turned to competitors due to compatibility and speed issues, and discussions of &amp;ldquo;Cursor is dead&amp;rdquo; have emerged in the community. A survey showed that 46% of developers listed Claude Code as their favorite tool, with Cursor in second place at 19%, indicating fierce competition.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;An investor once pointed out a paradox: &amp;ldquo;Cursor&amp;rsquo;s data shows no signs of anything other than complete success,&amp;rdquo; yet the most sensitive group of developers in the industry has begun to express collective unease. This reveals a deeper characteristic of its culture: &lt;strong&gt;it serves the efficiency of &amp;lsquo;disruption&amp;rsquo; and &amp;lsquo;growth,&amp;rsquo; which can sometimes clash with the enterprise-level demands for &amp;lsquo;stability&amp;rsquo; and &amp;lsquo;predictability.&amp;rsquo;&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;The story of Cursor goes beyond the myth of four dropouts born in the 2000s creating a $60 billion valuation. 
It showcases a new organizational and product philosophy driven by a young team: &lt;strong&gt;stimulating extreme autonomy through identity recognition, agile responses to bureaucracy with a &amp;lsquo;university-like&amp;rsquo; approach, and facing sudden shifts in technological paradigms with the courage to &amp;lsquo;overthrow oneself.&amp;rsquo;&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;Its culture is both the engine of its rocket-like growth and the most tension-filled challenge it must navigate in the future.&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>Product Layering Strategy in AI: Insights from ChatGPT Images 2.0</title>
            <link>https://lumigallerys.com/posts/note-55014075d0/</link>
            <pubDate>Thu, 23 Apr 2026 00:00:00 +0000</pubDate>
            <guid>https://lumigallerys.com/posts/note-55014075d0/</guid>
            <description>&lt;h2 id=&#34;introduction&#34;&gt;Introduction&#xA;&lt;/h2&gt;&lt;p&gt;The evolution of image generation from a mere &amp;ldquo;image tool&amp;rdquo; to a &amp;ldquo;visual thinking partner&amp;rdquo; reveals a layered design logic that all AI product managers can learn from.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 2&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;514px&#34; data-flex-grow=&#34;214&#34; height=&#34;420&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-55014075d0/img-6447032f7c.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-55014075d0/img-6447032f7c_hu_cce0834975e83aa6.jpeg 800w, https://lumigallerys.com/posts/note-55014075d0/img-6447032f7c.jpeg 900w&#34; width=&#34;900&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Recently, the product community has been discussing a phenomenon: why some AI image generation products go unused even when free, while others charge $200/month and remain in high demand?&lt;/p&gt;&#xA;&lt;p&gt;On April 21, OpenAI&amp;rsquo;s release of ChatGPT Images 2.0 provided an interesting answer. 
Instead of competing on &amp;ldquo;image quality,&amp;rdquo; it innovated structurally at the product level—injecting reasoning capabilities into image generation, allowing the AI to &amp;ldquo;think&amp;rdquo; before generating images, using this &amp;ldquo;thinking&amp;rdquo; ability as a core lever for paid conversion.&lt;/p&gt;&#xA;&lt;p&gt;This article aims to dissect key decisions regarding user segmentation, pricing design, and workflow integration in AI products, hoping to inspire those working on AI products.&lt;/p&gt;&#xA;&lt;h2 id=&#34;the-pricing-dilemma-in-ai-products&#34;&gt;The Pricing Dilemma in AI Products&#xA;&lt;/h2&gt;&lt;p&gt;AI product managers often face a common dilemma—users perceive AI capabilities as &amp;ldquo;good enough.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;For instance, in image generation, telling users that &amp;ldquo;our model improved by 15% on the FID metric&amp;rdquo; usually elicits a response of &amp;ldquo;oh, it does look a bit clearer.&amp;rdquo; A technology upgrade that took three months to optimize may only be seen as a &amp;ldquo;slight improvement&amp;rdquo; by users. Moreover, as competitors also enhance their offerings, users&amp;rsquo; perception of quality differences becomes further dulled.&lt;/p&gt;&#xA;&lt;p&gt;This leads to a pricing dilemma: if the core selling point of a product is &amp;ldquo;better quality,&amp;rdquo; users find it hard to pay for &amp;ldquo;a bit better.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;ChatGPT Images 2.0&amp;rsquo;s approach is noteworthy. It did not focus on &amp;ldquo;looking better&amp;rdquo; but created a new capability dimension—&amp;ldquo;thinking image generation.&amp;rdquo; This difference is not a matter of degree (a bit better vs. much better) but of category (can do vs. 
cannot do).&lt;/p&gt;&#xA;&lt;p&gt;Specifically, Images 2.0 offers two modes:&lt;/p&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;&lt;strong&gt;Instant Mode&lt;/strong&gt;: Open to all users, focusing on &amp;ldquo;better basic image generation&amp;rdquo;—more accurate text rendering, better instruction adherence, and support for more languages. This is an upgrade of &amp;ldquo;doing better.&amp;rdquo;&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Thinking Mode&lt;/strong&gt;: Available only to paid users, emphasizing &amp;ldquo;thinking before generating&amp;rdquo;—the AI first searches for reference information, plans composition logic, generates multiple stylistically consistent images, and finally checks spelling and positioning. This is an upgrade of &amp;ldquo;doing new things.&amp;rdquo;&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;p&gt;&lt;strong&gt;The product design insight here is that the paid conversion of AI products is more about creating new capabilities as &amp;ldquo;category differences&amp;rdquo; rather than optimizing existing capabilities as &amp;ldquo;degree differences.&amp;rdquo;&lt;/strong&gt; Users are unlikely to pay for &amp;ldquo;a bit better&amp;rdquo; but will pay for &amp;ldquo;what I couldn&amp;rsquo;t do before, now I can.&amp;rdquo;&lt;/p&gt;&#xA;&lt;h2 id=&#34;key-layered-design-pricing-based-on-substituted-labor-costs&#34;&gt;Key Layered Design: Pricing Based on &amp;ldquo;Substituted Labor Costs&amp;rdquo;&#xA;&lt;/h2&gt;&lt;p&gt;Further dissecting the layered logic of Images 2.0 reveals a deeper design principle.&lt;/p&gt;&#xA;&lt;p&gt;What does Instant Mode replace? It substitutes the behavior of users searching for images on search engines or downloading images from free material sites. 
This behavior has a time cost of about 5-10 minutes, making its replacement value low, so offering it for free is reasonable—using it to cultivate users&amp;rsquo; habit of &amp;ldquo;opening ChatGPT whenever they need an image.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;What does Thinking Mode replace? It replaces the behavior of users spending 30 minutes on Canva to create an infographic or waiting two hours for a designer to deliver a draft. This behavior has a time cost ranging from 30 minutes to several hours, making its replacement value much higher, thus justifying it as a paid feature.&lt;/p&gt;&#xA;&lt;p&gt;In other words, OpenAI&amp;rsquo;s pricing anchor is not based on &amp;ldquo;Thinking Mode consuming more computing power, hence more expensive,&amp;rdquo; but rather on &amp;ldquo;Thinking Mode saving you more labor costs, hence more valuable.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;An important insight here is that &lt;strong&gt;AI product pricing should not anchor on the cost side (how much computing power I consumed) but on the value side (how much time and labor costs I saved for users).&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;I have organized this thought into a simple layered decision framework for reference:&lt;/p&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;&lt;strong&gt;Identify users&amp;rsquo; current alternatives.&lt;/strong&gt; How would users complete this task without your product? What tools would they use? 
How much time would it take?&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Categorize features based on the cost of alternatives.&lt;/strong&gt; Capabilities with low replacement costs (searching for images → free image generation) should be placed in the free layer for user acquisition; capabilities with high replacement costs (hiring a designer → AI auto-layout) should be placed in the paid layer for conversion.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Ensure that the capabilities in the paid layer represent &amp;ldquo;category differences&amp;rdquo; rather than &amp;ldquo;degree differences.&amp;rdquo;&lt;/strong&gt; Users are insensitive to &amp;ldquo;20% faster&amp;rdquo; but very sensitive to &amp;ldquo;what I couldn&amp;rsquo;t do before, now I can.&amp;rdquo;&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Use data from the free layer to validate assumptions about the paid layer&amp;rsquo;s demand.&lt;/strong&gt; If free users frequently attempt a certain type of complex task but do not achieve satisfactory results, it indicates that this type of task can serve as a selling point for the paid layer.&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;h2 id=&#34;new-interaction-design-challenge-what-users-think-when-ai-needs-to-think&#34;&gt;New Interaction Design Challenge: What Users Think When AI Needs to &amp;ldquo;Think&amp;rdquo;&#xA;&lt;/h2&gt;&lt;p&gt;The Thinking Mode introduces a new interaction challenge: the generation time has increased.&lt;/p&gt;&#xA;&lt;p&gt;Previously, AI image generation was an &amp;ldquo;output in seconds&amp;rdquo; experience—input a prompt, wait 3-5 seconds, and the image appears. 
However, Thinking Mode requires executing a multi-step process of searching, planning, generating, and verifying, which may take several minutes for complex tasks.&lt;/p&gt;&#xA;&lt;p&gt;A few minutes may not seem long, but in users&amp;rsquo; psychological perception, it falls into a dangerous zone.&lt;/p&gt;&#xA;&lt;p&gt;We all know the &amp;ldquo;3-second rule&amp;rdquo; in product design—if a webpage takes more than 3 seconds to load, the user dropout rate skyrockets. However, this rule applies to scenarios of &amp;ldquo;waiting for an unknown result.&amp;rdquo; If users can see progress and understand &amp;ldquo;what is happening,&amp;rdquo; their patience during the wait will significantly increase.&lt;/p&gt;&#xA;&lt;p&gt;This is a core interaction proposition for agent-type AI products—&lt;strong&gt;Thinking ≠ Waiting; you need to make users perceive that &amp;ldquo;AI is thinking&amp;rdquo; rather than &amp;ldquo;AI is stuck.&amp;rdquo;&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;How to achieve this? I have summarized three effective strategies from several well-executed products:&lt;/p&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;&lt;strong&gt;Show the thinking process.&lt;/strong&gt; Similar to what ChatGPT&amp;rsquo;s reasoning model is already doing—display the AI&amp;rsquo;s thought chain, allowing users to see &amp;ldquo;searching for reference materials,&amp;rdquo; &amp;ldquo;planning layout,&amp;rdquo; and &amp;ldquo;checking text.&amp;rdquo; Users see a transparent workflow instead of a spinning loading animation.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Provide incremental outputs.&lt;/strong&gt; Don&amp;rsquo;t make users wait until the final result to see anything. Show a draft composition (within seconds), then gradually fill in details (in tens of seconds), and finally deliver the complete product (in minutes). 
Users can see progress at each stage, significantly reducing anxiety.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Allow user intervention.&lt;/strong&gt; Permit users to intervene during the thinking process—for example, if the AI plans a three-column layout, users can say &amp;ldquo;I want two columns&amp;rdquo; at this stage rather than waiting for the final product to come out and then starting over. This not only reduces waiting anxiety but also effectively enhances the quality of generation.&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;p&gt;Another detail worth noting is that some users in testing found that the iterative editing in Thinking Mode would yield diminishing returns after 1-2 rounds—more edits led to worse results, ultimately forcing users to start a new session from scratch. A workaround is to allow users to drag the current image into a new dialogue to restart.&lt;/p&gt;&#xA;&lt;p&gt;This suggests a problem of &amp;ldquo;context pollution&amp;rdquo; in the reasoning chain. For product managers, a feasible product strategy is to add a button in the editing interface that allows users to &amp;ldquo;restart based on the current image,&amp;rdquo; transforming technical limitations into a natural interaction process, thereby reducing user frustration.&lt;/p&gt;&#xA;&lt;h2 id=&#34;ecological-binding-strategy-image-generation-as-a-layer-of-stickiness&#34;&gt;Ecological Binding Strategy: Image Generation as a Layer of Stickiness&#xA;&lt;/h2&gt;&lt;p&gt;ChatGPT Images 2.0 also includes a strategic move that is easy to overlook—it has been directly embedded in Codex (OpenAI&amp;rsquo;s coding tool), allowing users to generate images in the coding environment without needing a separate API key with their existing ChatGPT subscription.&lt;/p&gt;&#xA;&lt;p&gt;This is not about creating an &amp;ldquo;image generation product.&amp;rdquo; Instead, it uses image generation as a &amp;ldquo;stickiness layer&amp;rdquo; to enhance user retention across the entire Codex 
ecosystem.&lt;/p&gt;&#xA;&lt;p&gt;Over the past year, we have seen OpenAI continuously add capabilities to Codex: coding → computer control → image generation → memory → browsing. With each added layer, the cost of user migration increases slightly. When users complete coding, image generation, document writing, and prototype design all within the same tool, the cost of switching to competitors becomes very high.&lt;/p&gt;&#xA;&lt;p&gt;At the same time, OpenAI announced the complete discontinuation of DALL-E 2 and DALL-E 3 on May 12. This is not only a technological upgrade but also forces existing developers to migrate to the new API system of gpt-image-2. The new API shifts from &amp;ldquo;per image billing&amp;rdquo; to &amp;ldquo;per token billing,&amp;rdquo; meaning that once developers migrate, they need to restructure their cost models, further increasing switching costs.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;The product strategy insight here is that the competitiveness of AI products lies not in how strong a single function is, but in how deeply multiple functions combine to build workflows.&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;Users will not stay because your image generation is 10% better than others, but they will stay because your image generation + coding + browsing + memory form a complete workflow that they cannot leave. 
This logic is similar to the strategy of WeChat mini-programs—individual mini-programs may not be strong enough, but when your payment, social, content, and services are all within the WeChat ecosystem, it becomes hard to leave.&lt;/p&gt;&#xA;&lt;p&gt;For product managers working on AI products, I have a specific suggestion: &lt;strong&gt;do not plan each AI capability as an independent function; instead, think about how these capabilities can form a &amp;ldquo;workflow loop.&amp;rdquo;&lt;/strong&gt; The more complete the loop, the higher the user migration cost and the deeper the product barrier.&lt;/p&gt;&#xA;&lt;h2 id=&#34;competitive-insights-from-whose-images-look-better-to-who-is-more-deeply-integrated-into-workflows&#34;&gt;Competitive Insights: From &amp;ldquo;Whose Images Look Better&amp;rdquo; to &amp;ldquo;Who is More Deeply Integrated into Workflows&amp;rdquo;&#xA;&lt;/h2&gt;&lt;p&gt;Finally, I want to discuss the changing competitive landscape, as the logic behind this applies to all AI products.&lt;/p&gt;&#xA;&lt;p&gt;Currently, the AI image generation sector has formed three distinct competitive routes:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;OpenAI has chosen &amp;ldquo;reasoning capabilities + ecological binding.&amp;rdquo; Its core differentiation is not image quality (though it is good), but the completeness of the workflow brought by the Thinking Mode and the deep integration into the Codex ecosystem.&lt;/li&gt;&#xA;&lt;li&gt;Google (Gemini / Nano Banana) has opted for &amp;ldquo;price advantage + ecological binding.&amp;rdquo; At the same resolution, its cost is about one-third that of OpenAI, deeply integrating with Google Workspace. 1 billion images were generated in 53 days, primarily relying on low prices and Google’s vast user base.&lt;/li&gt;&#xA;&lt;li&gt;The open-source camp (Stable Diffusion, Flux, etc.) 
has chosen &amp;ldquo;freedom + zero cost.&amp;rdquo; The quality of single images continues to catch up with closed-source models, but in terms of multi-image consistency, reasoning validation, and workflow integration, they struggle to compete in the short term.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;These three routes reflect a general pattern of AI product competition evolution—&lt;strong&gt;the first stage competes on quality (whose model is better), the second stage competes on price (who is cheaper), and the third stage competes on ecosystem (who is more deeply integrated into workflows).&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;We have already fully traversed these three stages in the LLM text field. Now, image generation has also reached the third stage.&lt;/p&gt;&#xA;&lt;p&gt;For product managers, it is crucial to recognize which stage your product is in. If you are still in the first stage, focusing on model quality is correct; if you have already entered the third stage, piling on quality yields diminishing returns, and you should focus your efforts on workflow integration and ecosystem building.&lt;/p&gt;&#xA;&lt;h2 id=&#34;conclusion&#34;&gt;Conclusion&#xA;&lt;/h2&gt;&lt;p&gt;Returning to the initial question: why do some AI products go unused even when free, while others charge $200/month and remain in high demand?&lt;/p&gt;&#xA;&lt;p&gt;ChatGPT Images 2.0 provides the answer: &lt;strong&gt;users pay for &amp;ldquo;new capabilities&amp;rdquo; rather than &amp;ldquo;better performance&amp;rdquo;; they pay for &amp;ldquo;saved labor costs&amp;rdquo; rather than &amp;ldquo;consumed computing power&amp;rdquo;; and they are locked in by a &amp;ldquo;complete workflow&amp;rdquo; rather than by the &amp;ldquo;quality of a single function.&amp;rdquo;&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;These three principles apply to nearly all AI product designs.&lt;/p&gt;&#xA;&lt;p&gt;If you are working on the paid design of an AI product, consider asking yourself three 
questions:&lt;/p&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;Is the difference between my paid and free features a &amp;ldquo;degree difference&amp;rdquo; or a &amp;ldquo;category difference&amp;rdquo;?&lt;/li&gt;&#xA;&lt;li&gt;Is my pricing anchor on the cost side (computing power consumption) or the value side (substituted labor costs)?&lt;/li&gt;&#xA;&lt;li&gt;Do the various AI capabilities in my product form a workflow loop?&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;p&gt;Clarifying these three questions will also clarify the path to paid conversion.&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>Tips for Using Anthropic&#39;s Claude Opus 4.7 Effectively</title>
            <link>https://lumigallerys.com/posts/note-f61c2c97ed/</link>
            <pubDate>Fri, 17 Apr 2026 00:00:00 +0000</pubDate>
            <guid>https://lumigallerys.com/posts/note-f61c2c97ed/</guid>
            <description>&lt;p&gt;&lt;img alt=&#34;Image 1&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;563px&#34; data-flex-grow=&#34;234&#34; height=&#34;383&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-f61c2c97ed/img-0d0f3afe20.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-f61c2c97ed/img-0d0f3afe20_hu_4a7f0368619f69d2.jpeg 800w, https://lumigallerys.com/posts/note-f61c2c97ed/img-0d0f3afe20.jpeg 900w&#34; width=&#34;900&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;AI Application Trends&lt;/strong&gt;&lt;br&gt;&#xA;&lt;strong&gt;Compiled by: Bi Weihao&lt;/strong&gt;&lt;br&gt;&#xA;&lt;strong&gt;Edited by: Mo Ying&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;On April 17, it was reported that &lt;strong&gt;Anthropic released the next generation model, Claude Opus 4.7.&lt;/strong&gt; Boris Cherny, the creator of Claude Code, shared his tips after testing the new model on social media.&lt;/p&gt;&#xA;&lt;p&gt;According to Boris, Opus 4.7 is smarter, more proactive, and more precise than version 4.6, and even he took a few days to learn how to use the new model efficiently.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 2&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;330px&#34; data-flex-grow=&#34;137&#34; height=&#34;756&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-f61c2c97ed/img-c9723f4da9.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-f61c2c97ed/img-c9723f4da9_hu_d2fbfd79b4ef6681.jpeg 800w, https://lumigallerys.com/posts/note-f61c2c97ed/img-c9723f4da9.jpeg 1041w&#34; width=&#34;1041&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Boris first published a blog post and then shared usage tips on Twitter just two hours later, showing his 
enthusiasm.&lt;/p&gt;&#xA;&lt;p&gt;In his blog, he addressed a concern many users have: &lt;strong&gt;token usage.&lt;/strong&gt; In version 4.7, both the tokenizer and the Opus model&amp;rsquo;s inclination toward deep thinking affect token consumption, requiring adjustments in Claude Code for optimal results.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 3&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;960px&#34; data-flex-grow=&#34;400&#34; height=&#34;254&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-f61c2c97ed/img-8eb1a6fc69.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-f61c2c97ed/img-8eb1a6fc69_hu_9cfe1cd5fc40dcee.jpeg 800w, https://lumigallerys.com/posts/note-f61c2c97ed/img-8eb1a6fc69.jpeg 1017w&#34; width=&#34;1017&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Boris provided several tips to enhance efficiency, result quality, and user experience. Here’s a summary of his best practices from both the blog and his tweets.&lt;/p&gt;&#xA;&lt;h2 id=&#34;1-treat-claude-as-an-outstanding-engineer-and-enable-automatic-mode&#34;&gt;&lt;strong&gt;1. Treat Claude as an Outstanding Engineer and Enable Automatic Mode&lt;/strong&gt;&#xA;&lt;/h2&gt;&lt;p&gt;Boris admitted that while Opus 4.7 is more capable, it tends to engage in deep thinking during later stages of conversations, increasing token consumption. Therefore, constructing efficient interactive dialogues is key to saving tokens. Users should treat Claude as a capable engineer that can work independently without step-by-step commands.&lt;/p&gt;&#xA;&lt;p&gt;Specifically, users should clearly articulate their task requirements in the first conversation, including task intent, constraints, acceptance criteria, and relevant file paths. 
This approach allows Opus 4.7 to better handle complex tasks.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 4&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;593px&#34; data-flex-grow=&#34;247&#34; height=&#34;471&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-f61c2c97ed/img-581b4af3e5.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-f61c2c97ed/img-581b4af3e5_hu_2c78cc55c9cba848.jpeg 800w, https://lumigallerys.com/posts/note-f61c2c97ed/img-581b4af3e5.jpeg 1164w&#34; width=&#34;1164&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Boris stated that inserting vague prompts in multi-turn dialogues often reduces token efficiency and may affect output quality. Each interaction increases token consumption, so it&amp;rsquo;s best to consolidate questions to minimize interactions.&lt;/p&gt;&#xA;&lt;p&gt;In addition to user requests, Claude Code has an automatic mode for assistance. This mode allows Claude Code to handle tasks entirely autonomously without user intervention, even bypassing permission confirmations.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 5&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;436px&#34; data-flex-grow=&#34;181&#34; height=&#34;318&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-f61c2c97ed/img-e128ad7938.jpeg&#34; width=&#34;578&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Previously, developers often needed to supervise Claude Code for complex tasks. To simplify this, they could use the --dangerously-skip-permissions flag to bypass permission confirmations, which is risky. 
With the launch of Opus 4.7, the Claude Code team found it adept at handling complex and time-consuming tasks, leading to the introduction of automatic mode for better performance.&lt;/p&gt;&#xA;&lt;p&gt;Once automatic mode is enabled, all permission prompts are redirected to a dedicated classification system that determines which permissions can be safely granted, allowing commands to execute without user intervention. If users clearly state their task requirements in the first conversation, this mode can significantly reduce task execution time, allowing users to address other issues in new sessions.&lt;/p&gt;&#xA;&lt;p&gt;For users concerned about the security of automatic mode, Boris suggested using the /fewer-permission-prompts skill, which reviews history and aggregates frequently occurring permission prompts that are safe to allow without confirmation.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 6&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;211px&#34; data-flex-grow=&#34;88&#34; height=&#34;1139&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-f61c2c97ed/img-963815aa34.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-f61c2c97ed/img-963815aa34_hu_27eabcaea72e4153.jpeg 800w, https://lumigallerys.com/posts/note-f61c2c97ed/img-963815aa34.jpeg 1006w&#34; width=&#34;1006&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;2-use-the-recap-feature-for-seamless-work-transition&#34;&gt;&lt;strong&gt;2. Use the Recap Feature for Seamless Work Transition&lt;/strong&gt;&#xA;&lt;/h2&gt;&lt;p&gt;The Recap feature was launched a few days ago and is specifically designed for Opus 4.7. 
Since Opus 4.7 excels at handling complex, lengthy tasks, users may find themselves unsure of task progress after taking breaks.&lt;/p&gt;&#xA;&lt;p&gt;This Recap feature allows users to step away from their screens and provides a summary of previous work and next steps upon return, helping users track task progress and verify direction.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 7&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;324px&#34; data-flex-grow=&#34;135&#34; height=&#34;760&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-f61c2c97ed/img-0d4989e9eb.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-f61c2c97ed/img-0d4989e9eb_hu_40e92eda09c8ed53.jpeg 800w, https://lumigallerys.com/posts/note-f61c2c97ed/img-0d4989e9eb.jpeg 1027w&#34; width=&#34;1027&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Users can also utilize this feature in complex dialogues to summarize previous execution processes and validate next actions.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 8&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;1197px&#34; data-flex-grow=&#34;499&#34; height=&#34;215&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-f61c2c97ed/img-fd21c0f6fc.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-f61c2c97ed/img-fd21c0f6fc_hu_8679684f9c5eff56.jpeg 800w, https://lumigallerys.com/posts/note-f61c2c97ed/img-fd21c0f6fc.jpeg 1073w&#34; width=&#34;1073&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;3-set-your-preferred-reasoning-level-for-flexible-thinking-speed&#34;&gt;&lt;strong&gt;3. Set Your Preferred Reasoning Level for Flexible Thinking Speed&lt;/strong&gt;&#xA;&lt;/h2&gt;&lt;p&gt;Opus 4.7 no longer has a preset thinking scale; it adopts an adaptive thinking mode. 
The model decides when to engage in deeper thinking based on context, responding quickly to queries and skipping unnecessary thought processes, thus concentrating tokens on more useful tasks.&lt;/p&gt;&#xA;&lt;p&gt;Users can manually control the speed of thinking. If they want more in-depth analysis, they can prompt: &amp;ldquo;Please think carefully and analyze step by step before answering; this question is more complex than it seems.&amp;rdquo; Conversely, to prioritize quick responses, they can prompt: &amp;ldquo;Prioritize quick responses rather than deep thinking. If in doubt, reply directly.&amp;rdquo; Reducing thought can save tokens but may affect accuracy, so it should be used cautiously for complex tasks.&lt;/p&gt;&#xA;&lt;p&gt;Users can also adjust the reasoning level to change the depth of thought. Opus 4.7 categorizes reasoning levels into five tiers, introducing an xhigh (extra high) level as the default.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 9&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;283px&#34; data-flex-grow=&#34;118&#34; height=&#34;961&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-f61c2c97ed/img-321379f67f.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-f61c2c97ed/img-321379f67f_hu_53c90365d03f8936.jpeg 800w, https://lumigallerys.com/posts/note-f61c2c97ed/img-321379f67f.jpeg 1136w&#34; width=&#34;1136&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;The xhigh level balances complex task handling, reasoning ability, and token consumption. 
Boris mentioned he typically uses the xhigh level and switches to max only for extremely complex tasks; the max setting applies only to the current session and reverts to the default level in new sessions, avoiding excessive token consumption.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 10&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;274px&#34; data-flex-grow=&#34;114&#34; height=&#34;879&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-f61c2c97ed/img-49a6cd0982.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-f61c2c97ed/img-49a6cd0982_hu_839c5f529295c21.jpeg 800w, https://lumigallerys.com/posts/note-f61c2c97ed/img-49a6cd0982.jpeg 1006w&#34; width=&#34;1006&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;4-provide-verification-methods-and-monitor-claude&#34;&gt;&lt;strong&gt;4. Provide Verification Methods and Monitor Claude&amp;rsquo;s Results&lt;/strong&gt;&#xA;&lt;/h2&gt;&lt;p&gt;Boris has great trust in Opus 4.7, believing it can execute commands and make correct modifications. Users only need to check whether the final results meet their requirements, which is why a focused mode was introduced in the CLI interface.&lt;/p&gt;&#xA;&lt;p&gt;In this mode, all intermediate steps are hidden, allowing users to focus solely on the results. 
However, if users do not have sufficient trust in Opus 4.7, it is not recommended to enable this mode.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 11&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;234px&#34; data-flex-grow=&#34;97&#34; height=&#34;1045&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-f61c2c97ed/img-f899a13380.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-f61c2c97ed/img-f899a13380_hu_f3a429ecebd5e266.jpeg 800w, https://lumigallerys.com/posts/note-f61c2c97ed/img-f899a13380.jpeg 1020w&#34; width=&#34;1020&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;To ensure result quality, Boris suggests giving Claude a way to verify its work results.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 12&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;227px&#34; data-flex-grow=&#34;94&#34; height=&#34;1047&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-f61c2c97ed/img-5f26a7861f.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-f61c2c97ed/img-5f26a7861f_hu_47926a6f2060422f.jpeg 800w, https://lumigallerys.com/posts/note-f61c2c97ed/img-5f26a7861f.jpeg 992w&#34; width=&#34;992&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Boris believes this can improve efficiency by 2 to 3 times, which is especially important for Opus 4.7. 
He also recommends suitable verification methods for different tasks, such as enabling server testing for backend development and teaching Claude to control browsers for frontend tasks.&lt;/p&gt;&#xA;&lt;h2 id=&#34;conclusion-a-stronger-model-requires-advanced-usage-techniques&#34;&gt;&lt;strong&gt;Conclusion: A Stronger Model Requires Advanced Usage Techniques&lt;/strong&gt;&#xA;&lt;/h2&gt;&lt;p&gt;Transitioning from &amp;ldquo;feeding instructions one by one&amp;rdquo; to &amp;ldquo;confidently delegating tasks&amp;rdquo; and from &amp;ldquo;watching step by step&amp;rdquo; to &amp;ldquo;hiding the process entirely,&amp;rdquo; Boris&amp;rsquo;s usage tips stem from confidence in Opus 4.7&amp;rsquo;s powerful capabilities.&lt;/p&gt;&#xA;&lt;p&gt;According to data released by Anthropic, the Opus 4.7 model shows solid improvements across multiple benchmark tests. A smarter model requires more powerful tools and advanced usage techniques to fully unleash its potential.&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>Anthropic&#39;s Advisor Tool Revolutionizes AI Task Execution</title>
            <link>https://lumigallerys.com/posts/note-12a7846ff1/</link>
            <pubDate>Fri, 10 Apr 2026 00:00:00 +0000</pubDate>
            <guid>https://lumigallerys.com/posts/note-12a7846ff1/</guid>
            <description>&lt;h2 id=&#34;introduction&#34;&gt;Introduction&#xA;&lt;/h2&gt;&lt;p&gt;Anthropic has launched the Advisor Tool, which fundamentally changes the logic of AI task execution. Instead of the traditional model where larger models direct smaller ones, the Advisor strategy allows smaller models to consult larger models during execution. This innovation enables Sonnet/Haiku to seek guidance from Opus at critical decision points, achieving intelligence close to Opus while only incurring the costs of a smaller model.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 1&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;540px&#34; data-flex-grow=&#34;225&#34; height=&#34;400&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-12a7846ff1/img-369ec8c6a9.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-12a7846ff1/img-369ec8c6a9_hu_836e45bb8fd80268.jpeg 800w, https://lumigallerys.com/posts/note-12a7846ff1/img-369ec8c6a9.jpeg 900w&#34; width=&#34;900&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;The Advisor Tool allows Sonnet or Haiku to automatically consult Opus when faced with challenging decisions, continuing their tasks after receiving guidance. This strategy is referred to as the &lt;strong&gt;Advisor Strategy&lt;/strong&gt;.&lt;/p&gt;&#xA;&lt;h2 id=&#34;reverse-sub-agent-model&#34;&gt;Reverse Sub-Agent Model&#xA;&lt;/h2&gt;&lt;p&gt;The common multi-agent model in the industry positions larger models as commanders, delegating tasks to smaller models. The Advisor strategy reverses this approach.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Sonnet (or Haiku) as Executor&lt;/strong&gt; executes tasks throughout, calling tools, reading results, and iterating. When it encounters a decision point where its judgment is insufficient, it consults Opus as an Advisor. 
Opus receives the shared context and returns a plan, correction, or stop signal, after which Sonnet continues execution.&lt;/p&gt;&#xA;&lt;p&gt;The Advisor does not call tools or produce user-facing outputs; it only provides guidance. Advanced reasoning intervenes only when the Executor needs it, and the vast majority of tokens are billed at the Executor&amp;rsquo;s rate.&lt;/p&gt;&#xA;&lt;p&gt;This design eliminates the need for task decomposition logic, worker pools, and orchestration frameworks. The Executor determines when to upgrade, and the entire process is completed in a single API call.&lt;/p&gt;&#xA;&lt;h2 id=&#34;performance-data&#34;&gt;Performance Data&#xA;&lt;/h2&gt;&lt;p&gt;Let&amp;rsquo;s examine the combination of Sonnet + Opus Advisor.&lt;/p&gt;&#xA;&lt;h3 id=&#34;swe-bench-multilingual&#34;&gt;SWE-bench Multilingual&#xA;&lt;/h3&gt;&lt;p&gt;Sonnet + Advisor improved performance by &lt;strong&gt;2.7 percentage points&lt;/strong&gt; compared to Sonnet running solo, while reducing the cost per task by &lt;strong&gt;11.9%&lt;/strong&gt;. 
The cost reduction is attributed to the Advisor&amp;rsquo;s intervention, allowing the Executor to avoid unnecessary detours and reduce total token consumption.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 2&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;426px&#34; data-flex-grow=&#34;177&#34; height=&#34;608&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-12a7846ff1/img-b95de843d3.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-12a7846ff1/img-b95de843d3_hu_f28c6ac963d99568.jpeg 800w, https://lumigallerys.com/posts/note-12a7846ff1/img-b95de843d3.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;h3 id=&#34;browsecomp-and-terminal-bench-20&#34;&gt;BrowseComp and Terminal-Bench 2.0&#xA;&lt;/h3&gt;&lt;p&gt;In BrowseComp and Terminal-Bench 2.0, Sonnet + Advisor also outperformed Sonnet running solo, with lower costs per task.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 3&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;426px&#34; data-flex-grow=&#34;177&#34; height=&#34;608&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-12a7846ff1/img-1e99dd97df.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-12a7846ff1/img-1e99dd97df_hu_f1e1df3ab4f1e0c9.jpeg 800w, https://lumigallerys.com/posts/note-12a7846ff1/img-1e99dd97df.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Next, let&amp;rsquo;s look at the combination of Haiku + Opus Advisor, which is even more interesting.&lt;/p&gt;&#xA;&lt;p&gt;In BrowseComp, Haiku + Advisor scored &lt;strong&gt;41.2%&lt;/strong&gt;, more than double Haiku running solo (19.7%). 
Compared to Sonnet running solo, the score is 29% lower, but the cost is reduced by &lt;strong&gt;85%&lt;/strong&gt;.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 4&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;426px&#34; data-flex-grow=&#34;177&#34; height=&#34;608&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-12a7846ff1/img-14cd7e18b1.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-12a7846ff1/img-14cd7e18b1_hu_3b2ac1fa602352a5.jpeg 800w, https://lumigallerys.com/posts/note-12a7846ff1/img-14cd7e18b1.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;For high-throughput scenarios that require balancing intelligence and cost, this combination is very attractive. It achieves results close to Sonnet&amp;rsquo;s level at Haiku&amp;rsquo;s price.&lt;/p&gt;&#xA;&lt;h2 id=&#34;how-to-use&#34;&gt;How to Use&#xA;&lt;/h2&gt;&lt;p&gt;From an API perspective, it&amp;rsquo;s very simple. Add an advisor_20260301 type tool to the tools array in the Messages API request, specify the Advisor model as Opus, and set a max_uses limit to control how many times the Advisor can be consulted per request.&lt;/p&gt;&#xA;&lt;p&gt;The entire model handoff occurs within a single /v1/messages request, eliminating the need for additional network round trips and context management. The Executor decides when to call the Advisor, and Anthropic routes the selected context to the Advisor model, allowing the Executor to continue executing after receiving the plan.&lt;/p&gt;&#xA;&lt;p&gt;The Advisor&amp;rsquo;s tokens are billed at the Advisor model&amp;rsquo;s rate (Opus at $5/$25), while the Executor&amp;rsquo;s tokens are billed at the Executor model&amp;rsquo;s rate (Sonnet at $3/$15 or Haiku at $1/$5). 
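&lt;/p&gt;&#xA;&lt;p&gt;As a concrete illustration, the request body described above might look like the following minimal Python sketch. The tool type advisor_20260301 and the max_uses field follow the article&amp;rsquo;s description; the model identifiers and task text are illustrative placeholders, and the actual API may differ.&lt;/p&gt;

```python
# Hypothetical sketch of a Messages API request body with an Advisor tool,
# based only on the fields described in the article; not an official example.
import json

request = {
    "model": "claude-sonnet",            # Executor: runs the task end to end
    "max_tokens": 4096,
    "tools": [
        {
            "type": "advisor_20260301",  # Advisor tool declaration
            "model": "claude-opus",      # larger model consulted on demand
            "max_uses": 3,               # cap on Advisor consultations per request
        }
    ],
    "messages": [
        {"role": "user", "content": "Investigate and fix the failing build."}
    ],
}

print(json.dumps(request, indent=2))
```

&lt;p&gt;Printing the JSON is only for illustration; in practice this body would be sent as a single /v1/messages request, and the Executor decides at runtime whether the Advisor is invoked at all.&lt;/p&gt;&#xA;&lt;p&gt;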
Since the Advisor typically generates a short plan (usually 400-700 tokens), the overall cost is much lower than running Opus throughout.&lt;/p&gt;&#xA;&lt;p&gt;You can control costs by limiting the number of Advisor calls with max_uses. The token consumption of the Advisor is reported separately in usage.&lt;/p&gt;&#xA;&lt;h2 id=&#34;early-user-feedback&#34;&gt;Early User Feedback&#xA;&lt;/h2&gt;&lt;ul&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;&amp;ldquo;Better architectural decisions on complex tasks, with no extra overhead on simple tasks. The planning and execution trajectories are completely on different levels.&amp;rdquo;&lt;br&gt;&#xA;Eric Simmons, CEO of Bolt&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;&amp;ldquo;We have seen clear improvements in agent rounds, tool call counts, and overall scores, better than our own planning tools.&amp;rdquo;&lt;br&gt;&#xA;Kay Zhu, Co-founder and CTO of Genspark&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;&amp;ldquo;In structured document extraction tasks, Advisor allowed Haiku 4.5 to consult Opus 4.6 on demand, achieving state-of-the-art model quality at a cost 5 times lower.&amp;rdquo;&lt;br&gt;&#xA;Anuraj Pandey, Machine Learning Engineer at Eve Legal&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h2 id=&#34;key-signals&#34;&gt;Key Signals&#xA;&lt;/h2&gt;&lt;ol&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;This is Anthropic&amp;rsquo;s first native support for inter-model collaboration at the API level. Previously, to coordinate Sonnet and Opus, one had to write orchestration logic, manage context passing, and handle the state of two API calls. Now, a single tool declaration suffices.&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;The pricing logic is clever. The Advisor outputs only 400-700 tokens per call, costing just a few cents at Opus&amp;rsquo;s rate. However, this small investment in guidance allows the Executor to avoid detours, reducing total token consumption. 
This explains why adding the Advisor can actually lower total costs.&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;The Haiku + Opus Advisor combination is noteworthy. It reaches 41.2% on BrowseComp at Haiku&amp;rsquo;s price point, 85% cheaper than running Sonnet solo. This combination may be better suited to large-scale, cost-sensitive agent deployments.&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;The release cadence continues to accelerate. With Mythos, Managed Agents, and the Advisor Tool all shipped within a week, the density of Anthropic&amp;rsquo;s product line is increasing rapidly.&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;</description>
        </item><item>
            <title>Empowering AI Development Through Cultural Integration</title>
            <link>https://lumigallerys.com/posts/note-78db2be940/</link>
            <pubDate>Fri, 10 Apr 2026 00:00:00 +0000</pubDate>
            <guid>https://lumigallerys.com/posts/note-78db2be940/</guid>
            <description>&lt;h2 id=&#34;introduction&#34;&gt;Introduction&#xA;&lt;/h2&gt;&lt;p&gt;During this year&amp;rsquo;s National Two Sessions, &amp;ldquo;Artificial Intelligence + Culture&amp;rdquo; became a hot topic among representatives. The 14th Five-Year Plan clearly states the comprehensive implementation of the &amp;ldquo;Artificial Intelligence +&amp;rdquo; initiative, emphasizing the need to strengthen the integration of AI with cultural development. In this technology-driven era, culture is not merely an &amp;ldquo;application scenario&amp;rdquo; or a &amp;ldquo;subject of transformation&amp;rdquo; for AI; rather, it is an indispensable &amp;ldquo;enabler&amp;rdquo; in this technological revolution. While AI addresses efficiency and precision in fields like healthcare, industry, and logistics, it encounters meaning, emotion, and humanity in the cultural domain. This uniqueness determines that culture provides the most distinctive and irreplaceable value support for AI development.&lt;/p&gt;&#xA;&lt;h2 id=&#34;culture-as-a-training-ground-for-ai&#34;&gt;Culture as a Training Ground for AI&#xA;&lt;/h2&gt;&lt;p&gt;Culture provides a training ground for AI in terms of meaning and emotion. The evolution of AI is essentially a process of moving from &amp;ldquo;computation&amp;rdquo; to &amp;ldquo;cognition&amp;rdquo; and then to &amp;ldquo;understanding.&amp;rdquo; In industrial contexts, AI&amp;rsquo;s tasks are to identify defects and optimize paths, with clear and quantifiable goals. However, in cultural creation, AI faces the production and transmission of meaning. When AI enters this realm, it must learn to handle the ambiguity of meaning, the diversity of interpretations, and the relativity of values. The nuances of a painting&amp;rsquo;s blank spaces, the essence of a poem, and the emotional tension of a film are all elements that cannot be easily quantified, yet they are essential for training AI to reach higher levels of intelligence. 
We refer to this as cultivating &amp;ldquo;meaning sensitivity&amp;rdquo;—enabling algorithms to not only understand &amp;ldquo;what it resembles&amp;rdquo; but also to attempt to grasp &amp;ldquo;what it signifies.&amp;rdquo; Furthermore, culture injects an indispensable emotional dimension into AI. Although AI cannot possess emotions, when it engages in cultural creation, it must learn to recognize emotional expressions, understand emotional logic, and generate emotional symbols. This process, while not a true emotional experience, allows AI to better serve human emotional needs. Particularly in the context of an aging society, the demand for emotional companionship and spiritual comfort among the elderly is rising, and AI with emotional understanding will play an irreplaceable role in the silver economy.&lt;/p&gt;&#xA;&lt;h2 id=&#34;culture-as-a-laboratory-for-public-participation&#34;&gt;Culture as a Laboratory for Public Participation&#xA;&lt;/h2&gt;&lt;p&gt;If the dimensions of meaning and emotion are the &amp;ldquo;vertical&amp;rdquo; nourishment that culture provides to AI, then China&amp;rsquo;s vast cultural consumption market offers a &amp;ldquo;horizontal&amp;rdquo; testing ground for AI. From creation to dissemination, from education to cultural tourism, the cultural sector has built an extensive value chain—creative conception, material generation, production, distribution, derivative development, and audience interaction—each link can embed AI capabilities and generate new demands for AI technology. On the creation side, AI is significantly changing the content production process, enabling ordinary creators to generate high-quality cultural products at a very low cost, further expanding the boundaries of public creation. On the dissemination side, AI-driven precise recommendations allow cultural content to efficiently reach target audiences. 
In the cultural tourism sector, immersive experiences and digital twin technologies make cultural heritage perceivable and interactive. The dynamic presentation of the &amp;ldquo;Along the River During the Qingming Festival&amp;rdquo; at the Palace Museum and the immersive digital exhibitions at various museums provide new possibilities for exploring traditional culture. This virtuous cycle of &amp;ldquo;demand driving supply and supply creating demand&amp;rdquo; vividly illustrates how culture empowers AI. More importantly, the participatory nature of cultural scenarios allows AI technology to be tested, refined through feedback, and iterated on among the broadest possible population.&lt;/p&gt;&#xA;&lt;h2 id=&#34;challenges-in-cultural-empowerment-of-ai&#34;&gt;Challenges in Cultural Empowerment of AI&#xA;&lt;/h2&gt;&lt;p&gt;However, the process of culture empowering AI development is not without challenges. Some contradictions and issues in the field of cultural construction, such as structural imbalances at the industrial level, the &amp;ldquo;Matthew effect&amp;rdquo; in resource allocation, copyright dilemmas, and challenges to subjectivity, prompt us to re-examine the direction and governance logic of AI development. History also tells us that the relationship between culture and technology has never been a one-way &amp;ldquo;technological determinism&amp;rdquo; but rather a complex bidirectional construction process. To truly enable culture to empower AI development, we need to work collaboratively across multiple dimensions, including institutional innovation, platform construction, human-machine relationships, cross-border integration, and talent cultivation. 
This is both a necessary response to real challenges and a strategic choice to seize opportunities of the times.&lt;/p&gt;&#xA;&lt;h2 id=&#34;institutional-innovation-to-protect-originality&#34;&gt;Institutional Innovation to Protect Originality&#xA;&lt;/h2&gt;&lt;p&gt;First, we must safeguard the dignity of originality through institutional innovation. The copyright dilemma in the AI era is essentially a misalignment between the copyright system of the industrial era and the creative methods of the digital age. To resolve this dilemma, we need to quickly establish copyright regulations that adapt to the characteristics of AI—clarifying copyright ownership of AI-generated content, standardizing the authorized use of training data, and establishing a labeling mechanism for AI-created works. More fundamentally, we must establish a basic principle at the institutional level: technological advancement should not come at the expense of creators&amp;rsquo; legitimate rights and interests, and the &amp;ldquo;learning&amp;rdquo; of algorithms should not devolve into the uncompensated appropriation of originality. Every technological breakthrough should respect the dignity of creation, and every institutional design should protect the value of originality—this is the institutional cornerstone for cultural prosperity in the AI era.&lt;/p&gt;&#xA;&lt;h2 id=&#34;activating-cultural-data-value-through-platform-construction&#34;&gt;Activating Cultural Data Value through Platform Construction&#xA;&lt;/h2&gt;&lt;p&gt;Second, we should activate the value of cultural data through platform construction. The decentralization, departmentalization, and isolation of cultural data are bottlenecks that restrict cultural creation in the AI era. 
Starting from top-level design, we should establish a national-level cultural digital resource platform, break down departmental barriers, and reduce the search costs for creators, allowing dormant cultural resources to be transformed into actionable wisdom. Building a database of cultural genes that captures the patterns, traditional skills, and intangible cultural heritage of various ethnic groups will provide rich and standardized material support for artistic creation in the AI era and allow outstanding traditional Chinese culture to gain new forms of life in the digital age.&lt;/p&gt;&#xA;&lt;h2 id=&#34;redefining-human-machine-relationships&#34;&gt;Redefining Human-Machine Relationships&#xA;&lt;/h2&gt;&lt;p&gt;Third, we need to redefine the relationship between humans and machines in the AI era. AI can provide options, but the power of choice always lies with humans; AI can generate content, but value judgments must be made by people. The ideal human-machine relationship should be a collaborative one: humans lead in creativity, value judgment, and emotional expression, while AI is responsible for technical realization, efficiency enhancement, and solution generation. We should embrace technology while firmly maintaining human subjectivity—this is both a principle of artistic creation and a wisdom for human-technology interaction in the AI era. On a deeper level, we have a responsibility to explore the ethical boundaries of human-machine collaboration, challenging the aesthetic homogenization that algorithms may bring and infusing the logic of technology with the spirit of humanity.&lt;/p&gt;&#xA;&lt;h2 id=&#34;expanding-cultural-value-through-cross-border-integration&#34;&gt;Expanding Cultural Value through Cross-Border Integration&#xA;&lt;/h2&gt;&lt;p&gt;Fourth, we should expand the value of culture through cross-border integration. The vitality of culture lies in its flow and fusion. 
We should further deepen the integration of new popular arts with cultural tourism, cultural creativity, technology, and other fields, creating innovative development models such as &amp;ldquo;micro-short dramas + cultural tourism,&amp;rdquo; &amp;ldquo;online literature + IP derivatives,&amp;rdquo; and &amp;ldquo;online games + traditional culture,&amp;rdquo; and cultivating new cultural economy formats such as digital cultural heritage, immersive performances, smart cultural tourism, and virtual cultural communities. By promoting the deep integration of culture, tourism, sports, and commerce, we can release cultural value in broader scenarios and make the synergy of &amp;ldquo;sports as a platform, culture as the performance, tourism as the draw, and consumption upgrade&amp;rdquo; a reality. This is not only necessary for the development of the cultural industry itself but also a rightful mission for culture to empower economic and social development.&lt;/p&gt;&#xA;&lt;h2 id=&#34;building-a-foundation-for-innovative-development-through-talent-cultivation&#34;&gt;Building a Foundation for Innovative Development through Talent Cultivation&#xA;&lt;/h2&gt;&lt;p&gt;Fifth, we must build a solid foundation for innovative development through talent cultivation. Cultural creation in the AI era calls for interdisciplinary talents&amp;mdash;those who understand artistic creation and technical logic, who appreciate traditional culture and grasp the aesthetics of the digital age. We should establish diversified and specialized talent cultivation platforms, linking universities, industry associations, and leading institutions to conduct specialized training in creative techniques, copyright protection, and overseas dissemination, with a focus on supporting young, grassroots, and amateur creators. 
Additionally, we should improve talent evaluation and incentive mechanisms, breaking down barriers related to identity and education, and enhancing pathways for talent growth to create a favorable industry ecosystem where &amp;ldquo;everyone can create, and everyone can produce excellent works.&amp;rdquo; This is the true essence of integrating AI with cultural development.&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>Strengthening AI Governance in the New Era</title>
            <link>https://lumigallerys.com/posts/note-b62a093c74/</link>
            <pubDate>Thu, 09 Apr 2026 00:00:00 +0000</pubDate>
            <guid>https://lumigallerys.com/posts/note-b62a093c74/</guid>
            <description>&lt;h2 id=&#34;introduction&#34;&gt;Introduction&#xA;&lt;/h2&gt;&lt;p&gt;Artificial intelligence (AI) is a strategic force leading a new round of technological revolution and industrial transformation, profoundly reshaping the global innovation landscape, development paradigms, and human lifestyles. However, its widespread application also brings a series of risks and challenges. General Secretary Xi Jinping emphasizes the need to grasp the trends and laws of AI development, expedite the formulation of relevant laws, policies, application norms, and ethical guidelines, and establish a technical monitoring, risk warning, and emergency response system. This approach aims to enhance the safety, reliability, controllability, and fairness of AI technologies.&lt;/p&gt;&#xA;&lt;h2 id=&#34;importance-of-strengthening-ai-governance&#34;&gt;Importance of Strengthening AI Governance&#xA;&lt;/h2&gt;&lt;p&gt;In the face of accelerated AI technology iterations, deep penetration of applications, and complex intertwined risks, accurately grasping the trends and laws of AI development is crucial for promoting the healthy and orderly development of AI in China.&lt;/p&gt;&#xA;&lt;h3 id=&#34;strategic-advantage-in-technological-revolution&#34;&gt;Strategic Advantage in Technological Revolution&#xA;&lt;/h3&gt;&lt;p&gt;General Secretary Xi points out that AI is a significant driving force in the new technological revolution and industrial transformation. Currently, AI technology innovation is in a period of intense activity, and the industrialization process is accelerating, creating favorable conditions for gaining a competitive edge in future development. Major countries are actively advancing AI development, giving rise to new fields such as intelligent agents, autonomous driving, embodied intelligence, and smart wearables. 
Strengthening AI governance can create a stable, transparent, and predictable governance environment, providing clear rules for enterprises and research institutions, encouraging investment, and fostering innovation.&lt;/p&gt;&#xA;&lt;h3 id=&#34;promoting-high-quality-economic-development&#34;&gt;Promoting High-Quality Economic Development&#xA;&lt;/h3&gt;&lt;p&gt;AI is a crucial driving force for upgrading industries and cultivating new economic growth points. By the end of 2025, China was expected to have over 6,000 AI companies, forming a complete industrial system from foundational infrastructure to industry applications. Domestic large models are leading the global ecosystem through open-source strategies, transforming AI from a cutting-edge technology used by a few companies into a widely accessible tool across various industries. Exploring effective AI governance paths can better mitigate risks, create a stable policy environment, and foster a fair, healthy, and vibrant industrial ecosystem.&lt;/p&gt;&#xA;&lt;h3 id=&#34;maintaining-national-security-and-social-stability&#34;&gt;Maintaining National Security and Social Stability&#xA;&lt;/h3&gt;&lt;p&gt;General Secretary Xi emphasizes the importance of improving the AI regulatory system to maintain control over AI development and governance. The safety risks associated with AI are increasingly prominent, characterized by complexity, systemic nature, and pervasiveness. Strengthening AI governance is essential for ensuring that AI is used to uphold national interests and the welfare of the people.&lt;/p&gt;&#xA;&lt;h3 id=&#34;building-a-community-with-a-shared-future&#34;&gt;Building a Community with a Shared Future&#xA;&lt;/h3&gt;&lt;p&gt;AI should be an international public good that benefits all humanity. 
It is essential to promote coordination among development strategies, governance rules, and technical standards globally, forming a widely accepted global governance framework.&lt;/p&gt;&#xA;&lt;h2 id=&#34;basic-principles-for-strengthening-ai-governance&#34;&gt;Basic Principles for Strengthening AI Governance&#xA;&lt;/h2&gt;&lt;p&gt;Effective governance must be guided by scientific concepts and clear principles. The following basic principles should be adhered to:&lt;/p&gt;&#xA;&lt;h3 id=&#34;establishing-a-value-orientation-toward-beneficence&#34;&gt;Establishing a Value Orientation Toward Beneficence&#xA;&lt;/h3&gt;&lt;p&gt;AI governance should ensure that the ultimate goal of technological development is to enhance human well-being and promote comprehensive human development.&lt;/p&gt;&#xA;&lt;h3 id=&#34;implementing-systematic-thinking&#34;&gt;Implementing Systematic Thinking&#xA;&lt;/h3&gt;&lt;p&gt;AI development and safety are highly interrelated. It is crucial to balance development and safety while maintaining proactive governance.&lt;/p&gt;&#xA;&lt;h3 id=&#34;strengthening-the-legal-foundation-for-good-governance&#34;&gt;Strengthening the Legal Foundation for Good Governance&#xA;&lt;/h3&gt;&lt;p&gt;Legal governance of AI is essential for ensuring the healthy development of new business models and addressing emerging issues.&lt;/p&gt;&#xA;&lt;h3 id=&#34;innovating-agile-and-dynamic-governance-models&#34;&gt;Innovating Agile and Dynamic Governance Models&#xA;&lt;/h3&gt;&lt;p&gt;AI governance should be inclusive and promote the sharing of core resources while establishing necessary error-correction mechanisms.&lt;/p&gt;&#xA;&lt;h2 id=&#34;accelerating-ai-governance-implementation&#34;&gt;Accelerating AI Governance Implementation&#xA;&lt;/h2&gt;&lt;p&gt;The scientific nature of theory must ultimately manifest in practical guidance. 
Implementing General Secretary Xi’s important discourse on AI governance requires a comprehensive governance system covering all aspects of AI development, deployment, application, and impact.&lt;/p&gt;&#xA;&lt;h3 id=&#34;strengthening-data-governance&#34;&gt;Strengthening Data Governance&#xA;&lt;/h3&gt;&lt;p&gt;Data is a core element of AI development. Improving data governance is essential for building a secure and trustworthy channel from raw data to intelligent value.&lt;/p&gt;&#xA;&lt;h3 id=&#34;enhancing-model-governance-efficiency&#34;&gt;Enhancing Model Governance Efficiency&#xA;&lt;/h3&gt;&lt;p&gt;AI large models are central to intelligent systems. Effective model governance is crucial for ensuring the quality and safety of AI applications.&lt;/p&gt;&#xA;&lt;h3 id=&#34;optimizing-application-governance-ecosystem&#34;&gt;Optimizing Application Governance Ecosystem&#xA;&lt;/h3&gt;&lt;p&gt;As AI applications expand, it is essential to establish a refined governance approach that avoids a one-size-fits-all model and ensures human oversight in critical decision-making.&lt;/p&gt;&#xA;&lt;h3 id=&#34;refining-ethical-governance-requirements&#34;&gt;Refining Ethical Governance Requirements&#xA;&lt;/h3&gt;&lt;p&gt;AI ethical governance involves not only technical rule-making but also the construction of value orders, drawing from rich cultural traditions.&lt;/p&gt;&#xA;&lt;h3 id=&#34;strengthening-global-governance-collaboration&#34;&gt;Strengthening Global Governance Collaboration&#xA;&lt;/h3&gt;&lt;p&gt;AI governance is a common challenge for humanity. It is necessary to actively implement global AI governance initiatives to build an open, fair, and effective governance mechanism.&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>Vibe Coding Removed from App Store: What&#39;s Next?</title>
            <link>https://lumigallerys.com/posts/note-29445412f9/</link>
            <pubDate>Wed, 01 Apr 2026 00:00:00 +0000</pubDate>
            <guid>https://lumigallerys.com/posts/note-29445412f9/</guid>
            <description>&lt;h2 id=&#34;vibe-coding-removed-from-app-store&#34;&gt;Vibe Coding Removed from App Store&#xA;&lt;/h2&gt;&lt;p&gt;In March 2026, Apple completely removed the Vibe Coding app, Anything, from the App Store, marking a significant setback for its survival in a closed ecosystem. This article deeply analyzes the core of this conflict—the fundamental incompatibility between Apple&amp;rsquo;s Guideline 2.5.2 and the logic of AI-generated code. As the platform insists on a static review framework, entrepreneurs are forced to make difficult choices between web-based survival and migrating to Android. This is not just a technical battle but a real challenge to the monopolistic review power of app stores.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 2&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;514px&#34; data-flex-grow=&#34;214&#34; height=&#34;420&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-29445412f9/img-58e0fc011b.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-29445412f9/img-58e0fc011b_hu_784cb194d31a2a1.jpeg 800w, https://lumigallerys.com/posts/note-29445412f9/img-58e0fc011b.jpeg 900w&#34; width=&#34;900&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Anything&amp;rsquo;s co-founder and CEO, Dhruv Amin, stated that the app had previously helped users publish thousands of applications on the App Store, including management systems for emergency responders and reimbursement tracking tools designed for gig economy workers.&lt;/p&gt;&#xA;&lt;p&gt;According to The Information, prior to Anything&amp;rsquo;s removal, Apple had already implemented update freezes on similar applications like Replit and Bitrig, indicating a systematic tightening of the Vibe Coding category. 
Apple insists that this action is merely enforcing existing rules to prevent apps from introducing new features without review; however, critics argue that this review framework, designed for static applications, cannot accommodate the underlying logic of AI-generated content.&lt;/p&gt;&#xA;&lt;p&gt;Amin bluntly remarked, &amp;ldquo;This is the problem with Apple and closed platforms—either they made a mistake, or they decide that your category is not allowed to exist.&amp;rdquo; He is currently evaluating a shift to Android, while other teams have turned to pure web development. The future of Vibe Coding is becoming increasingly clear.&lt;/p&gt;&#xA;&lt;h2 id=&#34;apple-changes-course-after-thousands-of-apps-launched&#34;&gt;Apple Changes Course After Thousands of Apps Launched&#xA;&lt;/h2&gt;&lt;p&gt;Last August, Anything entered the market as a browser-based Vibe Coding tool. Vibe Coding allows individuals without programming experience to generate applications directly through AI—by describing their ideas, the code is automatically produced. In November, Anything launched its iPhone client, and the App Store review team raised no objections, allowing it to be released smoothly.&lt;/p&gt;&#xA;&lt;p&gt;In the following months, Anything continued to update, and users had published thousands of applications on the App Store using this tool, including valuable products such as a management system for emergency responders and a reimbursement tracking tool for gig economy workers. The existence of these applications demonstrated that Vibe Coding is not merely a toy-level technical experiment.&lt;/p&gt;&#xA;&lt;p&gt;The turning point occurred in mid-December. Apple&amp;rsquo;s review team began rejecting every update submitted by Anything, citing violations of Guideline 2.5.2. This was less than two months after the iPhone version launched. Amin attempted to compromise by moving the Vibe Coding preview feature from the app to a web browser to avoid controversy. 
Apple not only rejected this submission but also removed the entire app from the App Store in March.&lt;/p&gt;&#xA;&lt;p&gt;From initial approval and launch to update freezes and final removal, the entire process took less than six months. Before Anything&amp;rsquo;s app was officially removed, The Information reported earlier this month that Apple had blocked updates for multiple Vibe Coding applications—shortly after, Anything faced a more comprehensive removal.&lt;/p&gt;&#xA;&lt;p&gt;Meanwhile, Replit and Bitrig, also part of the Vibe Coding category, remain on the App Store but are similarly unable to update—Replit&amp;rsquo;s last update was in January, and Bitrig&amp;rsquo;s was in November of last year. Apple&amp;rsquo;s attitude towards this category reflects a systematic tightening.&lt;/p&gt;&#xA;&lt;h2 id=&#34;guideline-252-a-rule-that-closes-off-a-category&#34;&gt;Guideline 2.5.2: A Rule That Closes Off a Category&#xA;&lt;/h2&gt;&lt;p&gt;Apple&amp;rsquo;s sole reason for the removal was Guideline 2.5.2. The original wording of this rule states that applications must &amp;ldquo;be self-contained within their installation package,&amp;rdquo; and must not read or write data outside designated container areas, nor &amp;ldquo;download, install, or execute code that introduces or modifies application characteristics and functionalities.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;The original intent of 2.5.2 was to prevent developers from circumventing App Store reviews by silently pushing unreviewed feature changes on user devices. This logic is reasonable—applications extending permissions without review do need to be constrained in the context of mobile security. The problem arises when this rule is aimed at the Vibe Coding category, as its reach far exceeds the original design intent.&lt;/p&gt;&#xA;&lt;p&gt;The core mechanism of Vibe Coding tools is precisely to generate and execute code dynamically at runtime via AI. 
Users describe their needs, the model outputs logic, and the application presents results in real-time. This process naturally falls within the prohibitions of 2.5.2—because each generation effectively pushes &amp;ldquo;unreviewed new features&amp;rdquo; to the device. In other words, as long as Vibe Coding remains Vibe Coding, it cannot operate on iPhones without violating this rule.&lt;/p&gt;&#xA;&lt;p&gt;Apple&amp;rsquo;s statement is that the company is not targeting the Vibe Coding category but is merely enforcing existing rules to prevent applications from undergoing substantial changes without review. While this explanation is flawless in wording, it sidesteps a critical question: why apply a rule designed for static applications to AI tools that generate dynamic content?&lt;/p&gt;&#xA;&lt;p&gt;Anything attempted a compromise path: moving the code preview feature to a web browser to display AI-generated content without executing it directly within the native app. The logic behind this solution is that the browser itself is a sandbox environment, circumventing 2.5.2&amp;rsquo;s restrictions on local code execution. Apple rejected this submission and subsequently removed the entire app. This means Apple is not only enforcing rules but also narrowing the possible exceptions.&lt;/p&gt;&#xA;&lt;p&gt;For other developers, the current enforcement of this rule creates a highly uncertain situation. Apps like Replit and Bitrig remain on the App Store but cannot update; some teams, like Vibecode, have proactively abandoned iPhone development in favor of pure web development. 
The same rule produces vastly different enforcement outcomes, and Apple has yet to provide clear boundary explanations.&lt;/p&gt;&#xA;&lt;h2 id=&#34;the-cost-of-a-closed-platform-how-entrepreneurs-coexist-with-apple&#34;&gt;The Cost of a Closed Platform: How Entrepreneurs Coexist with Apple&#xA;&lt;/h2&gt;&lt;p&gt;After Anything was removed, Dhruv Amin made a poignant statement: &amp;ldquo;This is the problem with Apple and closed platforms—either they made a mistake, or they decide that your category is not allowed to exist.&amp;rdquo; This statement highlights a structural dilemma that entrepreneurs face in platform ecosystems, which is often overlooked.&lt;/p&gt;&#xA;&lt;p&gt;In the mobile internet era, the App Store is the only legal channel to reach iPhone users. For consumer-facing applications, losing this entry point is almost equivalent to losing the entire market. Before being removed, Anything had already accumulated thousands of user-published applications through this channel, establishing a real product ecosystem. The visibility of these assets to iOS users was completely lost at the moment of removal.&lt;/p&gt;&#xA;&lt;p&gt;The unpredictability of the timeline is even more challenging. Anything&amp;rsquo;s iPhone version was formally approved by the App Store review team at launch, and after months of operation, it faced a blockade. Approval does not guarantee long-term compliance; the interpretation of platform rules always lies in Apple&amp;rsquo;s hands and can be redefined at any time. For early-stage startups, this uncertainty is nearly impossible to hedge through any conventional business planning.&lt;/p&gt;&#xA;&lt;p&gt;Faced with this situation, entrepreneurs have limited options. Amin is currently evaluating whether to shift focus to the Android platform, which means rebuilding the product on a new tech stack while bearing the friction costs of user migration. 
Another path is to completely transition to the web, bypassing all native app store controls—Vibecode has already made this choice, abandoning iPhone development. Both paths mean sacrificing the established iOS user base, with real costs involved.&lt;/p&gt;&#xA;&lt;p&gt;From a broader perspective, Apple&amp;rsquo;s handling of the Vibe Coding category reveals issues of compatibility between platform rules and emerging technologies. The existing App Store review framework is designed for static, fixed-function native applications. As AI blurs the boundaries of applications, the original review logic begins to fail—but the costs of this failure are borne by developers.&lt;/p&gt;&#xA;&lt;p&gt;Apple itself has its own interests to consider. Xcode has recently integrated Anthropic&amp;rsquo;s Claude and OpenAI&amp;rsquo;s Codex, launching AI programming assistance features aimed at professional developers. The core value proposition of Vibe Coding tools is precisely to allow non-professional users to build applications directly, bypassing professional tools like Xcode. This competitive relationship makes it difficult to interpret Apple&amp;rsquo;s attitude towards this category as a neutral rule enforcement.&lt;/p&gt;&#xA;&lt;h2 id=&#34;the-future-of-vibe-coding-is-not-in-the-app-store&#34;&gt;The Future of Vibe Coding Is Not in the App Store&#xA;&lt;/h2&gt;&lt;p&gt;Amin&amp;rsquo;s judgment is worth highlighting: &amp;ldquo;The scale of Vibe Coding will far exceed Apple&amp;rsquo;s current imagination.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;The essence of Vibe Coding is to lower the barriers to software production. 
When someone without any programming background can describe their needs in natural language and receive a runnable application, software development transforms from a specialized skill into a tool accessible to ordinary people.&lt;/p&gt;&#xA;&lt;p&gt;This is a paradigm shift on the same scale as spreadsheets democratizing financial modeling or no-code tools democratizing website building. The App Store&amp;rsquo;s blockade cannot change the direction; it can only affect where the shift lands.&lt;/p&gt;&#xA;&lt;p&gt;Where it lands is becoming increasingly clear: the web. Vibecode&amp;rsquo;s choice is representative: abandoning the iPhone native side and focusing on the browser-based product experience. This path bypasses the App Store&amp;rsquo;s review controls, at the cost of some native experience and distribution benefits. However, for tools like Vibe Coding, the core value lies in the generation capability itself rather than in being native to a platform; the web is sufficient to carry that value.&lt;/p&gt;&#xA;&lt;p&gt;From a distribution standpoint, a web-first strategy is actually more flexible in the current environment. Users can reach the product directly through links without passing any app store review node, and the speed of product iteration is not constrained by third-party approval cycles. This is precisely the rhythm AI-native products need: models are evolving rapidly, products must update in sync, and any review friction translates into competitive delay.&lt;/p&gt;&#xA;&lt;p&gt;Regulatory variables are also worth noting. Apple&amp;rsquo;s systematic blockade of emerging AI tool categories has already attracted the attention of antitrust observers. 
With regulatory agencies in Europe and the US continuing to scrutinize large-platform behavior, whether Apple&amp;rsquo;s actions constitute improper exclusion of competing development tools is an open question that is already under discussion. If regulatory pressure ultimately forces Apple to allow sideloading or relax review standards, a window may yet open for Vibe Coding tools to return to iOS.&lt;/p&gt;&#xA;&lt;p&gt;Until that day arrives, however, the main battleground for this category has quietly shifted. Anything is evaluating Android, other teams are betting on the web, and the entire industry&amp;rsquo;s focus is moving away from the App Store as a single entry point. Apple&amp;rsquo;s blockade has, to some extent, accelerated the diversification of the Vibe Coding ecosystem; that is likely not the outcome Apple intended.&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>Understanding Claude Code, Codex, and OpenClaw</title>
            <link>https://lumigallerys.com/posts/note-d96a3bdab4/</link>
            <pubDate>Mon, 30 Mar 2026 00:00:00 +0000</pubDate>
            <guid>https://lumigallerys.com/posts/note-d96a3bdab4/</guid>
<description>&lt;h2 id=&#34;understanding-claude-code-codex-and-openclaw&#34;&gt;Understanding Claude Code, Codex, and OpenClaw&#xA;&lt;/h2&gt;&lt;p&gt;Recently, a friend who is an independent developer asked me, &amp;ldquo;Are you using Claude Code or Codex? I&amp;rsquo;ve been struggling to choose between the two for almost a week.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;I replied: &amp;ldquo;You&amp;rsquo;re asking the wrong question.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;These two are fundamentally different things, and with the emergence of OpenClaw, the entire discussion has moved to a new level.&lt;/p&gt;&#xA;&lt;p&gt;In the past three months, these three tools have sparked some of the most intense discussions among developers that I have seen in over ten years in this field. However, most of the discussion has stayed at the level of &amp;ldquo;which is better,&amp;rdquo; without clarifying the fundamental differences between them.&lt;/p&gt;&#xA;&lt;p&gt;This article aims to clarify that.&lt;/p&gt;&#xA;&lt;h3 id=&#34;conceptual-framework&#34;&gt;Conceptual Framework&#xA;&lt;/h3&gt;&lt;p&gt;Before discussing each tool, I want to emphasize that &lt;strong&gt;these three products do not sit at the same layer; comparing them directly is as odd as comparing VS Code and Docker.&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;They correspond to three different layers in the AI productivity stack:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;First Layer, Brain&lt;/strong&gt;: The large language models themselves, such as Claude, GPT, and DeepSeek, responsible for understanding and reasoning.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Second Layer, Hand&lt;/strong&gt;: Programming agents like Claude Code and Codex, which integrate the capabilities of large models into your codebase, responsible for executing specific development tasks.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Third Layer, Operating System&lt;/strong&gt;: Agent runtime platforms like 
OpenClaw, which schedule multiple tools and models, manage long-term tasks, and run continuously.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;In simpler terms: Claude Code and Codex are employees, while OpenClaw is the company. The former help you write code; the latter manages the whole group of AIs working for you.&lt;/p&gt;&#xA;&lt;h3 id=&#34;claude-code-the-ai-engineer-that-understands-your-codebase-best&#34;&gt;Claude Code: The AI Engineer That Understands Your Codebase Best&#xA;&lt;/h3&gt;&lt;p&gt;Claude Code is a terminal-native programming agent launched by Anthropic in May 2025, and it developed faster than many anticipated. By early 2026, it had become the most widely used product in the AI programming tools market: in a survey of nearly 1,000 participants it was the tool of choice for 46% of respondents, while second-ranked Cursor had only 19%.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;What Did Claude Code Do Right?&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;Its core design decision prioritized &amp;ldquo;understanding the entire codebase&amp;rdquo; over simply &amp;ldquo;writing a runnable piece of code.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;Suppose you take over a chaotic two-year-old Node.js project with sparse documentation and complex dependencies, and you need to fix a login-authentication bug. A typical AI assistant would directly modify the code you paste in and hand you a local patch. Claude Code, by contrast, first reads CLAUDE.md (your project&amp;rsquo;s rules configuration file), scans related files, and works out how the authentication logic connects upstream and downstream across the entire system before making changes. 
It knows how changes in one area might affect others.&lt;/p&gt;&#xA;&lt;p&gt;This difference may not be apparent when handling simple functions, but it becomes significant when dealing with real projects.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Subagents + Checkpoint: Two Key Features to Note&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;In the second half of 2025, Claude Code introduced two important mechanisms: Subagents and Checkpoint.&lt;/p&gt;&#xA;&lt;p&gt;Subagents allow a complex task to be divided among multiple specialized AI instances for parallel execution. For instance, when refactoring an authentication module, one Subagent handles database migration, another modifies API routes, and a third manages frontend state changes, while the main Agent coordinates and integrates the results. Each Subagent has an independent context window, allowing up to 10 to run simultaneously without interference.&lt;/p&gt;&#xA;&lt;p&gt;Checkpoint addresses another concern: the fear that AI might break the code. It automatically archives the current state before each modification, allowing you to revert to any historical point using the Esc Esc or /rewind command. With this safety mechanism, you can confidently assign larger and more complex tasks to it.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;A Practical Detail&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;The CLAUDE.md file is often overlooked but is crucial. You can write the project&amp;rsquo;s tech stack version, prohibited libraries, database schema summaries, and code style rules in it. 
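&lt;/p&gt;&#xA;&lt;p&gt;As a concrete illustration, here is what a minimal CLAUDE.md might contain; every value below is invented for the example:&lt;/p&gt;

```markdown
# CLAUDE.md (hypothetical example)

## Tech stack
- Node.js 20, Express 4, PostgreSQL 15

## Rules
- Do not add new dependencies without asking; moment.js is banned, use date-fns.
- All database access goes through src/db/queries.js; never inline SQL in routes.

## Schema summary
- users(id, email, password_hash, created_at)
- sessions(id, user_id, token, expires_at)

## Style
- Follow the repository ESLint config; prefer async/await over promise chains.
```

&lt;p&gt;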
By some accounts, a well-written CLAUDE.md can eliminate roughly 80% of &amp;ldquo;Claude forgot&amp;rdquo; issues.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Use Cases&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;Claude Code is best suited for quickly getting up to speed with unfamiliar codebases, handling complex bugs that span multiple files, performing systematic refactoring, and any development task that requires the AI to truly understand your project&amp;rsquo;s overall structure rather than just execute local commands.&lt;/p&gt;&#xA;&lt;p&gt;It offers comprehensive access methods: terminal CLI, VS Code plugin (released in beta by the end of 2025), web interface, and desktop app. Usage is included with a Claude Pro subscription (from $20/month), and enterprise users can also deploy it privately via Bedrock or Vertex AI.&lt;/p&gt;&#xA;&lt;h3 id=&#34;codex-taking-task-outsourcing-to-another-level&#34;&gt;Codex: Taking Task Outsourcing to Another Level&#xA;&lt;/h3&gt;&lt;p&gt;OpenAI launched Codex in April 2025 (not the earlier code-completion model of the same name, but a new software-engineering agent), released a macOS desktop app by the end of the year, and followed with Windows versions in 2026.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Fundamental Differences in How Codex and Claude Code Work&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;Claude Code operates on a &amp;ldquo;human-machine collaboration&amp;rdquo; model: you supervise its work in real time, reviewing each step and adjusting direction as needed. This is a co-pilot mode in which the human is in charge.&lt;/p&gt;&#xA;&lt;p&gt;Codex, on the other hand, is about &amp;ldquo;task outsourcing&amp;rdquo;: you clearly describe a task, it executes the task autonomously in an isolated sandbox environment, and it returns results and a PR for your review. You don&amp;rsquo;t need to monitor it continuously.&lt;/p&gt;&#xA;&lt;p&gt;This difference significantly shapes actual workflows. 
Codex is suitable for tasks where you know what needs to be done but don&amp;rsquo;t want to spend energy supervising each step. For example, you can say, &amp;ldquo;Help me complete unit tests for this module&amp;rdquo; or &amp;ldquo;Help me migrate the calling method of this old interface to the new version,&amp;rdquo; then move on to other tasks and return later to check the results.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Parallelism is Codex&amp;rsquo;s Core Advantage&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;Codex supports genuine multi-task parallelism: multiple Agent instances work in independent cloud sandboxes, each pre-installed with your codebase and development environment. If you have five independent tasks, you can start five Agents to process them simultaneously instead of queuing them.&lt;/p&gt;&#xA;&lt;p&gt;The desktop app&amp;rsquo;s design philosophy is that of a &amp;ldquo;command center&amp;rdquo;: the left side displays the project list, while the right side shows all running Agent threads, allowing you to switch between tasks, check progress, and comment or manually modify in the diff view.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Safety Design is Another Priority for Codex&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;By default, Codex&amp;rsquo;s sandbox disables external network access, and file modifications are restricted to specified directories. This design is intentional—isolated execution and presenting results after completion is much safer than operating directly on your local environment. 
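&lt;/p&gt;&#xA;&lt;p&gt;The isolated-execution idea can be sketched in a few lines of Python. This is a toy illustration of the pattern (work on a throwaway copy, then present a diff for review), not Codex&amp;rsquo;s actual mechanism:&lt;/p&gt;

```python
# Toy sketch of the "isolated sandbox" idea: run a task against a
# throwaway copy of the project so the real files cannot be touched,
# then report the resulting diff for human review.

import shutil
import subprocess
import tempfile
from pathlib import Path


def run_sandboxed(project: Path, command: list) -> str:
    """Run command against a copy of project; return a diff of the changes."""
    with tempfile.TemporaryDirectory() as tmp:
        sandbox = Path(tmp) / project.name
        shutil.copytree(project, sandbox)  # the agent only ever sees the copy
        subprocess.run(command, cwd=sandbox, check=True)
        # Present results after completion instead of mutating files in place.
        diff = subprocess.run(
            ["diff", "-ru", str(project), str(sandbox)],
            capture_output=True, text=True,
        )
        return diff.stdout
```

&lt;p&gt;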
However, for tasks that require internet access, network permissions can be enabled manually.&lt;/p&gt;&#xA;&lt;p&gt;Additionally, Codex includes a code review feature that can automatically review your PRs directly on GitHub, acting like an asynchronous code reviewer.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Open Source CLI Version of Codex&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;If you want to run Codex in a local terminal, there is a fully open-source CLI written in Rust. It can be installed via npm or Homebrew, configured to use local models (including via Ollama), and connected to external tools through MCP. Its core logic is consistent with cloud Codex, but it is better suited to developers who want complete control over the execution environment.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Use Cases&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;Codex is suitable for clear, well-defined development tasks (writing features, fixing bugs, writing tests); for those who wish to free their hands and wait for results asynchronously; for scenarios requiring multi-task parallelism; and for teams already deeply integrated into the ChatGPT ecosystem (accounts interoperate without additional registration).&lt;/p&gt;&#xA;&lt;p&gt;A ChatGPT Plus subscription ($20/month) includes Codex usage credits.&lt;/p&gt;&#xA;&lt;h3 id=&#34;openclaw-not-a-tool-but-an-operating-system-for-running-ai&#34;&gt;OpenClaw: Not a Tool, But an Operating System for Running AI&#xA;&lt;/h3&gt;&lt;p&gt;OpenClaw is the most difficult to define and the easiest to misunderstand of the three.&lt;/p&gt;&#xA;&lt;p&gt;It is an open-source project released by Austrian developer Peter Steinberger in November 2025 under the name Clawdbot. After its release, it went viral, surpassing 240,000 GitHub Stars within two months and becoming one of the fastest-growing projects in GitHub history (surpassing even React). 
It was later renamed Moltbot due to a trademark complaint from Anthropic, and after Steinberger found the new name &amp;ldquo;too awkward to pronounce,&amp;rdquo; it was changed to OpenClaw three days later.&lt;/p&gt;&#xA;&lt;p&gt;In February this year, Steinberger announced that he was joining OpenAI, and the project was handed over to an open-source foundation for continued maintenance.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;What Exactly is OpenClaw?&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;In one sentence: it is a system that lets AI work for you continuously.&lt;/p&gt;&#xA;&lt;p&gt;It runs locally, connects to the large language models of your choice (Claude, GPT, DeepSeek, local Ollama, etc.), and integrates that AI into over 20 messaging platforms such as WhatsApp, Telegram, Slack, Discord, and iMessage. You send a message to the AI, and it executes tasks: reading files, running scripts, controlling browsers, sending emails, managing calendars, monitoring servers, and so on.&lt;/p&gt;&#xA;&lt;p&gt;The fundamental difference from Claude Code and Codex is that it is not a tool that works only when your computer is on and you are staring at the screen. 
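&lt;/p&gt;&#xA;&lt;p&gt;The &amp;ldquo;message in, task out&amp;rdquo; loop can be sketched in a few lines of Python. This is a toy model of the dispatch pattern only; none of the names below are OpenClaw&amp;rsquo;s real APIs:&lt;/p&gt;

```python
# Toy sketch of the message-driven agent pattern described above:
# a message arrives, a matching capability ("skill") runs, and the
# result is persisted as context for later turns. All names invented.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

Skill = Callable[[str], str]


@dataclass
class Agent:
    skills: Dict[str, Skill] = field(default_factory=dict)
    memory: List[str] = field(default_factory=list)  # context across turns

    def register(self, name: str, skill: Skill) -> None:
        self.skills[name] = skill

    def handle(self, message: str) -> str:
        # Gateway role: route a message like "uptime: web-01" to a skill
        # by its prefix; the payload goes to the skill itself.
        name, _, payload = message.partition(":")
        skill = self.skills.get(name.strip())
        if skill is None:
            return f"no skill for {name.strip()!r}"
        result = skill(payload.strip())
        self.memory.append(result)  # Memory role: keep results around
        return result


agent = Agent()
agent.register("uptime", lambda host: f"{host} is up")
print(agent.handle("uptime: web-01"))  # -> web-01 is up
print(agent.handle("deploy: web-01"))  # -> no skill for 'deploy'
```

&lt;p&gt;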
You can set up a Mac Mini at home to run OpenClaw 24/7 and send messages to it from anywhere via your phone to have it help you with tasks.&lt;/p&gt;&#xA;&lt;h3 id=&#34;four-core-components&#34;&gt;Four Core Components&#xA;&lt;/h3&gt;&lt;p&gt;OpenClaw&amp;rsquo;s architecture consists of four parts:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Gateway&lt;/strong&gt;: The entry point for receiving messages and distributing commands.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Agent&lt;/strong&gt;: The core that executes specific tasks.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Skills&lt;/strong&gt;: Expandable capability modules, with thousands available in the community-maintained ClawHub marketplace.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Memory&lt;/strong&gt;: Persistent user preferences, project information, and historical context across sessions.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;The Skills system is the most interesting part. You can install Skills written by others to extend the AI&amp;rsquo;s capabilities or write your own. The community has Skills for handling Solana wallets, automatically posting to Instagram, monitoring GitHub Actions, and more.&lt;/p&gt;&#xA;&lt;h3 id=&#34;why-many-people-struggle-to-use-it&#34;&gt;Why Many People Struggle to Use It&#xA;&lt;/h3&gt;&lt;p&gt;OpenClaw has higher requirements for users; it is not a tool that you can just install and use.&lt;/p&gt;&#xA;&lt;p&gt;The most common mistake is throwing a vague task at the AI, such as &amp;ldquo;help me manage my work.&amp;rdquo; The AI does not know what that means. The correct way to use OpenClaw is to clearly design the workflow—what the trigger conditions are, what steps to execute, and how to provide feedback on the results—and then configure this process.&lt;/p&gt;&#xA;&lt;p&gt;Another barrier is the design of Skills. 
Good Skills are atomic, with a single responsibility each; many beginners mix too much logic into one Skill, which makes troubleshooting difficult when issues arise.&lt;/p&gt;&#xA;&lt;p&gt;OpenClaw&amp;rsquo;s maintainer, Shadow, once said on Discord, &amp;ldquo;If you don&amp;rsquo;t know how to run commands in the command line, this project is already too dangerous for you to use safely.&amp;rdquo; Blunt, but true.&lt;/p&gt;&#xA;&lt;h3 id=&#34;security-issues-a-necessary-discussion&#34;&gt;Security Issues: A Necessary Discussion&#xA;&lt;/h3&gt;&lt;p&gt;The biggest controversy surrounding OpenClaw in recent months has been security.&lt;/p&gt;&#xA;&lt;p&gt;After the November launch, the first critical vulnerability (CVE-2026-25253, CVSS score 8.8) was discovered in January this year: an attacker could lure you to a malicious webpage whose JavaScript connects to your local OpenClaw gateway via WebSocket, steals authentication tokens, and gains complete control over your Agent, including disabling the sandbox and executing arbitrary commands.&lt;/p&gt;&#xA;&lt;p&gt;In the following weeks, several other CVEs were disclosed, involving command injection, path traversal, Webhook authentication bypass, and more. Hundreds of malicious skill packages disguised as legitimate tools were also found in the ClawHub Skills marketplace, stealing data or installing keyloggers in the background.&lt;/p&gt;&#xA;&lt;p&gt;Scans by security researchers found that, at one point, over 130,000 OpenClaw instances were directly exposed on the public internet, most of which had no authentication configured. 
The Ministry of Industry and Information Technology of China also issued a security warning in March this year, urging government agencies and state-owned banks to limit usage.&lt;/p&gt;&#xA;&lt;p&gt;Currently, the recommended minimum secure version is 2026.2.26; if you are still running an earlier version, update immediately.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;It is important to clarify&lt;/strong&gt;: these security issues do not imply that OpenClaw&amp;rsquo;s core product concept is flawed. The root of the problem is the combination of great capability, loose default configuration, and rapid deployment: any system with superuser privileges will run into trouble if it defaults to no authentication and unrestricted access. The team&amp;rsquo;s response has been quite fast, with most CVEs patched within 24 hours of disclosure.&lt;/p&gt;&#xA;&lt;p&gt;However, this also means OpenClaw is not suitable for casual installation and use. If you plan to deploy it in production, you need to take security hardening seriously.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Use Cases&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;OpenClaw suits technically capable users who can configure and maintain it, for: 24/7 automation tasks (monitoring alerts, scheduled inspections, automatic daily reports); cross-platform message-triggered workflows; personal automation assistants (remotely controlling local servers via phone messages); and model-agnostic scenarios (choosing your own models and retaining data sovereignty).&lt;/p&gt;&#xA;&lt;p&gt;It is free under the MIT license, but the cost of running local models or calling cloud APIs is borne by the user; light usage runs about $10-30/month.&lt;/p&gt;&#xA;&lt;h3 id=&#34;comparison-summary-a-table-of-differences&#34;&gt;Comparison Summary: A Table of Differences&#xA;&lt;/h3&gt;&lt;table&gt;&#xA;  &lt;thead&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;th&gt;&lt;/th&gt;&#xA;          &lt;th&gt;&lt;strong&gt;Claude Code&lt;/strong&gt;&lt;/th&gt;&#xA;          &lt;th&gt;&lt;strong&gt;Codex&lt;/strong&gt;&lt;/th&gt;&#xA;          &lt;th&gt;&lt;strong&gt;OpenClaw&lt;/strong&gt;&lt;/th&gt;&#xA;      &lt;/tr&gt;&#xA;  &lt;/thead&gt;&#xA;  &lt;tbody&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Positioning&lt;/td&gt;&#xA;          &lt;td&gt;AI programming agent&lt;/td&gt;&#xA;          &lt;td&gt;Automated programming engine&lt;/td&gt;&#xA;          &lt;td&gt;Agent runtime platform (&amp;ldquo;operating system&amp;rdquo;)&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Working Method&lt;/td&gt;&#xA;          &lt;td&gt;Human-machine collaboration, you supervise&lt;/td&gt;&#xA;          &lt;td&gt;Task outsourcing, wait asynchronously&lt;/td&gt;&#xA;          &lt;td&gt;Continuous autonomous operation, message-triggered&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Main Interface&lt;/td&gt;&#xA;          &lt;td&gt;Terminal/IDE/Web&lt;/td&gt;&#xA;          &lt;td&gt;Terminal/Desktop App/IDE&lt;/td&gt;&#xA;          &lt;td&gt;Messaging platforms (WhatsApp, Telegram, Slack, etc.)&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Codebase Understanding&lt;/td&gt;&#xA;          &lt;td&gt;Strong&lt;/td&gt;&#xA;          &lt;td&gt;Strong&lt;/td&gt;&#xA;          &lt;td&gt;Not its focus; delegates to connected models&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Parallel Capability&lt;/td&gt;&#xA;          &lt;td&gt;Subagents, up to 10&lt;/td&gt;&#xA;          &lt;td&gt;Multiple sandboxes in parallel, no hard limit&lt;/td&gt;&#xA;          &lt;td&gt;Schedules multiple tools and models&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Open Source&lt;/td&gt;&#xA;          &lt;td&gt;No&lt;/td&gt;&#xA;          &lt;td&gt;CLI partially open source&lt;/td&gt;&#xA;          &lt;td&gt;Yes (MIT)&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Security Maturity&lt;/td&gt;&#xA;          &lt;td&gt;High&lt;/td&gt;&#xA;          &lt;td&gt;High&lt;/td&gt;&#xA;          &lt;td&gt;Early; hardening required&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Learning Curve&lt;/td&gt;&#xA;          &lt;td&gt;Medium&lt;/td&gt;&#xA;          &lt;td&gt;Medium-high&lt;/td&gt;&#xA;          &lt;td&gt;High&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;  &lt;/tbody&gt;&#xA;&lt;/table&gt;&#xA;&lt;h3 id=&#34;which-one-should-you-use&#34;&gt;Which One Should You 
Use?&#xA;&lt;/h3&gt;&lt;p&gt;&lt;strong&gt;If you are a hands-on developer&lt;/strong&gt;, Claude Code is the top choice. It has the deepest understanding of codebases, is the easiest to get started with, and integrates best with daily development workflows.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;If you have a pile of well-defined development tasks&lt;/strong&gt;, such as writing a batch of tests or migrating old interfaces, and you don&amp;rsquo;t want to supervise the process, Codex is the better fit. Its core value is asynchronous, parallel execution that frees your hands.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;If you want AI to handle a wider range of tasks for you&lt;/strong&gt; (not just coding, but automated operations, scheduled tasks, and cross-system collaboration) and you are able to keep it secure, OpenClaw is the only option. It represents a different way of working with AI: not you using an AI, but AI continuously working for you.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;If you want to try an advanced combination&lt;/strong&gt;, there is a mature approach: use OpenClaw as the scheduling layer, have it trigger Claude Code or Codex for specific programming tasks, and then let OpenClaw summarize results and send notifications. This is a true AI Agent architecture, with each of the three layers playing its role.&lt;/p&gt;&#xA;&lt;h3 id=&#34;final-thoughts&#34;&gt;Final Thoughts&#xA;&lt;/h3&gt;&lt;p&gt;I have watched many developers stumble over these three tools, and the most common mistake is not choosing the wrong tool but &lt;strong&gt;choosing the wrong layer&lt;/strong&gt;.&lt;/p&gt;&#xA;&lt;p&gt;Using Claude Code to &amp;ldquo;automatically manage all work&amp;rdquo; misses the point: that is what OpenClaw is designed for; Claude Code is not intended for it. 
Using OpenClaw for a simple bug fix that Claude Code could handle in two minutes means the configuration overhead is not worth it.&lt;/p&gt;&#xA;&lt;p&gt;Tools are not better or worse; they fit different needs. Choosing the right layer for the right scenario is the real path to an efficiency gain.&lt;/p&gt;&#xA;&lt;p&gt;These three products represent not just three tools but three depths of AI involvement in development work: coding assistance, task hosting, and continuous autonomy. Where you currently stand depends on how much trust you are willing to place in AI and how much time and capability you have to manage it.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;The future is not you using an AI; it is you managing a group of AIs.&lt;/strong&gt; This shift is happening, and all three tools are early samples of it.&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>Understanding Artificial Intelligence: Core Capabilities and Applications</title>
            <link>https://lumigallerys.com/posts/note-13d2750860/</link>
            <pubDate>Sun, 29 Mar 2026 00:00:00 +0000</pubDate>
            <guid>https://lumigallerys.com/posts/note-13d2750860/</guid>
            <description>&lt;h2 id=&#34;what-is-artificial-intelligence&#34;&gt;What is Artificial Intelligence?&#xA;&lt;/h2&gt;&lt;p&gt;Artificial Intelligence (AI) is a core branch of computer science aimed at enabling machines to simulate, extend, or even surpass human intelligence. The goal is to allow machines to autonomously complete complex tasks that typically require human intelligence.&lt;/p&gt;&#xA;&lt;p&gt;AI is not a single technology but a system that integrates algorithms, data, and computing power. Its core lies in granting machines the abilities of learning, reasoning, perception, and decision-making, transforming them from mere tools executing commands to intelligent agents that can adapt to environments and solve problems.&lt;/p&gt;&#xA;&lt;h2 id=&#34;core-essence-of-ai-simulating-human-intelligence&#34;&gt;Core Essence of AI: Simulating Human Intelligence&#xA;&lt;/h2&gt;&lt;p&gt;The essence of AI is not about making machines look like humans but about endowing them with key characteristics of human intelligence, centered around four main capabilities:&lt;/p&gt;&#xA;&lt;h3 id=&#34;1-learning-ability-autonomous-pattern-recognition-from-data&#34;&gt;1. Learning Ability: Autonomous Pattern Recognition from Data&#xA;&lt;/h3&gt;&lt;p&gt;This is the most fundamental capability of AI, distinguishing it from traditional programs that execute fixed rules. AI can autonomously identify hidden patterns through extensive data training, rather than relying on pre-written instructions.&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Example&lt;/strong&gt;: Traditional programs require predefined characteristics to recognize a cat (e.g., pointed ears, whiskers, tail). 
In contrast, AI can learn to identify a cat by analyzing thousands of images without prior definitions.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Typical Applications&lt;/strong&gt;: Recommendation systems (e.g., Douyin, Taobao) and spam filtering.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;2-reasoning-and-decision-making-ability-solving-complex-problems-based-on-patterns&#34;&gt;2. Reasoning and Decision-Making Ability: Solving Complex Problems Based on Patterns&#xA;&lt;/h3&gt;&lt;p&gt;Once AI understands patterns, it can perform logical reasoning, analysis, and ultimately make decisions, rather than mechanically executing steps.&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Example&lt;/strong&gt;: Medical AI analyzes CT scans and lab reports, combining them with medical databases to infer possible conditions and provide diagnostic suggestions. Autonomous driving AI assesses road conditions (traffic lights, pedestrians, vehicles) to decide whether to accelerate, brake, or turn.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Core Logic&lt;/strong&gt;: Deriving unknown results from known data, simulating the human process of thinking and decision-making.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;3-perception-ability-equipping-machines-with-sensory-understanding&#34;&gt;3. 
Perception Ability: Equipping Machines with Sensory Understanding&#xA;&lt;/h3&gt;&lt;p&gt;AI utilizes sensors, cameras, and microphones to perceive the external world, translating physical signals into information that machines can understand.&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Examples&lt;/strong&gt;:&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Computer Vision&lt;/strong&gt;: Enables machines to interpret images and videos (e.g., facial recognition, security monitoring).&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Speech Recognition&lt;/strong&gt;: Allows machines to understand human speech (e.g., Siri, Xiaoyi).&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Sensor Perception&lt;/strong&gt;: Industrial robots use sensors to detect the position and temperature of objects, adjusting operational precision.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;4-adaptive-and-evolutionary-ability-dynamically-adjusting-behavior-based-on-environment&#34;&gt;4. Adaptive and Evolutionary Ability: Dynamically Adjusting Behavior Based on Environment&#xA;&lt;/h3&gt;&lt;p&gt;Advanced AI continuously optimizes itself based on new data and environments, rather than remaining static. For instance, navigation software adjusts routes in real-time to avoid traffic congestion, demonstrating adaptive capability.&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Example&lt;/strong&gt;: AlphaGo not only learns human chess strategies but also evolves through self-play, eventually defeating top human players. 
Recommendation systems adjust content based on new user preferences, becoming increasingly attuned to individual tastes.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h2 id=&#34;core-technologies-supporting-ai-the-three-pillars&#34;&gt;Core Technologies Supporting AI: The Three Pillars&#xA;&lt;/h2&gt;&lt;p&gt;The realization of the aforementioned capabilities relies on the synergistic functioning of three core technologies:&lt;/p&gt;&#xA;&lt;h3 id=&#34;1-algorithms-the-brain-of-ai&#34;&gt;1. Algorithms: The Brain of AI&#xA;&lt;/h3&gt;&lt;p&gt;Algorithms form the core logic of AI, akin to human thought processes, with different types addressing various problems:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Machine Learning&lt;/strong&gt;: A general method for enabling machines to learn from data, focusing on pattern recognition rather than hard-coded rules.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Deep Learning&lt;/strong&gt;: A subset of machine learning that simulates the neural network structure of the human brain, capable of processing complex data (e.g., images, videos, speech).&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Natural Language Processing&lt;/strong&gt;: Algorithms that enable machines to understand and generate human language, addressing human-computer communication.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Computer Vision&lt;/strong&gt;: Algorithms that allow machines to interpret images and videos, solving the problem of how machines perceive the world.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;2-data-the-fuel-of-ai&#34;&gt;2. Data: The Fuel of AI&#xA;&lt;/h3&gt;&lt;p&gt;AI learning depends on vast amounts of data; the more data available and the higher its quality, the more accurate the patterns AI can identify. 
Without data, even the most advanced algorithms are ineffective, similar to how humans require reading and practical experience to learn.&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Example&lt;/strong&gt;: Speech recognition AI needs to analyze hundreds of thousands of hours of human speech to accurately recognize various accents and speaking speeds. Autonomous driving AI requires billions of kilometers of road data to learn how to handle complex scenarios.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;3-computing-power-the-engine-of-ai&#34;&gt;3. Computing Power: The Engine of AI&#xA;&lt;/h3&gt;&lt;p&gt;AI training and reasoning require substantial computational power, especially deep learning algorithms, which involve massive matrix operations. Ordinary computers lack the necessary power, necessitating specialized hardware support, such as:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;GPU (Graphics Processing Unit)&lt;/strong&gt;: Originally used for gaming graphics, GPUs excel in parallel computing and have become essential for AI training.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;TPU (Tensor Processing Unit)&lt;/strong&gt;: A chip designed by Google specifically for deep learning, offering higher computational efficiency than GPUs.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Cloud Computing&lt;/strong&gt;: Businesses and individuals can leverage cloud resources for AI model training without needing to invest in expensive hardware.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h2 id=&#34;common-applications-of-ai-integrating-into-daily-life&#34;&gt;Common Applications of AI: Integrating into Daily Life&#xA;&lt;/h2&gt;&lt;p&gt;AI is no longer a concept confined to science fiction; it permeates various aspects of our daily lives and work. Here are some of the most common applications:&lt;/p&gt;&#xA;&lt;h3 id=&#34;1-consumer-applications-high-frequency-daily-interactions&#34;&gt;1. 
Consumer Applications: High-Frequency Daily Interactions&#xA;&lt;/h3&gt;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Smart Assistants&lt;/strong&gt;: Siri and Huawei&amp;rsquo;s Xiaoyi can understand voice commands to check the weather, set alarms, and send messages, fundamentally relying on speech recognition and natural language processing.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Content Recommendation&lt;/strong&gt;: Platforms like Douyin, Taobao, and Bilibili use AI algorithms to recommend content based on your browsing and liking history, powered by machine learning.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Image Processing&lt;/strong&gt;: Smartphones use AI for beautification, filters, and portrait modes, automatically recognizing faces and optimizing skin tones.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Smart Translation&lt;/strong&gt;: Services like Baidu Translate and DeepL can quickly translate dozens of languages, often retaining the tone of the original text, thanks to natural language processing.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;2-industry-applications-empowering-industrial-upgrades&#34;&gt;2. 
Industry Applications: Empowering Industrial Upgrades&#xA;&lt;/h3&gt;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Healthcare&lt;/strong&gt;: AI-assisted diagnostics can rapidly analyze CT scans and pathology reports, helping doctors detect early-stage cancers and pneumonia, improving diagnostic efficiency and accuracy.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Autonomous Driving&lt;/strong&gt;: Tesla, Xpeng, and Huawei&amp;rsquo;s autonomous driving systems use cameras and radar to perceive road conditions, making real-time decisions for tasks like following cars, changing lanes, and parking.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Industrial Production&lt;/strong&gt;: AI-enabled industrial robots can achieve precise sorting, welding, and quality inspection, even predicting equipment failures to enhance production efficiency.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Financial Services&lt;/strong&gt;: AI aids in risk control by analyzing consumer and credit data to assess loan risks and detect credit card fraud and financial scams.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Education&lt;/strong&gt;: AI-powered personalized tutoring can suggest tailored exercises and explanations based on students&amp;rsquo; learning progress, as seen in platforms like Yuanfudao and Zuoyebang.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;3-frontier-exploration-pushing-the-boundaries-of-human-capability&#34;&gt;3. 
Frontier Exploration: Pushing the Boundaries of Human Capability&#xA;&lt;/h3&gt;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;AI in Research&lt;/strong&gt;: AlphaFold solved the protein folding problem, aiding scientists in understanding disease mechanisms and developing new drugs.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;AI in Creation&lt;/strong&gt;: Tools like Midjourney and Stable Diffusion generate images from text, while iFlytek&amp;rsquo;s Spark can write articles, code, and poetry, facilitating AI-assisted creativity.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;AI in Exploration&lt;/strong&gt;: AI analyzes cosmic and oceanic data, helping humanity explore unknown territories, such as searching for extraterrestrial signals and monitoring deep-sea ecosystems.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h2 id=&#34;key-classifications-of-ai-development-path-from-weak-to-strong&#34;&gt;Key Classifications of AI: Development Path from Weak to Strong&#xA;&lt;/h2&gt;&lt;p&gt;AI development is typically divided into stages according to capability, from weak to strong. Currently, we are still in the weak AI phase:&lt;/p&gt;&#xA;&lt;h3 id=&#34;1-weak-ai&#34;&gt;1. Weak AI&#xA;&lt;/h3&gt;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Definition&lt;/strong&gt;: AI focused on specific tasks, lacking general cognitive abilities and self-awareness.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Characteristics&lt;/strong&gt;: Excels in a particular domain but cannot transfer knowledge across domains. For example, AlphaGo can play Go but cannot write articles; an image recognition AI cannot drive.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Current Status&lt;/strong&gt;: All existing AI applications fall under weak AI, including Siri, autonomous driving, and AI art generation.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;2-strong-ai&#34;&gt;2. 
Strong AI&#xA;&lt;/h3&gt;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Definition&lt;/strong&gt;: AI with general intelligence comparable to humans, capable of understanding and learning knowledge across various fields, thinking flexibly, and potentially possessing self-awareness and emotions.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Characteristics&lt;/strong&gt;: Can transfer knowledge across domains, such as coding, medical diagnosis, and music creation, akin to human intelligence.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Current Status&lt;/strong&gt;: Still in the theoretical exploration stage, not yet realized, and remains a long-term goal in AI research.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;3-superintelligent-ai&#34;&gt;3. Superintelligent AI&#xA;&lt;/h3&gt;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Definition&lt;/strong&gt;: AI that surpasses human capabilities in nearly all domains, including scientific innovation, social skills, and artistic creation, potentially reaching intelligence levels beyond human comprehension.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Characteristics&lt;/strong&gt;: Capable of solving complex issues like climate change and diseases, which humans struggle with, but may also pose potential risks.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Current Status&lt;/strong&gt;: A topic of science fiction and futurism, lacking a technological foundation and primarily a speculative concept for the future.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h2 id=&#34;core-boundaries-of-ai-limitations-and-misconceptions&#34;&gt;Core Boundaries of AI: Limitations and Misconceptions&#xA;&lt;/h2&gt;&lt;p&gt;Many misconceptions exist about AI, with some believing it can think and feel like humans or even replace them. In reality, AI has fundamental limitations:&lt;/p&gt;&#xA;&lt;h3 id=&#34;1-ai-lacks-self-awareness-and-emotions&#34;&gt;1. 
AI Lacks Self-Awareness and Emotions&#xA;&lt;/h3&gt;&lt;p&gt;All AI actions are based on algorithms and data; they do not possess self-awareness or emotional understanding. For instance, AI can generate sad text but does not experience sadness; it can recognize angry expressions but does not comprehend the meaning of anger.&lt;/p&gt;&#xA;&lt;h3 id=&#34;2-ai-relies-on-data-and-lacks-true-creativity&#34;&gt;2. AI Relies on Data and Lacks True Creativity&#xA;&lt;/h3&gt;&lt;p&gt;AI&amp;rsquo;s creativity is fundamentally a reorganization of existing data, not genuine originality. For example, AI-generated art is based on vast image datasets and cannot create entirely new artistic styles based on life experiences and emotions like human artists can. Similarly, AI-written articles are structured based on existing content and cannot produce genuinely profound original insights.&lt;/p&gt;&#xA;&lt;h3 id=&#34;3-ai-decisions-are-based-on-probability-not-understanding&#34;&gt;3. AI Decisions Are Based on Probability, Not Understanding&#xA;&lt;/h3&gt;&lt;p&gt;AI decisions rely on probability distributions from data rather than true comprehension. For instance, a medical AI diagnosing cancer does so by comparing a patient’s data to that of numerous cancer patients, identifying similar features, rather than understanding the underlying pathology as a doctor would.&lt;/p&gt;&#xA;&lt;h3 id=&#34;4-ai-capabilities-are-highly-contextual-and-data-dependent&#34;&gt;4. AI Capabilities Are Highly Contextual and Data-Dependent&#xA;&lt;/h3&gt;&lt;p&gt;AI can only perform effectively within trained scenarios; if a situation exceeds its training, it may fail. For example, an autonomous driving AI trained in clear weather may struggle in extreme weather conditions like heavy rain or snow. 
Similarly, a speech recognition AI may accurately understand standard Mandarin but struggle with dialects or heavy accents.&lt;/p&gt;&#xA;&lt;h2 id=&#34;conclusion-ai-as-a-tool-to-empower-humanity&#34;&gt;Conclusion: AI as a Tool to Empower Humanity&#xA;&lt;/h2&gt;&lt;p&gt;The essence of artificial intelligence is not to replace humans but to extend human capabilities, helping solve complex, repetitive, and high-risk problems, allowing humans to focus on innovation, emotions, and decision-making.&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;From a Technical Perspective&lt;/strong&gt;: AI combines algorithms, data, and computing power, primarily enabling machines to learn, reason, and perceive.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;From an Application Perspective&lt;/strong&gt;: AI serves as a tool to empower various industries, enhancing efficiency, reducing costs, and pushing the boundaries of human capabilities.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;From a Development Stage Perspective&lt;/strong&gt;: We are still in the weak AI phase, with strong and superintelligent AI as long-term goals, indicating a long journey ahead.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;In simple terms, artificial intelligence aims to equip machines with human-like intelligence to assist in tasks that typically require human thought and action, ultimately serving human life and societal development.&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>Cursor&#39;s $50 Billion Valuation: How Much Should Kimi Get?</title>
            <link>https://lumigallerys.com/posts/note-e4344e3d13/</link>
            <pubDate>Mon, 23 Mar 2026 00:00:00 +0000</pubDate>
            <guid>https://lumigallerys.com/posts/note-e4344e3d13/</guid>
            <description>&lt;h2 id=&#34;cursors-50-billion-valuation-how-much-should-kimi-get&#34;&gt;Cursor&amp;rsquo;s $50 Billion Valuation: How Much Should Kimi Get?&#xA;&lt;/h2&gt;&lt;p&gt;On March 19, Cursor released Composer 2, touted as its first self-developed model. It was created through continuous pre-training of a base model combined with reinforcement learning, though the identity of the base model was not disclosed. Developers quickly identified the model ID as &lt;code&gt;kimi-k2p5-rl-0317-s515-fast&lt;/code&gt;, leading to the conclusion that Composer 2 is essentially Kimi K2.5 with RL fine-tuning. This was confirmed by Du Yulun, the pre-training lead for Kimi, who stated that both tokenizers are identical. Elon Musk even chimed in on social media, affirming, &amp;ldquo;Yeah, it&amp;rsquo;s Kimi 2.5.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;This is the second instance of such a revelation. In November 2025, when Composer 1 was launched, the community discovered that its tokenizer matched that of DeepSeek, with occasional outputs in Chinese. Cursor did not respond at that time either.&lt;/p&gt;&#xA;&lt;p&gt;Both versions of &amp;ldquo;our model&amp;rdquo; utilized two Chinese open-source bases, and the company only acknowledges them when questioned. The developer community may not have the investor&amp;rsquo;s perspective, but it is crucial to note that this is happening as both companies are racing to secure new valuations.&lt;/p&gt;&#xA;&lt;p&gt;Cursor is reportedly raising funds at a valuation of around $50 billion, with its annual recurring revenue (ARR) doubling from $1 billion to $2 billion within 90 days. The launch of Composer 2 at this critical juncture is not merely a product release; it is part of the narrative surrounding its valuation.&lt;/p&gt;&#xA;&lt;p&gt;Investors need to believe that Cursor possesses technological depth beyond being a user-friendly IDE shell. 
The phrase &amp;ldquo;continuous pre-training of the base model combined with reinforcement learning&amp;rdquo; sounds overly promotional, suggesting original research while obscuring the fact that the base weights belong to Kimi.&lt;/p&gt;&#xA;&lt;p&gt;Meanwhile, Kimi is striving for a valuation of $18 billion, seeking up to $1 billion in new funding as reported by Bloomberg on March 15. For Kimi, Cursor&amp;rsquo;s selection of Kimi K2.5 as the &amp;ldquo;strongest&amp;rdquo; base model after evaluating multiple candidates (as stated by Cursor co-founder Aman Sanger) is a significant endorsement that should be prominently featured in its funding deck.&lt;/p&gt;&#xA;&lt;p&gt;Kimi&amp;rsquo;s response is not anger or accusation but a strategic play: Cursor has, in effect, handed it a powerful endorsement at a critical moment.&lt;/p&gt;&#xA;&lt;p&gt;As Cursor aims for a $50 billion valuation, it hides Kimi 2.5 behind Composer 2, branding it as &amp;ldquo;our model&amp;rdquo;; conversely, Kimi points to its role under Cursor&amp;rsquo;s hood, claiming, &amp;ldquo;this is our ecosystem penetration.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;Even the most optimistic investors must reconsider: the company training the base model is valued at $18 billion, while the one incorporating that model into a VS Code fork, adding RL fine-tuning and product refinement, is valued at $50 billion. The core engine is priced at roughly one-third of the shell.&lt;/p&gt;&#xA;&lt;p&gt;Before adopting a Chinese open-source base for its &amp;ldquo;self-developed model,&amp;rdquo; Cursor heavily relied on Claude&amp;rsquo;s capabilities until Anthropic released Claude Code. 
Currently, Claude Code&amp;rsquo;s run rate has reached $2.5 billion, with over 300,000 enterprise clients, growing faster than Cursor.&lt;/p&gt;&#xA;&lt;p&gt;Moreover, Anthropic has a structural advantage that Cursor can never match: as the base model provider, Claude Code can price its services at cost, while Cursor must pay retail prices for the same inference services. A Fortune report disclosed that when Cursor&amp;rsquo;s ARR was $500 million, it paid Anthropic approximately $650 million annually in inference fees, resulting in a negative gross margin where every heavy user deepens Cursor&amp;rsquo;s losses.&lt;/p&gt;&#xA;&lt;p&gt;This context is essential for understanding Composer&amp;rsquo;s &amp;ldquo;self-developed&amp;rdquo; model. Cursor had no choice but to take this route; otherwise, its entire profit structure would remain entirely dependent on Anthropic, which is also its largest supplier and most formidable competitor. Whether relying on Anthropic or switching to OpenAI&amp;rsquo;s Codex, the story remains the same.&lt;/p&gt;&#xA;&lt;p&gt;Launching its own model to reduce inference costs, decrease dependency, and regain some control is a survival strategy. Since U.S. closed-source models could not be counted on as a base, the only path to rapid results was to take a Chinese open-source base model, apply RL fine-tuning, and label it &amp;ldquo;our model.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;In attempting to break free from one dependency, Cursor has fallen into another trap. The market tends to assign higher valuations to U.S. companies that own base models, but pre-training a base model from scratch takes more time than a financing window affords.&lt;/p&gt;&#xA;&lt;p&gt;Cursor&amp;rsquo;s reluctance to acknowledge its use of a Chinese model is understandable, but the developers using Cursor are quite savvy; it is unrealistic to expect them not to notice or to remain silent about it. 
Furthermore, Cursor has switched base models twice—Composer 1 used DeepSeek, while Composer 2 uses Kimi. If the base model is interchangeable for Cursor, then the IDE is equally replaceable for users, making it just another VS Code fork.&lt;/p&gt;&#xA;&lt;p&gt;What valuation should we assign to a VS Code fork? $50 billion?&lt;/p&gt;&#xA;&lt;p&gt;This situation coincides perfectly with Kimi&amp;rsquo;s funding window, making the timing almost unreal.&lt;/p&gt;&#xA;&lt;p&gt;Aman Sanger, Cursor&amp;rsquo;s co-founder, stated that they evaluated multiple bases and found Kimi K2.5 to be the strongest—this is a publicly available, free technical judgment. Any benchmark ranking pales in comparison to this compelling evidence, as it reflects a genuine engineering choice made by a team with substantial commercial interests, which cannot be manipulated or gamed.&lt;/p&gt;&#xA;&lt;p&gt;It is important to note that Kimi is striving for an $18 billion valuation, which has not fully accounted for any recent positive developments. Now, the world knows that the currently strongest programming base is Kimi&amp;rsquo;s. Praise from peers is relatively cheap; converting that into pricing power is far more crucial.&lt;/p&gt;&#xA;&lt;p&gt;Therefore, I believe Cursor should allocate a portion of its valuation to Kimi. If your strongest model is someone else&amp;rsquo;s model, part of your value is created by them, and if the market has not priced that in, then the valuation growth behind Kimi should come from your depreciation.&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>Anthropic Surges to 70% Market Share, Leaving OpenAI Behind</title>
            <link>https://lumigallerys.com/posts/note-0e4571883a/</link>
            <pubDate>Thu, 05 Mar 2026 00:00:00 +0000</pubDate>
            <guid>https://lumigallerys.com/posts/note-0e4571883a/</guid>
            <description>&lt;h2 id=&#34;anthropics-dominance&#34;&gt;Anthropic&amp;rsquo;s Dominance&#xA;&lt;/h2&gt;&lt;p&gt;Anthropic has not only won but has done so decisively!&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 1&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;300px&#34; data-flex-grow=&#34;125&#34; height=&#34;864&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-dedf2740a4.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-dedf2740a4_hu_5e4314da48c3554b.jpeg 800w, https://lumigallerys.com/posts/note-0e4571883a/img-dedf2740a4.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;In February 2026, &lt;strong&gt;Anthropic&amp;rsquo;s market share in the U.S. surged to nearly 70%&lt;/strong&gt;, quickly surpassing OpenAI.&lt;/p&gt;&#xA;&lt;p&gt;In just one year, &lt;strong&gt;ChatGPT&amp;rsquo;s original 90% share has been largely consumed by Claude&lt;/strong&gt;.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 2&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;441px&#34; data-flex-grow=&#34;183&#34; height=&#34;587&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-d2dc2f741d.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-d2dc2f741d_hu_c7eb3dbfc30788a5.jpeg 800w, https://lumigallerys.com/posts/note-0e4571883a/img-d2dc2f741d.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Even more astonishing, &lt;strong&gt;Anthropic&amp;rsquo;s annual revenue (ARR) has set records, nearing $20 billion&lt;/strong&gt;.&lt;/p&gt;&#xA;&lt;p&gt;In just two weeks, it skyrocketed by $5 billion!&lt;/p&gt;&#xA;&lt;p&gt;Using the median revenue of $16.64 billion from the Fortune 500 as a 
reference, one must marvel at Anthropic&amp;rsquo;s immense scale.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 4&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;267px&#34; data-flex-grow=&#34;111&#34; height=&#34;969&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-26a5c38b21.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-26a5c38b21_hu_ffecd616f84aefa5.jpeg 800w, https://lumigallerys.com/posts/note-0e4571883a/img-26a5c38b21.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;annual-revenue-doubles-to-20-billion&#34;&gt;Annual Revenue Doubles to $20 Billion&#xA;&lt;/h2&gt;&lt;p&gt;Anthropic&amp;rsquo;s revenue performance is truly surprising, especially as it has become the leading AI company to be blacklisted by U.S. 
government agencies.&lt;/p&gt;&#xA;&lt;p&gt;It is in a situation that is &amp;ldquo;half sea water, half fire.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 5&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;981px&#34; data-flex-grow=&#34;409&#34; height=&#34;264&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-d8cda147d1.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-d8cda147d1_hu_75a5871e2f639bdc.jpeg 800w, https://lumigallerys.com/posts/note-0e4571883a/img-d8cda147d1.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;On one hand, Anthropic&amp;rsquo;s ability to generate revenue is astonishing.&lt;/p&gt;&#xA;&lt;p&gt;Dario Amodei revealed in a meeting that the company&amp;rsquo;s run-rate revenue has surged to nearly $20 billion.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;This is more than double the $9 billion at the end of last year&lt;/strong&gt;. 
Looking at a larger time span, the speed of revenue doubling is enough to leave one speechless.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 6&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;1171px&#34; data-flex-grow=&#34;488&#34; height=&#34;150&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-f583ca8180.jpeg&#34; width=&#34;732&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;This insane growth is primarily attributed to Claude&amp;rsquo;s powerful performance.&lt;/p&gt;&#xA;&lt;p&gt;Especially since the beginning of 2026, Claude Code unexpectedly became a hit, and flagship Opus 4.6 transformed Claude completely.&lt;/p&gt;&#xA;&lt;p&gt;In February, Claude Cowork was released, along with updates to a dozen plugins, causing a massive crash in global software stocks, with nearly $1 trillion in market value evaporating.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 7&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;734px&#34; data-flex-grow=&#34;305&#34; height=&#34;353&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-880ad5ce39.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-880ad5ce39_hu_b240a1cff7c2a4be.jpeg 800w, https://lumigallerys.com/posts/note-0e4571883a/img-880ad5ce39.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;img alt=&#34;Image 8&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;483px&#34; data-flex-grow=&#34;201&#34; height=&#34;536&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-d3325e50dc.jpeg&#34; 
srcset=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-d3325e50dc_hu_2b281340e6dce0b3.jpeg 800w, https://lumigallerys.com/posts/note-0e4571883a/img-d3325e50dc.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Last month, Anthropic also announced a new round of $30 billion in financing, pushing its valuation to a peak of $380 billion.&lt;/p&gt;&#xA;&lt;p&gt;At the same time, an impressive report was released, sending shockwaves through Silicon Valley—&lt;/p&gt;&#xA;&#xA;    &lt;blockquote&gt;&#xA;        &lt;p&gt;Its run-rate revenue has surged to $14 billion, achieving over tenfold explosive growth for three consecutive years.&lt;/p&gt;&#xA;&#xA;    &lt;/blockquote&gt;&#xA;&lt;p&gt;More dominating data lies in enterprise penetration rates.&lt;/p&gt;&#xA;&lt;p&gt;In the past year, the number of clients spending over $100,000 annually on Claude has increased sevenfold; clients spending over $1 million annually have surged from a few dozen two years ago to over 500 now.&lt;/p&gt;&#xA;&lt;p&gt;Among the top 10 Fortune 500 companies, 8 have become loyal users of Claude.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 9&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;333px&#34; data-flex-grow=&#34;138&#34; height=&#34;777&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-3bff6ca4d5.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-3bff6ca4d5_hu_4745c4809aa1b419.jpeg 800w, https://lumigallerys.com/posts/note-0e4571883a/img-3bff6ca4d5.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Just Claude Code, which launched its Agent programming tool last May, has already surpassed $2.5 billion in run-rate revenue.&lt;/p&gt;&#xA;&lt;p&gt;With its smooth programming experience, Claude Code has garnered rave reviews within the developer 
community.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 10&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;280px&#34; data-flex-grow=&#34;116&#34; height=&#34;925&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-86329cf2a2.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-86329cf2a2_hu_5526ffed5063abc4.jpeg 800w, https://lumigallerys.com/posts/note-0e4571883a/img-86329cf2a2.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;A stunning analysis shows that 4% of public code submissions on GitHub are automatically generated by Claude Code, and this proportion doubled within a month.&lt;/p&gt;&#xA;&lt;p&gt;Now, Claude Code has become an essential AI tool for many companies, deeply integrated into workflows.&lt;/p&gt;&#xA;&lt;p&gt;Major players like Shopify, NASA, Figma, and Stripe are all lining up to pay Anthropic.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 11&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;360px&#34; data-flex-grow=&#34;150&#34; height=&#34;720&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-d32370113d.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-d32370113d_hu_9c4ca31689fc89e0.jpeg 800w, https://lumigallerys.com/posts/note-0e4571883a/img-d32370113d.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;h3 id=&#34;comprehensive-victory&#34;&gt;Comprehensive Victory&#xA;&lt;/h3&gt;&lt;p&gt;As netizen Deedy stated, Anthropic is achieving a &amp;ldquo;comprehensive victory.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;Similarweb&amp;rsquo;s latest statistics show that Claude.ai is accelerating rapidly in the second half of February, far surpassing Grok and DeepSeek in 
traffic.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 12&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;275px&#34; data-flex-grow=&#34;114&#34; height=&#34;610&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-b6846e84db.jpeg&#34; width=&#34;700&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;The latest Ramp AI index indicates that Anthropic&amp;rsquo;s enterprise coverage has surged from 16.7% to 19.5%, while OpenAI has dropped to 35.9%.&lt;/p&gt;&#xA;&lt;p&gt;Among every five enterprises, one is paying for Anthropic. A year ago, this ratio was 25:1.&lt;/p&gt;&#xA;&lt;p&gt;In terms of API spending, Anthropic has captured 90% of the market share, sweeping all before it.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 13&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;262px&#34; data-flex-grow=&#34;109&#34; height=&#34;988&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-51c1f4696c.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-51c1f4696c_hu_5b2ecf84189c27f5.jpeg 800w, https://lumigallerys.com/posts/note-0e4571883a/img-51c1f4696c.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;img alt=&#34;Image 14&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;262px&#34; data-flex-grow=&#34;109&#34; height=&#34;987&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-dc462d3219.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-dc462d3219_hu_44c4971e57eec868.jpeg 800w, https://lumigallerys.com/posts/note-0e4571883a/img-dc462d3219.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;img 
alt=&#34;Image 15&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;222px&#34; data-flex-grow=&#34;92&#34; height=&#34;1022&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-b0cf4d748d.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-b0cf4d748d_hu_e51b8054fc23d065.jpeg 800w, https://lumigallerys.com/posts/note-0e4571883a/img-b0cf4d748d.jpeg 948w&#34; width=&#34;948&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Moreover, while confronting the Pentagon, Anthropic has also garnered a surge in traffic.&lt;/p&gt;&#xA;&lt;p&gt;On the download charts of Google Play and the App Store in the U.S., Canada, and France, the Claude app is far ahead.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 17&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;318px&#34; data-flex-grow=&#34;132&#34; height=&#34;815&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-7af74bbb77.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-7af74bbb77_hu_cc8f6c3f52a8affc.jpeg 800w, https://lumigallerys.com/posts/note-0e4571883a/img-7af74bbb77.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;However, amidst this capital frenzy, &lt;strong&gt;Anthropic has run into a wall.&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;By insisting on &amp;ldquo;never compromising&amp;rdquo; on AI safety standards, it has sparked intense conflict with the Pentagon, ultimately leading to a ban from the Trump administration.&lt;/p&gt;&#xA;&lt;p&gt;U.S. 
Secretary of Defense Pete Hegseth has classified it as a &amp;ldquo;supply chain risk,&amp;rdquo; severing government business ties with it—&lt;/p&gt;&#xA;&lt;p&gt;While Anthropic&amp;rsquo;s commercial prospects are overshadowed, many other related companies are also affected.&lt;/p&gt;&#xA;&lt;h2 id=&#34;break-with-the-pentagon-the-first-sacrifice&#34;&gt;Break with the Pentagon: The First &amp;ldquo;Sacrifice&amp;rdquo;&#xA;&lt;/h2&gt;&lt;p&gt;The Information has exclusively reported that &lt;strong&gt;Anthropic&amp;rsquo;s collaboration with Palantir may become the &amp;ldquo;first casualty&amp;rdquo; of this dispute.&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 18&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;812px&#34; data-flex-grow=&#34;338&#34; height=&#34;319&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-af54135e2d.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-af54135e2d_hu_9e8324efe22cd77a.jpeg 800w, https://lumigallerys.com/posts/note-0e4571883a/img-af54135e2d.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;For over a year, Anthropic has been providing services to the U.S. government through the &amp;ldquo;software giant&amp;rdquo; Palantir.&lt;/p&gt;&#xA;&lt;p&gt;Claude models are deeply embedded in Palantir&amp;rsquo;s system.&lt;/p&gt;&#xA;&lt;p&gt;The Pentagon and various federal agencies utilize Claude to find patterns in vast amounts of confidential data, assisting in key decisions.&lt;/p&gt;&#xA;&lt;p&gt;However, this significant decision by the Department of Defense &lt;strong&gt;has directly led to the end of this &amp;ldquo;union&amp;rdquo;&lt;/strong&gt;—&lt;/p&gt;&#xA;&#xA;    &lt;blockquote&gt;&#xA;        &lt;p&gt;The U.S. 
government has explicitly stated: contractors are restricted from using any technology from Anthropic.&lt;/p&gt;&#xA;&#xA;    &lt;/blockquote&gt;&#xA;&lt;p&gt;For Palantir, which derived 42% of its revenue from government contracts last year, this is a question it cannot avoid answering.&lt;/p&gt;&#xA;&lt;p&gt;While Anthropic&amp;rsquo;s revenue is impressive, losing the endorsement of a top government contractor like Palantir is a heavy blow to its market position.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 19&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;427px&#34; data-flex-grow=&#34;177&#34; height=&#34;607&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-652c70a328.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-652c70a328_hu_2c61d280749e16c9.jpeg 800w, https://lumigallerys.com/posts/note-0e4571883a/img-652c70a328.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;The Pentagon has been using Claude in conjunction with Palantir software hosted on AWS.&lt;/p&gt;&#xA;&lt;p&gt;Insiders reveal that Palantir is already preparing a &amp;ldquo;backup&amp;rdquo; plan.&lt;/p&gt;&#xA;&lt;p&gt;With just a few weeks of adjustments, it can swap the integrated Claude for models from OpenAI or Google without affecting contract revenue.&lt;/p&gt;&#xA;&lt;p&gt;Even so, fully replacing Claude would still mean a six-month period of &amp;ldquo;transition pain.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;At the recent Defense Technology Summit, Palantir CEO Alex Karp sharply criticized Silicon Valley for standing against the U.S. 
military.&lt;/p&gt;&#xA;&lt;p&gt;He warned that if Silicon Valley continues to grab white-collar jobs while stabbing the military in the back, it will ultimately lead to technology being nationalized.&lt;/p&gt;&#xA;&lt;p&gt;Karp emphasized, &amp;ldquo;This is the final outcome of this path!&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 20&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;426px&#34; data-flex-grow=&#34;177&#34; height=&#34;608&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-68eea65504.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-68eea65504_hu_138da816ca35e10d.jpeg 800w, https://lumigallerys.com/posts/note-0e4571883a/img-68eea65504.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Palantir currently allows clients to choose AI models from different providers, including Anthropic, OpenAI, and Google, when analyzing platform data.&lt;/p&gt;&#xA;&lt;p&gt;Previously, an Axios report indicated that U.S. 
government restrictions could put Anthropic at risk of being &amp;ldquo;cut off&amp;rdquo; by Nvidia.&lt;/p&gt;&#xA;&lt;p&gt;In theory, it could also sever its ties with cloud service providers like Amazon and Google, which also hold defense contracts.&lt;/p&gt;&#xA;&lt;p&gt;In this game, OpenAI has quickly stepped in as the substitute, with Altman eagerly offering up GPT, to the disappointment of much of Silicon Valley.&lt;/p&gt;&#xA;&lt;h2 id=&#34;openais-urgent-amendments&#34;&gt;OpenAI&amp;rsquo;s Urgent Amendments&#xA;&lt;/h2&gt;&lt;h2 id=&#34;altmans-candid-admission-a-poor-appearance&#34;&gt;Altman&amp;rsquo;s Candid Admission: A Poor Appearance&#xA;&lt;/h2&gt;&lt;p&gt;Today, Altman posted a long internal message revealing some details of the collaboration with the Department of Defense.&lt;/p&gt;&#xA;&lt;p&gt;Its first key point is that OpenAI has urgently patched the agreement, explicitly adding a series of legal constraints.&lt;/p&gt;&#xA;&lt;p&gt;He particularly emphasized that this ban covers all personal privacy data obtained through commercial means, preventing AI from becoming a tool for monitoring the public.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 21&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;123px&#34; data-flex-grow=&#34;51&#34; height=&#34;2097&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-8cbab34b4d.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-8cbab34b4d_hu_4a3a1cdcdf093eb5.jpeg 800w, https://lumigallerys.com/posts/note-0e4571883a/img-8cbab34b4d.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;img alt=&#34;Image 22&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;190px&#34; data-flex-grow=&#34;79&#34; height=&#34;1358&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 
1232px&#34; src=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-177e1c0f0b.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-177e1c0f0b_hu_afed66663cbfcefe.jpeg 800w, https://lumigallerys.com/posts/note-0e4571883a/img-177e1c0f0b.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Interestingly, Altman clearly delineated the boundaries: U.S. intelligence agencies (NSA) are currently prohibited from using OpenAI&amp;rsquo;s services.&lt;/p&gt;&#xA;&lt;p&gt;If these agencies want to access GPT, they must go through a cumbersome contract modification process again.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;He also admitted that the hasty announcement of the collaboration last Friday was a misstep.&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;For a decision this high-risk, such eagerness to &amp;ldquo;rise to the occasion&amp;rdquo; looked bad, and it stunned the internet.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 23&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;426px&#34; data-flex-grow=&#34;177&#34; height=&#34;394&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-78024a6ced.jpeg&#34; width=&#34;700&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;In fact, a debate pitting soul against contract has erupted within OpenAI.&lt;/p&gt;&#xA;&lt;p&gt;On Tuesday, at the company&amp;rsquo;s all-hands meeting, Altman responded to employees&amp;rsquo; concerns about Pentagon orders in an unprecedented manner.&lt;/p&gt;&#xA;&lt;p&gt;He candidly stated that OpenAI cannot decide how the Department of Defense specifically uses its technology—&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;The final action button is in the hands of Secretary of Defense Pete Hegseth.&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 24&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;974px&#34; 
data-flex-grow=&#34;406&#34; height=&#34;266&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-cc08db618c.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-cc08db618c_hu_311ed3e73fe63db7.jpeg 800w, https://lumigallerys.com/posts/note-0e4571883a/img-cc08db618c.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;img alt=&#34;Image 25&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;528px&#34; data-flex-grow=&#34;220&#34; height=&#34;490&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-b17d32cdac.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-b17d32cdac_hu_ec223001b60efb0e.jpeg 800w, https://lumigallerys.com/posts/note-0e4571883a/img-b17d32cdac.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;This implies that OpenAI&amp;rsquo;s commitment to &amp;ldquo;principles&amp;rdquo; is merely a facade for its employees and the public.&lt;/p&gt;&#xA;&lt;h3 id=&#34;collapse-25-million-unsubscribe-from-chatgpt&#34;&gt;Collapse! 
2.5 Million &amp;ldquo;Unsubscribe&amp;rdquo; from ChatGPT&#xA;&lt;/h3&gt;&lt;p&gt;Altman is trying to turn the tide, but he cannot stop the wave of ChatGPT cancellations.&lt;/p&gt;&#xA;&lt;p&gt;Now, the explosive &amp;ldquo;QuitGPT&amp;rdquo; movement has seen &lt;strong&gt;2.5 million people join globally.&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 26&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;345px&#34; data-flex-grow=&#34;143&#34; height=&#34;751&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-df5557381f.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-0e4571883a/img-df5557381f_hu_53a5e2f2ad644a45.jpeg 800w, https://lumigallerys.com/posts/note-0e4571883a/img-df5557381f.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;On Reddit, unsubscribing from ChatGPT has become the politically correct choice.&lt;/p&gt;&#xA;&lt;p&gt;Recently, OpenAI announced a new round of $110 billion financing, claiming 900 million weekly active users by the end of 2025, but the momentum of QuitGPT is not to be underestimated.&lt;/p&gt;&#xA;&lt;p&gt;On Instagram, related posts have surpassed 36 million views, with over 17,000 people signing the boycott pledge on the official website.&lt;/p&gt;&#xA;&lt;p&gt;New York University marketing professor Scott Galloway stated that only through &amp;ldquo;wallet voting&amp;rdquo; can these tech giants deeply tied to political power feel pain.&lt;/p&gt;&#xA;&lt;p&gt;The $19 billion surge and 2.5 million &amp;ldquo;unsubscribes&amp;rdquo; show that the AI industry has never been so divided between ice and 
fire.&lt;/p&gt;&#xA;&lt;p&gt;Anthropic&amp;rsquo;s &amp;ldquo;comprehensive victory&amp;rdquo; is a break with the old order, while OpenAI&amp;rsquo;s &amp;ldquo;urgent patch&amp;rdquo; is a helpless response under the rules of survival.&lt;/p&gt;&#xA;&lt;p&gt;This epic showdown over power, money, and principles is only now building toward its climax.&lt;/p&gt;&#xA;&lt;p&gt;When the principles of technology collide with the iron plate of power, who will have the last laugh?&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>Can Cursor Be Used in China? Registration and Usage Guide</title>
            <link>https://lumigallerys.com/posts/note-fab1a21496/</link>
            <pubDate>Wed, 04 Mar 2026 00:00:00 +0000</pubDate>
            <guid>https://lumigallerys.com/posts/note-fab1a21496/</guid>
<description>&lt;h2 id=&#34;can-cursor-be-used-in-china&#34;&gt;Can Cursor Be Used in China?&#xA;&lt;/h2&gt;&lt;p&gt;The short answer is: Yes, but it can be unstable depending on the network environment.&lt;/p&gt;&#xA;&lt;p&gt;Cursor is not blocked outright in China; the main issue lies in its service architecture.&lt;/p&gt;&#xA;&lt;h3 id=&#34;why-is-cursor-unstable-in-china&#34;&gt;Why is Cursor Unstable in China?&#xA;&lt;/h3&gt;&lt;p&gt;Cursor&amp;rsquo;s core capabilities rely on overseas services, including:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;Official website and account system&lt;/li&gt;&#xA;&lt;li&gt;AI model interfaces (like Claude, GPT series)&lt;/li&gt;&#xA;&lt;li&gt;Real-time code completion and dialogue requests&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;These services are deployed overseas and require high network quality. Common issues when connecting directly from China include:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;The website loads, but login fails&lt;/li&gt;&#xA;&lt;li&gt;Able to log in, but AI does not respond&lt;/li&gt;&#xA;&lt;li&gt;Code completion is laggy and delayed&lt;/li&gt;&#xA;&lt;li&gt;Sudden disconnections&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;Thus, many users experience varying levels of performance, primarily due to inconsistent network stability.&lt;/p&gt;&#xA;&lt;h3 id=&#34;what-does-it-mean-to-open-vs-use&#34;&gt;What Does It Mean to &amp;ldquo;Open&amp;rdquo; vs. 
&amp;ldquo;Use&amp;rdquo;?&#xA;&lt;/h3&gt;&lt;p&gt;Cursor is a tool that requires frequent, real-time requests:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;Each line of code you type triggers a request&lt;/li&gt;&#xA;&lt;li&gt;AI completion and context understanding involve continuous calls&lt;/li&gt;&#xA;&lt;li&gt;It is very sensitive to latency and packet loss&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;This explains why some say &amp;ldquo;the website is accessible, but Cursor is not usable.&amp;rdquo;&lt;/p&gt;&#xA;&lt;h2 id=&#34;how-to-register-for-cursor&#34;&gt;How to Register for Cursor?&#xA;&lt;/h2&gt;&lt;p&gt;The registration process for Cursor is not complicated, but a stable network environment is crucial.&lt;/p&gt;&#xA;&lt;h3 id=&#34;what-do-you-need-before-registering&#34;&gt;What Do You Need Before Registering?&#xA;&lt;/h3&gt;&lt;ul&gt;&#xA;&lt;li&gt;A Windows or macOS computer&lt;/li&gt;&#xA;&lt;li&gt;A stable international network access environment, such as OSDWAN, which ensures reliable usage&lt;/li&gt;&#xA;&lt;li&gt;A commonly used email address (Gmail, Outlook, etc.)&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;If the network is unstable, you may encounter issues like:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;Verification code loading failures&lt;/li&gt;&#xA;&lt;li&gt;Blank login pages&lt;/li&gt;&#xA;&lt;li&gt;Email verification not being received&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;steps-to-register-for-cursor&#34;&gt;Steps to Register for Cursor&#xA;&lt;/h3&gt;&lt;ol&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;Log in to OSDWAN and visit the Cursor official website at &lt;a class=&#34;link&#34; href=&#34;http://cursor.com/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;&#xA;    &gt;http://cursor.com/&lt;/a&gt;&#xA;&lt;img alt=&#34;Image 5&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;370px&#34; data-flex-grow=&#34;154&#34; height=&#34;548&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, 
(max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-fab1a21496/img-aa3d7c1693.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-fab1a21496/img-aa3d7c1693_hu_d2149b15925c355c.jpeg 800w, https://lumigallerys.com/posts/note-fab1a21496/img-aa3d7c1693.jpeg 845w&#34; width=&#34;845&#34;&gt;&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;Click on the login button.&#xA;&lt;img alt=&#34;Image 6&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;920px&#34; data-flex-grow=&#34;383&#34; height=&#34;267&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-fab1a21496/img-2bda239c8c.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-fab1a21496/img-2bda239c8c_hu_ad945eefa6af9cf7.jpeg 800w, https://lumigallerys.com/posts/note-fab1a21496/img-2bda239c8c.jpeg 1024w&#34; width=&#34;1024&#34;&gt;&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;Register using your email or log in with a third-party account. 
If you don’t have an account, you can click to register a new one.&#xA;&lt;img alt=&#34;Image 7&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;349px&#34; data-flex-grow=&#34;145&#34; height=&#34;635&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-fab1a21496/img-205d6e708f.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-fab1a21496/img-205d6e708f_hu_bdb0abe0e8a1b2a2.jpeg 800w, https://lumigallerys.com/posts/note-fab1a21496/img-205d6e708f.jpeg 924w&#34; width=&#34;924&#34;&gt;&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;Fill in the required information and click continue.&#xA;&lt;img alt=&#34;Image 8&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;312px&#34; data-flex-grow=&#34;130&#34; height=&#34;725&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-fab1a21496/img-65e3ebf779.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-fab1a21496/img-65e3ebf779_hu_8111e4ba459bf866.jpeg 800w, https://lumigallerys.com/posts/note-fab1a21496/img-65e3ebf779.jpeg 944w&#34; width=&#34;944&#34;&gt;&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;p&gt;Typically, this process can be completed in a few minutes, especially if you have a Google email.&lt;/p&gt;&#xA;&lt;h2 id=&#34;how-to-use-cursor-who-is-it-suitable-for&#34;&gt;How to Use Cursor? 
Who Is It Suitable For?&#xA;&lt;/h2&gt;&lt;h3 id=&#34;basic-usage&#34;&gt;Basic Usage&#xA;&lt;/h3&gt;&lt;p&gt;The usage logic of Cursor is quite similar to that of VS Code, making it easy to get started:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;Open a local project&lt;/li&gt;&#xA;&lt;li&gt;AI automatically understands the project structure&lt;/li&gt;&#xA;&lt;li&gt;You can directly ask the AI to modify or generate code&lt;/li&gt;&#xA;&lt;li&gt;Supports context analysis across the entire project, not just single files&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;If you have used Copilot, you will adapt quickly.&lt;/p&gt;&#xA;&lt;h3 id=&#34;suitable-developers&#34;&gt;Suitable Developers&#xA;&lt;/h3&gt;&lt;p&gt;Based on practical experience, Cursor is more suitable for:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;Developers who frequently write business code&lt;/li&gt;&#xA;&lt;li&gt;Users of mainstream tech stacks like React, Vue, Python, and Java&lt;/li&gt;&#xA;&lt;li&gt;Those looking to improve development efficiency rather than just &amp;ldquo;play with AI&amp;rdquo;&lt;/li&gt;&#xA;&lt;li&gt;Individuals who care about code quality and maintainability&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;It is less suitable for those who only occasionally write scripts and have low dependency on AI.&lt;/p&gt;&#xA;&lt;h2 id=&#34;recommendations-for-stable-use-of-cursor-in-china&#34;&gt;Recommendations for Stable Use of Cursor in China&#xA;&lt;/h2&gt;&lt;p&gt;If you are only using it occasionally, you might tolerate sporadic issues; however, if you want to use Cursor as a productivity tool long-term, consider these suggestions:&lt;/p&gt;&#xA;&lt;h3 id=&#34;1-ensure-long-term-network-stability&#34;&gt;1. Ensure Long-Term Network Stability&#xA;&lt;/h3&gt;&lt;p&gt;80% of the Cursor experience depends on network quality. An unstable network may lead you to mistakenly think that &amp;ldquo;Cursor is not usable.&amp;rdquo;&lt;/p&gt;&#xA;&lt;h3 id=&#34;2-avoid-frequent-environment-changes&#34;&gt;2. 
Avoid Frequent Environment Changes&#xA;&lt;/h3&gt;&lt;p&gt;Frequent switching of IPs and network environments can trigger server-side anomaly detection, making it even less stable.&lt;/p&gt;&#xA;&lt;h3 id=&#34;3-keep-your-development-environment-fixed&#34;&gt;3. Keep Your Development Environment Fixed&#xA;&lt;/h3&gt;&lt;p&gt;Using a fixed device and network environment will significantly improve your experience.&lt;/p&gt;&#xA;&lt;p&gt;It is recommended to use OSDWAN&amp;rsquo;s cross-border network dedicated line, which provides stable network access and residential IPs, starting at 690 yuan/year, and supports various connection methods including mobile, computer, and router, with deployment completed on the same day.&lt;/p&gt;&#xA;&lt;h2 id=&#34;frequently-asked-questions&#34;&gt;Frequently Asked Questions&#xA;&lt;/h2&gt;&lt;p&gt;&lt;strong&gt;Q1: Can the free version of Cursor be used in China?&lt;/strong&gt;&lt;br&gt;&#xA;It mainly depends on network stability, and whether it is paid does not matter much.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Q2: What is the difference between Cursor and VS Code + Copilot?&lt;/strong&gt;&lt;br&gt;&#xA;Cursor emphasizes understanding the entire project rather than just code completion.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Q3: Will Cursor replace VS Code?&lt;/strong&gt;&lt;br&gt;&#xA;Not in the short term, but it is more like the next-generation editor for AI programming scenarios.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Q4: Is Cursor suitable for beginners?&lt;/strong&gt;&lt;br&gt;&#xA;Yes, but it is recommended to have some foundational knowledge before using AI assistance to avoid confusion with your own code.&lt;/p&gt;&#xA;&lt;h2 id=&#34;conclusion&#34;&gt;Conclusion&#xA;&lt;/h2&gt;&lt;p&gt;Cursor can be used in China but may be prone to instability. The registration and usage process is straightforward, with the key factor being the network environment. 
To effectively use Cursor as a primary development tool, you need more stable conditions. For developers who frequently write code, Cursor can significantly enhance efficiency.&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>Vibe Coding: The Threat to Open Source</title>
            <link>https://lumigallerys.com/posts/note-dedc3992ff/</link>
            <pubDate>Wed, 04 Feb 2026 00:00:00 +0000</pubDate>
            <guid>https://lumigallerys.com/posts/note-dedc3992ff/</guid>
            <description>&lt;h2 id=&#34;vibe-coding-the-threat-to-open-source&#34;&gt;Vibe Coding: The Threat to Open Source&#xA;&lt;/h2&gt;&lt;p&gt;Vibe Coding is creating a frenzy of efficiency that is draining the lifeblood of the open-source ecosystem. Recent research reveals that as AI becomes a &amp;ldquo;super intermediary&amp;rdquo; in programming, the attention and feedback that open-source maintainers rely on are being severed. This predatory growth could lead to the depletion of high-quality open-source projects, causing an unprecedented &amp;ldquo;tragedy of the commons&amp;rdquo; in the software world.&lt;/p&gt;&#xA;&lt;p&gt;Andrej Karpathy introduced the concept of &amp;ldquo;Vibe Coding&amp;rdquo; a year ago—suggesting that understanding code is no longer necessary; managing the feelings that code evokes is enough. This marks what many consider the golden age of software development.&lt;/p&gt;&#xA;&lt;p&gt;Google claims that over a quarter of its new code is generated by AI, while Anthropic&amp;rsquo;s CEO Dario Amodei stated that Claude writes 70% to 90% of their code.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 12&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;476px&#34; data-flex-grow=&#34;198&#34; height=&#34;544&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-dedc3992ff/img-40de6f83e2.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-dedc3992ff/img-40de6f83e2_hu_18f5cb2e411aa140.jpeg 800w, https://lumigallerys.com/posts/note-dedc3992ff/img-40de6f83e2.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Everything seems to be moving at an unbelievable pace. 
However, beneath this frenzy, the foundation of the digital world—the open-source community—is cracking.&lt;/p&gt;&#xA;&lt;p&gt;Recently, a group of economists published a troubling paper titled &amp;ldquo;Vibe Coding Kills Open Source.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 13&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;773px&#34; data-flex-grow=&#34;322&#34; height=&#34;335&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-dedc3992ff/img-8c9bfc5fa8.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-dedc3992ff/img-8c9bfc5fa8_hu_3d15418cf4e82aa5.jpeg 800w, https://lumigallerys.com/posts/note-dedc3992ff/img-8c9bfc5fa8.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;They used sober data models to point out that the very open-source ecosystem that empowered AI is being buried by AI itself.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 14&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;455px&#34; data-flex-grow=&#34;189&#34; height=&#34;59&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-dedc3992ff/img-eab7384cc9.jpeg&#34; width=&#34;112&#34;&gt;&lt;/p&gt;&#xA;&lt;h3 id=&#34;severed-connections&#34;&gt;Severed Connections&#xA;&lt;/h3&gt;&lt;p&gt;Open-source software is the air of the digital age. You may not feel its presence, but you cannot live without it. 
From the underlying kernel of Android phones to the databases used for bank transfers and the decoders used while watching videos, all rely on open-source code.&lt;/p&gt;&#xA;&lt;p&gt;Before Vibe Coding took the world by storm, the open-source world operated on a delicate system of reciprocity: developers contributed code for free in exchange for user attention, reputation, and the subsequent consulting orders or job offers from large companies. This &amp;ldquo;attention economy&amp;rdquo; was the heartbeat of open-source.&lt;/p&gt;&#xA;&lt;p&gt;But the emergence of AI has acted like a sharp scalpel, severing this umbilical cord. The authors of the paper, including Miklós Koren, point out that AI has become an extremely efficient yet cold &amp;ldquo;intermediary.&amp;rdquo; When users program through AI, they no longer directly access the open-source project repositories, read documentation, star projects, or ask questions in communities.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 15&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;347px&#34; data-flex-grow=&#34;144&#34; height=&#34;745&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-dedc3992ff/img-309a0a87aa.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-dedc3992ff/img-309a0a87aa_hu_a0ed189fd31c98e3.jpeg 800w, https://lumigallerys.com/posts/note-dedc3992ff/img-309a0a87aa.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;AI perfectly &amp;ldquo;chews up&amp;rdquo; open-source code and feeds it to users.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Users are satisfied, efficiency has increased, but open-source maintainers receive nothing in return.&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;h3 id=&#34;the-ghost-of-bad-money-driving-out-good&#34;&gt;The Ghost of Bad Money Driving Out Good&#xA;&lt;/h3&gt;&lt;p&gt;Some may argue that as long as the code runs, the 
lack of earnings for maintainers is a problem of business models. However, economics teaches us that there is often no such thing as a free lunch.&lt;/p&gt;&#xA;&lt;p&gt;What direction will this mechanism push the industry towards? The research team constructed an economic model revealing two opposing forces:&lt;/p&gt;&#xA;&lt;p&gt;On one hand, there is the &amp;ldquo;efficiency temptation.&amp;rdquo; AI indeed lowers the cost of creating new tools, which should theoretically encourage more innovations.&lt;/p&gt;&#xA;&lt;p&gt;On the other hand, the more fatal &amp;ldquo;demand transfer&amp;rdquo; occurs. With direct access severed, maintainers lose the chance to gain returns from users. As the timeline of the model extends, the harsh extrapolation reveals that once the destructive power of &amp;ldquo;demand transfer&amp;rdquo; outweighs the benefits of &amp;ldquo;efficiency improvement,&amp;rdquo; the ecosystem will inevitably shrink.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 18&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;253px&#34; data-flex-grow=&#34;105&#34; height=&#34;1024&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-dedc3992ff/img-e7b1c58a6e.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-dedc3992ff/img-e7b1c58a6e_hu_43445e92eb0ceeb.jpeg 800w, https://lumigallerys.com/posts/note-dedc3992ff/img-e7b1c58a6e.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;One group consists of a few top project maintainers at the pyramid&amp;rsquo;s peak, who can barely survive on their substantial existing fame;&lt;/strong&gt; &lt;strong&gt;the other group includes hobbyists who write code purely for fun without caring about returns.&lt;/strong&gt; &lt;strong&gt;The &amp;ldquo;middle-class&amp;rdquo; projects, which are of decent quality but require continuous maintenance effort, 
will largely vanish due to lack of incentives.&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;The result is that while AI allows us to write code faster, the number of high-quality open-source &amp;ldquo;building blocks&amp;rdquo; we can use is decreasing.&lt;/p&gt;&#xA;&lt;p&gt;The future software ecosystem may become extremely polarized: on one side, a few giants dominating super libraries, and on the other, countless abandoned and unmaintained code ruins.&lt;/p&gt;&#xA;&lt;p&gt;As the paper states: &amp;ldquo;When feedback loops accelerate growth, they can also accelerate decline.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;The decline of Stack Overflow serves as another footnote to this crisis. Since the advent of ChatGPT, this largest global Q&amp;amp;A community for programmers has seen its traffic halved.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 19&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;217px&#34; data-flex-grow=&#34;90&#34; height=&#34;1193&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-dedc3992ff/img-0847834577.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-dedc3992ff/img-0847834577_hu_4f17f5bea6f6260c.jpeg 800w, https://lumigallerys.com/posts/note-dedc3992ff/img-0847834577.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;The knowledge crystallized from previous Q&amp;amp;As was once vital for training AI. Now, new questions are no longer publicly discussed but vanish into private AI dialogues. 
AI is draining the well dry.&lt;/p&gt;&#xA;&lt;p&gt;It grows by consuming open-source data but, in the process, destroys the soil that produces this data.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 20&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;455px&#34; data-flex-grow=&#34;189&#34; height=&#34;59&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-dedc3992ff/img-5cff693e71.jpeg&#34; width=&#34;112&#34;&gt;&lt;/p&gt;&#xA;&lt;h3 id=&#34;what-lies-beyond-code&#34;&gt;What Lies Beyond Code?&#xA;&lt;/h3&gt;&lt;p&gt;Does the story of Vibe Coding sound familiar? This is not just a crisis for programmers; it’s a shared fate for all content creators.&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Journalism&lt;/strong&gt;: AI searches not only fetch news but also directly generate summaries. Users no longer click links, media lose advertising revenue, and journalists lose their jobs.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Illustration&lt;/strong&gt;: AI art can mimic styles honed over a decade in mere seconds, leaving original artists with nothing.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Paid Knowledge&lt;/strong&gt;: When all book knowledge is compressed into the parameters of large models, who will still buy that thick textbook?&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;We are entering an era of &amp;ldquo;super intermediaries.&amp;rdquo; AI has monopolized distribution channels, rendering all upstream creators invisible.&lt;/p&gt;&#xA;&lt;p&gt;The authors of the paper propose a concept similar to &amp;ldquo;Spotify for Code&amp;rdquo;: establishing a mechanism where AI pays a small but continuous royalty to code creators when it accesses open-source code.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 21&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;1524px&#34; data-flex-grow=&#34;635&#34; 
height=&#34;170&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-dedc3992ff/img-55d81db09f.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-dedc3992ff/img-55d81db09f_hu_4f84a8295c5a7c17.jpeg 800w, https://lumigallerys.com/posts/note-dedc3992ff/img-55d81db09f.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;This sounds wonderful but is fraught with challenges. Who sets the prices? Who monitors it? In this winner-takes-all world, are the giants really willing to share profits?&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 22&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;455px&#34; data-flex-grow=&#34;189&#34; height=&#34;59&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-dedc3992ff/img-c02db2a35e.jpeg&#34; width=&#34;112&#34;&gt;&lt;/p&gt;&#xA;&lt;h3 id=&#34;conclusion&#34;&gt;Conclusion&#xA;&lt;/h3&gt;&lt;p&gt;In 2026, we enjoy unprecedented technological conveniences. With just a voice command, software, articles, and artworks appear out of thin air. We think we have mastered magic, but in reality, we are squandering the legacies left by our predecessors.&lt;/p&gt;&#xA;&lt;p&gt;The prosperity brought by Vibe Coding resembles a grand overdraft. We are burning open-source fuel to stoke the flames of AI. The fire is warm for now, but let’s not forget: the hotter it burns, the less fuel remains, and once those still willing to stoop down and plant trees are gone, the winter will be long.&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>Words of the Year 2025 Reflect Global Trends and Concerns</title>
            <link>https://lumigallerys.com/posts/note-c1d3c18a2a/</link>
            <pubDate>Tue, 30 Dec 2025 00:00:00 +0000</pubDate>
            <guid>https://lumigallerys.com/posts/note-c1d3c18a2a/</guid>
            <description>&lt;h2 id=&#34;global-trends-in-2025&#34;&gt;Global Trends in 2025&#xA;&lt;/h2&gt;&lt;p&gt;As 2025 draws to a close, nearly 20 countries have selected their annual words, reflecting the pulse and trends of the world in that year. These words may resonate with your memories of the past year.&lt;/p&gt;&#xA;&lt;h2 id=&#34;turmoil-and-trade&#34;&gt;Turmoil and Trade&#xA;&lt;/h2&gt;&lt;p&gt;In 2025, the world is not at peace, as is evident in the annual words chosen by multiple countries.&lt;/p&gt;&#xA;&lt;p&gt;In Singapore, the character &amp;ldquo;荡&amp;rdquo; (meaning &amp;ldquo;to sway&amp;rdquo;) was selected as the annual Chinese character for 2025. According to an article by Lianhe Zaobao, over 160,000 votes were cast for &amp;ldquo;荡&amp;rdquo; out of more than 430,000, summarizing the profound effects and turmoil caused by a series of actions from the Trump administration in the United States, reflecting a sense of unease in today&amp;rsquo;s world.&lt;/p&gt;&#xA;&lt;p&gt;In South Korea, 766 university professors selected the four-character idiom &amp;ldquo;变动不居&amp;rdquo; (meaning &amp;ldquo;constant change&amp;rdquo;) as the annual phrase for 2025. Professor Yang Il-moo from Seoul University explained that this idiom signifies the continuous flow and change in the world, reflecting the intense transformations experienced in South Korea, including presidential impeachment, political strife, and geopolitical tensions.&lt;/p&gt;&#xA;&lt;p&gt;Many citizens believe that the trade war initiated by the United States is one of the significant causes of global turmoil in 2025. The character &amp;ldquo;税&amp;rdquo; (meaning &amp;ldquo;tax&amp;rdquo;) was chosen as the annual Chinese character in Malaysia, while the word &amp;ldquo;tariff&amp;rdquo; was selected by the Spanish Royal Academy of Language and the Spanish Language Urgent Terms Foundation as the word of the year. 
The foundation noted that the tariffs imposed by the Trump administration have dominated international news for months and continue to do so.&lt;/p&gt;&#xA;&lt;p&gt;In Switzerland, the Italian-speaking region also selected &amp;ldquo;tariff&amp;rdquo; as its word of the year. The Malaysian committee chair, Wu Hengcan, stated that the choice of &amp;ldquo;税&amp;rdquo; reflects strong opposition from developing countries against hegemonic bullying.&lt;/p&gt;&#xA;&lt;p&gt;In addition to tariffs, Finland&amp;rsquo;s Language Research Institute chose &amp;ldquo;drone wall&amp;rdquo; as the international buzzword of the year, reflecting local reactions to the geopolitical tensions stemming from the Russia-Ukraine conflict. The word &amp;ldquo;immigrant&amp;rdquo; ranked second in Portugal&amp;rsquo;s annual vocabulary list due to policy controversies surrounding immigration in European countries. Norway&amp;rsquo;s Language Council selected &amp;ldquo;tech oligarch&amp;rdquo; as the annual keyword, pointing to the digital sovereignty struggle between Europe and the United States.&lt;/p&gt;&#xA;&lt;h2 id=&#34;ai-and-its-impact&#34;&gt;AI and Its Impact&#xA;&lt;/h2&gt;&lt;p&gt;In 2025, artificial intelligence (AI) is empowering various industries at an unprecedented pace, changing lives worldwide. The term &amp;ldquo;artificial intelligence&amp;rdquo; appears on many countries&amp;rsquo; annual word lists.&lt;/p&gt;&#xA;&lt;p&gt;The German Language Association named &amp;ldquo;the era of AI&amp;rdquo; as the word of the year, indicating that AI has moved from the ivory tower of scientific research into mainstream society. 
More people are using AI tools for tasks ranging from online searches to dynamic photo generation and text writing.&lt;/p&gt;&#xA;&lt;p&gt;The Collins Dictionary in the UK selected &amp;ldquo;vibe coding&amp;rdquo; as the word of the year, illustrating the shift in programming from a professional skill to an expression of intent, highlighting AI&amp;rsquo;s impact on creativity and work methods.&lt;/p&gt;&#xA;&lt;p&gt;However, the explosive growth of AI brings both excitement and concern. Merriam-Webster and Australia&amp;rsquo;s Macquarie Dictionary independently chose &amp;ldquo;Slop&amp;rdquo; or &amp;ldquo;AI Slop&amp;rdquo; as the word of the year, referring to low-quality digital content typically generated in bulk by AI, or &amp;ldquo;AI garbage.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;The publisher of Merriam-Webster stated that absurd videos, distorted images, vulgar content, and misleading fake news generated by AI have flooded the internet, causing public distaste while being widely consumed and shared. The term &amp;ldquo;Slop&amp;rdquo; conveys that AI is sometimes not as &amp;ldquo;super-intelligent&amp;rdquo; as it seems when it comes to replacing human creativity.&lt;/p&gt;&#xA;&lt;p&gt;The Finnish Language Research Institute summarized this phenomenon as &amp;ldquo;AI quagmire&amp;rdquo; in its annual buzzwords.&lt;/p&gt;&#xA;&lt;p&gt;The term &amp;ldquo;hallucination&amp;rdquo; was selected by the renowned Dutch dictionary publisher Van Dale as the word of the year, referring to the false and absurd information generated by large models like ChatGPT when queried.&lt;/p&gt;&#xA;&lt;p&gt;In the UK, media reports indicate that AI and online platforms are profoundly reshaping how people experience emotions and interact with each other. 
The Oxford Dictionary&amp;rsquo;s word of the year is &amp;ldquo;rage bait&amp;rdquo;, which refers to content deliberately designed to provoke strong emotions like anger to boost web traffic or social media engagement. The Cambridge Dictionary selected &amp;ldquo;parasocial&amp;rdquo;, denoting a one-sided emotional connection, whether with a chatbot, a celebrity one has never met, or a character in a book or movie. In Romania, media outlets suggest that &amp;ldquo;parasocial&amp;rdquo; has become a keyword reflecting the social reality of high social media usage, particularly among the youth.&lt;/p&gt;&#xA;&lt;h2 id=&#34;anxiety-and-economic-concerns&#34;&gt;Anxiety and Economic Concerns&#xA;&lt;/h2&gt;&lt;p&gt;As the world undergoes turmoil, distinguishing between truth and falsehood online becomes challenging. The annual words from various countries reflect the anxieties and concerns of ordinary people.&lt;/p&gt;&#xA;&lt;p&gt;On December 12, the abbot of Kiyomizu Temple in Kyoto, Japan, wrote the character &amp;ldquo;熊&amp;rdquo; (meaning &amp;ldquo;bear&amp;rdquo;) to reveal the annual character reflecting the sentiments of Japanese society in 2025. This choice was made due to the phenomenon of bears appearing in various regions, with 230 people affected by bear attacks from April to November, a record high. Media commentary suggests that the bear incidents have caused anxiety in affected areas, while political issues have also left many Japanese citizens worried.&lt;/p&gt;&#xA;&lt;p&gt;In Japan&amp;rsquo;s selection, the character &amp;ldquo;米&amp;rdquo; (meaning &amp;ldquo;rice&amp;rdquo;) narrowly ranked second, followed by &amp;ldquo;高&amp;rdquo; (meaning &amp;ldquo;high&amp;rdquo;). 
Japanese media report that these characters reflect the rising cost of living and the depreciation of the yen, which have led to a wave of price increases affecting the daily lives of the Japanese people.&lt;/p&gt;&#xA;&lt;p&gt;In Portugal, the public voting event selected &amp;ldquo;major blackout&amp;rdquo; as the word of the year. On April 28, a widespread power outage occurred in Portugal and Spain, disrupting transportation, communication, and public services for hours. The publisher noted that the choice of &amp;ldquo;major blackout&amp;rdquo; reflects a deeper concern about modern life’s heavy reliance on technology.&lt;/p&gt;&#xA;&lt;p&gt;Life is challenging, and anxiety follows. The term &amp;ldquo;anxiety&amp;rdquo; ranked first in an online vote initiated by the Russian Reading-City website, reflecting that in this turbulent era, anxiety has become a fundamental aspect of life, highlighting uncertainties about the future.&lt;/p&gt;&#xA;&lt;p&gt;The term &amp;ldquo;uncertainty&amp;rdquo; was chosen by the Brazilian polling agency Kaws and IDEIA Big Data, indicating that rapid changes in economy and technology, along with geopolitical friction and domestic governance issues, have made Brazilians feel that 2025 is filled with challenges, impacting daily life and personal decisions.&lt;/p&gt;&#xA;&lt;h2 id=&#34;resilience-and-trust&#34;&gt;Resilience and Trust&#xA;&lt;/h2&gt;&lt;p&gt;How can people respond to this uncertain world?&lt;/p&gt;&#xA;&lt;p&gt;The character &amp;ldquo;韧&amp;rdquo; (meaning &amp;ldquo;resilience&amp;rdquo;) was selected as the annual word in the &amp;ldquo;Chinese Language Review 2025&amp;rdquo; event organized by the National Language Resources Monitoring and Research Center, Commercial Press, and Xinhua News Agency. 
The character encapsulates the essence of resilience, representing steadfastness, determination, and the spirit of perseverance in the face of difficulties.&lt;/p&gt;&#xA;&lt;p&gt;In Russia, the State Pushkin Russian Language Institute named &amp;ldquo;victory&amp;rdquo; as the top buzzword, commemorating the 80th anniversary of the Soviet Union&amp;rsquo;s victory in the Great Patriotic War. South Africa&amp;rsquo;s Pan South African Language Board announced &amp;ldquo;G20 Summit&amp;rdquo; as the annual buzzword, highlighting an international conference marked by African influence and the promotion of multilateralism.&lt;/p&gt;&#xA;&lt;p&gt;The Treccani Encyclopedia Institute in Italy selected &amp;ldquo;trust&amp;rdquo; as the word of the year, reflecting people&amp;rsquo;s hopes for the future in this uncertain era. Trust can counter polarization and &amp;ldquo;glue together&amp;rdquo; an increasingly divided society, guiding people out of the quagmire of uncertainty. The most complex and powerful algorithm for human survival remains unchanged: mutual trust.&lt;/p&gt;&#xA;&lt;p&gt;Notably, the Chinese toy Labubu was included in the annual word selections by the Finnish Language Research Institute and ranked among the cultural buzzwords in a joint selection by several authoritative institutions in Russia. Russian media noted that the Labubu toy has gained popularity through social media. The Finnish Language Research Institute highlighted the toy&amp;rsquo;s distinctive feature: its wide smile.&lt;/p&gt;&#xA;&lt;p&gt;Facing challenges with resilience, treating friends with trust, and smiling at the future: which word would you use to describe 2025, the year now drawing to a close?&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>Cursor 2.0 Launches with New Composer Model and 15 Upgrades</title>
            <link>https://lumigallerys.com/posts/note-83885cb290/</link>
            <pubDate>Thu, 30 Oct 2025 00:00:00 +0000</pubDate>
            <guid>https://lumigallerys.com/posts/note-83885cb290/</guid>
            <description>&lt;h2 id=&#34;cursor-20-launch&#34;&gt;Cursor 2.0 Launch&#xA;&lt;/h2&gt;&lt;p&gt;On October 30, Cursor, a well-known AI programming platform, announced its upgrade to version 2.0, introducing &lt;strong&gt;Composer&lt;/strong&gt;, its first self-developed agentic programming model, along with a new interface for parallel collaboration among multiple agents and &lt;strong&gt;15 upgrades&lt;/strong&gt;.&lt;/p&gt;&#xA;&lt;h3 id=&#34;key-features-of-the-composer-model&#34;&gt;Key Features of the Composer Model&#xA;&lt;/h3&gt;&lt;p&gt;The most notable feature of the Composer model is its speed. Cursor claims that the model is designed for low-latency agentic programming within Cursor, completing most interactions within &lt;strong&gt;30 seconds&lt;/strong&gt;, running &lt;strong&gt;4 times&lt;/strong&gt; faster than models of comparable intelligence, and outputting over &lt;strong&gt;200 tokens per second&lt;/strong&gt;.&lt;/p&gt;&#xA;&lt;p&gt;In internal evaluations, Composer&amp;rsquo;s intelligence has &lt;strong&gt;surpassed the best open-source programming models&lt;/strong&gt; (including Qwen Coder and GLM 4.6), and it is faster than existing cutting-edge lightweight models (including Claude Haiku 4.5 and Gemini 2.5 Flash). 
However, its intelligence still lags behind GPT-5 and Claude Sonnet 4.5.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 1&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;433px&#34; data-flex-grow=&#34;180&#34; height=&#34;553&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-83885cb290/img-119493af4f.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-83885cb290/img-119493af4f_hu_45085a0677182117.jpeg 800w, https://lumigallerys.com/posts/note-83885cb290/img-119493af4f.jpeg 1000w&#34; width=&#34;1000&#34;&gt;&#xA;Composer&amp;rsquo;s intelligence and speed comparison with leading models.&lt;/p&gt;&#xA;&lt;p&gt;As the capabilities of model agents continue to improve, Cursor&amp;rsquo;s UI has also been upgraded. The Cursor 2.0 UI is no longer file-centric but is &lt;strong&gt;redesigned around agents&lt;/strong&gt;, allowing developers to focus on their goals while different agents handle implementation details.&lt;/p&gt;&#xA;&lt;p&gt;Cursor 2.0 now supports &lt;strong&gt;parallel operation of up to 8 agents&lt;/strong&gt;. They can work in separate workspaces without interference. 
Users can also have multiple agents attempt to solve the same problem simultaneously and choose the best solution—this approach has been shown to significantly enhance result quality in complex or open-ended tasks.&lt;/p&gt;&#xA;&lt;p&gt;For deeper code inspection or editing, users can still open files or switch back to the classic IDE view with one click.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 2&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;263px&#34; data-flex-grow=&#34;109&#34; height=&#34;912&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-83885cb290/img-2a2b5b6520.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-83885cb290/img-2a2b5b6520_hu_c24eff36810c4f97.jpeg 800w, https://lumigallerys.com/posts/note-83885cb290/img-2a2b5b6520.jpeg 1000w&#34; width=&#34;1000&#34;&gt;&#xA;Cursor&amp;rsquo;s new UI.&lt;/p&gt;&#xA;&lt;p&gt;With agents becoming increasingly integrated into the programming workflow, reviewing code and testing changes has become a new challenge. The new design in Cursor 2.0 allows users to see modification details without switching between different files.&lt;/p&gt;&#xA;&lt;p&gt;The new native browser enables Cursor 2.0 to automatically test its work and iterate until correct results are produced. Users can directly select web elements for Cursor to modify, achieving a &amp;ldquo;point-and-click&amp;rdquo; editing experience.&lt;/p&gt;&#xA;&lt;p&gt;Currently, Cursor 2.0 is fully online, and users can download the latest installation package from the Cursor website. 
However, to experience the Composer model in agent mode, a subscription to Cursor Pro is required.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Download link:&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;https://cursor.com/cn/download&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;&#xA;    &gt;https://cursor.com/cn/download&lt;/a&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;15-major-upgrades-in-version-20&#34;&gt;15 Major Upgrades in Version 2.0&#xA;&lt;/h2&gt;&lt;h3 id=&#34;agents-can-independently-complete-code-testing&#34;&gt;Agents Can Independently Complete Code Testing&#xA;&lt;/h3&gt;&lt;p&gt;Cursor has made 15 upgrades in UI and functionality to enhance user experience in line with today&amp;rsquo;s agentic programming characteristics.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;(1) Multiple Agents Working in Parallel for Optimal Selection&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;In Cursor&amp;rsquo;s new editing page, users can more easily manage agents, with a new sidebar displaying agents and development plans. Now, a single prompt can be processed by up to 8 agents in parallel. This feature uses git worktrees or remote virtual machines to avoid file conflicts, with each agent having a dedicated isolated codebase.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 3&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;413px&#34; data-flex-grow=&#34;172&#34; height=&#34;581&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-83885cb290/img-16bc270752.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-83885cb290/img-16bc270752_hu_71d679401d61d667.jpeg 800w, https://lumigallerys.com/posts/note-83885cb290/img-16bc270752.jpeg 1000w&#34; width=&#34;1000&#34;&gt;&#xA;Cursor&amp;rsquo;s multi-agent mode.&lt;/p&gt;&#xA;&lt;p&gt;However, the potential downside of this mode is the token consumption. 
Users have reported that calling Sonnet 4.5 and Codex simultaneously can lead to thousands of tokens being spent just to change a chart color.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 4&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;504px&#34; data-flex-grow=&#34;210&#34; height=&#34;238&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-83885cb290/img-1c824b93a8.jpeg&#34; width=&#34;500&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;(2) Agents Can Use Browsers, Making Frontend Code Changes Easy&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;The browser functionality for agents, which was beta tested in version 1.7, has now been officially released. Cursor has also provided additional support for enterprise users, such as MCP whitelist and blacklist management.&lt;/p&gt;&#xA;&lt;p&gt;Agents can control Cursor&amp;rsquo;s built-in browser to perform tasks such as testing applications, assessing accessibility, and converting designs into code through navigation, clicking, inputting, scrolling, and screenshotting. 
With complete access to console logs and network traffic, agents can debug issues and automate comprehensive testing processes.&lt;/p&gt;&#xA;&lt;p&gt;Users have reported that the browser feature makes frontend development as easy as doodling; simply select the content to modify, and Cursor will handle the changes automatically.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 5&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;291px&#34; data-flex-grow=&#34;121&#34; height=&#34;822&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-83885cb290/img-e468ae5e17.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-83885cb290/img-e468ae5e17_hu_fea73eac1924ca89.jpeg 800w, https://lumigallerys.com/posts/note-83885cb290/img-e468ae5e17.jpeg 1000w&#34; width=&#34;1000&#34;&gt;&#xA;Cursor has optimized browser tools to enhance efficiency and reduce token usage, focusing on more &lt;strong&gt;efficient log processing, image-level visual feedback, intelligent prompts, and development server awareness&lt;/strong&gt;.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 6&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;731px&#34; data-flex-grow=&#34;304&#34; height=&#34;328&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-83885cb290/img-3cefaf811e.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-83885cb290/img-3cefaf811e_hu_a89c17c16692ae16.jpeg 800w, https://lumigallerys.com/posts/note-83885cb290/img-3cefaf811e.jpeg 1000w&#34; width=&#34;1000&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;(3) Code Review Functionality Upgraded, No More Back-and-Forth&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;The improved code review feature aggregates all modifications into a single interface, making it easier 
for users to view all changes made by agents across multiple files without switching between them.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 7&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;287px&#34; data-flex-grow=&#34;119&#34; height=&#34;418&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-83885cb290/img-14a8fe0e2f.jpeg&#34; width=&#34;500&#34;&gt;&#xA;Cursor&amp;rsquo;s aggregated review interface.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;(4) Sandbox Terminal Enabled by Default, Enhancing Agent Security&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;Cursor has launched a macOS version of the sandbox terminal feature. Starting from Cursor 2.0, agent commands and unauthorized shell commands will run in a secure sandbox by default. This sandbox environment has read and write access to the user&amp;rsquo;s workspace but cannot access the internet.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 8&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;301px&#34; data-flex-grow=&#34;125&#34; height=&#34;398&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-83885cb290/img-8fc2836f03.jpeg&#34; width=&#34;500&#34;&gt;&#xA;Cursor&amp;rsquo;s sandbox terminal.&lt;/p&gt;&#xA;&lt;p&gt;However, some users have complained about encountering issues, such as agents accidentally deleting databases during their first attempts.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 9&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;275px&#34; data-flex-grow=&#34;114&#34; height=&#34;435&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-83885cb290/img-0030adcb25.jpeg&#34; 
width=&#34;500&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;(5) Team Commands Automatically Applied for Easier Management&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;Team managers can now customize commands and rules in Cursor, which will automatically apply to all team members without needing to store them in local editors.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 10&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;446px&#34; data-flex-grow=&#34;185&#34; height=&#34;269&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-83885cb290/img-aec03c2fb4.jpeg&#34; width=&#34;500&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;(6) Voice Mode Introduced for Hands-Free Agent Control&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;The built-in voice-to-text feature allows users to control agents via voice. Users can also define custom trigger keywords in settings to initiate agent actions.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 11&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;320px&#34; data-flex-grow=&#34;133&#34; height=&#34;375&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-83885cb290/img-b81f685fd5.jpeg&#34; width=&#34;500&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;(7) Improved Code Execution Performance, Faster Python Runs&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;Cursor uses the Language Server Protocol (LSP) to implement language-specific features such as jumping to definitions, hover tooltips, and diagnostics. 
Cursor has significantly improved the loading and usage performance of LSP for all languages, particularly in agent scenarios and when viewing code differences.&lt;/p&gt;&#xA;&lt;p&gt;For large projects, the default running speed of Python and TypeScript LSP will be faster, with memory limits dynamically configured based on available RAM. Cursor has also fixed some memory leak issues and improved overall memory usage.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;(8) Background Planning Mode Introduced for Comparing Different Solutions&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;Cursor now supports creating and building plans in the background. Users can use one model to formulate a plan and another to execute it. Plans can be built in the foreground or background, and multiple plans can be created simultaneously through parallel agents for comparison and review.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 12&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;426px&#34; data-flex-grow=&#34;177&#34; height=&#34;563&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-83885cb290/img-ccf9451c55.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-83885cb290/img-ccf9451c55_hu_48386f3059b39766.jpeg 800w, https://lumigallerys.com/posts/note-83885cb290/img-ccf9451c55.jpeg 1000w&#34; width=&#34;1000&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;(9) Team Commands for Efficient Knowledge Sharing&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;Cursor allows users to share custom rules, commands, and prompts with the entire team. 
Users can also create deep links via Cursor Docs for more efficient internal knowledge and tool sharing.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;(10) Improved Prompt Interface with Simplified Context Menus&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;Cursor has comprehensively optimized the prompt input interface: files and directories are now displayed in embedded tags, making it easier to copy and paste prompts with context tags. The context menu has also been simplified, removing explicit options like @Definitions, @Web, @Link, @Recent Changes, and @Linter Errors. Now, agents can autonomously gather the required context without users needing to manually attach it when entering prompts.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 13&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;470px&#34; data-flex-grow=&#34;196&#34; height=&#34;510&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-83885cb290/img-0b9dba7317.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-83885cb290/img-0b9dba7317_hu_3dffc6d271ce45e4.jpeg 800w, https://lumigallerys.com/posts/note-83885cb290/img-0b9dba7317.jpeg 1000w&#34; width=&#34;1000&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;(11) Enhanced Agent Framework for Improved Stability&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;Cursor has significantly enhanced the underlying operating framework for using agents across different models. This improvement has led to overall performance and stability enhancements, particularly noticeable in GPT-5 Codex scenarios.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;(12) Cloud Agent Upgrade with 99.9% Reliability&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;Cursor&amp;rsquo;s cloud agents now achieve 99.9% reliability and instant startup performance, with a new user interface set to launch soon. 
Cursor has also optimized the experience of sending agents from the editor to the cloud, making the development process smoother.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Enterprise Version Updates:&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;(13) Sandbox Terminal with Admin Controls for Security and Consistency&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;Enterprise administrators can uniformly configure standard settings for the sandbox terminal at the team level, including sandbox availability, Git access permissions, and network access policies to ensure security and consistency.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;(14) Hooks Cloud Distribution for Easier Resource Management&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;Enterprise teams can now distribute hooks directly through the web console. Administrators can add hooks, save drafts, and flexibly specify hooks applicable to different operating systems.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;(15) Audit Logs Enhance Security and Transparency&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;Cursor provides detailed audit log functionality for enterprise users, helping teams track key operations, change records, and compliance events, enhancing security and transparency.&lt;/p&gt;&#xA;&lt;h2 id=&#34;self-developed-model-focused-on-speed-and-intelligence-balance&#34;&gt;Self-Developed Model Focused on Speed and Intelligence Balance&#xA;&lt;/h2&gt;&lt;h3 id=&#34;native-mxfp8-low-precision-training&#34;&gt;Native MXFP8 Low-Precision Training&#xA;&lt;/h3&gt;&lt;p&gt;In addition to the upgrades mentioned above, Cursor&amp;rsquo;s first self-developed programming model is also noteworthy. 
Cursor has previously developed models such as &lt;strong&gt;Cursor-Small and Cursor Tab&lt;/strong&gt;, but these early models were suited to quick edits and code completion rather than complex development tasks.&lt;/p&gt;&#xA;&lt;p&gt;Cursor says the new model draws on its experience building those earlier code-completion models. The company found that developers want models that are both intelligent enough and fast enough for interactive use, so they can stay focused and keep a fluid programming rhythm.&lt;/p&gt;&#xA;&lt;p&gt;This observation likely resonates with many programmers&amp;rsquo; pain points when using AI for programming: waiting three to five minutes for a result after sending a prompt can severely disrupt the programming experience.&lt;/p&gt;&#xA;&lt;p&gt;During the development process, Cursor experimented with a prototype agent model codenamed &lt;strong&gt;&amp;ldquo;Cheetah&amp;rdquo;&lt;/strong&gt; to better understand the impact of higher-speed agent models. Composer is a smarter successor to this prototype, retaining enough speed for an interactive, fluid programming experience.&lt;/p&gt;&#xA;&lt;p&gt;Many users have shared their experiences with Composer. 
Developer Sam Liu noted that Composer is incredibly fast, allowing him to build a complete Vibe Coding community in just five minutes, including not only the frontend but also login verification and the backend database.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 14&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;260px&#34; data-flex-grow=&#34;108&#34; height=&#34;460&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-83885cb290/img-d92b76ce81.jpeg&#34; width=&#34;500&#34;&gt;&#xA;Amirmxt, co-founder of integrated analytics and A/B testing company Humblytics, shared that if he includes phrases like &amp;ldquo;careful consideration&amp;rdquo; in the prompt, Composer takes more time to determine if it has chosen the correct path before executing quickly.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 15&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;224px&#34; data-flex-grow=&#34;93&#34; height=&#34;534&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-83885cb290/img-6bb86658ed.jpeg&#34; width=&#34;500&#34;&gt;&#xA;Composer is a &lt;strong&gt;mixture-of-experts (MoE) model&lt;/strong&gt; that supports long-context generation and understanding. 
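&lt;/p&gt;
&lt;p&gt;The mixture-of-experts idea means that each token activates only a small subset of expert sub-networks, selected by a learned router. Below is a minimal sketch of top-k routing, an illustration of the general technique rather than Cursor&amp;rsquo;s implementation; all dimensions and names here are invented:&lt;/p&gt;

```python
import numpy as np

def moe_route(token, experts, router_w, k=2):
    """Route one token through the top-k experts of an MoE layer.

    experts: list of callables, each a small feed-forward "expert".
    router_w: routing weight matrix of shape (d_model, num_experts).
    """
    logits = token @ router_w                      # score every expert
    top = np.argsort(logits)[-k:]                  # keep the k highest-scoring
    gates = np.exp(logits[top])
    gates = gates / gates.sum()                    # softmax over chosen experts
    # Weighted sum of the selected experts' outputs
    return sum(g * experts[i](token) for g, i in zip(gates, top))

rng = np.random.default_rng(0)
d, n_exp = 8, 4
experts = [lambda x, W=rng.normal(size=(d, d)): x @ W for _ in range(n_exp)]
router_w = rng.normal(size=(d, n_exp))
out = moe_route(rng.normal(size=d), experts, router_w)
print(out.shape)   # (8,)
```

&lt;p&gt;Because only the top-k experts run per token, an MoE model can grow its total parameter count without a proportional increase in per-token compute, which is one way a model can be both large and fast.&lt;/p&gt;
&lt;p&gt;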
It has been specifically optimized for software engineering through reinforcement learning (RL) in diverse development environments.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 16&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;437px&#34; data-flex-grow=&#34;182&#34; height=&#34;548&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-83885cb290/img-3d84eed894.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-83885cb290/img-3d84eed894_hu_1bdd117bc381cf87.jpeg 800w, https://lumigallerys.com/posts/note-83885cb290/img-3d84eed894.jpeg 1000w&#34; width=&#34;1000&#34;&gt;&#xA;Composer&amp;rsquo;s performance trended upward, with fluctuations, over the course of reinforcement learning.&lt;/p&gt;&#xA;&lt;p&gt;To better understand and operate large codebases, Composer incorporates a comprehensive set of tools during training, including full codebase semantic search. This gives it an advantage in understanding and modifying context across files and modules.&lt;/p&gt;&#xA;&lt;p&gt;This model can use simple tools like reading and editing files, as well as invoke more powerful capabilities, &lt;strong&gt;such as terminal commands and semantic searches across the entire codebase&lt;/strong&gt;.&lt;/p&gt;&#xA;&lt;p&gt;The focus of Composer&amp;rsquo;s optimization during reinforcement learning is efficiency. 
Cursor encourages the model to make efficient choices in tool usage and maximize parallel processing whenever possible.&lt;/p&gt;&#xA;&lt;p&gt;Additionally, Cursor trains the model to be a more helpful assistant by reducing unnecessary replies and avoiding unfounded statements.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 17&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;432px&#34; data-flex-grow=&#34;180&#34; height=&#34;555&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-83885cb290/img-f334cd31b0.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-83885cb290/img-f334cd31b0_hu_f5f6f30a9d28efac.jpeg 800w, https://lumigallerys.com/posts/note-83885cb290/img-f334cd31b0.jpeg 1000w&#34; width=&#34;1000&#34;&gt;&#xA;Composer learned to complete tasks more efficiently as training progressed.&lt;/p&gt;&#xA;&lt;p&gt;Cursor has also discovered that the model spontaneously acquires useful capabilities during reinforcement learning, such as executing complex searches, fixing linter errors, and writing and running unit tests.&lt;/p&gt;&#xA;&lt;p&gt;To train the model more efficiently, Cursor has built a customized training infrastructure based on PyTorch and Ray to support asynchronous reinforcement learning in large-scale environments.&lt;/p&gt;&#xA;&lt;p&gt;Cursor employs &lt;strong&gt;MXFP8 MoE kernels&lt;/strong&gt;, expert parallelism, and hybrid sharded data parallelism to complete Composer&amp;rsquo;s training in native low precision. &lt;strong&gt;This training method allows for scaling training to thousands of NVIDIA GPUs with extremely low communication overhead.&lt;/strong&gt; Furthermore, using MXFP8 training enables faster inference speeds without the need for post-training quantization.&lt;/p&gt;&#xA;&lt;p&gt;During reinforcement learning, Cursor aims for the model to invoke any tools within the Cursor Agent framework. 
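&lt;/p&gt;
&lt;p&gt;Conceptually, an agent loop of this kind alternates between asking the model for an action and executing the chosen tool. The sketch below illustrates the pattern; the tool names and the stub &amp;ldquo;model&amp;rdquo; are hypothetical, not Cursor&amp;rsquo;s actual framework:&lt;/p&gt;

```python
# Hypothetical sketch of an agent's tool-dispatch loop. The tool registry
# and the stub model below are illustrative, not a real agent API.
import subprocess

def read_file(path):
    with open(path) as f:
        return f.read()

def run_terminal(cmd):
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

TOOLS = {"read_file": read_file, "terminal": run_terminal}

def agent_step(model, history):
    """Ask the model for its next action and execute it if it is a tool call."""
    action = model(history)            # e.g. {"tool": "terminal", "arg": "ls"}
    if action["tool"] == "done":
        return action["arg"], True
    result = TOOLS[action["tool"]](action["arg"])
    history.append((action, result))   # feed the observation back to the model
    return result, False

# A stub "model" that finishes immediately, just to exercise the loop.
stub = lambda history: {"tool": "done", "arg": "finished"}
out, finished = agent_step(stub, [])
print(out, finished)   # finished True
```

&lt;p&gt;In a real system the loop would keep running until the model emits a final answer, with each tool result appended to the context for the next step.&lt;/p&gt;
&lt;p&gt;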
These tools can be used for editing code, performing semantic searches, using grep to find strings, and executing terminal commands.&lt;/p&gt;&#xA;&lt;p&gt;To enable efficient tool invocation, it is necessary to run hundreds of thousands of isolated sandbox coding environments concurrently in the cloud. To support this workload, Cursor has revamped its existing Background Agents infrastructure and rewritten the virtual machine scheduler to accommodate the burstiness and scale of training runs. As a result, Cursor has unified the reinforcement learning environment with the production environment.&lt;/p&gt;&#xA;&lt;h2 id=&#34;conclusion-leveraging-massive-user-data&#34;&gt;Conclusion: Leveraging Massive User Data&#xA;&lt;/h2&gt;&lt;p&gt;Cursor is exploring innovations in the agent programming experience.&lt;/p&gt;&#xA;&lt;p&gt;In recent times, the capabilities of AI models have continuously improved, enabling them to complete long chains of complex tasks in programming scenarios more end-to-end. However, the enhancement of model capabilities also brings new requirements for programming platforms. Cursor&amp;rsquo;s major version update is an exploration of the agent programming experience.&lt;/p&gt;&#xA;&lt;p&gt;More importantly, Cursor, through the Composer model, has further solidified its route of self-developed models, not fully relying on external models. Although Cursor&amp;rsquo;s model may not temporarily replace cutting-edge programming models like Claude, this trend could become a watershed in the future AI IDE competition, where companies mastering self-developed model capabilities may go further.&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>Senior Developers Turned into AI Babysitters: The Reality of Vibe Coding</title>
            <link>https://lumigallerys.com/posts/note-ae04b963ba/</link>
            <pubDate>Mon, 15 Sep 2025 00:00:00 +0000</pubDate>
            <guid>https://lumigallerys.com/posts/note-ae04b963ba/</guid>
            <description>&lt;h2 id=&#34;the-reality-of-vibe-coding&#34;&gt;The Reality of Vibe Coding&#xA;&lt;/h2&gt;&lt;p&gt;In a late-night project overhaul, Carla Rover, a developer with 15 years of experience, cried for half an hour. It wasn&amp;rsquo;t due to a bug but because she thought she had found a secret weapon in AI-assisted coding, only to end up spending more time cleaning up the mess.&lt;/p&gt;&#xA;&lt;p&gt;This experience is not unique.&lt;/p&gt;&#xA;&lt;p&gt;With the rise of AI tools like GitHub Copilot, ChatGPT, and Cursor, many seasoned programmers have joined the trend of &amp;ldquo;Vibe Coding&amp;rdquo;—throwing ideas at AI to generate code while checking, fixing, and even rewriting it line by line. While AI appears to be a helpful assistant, many have found themselves becoming &amp;ldquo;AI babysitters.&amp;rdquo;&lt;/p&gt;&#xA;&lt;h2 id=&#34;from-clean-slate-to-restart&#34;&gt;From Clean Slate to Restart&#xA;&lt;/h2&gt;&lt;p&gt;Rover, who primarily worked in web development, is now trying to build a customized machine learning model for an e-commerce platform with her son. She described Vibe Coding as &amp;ldquo;a clean sheet of paper to doodle ideas on.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;However, once she put AI-generated code into production, problems arose.&lt;/p&gt;&#xA;&lt;p&gt;To meet deadlines, she initially relied completely on AI for automated reviews, skipping manual checks. When she later reviewed the code, she was shocked by the number of bugs, and third-party tools also reported errors. 
Ultimately, she and her son had to restart the entire project.&lt;/p&gt;&#xA;&#xA;    &lt;blockquote&gt;&#xA;        &lt;p&gt;&amp;ldquo;I thought of Copilot as a reliable employee, but it proved otherwise.&amp;rdquo;&lt;/p&gt;&#xA;&#xA;    &lt;/blockquote&gt;&#xA;&lt;p&gt;This experience aligns with a recent Fastly survey, which found that 95% of nearly 800 developers reported needing extra time to modify AI-generated code, with most of the fixing work falling on senior engineers.&lt;/p&gt;&#xA;&lt;p&gt;The issues with AI-generated code are varied: fictitious package names, deletion of critical information, and security vulnerabilities. If unchecked, AI-generated code can be more fragile and bug-ridden than hand-written code. These serious problems have even led to the emergence of a new role—&amp;ldquo;Vibe Code Cleanup Specialist.&amp;rdquo;&lt;/p&gt;&#xA;&lt;h2 id=&#34;a-day-in-the-life-of-an-ai-babysitter&#34;&gt;A Day in the Life of an AI Babysitter&#xA;&lt;/h2&gt;&lt;p&gt;Similarly, another seasoned developer, Feridoon Malekzadeh, has had a complex experience.&lt;/p&gt;&#xA;&lt;p&gt;With over 20 years in product development, software, and design, he also uses Vibe Coding platforms like Lovable extensively. He has even created some &amp;ldquo;toy&amp;rdquo; applications, such as one that generates slang for the Baby Boomer generation.&lt;/p&gt;&#xA;&lt;p&gt;While it sounds fun, the reality feels like hiring a rebellious teenager:&lt;/p&gt;&#xA;&#xA;    &lt;blockquote&gt;&#xA;        &lt;p&gt;&amp;ldquo;You have to prompt it multiple times before it reluctantly does something. 
In the end, it only completes part of the request and adds a bunch of unwanted things that break other features.&amp;rdquo;&lt;/p&gt;&#xA;&#xA;    &lt;/blockquote&gt;&#xA;&lt;p&gt;He estimates his time allocation roughly as follows:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;50% spent writing requirements&lt;/li&gt;&#xA;&lt;li&gt;10-20% letting AI write code&lt;/li&gt;&#xA;&lt;li&gt;30-40% fixing bugs and redundant code generated by AI&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;In other words, the time saved through Vibe Coding is minimal.&lt;/p&gt;&#xA;&lt;p&gt;Even more frustrating is that AI lacks systematic thinking. An experienced developer might write a general module for reuse, while AI might create five different implementations in five places, increasing maintenance costs and complicating the project.&lt;/p&gt;&#xA;&lt;h2 id=&#34;ais-denial-and-security-risks&#34;&gt;AI&amp;rsquo;s Denial and Security Risks&#xA;&lt;/h2&gt;&lt;p&gt;In addition to frequent bugs, Rover noticed another unsettling phenomenon: when AI encounters data conflicts, it not only fails to acknowledge errors but also fabricates results.&lt;/p&gt;&#xA;&lt;p&gt;For instance, when she questioned the logic of a piece of AI-generated code, the model began to &amp;ldquo;explain&amp;rdquo; that it used the uploaded data. Only when confronted did it admit, &amp;ldquo;Actually, it didn&amp;rsquo;t.&amp;rdquo;&lt;/p&gt;&#xA;&#xA;    &lt;blockquote&gt;&#xA;        &lt;p&gt;&amp;ldquo;At that moment, I felt I was dealing with a &amp;rsquo;toxic colleague&amp;rsquo; rather than a tool.&amp;rdquo;&lt;/p&gt;&#xA;&#xA;    &lt;/blockquote&gt;&#xA;&lt;p&gt;In fact, AI security risks are a concern in the industry. 
Austin Spires, Fastly&amp;rsquo;s Director of Developer Empowerment, noted that AI often prioritizes &amp;ldquo;speed&amp;rdquo; over &amp;ldquo;accuracy,&amp;rdquo; frequently introducing bugs that only beginners would make.&lt;/p&gt;&#xA;&lt;p&gt;This is why social media often features the joke that &amp;ldquo;AI always replies &amp;lsquo;You’re absolutely right&amp;rsquo;&amp;rdquo;—developers point out errors, and AI immediately &amp;ldquo;admits fault,&amp;rdquo; but the previous responses were already incorrect. Mike Arrowsmith, CTO of NinjaOne, warns that using Vibe Coding can easily bypass traditional code review and security processes, especially in startups.&lt;/p&gt;&#xA;&lt;p&gt;Despite the myriad problems with Vibe Coding, nearly all developers admit that AI coding is still indispensable. It is particularly suited for prototyping, quick mocks, generating templates, or testing tasks, significantly reducing repetitive labor.&lt;/p&gt;&#xA;&lt;p&gt;As French theorist Paul Virilio said, &amp;ldquo;While building ships, we also invented shipwrecks.&amp;rdquo; In Malekzadeh&amp;rsquo;s view, the various downsides of AI coding are also a byproduct of progress.&lt;/p&gt;&#xA;&lt;p&gt;Moreover, Fastly&amp;rsquo;s survey results show that senior developers are twice as likely to put AI code into production compared to junior developers—indicating that while they spend considerable time modifying AI code, their experience allows them to utilize this technology effectively.&lt;/p&gt;&#xA;&lt;h2 id=&#34;the-lost-joy-of-programming-for-the-younger-generation&#34;&gt;The Lost Joy of Programming for the Younger Generation&#xA;&lt;/h2&gt;&lt;p&gt;Unlike seasoned developers who invest in and affirm AI coding, younger engineers feel they have lost much of the joy of programming.&lt;/p&gt;&#xA;&lt;p&gt;For example, Elvis Kimara, a recent AI master&amp;rsquo;s graduate developing an AI-driven e-commerce platform, admits that Vibe Coding has diminished his sense of 
accomplishment: &amp;ldquo;The dopamine from solving problems myself is gone; AI just takes care of it.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;He also observed that some senior developers, after using AI, have reduced their help to newcomers. Some even shift the responsibility of mentoring to AI, while others do not fully understand how the new tools operate.&lt;/p&gt;&#xA;&lt;p&gt;However, Kimara does not reject AI: &amp;ldquo;The benefits outweigh the drawbacks, and I&amp;rsquo;m willing to pay the price for this innovation. Future developers will not just write code but will guide AI and take responsibility for errors, resembling an AI consultant role.&amp;rdquo; He emphasizes that even as a senior developer, he will continue to use AI while meticulously reviewing AI-generated code to learn more.&lt;/p&gt;&#xA;&lt;p&gt;Undoubtedly, Vibe Coding is quietly changing the way developers work.&lt;/p&gt;&#xA;&lt;p&gt;It is not a perfect tool, nor is it a &amp;ldquo;zero-cost productivity multiplier&amp;rdquo;; instead, the bugs, redundancies, risks, and responsibilities it brings are becoming a form of &amp;ldquo;innovation tax&amp;rdquo; that developers must bear. Yet at the same time, it accelerates project delivery and expands the boundaries for individual developers and small teams.&lt;/p&gt;&#xA;&lt;p&gt;Thus, for many developers, being an &amp;ldquo;AI babysitter&amp;rdquo; is hard work but worth it. What are your thoughts?&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>The Evolution of Software Development: From Vibe Coding to LLMs</title>
            <link>https://lumigallerys.com/posts/note-36e23b9571/</link>
            <pubDate>Mon, 25 Aug 2025 00:00:00 +0000</pubDate>
            <guid>https://lumigallerys.com/posts/note-36e23b9571/</guid>
            <description>&lt;p&gt;AI is reshaping the software development process. From Karpathy&amp;rsquo;s concept of &amp;ldquo;Vibe Coding,&amp;rdquo; we can foresee fundamental changes in collaboration, tools, and thought processes in future product development. This article helps you understand the impact of this technological transformation on product managers and how to prepare for it.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 1&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;514px&#34; data-flex-grow=&#34;214&#34; height=&#34;420&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-5506348c4e.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-5506348c4e_hu_fbf43544e8c29b09.jpeg 800w, https://lumigallerys.com/posts/note-36e23b9571/img-5506348c4e.jpeg 900w&#34; width=&#34;900&#34;&gt;&#xA;Karpathy believes software is undergoing a third major paradigm shift:&lt;/p&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;&lt;strong&gt;Software 1.0&lt;/strong&gt; (human-written logic),&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Software 2.0&lt;/strong&gt; (neural networks learning from data),&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Software 3.0&lt;/strong&gt; (programming with natural language).&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;p&gt;This means everyone can be a programmer, and &amp;ldquo;vibe coding&amp;rdquo; is becoming a reality.&lt;/p&gt;&#xA;&lt;p&gt;LLM agents must be managed &lt;strong&gt;on-the-leash&lt;/strong&gt;, verifying reliability at low autonomy levels before gradually loosening permissions. 
This approach prevents &amp;lsquo;over-reactive agents&amp;rsquo; from causing uncontrollable risks and maintains a rapid &lt;strong&gt;Generation ↔ Verification&lt;/strong&gt; cycle.&lt;/p&gt;&#xA;&lt;h2 id=&#34;software-has-changed-again-10--20--30&#34;&gt;Software Has Changed Again: 1.0 → 2.0 → 3.0&#xA;&lt;/h2&gt;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Software 1.0&lt;/strong&gt;: Purely human-written instructions.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Software 2.0&lt;/strong&gt;: Data + optimizers produce weights, where &amp;ldquo;weights are code.&amp;rdquo;&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Software 3.0&lt;/strong&gt;: Prompts are programs, with LLMs acting as programmable computers; English has become the &amp;ldquo;main programming language.&amp;rdquo;&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;&amp;ldquo;We’re now programming computers &lt;strong&gt;in English&lt;/strong&gt;.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 2&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;426px&#34; data-flex-grow=&#34;177&#34; height=&#34;1354&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-77e652170b.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-77e652170b_hu_5691400f18cefaaf.jpeg 800w, https://lumigallerys.com/posts/note-36e23b9571/img-77e652170b_hu_a3973f13ea705706.jpeg 1600w, https://lumigallerys.com/posts/note-36e23b9571/img-77e652170b_hu_168b8edc91e5ed82.jpeg 2400w, https://lumigallerys.com/posts/note-36e23b9571/img-77e652170b.jpeg 2404w&#34; width=&#34;2404&#34;&gt;&lt;img alt=&#34;Image 3&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;427px&#34; data-flex-grow=&#34;178&#34; height=&#34;1346&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; 
src=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-7f5e085d74.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-7f5e085d74_hu_9746f777701b631d.jpeg 800w, https://lumigallerys.com/posts/note-36e23b9571/img-7f5e085d74_hu_29a905337251c269.jpeg 1600w, https://lumigallerys.com/posts/note-36e23b9571/img-7f5e085d74.jpeg 2400w&#34; width=&#34;2400&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Software 1.0 — Traditional Explicit Code&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;&lt;strong&gt;Operating System Kernels&lt;/strong&gt;: Linux Kernel, Windows NT, etc. All written by human engineers in C/C++.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Classic Backend/Frontend Frameworks&lt;/strong&gt;: Django, Spring, React, Vue, etc. Frameworks and business logic are written in source code hosted on GitHub.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Game Engine Scripts&lt;/strong&gt;: Unity C# scripts, Unreal C++ modules, where gameplay and rules are implemented line by line by developers.&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;p&gt;Characteristics: Logic is deterministic, readable, and statically analyzable. 
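&lt;/p&gt;
&lt;p&gt;A toy example of this style (ours, not Karpathy&amp;rsquo;s): the rule is spelled out by hand, so its behavior can be read directly from the source:&lt;/p&gt;

```python
# Software 1.0 in miniature: a human writes the rule explicitly,
# so the logic is deterministic, readable, and statically analyzable.
def grade(score):
    # Explicit thresholds chosen by the programmer.
    if score >= 90:
        return "A"
    if score >= 75:
        return "B"
    return "C"

print(grade(91), grade(80), grade(10))   # A B C
```

&lt;p&gt;In Software 2.0 those thresholds would instead be learned weights, and in Software 3.0 the same behavior might simply be requested in an English prompt.&lt;/p&gt;
&lt;p&gt;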
The main products are text source files like .c, .cpp, .py, .js, etc.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Software 2.0 — Trained Weights as Code&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;&lt;strong&gt;Deep Learning Models&lt;/strong&gt;: AlexNet, ResNet, YOLO, Stable Diffusion—network structures are written by humans, but the actual tasks are executed by billions of floating-point weights.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Hugging Face Model Hub&lt;/strong&gt;: Contains pytorch_model.bin/safetensors weight files, typical &amp;ldquo;code units&amp;rdquo; of Software 2.0.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Autonomous Driving Perception Stack&lt;/strong&gt;: Tesla&amp;rsquo;s early Autopilot visual recognition network: camera frames → detection/segmentation results, with weights trained from large-scale labeled data.&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 4&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;429px&#34; data-flex-grow=&#34;179&#34; height=&#34;1342&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-3b49fe47a2.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-3b49fe47a2_hu_3e82a7147ba62645.jpeg 800w, https://lumigallerys.com/posts/note-36e23b9571/img-3b49fe47a2_hu_236a3b0a88c85b8e.jpeg 1600w, https://lumigallerys.com/posts/note-36e23b9571/img-3b49fe47a2_hu_493e65fdac2bb093.jpeg 2400w, https://lumigallerys.com/posts/note-36e23b9571/img-3b49fe47a2.jpeg 2404w&#34; width=&#34;2404&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Characteristics: The focus of development activities shifts from &amp;ldquo;writing rules&amp;rdquo; to &amp;ldquo;preparing data + training + tuning parameters.&amp;rdquo; The product is weight files, which humans can hardly read or modify directly.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Software 3.0 — Writing Programs with 
Natural Language + Toolchains&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;&lt;strong&gt;LLM APIs like ChatGPT/Claude/Gemini&lt;/strong&gt;: Prompts are programs, calling interfaces executes them; composite calls + tool use form &amp;ldquo;software.&amp;rdquo;&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;AI Programming IDEs&lt;/strong&gt; (Cursor, Devin, GitHub Copilot Chat): Users converse in English/Chinese, allowing LLMs to generate, modify, and explain code in local repositories; the Autonomy Slider determines the depth of automation.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;No-Code Agent Platforms&lt;/strong&gt;: Such as LangChain Agents, OpenAI Function Calling + external tools, where users describe intentions using YAML/JSON, and LLMs handle decision-making and calls.&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;p&gt;Characteristics: &amp;ldquo;Source code&amp;rdquo; transforms into prompts + configuration files + a series of tool calls. LLMs possess reasoning and planning abilities, enabling new behaviors at runtime; humans mainly focus on constraints and verification (on-the-leash).&lt;/p&gt;&#xA;&lt;h2 id=&#34;llm-as-a-computeroperating-system-analogy&#34;&gt;LLM as a Computer/Operating System Analogy&#xA;&lt;/h2&gt;&lt;p&gt;Karpathy uses multiple analogies to position LLMs: like a &amp;ldquo;utility&amp;rdquo; providing pay-per-use intelligence services; like an &amp;ldquo;operating system (OS)&amp;rdquo; with a continuously evolving complex ecosystem; and reminiscent of the mainframe era of the 60s, where we interact through &amp;ldquo;terminals&amp;rdquo; (chat boxes).&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 5&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;427px&#34; data-flex-grow=&#34;178&#34; height=&#34;1346&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-277f8d8ffa.jpeg&#34; 
srcset=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-277f8d8ffa_hu_e2ef84fb49a47b8d.jpeg 800w, https://lumigallerys.com/posts/note-36e23b9571/img-277f8d8ffa_hu_783e6e782e11300e.jpeg 1600w, https://lumigallerys.com/posts/note-36e23b9571/img-277f8d8ffa.jpeg 2400w&#34; width=&#34;2400&#34;&gt;&lt;img alt=&#34;Image 6&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;431px&#34; data-flex-grow=&#34;179&#34; height=&#34;1634&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-22cd6acd99.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-22cd6acd99_hu_b174408ef33d080e.jpeg 800w, https://lumigallerys.com/posts/note-36e23b9571/img-22cd6acd99_hu_fcf7e826a8a9d406.jpeg 1600w, https://lumigallerys.com/posts/note-36e23b9571/img-22cd6acd99_hu_6df295ea3c44dab2.jpeg 2400w, https://lumigallerys.com/posts/note-36e23b9571/img-22cd6acd99.jpeg 2936w&#34; width=&#34;2936&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;These analogies emphasize that &lt;strong&gt;LLMs are not just simple APIs but programmable new computers with memory, tools, and orchestration capabilities, with GUI forms still in early exploration.&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 7&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;432px&#34; data-flex-grow=&#34;180&#34; height=&#34;1630&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-70a5602ad7.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-70a5602ad7_hu_61fc29bc45342e8e.jpeg 800w, https://lumigallerys.com/posts/note-36e23b9571/img-70a5602ad7_hu_77708f7353bc86e1.jpeg 1600w, https://lumigallerys.com/posts/note-36e23b9571/img-70a5602ad7_hu_e5f1cb7404ba5eb4.jpeg 2400w, 
https://lumigallerys.com/posts/note-36e23b9571/img-70a5602ad7.jpeg 2936w&#34; width=&#34;2936&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;llms-psychology-and-programming-language-english&#34;&gt;LLM&amp;rsquo;s Psychology and Programming Language: English&#xA;&lt;/h2&gt;&lt;p&gt;Karpathy likens LLMs to &amp;ldquo;stochastic little spirits,&amp;rdquo; formed by autoregressive transformers fitting vast amounts of text, exhibiting anthropomorphic cognitive traits: &lt;strong&gt;broad but forgetful, capable of reasoning yet prone to &amp;lsquo;hallucinations.&amp;rsquo;&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 8&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;430px&#34; data-flex-grow=&#34;179&#34; height=&#34;1346&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-bbe4db730c.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-bbe4db730c_hu_d6561ebd1493bf24.jpeg 800w, https://lumigallerys.com/posts/note-36e23b9571/img-bbe4db730c_hu_2638d3e23b9d2c8e.jpeg 1600w, https://lumigallerys.com/posts/note-36e23b9571/img-bbe4db730c_hu_cbe5cc3b241f7a2f.jpeg 2400w, https://lumigallerys.com/posts/note-36e23b9571/img-bbe4db730c.jpeg 2412w&#34; width=&#34;2412&#34;&gt;&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Jagged Intelligence&lt;/strong&gt;: Extremely strong in certain tasks but may err in basic logic (e.g., comparing 9.11 with 9.9).&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Anterograde Amnesia&lt;/strong&gt;: Cannot continue learning after training, lacking long-term memory.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Hallucinations&lt;/strong&gt;: Fabricating false facts.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Prompt Injection&lt;/strong&gt;: Easily deceived by malicious instructions.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;These flaws mean LLMs &lt;strong&gt;cannot be left to 
operate autonomously&lt;/strong&gt;; human supervision and constraint mechanisms must be established.&lt;/p&gt;&#xA;&lt;p&gt;Thus, &lt;strong&gt;English becomes the new programming language&lt;/strong&gt;—writing high-quality, executable natural language instructions is a core skill of Software 3.0.&lt;/p&gt;&#xA;&lt;h2 id=&#34;from-vibe-coding-to-practical-challenges&#34;&gt;From Vibe Coding to Practical Challenges&#xA;&lt;/h2&gt;&lt;p&gt;In his talk, Karpathy demonstrated quickly prototyping through dialogue (&amp;ldquo;I say what I want, and it writes the code; I then run/improve it&amp;rdquo;).&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;The Easy Part&lt;/strong&gt;: Quickly creating a &amp;ldquo;working demo&amp;rdquo; with LLMs.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;The Hard Part&lt;/strong&gt;: Making it &lt;strong&gt;stable, maintainable, and deployable&lt;/strong&gt;—this is the gap he repeatedly mentions: a demo only has to work once (works.any()), but a product must work for all users, scenarios, and inputs (works.all()).&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 9&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;428px&#34; data-flex-grow=&#34;178&#34; height=&#34;1348&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-5d53ac2ef3.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-5d53ac2ef3_hu_2e1597e39a4ad12e.jpeg 800w, https://lumigallerys.com/posts/note-36e23b9571/img-5d53ac2ef3_hu_e09c60a2a1469d31.jpeg 1600w, https://lumigallerys.com/posts/note-36e23b9571/img-5d53ac2ef3_hu_446ef473b9fba432.jpeg 2400w, https://lumigallerys.com/posts/note-36e23b9571/img-5d53ac2ef3.jpeg 2406w&#34; width=&#34;2406&#34;&gt;&lt;img alt=&#34;Image 10&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;433px&#34; data-flex-grow=&#34;180&#34; height=&#34;1330&#34; loading=&#34;lazy&#34; 
sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-1448db1c9c.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-1448db1c9c_hu_8f7c10523ef3c230.jpeg 800w, https://lumigallerys.com/posts/note-36e23b9571/img-1448db1c9c_hu_ff6ce68b18f4abd6.jpeg 1600w, https://lumigallerys.com/posts/note-36e23b9571/img-1448db1c9c_hu_e29bd9510fa87a62.jpeg 2400w, https://lumigallerys.com/posts/note-36e23b9571/img-1448db1c9c.jpeg 2404w&#34; width=&#34;2404&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;partially-autonomous-products-best-practices-for-human-machine-collaboration&#34;&gt;Partially Autonomous Products: Best Practices for Human-Machine Collaboration&#xA;&lt;/h2&gt;&lt;p&gt;This section is the &lt;strong&gt;core of the product methodology&lt;/strong&gt;, where Karpathy uses &lt;strong&gt;Cursor (AI IDE)&lt;/strong&gt; and &lt;strong&gt;Perplexity (AI Search)&lt;/strong&gt; as examples:&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 11&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;427px&#34; data-flex-grow=&#34;178&#34; height=&#34;1350&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-000e5a9033.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-000e5a9033_hu_3cd5c130cd23a24f.jpeg 800w, https://lumigallerys.com/posts/note-36e23b9571/img-000e5a9033_hu_8a170eb6508c6abe.jpeg 1600w, https://lumigallerys.com/posts/note-36e23b9571/img-000e5a9033_hu_b7b1d1c6783de180.jpeg 2400w, https://lumigallerys.com/posts/note-36e23b9571/img-000e5a9033.jpeg 2406w&#34; width=&#34;2406&#34;&gt;&lt;img alt=&#34;Image 12&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;427px&#34; data-flex-grow=&#34;178&#34; height=&#34;1346&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 
767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-e6b9290eae.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-e6b9290eae_hu_23644eda4634701b.jpeg 800w, https://lumigallerys.com/posts/note-36e23b9571/img-e6b9290eae_hu_70da5c26d5c7d115.jpeg 1600w, https://lumigallerys.com/posts/note-36e23b9571/img-e6b9290eae.jpeg 2398w&#34; width=&#34;2398&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Common Patterns&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;LLMs manage context and multi-turn calls, while GUIs allow humans to review and roll back at minimal cost.&lt;/strong&gt;&lt;/li&gt;&#xA;&lt;li&gt;Products are built around a rapid closed loop of &amp;ldquo;Generation ↔ Verification&amp;rdquo;: LLMs provide drafts/diffs/references, and humans quickly review, revert, and iterate.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 13&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;429px&#34; data-flex-grow=&#34;179&#34; height=&#34;1640&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-7b4c6547c5.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-7b4c6547c5_hu_7e88fb0e40a5052d.jpeg 800w, https://lumigallerys.com/posts/note-36e23b9571/img-7b4c6547c5_hu_e7140f19d0519f5a.jpeg 1600w, https://lumigallerys.com/posts/note-36e23b9571/img-7b4c6547c5_hu_638c41d6be3654c2.jpeg 2400w, https://lumigallerys.com/posts/note-36e23b9571/img-7b4c6547c5.jpeg 2936w&#34; width=&#34;2936&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;The design goal is to &lt;strong&gt;reduce verification costs&lt;/strong&gt; (e.g., diff views, color highlights, grouped changes, one-click undo).&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Autonomy 
Slider&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Cursor&lt;/strong&gt; progresses from &amp;ldquo;Tab completion → modifying a chunk → changing an entire file → freely editing an entire repository,&amp;rdquo; allowing users to &lt;strong&gt;control granularity and authorization boundaries at any time.&lt;/strong&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Perplexity&lt;/strong&gt;&amp;rsquo;s &amp;ldquo;Quicksearch → Research → Deep research&amp;rdquo; also reflects gradual delegation: from quick answers to comprehensive searches/citations, and then to deeper research processes, &lt;strong&gt;with manual interruption and verification possible at each level.&lt;/strong&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Essence: Start with assistance, then enhancement, and finally potentially full automation, unlocking step by step.&lt;/strong&gt;&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;&lt;strong&gt;Limiting Over-Excited Agents&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Avoid generating massive changes all at once; instead, encourage small, controllable proposals.&lt;/strong&gt; Keep the AI &lt;strong&gt;&amp;ldquo;on a short leash&amp;rdquo;&lt;/strong&gt; to maintain human dominance and gatekeeping.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;The GUI minimizes review costs&lt;/strong&gt; (diff, color, batch/individual, one-click undo), and &lt;strong&gt;the faster the loop, the smaller the errors.&lt;/strong&gt;&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;&lt;strong&gt;Autopilot Analogy&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;Karpathy points out that &lt;strong&gt;Tesla&amp;rsquo;s Autopilot experience is about &amp;ldquo;getting partial autonomy right first&amp;rdquo;&lt;/strong&gt;: starting with auxiliary driving features like lane keeping/adaptive cruise control, gradually progressing to higher capabilities (e.g., automatic lane changes, parking, summon features, complex driving tasks, and city road autonomous driving (FSD 
Beta)).&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 14&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;428px&#34; data-flex-grow=&#34;178&#34; height=&#34;1642&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-3a43096464.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-3a43096464_hu_a16311d840b4d106.jpeg 800w, https://lumigallerys.com/posts/note-36e23b9571/img-3a43096464_hu_e748106f765ff3ec.jpeg 1600w, https://lumigallerys.com/posts/note-36e23b9571/img-3a43096464_hu_f60a9c26df1e322b.jpeg 2400w, https://lumigallerys.com/posts/note-36e23b9571/img-3a43096464.jpeg 2934w&#34; width=&#34;2934&#34;&gt;&lt;img alt=&#34;Image 15&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;429px&#34; data-flex-grow=&#34;178&#34; height=&#34;1638&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-47a78b2331.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-47a78b2331_hu_595f5a71b0a120b5.jpeg 800w, https://lumigallerys.com/posts/note-36e23b9571/img-47a78b2331_hu_e909c2c5e74ae0a.jpeg 1600w, https://lumigallerys.com/posts/note-36e23b9571/img-47a78b2331_hu_cac3ab15b814854f.jpeg 2400w, https://lumigallerys.com/posts/note-36e23b9571/img-47a78b2331.jpeg 2930w&#34; width=&#34;2930&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Software 3.0 products should evolve similarly: assistance → enhancement → high autonomy/full autonomy&lt;/strong&gt;, rather than achieving it all at once.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 16&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;430px&#34; data-flex-grow=&#34;179&#34; height=&#34;1634&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 
1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-b0da8d6973.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-b0da8d6973_hu_a7828e41bb2b2f9c.jpeg 800w, https://lumigallerys.com/posts/note-36e23b9571/img-b0da8d6973_hu_3ce31e7e09a91993.jpeg 1600w, https://lumigallerys.com/posts/note-36e23b9571/img-b0da8d6973_hu_ceefc55a03631d2b.jpeg 2400w, https://lumigallerys.com/posts/note-36e23b9571/img-b0da8d6973.jpeg 2934w&#34; width=&#34;2934&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;upgrading-systems-for-agents&#34;&gt;Upgrading Systems for Agents&#xA;&lt;/h2&gt;&lt;p&gt;&lt;img alt=&#34;Image 17&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;429px&#34; data-flex-grow=&#34;179&#34; height=&#34;1328&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-4cbfc884a9.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-4cbfc884a9_hu_f896b19d7cc3ba72.jpeg 800w, https://lumigallerys.com/posts/note-36e23b9571/img-4cbfc884a9_hu_e106527505848e2c.jpeg 1600w, https://lumigallerys.com/posts/note-36e23b9571/img-4cbfc884a9.jpeg 2378w&#34; width=&#34;2378&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Documentation and Interfaces&lt;/strong&gt;: Write system documentation for LLMs (not just for humans), providing &lt;strong&gt;llms.txt&lt;/strong&gt;, structured/Markdown-friendly interface descriptions, deterministic calling conventions, and clear input/output examples.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 18&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;427px&#34; data-flex-grow=&#34;178&#34; height=&#34;1650&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; 
src=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-1dc6796d9b.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-1dc6796d9b_hu_16075aa7cc96f242.jpeg 800w, https://lumigallerys.com/posts/note-36e23b9571/img-1dc6796d9b_hu_911298041d045004.jpeg 1600w, https://lumigallerys.com/posts/note-36e23b9571/img-1dc6796d9b_hu_8c8d139423141f3.jpeg 2400w, https://lumigallerys.com/posts/note-36e23b9571/img-1dc6796d9b.jpeg 2940w&#34; width=&#34;2940&#34;&gt;&lt;img alt=&#34;Image 19&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;431px&#34; data-flex-grow=&#34;179&#34; height=&#34;1628&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-c1ae18a722.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-c1ae18a722_hu_2045db80b1746dc3.jpeg 800w, https://lumigallerys.com/posts/note-36e23b9571/img-c1ae18a722_hu_6bee9202444a3f4.jpeg 1600w, https://lumigallerys.com/posts/note-36e23b9571/img-c1ae18a722_hu_2df9b1d9459119ee.jpeg 2400w, https://lumigallerys.com/posts/note-36e23b9571/img-c1ae18a722.jpeg 2930w&#34; width=&#34;2930&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Protocols and Context Pipelines&lt;/strong&gt;: Adopt more &lt;strong&gt;standardized tool protocols&lt;/strong&gt; (like the &lt;strong&gt;MCP&lt;/strong&gt; concept he mentioned) and &lt;strong&gt;context builders&lt;/strong&gt; (e.g., tools that feed codebases/knowledge bases to agents), &lt;strong&gt;reducing the cost of agents exploring in the dark.&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 20&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;427px&#34; data-flex-grow=&#34;178&#34; height=&#34;1342&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; 
src=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-805cfa5d76.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-805cfa5d76_hu_51c46d0bbd96bbeb.jpeg 800w, https://lumigallerys.com/posts/note-36e23b9571/img-805cfa5d76_hu_2fc12774ea7ac8eb.jpeg 1600w, https://lumigallerys.com/posts/note-36e23b9571/img-805cfa5d76.jpeg 2392w&#34; width=&#34;2392&#34;&gt;&lt;img alt=&#34;Image 21&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;433px&#34; data-flex-grow=&#34;180&#34; height=&#34;1626&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-bfdd8f2dd5.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-bfdd8f2dd5_hu_5543e280ebefdce.jpeg 800w, https://lumigallerys.com/posts/note-36e23b9571/img-bfdd8f2dd5_hu_bbdd84987244e099.jpeg 1600w, https://lumigallerys.com/posts/note-36e23b9571/img-bfdd8f2dd5_hu_1a0c78ccc8fdec95.jpeg 2400w, https://lumigallerys.com/posts/note-36e23b9571/img-bfdd8f2dd5.jpeg 2934w&#34; width=&#34;2934&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;education-and-expanding-access-everyone-can-program-in-english&#34;&gt;Education and Expanding Access: Everyone Can &amp;ldquo;Program in English&amp;rdquo;&#xA;&lt;/h2&gt;&lt;p&gt;LLMs can act as both &lt;strong&gt;Suit&lt;/strong&gt; (enhancement) and &lt;strong&gt;Robot&lt;/strong&gt; (full agent); in the short term, the former is more promising.&lt;/p&gt;&#xA;&lt;p&gt;Using the viral tweet about &amp;ldquo;Vibe Coding&amp;rdquo; as an example:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;One can create an iOS app in a day without knowing Swift.&lt;/li&gt;&#xA;&lt;li&gt;Created &lt;strong&gt;MenuGen&lt;/strong&gt; (which generates dish images from menus) in just a few hours of coding; the real time-consuming part was DevOps (logging in, payment, deployment).&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 
22&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;430px&#34; data-flex-grow=&#34;179&#34; height=&#34;1338&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-9b42031406.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-9b42031406_hu_32e0e2c0cf7a43cd.jpeg 800w, https://lumigallerys.com/posts/note-36e23b9571/img-9b42031406_hu_d1a85d3ac7830e98.jpeg 1600w, https://lumigallerys.com/posts/note-36e23b9571/img-9b42031406.jpeg 2400w&#34; width=&#34;2400&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Children&amp;rsquo;s Vibe Coding videos reinforce his belief that natural language programming is a &amp;ldquo;gateway drug,&amp;rdquo; unlocking a vast new demographic.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 23&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;433px&#34; data-flex-grow=&#34;180&#34; height=&#34;1626&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-1e7bbcfa86.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-36e23b9571/img-1e7bbcfa86_hu_56246128f468f926.jpeg 800w, https://lumigallerys.com/posts/note-36e23b9571/img-1e7bbcfa86_hu_82bc835212094a8.jpeg 1600w, https://lumigallerys.com/posts/note-36e23b9571/img-1e7bbcfa86_hu_c873c4af7f1b79a1.jpeg 2400w, https://lumigallerys.com/posts/note-36e23b9571/img-1e7bbcfa86.jpeg 2934w&#34; width=&#34;2934&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;conclusion-the-agent-era-on-a-decade-scale&#34;&gt;Conclusion: The Agent Era on a Decade Scale&#xA;&lt;/h2&gt;&lt;p&gt;Karpathy uses the metaphor of Iron Man&amp;rsquo;s suit: &lt;strong&gt;LLMs are amplifiers of human capabilities; however, the real transformation won&amp;rsquo;t happen in a year or two but resembles a decade-long 
evolution.&lt;/strong&gt; We need to design transitional forms of &amp;ldquo;partial autonomy&amp;rdquo; at the product level, using &lt;strong&gt;rapid generation ↔ verification loops to tame these systems into reliable, controllable tools.&lt;/strong&gt;&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>Vibe Coding: The Future of AI-Driven Creation</title>
            <link>https://lumigallerys.com/posts/note-5110bb1fca/</link>
            <pubDate>Thu, 24 Jul 2025 00:00:00 +0000</pubDate>
            <guid>https://lumigallerys.com/posts/note-5110bb1fca/</guid>
            <description>&lt;h2 id=&#34;introduction&#34;&gt;Introduction&#xA;&lt;/h2&gt;&lt;p&gt;As Arthur C. Clarke famously said, &amp;ldquo;Any sufficiently advanced technology is indistinguishable from magic.&amp;rdquo; In the AI era, a new phenomenon known as &lt;strong&gt;Vibe Coding&lt;/strong&gt; has emerged.&lt;/p&gt;&#xA;&lt;p&gt;Recently, a controversial acquisition in the AI industry brought this trend to the forefront: the AI coding startup Windsurf was expected to be acquired by OpenAI but was instead snatched by Google DeepMind for $2.4 billion, taking its core founding team and talent. This fierce competition among tech giants for AI coding talent and technology has focused the industry&amp;rsquo;s attention on the increasingly mainstream Vibe Coding.&lt;/p&gt;&#xA;&lt;h2 id=&#34;what-is-vibe-coding&#34;&gt;What is Vibe Coding?&#xA;&lt;/h2&gt;&lt;p&gt;Vibe Coding was first proposed by Andrej Karpathy, co-founder of OpenAI and former head of AI at Tesla, in February 2025. 
It describes a new way of creation: &lt;strong&gt;you can almost forget the existence of code and immerse yourself in a dialogue with AI.&lt;/strong&gt; You simply present your ideas and needs to the AI and accept its solutions; if errors occur, the AI can resolve them on its own.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 1&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;318px&#34; data-flex-grow=&#34;132&#34; height=&#34;735&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-5110bb1fca/img-eab4e69a04.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-5110bb1fca/img-eab4e69a04_hu_bcfaa96edb7fe5ba.jpeg 800w, https://lumigallerys.com/posts/note-5110bb1fca/img-eab4e69a04.jpeg 975w&#34; width=&#34;975&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;In June of this year, legendary music producer Rick Rubin collaborated with Anthropic on a book titled &lt;em&gt;The Way of Code&lt;/em&gt;, merging the concept of Vibe Coding with the philosophies of the &lt;em&gt;Tao Te Ching&lt;/em&gt;, emphasizing intuition, improvisation, and the free flow of creativity. A meme of Rick Rubin, wearing headphones and closing his eyes while creating, has circulated online, humorously depicting today’s Vibe Coders. 
Despite being a top producer for major artists like Jay-Z, Timbaland, and Adele, Rubin doesn&amp;rsquo;t play any instruments, which resonates with the notion that today’s Vibe Coders need not write any code.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 2&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;246px&#34; data-flex-grow=&#34;102&#34; height=&#34;1326&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-5110bb1fca/img-3a0c080817.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-5110bb1fca/img-3a0c080817_hu_c24375a324c2ac80.jpeg 800w, https://lumigallerys.com/posts/note-5110bb1fca/img-3a0c080817.jpeg 1360w&#34; width=&#34;1360&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;However, reducing Vibe Coding to merely a lower-threshold programming method or tool misses its deeper revolutionary implications. &lt;strong&gt;Vibe Coding fundamentally changes the paradigm of human-machine creative relationships, potentially empowering ordinary people with greater creative abilities and fulfilling needs they may not have previously recognized.&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;the-shift-in-human-machine-interaction&#34;&gt;The Shift in Human-Machine Interaction&#xA;&lt;/h2&gt;&lt;p&gt;In traditional programming, the relationship between humans and machines is one of &amp;ldquo;active and passive&amp;rdquo; roles; programmers must issue precise, logically rigorous commands, and machines can only act within the boundaries of those commands and capabilities. This often leaves the command-givers as a select group of technical elites. 
Vibe Coding reshapes this relationship into one of &amp;ldquo;collaboration and interaction,&amp;rdquo; akin to the relationship between a director and a cinematographer, where &amp;ldquo;ordinary people can also be directors.&amp;rdquo; As long as you have a good script in mind, AI acts as a skilled and self-iterating cinematographer, responsible for translating core ideas into precise visual language—composition, lighting, and color. The dynamic process of collision and immediate adjustment between director and cinematographer is closer to the essence of human creativity.&lt;/p&gt;&#xA;&lt;p&gt;This change is rapidly pushing &amp;ldquo;code,&amp;rdquo; once a tool exclusive to a few technical elites, into the hands of the general public. However, as everyone learns to cast &amp;ldquo;magical spells,&amp;rdquo; a core contradiction arises: &lt;strong&gt;as AI generates new forms of content with unprecedented efficiency and quality, we still lack a native, elegant medium to carry and share them.&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;the-vision-of-youware&#34;&gt;The Vision of YouWare&#xA;&lt;/h2&gt;&lt;p&gt;This gap was keenly observed by Ming Chaoping, founder of &lt;strong&gt;YouWare&lt;/strong&gt;. In early March 2025, he noticed many users were awkwardly sharing their creations generated by Grok 3 through screen recording, realizing a significant disconnect: the works produced by AI coding were incompatible with traditional social media platforms. 
This necessitated a new medium.&lt;/p&gt;&#xA;&lt;p&gt;Building on this insight, Ming made another crucial judgment: &lt;strong&gt;Vibe Coding requires not only the creation of better and stronger AI models and tools but also the establishment of a community that allows creativity to flow freely and inspire one another.&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 3&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;801px&#34; data-flex-grow=&#34;333&#34; height=&#34;239&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-5110bb1fca/img-66b413891c.jpeg&#34; width=&#34;798&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Before founding YouWare, Ming Chaoping, born in 1995 and a graduate in automation from Wuhan University, had worked at OnePlus, ByteDance, and Moonshot AI, spanning smart hardware, super apps with hundreds of millions of users, and AI unicorns. At OnePlus, he developed an early understanding of user needs, product aesthetics, and community building. At ByteDance’s Jianying (CapCut), he combined this understanding with scientific methodologies, learning to drive rapid product iteration through data. 
His experiences at Moonshot AI, coupled with a technical background and close proximity to consumer-facing product development, enabled him to efficiently communicate with top researchers and gain a technical vision for designing current products based on future model developments.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 4&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;380px&#34; data-flex-grow=&#34;158&#34; height=&#34;682&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-5110bb1fca/img-e363c15799.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-5110bb1fca/img-e363c15799_hu_4a3394f8dcccedd8.jpeg 800w, https://lumigallerys.com/posts/note-5110bb1fca/img-e363c15799.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;This combination of user understanding, scientific methodology, and technical vision has become YouWare&amp;rsquo;s advantage in addressing the challenges of AI applications today.&lt;/p&gt;&#xA;&lt;h2 id=&#34;youwares-solution-building-a-community-for-creators&#34;&gt;YouWare&amp;rsquo;s Solution: Building a Community for Creators&#xA;&lt;/h2&gt;&#xA;    &lt;blockquote&gt;&#xA;        &lt;p&gt;&amp;ldquo;Alone we can do so little; together we can do so much.&amp;rdquo; — Helen Keller&lt;/p&gt;&#xA;&#xA;    &lt;/blockquote&gt;&#xA;&lt;p&gt;Experiencing YouWare&amp;rsquo;s products reveals that every feature is designed to enable creators to realize their ideas quickly and facilitate the continuous evolution of those ideas, thus fostering a more vibrant community.&lt;/p&gt;&#xA;&lt;p&gt;Behind model capabilities and engineering efficiency, YouWare&amp;rsquo;s approach reflects a &lt;strong&gt;product manager mindset&lt;/strong&gt;.&lt;/p&gt;&#xA;&lt;h3 id=&#34;simple-yet-powerful&#34;&gt;Simple Yet Powerful&#xA;&lt;/h3&gt;&lt;p&gt;A good product should be intuitive enough that even a 
&amp;ldquo;fool&amp;rdquo; can use it. Ordinary users don’t need to see the underlying models, parameters, or performance scores; they only need results and experiences.&lt;/p&gt;&#xA;&lt;p&gt;YouWare&amp;rsquo;s homepage features a simple and understated interface with a dialogue box. Users can describe their ideas in natural language within the dialogue box, generating shareable works without seeing any code.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 5&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;530px&#34; data-flex-grow=&#34;220&#34; height=&#34;489&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-5110bb1fca/img-b55e8e6dbc.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-5110bb1fca/img-b55e8e6dbc_hu_571b118608287fbd.jpeg 800w, https://lumigallerys.com/posts/note-5110bb1fca/img-b55e8e6dbc.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Official website: &lt;a class=&#34;link&#34; href=&#34;https://www.youware.com&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;&#xA;    &gt;https://www.youware.com&lt;/a&gt;&lt;/p&gt;&#xA;&lt;p&gt;Yet simplicity alone is not enough; it must also be powerful. With the enhancement of large model capabilities, the strength of AI agents is rapidly evolving. Can Vibe Coders create applications on the YouWare platform that also possess agent capabilities? The &lt;strong&gt;AI App Generator&lt;/strong&gt; was born: &lt;strong&gt;users can now generate AI-driven applications with just a prompt&lt;/strong&gt;, such as creating a &amp;ldquo;Voxel Style Image Generator.&amp;rdquo; Throughout this process, no API configuration or personal keys are required to call various mainstream models.&lt;/p&gt;&#xA;&lt;p&gt;Currently, only Poe and Claude possess similar capabilities. 
Poe&amp;rsquo;s front end remains a traditional chatbot format, while YouWare can directly generate interactive applications. Claude&amp;rsquo;s Artifact can only use its own models; YouWare includes mainstream large models like OpenAI, Claude, Gemini, and DeepSeek, providing users with maximum possibilities.&lt;/p&gt;&#xA;&lt;p&gt;Similar simplicity and power are also evident in YouWare&amp;rsquo;s recently released VS Code and Cursor plugins, which allow the popular IDE to deploy web pages with a single click. Users simply need to install the plugin from the marketplace, complete authorization, and click &amp;ldquo;Publish Project&amp;rdquo; to publish their HTML and React projects to YouWare.&lt;/p&gt;&#xA;&lt;p&gt;By completely encapsulating the most advanced model capabilities and complex engineering details, YouWare allows users to create more freely, marking the first step in building a community.&lt;/p&gt;&#xA;&lt;h3 id=&#34;creating-delightful-experiences&#34;&gt;Creating Delightful Experiences&#xA;&lt;/h3&gt;&lt;p&gt;Building products isn’t always about disruptive innovation from scratch. Often, what truly captivates users are seemingly trivial details in the experience.&lt;/p&gt;&#xA;&lt;p&gt;When Ming discovered a Korean user using the Boost feature over 60 times in a day, it validated some of his judgments: &lt;strong&gt;small and interesting experiences can become users&amp;rsquo; &amp;ldquo;delight points.&amp;rdquo;&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Boost&lt;/strong&gt; (one-click beautification) is a feature on the YouWare platform similar to Instagram filters, allowing a rough &amp;ldquo;draft&amp;rdquo; to be quickly enhanced into a more aesthetically pleasing work. 
It has become one of the most popular features, especially favored by users in Japan and Korea.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 6&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;411px&#34; data-flex-grow=&#34;171&#34; height=&#34;630&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-5110bb1fca/img-cb99ac4112.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-5110bb1fca/img-cb99ac4112_hu_6e87aa5473f4a2ee.jpeg 800w, https://lumigallerys.com/posts/note-5110bb1fca/img-cb99ac4112.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Comparison of Boost before and after for the &amp;ldquo;Time Travel Outfit Consultant&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;Additionally, compared to the single like button on social platforms, YouWare employs an &lt;strong&gt;emoji interaction mechanism&lt;/strong&gt;, fostering a friendlier interactive atmosphere.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 7&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;369px&#34; data-flex-grow=&#34;153&#34; height=&#34;702&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-5110bb1fca/img-d92731d587.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-5110bb1fca/img-d92731d587_hu_d22c0f38ad96c71e.jpeg 800w, https://lumigallerys.com/posts/note-5110bb1fca/img-d92731d587.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;YouWare users can use emojis to express their attitudes towards projects.&lt;/p&gt;&#xA;&lt;h3 id=&#34;creator-centric-approach&#34;&gt;Creator-Centric Approach&#xA;&lt;/h3&gt;&lt;p&gt;The YouWare team maintains high-frequency interactions with community creators. 
When they noticed many creators eager to develop AI-driven applications but facing challenges in obtaining API keys—due to the cumbersome application process, the risk of exposing keys, and high costs—YouWare swiftly developed the AI App Generator to address these issues and provide multiple mainstream large models, supporting both text and image generation.&lt;/p&gt;&#xA;&lt;p&gt;This sends a clear signal to the community: &lt;strong&gt;YouWare is a community willing to co-create with users.&lt;/strong&gt; When users know their voices will be heard and responded to quickly, they are more likely to engage. In just over four months since launch, YouWare has gathered &lt;strong&gt;100,000&lt;/strong&gt; creative Vibe Coders and accumulated &lt;strong&gt;300,000&lt;/strong&gt; projects.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 8&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;23px&#34; data-flex-grow=&#34;9&#34; height=&#34;10309&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-5110bb1fca/img-9562573ea7.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-5110bb1fca/img-9562573ea7_hu_8cab2f894adbc412.jpeg 800w, https://lumigallerys.com/posts/note-5110bb1fca/img-9562573ea7.jpeg 1017w&#34; width=&#34;1017&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;discovering-unmet-needs-through-play&#34;&gt;Discovering Unmet Needs Through Play&#xA;&lt;/h2&gt;&#xA;    &lt;blockquote&gt;&#xA;        &lt;p&gt;&amp;ldquo;Play is the highest form of research.&amp;rdquo; — Albert Einstein&lt;/p&gt;&#xA;&#xA;    &lt;/blockquote&gt;&#xA;&lt;p&gt;In the past month, YouWare produced a viral hit: an interactive birthday card. Previously, no one imagined a birthday card could transform into a shareable, interactive webpage. 
However, when creators shared it on TikTok, the video quickly went viral, leading to a chain reaction where many users began creating birthday cards, romantic letters, and anniversary cards using YouWare.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 9&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;395px&#34; data-flex-grow=&#34;164&#34; height=&#34;655&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-5110bb1fca/img-8446957ff2.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-5110bb1fca/img-8446957ff2_hu_1a010525fd5af250.jpeg 800w, https://lumigallerys.com/posts/note-5110bb1fca/img-8446957ff2.jpeg 1079w&#34; width=&#34;1079&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;User Mayz&amp;rsquo;s birthday card: &lt;a class=&#34;link&#34; href=&#34;https://www.youware.com/project/pcg3u1p14y&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;&#xA;    &gt;https://www.youware.com/project/pcg3u1p14y&lt;/a&gt;&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Compared to traditional programming, Vibe Coding is more accessible to ordinary users. 
Creating a community rich in creativity lets users play, and play is what surfaces the real needs of ordinary users; the more the community atmosphere respects creators, the stronger the chain from creativity to demand becomes.&lt;/strong&gt; On July 22, YouWare topped the Product Hunt daily chart, further validating the team&amp;rsquo;s grasp of the breakthrough path for Vibe Coding.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 10&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;240px&#34; data-flex-grow=&#34;100&#34; height=&#34;1080&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-5110bb1fca/img-98a9918601.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-5110bb1fca/img-98a9918601_hu_3bbf611994c057c8.jpeg 800w, https://lumigallerys.com/posts/note-5110bb1fca/img-98a9918601.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;During the trial process, Machine Heart created a &amp;ldquo;Future Life Visa System.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;Prompt:&lt;/p&gt;&#xA;&#xA;    &lt;blockquote&gt;&#xA;        &lt;p&gt;Create a &amp;ldquo;Future Life Visa System&amp;rdquo; web application: users must go through a simulated &amp;ldquo;future life immigration interview&amp;rdquo; to ultimately obtain a residence permit for a future society. 
The entire experience includes Q&amp;amp;A, multiple-choice questions, personality tests, result generation, and visa style display.&lt;/p&gt;&#xA;&#xA;    &lt;/blockquote&gt;&#xA;&lt;p&gt;First, after we clicked Create, YouWare automatically provided supplementary suggestions, which could be accepted or modified based on individual needs.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 11&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;488px&#34; data-flex-grow=&#34;203&#34; height=&#34;530&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-5110bb1fca/img-e35918ee1b.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-5110bb1fca/img-e35918ee1b_hu_ebf03016ba260651.jpeg 800w, https://lumigallerys.com/posts/note-5110bb1fca/img-e35918ee1b.jpeg 1079w&#34; width=&#34;1079&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Using the original prompt, YouWare first deeply analyzed our intentions, then clearly broke our concept down into a To-Do List, allowing us to see the AI&amp;rsquo;s thought process and develop stable expectations for the final result.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 12&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;429px&#34; data-flex-grow=&#34;178&#34; height=&#34;606&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-5110bb1fca/img-c9f87d1d12.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-5110bb1fca/img-c9f87d1d12_hu_57a9d7ec0c5fa339.jpeg 800w, https://lumigallerys.com/posts/note-5110bb1fca/img-c9f87d1d12.jpeg 1084w&#34; width=&#34;1084&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;After confirming the To-Do List, YouWare&amp;rsquo;s programming agent began working.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 13&#34; class=&#34;gallery-image&#34; 
data-flex-basis=&#34;2408px&#34; data-flex-grow=&#34;1003&#34; height=&#34;56&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-5110bb1fca/img-f3ac09f311.jpeg&#34; width=&#34;562&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Minutes later, a fully functional prototype of the &amp;ldquo;Future Life Visa System&amp;rdquo; was born. While the prototype was usable, it appeared somewhat rudimentary; with just one click on Boost, it could be transformed into a more aesthetically pleasing product.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 14&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;547px&#34; data-flex-grow=&#34;228&#34; height=&#34;473&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-5110bb1fca/img-ed419c8b7e.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-5110bb1fca/img-ed419c8b7e_hu_c9272c0d38b3fff2.jpeg 800w, https://lumigallerys.com/posts/note-5110bb1fca/img-ed419c8b7e.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Comparison of Boost before and after&lt;/p&gt;&#xA;&lt;p&gt;After completion, clicking publish leads to YouWare&amp;rsquo;s community square. 
YouWare offers two forms of sharing: one is a full-screen short link, suitable for direct sharing; the other is a YouWare community link, which gives the work a social attribute, allowing anyone to express opinions, comment, and suggest via emojis, and enabling other users to remix.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 15&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;506px&#34; data-flex-grow=&#34;210&#34; height=&#34;512&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-5110bb1fca/img-ba66ea8730.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-5110bb1fca/img-ba66ea8730_hu_b34471542fef866f.jpeg 800w, https://lumigallerys.com/posts/note-5110bb1fca/img-ba66ea8730.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Throughout the experience, our biggest impression was that YouWare is not a cold command-execution tool but &lt;strong&gt;rather a fully capable, understanding, and tastefully engaging creative partner; it is simple and efficient while willing to return control to creators as much as possible.&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;the-second-half-of-ai&#34;&gt;The Second Half of AI&#xA;&lt;/h2&gt;&lt;p&gt;&lt;strong&gt;Product Managers Return to the Center Stage&lt;/strong&gt;&lt;/p&gt;&#xA;&#xA;    &lt;blockquote&gt;&#xA;        &lt;p&gt;&amp;ldquo;The sharpness of the sword lies in the hands of the wielder.&amp;rdquo; — Western Proverb&lt;/p&gt;&#xA;&#xA;    &lt;/blockquote&gt;&#xA;&lt;p&gt;The rise of magic also requires those adept at wielding it.&lt;/p&gt;&#xA;&lt;p&gt;In recent years, AI development has primarily focused on model parameters, with the spotlight on scientists and researchers. 
Now, as AI technology becomes increasingly commoditized and powerful models become accessible through APIs, a critical shift is occurring.&lt;/p&gt;&#xA;&lt;p&gt;YouWare&amp;rsquo;s practice reveals an important trend: &lt;strong&gt;in the second half of AI, the ability to &amp;ldquo;apply AI technology&amp;rdquo; will become as important as the ability to &amp;ldquo;create large models.&amp;rdquo;&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;This calls for a new generation of AI product managers to take responsibility for defining &amp;ldquo;what problems are worth solving with AI&amp;rdquo; and for answering them in the most human-centered way.&lt;/p&gt;&#xA;&lt;p&gt;The future will not be found in cloud-bound parameters or a distant singularity; as the fervor for pure technology gradually wanes, the products that truly return to users, build ecosystems, and create value will be the most anticipated transformations AI brings.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 16&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;512px&#34; data-flex-grow=&#34;213&#34; height=&#34;506&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-5110bb1fca/img-bae5ef934c.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-5110bb1fca/img-bae5ef934c_hu_b10c25f907b29ca2.jpeg 800w, https://lumigallerys.com/posts/note-5110bb1fca/img-bae5ef934c.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;YouWare is currently hosting an &lt;strong&gt;AI APP Challenge&lt;/strong&gt;, using a real event to further energize the community. Interested readers should not miss out (deadline July 31).&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>Comprehensive Guide to Understanding Large Language Models</title>
            <link>https://lumigallerys.com/posts/note-2870aba5cd/</link>
            <pubDate>Tue, 22 Oct 2024 00:00:00 +0000</pubDate>
            <guid>https://lumigallerys.com/posts/note-2870aba5cd/</guid>
            <description>&lt;p&gt;Last week, while sharing the article &amp;ldquo;My Journey to Becoming an AI Product Manager,&amp;rdquo; I hinted that I would produce a comprehensive piece to help everyone systematically learn about large models. Today, I am delivering that article; it totals 22,000 words and is expected to take about 30 minutes to read, covering 15 topics related to large models.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 1&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;514px&#34; data-flex-grow=&#34;214&#34; height=&#34;420&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-2870aba5cd/img-a5ca106ff9.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-2870aba5cd/img-a5ca106ff9_hu_e38a15fb795d1bf8.jpeg 800w, https://lumigallerys.com/posts/note-2870aba5cd/img-a5ca106ff9.jpeg 900w&#34; width=&#34;900&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;In the past year, there has been an overwhelming number of articles introducing and explaining large models. Most people already have some foundational knowledge, but I feel that &lt;strong&gt;this information is too fragmented and lacks a systematic understanding&lt;/strong&gt;. Currently, there is no article that comprehensively explains what large models are in one go.&lt;/p&gt;&#xA;&lt;p&gt;To alleviate my own cognitive anxiety, I decided to summarize the knowledge I have gained about large models over the past year into this article. &lt;strong&gt;I hope to clarify my understanding of large models through this single article&lt;/strong&gt;, which serves as a testament to my extensive learning.&lt;/p&gt;&#xA;&lt;h2 id=&#34;what-will-i-share&#34;&gt;What Will I Share?&#xA;&lt;/h2&gt;&lt;p&gt;This article will share 15 topics related to large models. 
Originally, there were 20 topics, but I removed some that were more technical and focused on issues that ordinary users or product managers should pay attention to. The goal is that, as AI novices, we only need to master and understand these key points.&lt;/p&gt;&#xA;&lt;h2 id=&#34;who-is-this-for&#34;&gt;Who Is This For?&#xA;&lt;/h2&gt;&lt;p&gt;This article is suitable for the following groups of friends:&lt;/p&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;Those who want to understand what large models are all about.&lt;/li&gt;&#xA;&lt;li&gt;Individuals looking to transition into AI-related products and roles, including product managers and operations personnel.&lt;/li&gt;&#xA;&lt;li&gt;Those who have a basic understanding of AI but wish to advance their knowledge and reduce cognitive anxiety about AI.&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;p&gt;Content Disclaimer: The entire content is a result of my personal synthesis after extensive reading and digestion of numerous expert articles, books related to large models, and consultations with industry experts. I primarily serve as a knowledge synthesizer; if any descriptions are incorrect, please feel free to let me know!&lt;/p&gt;&#xA;&lt;h2 id=&#34;lecture-1-understanding-common-concepts-of-large-models&#34;&gt;Lecture 1: Understanding Common Concepts of Large Models&#xA;&lt;/h2&gt;&lt;p&gt;Before diving into large models, let’s first understand some foundational concepts. Grasping these professional terms and their relationships will benefit your subsequent reading and learning of any AI and large model-related content. I spent considerable time organizing their relationships, so please read this section carefully.&lt;/p&gt;&#xA;&lt;h3 id=&#34;1-common-ai-terms&#34;&gt;1. 
Common AI Terms&#xA;&lt;/h3&gt;&lt;p&gt;&lt;strong&gt;1) Large Model (LLM):&lt;/strong&gt; In current usage, &amp;ldquo;large model&amp;rdquo; almost always refers to large language models, specifically generative large models; practical examples include GPT-4.0 and GPT-4o.&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Deep Learning:&lt;/strong&gt; A subfield of machine learning focused on using multi-layer neural networks for learning. Deep learning excels at processing complex data such as images, audio, and text, making it highly effective in AI applications.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Supervised Learning:&lt;/strong&gt; A machine learning method where the model learns the mapping from input to output using a labeled dataset. Common algorithms include linear regression, logistic regression, support vector machines, K-nearest neighbors, decision trees, and random forests.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Unsupervised Learning:&lt;/strong&gt; A machine learning method that discovers patterns and structures in data without labeled data. Common algorithms include K-means clustering, hierarchical clustering, DBSCAN, principal component analysis (PCA), and t-SNE.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Semi-supervised Learning:&lt;/strong&gt; Combines a small amount of labeled data with a large amount of unlabeled data for training. It leverages the rich information from unlabeled data and the accuracy of labeled data to improve model performance. Common methods include Generative Adversarial Networks (GANs) and autoencoders.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Reinforcement Learning:&lt;/strong&gt; A method that learns optimal strategies through interaction with the environment, based on reward and punishment mechanisms. Common algorithms include Q-learning, policy gradients, and Deep Q-Networks (DQN).&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Model Architecture:&lt;/strong&gt; Represents the design of the backbone of the large model. 
Different architectures affect the model&amp;rsquo;s performance, efficiency, and computational costs, and determine the model&amp;rsquo;s scalability.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Transformer Architecture:&lt;/strong&gt; The mainstream architecture used by most large models, including GPT-4.0 and many domestic large models. The widespread use of the Transformer architecture is mainly because it enables large models to understand human natural language, maintain contextual memory, and generate text.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;MOE Architecture:&lt;/strong&gt; Stands for Mixture of Experts architecture, which combines various expert models to form a massive model capable of addressing multiple complex professional problems.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Machine Learning Techniques:&lt;/strong&gt; A broad category of techniques that enable AI, including deep learning, supervised learning, and reinforcement learning. As a product manager, you don’t need to delve too deeply into these; just understand the relationships between these methods.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;NLP Technology (Natural Language Processing):&lt;/strong&gt; A field of AI focused on enabling computers to understand, interpret, and generate human language for applications like text analysis, machine translation, speech recognition, and dialogue systems.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;CV Technology (Computer Vision):&lt;/strong&gt; If NLP deals with text, CV addresses visual content-related technologies, including common image recognition, video analysis, and image segmentation techniques.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Speech Recognition and Synthesis Technology:&lt;/strong&gt; Includes converting speech to text and synthesizing speech, such as Text-to-Speech (TTS) technology.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Retrieval-Augmented Generation (RAG):&lt;/strong&gt; Refers to the technology where large models generate content based on 
information retrieved from search engines and knowledge bases, commonly involved in AI applications.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Knowledge Graph:&lt;/strong&gt; A technology that connects knowledge, allowing models to better and faster access the most relevant information, thereby enhancing their ability to process complex associative information and AI reasoning.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Function Call:&lt;/strong&gt; In large language models (like GPT), it refers to calling built-in or external functions to perform specific tasks or operations. This mechanism allows models to execute diverse and specific operations beyond mere text generation.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;&lt;strong&gt;2) Terms Related to Large Model Training and Optimization Techniques&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Pre-training:&lt;/strong&gt; The process of training a model on a large dataset, typically diverse, to obtain a model with strong general capabilities.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Fine-tuning:&lt;/strong&gt; Further training a large model on specific tasks or smaller datasets to improve its performance on targeted issues, using vertical domain data.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Prompt Engineering:&lt;/strong&gt; In product manager terms, it refers to crafting questions in a way that the large model can easily understand, enhancing the input for desired results.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Model Distillation:&lt;/strong&gt; A technique that transfers knowledge from a large model (teacher model) to a smaller model (student model) to improve performance while retaining much of the large model&amp;rsquo;s accuracy.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Model Pruning:&lt;/strong&gt; The process of removing unnecessary parameters from a large model to reduce its overall size and computational costs.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;&lt;strong&gt;3) AI Application-Related 
Terms&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Agent:&lt;/strong&gt; An AI application with a specific capability, akin to how applications in the internet era were called apps.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Chatbot:&lt;/strong&gt; Refers to AI chatbots, a type of AI application that interacts through conversation, including products like ChatGPT.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;&lt;strong&gt;4) Terms Related to Large Model Performance&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Emergence:&lt;/strong&gt; Refers to the phenomenon where a large model exhibits capabilities beyond expectations once its parameter scale reaches a certain threshold.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Hallucination:&lt;/strong&gt; Indicates instances where a large model generates nonsensical content, mistakenly treating incorrect facts as true, leading to unrealistic outputs.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Amnesia:&lt;/strong&gt; Refers to the situation where, after a certain number of dialogue turns or length, the model suddenly forgets previous context, leading to repetition and memory loss.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h2 id=&#34;2-understanding-the-relationship-between-ai-machine-learning-deep-learning-and-nlp&#34;&gt;2. Understanding the Relationship Between AI, Machine Learning, Deep Learning, and NLP&#xA;&lt;/h2&gt;&lt;p&gt;If you are interested in AI and large models, you will inevitably encounter keywords like &lt;strong&gt;&amp;ldquo;AI,&amp;rdquo; &amp;ldquo;Machine Learning,&amp;rdquo; &amp;ldquo;Deep Learning,&amp;rdquo; &amp;ldquo;NLP&amp;rdquo;&lt;/strong&gt; in your future studies. 
Therefore, it’s best to clarify these professional terms and their logical relationships to facilitate easier understanding.&lt;/p&gt;&#xA;&lt;p&gt;In summary, the relationships between these concepts are as follows:&lt;/p&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;Machine learning is a core technology of AI, alongside expert systems and Bayesian networks (no need to delve into these).&lt;/li&gt;&#xA;&lt;li&gt;NLP is a type of application task within AI focused on natural language processing, while AI&amp;rsquo;s application technologies also include CV technology, speech recognition, and synthesis.&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 2&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;261px&#34; data-flex-grow=&#34;108&#34; height=&#34;638&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-2870aba5cd/img-ddba40d92a.jpeg&#34; width=&#34;695&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;3-understanding-the-transformer-architecture&#34;&gt;3. Understanding the Transformer Architecture&#xA;&lt;/h2&gt;&lt;p&gt;When discussing large models, one cannot overlook the Transformer architecture. If large models are like a tree, the Transformer architecture serves as the trunk. The emergence of products like ChatGPT is primarily due to the design of the Transformer architecture, which enables models to understand context, maintain memory, and predict new words. Moreover, the Transformer allows large models to train on unlabeled data, eliminating the need for extensive labeled data preparation.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Relationship Between Transformer Architecture and Deep Learning Technology:&lt;/strong&gt; The Transformer architecture is a type of neural network architecture within the deep learning field. 
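To make the trunk metaphor concrete, the self-attention step at the heart of the Transformer can be sketched in a few lines of toy numpy. All names and shapes here are illustrative assumptions, not code from any real model:

```python
# Toy scaled dot-product self-attention: each token gathers information
# from every other token, weighted by learned similarity.
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model) token embeddings; returns context-mixed vectors."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v            # queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])        # pairwise token similarity
    scores = scores - scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v                             # weighted mix of value vectors

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))                   # 4 tokens, 8-dim embeddings
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(tokens, w_q, w_k, w_v)
print(out.shape)                                   # one mixed vector per token
```

Real Transformers stack many such layers, each with multiple attention heads, feed-forward blocks, and positional information, but the weighted mixing shown here is the mechanism that lets the model relate every token to its context.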
Other architectures include traditional Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks.&lt;/p&gt;&#xA;&lt;h2 id=&#34;4-understanding-the-relationship-between-transformer-architecture-and-gpt&#34;&gt;4. Understanding the Relationship Between Transformer Architecture and GPT&#xA;&lt;/h2&gt;&lt;p&gt;GPT stands for Generative Pre-trained Transformer, meaning GPT is a large language model developed based on the Transformer architecture by OpenAI. The core idea of GPT is to enhance the ability to generate and understand natural language through &lt;strong&gt;large-scale pre-training and fine-tuning&lt;/strong&gt;. The introduction of the Transformer architecture has significantly improved the model&amp;rsquo;s ability to understand context, process large datasets, and predict text.&lt;/p&gt;&#xA;&lt;h3 id=&#34;key-differences&#34;&gt;Key Differences:&#xA;&lt;/h3&gt;&lt;ol&gt;&#xA;&lt;li&gt;&lt;strong&gt;Capability Differences:&lt;/strong&gt; The Transformer architecture enables models to understand context and process large data but does not inherently possess the ability to understand or generate natural language. In contrast, GPT enhances this capability through pre-training on natural language data.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Architectural Basis:&lt;/strong&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Transformer:&lt;/strong&gt; The original Transformer model consists of an encoder and a decoder, where the encoder processes the input sequence and generates intermediate representations, while the decoder generates output sequences based on these representations. This architecture is particularly suited for sequence-to-sequence tasks like machine translation.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;GPT:&lt;/strong&gt; GPT primarily uses the decoder part of the Transformer, focusing on generation tasks. 
It employs unidirectional processing, where each word can only see the preceding words, aligning with the natural format of language models.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Implementation of Specific Problem-Solving:&lt;/strong&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;The Transformer is trained for specific tasks, optimizing its performance through simultaneous training of the encoder and decoder.&lt;/li&gt;&#xA;&lt;li&gt;GPT, on the other hand, achieves task-specific performance through supervised fine-tuning, requiring only task-specific data without extensive training for each task.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Application Domains:&lt;/strong&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;The traditional Transformer framework can be applied to various sequence-to-sequence tasks, while GPT is primarily used for generation tasks, excelling in generating coherent and creative text.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;h2 id=&#34;5-understanding-the-moe-architecture&#34;&gt;5. Understanding the MOE Architecture&#xA;&lt;/h2&gt;&lt;p&gt;In addition to the Transformer architecture, another popular architecture is the MOE (Mixture of Experts) architecture, which dynamically selects and combines multiple sub-models (experts) to complete tasks. The key idea of MOE is to solve a range of complex tasks by combining multiple expert models rather than relying on a single large model.&lt;/p&gt;&#xA;&lt;p&gt;The main advantage of the MOE architecture is its ability to maintain computational efficiency while handling large-scale data and model parameters, significantly reducing computational costs without sacrificing model capability.&lt;/p&gt;&#xA;&lt;p&gt;Transformer and MOE can be used together, often referred to as MOE-Transformer or Sparse Mixture of Experts Transformer. 
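The routing idea behind MOE can be sketched as a toy example. The function name, the shapes, and the use of plain matrices as experts are all illustrative assumptions, not a production implementation:

```python
# Toy mixture-of-experts routing: a gate scores the experts and only the
# top-k of them run, so most parameters stay idle for any given input.
import numpy as np

def moe_forward(x, experts, gate_w, k=2):
    """Route one token vector x through the top-k of several expert matrices."""
    logits = gate_w @ x                       # one gating score per expert
    chosen = np.argsort(logits)[-k:]          # indices of the k best-scoring experts
    gate = np.exp(logits[chosen] - logits[chosen].max())
    gate = gate / gate.sum()                  # softmax over the chosen experts only
    # only the selected experts do any work; the rest are skipped entirely
    return sum(g * (experts[i] @ x) for g, i in zip(gate, chosen))

rng = np.random.default_rng(1)
experts = [rng.normal(size=(8, 8)) for _ in range(4)]   # 4 toy experts
gate_w = rng.normal(size=(4, 8))                        # gating network weights
y = moe_forward(rng.normal(size=8), experts, gate_w)
print(y.shape)
```

Because only k of the experts run for each input, the total parameter count can grow much faster than the per-token compute, which is the efficiency advantage described above.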
In this architecture:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;The Transformer processes input data, leveraging its powerful self-attention mechanism to capture dependencies in sequences.&lt;/li&gt;&#xA;&lt;li&gt;MOE dynamically selects and combines different experts to enhance computational efficiency and capability.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h2 id=&#34;lecture-2-differences-between-large-models-and-traditional-models&#34;&gt;Lecture 2: Differences Between Large Models and Traditional Models&#xA;&lt;/h2&gt;&lt;p&gt;When we talk about large models, we usually refer to LLMs (Large Language Models), specifically those based on the generative pre-trained Transformer architecture like GPT. These models primarily address natural language tasks, unlike traditional models that may focus on images, videos, or speech. Moreover, LLMs are generative models, meaning their main capability is generation rather than prediction or decision-making.&lt;/p&gt;&#xA;&lt;p&gt;In contrast to traditional models, large models exhibit the following characteristics:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Ability to Understand and Generate Natural Language:&lt;/strong&gt; Many traditional models may not understand human natural language, let alone generate it.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Powerful and Versatile:&lt;/strong&gt; Traditional models often solve one or a few specific problems, while large models can tackle a wide range of issues.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Contextual Memory:&lt;/strong&gt; Large models possess memory capabilities, allowing them to relate to previous dialogue, unlike many traditional models.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Training Method:&lt;/strong&gt; Large models are pre-trained on vast amounts of unlabeled text, significantly reducing the need for labeled data compared to traditional models.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Massive Parameter Scale:&lt;/strong&gt; Most large models have parameter scales in 
the hundreds of billions, such as GPT-3 with 175 billion parameters, while GPT-4.0 is rumored to reach trillions of parameters.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;High Computational Resource Requirements:&lt;/strong&gt; Due to their scale and complexity, these models require significant computational resources for training and inference.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h2 id=&#34;lecture-3-evolution-of-large-models&#34;&gt;Lecture 3: Evolution of Large Models&#xA;&lt;/h2&gt;&lt;h3 id=&#34;1-evolution-of-generative-capabilities-in-llms&#34;&gt;1. Evolution of Generative Capabilities in LLMs&#xA;&lt;/h3&gt;&lt;p&gt;Understanding the evolution of LLMs helps clarify how large models have developed their current capabilities and better understand the relationship between LLMs and Transformers:&lt;/p&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;&lt;strong&gt;N-gram:&lt;/strong&gt; The earliest stage of generative capability, primarily solving the prediction of the next word, but limited in understanding context and grammatical structure.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;RNN and LSTM:&lt;/strong&gt; These models addressed the issue of context length, enabling longer contextual windows but struggled with large data processing.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Transformer:&lt;/strong&gt; Combines the predictive capabilities of previous models while supporting training on large datasets but lacks natural language understanding and generation.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;LLM:&lt;/strong&gt; Adopts the GPT pre-training and supervised fine-tuning approach, enabling the model to understand and generate natural language.&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 3&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;900px&#34; data-flex-grow=&#34;375&#34; height=&#34;288&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; 
src=&#34;https://lumigallerys.com/posts/note-2870aba5cd/img-847e9cdfef.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-2870aba5cd/img-847e9cdfef_hu_8e2bba945c13ec24.jpeg 800w, https://lumigallerys.com/posts/note-2870aba5cd/img-847e9cdfef.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;h3 id=&#34;2-development-from-gpt-1-to-gpt-4&#34;&gt;2. Development from GPT-1 to GPT-4&#xA;&lt;/h3&gt;&lt;p&gt;&lt;strong&gt;GPT-1:&lt;/strong&gt; Introduced unsupervised training steps, solving the issue of requiring extensive labeled data. However, its small parameter scale (117 million) limited its ability to handle complex tasks without fine-tuning.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;GPT-2:&lt;/strong&gt; Increased parameter scale to 1.5 billion and expanded training text size to 40GB, enhancing model capabilities but still facing limitations with complex problems.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;GPT-3:&lt;/strong&gt; Expanded parameter scale to 175 billion, achieving strong performance in text generation and language understanding while eliminating the need for fine-tuning.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;InstructGPT:&lt;/strong&gt; To address GPT-3&amp;rsquo;s limitations, it added supervised fine-tuning and reinforcement learning from human feedback (RLHF) to optimize performance.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;GPT-3.5:&lt;/strong&gt; Released in March 2022, with training data up to June 2021, featuring a larger dataset of 45TB.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;GPT-4:&lt;/strong&gt; Released in March 2023, significantly enhancing reasoning capabilities and supporting multimodal abilities.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;GPT-4o:&lt;/strong&gt; Released in May 2024, greatly enhancing real-time voice chat capabilities.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;O1:&lt;/strong&gt; OpenAI&amp;rsquo;s O1 model, released in September 2024, focuses on enhancing reasoning capabilities.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 4&#34; 
class=&#34;gallery-image&#34; data-flex-basis=&#34;215px&#34; data-flex-grow=&#34;89&#34; height=&#34;1202&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-2870aba5cd/img-24afa3616c.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-2870aba5cd/img-24afa3616c_hu_35d0cf1ba1345bd5.jpeg 800w, https://lumigallerys.com/posts/note-2870aba5cd/img-24afa3616c.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;lecture-4-principles-of-text-generation-in-large-models&#34;&gt;Lecture 4: Principles of Text Generation in Large Models&#xA;&lt;/h2&gt;&lt;h3 id=&#34;1-how-does-gpt-generate-text&#34;&gt;1. How Does GPT Generate Text?&#xA;&lt;/h3&gt;&lt;p&gt;The process of generating text in large models can be summarized in five steps:&lt;/p&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;Upon receiving a prompt, the model first tokenizes the input content into multiple tokens.&lt;/li&gt;&#xA;&lt;li&gt;It uses the Transformer architecture to understand the relationships between tokens, grasping the overall meaning of the prompt.&lt;/li&gt;&#xA;&lt;li&gt;Based on context, it predicts the next token, potentially generating multiple results, each with a corresponding probability.&lt;/li&gt;&#xA;&lt;li&gt;The token with the highest probability is selected as the predicted next word.&lt;/li&gt;&#xA;&lt;li&gt;This process repeats until the entire content is generated.&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 5&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;346px&#34; data-flex-grow=&#34;144&#34; height=&#34;748&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-2870aba5cd/img-c60e4a80ba.jpeg&#34; 
srcset=&#34;https://lumigallerys.com/posts/note-2870aba5cd/img-c60e4a80ba_hu_8400f82f8c477994.jpeg 800w, https://lumigallerys.com/posts/note-2870aba5cd/img-c60e4a80ba.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;lecture-5-classification-of-llms&#34;&gt;Lecture 5: Classification of LLMs&#xA;&lt;/h2&gt;&lt;h3 id=&#34;1-classification-by-modality&#34;&gt;1. Classification by Modality&#xA;&lt;/h3&gt;&lt;p&gt;Currently, large models can be categorized into:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;Text generation models (e.g., GPT-3.5)&lt;/li&gt;&#xA;&lt;li&gt;Image generation models (e.g., DALL-E)&lt;/li&gt;&#xA;&lt;li&gt;Video generation models (e.g., Sora)&lt;/li&gt;&#xA;&lt;li&gt;Speech generation models&lt;/li&gt;&#xA;&lt;li&gt;Multimodal models (e.g., GPT-4)&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;2-classification-by-training-stage&#34;&gt;2. Classification by Training Stage&#xA;&lt;/h3&gt;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Basic Language Model:&lt;/strong&gt; A model trained only on large-scale text corpora without instruction or downstream task fine-tuning.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Instruction-Finetuned Language Model:&lt;/strong&gt; A model that has undergone instruction fine-tuning and human feedback optimizations.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;3-classification-by-general-and-industry-models&#34;&gt;3. Classification by General and Industry Models&#xA;&lt;/h3&gt;&lt;p&gt;Large models can also be divided into general models and industry-specific models. General models perform well across various tasks but may struggle with specific industry-related data and terminology. 
Industry models, on the other hand, are fine-tuned for specific domains, achieving higher performance and accuracy.&lt;/p&gt;&#xA;&lt;h2 id=&#34;lecture-6-core-technologies-of-llms&#34;&gt;Lecture 6: Core Technologies of LLMs&#xA;&lt;/h2&gt;&lt;p&gt;This section contains technical terms that can be challenging to understand, but as a product manager it is essential to grasp the key concepts in order to communicate effectively with developers and technical teams.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;1. Model Architecture:&lt;/strong&gt; The Transformer architecture is one of the foundational core technologies of large models.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;2. Pre-training and Fine-tuning&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Pre-training:&lt;/strong&gt; A key technology involving training on large-scale unlabeled data, significantly reducing the need for labeled data.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Fine-tuning:&lt;/strong&gt; A technique to improve model performance on specific tasks through additional training on targeted datasets.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;&lt;strong&gt;3. 
Model Compression and Acceleration&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Model Pruning:&lt;/strong&gt; Reducing model size and computational complexity by removing unimportant parameters.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Knowledge Distillation:&lt;/strong&gt; Training a smaller student model to mimic the behavior of a larger teacher model, retaining performance while reducing computational costs.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h2 id=&#34;lecture-7-six-steps-in-large-model-development&#34;&gt;Lecture 7: Six Steps in Large Model Development&#xA;&lt;/h2&gt;&lt;p&gt;According to OpenAI&amp;rsquo;s information, the development of large models typically involves the following six steps:&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 6&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;528px&#34; data-flex-grow=&#34;220&#34; height=&#34;490&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-2870aba5cd/img-1f332ced08.jpeg&#34; srcset=&#34;https://lumigallerys.com/posts/note-2870aba5cd/img-1f332ced08_hu_28c961c4c0ee9dd.jpeg 800w, https://lumigallerys.com/posts/note-2870aba5cd/img-1f332ced08.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;&lt;strong&gt;Data Collection and Processing:&lt;/strong&gt; Collecting large amounts of text data from various sources and cleaning it to remove irrelevant or low-quality content.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Model Design:&lt;/strong&gt; Determining the model architecture, such as the Transformer architecture used by GPT-4, and defining its size, including layers, hidden units, and total parameters.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Pre-training:&lt;/strong&gt; The model learns language and knowledge by reading extensive text data, akin to a student absorbing 
information.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Instruction Fine-tuning:&lt;/strong&gt; The process of retraining the model with question-answer pairs to improve its responses.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Reward Mechanism:&lt;/strong&gt; Training a reward model that scores candidate responses, providing an incentive signal that guides the model towards valuable and accurate answers.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Reinforcement Learning:&lt;/strong&gt; The model improves through trial and error, optimizing its responses against the reward signal.&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;h2 id=&#34;lecture-8-understanding-large-model-training-and-fine-tuning&#34;&gt;Lecture 8: Understanding Large Model Training and Fine-tuning&#xA;&lt;/h2&gt;&lt;h3 id=&#34;1-understanding-large-model-training&#34;&gt;1. Understanding Large Model Training&#xA;&lt;/h3&gt;&lt;p&gt;&lt;strong&gt;1) What Data Is Needed for Training Large Models?&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;Text data: Used for training language models, such as news articles, books, social media posts, and Wikipedia.&lt;/li&gt;&#xA;&lt;li&gt;Structured data: Such as knowledge graphs, to enhance the model&amp;rsquo;s knowledge.&lt;/li&gt;&#xA;&lt;li&gt;Semi-structured data: Such as XML and JSON formats for information extraction.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;&lt;strong&gt;2) Sources of Training Data&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;Public datasets: Such as Common Crawl, Wikipedia, and OpenWebText.&lt;/li&gt;&#xA;&lt;li&gt;Proprietary data: Internal company data or paid proprietary data.&lt;/li&gt;&#xA;&lt;li&gt;User-generated content: Content from social media, forums, and comments.&lt;/li&gt;&#xA;&lt;li&gt;Synthetic data: Data generated through GANs or other generative models.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;&lt;strong&gt;3) Costs Associated with Training Large Models&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;Computational resources: GPU/TPU usage costs, depending on model size 
and training duration.&lt;/li&gt;&#xA;&lt;li&gt;Storage costs: For large datasets and model weights, which can reach TB levels.&lt;/li&gt;&#xA;&lt;li&gt;Data acquisition costs: Costs for purchasing proprietary data or cleaning and labeling data.&lt;/li&gt;&#xA;&lt;li&gt;Energy costs: Training large models consumes significant electricity, increasing operational costs.&lt;/li&gt;&#xA;&lt;li&gt;R&amp;amp;D costs: Salaries for researchers and engineers, as well as development and maintenance expenses.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;2-understanding-large-model-fine-tuning&#34;&gt;2. Understanding Large Model Fine-tuning&#xA;&lt;/h3&gt;&lt;p&gt;&lt;strong&gt;1) Two Stages of Fine-tuning:&lt;/strong&gt; Supervised Fine-tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF), with differences as follows:&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 7&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;197px&#34; data-flex-grow=&#34;82&#34; height=&#34;882&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://lumigallerys.com/posts/note-2870aba5cd/img-28ed0a7ab5.jpeg&#34; width=&#34;724&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;2) Two Methods of Fine-tuning:&lt;/strong&gt; LoRA fine-tuning and full-parameter fine-tuning.&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;LoRA fine-tuning adjusts only a small set of additional low-rank parameters, making it suitable for resource-limited scenarios.&lt;/li&gt;&#xA;&lt;li&gt;Full-parameter fine-tuning adjusts all of the model&amp;rsquo;s parameters, enabling it to address a wider range of specific tasks.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h2 id=&#34;lecture-9-key-factors-affecting-large-model-performance&#34;&gt;Lecture 9: Key Factors Affecting Large Model Performance&#xA;&lt;/h2&gt;&lt;p&gt;While there are many large models on the market, their capabilities differ considerably. 
The five most important factors affecting the performance of large models are:&lt;/p&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;&lt;strong&gt;Model Architecture:&lt;/strong&gt; The design, including layers, hidden units, and total parameters, significantly impacts the model&amp;rsquo;s ability to handle complex tasks.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Quality and Quantity of Training Data:&lt;/strong&gt; Model performance heavily relies on the coverage and diversity of its training data.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Parameter Scale:&lt;/strong&gt; More parameters typically allow better learning and capturing of complex data patterns but increase computational costs.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Algorithm Efficiency:&lt;/strong&gt; The choice of algorithms used for training and optimizing the model affects learning efficiency and final performance.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Training Frequency:&lt;/strong&gt; Ensuring sufficient training iterations to achieve optimal performance while avoiding overfitting.&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;h2 id=&#34;lecture-10-how-to-measure-the-quality-of-large-models&#34;&gt;Lecture 10: How to Measure the Quality of Large Models?&#xA;&lt;/h2&gt;&lt;p&gt;From the application perspective, measuring the quality of a large model involves evaluating its performance across several dimensions:&lt;/p&gt;&#xA;&lt;h3 id=&#34;1-measuring-product-performance&#34;&gt;1. 
Measuring Product Performance&#xA;&lt;/h3&gt;&lt;p&gt;&lt;strong&gt;1) Semantic Understanding Ability:&lt;/strong&gt; Includes understanding semantics, grammar, and context, which determine the quality of interaction with the model.&#xA;&lt;strong&gt;2) Logical Reasoning:&lt;/strong&gt; The model&amp;rsquo;s reasoning ability, numerical computation skills, and contextual understanding are core capabilities.&#xA;&lt;strong&gt;3) Accuracy of Generated Content:&lt;/strong&gt; Includes the rate of hallucinations and ability to identify traps.&#xA;&lt;strong&gt;4) Hallucination Rate:&lt;/strong&gt; How often the model generates plausible-sounding but fabricated or nonsensical content; a lower rate means more reliable responses.&#xA;&lt;strong&gt;5) Trap Information Identification Rate:&lt;/strong&gt; The model&amp;rsquo;s ability to recognize and handle misleading information.&#xA;&lt;strong&gt;6) Quality of Generated Content:&lt;/strong&gt; Evaluated based on diversity, professionalism, creativity, and timeliness.&#xA;&lt;strong&gt;7) Contextual Memory Ability:&lt;/strong&gt; Represents the model&amp;rsquo;s memory capability and context window length.&#xA;&lt;strong&gt;8) Model Performance:&lt;/strong&gt; Includes response speed, resource consumption, robustness, and stability.&#xA;&lt;strong&gt;9) Human-like Quality:&lt;/strong&gt; Evaluates how &amp;ldquo;human-like&amp;rdquo; the model is, including emotional analysis capabilities.&#xA;&lt;strong&gt;10) Multimodal Ability:&lt;/strong&gt; Assesses the model&amp;rsquo;s capability to process and generate across different modalities, including text, images, video, and speech.&lt;/p&gt;&#xA;&lt;h3 id=&#34;2-measuring-basic-model-capabilities&#34;&gt;2. Measuring Basic Model Capabilities&#xA;&lt;/h3&gt;&lt;p&gt;The three key elements for measuring basic model capabilities are: algorithms, computational power, and data quality.&lt;/p&gt;&#xA;&lt;h3 id=&#34;3-assessing-model-safety&#34;&gt;3. 
Assessing Model Safety&#xA;&lt;/h3&gt;&lt;p&gt;In addition to evaluating capabilities, safety considerations are crucial. We assess safety based on:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Content Safety:&lt;/strong&gt; Compliance with safety management, social, and legal norms.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Ethical Standards:&lt;/strong&gt; Ensuring generated content is free from bias and discrimination.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Privacy and Copyright Protection:&lt;/strong&gt; Adhering to privacy and copyright laws.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h2 id=&#34;lecture-11-limitations-of-large-models&#34;&gt;Lecture 11: Limitations of Large Models&#xA;&lt;/h2&gt;&lt;h3 id=&#34;1-the-hallucination-problem&#34;&gt;1. The Hallucination Problem&#xA;&lt;/h3&gt;&lt;p&gt;The hallucination problem refers to models generating plausible but incorrect or fabricated information. This issue is a significant concern for users and a primary reason for skepticism about model outputs.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Causes of Hallucinations:&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Overfitting Training Data:&lt;/strong&gt; The model may overfit noise or errors in the training data, leading to the generation of fictitious content.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;False Information and Gaps in Training Data:&lt;/strong&gt; False information in the training data, or insufficient coverage of real scenarios, can result in the model generating unverified information.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Inadequate Consideration of Information Credibility:&lt;/strong&gt; The model may generate content confidently without effectively assessing its credibility.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;&lt;strong&gt;Potential Solutions:&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Using Richer Training Data:&lt;/strong&gt; Incorporating diverse and authentic training data to reduce overfitting 
risks.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Credibility Modeling:&lt;/strong&gt; Introducing components to estimate the credibility of generated information.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;External Verification Mechanisms:&lt;/strong&gt; Employing external sources to validate generated content against real-world facts.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;2-the-amnesia-problem&#34;&gt;2. The Amnesia Problem&#xA;&lt;/h3&gt;&lt;p&gt;The amnesia problem occurs when models forget previously mentioned information during long dialogues or complex contexts, leading to inconsistencies. Causes include:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Limitations of Contextual Memory:&lt;/strong&gt; The model may struggle to retain and utilize long-term dependencies.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Lack of Examples in Training Data:&lt;/strong&gt; Insufficient examples of long dialogues or complex contexts in training data can hinder effective memory retention.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;</description>
        </item></channel>
</rss>
