Question 1
- How can we help an AI gain the ability to accumulate knowledge?
To enable an AI system to accumulate knowledge effectively, you can incorporate the following techniques:
Knowledge distillation: Train a new model to learn from the predictions of a well-performing, pre-existing model. This process helps transfer the knowledge from the older model to the new one, allowing the new model to inherit the accumulated knowledge.
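The core of distillation can be sketched in a few lines of plain Python. This is a toy illustration, not a training pipeline: `softmax` and `distillation_loss` are hypothetical names, and a real system would compute this loss over batches in a framework such as PyTorch. The temperature softens the teacher's distribution so the student also learns from the relative probabilities of wrong answers.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; higher temperature gives a softer distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy between the teacher's softened distribution and the student's."""
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))
```

The loss is smallest when the student's distribution matches the teacher's, so minimizing it transfers the teacher's accumulated knowledge.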
Knowledge graph integration: Create a knowledge graph that represents relationships between entities, concepts, and facts. As the AI system encounters new information, it can incorporate that information into the knowledge graph, enriching its understanding and enabling more accurate and context-aware responses.
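A minimal sketch of the idea, assuming nothing beyond the Python standard library: the graph is stored as subject–relation–object triples, new facts are merged in as they arrive, and queries can leave any slot open. The class and method names here are illustrative; production systems would use a triple store or graph database.

```python
class KnowledgeGraph:
    """Toy triple store: facts are (subject, relation, object) tuples."""

    def __init__(self):
        self.triples = set()

    def add_fact(self, subject, relation, obj):
        """Incorporate a newly encountered fact into the graph."""
        self.triples.add((subject, relation, obj))

    def query(self, subject=None, relation=None, obj=None):
        """Return all facts matching the given slots; None acts as a wildcard."""
        return sorted(t for t in self.triples
                      if (subject is None or t[0] == subject)
                      and (relation is None or t[1] == relation)
                      and (obj is None or t[2] == obj))
```

Because facts are structured rather than free text, the system can answer relational queries ("what is the capital of France?") and detect when new information contradicts or extends what it already knows.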
Contextual embedding models: Utilize contextual embedding models, such as BERT or GPT, that capture the contextual meaning of words and sentences. By continually fine-tuning these models on new data, the AI system can incorporate new knowledge and improve its understanding of language and concepts.
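To illustrate what "contextual" means here, the following crude sketch uses hash-derived vectors as a stand-in for learned embeddings and blends each token's vector with its neighbours' mean. This is not how BERT or GPT work internally (they use trained transformer layers), but it shows the key property: the same word receives different vectors in different contexts. All names are hypothetical.

```python
import hashlib

def word_vector(word, dim=8):
    """Deterministic pseudo-embedding from a hash (stand-in for a learned static vector)."""
    digest = hashlib.md5(word.encode()).digest()
    return [b / 255.0 for b in digest[:dim]]

def contextual_embedding(tokens, index, dim=8):
    """Crude contextualization: blend a token's vector with the mean of its neighbours."""
    base = word_vector(tokens[index], dim)
    neighbours = [word_vector(t, dim) for i, t in enumerate(tokens) if i != index]
    mean = [sum(v[d] for v in neighbours) / len(neighbours) for d in range(dim)]
    return [0.5 * b + 0.5 * m for b, m in zip(base, mean)]
```

A static vector for "bank" is identical everywhere, while the contextual one differs between "river bank erosion" and "bank loan rates"; fine-tuning a real contextual model adjusts how such context is used.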
Active learning from user feedback: Incorporate mechanisms that actively learn from user interactions and feedback. When users provide corrections or additional information, the AI system can update its knowledge based on the feedback, refining its understanding and improving its performance over time.
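A minimal sketch of the feedback loop, under the simplifying assumption that "knowledge" is a per-question vote count over user-supplied corrections (real systems would update model weights or a retrieval index). `FeedbackLearner` and its methods are hypothetical names.

```python
from collections import defaultdict, Counter

class FeedbackLearner:
    """Toy learner that refines its answers from user corrections."""

    def __init__(self):
        self.memory = defaultdict(Counter)  # question -> answer vote counts

    def predict(self, question, default="unknown"):
        """Answer with the most frequently confirmed response, if any."""
        votes = self.memory[question]
        return votes.most_common(1)[0][0] if votes else default

    def feedback(self, question, correct_answer):
        """Fold a user correction back into the stored knowledge."""
        self.memory[question][correct_answer] += 1
```

Each correction immediately changes future predictions, which is the essence of learning from interaction: the system's knowledge after deployment is not frozen at training time.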
Reinforcement learning with reward shaping: Apply reinforcement learning techniques that guide the AI system’s behavior toward desired outcomes. By shaping the reward signals to encourage knowledge acquisition and learning, the system can prioritize actions that lead to increased knowledge accumulation.
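The reward-shaping idea can be sketched with tabular Q-learning on a toy chain environment (states 0..4, goal at the right end). The shaping term adds a small bonus the first time a state is visited in an episode, rewarding "knowledge acquisition" (exploration) on top of the task reward. This is an illustrative sketch with hypothetical names, not a recipe for a real agent.

```python
import random

def shaped_reward(base, state, visited, bonus=0.1):
    """Task reward plus a one-time exploration bonus for states not yet seen this episode."""
    return base + (bonus if state not in visited else 0.0)

def train(n_states=5, episodes=200, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    """Q-learning on a 1-D chain; reaching the last state yields reward 1."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(n_states) for a in (-1, +1)}
    for _ in range(episodes):
        state, visited = 0, set()
        while state != n_states - 1:
            visited.add(state)
            if rng.random() < epsilon:              # explore
                action = rng.choice((-1, +1))
            else:                                   # exploit
                action = max((-1, +1), key=lambda a: q[(state, a)])
            nxt = min(max(state + action, 0), n_states - 1)
            base = 1.0 if nxt == n_states - 1 else 0.0
            reward = shaped_reward(base, nxt, visited)
            best_next = max(q[(nxt, -1)], q[(nxt, +1)])
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
    return q
```

With the bonus, moving into unexplored states is immediately rewarded, so the learned values favor progress toward new knowledge rather than lingering in familiar states.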
Meta-learning: Implement meta-learning techniques that enable the AI system to learn how to learn. By training the model on a wide range of tasks and datasets, it can develop a more generalized understanding and adapt more quickly to new tasks and knowledge domains.
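One simple meta-learning scheme is a Reptile-style loop: adapt to a sampled task with a few gradient steps, then nudge the shared initialization toward the adapted weights, so the initialization ends up close to every task's solution. The sketch below uses one-parameter regression tasks `y = slope * x` as stand-in "tasks"; all names are hypothetical and the setup is deliberately toy-sized.

```python
import random

def sgd_adapt(w, slope, rng, steps=10, lr=0.1):
    """Inner loop: a few SGD steps fitting y = slope * x under squared error."""
    for _ in range(steps):
        x = rng.uniform(-1.0, 1.0)
        grad = 2.0 * (w * x - slope * x) * x
        w -= lr * grad
    return w

def reptile(task_slopes, meta_steps=100, meta_lr=0.5, seed=0):
    """Outer loop: move the shared initialization toward each task's adapted weight."""
    rng = random.Random(seed)
    w0 = 0.0
    for _ in range(meta_steps):
        slope = rng.choice(task_slopes)
        w_adapted = sgd_adapt(w0, slope, rng)
        w0 += meta_lr * (w_adapted - w0)
    return w0
```

After meta-training on slopes 2 and 4, the initialization settles between them, so only a few inner steps are needed to adapt to either task, which is the "learning to learn" effect.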
Integration of external knowledge sources: Integrate external knowledge sources, such as online databases, encyclopedias, or domain-specific resources, into the AI system. This integration allows the system to access and incorporate up-to-date information from these sources, expanding its knowledge base.
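A retrieval step is the usual glue for external sources. The sketch below ranks documents by word overlap with the query as a stand-in for embedding-based search over a live database or encyclopedia; the function name and scoring are illustrative assumptions, not a real API.

```python
def retrieve(query, documents, top_k=1):
    """Rank external documents by word overlap with the query
    (a crude stand-in for embedding similarity search)."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]
```

The retrieved passages are then fed to the model alongside the question, so answers can reflect up-to-date external facts rather than only what was frozen into the model's weights.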
Continuous training and fine-tuning: Continuously update the AI system with new data and knowledge to keep it current and relevant. Regularly retrain and fine-tune the model using the accumulated data to enhance its performance and adaptability.
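Continuous updating can be sketched as online SGD: every new labeled example is folded into the running model as it arrives, instead of retraining from scratch. The toy below fits a 1-D linear model `y ≈ w*x + b`; the function names are hypothetical and a real system would stream batches through a framework, but the loop structure is the same.

```python
def online_update(w, b, x, y, lr=0.1):
    """One SGD step on squared error for the 1-D linear model y ≈ w*x + b."""
    err = (w * x + b) - y
    return w - lr * err * x, b - lr * err

def continual_train(stream, w=0.0, b=0.0):
    """Fold each new (x, y) example into the model as it arrives."""
    for x, y in stream:
        w, b = online_update(w, b, x, y)
    return w, b
```

Fed a stream generated by `y = 2x + 1`, the parameters drift toward (2, 1) without ever seeing the whole dataset at once, which is what keeps a deployed model current as data accumulates.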
Remember, the ability to accumulate knowledge in an AI system is a continuous process. By incorporating these techniques, the system can effectively learn from new data, user interactions, and external sources, enabling it to expand its knowledge base and improve its overall performance over time.
Current Situation
The AI "year one" of 2022–2023 gave birth to ChatGPT, Stable Diffusion, and SAM (Segment Anything Model from Meta), and I would call these three the AI ancestors. Note that this "AI" is not the AI we usually talk about today, but an AI with lifelong knowledge: you may die, but the AI will not; a nation may fall, but the AI will not. Why call these three the ancestors, rather than other models?
- ChatGPT is a promptable model in the natural-language domain
- SAM is a promptable model in the computer-vision domain
- Stable Diffusion is a multimodal model connecting the vision and language domains
Human vision and language have now been mastered by computers, and the two have been connected. The ancestors still seem to lack hearing, touch, smell, and speech, much like the English-learning skills of listening, speaking, reading, writing, and seeing. It is only a matter of time before computers master these senses too, since each has its own applications. But their absence does not diminish the standing of the three ancestors, because a vision promptable model, a language promptable model, and a multimodal model are already enough to build a weak world model (the world model proposed by LeCun).
Setting multimodal models aside for now (their role is to connect models across domains), the mechanism most relevant to lifelong learning may be the promptable model, because every prompt is an opportunity for the model to accumulate knowledge. Today we may not worry too much when the model gives a wrong answer, because once someone tells it that it is wrong, it learns, and it tries not to repeat the same mistake. Think about it: isn't that exactly how humans learn?

What is frightening is that a computer that has learned to accumulate knowledge never tires. As long as there is electricity and someone to interact with, it keeps learning, which is why I say the labour of the whole world is now feeding an undying AI. But things are not so bleak: what still stands between humans and replacement is the AI's reasoning ability. If someone deliberately tries to mislead an AI into believing nonsense, it will accept it without objection, whereas humans are not so easily misled.