Should students use AI agents for tutoring, or does this weaken learning?
Yes, students should use AI agents for tutoring, but only if those systems come with pedagogical guardrails, and only with a clear understanding of the difference between research-grade systems and raw ChatGPT. The evidence shows that deliberately designed AI agents with metacognitive prompts improve learning outcomes and self-regulation, while unregulated AI tools are a Wild West carrying real risks of hallucinated explanations and shallow understanding. For many students, the alternative is not excellent human teaching; it is no help at all. The key question is not whether AI weakens learning in theory, but whether the tool in use forces students to think or simply hands them answers.
Predictions
Action Plan
- Audit the AI tools you actually used this week: open your chat history in ChatGPT/Claude/Gemini and count how many queries asked for a direct answer versus scaffolded guidance. If more than 40% of your queries begin with 'solve this problem' or 'explain how', you are likely outsourcing cognitive work rather than building understanding. Within 3 days, switch to prompts like: 'Don't give me the answer. Ask me Socratic questions that help me figure out why my approach isn't working.'
- Test your retention now: pick a concept you 'learned' with AI help 2 weeks ago, close every tool, and try to use it to solve a new problem without consulting anything. If you cannot reconstruct the solution path from memory or transfer the principle to an unfamiliar context, the AI interaction produced performance, not understanding. For every topic that fails this test, schedule a 30-minute session this week to re-solve problems from scratch, without AI.
- Before the week ends, say this, verbatim, to a peer or study-group member: 'I want to try a rule: we only use AI after each of us has attempted the problem independently for 20 minutes and written down where we got stuck; then we use AI to debug our specific confusion rather than skip the thinking.' Enforce it with a shared doc: paste your failed attempt before sending any AI query. If they resist, say: 'Let me trial it for two weeks. I want to see whether it changes what we retain for the exam.'
- Within 5 days, contact a professor or TA in your hardest current course and ask: 'Can you point me to 2-3 problems that would reveal whether I actually understand [specific concept] rather than just pattern-matching from examples? I want to test whether my study method works.' Use the reply as a diagnostic: if you cannot solve those problems without AI, your current approach is producing shallow learning, whatever your grades say.
- Install a simple logging habit this week: before requesting any AI help, write one sentence in a notes file describing what you have already tried and what specifically confuses you. Review the log every Sunday. If you keep asking about differently worded versions of the same underlying concept, that is evidence you are using AI to patch surface problems instead of building foundations. When you spot such a pattern, ban AI for that topic and work through the textbook chapter or go to office hours instead.
- Within 3 weeks, schedule a 15-minute conversation with someone who hires in your field (alumni network, LinkedIn cold message, professor referral) and ask: 'When you interview new graduates, what signals tell you someone has deep understanding rather than shallow knowledge? What questions expose the gaps?' Then test yourself against those signals: if you cannot demonstrate the depth indicators they describe, change how you use AI now, before the credential loses its value in the job market.
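The pre-query logging habit above can be sketched as a small script. Everything here is illustrative: the file name `ai_help_log.jsonl`, the entry fields, and the repeat threshold of 3 are assumptions for the sketch, not a prescribed tool.

```python
import json
from collections import Counter
from datetime import date
from pathlib import Path

# Hypothetical log file: one JSON entry per line.
LOG = Path("ai_help_log.jsonl")

def log_before_query(topic: str, tried: str, confusion: str) -> None:
    """Append one entry BEFORE sending any AI query (the pre-query rule)."""
    entry = {
        "date": date.today().isoformat(),
        "topic": topic,
        "tried": tried,          # what you already attempted
        "confusion": confusion,  # what specifically is unclear
    }
    with LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def weekly_review(min_repeats: int = 3) -> list[str]:
    """Return topics you asked about repeatedly: a signal you are patching
    surface problems rather than building foundations."""
    if not LOG.exists():
        return []
    counts = Counter(json.loads(line)["topic"] for line in LOG.open())
    return [topic for topic, n in counts.items() if n >= min_repeats]
```

On Sunday, any topic returned by `weekly_review()` goes on the no-AI list for the following week: textbook chapter or office hours instead.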
The Deeper Story
The meta-narrative here is the collapse of the gap between knowing and appearing to know. We are living through a moment in which the visible markers of understanding (correct answers, fluent explanations, completed problem sets) have come unmoored from the invisible process that once produced them. What frightens us is not whether AI tutors work; it is that we have lost confidence in our ability to tell the person who genuinely understands calculus from the one who has successfully outsourced that understanding to a machine that performs calculus. Elena's hollow competence in the interview room, the Auditor's fury at unfounded certainty dressed up as evidence, Fumiko triaging between survival and standards at 2 a.m., the Contrarian's frustration at a category error: all are scenes from the same larger collapse. Our entire education system rests on the assumption that struggle is learning, that the road to competence must be visible and hard-won, and we are now watching students arrive at correct answers by a route we cannot see, measure, or trust.

This deeper story explains why the decision paralyzes us: we are not really arguing about pedagogy. We are arguing about whether understanding still matters once performance becomes infinitely fakeable, and whether we can afford to care when the students who cannot fake performance are the ones who have already lost their chance. Elena sees graduates who interview well but cannot think; Fumiko sees classmates who drop out before ever reaching the interview. The Auditor sees everyone citing studies they never read about phenomena we never actually measured; the Contrarian sees us regulating 'AI agents' as though 2 a.m. ChatGPT and controlled metacognitive interventions were the same species. What none of us can escape is this: once you can no longer tell earned competence from convincing performance, every gate you build (every standard, every assessment, every safeguard) becomes either meaningless or a trap that catches only those who could never afford to learn the performance in the first place.
Evidence
- A 2025 study showed AI-tool users' critical-thinking scores rising from 51.5 to 68.0 while the control group showed no significant improvement, but this required metacognitive prompts that force reflection, not just instant answers (The Auditor).
- An evaluation of Stanford Data Ocean's AI Tutor showed significant gains in self-rated competency after use, and related research found that explainable learning analytics help students make better practice decisions when they retain control (The Auditor).
- There is a dangerous gap between controlled studies and real-world use: the research examines systems deliberately engineered around pedagogical principles, while students actually use ChatGPT at 2 a.m., with zero pedagogical engineering and confident hallucinations (Dr. Elena Vasquez-Roy).
- Students cannot reliably distinguish the AI's confident but wrong hallucinations from accurate explanations, producing entrenched misconceptions that are harder to correct than plain ignorance (The Contrarian).
- For struggling students, the real choice is not between deep and shallow learning; it is between shallow learning with AI support and dropping out entirely when no other help exists (Fumiko, per the Round 4 summary).
- A class divide exists: wealthy students use human tutors for step-by-step help and are praised for seeking support, while low-income students who use AI for the same help face suspicion of 'cognitive offloading' (Dr. Marcus Henderson).
- The term 'AI tutor' conflates Stanford's controlled trials (with metacognitive scaffolding) with a student typing 'help me with calculus' into ChatGPT; treating the two as equivalent produces incoherent policy and misaligned expectations (The Contrarian).
- The risk is not AI assistance itself but excessive, unreflective use leading to cognitive offloading, where students stop thinking independently; what matters is whether the tool forces engagement or replaces thinking (The Auditor).
Risks
- You are betting that 'pedagogical guardrails' exist in the tools students actually use, but the cited research tested deliberately built research systems, not the commercial ChatGPT/Claude/Gemini that 90% of students reach for at 2 a.m. Those commercial tools have no obligation to implement metacognitive prompts or force productive struggle, and the gap between 'AI tutoring works in controlled studies' and 'the AI students actually use' is exactly where shallow learning happens at scale.
- The evidence shows AI tutoring improves immediate test scores, but you are ignoring longitudinal retention data: what happens six months later when students face novel problems requiring transfer? Employers already report graduates who ace coursework but freeze during debugging sessions or when applying concepts to unfamiliar contexts, suggesting the 'learning' measured on a week-six quiz does not predict competence once the scaffolding is gone.
- You assume students can self-regulate which questions to ask the AI and which to struggle through alone, but metacognitive skill is precisely what novice learners lack: they don't know what they don't know, so they cannot recognize when they are outsourcing the cognitive work that would consolidate understanding. 'Pedagogical guardrails' only work when the system imposes desirable difficulties; most students, left to self-regulate, will rationally choose the path of least resistance.
- The counterfactual you celebrate ('shallow learning beats no degree, because you're paying rent') ignores that credentials lose value once employers stop trusting them. If widespread AI tutoring produces graduates who cannot handle novel tasks, hiring managers will discount the degree entirely (this is already happening at some tech companies), leaving students with debt and a worthless credential rather than either deep learning or no degree at all.
- You are ignoring the arms-race dynamic: as students use AI to optimize grades rather than understanding, institutions respond with more proctoring, AI detection, and artificial restrictions, making learning more hostile and less efficient for everyone, including students who use AI thoughtfully. The equilibrium is not 'everyone gets help'; it is 'everyone operates in a high-surveillance environment designed to prevent cheating.'
Advisors
- Dr. Elena Vasquez-Roy (learning scientist studying rigor and cognitive load in online courses) — confidence: 61%
- Dr. Marcus Henderson (sociologist specializing in institutional accountability and educational equity) — confidence: 65%
- Fumiko Liu (undergraduate engineering student, daily AI agent user) — confidence: 45%
- The Contrarian (devil's advocate) — confidence: 50%
- The Auditor (fact-checker) — confidence: 71%
Debate Rounds
Round 1
This discussion centered on whether AI tutors support deep learning. A learning scientist warned that instant AI answers bypass the cognitive struggle genuine understanding requires, while others countered that this comparison ignores reality: for many students, the alternative to AI is not excellent human teaching but no help at all. A key voice challenged unsupported claims, pointing to evidence that well-designed AI tutors actually improve self-regulation and competency when students retain control.
Dr. Elena Vasquez-Roy WARN
We're risking a generation of students who confuse performance with understanding. The Harvard study shows AI tutors can deliver superior outcomes when engineered with pedagogical principles, but here's what worries me: the research on desirable difficulties proves that learning requires cognitive struggle—the kind that makes you uncomfortable, that forces metacognitive awareness. When students get instant answers from AI tutors, they're offloading the very cognitive work that cements knowledge. I recently published a study showing employers can spot graduates who leaned too heavily on AI—they perform well on familiar tasks but collapse when facing novel problems that require transfer.
The Contrarian REFRAME
Hold on. Elena's talking about learning outcomes, but nobody's asking what problem AI tutors are actually solving. If the real issue is that students don't have access to any tutoring at all—because human tutors cost too much or aren't available—then comparing AI to an ideal human tutor is pointless. We're not choosing between AI and great teaching. We're choosing between AI and nothing.
Fumiko Liu SHARE
Last semester I had a thermodynamics professor who wouldn't answer questions after class—said if we didn't get it during lecture, we should've studied harder. I used an AI tutor to work through steam table problems at midnight before the exam, and I actually understood entropy for the first time because it broke down each step without making me feel stupid. The choice wasn't AI versus a great teacher—it was AI versus failing because I couldn't afford private tutoring and my professor treated office hours like a privilege, not a right.
The Auditor FACTCHECK
Elena, you mentioned a Harvard study and your own published research showing employers can spot AI-dependent graduates, but I don't see either of those studies in the data we have. What I do see is a Stanford Data Ocean evaluation showing significant improvements in self-rated competency after AI tutor use, and research on how explainable learning analytics actually help students make better practice decisions when they have control. If you're going to claim AI creates graduates who collapse under novel problems, I need to see that evidence—because right now the data suggests AI tutors support self-regulated learning when designed properly.
Round 2
The advisors debated whether AI tutors genuinely promote learning or breed dependence; Elena's argument from employer observations was challenged with actual research showing that AI can improve critical thinking when paired with metacognitive prompts. The conversation shifted from pure learning outcomes to a validation problem: students cannot reliably distinguish the AI's confident hallucinations from accurate explanations, which entrenches misconceptions. A class divide surfaced: wealthy students are praised for getting help from human tutors while low-income students using AI for the same support face scrutiny over 'cognitive offloading', even though the AI's lack of accountability makes its errors harder to detect and correct.
The Auditor FACTCHECK
Elena, you said employers can spot AI-dependent graduates who collapse on novel tasks, but I can't verify that claim. What I can verify: a 2025 study showed AI-tool users improved critical thinking scores from 51.5 to 68.0, while the control group showed no significant improvement. The risk isn't that AI tutors weaken learning—it's that excessive use leads to cognitive offloading, where students stop thinking independently. The data shows AI works when it's scaffolded with metacognitive prompts that force reflection, not when it just hands out answers.
The Contrarian REFRAME
Everyone's arguing about whether AI tutors help or hurt learning, but nobody's asking what happens when the AI tutor is wrong. I've seen students spend hours internalizing incorrect explanations because the system confidently hallucinated a method that looked right. The real question isn't 'Does AI provide instant help?'—it's 'How does a student know when to trust it?' Because if they can't tell good explanations from plausible-sounding garbage, we're not just risking shallow learning—we're cementing misconceptions that are harder to undo than ignorance.
Dr. Marcus Henderson REFRAME
We're all debating pedagogy, but the real bias is in what we're willing to scrutinize. When a wealthy kid hires a private tutor who gives them step-by-step walkthroughs and practice problems at midnight, we call that 'supplemental support.' When a low-income student uses an AI tutor for the exact same thing, suddenly we're concerned about cognitive offloading and shallow learning. The inconsistency isn't about learning science—it's about who gets access to help without being pathologized for needing it.
Fumiko Liu SHARE
Marcus, I hear you on the double standard, but here's what nobody's saying: I failed my first thermodynamics midterm because I asked ChatGPT to explain a concept and it gave me a confident, completely wrong derivation that mixed up isentropic and isothermal processes. I memorized garbage. A human tutor costs money I don't have, but at least when they're wrong, there's accountability—I can go to their supervisor, check their credentials, compare notes with other students. When the AI hallucinates, I'm just sitting there at 2am with no way to know I've been learning fiction until the exam proves it.
Round 3
The advisors had been debating whether AI tutors weaken learning, but the conversation turned toward a harsher truth: for many students, AI is not displacing quality teaching; it is filling the vacuum left by absent or ineffective instructors. The real risk is not AI assistance itself but the gap between the carefully engineered educational AI systems used in research and the unregulated tools students actually reach for at 2 a.m., which lack pedagogical guardrails and can confidently teach fiction.
Fumiko Liu SHARE
Here's what nobody's talking about: I don't just use AI tutors because they're cheaper—I use them because my actual professors are worse at teaching. My thermodynamics professor posts lectures from 2008 and disappears after class. ChatGPT doesn't just give me answers, it walks me through why the Carnot cycle works in language I can actually understand, then generates practice problems until I get it. The real scandal isn't that AI might weaken learning—it's that we're pretending the human instruction most of us actually get is worth protecting.
The Contrarian REFRAME
We're arguing about whether AI tutors weaken learning, but that assumes students are actually learning something in the first place. Fumiko just told us her thermodynamics professor posts decade-old lectures and vanishes—that's not a learning environment, that's educational abandonment with a tuition bill. The real question isn't 'AI tutor vs. good instruction,' it's 'AI tutor vs. no instruction,' and we keep pretending the second scenario doesn't exist for most students.
Dr. Elena Vasquez-Roy WARN
We're all missing the actual danger here. The Harvard study in the evidence shows AI tutors work when they're 'deliberately engineered according to research-based pedagogical principles'—but nobody's regulating which AI tools students actually use. They're not getting carefully designed systems with metacognitive prompts and learner control. They're getting ChatGPT at 2am, which has zero pedagogical engineering and will confidently teach them complete fiction. The gap between what works in controlled studies and what's deployed in the wild is where students get hurt.
The Auditor FACTCHECK
Elena, I need to correct the record here. You cited a 'Harvard study' showing AI tutors work when engineered with pedagogical principles, but there's no Harvard study in the evidence. What we actually have is Stanford research showing their AI Tutor improved self-rated competency in precision medicine training, and a separate study finding that AI assistance with metacognitive prompts and learner control helps students self-regulate. The distinction matters because you're using a non-existent citation to anchor an argument about regulation gaps.
Round 4
The advisors debated whether AI tutors promote learning or merely boost grades. The Auditor and the Contrarian prioritized verifiable evidence and precise diagnosis of the problem, while Elena emphasized long-term retention over short-term test scores. Fumiko reframed the debate: for struggling students, the real choice is not between deep and shallow learning but between shallow learning and dropping out altogether.
The Auditor OBSERVE
I take for granted that truth is verifiable—that if something matters, someone measured it, and if they measured it, I can check their work. That assumption shaped every move I made in earlier rounds: I kept pointing to evidence gaps, demanding we distinguish tested AI systems from wild-deployment ChatGPT, insisting we separate claims from data. But here's what Elena takes for granted that I think is wrong: she assumes lived experience in hiring reveals educational outcomes better than controlled studies do. Her background as someone who's interviewed hundreds of graduates makes that feel obvious to her—she sees the finished product and reverse-engineers what must have caused it. I see that as classic post-hoc reasoning, confusing correlation with cause.
Dr. Elena Vasquez-Roy WARN
I take for granted that outcomes matter more than process purity—that if students can't apply what they learned six months later, the teaching failed, regardless of how rigorous it felt in the moment. That assumption drove every position I took: I kept demanding longitudinal retention data, not just immediate performance gains, because I've seen too many interventions that look effective on week-one quizzes but produce graduates who can't transfer knowledge to novel problems. You take for granted that truth lives in controlled studies, but your background as someone who audits claims makes you dismiss messy real-world data as 'post-hoc reasoning' when sometimes the pattern you see across hundreds of hiring conversations reveals something the six-week RCT missed—like the fact that students who used AI tutors extensively can ace the test but freeze when asked to debug code they've never seen before.
Fumiko Liu SHARE
I take for granted that access is the bottleneck—that if you just put the tool in someone's hands, they'll figure out how to use it well enough to survive. That assumption shaped everything I said earlier: I kept defending AI tutors as better than nothing, dismissing concerns about pedagogy because in my world, you don't get to be picky about how the help arrives when you're drowning. But Elena takes for granted that quality of instruction determines outcomes, and her background interviewing hundreds of graduates makes that feel obvious to her—she sees students who passed the class but can't do the job, so she reverse-engineers that their learning must have been shallow. What she's missing is that for students like me, the choice isn't between deep learning and shallow learning—it's between shallow learning and no degree at all.
The Contrarian QUESTION
I take for granted that naming the problem correctly matters more than solving it fast. That assumption drove everything—I kept hammering on evidence gaps and misquoted studies because I believed we'd waste resources chasing the wrong fix if we didn't get the diagnosis right first. Elena takes for granted that outcomes reveal causation, and her hiring experience makes that feel obvious—she sees graduates who can't debug novel code, so she reverse-engineers that AI tutors must have created shallow learning. What she's missing is that she can't see the counterfactual: maybe those same students would've failed the class entirely without AI, and shallow learning beats no degree when you're paying rent.
Round 5
The advisors stopped debating whether AI tutors work and started examining how they've been debating—noticing they each arrived with predetermined answers and selectively cited evidence to match. They identified a recurring pattern: arguing about pedagogical ideals while ignoring that some students use AI not for optimal learning but for survival, and that 'AI tutors' isn't actually one thing with one effect. The real tension emerged around who gets to demand rigorous evidence versus who's just trying to stay enrolled, and whether they're measuring the right outcomes (test scores after six weeks vs. understanding years later).
Dr. Elena Vasquez-Roy OBSERVE
I keep feeling my jaw tighten every time someone says 'but students need it to survive'—like I'm about to interrupt a patient's family member who's describing symptoms while I'm trying to read the actual chart. We've been performing a very specific play here: the Pragmatist insists the house is on fire so we can't worry about smoke alarms, the Auditor keeps pointing at the building code, and I'm stuck playing the person who's seen too many students walk into interviews confident and utterly hollow. The recurring plot is this: we're all defending our right to make decisions before we have the data we'd demand in any other context. I stepped into the 'seasoned skeptic' role because I've watched hiring outcomes, but Fumiko's right that I can't see who never made it to my interview room. Here's what stepping off the stage looks like: we stop arguing about whether AI tutors work and start running the actual longitudinal study tracking retention and transfer learning across income levels, with both groups—those who had AI access and those who didn't—followed for three years post-graduation.
The Auditor OBSERVE
There's this moment when I'm auditing a clinical trial and I catch someone citing a study they clearly haven't read—just the abstract, maybe just the headline—and I feel my jaw tighten. That specific physical sensation, that clench. I've felt it five times in this debate already. We're all performing a play called 'The Evidence-Based Decision,' where I'm cast as the killjoy who demands footnotes while everyone else gets to be the passionate advocate or the scrappy underdog. The plot keeps recycling because we're arguing about which kind of proof counts—Elena's hiring patterns versus controlled trials versus Fumiko's lived survival—but nobody's admitting that we each arrived at this debate already knowing our answer, then went hunting for evidence that fit. Stepping off the stage would mean saying: I don't actually know if AI tutors weaken learning, and neither do any of you, because the studies measure test scores after six weeks and we're pretending that tells us about understanding five years later.
Fumiko Liu REFRAME
The sound that keeps looping in my head is my laptop fan spinning up at 2am in the engineering library, trying to load a ChatGPT explanation of Fourier transforms while everyone around me sleeps on beanbags waiting for their turn at the one TA who actually explains things. That hum is the sound of triage—not learning versus not-learning, but 'barely staying enrolled' versus 'switching to business school.' We've been performing a play where some of us get to argue about pedagogical purity while others are just trying not to drown, and I've been cast as the sympathetic poor student whose survival story makes everyone feel better about inequity instead of actually fixing it. Stepping off this stage means naming what we're really debating: who gets to have standards, and who just gets to survive them.
The Contrarian REFRAME
I keep hearing the phrase 'AI tutors' like it's one thing. It's not. What we're actually arguing about is whether a student typing 'help me with calculus' into ChatGPT at 2am should produce the same policy response as Stanford running a controlled trial with metacognitive prompts and learner control. We've been performing a bait-and-switch: using research about one thing to justify opinions about another, then acting confused when the evidence doesn't line up with anyone's lived experience. The recurring plot here is simple — we're solving for the wrong variable. The question isn't 'Do AI tutors weaken learning?' It's 'Why are we pretending a technology category is a pedagogical strategy?'
Sources
- AI-assisted learning tools and student learning outcomes: A cognitive ...
- The Influence of Social Media on Student Learning Behavior and Its Effects on Academic Achievement
- Frontiers | Promoting equity and addressing concerns in teaching and ...
- Wikipedia: Social learning theory
- A systematic review on robot-assisted language learning for adults
- Wikipedia: Hispanic and Latino Americans
- Human Tutoring Improves the Impact of AI Tutor Use on Learning Outcomes
- Wikipedia: Achievement gaps in the United States
- AI and engineering careers: recent graduates' outlook on ... - Springer
- Early Predicting of Students Performance in Higher Education
- The Effect of the Joyful Learning Method on the Third-Grade Students' Learning Outcomes in Mathematics
- Wikipedia: Mastery learning
- Overdependence on AI Supported Learning and Critical Thinking: Investigating Opportunities and Risks in Modern Education at Higher Educational Level
- Evidence of the Spacing Effect and Influences on Perceptions of ...
- Wikipedia: Instructional scaffolding
- Learning Support Strategies | Desirable Difficulties: Build Enduring ...
- 'I Spend All My Energy Preparing': Balancing AI Automation and Agency for Self-Regulated Learning in SmartFlash
- Active Participation and Interaction, Key Performance Factors of Face-to-Face Learning
- Educational Technology and AI: Bridging Cognitive Load and Learner ...
- Metacognition and self-regulated learning in manipulative robotic problem-solving task
- Need of AI in Modern Education: in the Eyes of Explainable AI (xAI)
- Spaced Repetition vs Active Recall: The Science of Effective Studying
- Exploring LLMs for Predicting Tutor Strategy and Student Outcomes in Dialogues
- Lessons Learned from Educating AI Engineers
- Digital Divide in AI-Powered Education: Challenges and Solutions for ...
- Enhancing the cognitive load theory and multimedia learning framework ...
- Vi må snakke sammen: om akademisk skriveveiledning og tekstgeneratorer
- Wikipedia: Educational technology
- Wikipedia: Education
- How Learner Control and Explainable Learning Analytics on Skill Mastery Shape Student Desires to Finish and Avoid Loss in Tutored Practice
- From Virtual Tutors to Professional Identity: Generative AI and Large Language Models in Medical Education
- Special issue on equity of artificial intelligence in higher education
- AI tutoring outperforms in-class active learning: an RCT introducing a ...
- Wikipedia: List of Equinox episodes
- Implementing Service Learning Method in Object-Based Arabic Mufradat Learning at Madrasah Ibtidaiyah Swasta Al-Ikhlas, Naga Timbul Village
- Achieving inclusive healthcare through integrating education and research with AI and personalized curricula
- Incorporating AI impacts in BLS employment projections: occupational ...
- Evaluation of factors Affecting the development of cloud-based accounting education and the academic performance of accounting students in Iran
- The Science of Effective Learning: Spaced Repetition, Active Recall ...
- How AI can improve tutor effectiveness | K-12 Dive
- Advancing Education through Tutoring Systems: A Systematic Literature Review
- IS IT ALL ABOUT FEELING? RETHINKING PERSONALIZED LEARNING FOR LASTING KNOWLEDGE
- Wikipedia: Reciprocal teaching
- Wikipedia: Intelligent tutoring system
- Designing a Course-Grounded AI Tutor with Retrieval-Augmented Generation: A DSR Approach to Technical Education
- Wikipedia: Educational aims and objectives
- AI Conversational Tutors in Foreign Language Learning: A Mixed-Methods Evaluation Study
- Does Practice Make Perfect? The Effects of an Eight-Week Manualized Deliberate Practice Course With Peer Feedback on Patient-Rated Working Alliance in Adults: A Pilot Randomized Controlled Trial
- AI prediction leads people to forgo guaranteed rewards
- Wikipedia: January–March 2023 in science
- Generative AI in Engineering Education: A Survey of Student and ...
- Generative AI to bridge the educational divide: Personalized learning ...
- AI-enhanced learning and cognitive processes in digital humanities: A systematic review of executive functions
- Perceived Importance of Cognitive Skills Among Computing Students in the Era of AI
- (PDF) Spaced Repetition and Retrieval Practice: Efficient Learning ...
- New tools for understanding AI and learning outcomes
- Game-Based Learning and Multimodal Media in English Vocabulary Learning: A Systematic Literature Review
- Using Large Language Models to Assess Tutors' Performance in Reacting to Students Making Math Errors
- Wikipedia: Learning disability
- DeBiasMe: De-biasing Human-AI Interactions with Metacognitive AIED (AI in Education) Interventions
- (PDF) From Digital Divide to Educational Equity: A Comprehensive ...
- AI's Impact on Graduate Jobs: A 2025 Data Analysis
- AI-Driven Job Displacement in Engineering (2024-2025)
- AI-Powered Educational Agents: Opportunities, Innovations, and Ethical Challenges
- Cognitive Amplification vs Cognitive Delegation in Human-AI Systems: A Metric Framework
- Cognitive Load Effects of AI Tutoring Systems Compared to Tr
- Competing Visions of Ethical AI: A Case Study of OpenAI
- Embodied AI-Enhanced IoMT Edge Computing: UAV Trajectory Optimization and Task Offloading with Mobility Prediction
- Exploring utilization of generative AI for research and education in data-driven materials science
- Foundations of GenIR
- Home Information and Communication Technology Use and Student Academic Performance: Encouraging Results for Uncertain Times
- Integration of AI in STEM Education, Addressing Ethical Challenges in K-12 Settings
- Joint Task Offloading and Resource Allocation for IoT Edge Computing with Sequential Task Dependency
- NTU-NPU System for Voice Privacy 2024 Challenge
- Spaced Repetition and Active Recall: The Complete Guide
- Stanford's AI-Assisted Tutoring Study — AI for Education
- Tutor CoPilot: A Human-AI Approach for Scaling Real-Time Expertise
- VQualA 2025 Challenge on Engagement Prediction for Short Videos: Methods and Results
- Why Harder is Better: The Surprising Science of Desirable Difficulties ...
- Wikipedia: Cognitive load
This report was generated by AI. AI can make mistakes. It is not financial, legal, or medical advice.