
Google has introduced an experimental AI system aimed at helping people develop the human skills expected to matter most in future work. As AI continues to evolve, so-called durable soft skills, those that are difficult to automate, are becoming increasingly valuable. These include critical thinking, collaboration, creative thinking, conflict resolution, project management, and other interpersonal abilities.
The initiative, presented as “Vantage,” is an experimental AI-powered system designed to support the development and assessment of these competencies through simulated interaction environments. It was developed in collaboration with pedagogy experts and researchers, including contributors from New York University, and is intended to function as a structured sandbox where students can practise and be evaluated on future-ready skills using methodologies similar to those applied in core academic subjects such as mathematics or science. The system is currently available in English via Google Labs.
The process works by placing users in simulated multi-agent environments where they interact with AI-generated avatars in open-ended scenarios such as debates, collaborative problem-solving tasks, or project planning exercises. Within this setup, a coordinating “Executive LLM” uses predefined assessment frameworks to guide the interaction and dynamically adjust conversational conditions. This includes introducing disagreement, challenging assumptions, or steering dialogue direction in order to generate observable behavioural evidence relevant to targeted skills.
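The article does not describe how the coordinating model decides when to intervene, but the steering logic it outlines can be imagined as a loop that tracks how much evidence each target skill has generated and nudges the conversation toward whichever skill is under-observed. The following sketch is purely illustrative: the class name, the list of steering moves, and the evidence-tracking heuristic are assumptions, not Google's implementation.

```python
import random

# Hypothetical sketch of an "Executive LLM"-style coordinator.
# All names and heuristics here are illustrative assumptions.

STEERING_MOVES = [
    "introduce_disagreement",   # an avatar pushes back on the user's proposal
    "challenge_assumption",     # an avatar questions an unstated premise
    "redirect_topic",           # steer dialogue toward an unassessed skill
]

class ExecutiveCoordinator:
    def __init__(self, rubric_skills):
        # Map each rubric-targeted skill to the evidence collected so far.
        self.evidence = {skill: [] for skill in rubric_skills}

    def record(self, skill, utterance):
        """Log an utterance as observable evidence for a skill."""
        self.evidence[skill].append(utterance)

    def next_move(self):
        """Aim a steering move at the skill with the least evidence."""
        sparsest = min(self.evidence, key=lambda s: len(self.evidence[s]))
        return {"target_skill": sparsest, "move": random.choice(STEERING_MOVES)}

coord = ExecutiveCoordinator(["conflict_resolution", "critical_thinking"])
coord.record("critical_thinking", "User questioned the budget estimate.")
print(coord.next_move()["target_skill"])  # prints "conflict_resolution"
```

The key design idea reflected here is that steering is evidence-driven: the coordinator intervenes specifically to elicit behaviour the assessment still lacks, rather than at random.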
Simulation-Based AI Framework For Assessing Future-Ready Skills
Once the task is complete, a separate AI evaluation model analyses the full interaction. Using the same structured rubrics, it assesses the conversation transcript and produces a detailed performance profile that maps observed behaviours to specific skill categories. The output includes both quantitative scoring and qualitative feedback, translating complex interpersonal interactions into structured and measurable indicators of skill performance.
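The shape of that output, a per-skill profile pairing a score with the supporting evidence, can be sketched in a few lines. Note the caveat: the real system uses an LLM evaluator against pedagogical rubrics, whereas this toy version uses simple keyword matching; the rubric contents and scoring scale are invented for illustration.

```python
# Toy sketch of a rubric-based pass over a finished transcript.
# The rubric markers and the 0-5 scale are illustrative assumptions;
# the actual system relies on an LLM evaluator, not keyword matching.

RUBRIC = {
    "conflict_resolution": ["compromise", "common ground", "acknowledge"],
    "project_coordination": ["deadline", "assign", "milestone"],
}

def evaluate(transcript, rubric=RUBRIC):
    """Map transcript lines to skill categories; score each skill 0-5."""
    profile = {}
    for skill, markers in rubric.items():
        hits = [line for line in transcript
                if any(m in line.lower() for m in markers)]
        profile[skill] = {
            "score": min(5, len(hits)),   # quantitative indicator
            "evidence": hits,             # basis for qualitative feedback
        }
    return profile

transcript = [
    "Let's find common ground on the design.",
    "I can acknowledge your concern about scope.",
    "We should assign the testing milestone by Friday.",
]
profile = evaluate(transcript)
print(profile["conflict_resolution"]["score"])  # prints 2
```

The point of the structure, not the matching heuristic, is what matters: every score is tied back to concrete transcript evidence, which is what makes the profile auditable.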
To assess methodological reliability, the system was tested in partnership with New York University through controlled studies involving 188 participants aged 18 to 25. These evaluations focused on collaboration-related competencies such as conflict resolution and project coordination. Results indicated that adaptive AI-driven conversational steering generated a higher density of assessable skill evidence than non-directed interaction models, while maintaining coherent and natural dialogue flow across multiple tasks.
Further testing compared AI-generated scoring with human expert assessments using identical pedagogical rubrics. Agreement between the AI evaluator and human raters was comparable to agreement among the human raters themselves, suggesting that automated systems can approximate expert-level consistency in structured evaluation contexts.
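The article does not name the agreement statistic used. A standard choice for comparing an AI rater with human raters assigning the same rubric categories is Cohen's kappa, which corrects raw agreement for chance; the sketch below uses it purely as an assumed example, with made-up ratings.

```python
from collections import Counter

# Cohen's kappa: chance-corrected agreement between two raters.
# Using it here is an assumption; the article does not specify the metric.

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items where the labels match.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum(freq_a[l] * freq_b[l] for l in labels) / n**2
    return (observed - expected) / (1 - expected)

# Invented example ratings on a shared rubric scale.
human = ["high", "mid", "mid", "low", "high", "mid"]
ai    = ["high", "mid", "low", "low", "high", "mid"]
print(round(cohens_kappa(human, ai), 2))  # prints 0.75
```

In this framing, "comparable to inter-human agreement" would mean the AI-vs-human kappa falls within the range of kappas observed between pairs of human raters on the same transcripts.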
Additional validation with external partners, including OpenMic, extended testing to creative and language-based tasks involving multimedia and literature-based exercises. In these cases, AI-generated evaluations demonstrated strong correlation with expert human scoring, reinforcing the system’s potential applicability beyond structured teamwork scenarios into more open-ended creative domains.
Such simulation-based systems could be integrated into educational environments as an additional evaluative layer alongside traditional assessment methods in the near future. This would enable students to be evaluated not only on subject knowledge but also on applied interpersonal and cognitive skills within controlled simulated settings. The broader aim of the research is to make future-ready competencies more measurable at scale and to align educational evaluation more closely with evolving workforce demands.
The post Google’s New ‘Vantage’ Platform Uses AI Avatars To Test Critical Thinking, Collaboration, And Real-World Skills appeared first on Metaverse Post.
Source: Mpost.io