What is Talent in the Age of AI?
As the world reflects this month on the history of workers’ rights with the celebration of International Workers’ Day, SWL wants to look at what comes next. So this month, SWL is asking, “What is the future of work?” For me, the pressing question of how Artificial Intelligence (AI) changes our ability to evaluate high performers is fascinating, and it presents a new way to think about how human abilities create value for an organisation.

This article from the Harvard Business Review posed two excellent questions that should be on the mind of every HR team:
- “Should organizations distinguish between mediocre performers bolstered by AI and actual high performers?”
- “Without distinguishing between augmented and independent performance, how can employers retain high performers and ensure effective internal talent pipelines and succession plans?”
As AI becomes increasingly integrated into the workplace, organisations must re-evaluate how they define and assess high performance. Traditional metrics that focus solely on output may no longer suffice to distinguish truly exceptional employees from those whose performance is primarily driven by AI tools. This shift demands a deeper look at the processes behind outcomes, emphasizing critical reasoning, creativity, and ethical judgment over output alone.
The Old Way Doesn’t Work in this New Age
Historically, performance evaluations have prioritized quantifiable outcomes like sales figures, project completions, and other measurable results. Now that AI tools are capable of generating high-quality outputs, there is a risk of conflating AI-assisted work with actual human ingenuity.
For example, AI can draft reports, analyze data, and even generate creative content, potentially masking the employee’s own contribution. Before the advent of AI, an employee or manager may have generated a weekly report evaluating a project’s completion schedule or sales target performance to show that they were keeping track of progress and results. Such a report not only demonstrated a substantial understanding of the project or its goals, but also showcased writing ability, critical analysis, and presentation skills. Now AI can generate well-written reports, undertake the analysis, and even make the report “pretty” with graphics and templates. This work can be produced within seconds without substantive contribution by the employee, other than providing the data.
If both the human-generated and the AI-generated report provide the required information about the project or goals, what happens to those other evaluation points? This raises a pivotal question: are we assessing the employee’s capabilities, or their proficiency with AI tools?
AI Output vs Human Skills
This moment in technological history asks organisations to decide whether they value pure output or the human skills that drive innovation and adaptability. I propose that the choice is not binary: it can depend on the employee’s role in the organisation and what is needed from the person performing that role.
In roles where tasks are routine and outcomes are easily quantifiable, such as data entry or basic customer service, emphasizing output may be appropriate. AI can enhance efficiency in these areas, and performance metrics can focus on speed and accuracy. In these roles, the human skill of checking AI-generated work for accuracy will become an important performance evaluation metric.
Roles that require creativity, critical thinking, and complex problem-solving, such as strategic planning, research and development, or leadership positions, demand a different approach. In these cases, over-reliance on AI-generated outputs can obscure the unique human contributions that drive innovation. Organisations should develop evaluation frameworks that assess the processes behind outcomes, including decision-making rationale, adaptability, and ethical considerations.
Challenges in Distinguishing Human and AI Contributions
One of the primary challenges in this new paradigm is discerning the extent of AI’s involvement in an employee’s work output. As AI tools become more sophisticated, their outputs can closely mimic human work, making it difficult to attribute credit accurately. This ambiguity can lead to misjudgments in performance evaluations, promotions, and rewards.
To mitigate this, organisations should:
- Implement Transparent AI Usage Policies: Clearly define acceptable uses of AI tools and require employees to disclose AI-assisted work. These policies should specify which AI tools may be used, in accordance with the company’s privacy and confidentiality rules.
- Encourage Reflective Practices: Promote self-assessment and peer reviews that focus on the thought processes behind tasks. Make clear that employees must not abdicate authority or responsibility to AI: AI-generated work still needs to be reviewed and critically evaluated by the employee before it is relied upon.
- Invest in Training: Equip managers with the skills to discern and evaluate the nuanced interplay between human input and AI assistance.
AI-augmented work is pushing us to redefine what it means to be a high performer. Traditionally, managers looked at outcomes rather than process; the AI age requires us to flip that script to determine and evaluate who the high performers really are. Organisations that want to develop their top talent need to establish protocols and policies on the appropriate use of AI and monitor when employees are relying too heavily on AI to do their work. They then need to shift how they recognize and reward talent by addressing the question: “What is talent in the age of AI?”
Reference: McRae, E. R., Aykens, P., Lowmaster, K., & Shepp, J. (2025, January 22). 9 trends that will shape work in 2025 and beyond. Harvard Business Review. https://hbr.org/2025/01/9-trends-that-will-shape-work-in-2025-and-beyond