AI for Equity or Extraction?

It seems like everything nowadays is AI-powered, and I’m seeing it sweep across higher education like a monsoon. In international recruitment, AI is pitched as a panacea: it will “optimize” outreach, “predict” yield, “personalize” content, and “scale” global engagement. Some of that may be true, and parts of it genuinely excite me. But the deeper question remains: At what cost, and to whom?

I am no stranger to shifts in technology. I remember when CRMs were a revelation and Zoom was an emergency fix. When we shifted from paper-based study abroad applications to an online platform, I was the happiest little advisor in the world. AI is different, though. It introduces a new logic, one that shapes decisions before we even realize they’ve been made.

To state the obvious, international student recruitment has never been ethically neutral. It has long privileged students from English-speaking, high-income regions, reinforced prestige hierarchies, and extracted talent from the Global South under the guise of “opportunity.” AI now promises to make this process faster, smarter, and more targeted. But if the data we train these tools on are rooted in historical inequities, we will not disrupt the cycle; we will simply automate it.

If your AI model predicts yield based on prior enrollment patterns, and your campus has historically admitted students from a narrow band of international high schools in India, China, or Nigeria, guess who your future outreach will favor? The same students. From the same schools. With the same profiles. Equity gets optimized right out of the equation.
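To see how quickly this loop closes, consider a minimal sketch in Python. Everything here is invented for illustration: the schools, the history, and the toy “model,” which simply scores prospects by their school’s historical yield rate, as many simple predictive models effectively do.

```python
from collections import Counter

# Hypothetical historical enrollment data: (school, enrolled?).
# The history over-represents a narrow band of feeder schools.
history = [
    ("Beijing Intl School", True), ("Beijing Intl School", True),
    ("Delhi Public School", True), ("Delhi Public School", False),
    ("Lagos Model College", True),
    ("Rural Indonesian HS", False),  # appears once, never yielded
]

# "Train" the toy model: estimate P(enroll | school) from past counts.
admits = Counter(school for school, _ in history)
yields = Counter(school for school, enrolled in history if enrolled)
yield_rate = {school: yields[school] / admits[school] for school in admits}

# Score a new prospect pool. Schools with no history default to 0.0,
# so they land at the bottom of every outreach list -- the feedback
# loop in a single line.
prospects = ["Rural Indonesian HS", "Beijing Intl School", "Nairobi Academy"]
for school in sorted(prospects, key=lambda s: yield_rate.get(s, 0.0), reverse=True):
    print(f"{school}: predicted yield {yield_rate.get(school, 0.0):.2f}")
```

Nothing in that code is malicious. The model faithfully reproduces the past, and that is precisely the problem.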

This problem is compounded by opacity. Many institutions, eager to “leverage AI,” are working with third-party vendors who use proprietary algorithms trained on undisclosed datasets. Institutions are told the system “scores” or “matches” students but are not told how. And for students, especially those from underrepresented backgrounds, this opacity raises serious questions: Who is being seen? Who is being filtered out?

Vendors are private actors and are not accountable to ethics boards. If their models determine that a first-generation STEM student from a rural Indonesian school is less likely to yield than a legacy applicant from an international school in Beijing, guess who gets the priority follow-up? That is predictive exclusion marketed as predictive modeling.

Action Steps for Ethical Engagement

First, work only with vendors who can answer every one of the questions below. Bonus points if they understand the equity issue deeply and thoughtfully. We have to steer AI. Here are three concrete ways institutions can ensure that AI recruitment supports equity rather than extraction:

1. Ask the Hard Questions Up Front

When working with AI recruitment vendors, demand transparency. Ask:

  • What data were used to train your model?

  • Are your datasets representative across race, geography, and socioeconomic status?

  • How does your system mitigate algorithmic bias?

  • What options exist for human override, appeals, or review?

If your vendor cannot answer these questions, or refuses to, reconsider the partnership. You would never accept a student evaluation tool with opaque grading logic. Why accept it in admissions?

2. Build Ethical Requirements into Your MOUs and RFPs

As universities, we have leverage. If we’re writing the checks, we should be writing the rules. Future contracts should include:

  • Requirements for explainability (e.g., why was this student prioritized?)

  • Regular equity audits conducted by independent reviewers (see the sketch below)

  • The right to access, challenge, or opt out of algorithmic decisions

  • Mandates for diverse training datasets that include underrepresented sending countries

If we embed ethics into procurement, we shift the market toward accountability.
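What might one of those independent equity audits actually check? A simple, widely used starting point is a selection-rate comparison, borrowing the “four-fifths rule” from U.S. employment law by analogy. The sketch below is hypothetical: the regions, the counts, and the 0.8 threshold are illustrative, not any vendor’s real output.

```python
# Hypothetical audit snapshot: per sending region, how many applicants
# the vendor's model scored and how many it flagged for priority
# outreach. All figures are invented for illustration.
outreach = {
    "East Asia":          {"applicants": 400, "prioritized": 240},
    "South Asia":         {"applicants": 300, "prioritized": 150},
    "Sub-Saharan Africa": {"applicants": 200, "prioritized": 60},
    "Southeast Asia":     {"applicants": 100, "prioritized": 25},
}

# Selection rate per region, then the disparate-impact ratio against
# the most-favored region. The four-fifths rule flags ratios below 0.8.
rates = {region: d["prioritized"] / d["applicants"] for region, d in outreach.items()}
best = max(rates.values())
for region, rate in sorted(rates.items(), key=lambda kv: kv[1], reverse=True):
    ratio = rate / best
    flag = "  <-- below four-fifths threshold" if ratio < 0.8 else ""
    print(f"{region:20s} rate={rate:.2f} ratio={ratio:.2f}{flag}")
```

A ratio alone proves nothing about intent, but it gives an independent reviewer a concrete number to put in front of the vendor, which is exactly the point of writing audits into the contract.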

3. Reframe "Best Fit" as a Two-Way Dialogue, Not a Prediction

The most harmful assumption AI can make is that "best fit" is something we determine for the student. Instead, it must be a dialogue. AI can serve students when it supports exploration: suggesting institutions they might not have considered, highlighting strengths they may not have seen, and surfacing options that match their goals beyond rankings or yield likelihood.

For this to happen, we need student-facing tools that explain how recommendations are made and let students shape their own profiles, giving them agency and dignity.
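As one hypothetical shape for such a tool, here is a small Python sketch. The names and matching logic are invented; the point is the structure: the student owns the profile, and every recommendation carries its own plain-language reasons.

```python
from dataclasses import dataclass, field

@dataclass
class StudentProfile:
    # Fields the student owns and can edit at any time.
    interests: list[str] = field(default_factory=list)

@dataclass
class Recommendation:
    institution: str
    reasons: list[str]  # plain language, shown to the student

def recommend(profile: StudentProfile,
              programs: dict[str, set[str]]) -> list[Recommendation]:
    """Suggest institutions whose offerings overlap the student's stated
    interests, and say why each one was suggested."""
    recs = []
    for name, offerings in programs.items():
        overlap = offerings & set(profile.interests)
        if overlap:
            recs.append(Recommendation(
                institution=name,
                reasons=[f"Offers {o}, which you listed as an interest"
                         for o in sorted(overlap)],
            ))
    return recs

# The student edits the profile; the explanation travels with the match.
me = StudentProfile(interests=["marine biology", "data science"])
catalog = {
    "Coastal State U": {"marine biology", "oceanography"},
    "Tech Institute": {"data science", "robotics"},
}
for rec in recommend(me, catalog):
    print(rec.institution, "->", "; ".join(rec.reasons))
```

A real system would be far richer, but the invariant is worth keeping: no recommendation without a reason the student can read, question, and change.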

From Extraction to Reciprocity

Leaders in international education are the stewards of this moment. Our titles say we manage “engagement,” but what we really manage are relationships, across borders and institutions. If we treat AI as a black box that magically delivers students, we risk losing the very thing that gives our field meaning: the human context. So that we are neither the last to ask questions nor the first to cede control, we should:

  • Train our staff in AI literacy so they become informed stewards.

  • Join cross-campus conversations with data governance committees.

  • Develop shared values with partner universities about ethical, AI-enabled recruitment pipelines that promote mutuality rather than dependency.

Global education should never be about strip-mining the world’s talent; it should be about cultivating a shared ecosystem of opportunity, where AI helps expand access rather than narrow it. That requires intentionality, leadership, and resisting the temptation to equate speed with success. We have all these exciting tools; now we must define the terms.
