By Michael Hoffman
A New Era for Hiring—and for Employment Law
In my decades of practice, I’ve witnessed the workplace transform in ways few of us could have imagined—digitization, remote work, the gig economy, and evolving cultural norms. But perhaps the most disruptive change to hiring practices in recent years has come from the rise of artificial intelligence.
Employers today increasingly turn to AI-driven tools to screen resumes, evaluate facial expressions during virtual interviews, and predict job performance. On paper, this looks like a leap toward efficiency and objectivity. In reality, it's a minefield of legal, ethical, and practical challenges—particularly when it comes to fairness, bias, and transparency.
As employment lawyers, we stand at a critical intersection. Our job isn’t just to interpret statutes or react to disputes. It’s to counsel, prevent, and help companies navigate a rapidly evolving landscape in a way that’s legally sound and socially responsible.
Bias, Built In
One of the greatest misconceptions about AI is that it’s inherently neutral. In fact, the opposite is often true. AI tools learn from historical data—and if that data reflects bias (as most historical hiring data does), the algorithm will learn and perpetuate it.
Let’s say a company has traditionally hired predominantly white male engineers from a select group of universities. If an AI is trained on that data, it will likely favor candidates who match that profile, even if indirectly—such as favoring certain word choices or penalizing employment gaps more common among women or caregivers.
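The mechanism is easy to see in miniature. The sketch below uses entirely hypothetical data and a deliberately naive scorer (no real vendor works exactly this way): because past hires clustered around one profile, a feature like an employment gap—never an explicit protected characteristic—ends up penalized simply because it co-occurred with past rejections.

```python
# Toy illustration with hypothetical data: a naive scorer "trained" on
# biased historical hiring outcomes learns to reward proxy features.
from collections import defaultdict

# Historical outcomes: (resume features, hired?). In this toy history,
# "gap" (an employment gap) appears mostly among rejected candidates.
history = [
    ({"univ_x", "keyword_a"}, True),
    ({"univ_x"}, True),
    ({"univ_x", "keyword_a"}, True),
    ({"gap", "keyword_a"}, False),
    ({"gap"}, False),
    ({"keyword_a"}, False),
]

# "Training": record how often each feature co-occurred with a hire.
hire_rate = defaultdict(lambda: [0, 0])  # feature -> [hires, total]
for features, hired in history:
    for f in features:
        hire_rate[f][0] += int(hired)
        hire_rate[f][1] += 1

def score(features):
    """Average historical hire rate of the candidate's features."""
    rates = [hire_rate[f][0] / hire_rate[f][1] for f in features if f in hire_rate]
    return sum(rates) / len(rates) if rates else 0.0

# Two candidates with the same keyword: the one with an employment gap
# scores lower purely because past (biased) decisions penalized that pattern.
print(score({"univ_x", "keyword_a"}))  # 0.75
print(score({"gap", "keyword_a"}))     # 0.25
```

Nothing in the scorer mentions sex or caregiving status, yet the output reproduces the historical skew—which is exactly why "the algorithm doesn't see protected characteristics" is not a defense.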
This presents a real risk of discrimination—intentional or not. And while courts are still catching up, the law is clear: disparate impact is grounds for liability under Title VII of the Civil Rights Act. In other words, even if the employer didn’t mean to discriminate, if the tool they used had that effect, they may still be held accountable.
Regulation Is Coming (and It’s Already Here)
We're beginning to see a wave of regulatory attention focused on AI in employment. The EEOC has issued technical guidance on the use of algorithmic tools, warning employers to monitor for bias. New York City has already enacted a law (Local Law 144) requiring employers that use automated employment decision tools to conduct bias audits and notify candidates.
It’s safe to say this is just the beginning. Federal and state agencies are moving toward greater transparency requirements, and I wouldn’t be surprised if we soon see mandates for human oversight and consent from applicants when AI tools are used.
For employment lawyers, this means it’s not enough to wait until a claim is filed. We need to educate clients now—especially HR departments and hiring managers—about their responsibilities and risks.
What Lawyers Can Do Today
So how do we, as employment counsel, bridge the gap between innovation and fairness? Here are a few core strategies I recommend to my clients:
1. Demand Transparency from Vendors
If a company is using a third-party AI tool, they need to ask hard questions. What data was it trained on? Has it been audited for bias? What mechanisms exist for human oversight? Employers are still responsible for any discriminatory outcomes, even if the algorithm comes from an outside provider.
2. Encourage Bias Audits and Ongoing Testing
Employers should regularly test their AI tools for disparate impact—just as they would any other employment practice. We can help connect them with qualified experts and guide the interpretation of results.
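One common starting point for this kind of testing is the four-fifths (80%) rule from the EEOC's Uniform Guidelines on Employee Selection Procedures: if one group's selection rate falls below 80% of the highest group's rate, that is generally treated as evidence of adverse impact warranting closer review. It is a screening heuristic, not a legal standard on its own. A minimal sketch, using hypothetical pass-through numbers from an AI resume filter:

```python
# Sketch of a four-fifths (80%) rule check per the EEOC's Uniform
# Guidelines on Employee Selection Procedures. All counts are hypothetical.

def four_fifths_check(outcomes):
    """outcomes: {group: (selected, total)}.
    Returns {group: (impact ratio vs. highest-rate group, flagged?)}."""
    rates = {g: sel / total for g, (sel, total) in outcomes.items()}
    top = max(rates.values())
    return {g: (rate / top, rate / top < 0.8) for g, rate in rates.items()}

# Hypothetical screening results: how many candidates in each group
# the AI tool advanced to interview.
results = {
    "group_a": (48, 100),  # 48% advanced
    "group_b": (30, 100),  # 30% advanced
}

for group, (ratio, flagged) in four_fifths_check(results).items():
    print(f"{group}: impact ratio {ratio:.2f}, flagged={flagged}")
# group_b's ratio is 0.30 / 0.48 ≈ 0.62, well under 0.8 -> flagged
```

The arithmetic is trivial; the lawyering is in what follows a flag—small sample sizes, job-relatedness, and business necessity all matter, which is why counsel and qualified statistical experts should interpret the results together.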
3. Promote Human Involvement
AI can inform decisions, but it shouldn’t make them in a vacuum. I always recommend a final human review before any hiring decision is finalized. This protects both the candidate and the company.
4. Update Policies and Train Staff
Hiring policies must reflect this new reality. That means updating equal employment opportunity language to address algorithmic tools and training recruiters on the responsible use of technology.
5. Advocate for Candidate Rights
Transparency should go both ways. Applicants deserve to know if AI is evaluating them, and how. Companies that are proactive about disclosure and candidate support not only reduce legal exposure but build trust in their brand.
A Human-Centered Future
Despite the challenges, I believe AI offers real potential to reduce certain biases and streamline hiring—if used carefully. The key is not to reject the technology outright, but to approach it with the same diligence we apply to any employment practice. At its best, AI can help reduce subjectivity, catch inconsistencies, and surface qualified candidates who may otherwise be overlooked.
But left unchecked, it risks reinforcing the very inequalities many of us have spent our careers trying to dismantle.
As employment lawyers, we have a unique role to play—not just as legal advisors, but as stewards of fairness and advocates for a better workplace. In the age of AI hiring, that role is more vital than ever.