The Missing Piece to ATS Prescreening Questions

It’s no surprise that the number of resumes piling up for HR and hiring managers has increased in the past year. Thus, it’s more important now than ever to have an applicant tracking system (ATS) that’s well optimized to organize candidates. Sure, basic search and filter components are available; however, there’s one more key component that can really help you find the “shining star” in today’s likely flooded candidate pool: prescreening, or custom, questions.

A prescreening mechanism typically allows an ATS user to develop weighted questions that each applicant must complete for a specific position. If candidates answer enough questions “wrong,” they might lose their opportunity for the position and be put in the not-in-consideration bucket. Perfect. This means one less candidate for an HR or hiring manager to review. But what if the candidate was a good fit and the prescreening questions were just a bit off? This is a topic I’ve had the opportunity to discuss with Jerry Bires on several occasions.

For anyone who doesn’t know Jerry Bires, he’s the Prescreening Question Expert who can help formulate questions to pre-qualify top candidates. Rather than try to explain it myself, I’ve included some commentary Jerry wrote that further explains his point of view on the topic.

Layoffs today are cutting into the "wood" of the real talent pool at many companies, and top line people are available for the first time in years. Some are now likely applying to your company; they're somewhere in the stack of unreviewed resumes. As you deal with an increasingly unmanageable flow of applicants, there's a lot on the line in terms of your employer branding. Prospective employees, especially those of the caliber you now or will want to hire, will be doubly impressed if they're well treated in this economic climate.

Filtering qualified candidates from those unqualified is crucial, but it takes experience to do it properly. The process must not be onerous to the candidate, yet it must be comprehensive enough to provide meaningful data to the hiring manager. The goal is to deliver a better candidate, and it’s best done by creating questions that display a gradient of experience and skills from novice to guru in areas deemed crucial to success in the role.

Besides adding value to the internal HR department (by filtering bad from mediocre from excellent candidates for your positions), the online prescreen also benefits applicants. Their answers will allow you to more reliably assess their fit for the position. Thus, you can more quickly apprise them of their true status, beyond the utterly vague “We received your resume.” Whether the applicant is told no, maybe or yes in subsequent communications, the simple fact that you’ve reached out more definitively will boost their perception of your employer value proposition, or EVP. You’ve treated them with the courtesy of a timely response, and, believe me, in today’s employment marketplace, that’s a rare occurrence.


To be most effective, question creation should follow a conversation with the hiring authority. He or she is the domain expert, from whom proper interviewing can draw out the most effective questions, the answers to which will serve as true arbiters of success in the role. As we're asking for information directly relating to an applicant’s ability to do a job they desire, the questions, if well written, should capture their interest. Plus, in today's climate, candidates are highly motivated to complete an online application.

By the way, I'm not an industrial psychologist, I'm a recruiter, and there's not what I would call "scientific rigor" at work in my questions. I'm distinguishing among unqualified, reasonably qualified and very qualified candidates based on a hiring manager's criteria. Once the pool of applicants has been pre-screened, and phone interviewed, you can put those most desired through the in-depth assessments that are the purview of the I/O PhD.

You may find that a couple of sacred cows must be gored as you move forward. From caveman days, when Thag asked Ug how long he'd been hunting mammoth, years of experience has been the primary gauge of candidate adequacy for a job. Thag never knew that, while Ug claimed five full years of hunting, for most of the last two years he'd actually just been gutting mammoth that his wife had hunted. Damn shoulder injury.

10,000 years later, most online pre-screening questions fail to deliver anything better than meager data points about an applicant's ability to do a particular job.

While there’s some merit to asking about years of experience, you never know what percentage of a person’s time over the course of a year has been spent doing the task a hiring manager said was crucial to success. So a claim of seven years’ experience, which on the surface means the candidate should be stronger than someone with four years’ experience, may not necessarily hold.

What other kinds of questions appear often in online applications? These two examples are typical:

Indicate your level of expertise in Java programming.
Enter 0 if no experience, 1 if basic, 2 if moderate, 3 if solid or 4 if expert.


Please rate your experience at maximizing sales.
None Limited Good Extensive

Maybe I'm too long in the tooth, but my feeling is that asking candidates to self-evaluate removes much of the value of asking the question.

Hiring Manager Input is Crucial

Ask probing questions of your hiring manager so you find out how they distinguish the novice from the guru for a particular skill set. Get them to peel the onion, as it were, and get granular with the duties to be performed. When you can engage the hiring manager in the process, they'll help you create meaningful questions. Realize that most hiring managers (and HR professionals) have grown up reading, and likely writing, job specs stating "you must have X years of Y experience." So some realignment of their thinking must occur.

When a hiring manager told me he wanted someone with AJAX experience, I had him describe how competency with AJAX might be expressed along a gradient.

With his input I wrote a question like this.
Regarding my development experience using AJAX:
- I'd be new to using AJAX for web applications
- I've built a site where I dynamically manipulated the DOM using JavaScript
- I've written JavaScript that makes remote XML over HTTP calls back to a server to display interactive data
- Above, plus I understand how to build AJAX applications such that they work with multiple browser versions
- Immediately above, plus the work was done for a mobile application
- All I know about AJAX is that he never got over playing second fiddle to Achilles

Not being a software engineer, I couldn't generate such responses on my own. Moreover, this same question, when asked of another hiring manager in a different application environment, may yield an entirely different set of responses.

Why the off-kilter reference to Achilles in the last response? An online application shouldn't take itself so seriously that it becomes a grind. If I can add a question or comment that uses humor, I will. You don't overdo it, but such questions can remove some of the mental pressure a candidate feels.

Let's step back into pre-history to examine question creation more closely from the perspective of Thag and Ug.

As you can imagine, a tribal member's answers to the two questions below would be of little value to a clan's chief.

1. I have hunted mammoth:

2. Please indicate your level of expertise in hunting mammoth. Enter 0 if you are new to hunting, 1 if you have basic skills, 2 if moderate, 3 if solid and 4 if you're an expert mammoth hunter.

As noted above, we can get far more valuable data from answers to questions about mammoth hunting when our interview with a hiring authority has gotten specific skill set information that we can adapt into a multiple choice (or behavioral) question.

The examples below would return far more valuable information than the ones above, and, depending on what's perceived as the most valued criteria, we would score and weight them.

For example, perhaps the questions below could be assigned a point value and each response would earn the candidate some percentage of those points. We may or may not want to allow the candidate to write in their own answer. We may or may not want them to be able to choose more than one response.

QUESTION: In terms of hunting mammoth, I am most proficient with:
- Spears, bow and arrow, or other long range weapon
- Short handle stone club or knife
- Vines
- Throwing stones
- Generating a fearsome yell or wearing a scary mask
- Other, or I would be new to hunting mammoth

QUESTION: In the past month I have eaten mammoth that I:
- Neither hunted nor slew
- Hunted, but it was slain by another
- Directly contributed to the slaying
- Had sole responsibility for the slewage (sic)
- None of the above, or I would be new to eating mammoth

QUESTION: In terms of my hands-on experience with mammoth, I have:
- Hunted mammoth
- Cooked mammoth
- Skinned mammoth
- Two of the above
- All of the above
- None of the above
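The scoring and weighting Jerry describes above (each question carries a point value, and each response earns the candidate some fraction of those points, with low scorers routed to the not-in-consideration bucket) could be sketched roughly as follows. This is an illustrative sketch only; the question identifiers, weights, response fractions, and passing threshold are all assumptions, not the configuration of any particular ATS.

```python
# Hypothetical sketch of weighted prescreen scoring (illustrative only).
# Each question has a weight; each answer choice earns a fraction of
# that weight. Candidates below an assumed cutoff land in the
# "not in consideration" bucket, per the workflow described above.

QUESTIONS = {
    "hunting_weapon": {
        "weight": 40,
        "choices": {  # fraction of the weight each response earns
            "long_range": 1.0,       # spears, bow and arrow
            "club_or_knife": 0.6,
            "vines": 0.3,
            "throwing_stones": 0.3,
            "yell_or_mask": 0.1,
            "new_to_hunting": 0.0,
        },
    },
    "recent_kills": {
        "weight": 60,
        "choices": {
            "sole_responsibility": 1.0,
            "directly_contributed": 0.7,
            "hunted_not_slew": 0.3,
            "neither": 0.0,
        },
    },
}

PASS_THRESHOLD = 60  # assumed cutoff, out of 100 possible points


def score_candidate(answers):
    """Sum weighted points for a dict of {question_id: choice_id}."""
    total = 0.0
    for qid, choice in answers.items():
        question = QUESTIONS[qid]
        total += question["weight"] * question["choices"][choice]
    return total


def in_consideration(answers):
    """True if the candidate's weighted score meets the cutoff."""
    return score_candidate(answers) >= PASS_THRESHOLD


thag = {"hunting_weapon": "long_range", "recent_kills": "directly_contributed"}
ug = {"hunting_weapon": "vines", "recent_kills": "neither"}

print(score_candidate(thag), in_consideration(thag))  # 82.0 True
print(score_candidate(ug), in_consideration(ug))      # 12.0 False
```

Note the design choice: because the fractions are attached to responses rather than to years of experience, the gradient of skill (not tenure) drives the score, which is exactly the point of the hunting-versus-gutting distinction above.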

In summary, when well crafted, the multiple choice question can provide recruiters and hiring managers with an effective initial filter that separates a sure no from a maybe or yes.

Besides the multiple choice questions on which we have focused, and which can be created in most applicant tracking systems, there’s great value in adding open-ended questions to your online application. Such questions should not allow candidates to conjecture about their ability to do something. Instead, the open-ended question can be written so that the answer pays off the experience claimed on a resume, detailing why the candidate can perform one or more of the key functions required on the job.

Keeping to our prehistoric theme, some examples of good behavioral questions are these:

When leading a mammoth hunt, what specific strategies have you taken to minimize injuries to your tribe from hoof and tusk?

Describe how you’ve motivated mammoth hunters who could not be outfitted with long range weapons and instead were given a cudgel. Were direct affronts to their manhood as effective as enticements to a larger share of the kill?

If you have questions or would like to talk to Jerry more about this topic, please contact him at 618-457-8727.

Best of luck!!!
Jake Stupak


Comment by Tom Janz on September 14, 2009 at 3:24pm
Dr. Tom here, one of those annoyingly expensive I/O psychologists. Yes, there are a lot of candidates out there, and finding the gold among the gravel is tough. Screening talent is not rocket science, but it is people science. While you offer good advice for avoiding common pitfalls in writing screening questions, trying it at home (or at your PC) can be quite dangerous for at least two reasons.

Puddle ONE: Adverse impact. Screening questions are part of the selection process and subject to review for their impact on protected classes by the EEOC (Equal Employment Opportunity Commission), just like all other steps in the hiring process. If your questions have the effect of screening out more members of the protected classes than the majority classes, you may need to prove that they are a performance-related business necessity that cannot be achieved by some other process that generates less adverse impact. At the very least, you should monitor the adverse impact that your screening questions trigger if you have more than 15 employees. If your questions cause adverse impact, and it comes to the EEOC's attention and they take action, it is not a good thing. You can face expensive court-mandated remedies and settlements, not to mention the fun of endless hearings in uncomfortable chairs attended by humorless government lawyers and your expensive attorneys.

Puddle TWO: Weak or even damaging questions. Weak questions result in little variation among candidates or show little relationship to on-the-job performance. Damaging questions accomplish the opposite of what you intended: the high scorers perform worse on the job than the low scorers. Fortunately, this happens rarely. While it's easy to spot questions that waste your and your candidates' time by delivering little variation (almost everyone clicks the same answer), it's not so easy to spot questions weakly related to job performance. You need to actually measure job performance (or, at the very least, candidate performance in final decision interviews) and then statistically relate those scores to the candidates' answers to the screening questions. Not your typical recruiter's cup of tea.

My recommendation? Focus screening questions on advertised position minimum requirements, using the techniques that Jerry suggests. Then screen ALL eligible candidates with a low-cost, online, validated screening assessment. Sure, it costs a bit more than ginning up a set of knowledge, credential, or experience questions, but you can avoid those uncomfortable courtroom chairs and be more certain that you haven't ruled out the best talent early on while floating less-than-average talent to the top. Research on selection utility finds that the opportunity costs of using home-grown screening outweigh the assessment costs of using a best-practice online solution by a factor of 3 to 10. Contact me for the proof and a quick way to project the size of the stack of $20s (complete with googly eyes) you could be saving by switching from subjective to objective practices when it comes to talent screening.

