Won't that interview filter GPT stop working as soon as someone else makes another GPT which generates answers good enough to pass the filter? :)
Good question.
One option is to solve this problem the way we do today - require that tests be taken in a controlled environment where personal devices aren't permitted.
Another is to track how often a candidate does much worse in an eventual in-person interview than in their GPT screener interview, and make that a signal in the broader system.
Another is to take advantage of the fact that it's generally easier to recognize good work than to create it, and explicitly allow the use of AI in interviews. That said, I don't know whether that would actually work, or whether the "evaluation is easier than generation" property still holds for cutting-edge AI.