I have been responsible for the first-level screening interviews for my team and my sister team at my current organization. I have had a variety of experiences as an interviewer, which I would like to note down here.
- Many people bluff about their experience. Many just don’t know the intricacies of what’s been written in their own resumes.
- I noticed that people who give monologues about their work are the ones who can't answer technical questions. This is especially the case when someone can explain how the product works, what features are available, and how the system was designed, but cannot highlight their own contribution or daily work.
- Folks who apply for senior positions explicitly say that they don't code. One architect told me that they became an architect so that they wouldn't have to code, and hence would not answer any technical questions unrelated to architecture and system design. I have been instructed explicitly that we need people who can code, and for senior roles the screening questions are deliberately straightforward (can you name a data structure that can be used to reverse a sentence? Given the API of said data structure and a code stub, can you write working code to reverse a sentence?). Reversing a sentence is one example of how straightforward the questions were. But the seniors just… refused. This was a huge disappointment for me.
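To give a sense of how little is being asked, here is a minimal sketch of the kind of answer the reverse-a-sentence question expects, using a stack as the data structure (the exact stub and API we give candidates differ; this is just illustrative):

```python
def reverse_sentence(sentence: str) -> str:
    # Push each word onto a stack, then pop them all off:
    # the last word pushed comes out first.
    stack = []
    for word in sentence.split():
        stack.append(word)

    words = []
    while stack:
        words.append(stack.pop())
    return " ".join(words)


print(reverse_sentence("the quick brown fox"))  # fox brown quick the
```

Any candidate who understands what a stack does should be able to produce something in this shape within a few minutes.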
- Many candidates lie and cheat as well. I have seen people chatting on their phones to get answers, and using Google to copy answers verbatim. The least you could do is change the variable names and not use the first result in the Google search. Some still have the nerve to plagiarize over the phone even with the webcam switched on.
- Many candidates were comfortable answering fizzbuzz, but not variations of it. For example, counting the unique characters in a string is solved easily, but the same candidates cannot work out how to find the second most frequent character in a string.
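The variation is only a small step beyond the original. A minimal sketch in Python, assuming ties may be broken arbitrarily (in the actual interview I let candidates pick any tie-break rule):

```python
from collections import Counter

def second_most_frequent(s: str) -> str:
    # Count every character, then take the second entry in
    # frequency order. Ties are broken arbitrarily here.
    ranked = Counter(s).most_common(2)
    if len(ranked) < 2:
        raise ValueError("need at least two distinct characters")
    return ranked[1][0]


print(second_most_frequent("aaabbc"))  # b
```

The whole jump from "count unique characters" to this is recognizing that the same frequency table answers both questions.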
- Many practice data structures and algorithms in C++ and tell me they can't do the same in Python. With Python being a hard requirement, I was not able to give them a pass. I even let them use the docs for syntax and simply translate the logic of basic DSA questions from C++ to Python, but they were unable to do that. I wonder why/how so many have been trained to think of DSA only through the lens of one language rather than conceptually, in an abstract manner.
- It's really hard to set up questions while also making sure they are not easy to google. Reversing a sentence is not as easy to google as reversing a string.
- For SQL, I have been using a variation of Jitbit's interview questions. It's really surprising how many people falter at self joins and left joins, even though they can explain the theoretical difference between them. Some can also explain orally how `count(column)` behaves when nulls are present, but when they write the same in SQL, they are unable to debug why the counts don't come out the expected way.
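To illustrate the count-with-nulls behaviour concretely, here is a minimal sqlite3 sketch (the table and column names are made up for the example; they are not from the actual question set):

```python
import sqlite3

# In-memory database with a nullable column, to show how
# count(*) and count(column) diverge when NULLs are present.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, city TEXT)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [(1, "Chennai"), (2, None), (3, "Mumbai"), (4, None)],
)

total_rows = conn.execute("SELECT count(*) FROM users").fetchone()[0]
non_null_cities = conn.execute("SELECT count(city) FROM users").fetchone()[0]

print(total_rows)        # 4 -- count(*) counts rows
print(non_null_cities)   # 2 -- count(city) skips NULLs
```

Candidates who can recite "`count(column)` ignores nulls" often still expect both queries above to return the same number when debugging their own code.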
Overall, this was a really eye-opening experience. At one point I felt really sad that I had to reject so many candidates in a week. Eventually I started scheduling only a limited number of candidates per week to keep myself sane. 😅
Moving ahead, I'm looking at evaluating products that can conduct the baseline screening so that I don't have to spend an hour screening for basic skills. This would definitely save a lot of time. I prefer this approach over a project-based one because there is less opportunity to plagiarize.

The product I'm hoping for should be similar to HackerRank and the like: it should take my custom questions and test cases and evaluate the candidate. The features should be straightforward: write code, evaluate test cases, submit, email alert, maybe proctoring (though I don't feel comfortable spying on the candidate; if I'm going to do that, I might as well just have a one-on-one call). This should be a fun side project to build and open source. I already see one (OpenRank) that seems to fit the bill, but I'm yet to evaluate it. If this side project works out, then all we'd need is to host it on a platform and run the code. I remember reading that Jeff Atwood also built his own tool to evaluate candidates (I don't have the link handy right now). Maybe it's InterviewZen, but I can't find his quote right now. Anyway, such a tool would be a huge help in filtering out candidates who don't possess basic programming knowledge.