Technical Interviews – Phone Screen – Part 3

December 19, 2007

Fourth, augment your answer. Answer the question and then some. Here’s a very useful exercise if you have never done this. Put yourself in the position of the interviewer. Pretend you have 30 minutes to give a phone screen on C++ and programming skills. What topics do you cover? You can’t possibly cover everything, so you have to prioritize. Try making a list of things that you think are important. Then cut the list down until you have maybe 6-10 things. Now come to the phone screen with your list in hand and try to make sure that you say something about each of your points somewhere in the phone screen. Now, of course, you can’t just launch into a random tangent after a totally unrelated question, but the idea is that your list and the interviewer’s list should have significant overlap (otherwise, if the skills you think are necessary and the skills the interviewer thinks are necessary are vastly different, the job is probably not a good match anyway). Therefore, when the interviewer asks a seemingly “random” question, you’ll be able to see which of the underlying points he is aiming at and you can beat him to the punch.

Another personal example here. One of the questions I had in an on-site interview was to write some bit counting code and then optimize it. This being my very first on-site interview, I might have been a little obtuse, but when the interviewer asked some general question like, “why did you only use 16 entries for that table? Why not use 1024 or 10 million?” I wasn’t sure what he was getting at. It just seemed like a dumb question. Of course you’re not going to hard-code a static 10 million entry table in your program – that’s stupid, what kind of question is this? What the interviewer really wanted to hear was a description of the memory hierarchy – L1 and L2 cache and main memory – and how performance takes a nose dive once you exhaust those caches. My frustration was that I’m well aware of the memory hierarchy and cache design issues (set associativity, aliasing, cache-line alignment for performance, etc, etc…), but I really messed up that question. Had I come with a mental checklist of things that I thought were important to the job, cache issues and CPU architecture would have been at the top and I would have made sure to communicate that somewhere. I could have seen what the question was really driving at right away. So don’t make the same mistake. Come with a list. Make it a two-way street. The interviewer is trying to “pull” out what you know, so make sure to “push” out what you know.
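For the curious, the table-driven bit counting the question was built around might look something like this – a sketch, not the actual interview code. The 16-entry table handles one nibble at a time; a larger table (say, 65536 entries for 16-bit chunks) trades memory, and therefore cache footprint, for fewer lookups, which is exactly the trade-off the interviewer wanted discussed:

```cpp
#include <cassert>
#include <cstdint>

// 16-entry table: the popcount of every possible 4-bit value.
static const int kNibbleBits[16] = {
    0, 1, 1, 2, 1, 2, 2, 3,
    1, 2, 2, 3, 2, 3, 3, 4
};

// Count set bits by looking up one nibble at a time.
// A 16-entry table fits easily in L1 cache; a multi-megabyte table
// would thrash the caches and performance would fall off a cliff.
int popcount32(std::uint32_t v) {
    int count = 0;
    for (int i = 0; i < 8; ++i) {   // 8 nibbles in 32 bits
        count += kNibbleBits[v & 0xF];
        v >>= 4;
    }
    return count;
}
```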

Fifth, impressions do count. Rarely does a candidate just nail every single question and answer everything perfectly, so then, like it or not, it does come down to weighing the good and bad – “well, he did well on my data structures questions, but I don’t think he really understands threading, …” This is where the subjective part comes in. For a candidate that I like, I might be more forgiving of the things that he doesn’t know. For a candidate that I don’t like, I’m less forgiving. Shocking, but that’s human nature. Of course, that’s within limits. If the candidate totally bombed the interview, it doesn’t matter how much I like him, he’s not going to pass. And if a candidate answers every question perfectly, I’ll pass him even if I didn’t like his attitude. (The idea being we can evaluate again in an on-site interview with other interviewers). In terms of attitude, basically it comes down to not being cocky, arrogant and dismissive and instead showing genuine enthusiasm and eagerness and being gracious with the interviewer on questions that you think are bad (all interviewers ask bad questions occasionally). Hopefully common sense, but too often candidates seem very stiff and even slightly defensive.

Finally, be sure to follow up at the end. The interviewer may not offer you a chance. This may be intentional. When I’m going to fail a candidate (and I know as soon as the screen is over), I might just say, “well, it was nice talking with you … ” Most candidates don’t ask about follow-up. They just say, “OK” and we hang up. I’ve had a couple candidates ask me straight out if they passed or not right at the end. This puts the interviewer in a slightly awkward position when the candidate is failing, as I hate to say outright, “no, you failed.” If the candidate is passing, I have no problem telling him so on the phone and explaining the next steps. You should know the status of the phone screen by the end. If you don’t, ask what the next steps are. You can get a pretty good read by how the interviewer responds whether you passed or not. (Basically, if he doesn’t tell you right away that you passed, you probably failed).


Technical Interviews – Phone Screen – Part 2

December 19, 2007

Don’t assume that just because you don’t know something it must not be important. I remember one candidate was doing quite poorly on the C++ questions. I probably should have just given up after he couldn’t even tell me what a virtual function was, but since that was my first question, I pressed on. (I couldn’t end the phone screen after 2 minutes, right?) It was quite clear he knew hardly any C++ at all. His attitude, however, was quite dismissive – as if I were asking him trivial things and he would just “go read a book” and “pick it up in a couple weeks.” After he told me he could just “pick it [C++] up in a couple weeks” by finding some book somewhere, the interview was over. I was just shocked. It’s OK to be a little dismissive if you are answering at 95%+ and then some (back to knowing what you know – when you really know something, confidence is natural), but the interviewer has a reason for these questions and though you may think they are just silly interview questions, he does not. A random question to explain the RAII design pattern (or I explain the design pattern and ask you where you might use it) may seem like minutiae to you, but maybe the interviewer is trying to see if you know how to write exception-safe code. (Say, because their code base is chock-full of exceptions).
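To make the RAII point concrete, here is a minimal sketch in the pre-C++11 style of the era – the resource is acquired in the constructor and released in the destructor, so cleanup happens even when an exception unwinds the stack. `FileGuard` is a hypothetical name for illustration, not from any particular code base:

```cpp
#include <cassert>
#include <cstdio>

// RAII wrapper for a FILE*: the destructor releases the resource,
// so the file is closed even if an exception propagates through
// the scope that owns the guard.
class FileGuard {
public:
    FileGuard(const char* path, const char* mode)
        : f_(std::fopen(path, mode)) {}
    ~FileGuard() { if (f_) std::fclose(f_); }
    FILE* get() const { return f_; }
private:
    // Non-copyable: exactly one owner per resource.
    FileGuard(const FileGuard&);
    FileGuard& operator=(const FileGuard&);
    FILE* f_;
};
```

This is the shape of answer the RAII question is fishing for: in a code base full of exceptions, every resource that isn’t owned by an object like this is a leak waiting to happen.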

Which leads me to point three. The “trivia” questions. I hated these as an interviewee. I hated them because they have no bearing on actual programming skill. They are just trivia. It’s almost random if you know them or not. Let’s give some specific examples. Almost all “definition” questions (such as the RAII design pattern above – throwing out big words to see if the candidate recognizes them). The “keyword” questions. Come on, you’ve seen them – “what does the ‘mutable’/’static’/’explicit’/’volatile’/’register’/’restrict’ etc keyword do?” Does the fact that one person knows what the ‘mutable’ keyword does (it allows a const member function to modify data members declared mutable) and another does not have any bearing on how well each one programs in the real world? Probably not.
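For completeness, here is the ‘mutable’ answer in code – a sketch of the usual use case, where a const method updates bookkeeping that isn’t part of the object’s logical state (the class and member names are made up for illustration):

```cpp
#include <cassert>
#include <string>

class Name {
public:
    explicit Name(const std::string& n) : name_(n), reads_(0) {}
    const std::string& get() const {
        ++reads_;   // legal only because reads_ is declared mutable
        return name_;
    }
    int reads() const { return reads_; }
private:
    std::string name_;
    mutable int reads_;   // not part of the object's logical state
};
```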

So why are these so popular? Mostly because they are easy. No guessing here – you either know the answer or you don’t. Second, they can effectively test for lack of experience. Consider an interview I had with nVidia. The interviewer posed the question, “what does the volatile keyword do?” I forgot (really – I think I knew it at one point) and gave him the answer for the “register” keyword instead. Bingo! The interviewer immediately knows I have little to no device driver experience. (Which should have been obvious from my resume anyway). What frustrated me at the time was that the question seemed unfair. So what? Does that mean I’d make a lousy device driver programmer? Maybe. Maybe not. But the question has no bearing on my actual skill as a programmer and I was afraid that was not understood. And had I known the answer to the question right away, it would have proven what? Absolutely nothing. It would have been a no-op. Naturally, I hate not knowing something, so I went back and read all about the volatile keyword (and how it is not a substitute for memory barriers in lockless algorithms, but that’s another issue). Now if I interview with AMD and get the same question, you bet I’m going to nail it. But absolutely nothing has changed between then and now. I still have no real device driver experience to speak of. I try to avoid these questions as much as I can. My best advice here is to just memorize them all. There aren’t that many keywords in the C++ language. Make sure you know all of them.
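Since we’re memorizing keywords anyway, a small sketch of what ‘volatile’ buys you. The variable below is an ordinary global standing in for a memory-mapped device register; the point is the same either way:

```cpp
#include <cassert>
#include <cstdint>

// 'volatile' tells the compiler this location may change outside the
// program's normal control flow (a device register, a signal handler),
// so every access must actually touch memory rather than be cached in
// a CPU register or optimized away.
volatile std::uint32_t fake_status = 0;   // stand-in for a device register

// Poll bit 0. Without volatile, the compiler could hoist the load out
// of a polling loop and spin forever on a stale value.
bool device_ready() {
    return (fake_status & 0x1) != 0;
}
```

Note that, as mentioned above, volatile gives no ordering guarantees between CPUs – it is not a substitute for memory barriers in lockless code.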


Technical Interviews – Phone Screen – Part 1

December 19, 2007

Over the past several months, I’ve had the opportunity to participate directly in all levels of the interviewing and hiring process at my new company. This has been quite an eye opening experience.

It wasn’t too long ago (if you scroll down just a few pages) that I was on the other side and as a candidate, I found the interview process very frustrating. I had a strong resume and managed to get quite a few phone screens and even many on-site interviews (I had at least 8 before I finally got a job. Maybe more – after so many, it’s hard to remember). It got very frustrating. No feedback, just the standard cookie cutter form letter – “not a good fit,” “looking for more experience,” etc…

Fast forward about a year. I’ve participated in on-site interviewing of over a dozen candidates, reviewed code screens for even more and given phone screens to yet more. Some of these reflections will be specific to the position and job I’m interested in (entry-level – 2 years experience C++ programmer) and some are just rants about things that candidates do, but hopefully much will be relevant to anyone seeking a technical or programming job. I’ve divided this series into several parts to make it more readable.

Let’s get some things out of the way first. All interviews are at least partially biased, subjective and do not necessarily accurately measure what they really want to measure. Certain popular interviewing procedures make things worse, but to an extent, as much as we’d like to pretend everything is completely objective and scientific, there is a lot of variability in the whole process. Don’t take it personally if you fail an interview. Fortunately, there are things you can do to improve your chances. It’s not all luck.

All right, let’s start with the phone screen. First, the interviewer hates asking dumb questions almost as much as you hate answering them. Realize that he is not trying to be condescending by asking such trivial questions. It doesn’t mean he has not seen your resume or doesn’t believe your resume. The interviewer really does want you to pass. But you have to help a little. One way to help is by being gracious with the dumb questions. Frankly, some people lie on their resume and so until people stop missing these dumb questions, we have to keep asking them.

Which leads to point two. It’s OK not to know everything. If you don’t know what the interviewer is talking about at all, say so. I’ve had candidates that would play guessing games or just try fishing for more information by answering a different question or changing the parameters and redefining my questions so they could answer them. Only after wasting significant time clearly defining the question and finally boiling it down to a binary true or false answer do I realize the candidate doesn’t have any idea at all what he’s talking about. Do not do this. Sometimes, the interviewer will try to draw you down that line of questioning anyway. This happens for two reasons. One, he may think it is really important and he wants to give you a chance to guess. (Sometimes you can gain insight into how the candidate thinks by how he guesses). Or two, he thinks you do know the answer but are afraid of getting caught by a “trick” question so you’re giving up too easily. This is not a bad thing. It means you impressed him enough with your earlier answers that he believes you are smarter than you’re letting on. If you really don’t know, make a guess, explain why you guessed how you did, then say you know it’s a guess and re-iterate that you are not familiar with the topic. The interviewer should get the hint.

Candidates that tell me honestly when they don’t know something impress me much more than those that are good guessers. Knowing what you know, but also, more importantly, knowing what you don’t know is a sign of maturity. I’ll gladly admit that I am not a C++ expert (or any other expert, for that matter) and I’m learning new things all the time. (Just the other day I learned about type based alias analysis and struggled with the lack of assignment in template metaprogramming – maybe a post for another day.)


Is C# Just a Better Java?

October 4, 2006

Recently, I downloaded Microsoft’s Visual C# Express (free), and I have been playing around with C# and .NET. So, after just a couple weeks and a few hundred lines of code, I feel confident in my ability to make an informed judgment.

The first thing that struck me was just how similar it is to Java. If imitation is the sincerest form of flattery, I guess Sun should feel pretty good. Microsoft probably would have just stolen Java if they could have gotten away with it, but apparently some legal troubles stood in their way.

Both compile into bytecode and run in a VM. Both have automatic garbage collection. Neither supports multiple inheritance (contrast with C++), preferring instead the interface paradigm. Both use the “everything is an object” model. Strings are immutable. (In Java, you would use a StringBuffer; in C#, it’s called a StringBuilder). Neither supports global methods. Both support thread synchronization primitives in the language itself (Java with the “synchronized” keyword; C# with the “lock” keyword). Exception handling is similar, though C# does not support checked exceptions (where you have to declare a “throws” clause in your method prototype).

With all those similarities, it seems clear that C# was modeled on Java and is really a pretty far cry from C++ and an even farther cry from C. (Thus, the name, C#, seems misleading, at least to me. I originally assumed it might be more C-like).

But there are some notable differences. C# supports “value types,” or structs. Whereas in Java every object is allocated on the heap, a C# struct is stored inline – on the stack, or directly inside an array or enclosing object. Consider the following code (valid in both Java and C#):

SomeClass[] x = new SomeClass[500];

This allocates an array of 500 references to SomeClass, but it doesn’t allocate any space for each class. Thus, we have to do something like:

for (int i = 0; i < 500; i++)
    x[i] = new SomeClass();

Contrast this with the statement:

int[] x = new int[500];

In this case, because int is a primitive type, we allocate space for 500 ints. (Rather than 500 references to an int). With a C# struct, we can do this:

struct SomeStruct {
    public int a, b, c;
}
SomeStruct[] x = new SomeStruct[500];
x[57].a = 2;

Why do I care? Maybe it’s just my C coding background, but the second way is more efficient. Besides avoiding 500 calls to new, the structs will be allocated in one chunk which should ensure that they are contiguous in memory and increase cache efficiency. (I guess if we’re worried about cache efficiency, we probably wouldn’t be using C#, but humor me). Finally, the garbage collector only has to keep track of one object rather than 500.

Along the same lines, C# supports true multidimensional arrays rather than only jagged arrays like Java. (Though you can use jagged arrays too).

In line with C# giving more precise control, C# supports a “ref” keyword which makes it possible to write a real swap function in C#:

public static void swap(ref int a, ref int b)
{
    int z = b;
    b = a;
    a = z;
}
// ...
int x = 5, y = 10;
swap(ref x, ref y); // now x = 10, y = 5

Java proponents may argue that this is not necessary and the programmer could redesign his program, but certainly having the functionality if you want it is better than not having it at all. If you don’t like it, don’t use it.

There are several other significant differences such as operator overloading, delegates and generics, but most others are syntactic sugar (enums are handled more nicely and you can switch on strings, for example).

So, in short, it seems the answer to the question, “is C# a better Java,” is yes. Personally, I don’t really get into either language, but if I had to pick, I’d take C#. Of course, there’s more to a platform than just the core language, but that’s a subject for another post.


Open Source Graphics Drivers

August 10, 2006

The recent announcement that Intel is open sourcing the Linux drivers for their latest 3D hardware (i965 integrated parts) got me thinking once again about the evils of closed source graphics drivers. Since most of the arguments against closed source graphics drivers are well-known, I’ll try not to rehash them.

The problem, though, is that all those arguments are negative arguments against keeping the drivers closed. But frankly, the companies involved are not much interested in arguments against closed drivers coming from a few open source Linux fans. Sales of cards used exclusively for Linux are probably very small, so they want something more tangible. So, here are two positive arguments for opening the drivers that I haven’t heard.

Virtualization. I have slightly mixed feelings about virtualization schemes like Xen and VMWare (maybe another post), but nevertheless, they are becoming increasingly popular for many reasons. They currently do a very good job virtualizing memory, CPU, and network cards, but in the area of 3D graphics, they utterly fail. Why? Why can’t I run Quake 3 in Windows and play Tux Racer at the same time in Linux with Xen? Now, ignoring for the moment the sensibility of doing such a thing, the current state of affairs is that it’s just not possible.

The question, though, is how is it fundamentally different if I play Quake 3 and Tux Racer both under Linux or one under Linux and one under Windows. The way hardware accelerated OpenGL works under Linux (and presumably it’s at least similar in Windows) is that the 3D clients (Quake 3 and Tux Racer) get direct hardware access through mmapped memory and coordinate locking and synchronization through shared memory. This is part of DRI (direct rendering interface). The point, though, is that the hardware itself has to support this direct rendering and be able to context switch between 3D clients, and that most of these locking and synchronization issues are already solved in the drivers (where by drivers, I mean the OpenGL library, X driver and kernel driver). Technically, then, there is not a lot of difference and it’s basically a driver issue. The driver assumes it has direct physical access, but if the driver were open source, one could rewrite it to not assume physical access and use similar synchronization through the Xen hypervisor.

Now, I’m sure it’s much more difficult than that, but with the Intel i965, there is a completely open source stack – Xen, the Linux kernel, X, OpenGL (Mesa), and the i965 drivers are all open source. If the hardware can do it, it’s just a matter of writing the code now (which may be far from trivial). I think it would be cool to see the Intel cards doing something that is just not possible with the closed source ATI and NVidia cards because they chose to open source their drivers.

The second reason is one that even the Windows weenies (whoops, did I say that? I meant “fans”) should care about. Games. Windows games. Let’s face it, games sell video cards. One of the most frustrating things game developers have to deal with is driver incompatibilities. Working on ED, a game that doesn’t really tax the video card, we had several driver bugs ourselves. It’s not uncommon to see an FAQ entry instructing anyone having any problem to upgrade to the latest manufacturer’s drivers (which shows great faith in the manufacturer’s regression testing). Often this works, but sometimes it introduces different bugs somewhere else.

Now, suppose the drivers were open source. Game studios with their multi-million dollar game budgets could easily afford to employ an engineer whose job it is to be intimately familiar with the drivers of both cards. If a bug arises, they can just fix it, push it upstream so end users will get it, and move along. This person could also fill a consulting role. Wonder why your clever optimization works on one card, but not another? What is the best way to do X that will work on all cards? etc… The end result is that the game that comes out will work flawlessly and perform optimally on each vendor’s card. (Incidentally, it might allow people more freedom to choose OpenGL on Windows rather than always choosing D3D because it’s better supported).

Take Doom 3 as an example. Doom 3 is OpenGL and ran better on the high-end NVidia cards when it came out. And it’s basically a driver issue. NVidia has good OpenGL drivers, ATI less so. While the rest of us open source rabble might not be competent enough to write an OpenGL driver (as NVidia insinuated), John Carmack could. Furthermore, since iD makes a game engine, it is in their best interest to make sure that the fast path in the engine is optimized in the drivers as well, so it’s not at all inconceivable that Carmack would spend some time optimizing the low-level drivers. Optimizations that could only make your card look better in benchmarks. Ditto for Epic, Crytek, Valve and anyone else making a game engine.

Remind me again why these drivers are closed source…


Firefox and Opera

August 7, 2006

Recently I ran across this Javascript speed test on Digg. Go ahead and try it yourself. In short, the test shows that Firefox 1.5 is as much as three times slower than Opera and marginally slower than IE 6 for these particular tests. Now I knew that Firefox was no speed demon, but come on. Three times slower! Add to that Firefox’s seemingly insatiable appetite for RAM and Opera is looking very good.

Now, I’m not a big fan of closed source software, but I figured if Opera is really that good, maybe I should give it a shot. So I did. For the past week, I’ve been using Opera 9.01 exclusively. Now, I must say that Opera is a great browser and I never had any problems whatsoever with it (no rendering issues, font issues, etc…). However, I’m sticking with Firefox. Here’s why.

First, let’s look at the memory situation. I have heard that Opera is much more RAM-friendly than Firefox and Firefox has a reputation of being somewhat of a memory hog. This was not my experience, however. When I first started Opera, it was using about 17MB of memory. Currently, after 3 days of browsing, it is taking up 118MB! I have 10 tabs open. Occasionally, memory usage does drop, but Opera seems just as reluctant to return its memory as Firefox. This is certainly not a formal test, but I think it’s safe to say Opera uses no less memory than Firefox. (I actually think Firefox uses slightly less, but that’s more a hunch than anything). I’ll save the rant about why a web browser should need a quarter of the machine’s memory, but for my part, I think memory usage is an important criterion and I was disappointed with Opera.

Next, let’s look at that speed test mentioned in the beginning of the article a little more deeply. I’m not at all convinced the test is measuring what it intends. The numbers are impressive, no doubt, but they are not stable. After 3 days of browsing and with my 10 tabs open, I tried the test again. Now it is 7x slower than with a clean slate and only slightly less than 2x slower than Firefox. It is even easier to demonstrate. Start up Opera with one tab and open the speed test. It shows me about 50ms for the string functions test. Now, open one additional tab and log into GMail. Switch back to the first tab and run the test again. Now the string functions test takes 600ms, over 10x slower! How in the world do the string functions get 10x slower because there is another tab open?? I guess only the Opera engineers will know, but it casts grave doubt in my mind that the test is useful at all. Firefox is at least consistently slow.

In addition, looking at the code for the tests, it’s quite obvious that the tests are measuring totally unrealistic workloads – workloads that would never occur in the real world. For example, take the try-catch test. It runs a loop about 4000 times that just throws an exception, catches it and does nothing with it. Now, if you have a program that catches 4000 exceptions, you will have bigger problems than the speed of the catch. So, of course, it is good that it includes a battery of tests, but each one is similarly contrived. A proper benchmark has to use a realistic workload. In short, I think the test is bogus and this serves as simply another reminder to be very, very skeptical of any benchmarks like these.

Finally, my last reason for eschewing Opera and sticking with Firefox is purely philosophical. Firefox is open source. Opera is not. By using Firefox, I contribute to its development, and hopefully these issues will be fixed more quickly.


Fluxbox

August 5, 2006

I recently purchased a 250GB SATA drive because I was running out of space on my Linux partition. I decided it would be a good time to clean out all the cruft that has been accumulating in my Gentoo installation over the past 3 years by starting over with a clean slate. (And because the new drive is noticeably faster than the old one). I figure I’ll slowly build out the new Gentoo installation, customizing it while dual booting with the old Gentoo installation. Then I’ll simply copy my old home directory over and reformat the old partition.

So after installing X, I was faced with the age-old decision of which window manager to use. I’ve been using XFCE 4 for the past 3 years and highly recommend it. KDE and GNOME are just overkill for my needs. Don’t get me wrong – they’re fantastic for someone who likes that type of desktop. But, I’ve gotten in the habit of keeping several terminal windows open and launching almost all my apps from the command-line. Need some tunes? “xmms &” Need a calculator? “bc” Need to move/delete/rename some files? bash is your friend. I guess it’s hard to teach an old dog new tricks.

But then I got to thinking about how much of XFCE I really used and I realized maybe even that was overkill. So, I’m currently trying out Fluxbox and I must admit, I really like it. top shows it taking up less than 4.5 MB (and some of that is probably shared libraries) which is also nice. More memory for Firefox to hog… I especially like the Tabs feature which lets you take any group of windows and turn them into a tabbed window. It’s great for xterms. I used to use gnome-terminal just for the tabs. The right-click menu is very easy to customize as are the hotkeys. It’s no replacement for KDE or GNOME (it doesn’t even have a file manager), but if you just want a small fast window manager to do just the basics and then get out of your way, Fluxbox might be for you.