Gender-dependent systems are usually built by splitting the training data by gender and training a separate acoustic model for each gender. This approach assumes that every state of a subphonetic model depends uniformly on gender. We start from the premise that the acoustic realizations of subphonetic units depend on gender to varying degrees across phones and, more particularly, across phonetic contexts. We show that this is indeed the case by using gender as a question, alongside the phone-context questions, in the context decision trees. Using these trees we build phone-specific gender-dependent acoustic models and demonstrate a novel method for selecting between genders during decoding based on a confidence measure of the decoded hypothesis. This yields a 10-20% relative reduction in word error rate over a gender-independent system.
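The core tree-building idea can be illustrated with a toy sketch. This is not the paper's implementation: the question names, the 1-D Gaussian state model, and the synthetic data are all hypothetical, chosen only to show how a gender question can compete with phone-context questions on likelihood gain when splitting a decision-tree node.

```python
import math

def gaussian_loglik(samples):
    # Log-likelihood of samples under a single ML-estimated 1-D Gaussian.
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    var = max(var, 1e-6)  # variance floor to avoid log(0)
    return -0.5 * n * (math.log(2 * math.pi * var) + 1)

def split_gain(data, question):
    # data: list of (context_features, acoustic_observation) pairs.
    # Gain = loglik of the two child nodes minus loglik of the parent.
    yes = [v for f, v in data if question(f)]
    no = [v for f, v in data if not question(f)]
    if not yes or not no:
        return float("-inf")  # disallow degenerate splits
    parent = gaussian_loglik([v for _, v in data])
    return gaussian_loglik(yes) + gaussian_loglik(no) - parent

# Hypothetical question set: one phone-context question plus gender.
questions = {
    "left_is_vowel": lambda f: f["left"] in {"a", "e", "i", "o", "u"},
    "gender_female": lambda f: f["gender"] == "F",
}

# Synthetic data in which the observation tracks gender but not the
# left phone context, so the gender question should win the split.
data = []
for i in range(40):
    gender = "F" if i % 2 == 0 else "M"
    left = "a" if i % 4 < 2 else "t"
    value = (1.0 if gender == "F" else 5.0) + 0.001 * i
    data.append(({"left": left, "gender": gender}, value))

best = max(questions, key=lambda q: split_gain(data, questions[q]))
```

On this synthetic data the gender question yields the larger likelihood gain, so it would be selected at this node; at a node whose observations were better separated by phonetic context, a context question would win instead, which is how the tree realizes phone- and context-varying gender dependence.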