Hard-coded context length in DAM ranking model?
Created by: kauttoj
I'm training a DAM neural ranking model following the example for the Ubuntu V2 dataset. I want to repeat the same analysis with my own data, which has a different context length. However, I think there is a problem on line 99 of "deep_attention_matching_network.py". The call
super(DAMNetwork, self).__init__(*args, **kwargs)
does not forward a custom context length, so the model always falls back to the default of 10 (set on line 46 of "tf_base_matching_model.py"). The correct behavior (or at least no errors) is achieved by replacing the above call with:
super(DAMNetwork, self).__init__(num_context_turns=num_context_turns, *args, **kwargs)
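To illustrate, here is a minimal, self-contained sketch of what I believe is happening. The class and parameter names mirror the DeepPavlov code, but the bodies are simplified stand-ins, not the actual implementation:

```python
# Sketch of the suspected bug: a keyword argument that is consumed by a
# subclass __init__ but never forwarded through super().__init__ silently
# falls back to the base-class default.

class TensorflowBaseMatchingModel:
    def __init__(self, num_context_turns=10, **kwargs):
        # mirrors the default of 10 in tf_base_matching_model.py
        self.num_context_turns = num_context_turns

class DAMNetwork(TensorflowBaseMatchingModel):
    def __init__(self, num_context_turns=10, *args, **kwargs):
        # num_context_turns is accepted here but not passed on, so the
        # base class always sees its own default of 10:
        super(DAMNetwork, self).__init__(*args, **kwargs)

model = DAMNetwork(num_context_turns=5)
print(model.num_context_turns)  # prints 10, not 5 -- the value is lost
```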
Could you check whether this is a bug in the code? Or maybe something is missing from the "ranking_ubuntu_v2_mt_word2vec_dam.json" config file.
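For reference, this is roughly how I would expect to set the context length from the config, based on the Python parameter name (a hypothetical fragment: I've elided the component's other fields, since I'm not sure where, or whether, this key is currently read):

```json
{
  "chainer": {
    "pipe": [
      {
        "num_context_turns": 5
      }
    ]
  }
}
```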