Enter two sentences (paraphrases or translations) in up to two languages to obtain word alignments. Use whitespace to steer the desired tokenization. Language coverage is identical to that of multilingual BERT. Alignments are computed at the subword level with multilingual BERT, using one of three methods: ArgMax, IterMax, or Match. See the paper for details.
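To illustrate the simplest of the three methods, here is a minimal sketch of an ArgMax-style criterion: given a precomputed subword similarity matrix, a source/target pair is aligned only when each is the other's best match (mutual argmax). The matrix values and the function name `argmax_align` are illustrative, not part of the tool's API.

```python
import numpy as np

def argmax_align(sim):
    """Mutual-argmax alignment: (i, j) is kept iff sim[i, j] is the
    maximum of both row i and column j."""
    row_best = sim.argmax(axis=1)  # best target for each source subword
    col_best = sim.argmax(axis=0)  # best source for each target subword
    return sorted((i, int(j)) for i, j in enumerate(row_best)
                  if col_best[j] == i)

# Toy similarity matrix between 3 source and 3 target subwords.
sim = np.array([[0.9, 0.1, 0.2],
                [0.2, 0.8, 0.3],
                [0.1, 0.3, 0.7]])
print(argmax_align(sim))  # [(0, 0), (1, 1), (2, 2)]
```

In the actual tool the similarity matrix comes from multilingual BERT subword embeddings, and subword-level alignments are then mapped back to the whitespace-defined words.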