Faster querying with smaller memory usage.
DALM is a faster and smaller language model implementation that uses double-array structures.
Quick Start
# docker pull nowlab/dalm_translation_kit:20160330
You will find the DALM components and the Moses decoder under /opt. The Moses decoder is already linked against DALM.
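To open a shell inside the container (this docker run invocation is standard Docker usage, not taken from the DALM docs):
# docker run -it nowlab/dalm_translation_kit:20160330 /bin/bash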
Build DALM from source.
C++
# mkdir build && cd build && cmake .. -DCMAKE_INSTALL_PREFIX=../local && make install
(Experimental) Python
(coming soon...)
Use DALM with your language model.
DALM can read language models in the ARPA format, so you can convert an existing model into a DALM binary model.
Build binary model
# build_dalm -f /path/to/arpa.file -o /path/to/output
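For example, with hypothetical file names (only the -f and -o options shown above are used):
# build_dalm -f ./model.arpa -o ./dalm_model
The output path is the model directory that moses.ini (path=) and your own code reference later.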
Use DALM with Moses decoder.
DALM is integrated with the Moses decoder. We also distribute a Docker image that includes a pre-built DALM and Moses decoder. >> Our Dockerhub page
Build Moses decoder with DALM
# ./bjam --with-dalm=/path/to/DALM
Write moses.ini
[feature]
...
PhraseDictionaryMemory name=TranslationModel0 num-features=4 path=/path/to/rule-table.gz input-factor=0 output-factor=0
...
DALM name=LM0 factor=0 path=/path/to/dalm_model_directory order=5
...
[weight]
LM0= ...
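The LM0 value under [weight] is left elided above; Moses expects a numeric weight on that line, e.g. LM0= 0.5 as an arbitrary starting point (hypothetical value; normally tuned with MERT or similar).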
Use DALM with your code.
From C++
Please see an example.
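If the example is not at hand, the sketch below shows the general shape of loading a model and scoring words. It is a minimal sketch, not authoritative API documentation: the class and method names (DALM::Logger, DALM::Vocabulary, DALM::LM, DALM::State, init_state, query) follow the Moses integration of DALM, while the header name, constructor arguments, and the file names inside the model directory are assumptions; consult the bundled example for the exact interface.

#include <cstdio>
#include <string>
#include <dalm.h> // assumption: the umbrella header shipped with DALM

int main() {
    // Hypothetical placeholder paths into a directory produced by build_dalm.
    std::string model = "/path/to/dalm_model_directory/dalm.bin";
    std::string words = "/path/to/dalm_model_directory/words.bin";

    DALM::Logger logger(stderr);
    logger.setLevel(DALM::LOGGER_INFO);

    // Load the vocabulary and the model (order 5, matching moses.ini above).
    DALM::Vocabulary vocab(words, logger);
    DALM::LM lm(model, vocab, 5, logger);

    // Queries are stateful: start from the initial state and feed words
    // one at a time. query() returns the log-probability of the word
    // given the current state and advances the state.
    DALM::State state;
    lm.init_state(state);
    const char *sentence[] = {"this", "is", "a", "test"};
    float total = 0.0f;
    for (const char *w : sentence) {
        DALM::VocabId wid = vocab.lookup(w); // assumption: unknown words map to the <unk> id
        total += lm.query(wid, state);
    }
    std::printf("total logprob = %f\n", total);
    return 0;
}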
(Experimental) From Python
Please see an example.