chore(transformer-xl): Initial commit
This commit is contained in:
parent
ef4684ef39
commit
10512876f2
46 changed files with 10547 additions and 0 deletions
131
transformer-xl/tf/README.md
Normal file
131
transformer-xl/tf/README.md
Normal file
|
|
@ -0,0 +1,131 @@
|
|||
|
||||
## Introduction
|
||||
|
||||
This directory contains our TF implementation of Transformer-XL. Note that our state-of-the-art results reported in the paper were obtained by training the model on a large-scale TPU cluster, and our gpu codebase currently does not support distributed training. Here we provide two sets of hyperparameters and scripts:
|
||||
- `*large_tpu.sh` are for the SoTA setting on TPUs. These are exactly the commands we used to obtained our best results.
|
||||
- `*base_gpu.sh` are for the base models which can be run on a few GPUs.
|
||||
|
||||
|
||||
## Prerequisite
|
||||
|
||||
- Python 2.7
|
||||
- Tensorflow [1.12.0](https://github.com/tensorflow/tensorflow/releases/tag/v1.12.0)
|
||||
|
||||
|
||||
|
||||
## Obtain and evaluate pretrained SoTA models
|
||||
|
||||
#### 1. Download preprocessed data (vocab) & pretrained models
|
||||
|
||||
(a) Set your own `DATA_ROOT` in `sota/download.sh` (default to `./`), which will be the root diretory of downloaded model.
|
||||
|
||||
(b) Then, download the model & data by `bash sota/download.sh`. After downloading, the expected directory structure is as follows
|
||||
|
||||
```markdown
|
||||
pretrained_xl
|
||||
tf_enwik8/
|
||||
data/
|
||||
cache.pkl
|
||||
corpus-info.json
|
||||
model/
|
||||
checkpoint
|
||||
model.ckpt*
|
||||
tf_wt103/
|
||||
...
|
||||
...
|
||||
```
|
||||
|
||||
**Note**: we include preprocessed data in the download files to make sure the **same vocabulary** is used. Please see the code `tf/data_utils.py` to understand the data structure.
|
||||
|
||||
|
||||
|
||||
#### 2. Run evaluation scripts to replicate SoTA results on GPUs
|
||||
|
||||
- **enwik8**: modify the script `sota/enwik8.sh` accordingly (see below)
|
||||
- set `DATA_ROOT` to the same folder used in the download step (default to `./`)
|
||||
- set `TEST_NUM_CORE ` (number of GPUs to use): we recommend 2 GPUs => about 60 mins
|
||||
- run the script: `bash sota/enwik8.sh`
|
||||
|
||||
- **lm1b**: modify the script `sota/lm1b.sh` accordingly (see below)
|
||||
- set `DATA_ROOT` to the same folder used in the download step (default to `./`)
|
||||
- set `TEST_NUM_CORE ` (number of GPUs to use): we recommend 1 GPUs => less than 5 mins
|
||||
- run the script: `bash sota/lm1b.sh`
|
||||
|
||||
- **wt103**: modify the script `sota/wt103.sh` accordingly (see below)
|
||||
- set `DATA_ROOT` to the same folder used in the download step (default to `./`)
|
||||
- set `TEST_NUM_CORE ` (number of GPUs to use): we recommend 1 GPUs => less than 5 mins
|
||||
- run the script: `bash sota/wt103.sh`
|
||||
|
||||
- **text8**: modify the script `sota/text8.sh` accordingly (see below)
|
||||
- set `DATA_ROOT` to the same folder used in the download step (default to `./`)
|
||||
- set `TEST_NUM_CORE ` (number of GPUs to use): we recommend 2 GPUs => about 60 mins
|
||||
- run the script: `bash sota/text8.sh`
|
||||
|
||||
|
||||
#### 3. Resources Needed for SoTA Model Training
|
||||
|
||||
We used 32, 32, 64, and 512 TPU cores for training our best models on enwik8, text8, wt103, and lm1b respectively. The training time for each model ranges from 2 to 5 days.
|
||||
|
||||
|
||||
|
||||
## Train "Transformer-XL" from scratch with GPUs or TPUs
|
||||
|
||||
### 1. Download raw data
|
||||
|
||||
`bash getdata.sh`
|
||||
|
||||
|
||||
|
||||
### 2. Preprocess, training and evaluation
|
||||
|
||||
For `dataset` in `[enwik8, lm1b, wt103, text8]`:
|
||||
|
||||
- check out `scripts/dataset_base_gpu.sh` for GPU training and evaluation
|
||||
- check out `scripts/dataset_large_tpu.sh` for TPU training and evaluation
|
||||
|
||||
|
||||
|
||||
#### (1) Preprocess raw data and create tfrecords
|
||||
|
||||
**NOTE**: The preprocessing for GPU and TPU are different. So, you have to run them separately.
|
||||
|
||||
GPU:
|
||||
|
||||
- create training and validation data: `bash scripts/dataset_bas_gpu.sh train_data`
|
||||
- create test data: `bash scripts/dataset_base_gpu.sh test_data`
|
||||
|
||||
TPU:
|
||||
|
||||
- Set the Google storage URL in `scripts/dataset_large_tpu.sh`:
|
||||
- `GSDATA`: data URL
|
||||
- `GSEXP`: experiment URL
|
||||
- create training and validation data: `bash scripts/dataset_large_tpu.sh train_data`
|
||||
- create test data: `bash scripts/dataset_large_tpu.sh test_data`
|
||||
|
||||
|
||||
|
||||
#### (2) Run training
|
||||
|
||||
Base models on GPUs:
|
||||
|
||||
- Modify the configurations in `scripts/dataset_base_gpu.sh` according to your needs.
|
||||
- `bash scripts/dataset_base_gpu.sh train`
|
||||
- If enough resources are available, increasing the model sizes (e.g., `N_LAYER`, `D_MODEL`, `D_EMBED`, `D_HEAD`, `D_INNER`) so that they are closer to the values defined in `scripts/dataset_large_tpu.sh`. Likewise, when resources are limited, decrease the model sizes. It is recommended to ensure that `D_MODEL == D_EMBED` and `D_MODEL == N_HEAD x D_HEAD`. When the model sizes increase, remember to increase `warmup_steps` accordingly to alleviate optimization difficulties.
|
||||
- Adjust the `NUM_CORE` parameter to reflect the number of GPUs to use.
|
||||
|
||||
Larger models on TPUs:
|
||||
|
||||
- Modify the configurations in `scripts/dataset_large_tpu.sh` according to your needs.
|
||||
- `bash scripts/dataset_large_tpu.sh train`
|
||||
|
||||
|
||||
|
||||
#### (3) Run evaluation
|
||||
|
||||
Base models on GPUs:
|
||||
|
||||
- `bash scripts/dataset_base_gpu.sh eval --eval_ckpt_path PATH_TO_CKPT`
|
||||
|
||||
Larger models on TPUs:
|
||||
|
||||
- `bash scripts/dataset_base_tpu.sh eval --eval_ckpt_path PATH_TO_CKPT`
|
||||
118
transformer-xl/tf/avg_checkpoints.py
Normal file
118
transformer-xl/tf/avg_checkpoints.py
Normal file
|
|
@ -0,0 +1,118 @@
|
|||
# coding=utf-8
|
||||
# Copyright 2018 The Tensor2Tensor Authors.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
"""Script to average values of variables in a list of checkpoint files."""
|
||||
from __future__ import absolute_import
|
||||
from __future__ import division
|
||||
from __future__ import print_function
|
||||
|
||||
import os
|
||||
import numpy as np
|
||||
import six
|
||||
from six.moves import zip # pylint: disable=redefined-builtin
|
||||
import tensorflow as tf
|
||||
|
||||
flags = tf.flags
|
||||
FLAGS = flags.FLAGS
|
||||
|
||||
flags.DEFINE_string("checkpoints", "",
|
||||
"Comma-separated list of checkpoints to average.")
|
||||
flags.DEFINE_integer("num_last_checkpoints", 0,
|
||||
"Averages the last N saved checkpoints."
|
||||
" If the checkpoints flag is set, this is ignored.")
|
||||
flags.DEFINE_string("prefix", "",
|
||||
"Prefix (e.g., directory) to append to each checkpoint.")
|
||||
flags.DEFINE_string("output_path", "/tmp/averaged.ckpt",
|
||||
"Path to output the averaged checkpoint to.")
|
||||
|
||||
|
||||
def checkpoint_exists(path):
|
||||
return (tf.gfile.Exists(path) or tf.gfile.Exists(path + ".meta") or
|
||||
tf.gfile.Exists(path + ".index"))
|
||||
|
||||
|
||||
def main(_):
|
||||
tf.logging.set_verbosity(tf.logging.INFO)
|
||||
if FLAGS.checkpoints:
|
||||
# Get the checkpoints list from flags and run some basic checks.
|
||||
checkpoints = [c.strip() for c in FLAGS.checkpoints.split(",")]
|
||||
checkpoints = [c for c in checkpoints if c]
|
||||
if not checkpoints:
|
||||
raise ValueError("No checkpoints provided for averaging.")
|
||||
if FLAGS.prefix:
|
||||
checkpoints = [FLAGS.prefix + c for c in checkpoints]
|
||||
else:
|
||||
assert FLAGS.num_last_checkpoints >= 1, "Must average at least one model"
|
||||
assert FLAGS.prefix, ("Prefix must be provided when averaging last"
|
||||
" N checkpoints")
|
||||
checkpoint_state = tf.train.get_checkpoint_state(
|
||||
os.path.dirname(FLAGS.prefix))
|
||||
# Checkpoints are ordered from oldest to newest.
|
||||
checkpoints = checkpoint_state.all_model_checkpoint_paths[
|
||||
-FLAGS.num_last_checkpoints:]
|
||||
|
||||
checkpoints = [c for c in checkpoints if checkpoint_exists(c)]
|
||||
if not checkpoints:
|
||||
if FLAGS.checkpoints:
|
||||
raise ValueError(
|
||||
"None of the provided checkpoints exist. %s" % FLAGS.checkpoints)
|
||||
else:
|
||||
raise ValueError("Could not find checkpoints at %s" %
|
||||
os.path.dirname(FLAGS.prefix))
|
||||
|
||||
# Read variables from all checkpoints and average them.
|
||||
tf.logging.info("Reading variables and averaging checkpoints:")
|
||||
for c in checkpoints:
|
||||
tf.logging.info("%s ", c)
|
||||
var_list = tf.contrib.framework.list_variables(checkpoints[0])
|
||||
var_values, var_dtypes = {}, {}
|
||||
for (name, shape) in var_list:
|
||||
if not name.startswith("global_step"):
|
||||
var_values[name] = np.zeros(shape)
|
||||
for checkpoint in checkpoints:
|
||||
reader = tf.contrib.framework.load_checkpoint(checkpoint)
|
||||
for name in var_values:
|
||||
tensor = reader.get_tensor(name)
|
||||
var_dtypes[name] = tensor.dtype
|
||||
var_values[name] += tensor
|
||||
tf.logging.info("Read from checkpoint %s", checkpoint)
|
||||
for name in var_values: # Average.
|
||||
var_values[name] /= len(checkpoints)
|
||||
|
||||
with tf.variable_scope(tf.get_variable_scope(), reuse=tf.AUTO_REUSE):
|
||||
tf_vars = [
|
||||
tf.get_variable(v, shape=var_values[v].shape, dtype=var_dtypes[v])
|
||||
for v in var_values
|
||||
]
|
||||
placeholders = [tf.placeholder(v.dtype, shape=v.shape) for v in tf_vars]
|
||||
assign_ops = [tf.assign(v, p) for (v, p) in zip(tf_vars, placeholders)]
|
||||
global_step = tf.Variable(
|
||||
0, name="global_step", trainable=False, dtype=tf.int64)
|
||||
saver = tf.train.Saver(tf.all_variables())
|
||||
|
||||
# Build a model consisting only of variables, set them to the average values.
|
||||
with tf.Session() as sess:
|
||||
sess.run(tf.initialize_all_variables())
|
||||
for p, assign_op, (name, value) in zip(placeholders, assign_ops,
|
||||
six.iteritems(var_values)):
|
||||
sess.run(assign_op, {p: value})
|
||||
# Use the built saver to save the averaged checkpoint.
|
||||
saver.save(sess, FLAGS.output_path, global_step=global_step)
|
||||
|
||||
tf.logging.info("Averaged checkpoints saved in %s", FLAGS.output_path)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
tf.app.run()
|
||||
586
transformer-xl/tf/data_utils.py
Normal file
586
transformer-xl/tf/data_utils.py
Normal file
|
|
@ -0,0 +1,586 @@
|
|||
from __future__ import absolute_import
|
||||
from __future__ import division
|
||||
from __future__ import print_function
|
||||
|
||||
import math
|
||||
import os
|
||||
from functools import partial
|
||||
|
||||
from collections import Counter, OrderedDict
|
||||
import pickle
|
||||
import json
|
||||
import multiprocessing as mp
|
||||
|
||||
import numpy as np
|
||||
|
||||
from absl import flags
|
||||
import tensorflow as tf
|
||||
from vocabulary import Vocab
|
||||
|
||||
from tensorflow.gfile import Exists as exists
|
||||
from tensorflow.gfile import MakeDirs as makedirs
|
||||
from tensorflow.gfile import Glob as glob
|
||||
|
||||
|
||||
def _preprocess(shard, train, vocab, save_dir, cutoffs, bin_sizes, bsz, tgt_len,
|
||||
num_core_per_host, use_tpu, num_shuffle):
|
||||
file_names = []
|
||||
num_batch = 0
|
||||
|
||||
path = train[shard]
|
||||
data_shard = vocab.encode_file(path, ordered=False, add_double_eos=True)
|
||||
|
||||
for shuffle in range(num_shuffle):
|
||||
basename = "train-{:03d}-{:02d}".format(shard, shuffle)
|
||||
print("Processing shard {} shuffle {}".format(shard, shuffle))
|
||||
|
||||
np.random.shuffle(data_shard)
|
||||
file_name, num_batch_shuffle = create_ordered_tfrecords(
|
||||
save_dir, basename, np.concatenate(data_shard), bsz, tgt_len,
|
||||
num_core_per_host, cutoffs, bin_sizes, use_tpu=use_tpu)
|
||||
file_names.append(file_name)
|
||||
num_batch += num_batch_shuffle
|
||||
|
||||
return file_names, num_batch
|
||||
|
||||
|
||||
class Corpus(object):
|
||||
def __init__(self, path, dataset, *args, **kwargs):
|
||||
self.dataset = dataset
|
||||
self.vocab = Vocab(*args, **kwargs)
|
||||
|
||||
if self.dataset in ["ptb", "wt2", "enwik8", "text8"]:
|
||||
self.vocab.count_file(os.path.join(path, "train.txt"))
|
||||
self.vocab.count_file(os.path.join(path, "valid.txt"))
|
||||
self.vocab.count_file(os.path.join(path, "test.txt"))
|
||||
elif self.dataset == "wt103":
|
||||
self.vocab.count_file(os.path.join(path, "train.txt"))
|
||||
elif self.dataset == "lm1b":
|
||||
train_path_pattern = os.path.join(
|
||||
path, "1-billion-word-language-modeling-benchmark-r13output",
|
||||
"training-monolingual.tokenized.shuffled", "news.en-*")
|
||||
train_paths = glob(train_path_pattern)
|
||||
|
||||
# the vocab will load from file when build_vocab() is called
|
||||
# for train_path in sorted(train_paths):
|
||||
# self.vocab.count_file(train_path, verbose=True)
|
||||
|
||||
self.vocab.build_vocab()
|
||||
|
||||
if self.dataset in ["ptb", "wt2", "wt103"]:
|
||||
self.train = self.vocab.encode_file(
|
||||
os.path.join(path, "train.txt"), ordered=True)
|
||||
self.valid = self.vocab.encode_file(
|
||||
os.path.join(path, "valid.txt"), ordered=True)
|
||||
self.test = self.vocab.encode_file(
|
||||
os.path.join(path, "test.txt"), ordered=True)
|
||||
elif self.dataset in ["enwik8", "text8"]:
|
||||
self.train = self.vocab.encode_file(
|
||||
os.path.join(path, "train.txt"), ordered=True, add_eos=False)
|
||||
self.valid = self.vocab.encode_file(
|
||||
os.path.join(path, "valid.txt"), ordered=True, add_eos=False)
|
||||
self.test = self.vocab.encode_file(
|
||||
os.path.join(path, "test.txt"), ordered=True, add_eos=False)
|
||||
elif self.dataset == "lm1b":
|
||||
self.train = train_paths
|
||||
valid_path = os.path.join(path, "valid.txt")
|
||||
test_path = valid_path
|
||||
self.valid = self.vocab.encode_file(
|
||||
valid_path, ordered=True, add_double_eos=True)
|
||||
self.test = self.vocab.encode_file(
|
||||
test_path, ordered=True, add_double_eos=True)
|
||||
|
||||
if self.dataset == "wt103":
|
||||
self.cutoffs = [0, 20000, 40000, 200000] + [len(self.vocab)]
|
||||
elif self.dataset == "lm1b":
|
||||
self.cutoffs = [0, 60000, 100000, 640000] + [len(self.vocab)]
|
||||
else:
|
||||
self.cutoffs = []
|
||||
|
||||
|
||||
def convert_to_tfrecords(self, split, save_dir, bsz, tgt_len,
|
||||
num_core_per_host, **kwargs):
|
||||
FLAGS = kwargs.get('FLAGS')
|
||||
|
||||
file_names = []
|
||||
use_tpu = FLAGS.use_tpu and not (split == "test" and num_core_per_host == 1)
|
||||
|
||||
if use_tpu:
|
||||
record_name = "record_info-{}.bsz-{}.tlen-{}.core-{}.json".format(
|
||||
split, bsz, tgt_len, num_core_per_host)
|
||||
else:
|
||||
record_name = "record_info-{}.bsz-{}.tlen-{}.json".format(
|
||||
split, bsz, tgt_len)
|
||||
|
||||
record_info_path = os.path.join(save_dir, record_name)
|
||||
|
||||
if self.dataset in ["ptb", "wt2", "wt103", "enwik8", "text8"]:
|
||||
data = getattr(self, split)
|
||||
bin_sizes = get_bin_sizes(
|
||||
data, bsz // num_core_per_host, tgt_len, self.cutoffs)
|
||||
file_name, num_batch = create_ordered_tfrecords(
|
||||
save_dir, split, data, bsz, tgt_len, num_core_per_host,
|
||||
self.cutoffs, bin_sizes,
|
||||
num_passes=FLAGS.num_passes if split == 'train' and use_tpu else 1,
|
||||
use_tpu=use_tpu)
|
||||
file_names.append(file_name)
|
||||
elif self.dataset == "lm1b":
|
||||
bin_sizes = get_bin_sizes(
|
||||
self.valid, bsz // num_core_per_host, tgt_len, self.cutoffs)
|
||||
if split == "train":
|
||||
np.random.seed(123456)
|
||||
num_batch = 0
|
||||
|
||||
if FLAGS.num_procs > 1:
|
||||
_preprocess_wrapper = partial(_preprocess,
|
||||
train=self.train, vocab=self.vocab, save_dir=save_dir,
|
||||
cutoffs=self.cutoffs, bin_sizes=bin_sizes, bsz=bsz,
|
||||
tgt_len=tgt_len, num_core_per_host=num_core_per_host,
|
||||
use_tpu=use_tpu, num_shuffle=FLAGS.num_shuffle)
|
||||
|
||||
pool = mp.Pool(processes=FLAGS.num_procs)
|
||||
results = pool.map(_preprocess_wrapper, range(len(self.train)))
|
||||
for res in results:
|
||||
file_names.extend(res[0])
|
||||
num_batch += res[1]
|
||||
else:
|
||||
for shard, path in enumerate(self.train):
|
||||
data_shard = self.vocab.encode_file(path, ordered=False,
|
||||
add_double_eos=True)
|
||||
|
||||
num_shuffle = FLAGS.num_shuffle
|
||||
|
||||
for shuffle in range(num_shuffle):
|
||||
print("Processing shard {} shuffle {}".format(shard, shuffle))
|
||||
basename = "train-{:03d}-{:02d}".format(shard, shuffle)
|
||||
np.random.shuffle(data_shard)
|
||||
file_name, num_batch_ = create_ordered_tfrecords(
|
||||
save_dir, basename, np.concatenate(data_shard), bsz, tgt_len,
|
||||
num_core_per_host,
|
||||
self.cutoffs, bin_sizes, use_tpu=use_tpu)
|
||||
file_names.append(file_name)
|
||||
num_batch += num_batch_
|
||||
|
||||
else:
|
||||
file_name, num_batch = create_ordered_tfrecords(
|
||||
save_dir, split, getattr(self, split), bsz, tgt_len,
|
||||
num_core_per_host,
|
||||
self.cutoffs, bin_sizes, use_tpu=use_tpu)
|
||||
file_names.append(file_name)
|
||||
|
||||
with open(record_info_path, "w") as fp:
|
||||
record_info = {
|
||||
"filenames": file_names,
|
||||
"bin_sizes": bin_sizes,
|
||||
"num_batch": num_batch
|
||||
}
|
||||
json.dump(record_info, fp)
|
||||
|
||||
|
||||
def get_bin_sizes(data, batch_size, tgt_len, cutoffs, std_mult=[2.5, 2.5, 2.5]):
|
||||
"""
|
||||
Note: the `batch_size` here should be per-core batch size
|
||||
"""
|
||||
bin_sizes = []
|
||||
|
||||
def _nearest_to_eight(x): # so that it's faster on TPUs
|
||||
y = x - x % 8
|
||||
return y + 8 if x % 8 >= 4 else max(8, y)
|
||||
|
||||
if cutoffs:
|
||||
num_batch = len(data) // batch_size // tgt_len
|
||||
|
||||
data = data[:batch_size * num_batch * tgt_len]
|
||||
data = data.reshape(batch_size, num_batch, tgt_len)
|
||||
|
||||
tot = batch_size * tgt_len
|
||||
for b, (left, right) in enumerate(zip(cutoffs[1:-1], cutoffs[2:])):
|
||||
mask = (data >= left) * (data < right)
|
||||
percents = mask.astype(np.float64).sum(2).sum(0) / tot
|
||||
mean = np.mean(percents)
|
||||
std = np.std(percents)
|
||||
|
||||
bin_size = int(math.ceil(tgt_len * batch_size * (mean + std_mult[b] * std)))
|
||||
bin_size = _nearest_to_eight(bin_size)
|
||||
bin_sizes.append(bin_size)
|
||||
|
||||
return bin_sizes
|
||||
|
||||
|
||||
def _int64_feature(values):
|
||||
return tf.train.Feature(int64_list=tf.train.Int64List(value=values))
|
||||
|
||||
def _float_feature(values):
|
||||
return tf.train.Feature(float_list=tf.train.FloatList(value=values))
|
||||
|
||||
def batchify(data, batch_size, num_passes):
|
||||
"""
|
||||
if use_tpu = True: num_passes > 1
|
||||
|
||||
Since TPU training requires entire [bsz x tgt_len] chunks, it can discard
|
||||
as many as `bsz * tgt_len` tokens in training. When `bsz` and `tgt_len` are
|
||||
both large, as in the case of TPU training for Transformer-XL, the problem
|
||||
may lead to detectable performance drop.
|
||||
|
||||
Here, we use multiple randomly shifted copies to deal with this problem.
|
||||
"""
|
||||
if num_passes > 1:
|
||||
data_len = len(data)
|
||||
double_data = np.concatenate([data, data])
|
||||
data_list = []
|
||||
for i in range(num_passes):
|
||||
start = np.random.randint(0, data_len)
|
||||
data_list.append(double_data[start:start+data_len])
|
||||
data = np.concatenate(data_list)
|
||||
|
||||
num_step = len(data) // batch_size
|
||||
data = data[:batch_size * num_step]
|
||||
data = data.reshape(batch_size, num_step)
|
||||
|
||||
return data
|
||||
|
||||
|
||||
def create_ordered_tfrecords(save_dir, basename, data, batch_size, tgt_len,
|
||||
num_core_per_host, cutoffs=[], bin_sizes=[],
|
||||
num_passes=1, use_tpu=False):
|
||||
|
||||
if use_tpu:
|
||||
file_name = "{}.bsz-{}.tlen-{}.core-{}.tfrecords".format(
|
||||
basename, batch_size, tgt_len, num_core_per_host)
|
||||
else:
|
||||
file_name = "{}.bsz-{}.tlen-{}.tfrecords".format(
|
||||
basename, batch_size, tgt_len)
|
||||
|
||||
save_path = os.path.join(save_dir, file_name)
|
||||
record_writer = tf.python_io.TFRecordWriter(save_path)
|
||||
|
||||
batched_data = batchify(data, batch_size, num_passes)
|
||||
|
||||
num_batch = 0
|
||||
# for t in range(0, batched_data.shape[1] - tgt_len - 1, tgt_len):
|
||||
for t in range(0, batched_data.shape[1] - 1, tgt_len):
|
||||
cur_tgt_len = min(batched_data.shape[1] - 1 - t, tgt_len)
|
||||
# drop the remainder if use tpu
|
||||
if use_tpu and cur_tgt_len < tgt_len:
|
||||
break
|
||||
if num_batch % 500 == 0:
|
||||
print(" processing batch {}".format(num_batch))
|
||||
for idx in range(batch_size):
|
||||
inputs = batched_data[idx, t:t + cur_tgt_len]
|
||||
labels = batched_data[idx, t + 1:t + cur_tgt_len + 1]
|
||||
|
||||
# features dict
|
||||
feature = {
|
||||
"inputs": _int64_feature(inputs),
|
||||
"labels": _int64_feature(labels),
|
||||
}
|
||||
|
||||
if len(cutoffs) > 0 and use_tpu:
|
||||
# validate `bin_sizes` and `cutoffs`
|
||||
assert len(cutoffs) - len(bin_sizes) == 2, \
|
||||
"len(cutoffs) - len(bin_sizes) != 2"
|
||||
|
||||
# mask for bin 0
|
||||
left, right = cutoffs[:2]
|
||||
inp_mask = ((inputs >= left) * (inputs < right)).astype(np.float32)
|
||||
tgt_mask = ((labels >= left) * (labels < right)).astype(np.float32)
|
||||
|
||||
feature["inp_mask"] = _float_feature(inp_mask)
|
||||
feature["tgt_mask"] = _float_feature(tgt_mask)
|
||||
|
||||
# refresh `inp_cnts` and `tgt_cnts` for each TPU core
|
||||
if idx % (batch_size // num_core_per_host) == 0:
|
||||
inp_cnts = [0] * len(bin_sizes)
|
||||
tgt_cnts = [0] * len(bin_sizes)
|
||||
|
||||
head_labels = np.copy(labels)
|
||||
inp_pos_per_bin, tgt_pos_per_bin = [], []
|
||||
for b, (left, right) in enumerate(zip(cutoffs[1:-1], cutoffs[2:])):
|
||||
inp_pos = np.where((inputs >= left) * (inputs < right))[0]
|
||||
tgt_pos = np.where((labels >= left) * (labels < right))[0]
|
||||
inp_pos_per_bin.append(inp_pos)
|
||||
tgt_pos_per_bin.append(tgt_pos)
|
||||
|
||||
head_labels[tgt_pos] = cutoffs[1] + b
|
||||
|
||||
feature["head_labels"] = _int64_feature(head_labels)
|
||||
|
||||
# permutation feature
|
||||
def _add_perm_feature(feature, pos_per_bin, cnts, prefix):
|
||||
for b, pos in enumerate(pos_per_bin):
|
||||
idx_tuple = []
|
||||
for p in pos:
|
||||
if cnts[b] < bin_sizes[b]:
|
||||
idx_tuple.append([p, cnts[b]])
|
||||
cnts[b] += 1
|
||||
else:
|
||||
break
|
||||
|
||||
n_tup = len(idx_tuple)
|
||||
tup = np.array(idx_tuple).reshape(n_tup * 2)
|
||||
|
||||
feature["{}_cnt_{}".format(prefix, b)] = _int64_feature([n_tup])
|
||||
feature["{}_tup_{}".format(prefix, b)] = _int64_feature(tup)
|
||||
|
||||
_add_perm_feature(feature, inp_pos_per_bin, inp_cnts, "inp")
|
||||
_add_perm_feature(feature, tgt_pos_per_bin, tgt_cnts, "tgt")
|
||||
|
||||
example = tf.train.Example(features=tf.train.Features(feature=feature))
|
||||
record_writer.write(example.SerializeToString())
|
||||
|
||||
num_batch += 1
|
||||
|
||||
record_writer.close()
|
||||
print("Done writing {}. batches: {}".format(file_name, num_batch))
|
||||
|
||||
return file_name, num_batch
|
||||
|
||||
|
||||
def get_lm_corpus(data_dir, dataset):
|
||||
fn = os.path.join(data_dir, "cache.pkl")
|
||||
|
||||
if exists(fn):
|
||||
print("Loading cached dataset...")
|
||||
with open(fn, "rb") as fp:
|
||||
corpus = pickle.load(fp)
|
||||
else:
|
||||
print("Producing dataset...")
|
||||
kwargs = {}
|
||||
if dataset in ["wt103", "wt2"]:
|
||||
kwargs["special"] = ["<eos>"]
|
||||
kwargs["lower_case"] = False
|
||||
elif dataset == "ptb":
|
||||
kwargs["special"] = ["<eos>"]
|
||||
kwargs["lower_case"] = True
|
||||
elif dataset == "lm1b":
|
||||
kwargs["special"] = []
|
||||
kwargs["lower_case"] = False
|
||||
kwargs["vocab_file"] = os.path.join(data_dir, "1b_word_vocab.txt")
|
||||
elif dataset in ["enwik8", "text8"]:
|
||||
pass
|
||||
|
||||
corpus = Corpus(data_dir, dataset, **kwargs)
|
||||
|
||||
print("Saving dataset...")
|
||||
with open(fn, "wb") as fp:
|
||||
pickle.dump(corpus, fp, protocol=2)
|
||||
|
||||
corpus_info = {
|
||||
"vocab_size" : len(corpus.vocab),
|
||||
"cutoffs" : corpus.cutoffs,
|
||||
"dataset" : corpus.dataset
|
||||
}
|
||||
with open(os.path.join(data_dir, "corpus-info.json"), "w") as fp:
|
||||
json.dump(corpus_info, fp)
|
||||
|
||||
return corpus
|
||||
|
||||
|
||||
def main(unused_argv):
|
||||
del unused_argv # Unused
|
||||
|
||||
corpus = get_lm_corpus(FLAGS.data_dir, FLAGS.dataset)
|
||||
|
||||
save_dir = os.path.join(FLAGS.data_dir, "tfrecords")
|
||||
if not exists(save_dir):
|
||||
makedirs(save_dir)
|
||||
|
||||
# test mode
|
||||
if FLAGS.per_host_test_bsz > 0:
|
||||
corpus.convert_to_tfrecords("test", save_dir, FLAGS.per_host_test_bsz,
|
||||
FLAGS.tgt_len, FLAGS.num_core_per_host,
|
||||
FLAGS=FLAGS)
|
||||
return
|
||||
|
||||
for split, batch_size in zip(
|
||||
["train", "valid"],
|
||||
[FLAGS.per_host_train_bsz, FLAGS.per_host_valid_bsz]):
|
||||
|
||||
if batch_size <= 0: continue
|
||||
print("Converting {} set...".format(split))
|
||||
corpus.convert_to_tfrecords(split, save_dir, batch_size, FLAGS.tgt_len,
|
||||
FLAGS.num_core_per_host, FLAGS=FLAGS)
|
||||
|
||||
|
||||
def load_record_info(record_info_dir, split, per_host_bsz, tgt_len,
|
||||
num_core_per_host, use_tpu):
|
||||
if use_tpu:
|
||||
record_name = "record_info-{}.bsz-{}.tlen-{}.core-{}.json".format(
|
||||
split, per_host_bsz, tgt_len, num_core_per_host)
|
||||
else:
|
||||
record_name = "record_info-{}.bsz-{}.tlen-{}.json".format(
|
||||
split, per_host_bsz, tgt_len)
|
||||
|
||||
record_info_path = os.path.join(record_info_dir, record_name)
|
||||
with open(record_info_path, "r") as fp:
|
||||
record_info = json.load(fp)
|
||||
|
||||
return record_info
|
||||
|
||||
def get_input_fn(record_info_dir, split, per_host_bsz, tgt_len,
|
||||
num_core_per_host, num_hosts=1, use_tpu=False):
|
||||
"""Creates input function."""
|
||||
record_info = load_record_info(record_info_dir, split, per_host_bsz, tgt_len,
|
||||
num_core_per_host, use_tpu=use_tpu)
|
||||
|
||||
file_names = record_info["filenames"]
|
||||
bin_sizes = record_info["bin_sizes"]
|
||||
num_batch = record_info["num_batch"]
|
||||
|
||||
tf.logging.info("[{}] File names {}".format(split, file_names))
|
||||
|
||||
def input_fn(params):
|
||||
# per-core batch size
|
||||
per_core_bsz = params["batch_size"]
|
||||
|
||||
# data_dir could be a remote path, e.g., a google storage url
|
||||
data_dir = params["data_dir"]
|
||||
|
||||
def parser(record):
|
||||
# preprocess "inp_perm" and "tgt_perm"
|
||||
def _process_perm_feature(example, prefix):
|
||||
for b in range(len(bin_sizes)):
|
||||
cnt = example.pop("{}_cnt_{}".format(prefix, b))[0]
|
||||
tup = example.pop("{}_tup_{}".format(prefix, b))
|
||||
|
||||
tup = tf.reshape(
|
||||
tf.sparse_tensor_to_dense(tup),
|
||||
shape=[cnt, 2])
|
||||
|
||||
# tf.float32
|
||||
perm = tf.sparse_to_dense(
|
||||
sparse_indices=tup,
|
||||
output_shape=[tgt_len, bin_sizes[b]],
|
||||
sparse_values=1.0,
|
||||
default_value=0.0)
|
||||
|
||||
example["{}_perm_{}".format(prefix, b)] = perm
|
||||
|
||||
# whether allow the last batch with a potentially shorter length
|
||||
if use_tpu:
|
||||
record_spec = {
|
||||
"inputs": tf.FixedLenFeature([tgt_len], tf.int64),
|
||||
"labels": tf.FixedLenFeature([tgt_len], tf.int64),
|
||||
}
|
||||
else:
|
||||
record_spec = {
|
||||
"inputs": tf.VarLenFeature(tf.int64),
|
||||
"labels": tf.VarLenFeature(tf.int64),
|
||||
}
|
||||
|
||||
# permutation related features
|
||||
if bin_sizes and use_tpu:
|
||||
# tf.float32
|
||||
record_spec["inp_mask"] = tf.FixedLenFeature([tgt_len], tf.float32)
|
||||
record_spec["tgt_mask"] = tf.FixedLenFeature([tgt_len], tf.float32)
|
||||
|
||||
record_spec["head_labels"] = tf.FixedLenFeature([tgt_len], tf.int64)
|
||||
|
||||
for b in range(len(bin_sizes)):
|
||||
record_spec["inp_cnt_{}".format(b)] = tf.FixedLenFeature([1], tf.int64)
|
||||
record_spec["inp_tup_{}".format(b)] = tf.VarLenFeature(tf.int64)
|
||||
record_spec["tgt_cnt_{}".format(b)] = tf.FixedLenFeature([1], tf.int64)
|
||||
record_spec["tgt_tup_{}".format(b)] = tf.VarLenFeature(tf.int64)
|
||||
|
||||
# retrieve serialized example
|
||||
example = tf.parse_single_example(
|
||||
serialized=record,
|
||||
features=record_spec)
|
||||
|
||||
# transform permutation tuples to permutation matrices
|
||||
if bin_sizes and use_tpu:
|
||||
_process_perm_feature(example, "inp")
|
||||
_process_perm_feature(example, "tgt")
|
||||
|
||||
# cast int64 into int32
|
||||
# cast sparse to dense
|
||||
for key in list(example.keys()):
|
||||
val = example[key]
|
||||
if tf.keras.backend.is_sparse(val):
|
||||
val = tf.sparse.to_dense(val)
|
||||
if val.dtype == tf.int64:
|
||||
val = tf.to_int32(val)
|
||||
example[key] = val
|
||||
|
||||
if use_tpu:
|
||||
return example
|
||||
else:
|
||||
return example["inputs"], example["labels"]
|
||||
|
||||
file_paths = []
|
||||
for file_name in file_names:
|
||||
file_path = os.path.join(data_dir, file_name)
|
||||
file_paths.append(file_path)
|
||||
|
||||
if split == "train":
|
||||
dataset = tf.data.Dataset.from_tensor_slices(file_paths)
|
||||
if len(file_paths) > 1:
|
||||
dataset = dataset.shuffle(len(file_paths)).repeat()
|
||||
dataset = tf.data.TFRecordDataset(dataset)
|
||||
elif num_hosts > 1:
|
||||
host_id = params["context"].current_host
|
||||
# drop the remaining batches
|
||||
num_batch_per_host = num_batch // num_hosts
|
||||
|
||||
my_start_sample_id = (host_id * num_batch_per_host * num_core_per_host *
|
||||
per_core_bsz)
|
||||
my_sample_num = num_batch_per_host * num_core_per_host * per_core_bsz
|
||||
dataset = tf.data.TFRecordDataset(dataset).skip(
|
||||
my_start_sample_id).take(my_sample_num)
|
||||
else:
|
||||
dataset = tf.data.TFRecordDataset(dataset)
|
||||
|
||||
dataset = dataset.map(parser).cache().repeat()
|
||||
dataset = dataset.batch(per_core_bsz, drop_remainder=True)
|
||||
dataset = dataset.prefetch(num_core_per_host * per_core_bsz)
|
||||
else:
|
||||
# do not shuffle, repeat or cache in evaluation
|
||||
dataset = tf.data.Dataset.from_tensor_slices(file_paths)
|
||||
dataset = tf.data.TFRecordDataset(dataset)
|
||||
dataset = dataset.map(parser)
|
||||
dataset = dataset.batch(per_core_bsz, drop_remainder=True)
|
||||
|
||||
return dataset
|
||||
|
||||
if split == "train" and num_hosts > 1:
|
||||
record_info["num_batch"] = num_batch // num_hosts
|
||||
|
||||
return input_fn, record_info
|
||||
|
||||
def get_corpus_info(corpus_info_path):
|
||||
with open(corpus_info_path, "r") as fp:
|
||||
corpus_info = json.load(fp)
|
||||
return corpus_info
|
||||
|
||||
if __name__ == "__main__":
|
||||
FLAGS = flags.FLAGS
|
||||
flags.DEFINE_string("data_dir", None,
|
||||
help="Location of the data corpus")
|
||||
flags.DEFINE_enum("dataset", "wt103",
|
||||
["ptb", "wt2", "wt103", "lm1b", "enwik8", "text8"],
|
||||
help="Dataset name.")
|
||||
flags.DEFINE_integer("per_host_train_bsz", 60,
|
||||
help="train batch size each host")
|
||||
flags.DEFINE_integer("per_host_valid_bsz", 60,
|
||||
help="valid batch size each host")
|
||||
flags.DEFINE_integer("per_host_test_bsz", 0,
|
||||
help="If > 0, enter test mode and process test set only."
|
||||
"Otherwise, process train and dev sets only.")
|
||||
flags.DEFINE_integer("tgt_len", 70,
|
||||
help="number of tokens to predict")
|
||||
flags.DEFINE_integer("max_batch", -1,
|
||||
help="run in debug mode")
|
||||
flags.DEFINE_integer("num_core_per_host", 8,
|
||||
help="8 for TPU v2.")
|
||||
flags.DEFINE_bool("debug", default=False,
|
||||
help="Process only the first batch without shuffle for lm1b.")
|
||||
flags.DEFINE_integer("num_procs", 1,
|
||||
help="number of processes")
|
||||
flags.DEFINE_integer("num_passes", 10,
|
||||
help="number of passes when use_tpu=True")
|
||||
flags.DEFINE_integer("num_shuffle", 4,
|
||||
help="number of shuffles for lm1b")
|
||||
flags.DEFINE_bool("use_tpu", True,
|
||||
help="use tpu")
|
||||
|
||||
tf.app.run(main)
|
||||
65
transformer-xl/tf/gpu_utils.py
Normal file
65
transformer-xl/tf/gpu_utils.py
Normal file
|
|
@ -0,0 +1,65 @@
|
|||
import os
|
||||
import tensorflow as tf
|
||||
|
||||
def assign_to_gpu(gpu=0, ps_dev="/device:CPU:0"):
|
||||
def _assign(op):
|
||||
node_def = op if isinstance(op, tf.NodeDef) else op.node_def
|
||||
if node_def.op == "Variable":
|
||||
return ps_dev
|
||||
else:
|
||||
return "/gpu:%d" % gpu
|
||||
return _assign
|
||||
|
||||
|
||||
def average_grads_and_vars(tower_grads_and_vars):
|
||||
def average_dense(grad_and_vars):
|
||||
if len(grad_and_vars) == 1:
|
||||
return grad_and_vars[0][0]
|
||||
|
||||
grad = grad_and_vars[0][0]
|
||||
for g, _ in grad_and_vars[1:]:
|
||||
grad += g
|
||||
return grad / len(grad_and_vars)
|
||||
|
||||
def average_sparse(grad_and_vars):
|
||||
if len(grad_and_vars) == 1:
|
||||
return grad_and_vars[0][0]
|
||||
|
||||
indices = []
|
||||
values = []
|
||||
for g, _ in grad_and_vars:
|
||||
indices += [g.indices]
|
||||
values += [g.values]
|
||||
indices = tf.concat(indices, 0)
|
||||
values = tf.concat(values, 0) / len(grad_and_vars)
|
||||
return tf.IndexedSlices(values, indices, grad_and_vars[0][0].dense_shape)
|
||||
|
||||
average_grads_and_vars = []
|
||||
for grad_and_vars in zip(*tower_grads_and_vars):
|
||||
if grad_and_vars[0][0] is None:
|
||||
grad = None
|
||||
elif isinstance(grad_and_vars[0][0], tf.IndexedSlices):
|
||||
grad = average_sparse(grad_and_vars)
|
||||
else:
|
||||
grad = average_dense(grad_and_vars)
|
||||
# Keep in mind that the Variables are redundant because they are shared
|
||||
# across towers. So .. we will just return the first tower's pointer to
|
||||
# the Variable.
|
||||
v = grad_and_vars[0][1]
|
||||
grad_and_var = (grad, v)
|
||||
average_grads_and_vars.append(grad_and_var)
|
||||
return average_grads_and_vars
|
||||
|
||||
|
||||
def load_from_checkpoint(saver, logdir):
|
||||
sess = tf.get_default_session()
|
||||
ckpt = tf.train.get_checkpoint_state(logdir)
|
||||
if ckpt and ckpt.model_checkpoint_path:
|
||||
if os.path.isabs(ckpt.model_checkpoint_path):
|
||||
# Restores from checkpoint with absolute path.
|
||||
saver.restore(sess, ckpt.model_checkpoint_path)
|
||||
else:
|
||||
# Restores from checkpoint with relative path.
|
||||
saver.restore(sess, os.path.join(logdir, ckpt.model_checkpoint_path))
|
||||
return True
|
||||
return False
|
||||
546
transformer-xl/tf/model.py
Normal file
546
transformer-xl/tf/model.py
Normal file
|
|
@ -0,0 +1,546 @@
|
|||
import tensorflow as tf
|
||||
|
||||
|
||||
def positional_embedding(pos_seq, inv_freq, bsz=None):
|
||||
sinusoid_inp = tf.einsum('i,j->ij', pos_seq, inv_freq)
|
||||
pos_emb = tf.concat([tf.sin(sinusoid_inp), tf.cos(sinusoid_inp)], -1)
|
||||
if bsz is not None:
|
||||
return tf.tile(pos_emb[:, None, :], [1, bsz, 1])
|
||||
else:
|
||||
return pos_emb[:, None, :]
|
||||
|
||||
|
||||
def positionwise_FF(inp, d_model, d_inner, dropout, kernel_initializer,
|
||||
scope='ff', is_training=True):
|
||||
output = inp
|
||||
with tf.variable_scope(scope):
|
||||
output = tf.layers.dense(inp, d_inner, activation=tf.nn.relu,
|
||||
kernel_initializer=kernel_initializer,
|
||||
name='layer_1')
|
||||
output = tf.layers.dropout(output, dropout, training=is_training,
|
||||
name='drop_1')
|
||||
output = tf.layers.dense(output, d_model,
|
||||
kernel_initializer=kernel_initializer,
|
||||
name='layer_2')
|
||||
output = tf.layers.dropout(output, dropout, training=is_training,
|
||||
name='drop_2')
|
||||
output = tf.contrib.layers.layer_norm(output + inp, begin_norm_axis=-1)
|
||||
return output
|
||||
|
||||
|
||||
def rel_shift(x):
|
||||
x_size = tf.shape(x)
|
||||
|
||||
x = tf.pad(x, [[0, 0], [1, 0], [0, 0], [0, 0]])
|
||||
x = tf.reshape(x, [x_size[1] + 1, x_size[0], x_size[2], x_size[3]])
|
||||
x = tf.slice(x, [1, 0, 0, 0], [-1, -1, -1, -1])
|
||||
x = tf.reshape(x, x_size)
|
||||
|
||||
return x
|
||||
|
||||
|
||||
def rel_multihead_attn(w, r, r_w_bias, r_r_bias, attn_mask, mems, d_model,
|
||||
n_head, d_head, dropout, dropatt, is_training,
|
||||
kernel_initializer, scope='rel_attn'):
|
||||
scale = 1 / (d_head ** 0.5)
|
||||
with tf.variable_scope(scope):
|
||||
qlen = tf.shape(w)[0]
|
||||
rlen = tf.shape(r)[0]
|
||||
bsz = tf.shape(w)[1]
|
||||
|
||||
cat = tf.concat([mems, w],
|
||||
0) if mems is not None and mems.shape.ndims > 1 else w
|
||||
w_heads = tf.layers.dense(cat, 3 * n_head * d_head, use_bias=False,
|
||||
kernel_initializer=kernel_initializer, name='qkv')
|
||||
r_head_k = tf.layers.dense(r, n_head * d_head, use_bias=False,
|
||||
kernel_initializer=kernel_initializer, name='r')
|
||||
|
||||
w_head_q, w_head_k, w_head_v = tf.split(w_heads, 3, -1)
|
||||
w_head_q = w_head_q[-qlen:]
|
||||
|
||||
klen = tf.shape(w_head_k)[0]
|
||||
|
||||
w_head_q = tf.reshape(w_head_q, [qlen, bsz, n_head, d_head])
|
||||
w_head_k = tf.reshape(w_head_k, [klen, bsz, n_head, d_head])
|
||||
w_head_v = tf.reshape(w_head_v, [klen, bsz, n_head, d_head])
|
||||
|
||||
r_head_k = tf.reshape(r_head_k, [rlen, n_head, d_head])
|
||||
|
||||
rw_head_q = w_head_q + r_w_bias
|
||||
rr_head_q = w_head_q + r_r_bias
|
||||
|
||||
AC = tf.einsum('ibnd,jbnd->ijbn', rw_head_q, w_head_k)
|
||||
BD = tf.einsum('ibnd,jnd->ijbn', rr_head_q, r_head_k)
|
||||
BD = rel_shift(BD)
|
||||
|
||||
attn_score = (AC + BD) * scale
|
||||
attn_mask_t = attn_mask[:, :, None, None]
|
||||
attn_score = attn_score * (1 - attn_mask_t) - 1e30 * attn_mask_t
|
||||
|
||||
attn_prob = tf.nn.softmax(attn_score, 1)
|
||||
attn_prob = tf.layers.dropout(attn_prob, dropatt, training=is_training)
|
||||
|
||||
attn_vec = tf.einsum('ijbn,jbnd->ibnd', attn_prob, w_head_v)
|
||||
size_t = tf.shape(attn_vec)
|
||||
attn_vec = tf.reshape(attn_vec, [size_t[0], size_t[1], n_head * d_head])
|
||||
|
||||
attn_out = tf.layers.dense(attn_vec, d_model, use_bias=False,
|
||||
kernel_initializer=kernel_initializer, name='o')
|
||||
attn_out = tf.layers.dropout(attn_out, dropout, training=is_training)
|
||||
|
||||
output = tf.contrib.layers.layer_norm(attn_out + w, begin_norm_axis=-1)
|
||||
return output
|
||||
|
||||
|
||||
def embedding_lookup(lookup_table, x, use_tpu=True):
|
||||
if use_tpu:
|
||||
n_token = tf.shape(lookup_table)[0]
|
||||
one_hot_idx = tf.one_hot(x, n_token)
|
||||
if one_hot_idx.shape.ndims == 2:
|
||||
return tf.einsum('nd,in->id', lookup_table, one_hot_idx)
|
||||
else:
|
||||
return tf.einsum('nd,ibn->ibd', lookup_table, one_hot_idx)
|
||||
else:
|
||||
return tf.nn.embedding_lookup(lookup_table, x)
|
||||
|
||||
|
||||
def mask_adaptive_embedding_lookup(x, n_token, d_embed, d_proj, cutoffs, initializer,
|
||||
proj_initializer, div_val=1,
|
||||
proj_same_dim=True,
|
||||
scope='adaptive_embed', **kwargs):
|
||||
emb_scale = d_proj ** 0.5
|
||||
with tf.variable_scope(scope):
|
||||
if div_val == 1:
|
||||
lookup_table = tf.get_variable('lookup_table', [n_token, d_embed],
|
||||
initializer=initializer)
|
||||
y = embedding_lookup(lookup_table, x, use_tpu=False)
|
||||
if d_proj != d_embed:
|
||||
proj_W = tf.get_variable('proj_W', [d_embed, d_proj],
|
||||
initializer=proj_initializer)
|
||||
y = tf.einsum('ibe,ed->ibd', y, proj_W)
|
||||
else:
|
||||
proj_W = None
|
||||
ret_params = [lookup_table, proj_W]
|
||||
else:
|
||||
tables, projs = [], []
|
||||
cutoff_ends = [0] + cutoffs + [n_token]
|
||||
x_size = tf.shape(x)
|
||||
y = tf.zeros([x_size[0], x_size[1], d_proj])
|
||||
for i in range(len(cutoff_ends) - 1):
|
||||
with tf.variable_scope('cutoff_{}'.format(i)):
|
||||
l_idx, r_idx = cutoff_ends[i], cutoff_ends[i + 1]
|
||||
mask = (x >= l_idx) & (x < r_idx)
|
||||
cur_x = tf.boolean_mask(x, mask) - l_idx
|
||||
cur_d_embed = d_embed // (div_val ** i)
|
||||
lookup_table = tf.get_variable('lookup_table',
|
||||
[r_idx - l_idx, cur_d_embed],
|
||||
initializer=initializer)
|
||||
cur_y = embedding_lookup(lookup_table, cur_x, use_tpu=False)
|
||||
if d_proj == cur_d_embed and not proj_same_dim:
|
||||
proj_W = None
|
||||
else:
|
||||
proj_W = tf.get_variable('proj_W', [cur_d_embed, d_proj],
|
||||
initializer=proj_initializer)
|
||||
cur_y = tf.einsum('id,de->ie', cur_y, proj_W)
|
||||
mask_idx = tf.to_int64(tf.where(mask))
|
||||
y += tf.scatter_nd(mask_idx, cur_y, tf.to_int64(tf.shape(y)))
|
||||
tables.append(lookup_table)
|
||||
projs.append(proj_W)
|
||||
ret_params = [tables, projs]
|
||||
|
||||
y *= emb_scale
|
||||
return y, ret_params
|
||||
|
||||
|
||||
def mul_adaptive_embedding_lookup(x, n_token, d_embed, d_proj, cutoffs, initializer,
|
||||
proj_initializer, div_val=1, perms=None,
|
||||
proj_same_dim=True,
|
||||
scope='adaptive_embed'):
|
||||
"""
|
||||
perms: If None, first compute W = W1 x W2 (projection for each bin),
|
||||
and then compute X x W (embedding lookup). If not None,
|
||||
use bin-based embedding lookup with max_bin_size defined by
|
||||
the shape of perms.
|
||||
"""
|
||||
emb_scale = d_proj ** 0.5
|
||||
with tf.variable_scope(scope):
|
||||
if div_val == 1:
|
||||
lookup_table = tf.get_variable('lookup_table', [n_token, d_embed],
|
||||
initializer=initializer)
|
||||
y = embedding_lookup(lookup_table, x)
|
||||
if d_proj != d_embed:
|
||||
proj_W = tf.get_variable('proj_W', [d_embed, d_proj],
|
||||
initializer=proj_initializer)
|
||||
y = tf.einsum('ibe,ed->ibd', y, proj_W)
|
||||
else:
|
||||
proj_W = None
|
||||
ret_params = [lookup_table, proj_W]
|
||||
else:
|
||||
tables, projs = [], []
|
||||
cutoff_ends = [0] + cutoffs + [n_token]
|
||||
x_size = tf.shape(x)
|
||||
if perms is None:
|
||||
cat_lookup = []
|
||||
else:
|
||||
cat_lookup = tf.zeros([x_size[0], x_size[1], d_proj])
|
||||
for i in range(len(cutoff_ends) - 1):
|
||||
with tf.variable_scope('cutoff_{}'.format(i)):
|
||||
l_idx, r_idx = cutoff_ends[i], cutoff_ends[i + 1]
|
||||
cur_d_embed = d_embed // (div_val ** i)
|
||||
lookup_table = tf.get_variable('lookup_table',
|
||||
[r_idx - l_idx, cur_d_embed],
|
||||
initializer=initializer)
|
||||
if cur_d_embed == d_proj and not proj_same_dim:
|
||||
proj_W = None
|
||||
else:
|
||||
proj_W = tf.get_variable('proj_W', [cur_d_embed, d_proj],
|
||||
initializer=proj_initializer)
|
||||
if perms is None:
|
||||
cat_lookup.append(tf.einsum('ie,ed->id', lookup_table, proj_W))
|
||||
else:
|
||||
# speed up the computation of the first bin
|
||||
# also save some meory
|
||||
if i == 0:
|
||||
cur_y = embedding_lookup(lookup_table, tf.minimum(x, r_idx - 1))
|
||||
if proj_W is not None:
|
||||
cur_y = tf.einsum('ibe,ed->ibd', cur_y, proj_W)
|
||||
cur_y *= perms[i][:, :, None]
|
||||
cat_lookup += cur_y
|
||||
else:
|
||||
cur_x = tf.einsum('ib,ibk->k', tf.to_float(x - l_idx), perms[i])
|
||||
cur_x = tf.to_int32(cur_x)
|
||||
cur_y = embedding_lookup(lookup_table, cur_x)
|
||||
if proj_W is not None:
|
||||
cur_y = tf.einsum('ke,ed->kd', cur_y, proj_W)
|
||||
cat_lookup += tf.einsum('kd,ibk->ibd', cur_y, perms[i])
|
||||
tables.append(lookup_table)
|
||||
projs.append(proj_W)
|
||||
if perms is None:
|
||||
cat_lookup = tf.concat(cat_lookup, 0)
|
||||
y = embedding_lookup(cat_lookup, x)
|
||||
else:
|
||||
y = cat_lookup
|
||||
ret_params = [tables, projs]
|
||||
|
||||
y *= emb_scale
|
||||
return y, ret_params
|
||||
|
||||
|
||||
def mask_adaptive_logsoftmax(hidden, target, n_token, d_embed, d_proj, cutoffs,
|
||||
params, tie_projs,
|
||||
initializer=None, proj_initializer=None,
|
||||
div_val=1, scope='adaptive_softmax',
|
||||
proj_same_dim=True,
|
||||
return_mean=True, **kwargs):
|
||||
def _logit(x, W, b, proj):
|
||||
y = x
|
||||
if proj is not None:
|
||||
y = tf.einsum('ibd,ed->ibe', y, proj)
|
||||
return tf.einsum('ibd,nd->ibn', y, W) + b
|
||||
|
||||
params_W, params_projs = params[0], params[1]
|
||||
|
||||
def _gather_logprob(logprob, target):
|
||||
lp_size = tf.shape(logprob)
|
||||
r = tf.range(lp_size[0])
|
||||
idx = tf.stack([r, target], 1)
|
||||
return tf.gather_nd(logprob, idx)
|
||||
|
||||
with tf.variable_scope(scope):
|
||||
if len(cutoffs) == 0:
|
||||
softmax_b = tf.get_variable('bias', [n_token],
|
||||
initializer=tf.zeros_initializer())
|
||||
output = _logit(hidden, params_W, softmax_b, params_projs)
|
||||
nll = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=target,
|
||||
logits=output)
|
||||
else:
|
||||
cutoff_ends = [0] + cutoffs + [n_token]
|
||||
nll = tf.zeros_like(target, dtype=tf.float32)
|
||||
for i in range(len(cutoff_ends) - 1):
|
||||
with tf.variable_scope('cutoff_{}'.format(i)):
|
||||
l_idx, r_idx = cutoff_ends[i], cutoff_ends[i + 1]
|
||||
mask = (target >= l_idx) & (target < r_idx)
|
||||
mask_idx = tf.where(mask)
|
||||
cur_target = tf.boolean_mask(target, mask) - l_idx
|
||||
cur_d_embed = d_embed // (div_val ** i)
|
||||
|
||||
if div_val == 1:
|
||||
cur_W = params_W[l_idx: r_idx]
|
||||
else:
|
||||
cur_W = params_W[i]
|
||||
cur_b = tf.get_variable('b', [r_idx - l_idx],
|
||||
initializer=tf.zeros_initializer())
|
||||
if tie_projs[i]:
|
||||
if div_val == 1:
|
||||
cur_proj = params_projs
|
||||
else:
|
||||
cur_proj = params_projs[i]
|
||||
else:
|
||||
if (div_val == 1 or not proj_same_dim) and d_proj == cur_d_embed:
|
||||
cur_proj = None
|
||||
else:
|
||||
cur_proj = tf.get_variable('proj', [cur_d_embed, d_proj],
|
||||
initializer=proj_initializer)
|
||||
if i == 0:
|
||||
cluster_W = tf.get_variable('cluster_W', [len(cutoffs), d_embed],
|
||||
initializer=tf.zeros_initializer())
|
||||
cluster_b = tf.get_variable('cluster_b', [len(cutoffs)],
|
||||
initializer=tf.zeros_initializer())
|
||||
cur_W = tf.concat([cur_W, cluster_W], 0)
|
||||
cur_b = tf.concat([cur_b, cluster_b], 0)
|
||||
|
||||
head_logit = _logit(hidden, cur_W, cur_b, cur_proj)
|
||||
head_logprob = tf.nn.log_softmax(head_logit)
|
||||
cur_head_logprob = tf.boolean_mask(head_logprob, mask)
|
||||
cur_logprob = _gather_logprob(cur_head_logprob, cur_target)
|
||||
else:
|
||||
cur_head_logprob = tf.boolean_mask(head_logprob, mask)
|
||||
cur_hidden = tf.boolean_mask(hidden, mask)
|
||||
tail_logit = tf.squeeze(_logit(
|
||||
cur_hidden[None], cur_W, cur_b, cur_proj), 0)
|
||||
tail_logprob = tf.nn.log_softmax(tail_logit)
|
||||
cur_logprob = (cur_head_logprob[:, cutoff_ends[1] + i - 1] +
|
||||
_gather_logprob(tail_logprob, cur_target))
|
||||
nll += tf.scatter_nd(mask_idx, -cur_logprob,
|
||||
tf.to_int64(tf.shape(nll)))
|
||||
if return_mean:
|
||||
nll = tf.reduce_mean(nll)
|
||||
return nll
|
||||
|
||||
|
||||
def mul_adaptive_logsoftmax(hidden, target, n_token, d_embed, d_proj, cutoffs,
|
||||
params, tie_projs,
|
||||
initializer=None, proj_initializer=None,
|
||||
div_val=1, perms=None, proj_same_dim=True,
|
||||
scope='adaptive_softmax',
|
||||
**kwargs):
|
||||
def _logit(x, W, b, proj):
|
||||
y = x
|
||||
if x.shape.ndims == 3:
|
||||
if proj is not None:
|
||||
y = tf.einsum('ibd,ed->ibe', y, proj)
|
||||
return tf.einsum('ibd,nd->ibn', y, W) + b
|
||||
else:
|
||||
if proj is not None:
|
||||
y = tf.einsum('id,ed->ie', y, proj)
|
||||
return tf.einsum('id,nd->in', y, W) + b
|
||||
|
||||
params_W, params_projs = params[0], params[1]
|
||||
|
||||
with tf.variable_scope(scope):
|
||||
if len(cutoffs) == 0:
|
||||
softmax_b = tf.get_variable('bias', [n_token],
|
||||
initializer=tf.zeros_initializer())
|
||||
output = _logit(hidden, params_W, softmax_b, params_projs)
|
||||
nll = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=target,
|
||||
logits=output)
|
||||
nll = tf.reduce_mean(nll)
|
||||
else:
|
||||
total_loss, total_cnt = 0, 0
|
||||
cutoff_ends = [0] + cutoffs + [n_token]
|
||||
for i in range(len(cutoff_ends) - 1):
|
||||
with tf.variable_scope('cutoff_{}'.format(i)):
|
||||
l_idx, r_idx = cutoff_ends[i], cutoff_ends[i + 1]
|
||||
|
||||
cur_d_embed = d_embed // (div_val ** i)
|
||||
|
||||
if div_val == 1:
|
||||
cur_W = params_W[l_idx: r_idx]
|
||||
else:
|
||||
cur_W = params_W[i]
|
||||
cur_b = tf.get_variable('b', [r_idx - l_idx],
|
||||
initializer=tf.zeros_initializer())
|
||||
if tie_projs[i]:
|
||||
if div_val == 1:
|
||||
cur_proj = params_projs
|
||||
else:
|
||||
cur_proj = params_projs[i]
|
||||
else:
|
||||
if (div_val == 1 or not proj_same_dim) and d_proj == cur_d_embed:
|
||||
cur_proj = None
|
||||
else:
|
||||
cur_proj = tf.get_variable('proj', [cur_d_embed, d_proj],
|
||||
initializer=proj_initializer)
|
||||
|
||||
if i == 0:
|
||||
cluster_W = tf.get_variable('cluster_W', [len(cutoffs), d_embed],
|
||||
initializer=tf.zeros_initializer())
|
||||
cluster_b = tf.get_variable('cluster_b', [len(cutoffs)],
|
||||
initializer=tf.zeros_initializer())
|
||||
cur_W = tf.concat([cur_W, cluster_W], 0)
|
||||
cur_b = tf.concat([cur_b, cluster_b], 0)
|
||||
|
||||
head_logit = _logit(hidden, cur_W, cur_b, cur_proj)
|
||||
|
||||
head_target = kwargs.get("head_target")
|
||||
head_nll = tf.nn.sparse_softmax_cross_entropy_with_logits(
|
||||
labels=head_target,
|
||||
logits=head_logit)
|
||||
|
||||
masked_loss = head_nll * perms[i]
|
||||
total_loss += tf.reduce_sum(masked_loss)
|
||||
total_cnt += tf.reduce_sum(perms[i])
|
||||
|
||||
# head_logprob = tf.nn.log_softmax(head_logit)
|
||||
|
||||
# final_logprob = head_logprob * perms[i][:, :, None]
|
||||
# final_target = tf.one_hot(target, tf.shape(head_logprob)[2])
|
||||
# total_loss -= tf.einsum('ibn,ibn->', final_logprob, final_target)
|
||||
# total_cnt += tf.reduce_sum(perms[i])
|
||||
else:
|
||||
cur_head_nll = tf.einsum('ib,ibk->k', head_nll, perms[i])
|
||||
|
||||
cur_hidden = tf.einsum('ibd,ibk->kd', hidden, perms[i])
|
||||
tail_logit = _logit(cur_hidden, cur_W, cur_b, cur_proj)
|
||||
|
||||
tail_target = tf.einsum('ib,ibk->k', tf.to_float(target - l_idx),
|
||||
perms[i])
|
||||
tail_nll = tf.nn.sparse_softmax_cross_entropy_with_logits(
|
||||
labels=tf.to_int32(tail_target),
|
||||
logits=tail_logit)
|
||||
|
||||
sum_nll = cur_head_nll + tail_nll
|
||||
mask = tf.reduce_sum(perms[i], [0, 1])
|
||||
|
||||
masked_loss = sum_nll * mask
|
||||
total_loss += tf.reduce_sum(masked_loss)
|
||||
total_cnt += tf.reduce_sum(mask)
|
||||
|
||||
nll = total_loss / total_cnt
|
||||
|
||||
return nll
|
||||
|
||||
|
||||
def _create_mask(qlen, mlen, same_length=False):
|
||||
attn_mask = tf.ones([qlen, qlen])
|
||||
mask_u = tf.matrix_band_part(attn_mask, 0, -1)
|
||||
mask_dia = tf.matrix_band_part(attn_mask, 0, 0)
|
||||
attn_mask_pad = tf.zeros([qlen, mlen])
|
||||
ret = tf.concat([attn_mask_pad, mask_u - mask_dia], 1)
|
||||
if same_length:
|
||||
mask_l = tf.matrix_band_part(attn_mask, -1, 0)
|
||||
ret = tf.concat([ret[:, :qlen] + mask_l - mask_dia, ret[:, qlen:]], 1)
|
||||
return ret
|
||||
|
||||
def _cache_mem(curr_out, prev_mem, mem_len=None):
|
||||
if mem_len is None or prev_mem is None:
|
||||
new_mem = curr_out
|
||||
elif mem_len == 0:
|
||||
return prev_mem
|
||||
else:
|
||||
new_mem = tf.concat([prev_mem, curr_out], 0)[- mem_len:]
|
||||
|
||||
return tf.stop_gradient(new_mem)
|
||||
|
||||
|
||||
def transformer(dec_inp, target, mems, n_token, n_layer, d_model, d_embed,
|
||||
n_head, d_head, d_inner, dropout, dropatt,
|
||||
initializer, is_training, proj_initializer=None,
|
||||
mem_len=None, cutoffs=[], div_val=1, tie_projs=[],
|
||||
same_length=False, clamp_len=-1, use_tpu=True,
|
||||
input_perms=None, target_perms=None, head_target=None,
|
||||
untie_r=False, proj_same_dim=True,
|
||||
scope='transformer'):
|
||||
"""
|
||||
cutoffs: a list of python int. Cutoffs for adaptive softmax.
|
||||
tie_projs: a list of python bools. Whether to tie the projections.
|
||||
use_tpu: if True, use one_hot in embedding lookup and bin-based implementation
|
||||
of adaptive softmax.
|
||||
perms: a list of tensors. Each tensor should of size [len, bsz, bin_size].
|
||||
Only used in the adaptive setting.
|
||||
"""
|
||||
new_mems = []
|
||||
with tf.variable_scope(scope):
|
||||
if untie_r:
|
||||
r_w_bias = tf.get_variable('r_w_bias', [n_layer, n_head, d_head],
|
||||
initializer=initializer)
|
||||
r_r_bias = tf.get_variable('r_r_bias', [n_layer, n_head, d_head],
|
||||
initializer=initializer)
|
||||
else:
|
||||
r_w_bias = tf.get_variable('r_w_bias', [n_head, d_head],
|
||||
initializer=initializer)
|
||||
r_r_bias = tf.get_variable('r_r_bias', [n_head, d_head],
|
||||
initializer=initializer)
|
||||
|
||||
qlen = tf.shape(dec_inp)[0]
|
||||
mlen = tf.shape(mems[0])[0] if mems is not None else 0
|
||||
klen = mlen + qlen
|
||||
|
||||
if proj_initializer is None:
|
||||
proj_initializer = initializer
|
||||
lookup_fn = (mul_adaptive_embedding_lookup if use_tpu else
|
||||
mask_adaptive_embedding_lookup)
|
||||
embeddings, shared_params = lookup_fn(
|
||||
x=dec_inp,
|
||||
n_token=n_token,
|
||||
d_embed=d_embed,
|
||||
d_proj=d_model,
|
||||
cutoffs=cutoffs,
|
||||
initializer=initializer,
|
||||
proj_initializer=proj_initializer,
|
||||
div_val= div_val,
|
||||
perms=input_perms,
|
||||
proj_same_dim=proj_same_dim)
|
||||
|
||||
attn_mask = _create_mask(qlen, mlen, same_length)
|
||||
|
||||
pos_seq = tf.range(klen - 1, -1, -1.0)
|
||||
if clamp_len > 0:
|
||||
pos_seq = tf.minimum(pos_seq, clamp_len)
|
||||
inv_freq = 1 / (10000 ** (tf.range(0, d_model, 2.0) / d_model))
|
||||
pos_emb = positional_embedding(pos_seq, inv_freq)
|
||||
|
||||
output = tf.layers.dropout(embeddings, dropout, training=is_training)
|
||||
pos_emb = tf.layers.dropout(pos_emb, dropout, training=is_training)
|
||||
|
||||
if mems is None:
|
||||
mems = [None] * n_layer
|
||||
|
||||
for i in range(n_layer):
|
||||
# cache new mems
|
||||
new_mems.append(_cache_mem(output, mems[i], mem_len))
|
||||
|
||||
with tf.variable_scope('layer_{}'.format(i)):
|
||||
output = rel_multihead_attn(
|
||||
w=output,
|
||||
r=pos_emb,
|
||||
r_w_bias=r_w_bias if not untie_r else r_w_bias[i],
|
||||
r_r_bias=r_r_bias if not untie_r else r_r_bias[i],
|
||||
attn_mask=attn_mask,
|
||||
mems=mems[i],
|
||||
d_model=d_model,
|
||||
n_head=n_head,
|
||||
d_head=d_head,
|
||||
dropout=dropout,
|
||||
dropatt=dropatt,
|
||||
is_training=is_training,
|
||||
kernel_initializer=initializer)
|
||||
output = positionwise_FF(
|
||||
inp=output,
|
||||
d_model=d_model,
|
||||
d_inner=d_inner,
|
||||
dropout=dropout,
|
||||
kernel_initializer=initializer,
|
||||
is_training=is_training)
|
||||
|
||||
output = tf.layers.dropout(output, dropout, training=is_training)
|
||||
|
||||
logsoftmax_fn = (mul_adaptive_logsoftmax if use_tpu else
|
||||
mask_adaptive_logsoftmax)
|
||||
loss = logsoftmax_fn(
|
||||
hidden=output,
|
||||
target=target,
|
||||
n_token=n_token,
|
||||
d_embed=d_embed,
|
||||
d_proj=d_model,
|
||||
cutoffs=cutoffs,
|
||||
params=shared_params,
|
||||
tie_projs=tie_projs,
|
||||
initializer=initializer,
|
||||
proj_initializer=proj_initializer,
|
||||
div_val=div_val,
|
||||
perms=target_perms,
|
||||
head_target=head_target,
|
||||
proj_same_dim=proj_same_dim)
|
||||
return loss, new_mems
|
||||
|
||||
102
transformer-xl/tf/scripts/enwik8_base_gpu.sh
Normal file
102
transformer-xl/tf/scripts/enwik8_base_gpu.sh
Normal file
|
|
@ -0,0 +1,102 @@
|
|||
#!/bin/bash
|
||||
|
||||
# Data
|
||||
DATA_ROOT=../data/enwik8/
|
||||
|
||||
# Model
|
||||
N_LAYER=12
|
||||
D_MODEL=512
|
||||
D_EMBED=512
|
||||
N_HEAD=8
|
||||
D_HEAD=64
|
||||
D_INNER=2048
|
||||
|
||||
# Training
|
||||
TGT_LEN=512
|
||||
MEM_LEN=512
|
||||
|
||||
BSZ=24
|
||||
NUM_CORE=4
|
||||
|
||||
# Testing
|
||||
TEST_TGT_LEN=80
|
||||
TEST_MEM_LEN=2100
|
||||
TEST_CLAMP_LEN=820
|
||||
|
||||
TEST_BSZ=10
|
||||
TEST_NUM_CORE=1
|
||||
|
||||
if [[ $1 == 'train_data' ]]; then
|
||||
python data_utils.py \
|
||||
--data_dir=${DATA_ROOT}/ \
|
||||
--dataset=enwik8 \
|
||||
--tgt_len=${TGT_LEN} \
|
||||
--per_host_train_bsz=${BSZ} \
|
||||
--per_host_valid_bsz=${BSZ} \
|
||||
--num_passes=1 \
|
||||
--use_tpu=False \
|
||||
${@:2}
|
||||
elif [[ $1 == 'test_data' ]]; then
|
||||
python data_utils.py \
|
||||
--data_dir=${DATA_ROOT}/ \
|
||||
--dataset=enwik8 \
|
||||
--tgt_len=${TEST_TGT_LEN} \
|
||||
--per_host_test_bsz=${TEST_BSZ} \
|
||||
--num_passes=1 \
|
||||
--use_tpu=False \
|
||||
${@:2}
|
||||
elif [[ $1 == 'train' ]]; then
|
||||
echo 'Run training...'
|
||||
python train_gpu.py \
|
||||
--data_dir=${DATA_ROOT}/tfrecords \
|
||||
--record_info_dir=${DATA_ROOT}/tfrecords/ \
|
||||
--corpus_info_path=${DATA_ROOT}/corpus-info.json \
|
||||
--model_dir=EXP-enwik8 \
|
||||
--n_layer=${N_LAYER} \
|
||||
--d_model=${D_MODEL} \
|
||||
--d_embed=${D_EMBED} \
|
||||
--n_head=${N_HEAD} \
|
||||
--d_head=${D_HEAD} \
|
||||
--d_inner=${D_INNER} \
|
||||
--dropout=0.1 \
|
||||
--dropatt=0.0 \
|
||||
--learning_rate=0.00025 \
|
||||
--warmup_steps=0 \
|
||||
--train_steps=400000 \
|
||||
--tgt_len=${TGT_LEN} \
|
||||
--mem_len=${MEM_LEN} \
|
||||
--train_batch_size=${BSZ} \
|
||||
--num_core_per_host=${NUM_CORE} \
|
||||
--iterations=200 \
|
||||
--save_steps=4000 \
|
||||
--do_train=True \
|
||||
--do_eval=False \
|
||||
${@:2}
|
||||
elif [[ $1 == 'eval' ]]; then
|
||||
echo 'Run evaluation...'
|
||||
python train_gpu.py \
|
||||
--data_dir=${DATA_ROOT}/tfrecords \
|
||||
--record_info_dir=${DATA_ROOT}/tfrecords/ \
|
||||
--corpus_info_path=${DATA_ROOT}/corpus-info.json \
|
||||
--model_dir=EXP-enwik8 \
|
||||
--n_layer=${N_LAYER} \
|
||||
--d_model=${D_MODEL} \
|
||||
--d_embed=${D_EMBED} \
|
||||
--n_head=${N_HEAD} \
|
||||
--d_head=${D_HEAD} \
|
||||
--d_inner=${D_INNER} \
|
||||
--dropout=0.0 \
|
||||
--dropatt=0.0 \
|
||||
--tgt_len=${TEST_TGT_LEN} \
|
||||
--mem_len=${TEST_MEM_LEN} \
|
||||
--clamp_len=${TEST_CLAMP_LEN} \
|
||||
--same_length=True \
|
||||
--eval_batch_size=${TEST_BSZ} \
|
||||
--num_core_per_host=${TEST_NUM_CORE} \
|
||||
--do_train=False \
|
||||
--do_eval=True \
|
||||
--eval_split=test \
|
||||
${@:2}
|
||||
else
|
||||
echo 'unknown argument 1'
|
||||
fi
|
||||
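Usage note for the `*_base_gpu.sh` scripts: the first argument selects the mode (`train_data`, `test_data`, `train`, or `eval`), and everything after it is forwarded verbatim to `data_utils.py` / `train_gpu.py` through `${@:2}`, so any flag defined there can be overridden from the command line, e.g. `bash scripts/enwik8_base_gpu.sh train --num_core_per_host=2 --train_batch_size=12`.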
122
transformer-xl/tf/scripts/enwik8_large_tpu.sh
Normal file
@@ -0,0 +1,122 @@
#!/bin/bash
|
||||
|
||||
# Path
|
||||
LOCAL_DIR=../data/enwik8/
|
||||
GSDATA=
|
||||
GSEXP=
|
||||
|
||||
# TPU setting
|
||||
NUM_HOST=2
|
||||
NUM_CORE=16 # TPUv2 -> 8 | TPUv3 -> 16
|
||||
|
||||
TEST_NUM_HOST=1
|
||||
TEST_NUM_CORE=8 # TPUv2 -> 8 | TPUv3 -> 16
|
||||
|
||||
# Model
|
||||
N_LAYER=24
|
||||
D_MODEL=1024
|
||||
D_EMBED=1024
|
||||
N_HEAD=8
|
||||
D_HEAD=128
|
||||
D_INNER=3072
|
||||
|
||||
# Training
|
||||
TGT_LEN=768
|
||||
MEM_LEN=768
|
||||
TRAIN_BSZ=64
|
||||
VALID_BSZ=64
|
||||
|
||||
# Testing
|
||||
TEST_TGT_LEN=128
|
||||
TEST_MEM_LEN=3800
|
||||
TEST_CLAMP_LEN=1000
|
||||
TEST_BSZ=16
|
||||
|
||||
if [[ $1 == 'train_data' ]]; then
|
||||
python data_utils.py \
|
||||
--data_dir=${LOCAL_DIR}/ \
|
||||
--dataset=enwik8 \
|
||||
--tgt_len=${TGT_LEN} \
|
||||
--per_host_train_bsz=${TRAIN_BSZ} \
|
||||
--per_host_valid_bsz=${VALID_BSZ} \
|
||||
--num_core_per_host=${NUM_CORE} \
|
||||
--num_passes=10 \
|
||||
--use_tpu=True \
|
||||
${@:2}
|
||||
|
||||
SRC_PATTERN=train.bsz-${TRAIN_BSZ}.tlen-${TGT_LEN}.core-${NUM_CORE}*
|
||||
gsutil cp ${LOCAL_DIR}/tfrecords/${SRC_PATTERN} ${GSDATA}/enwik8-tfrecords/
|
||||
|
||||
SRC_PATTERN=valid.bsz-${VALID_BSZ}.tlen-${TGT_LEN}.core-${NUM_CORE}*
|
||||
gsutil cp ${LOCAL_DIR}/tfrecords/${SRC_PATTERN} ${GSDATA}/enwik8-tfrecords/
|
||||
|
||||
elif [[ $1 == 'test_data' ]]; then
|
||||
python data_utils.py \
|
||||
--data_dir=${LOCAL_DIR}/ \
|
||||
--dataset=enwik8 \
|
||||
--tgt_len=${TEST_TGT_LEN} \
|
||||
--per_host_test_bsz=${TEST_BSZ} \
|
||||
--num_core_per_host=${TEST_NUM_CORE} \
|
||||
--num_passes=1 \
|
||||
--use_tpu=True \
|
||||
${@:2}
|
||||
|
||||
SRC_PATTERN=test.bsz-${TEST_BSZ}.tlen-${TEST_TGT_LEN}.core-${TEST_NUM_CORE}*
|
||||
gsutil cp ${LOCAL_DIR}/tfrecords/${SRC_PATTERN} ${GSDATA}/enwik8-tfrecords/
|
||||
|
||||
elif [[ $1 == 'train' ]]; then
|
||||
echo 'Run training...'
|
||||
python train.py \
|
||||
--data_dir=${GSDATA}/enwik8-tfrecords \
|
||||
--record_info_dir=${LOCAL_DIR}/tfrecords/ \
|
||||
--corpus_info_path=${LOCAL_DIR}/corpus-info.json \
|
||||
--model_dir=${GSEXP}/enwik8 \
|
||||
--n_layer=${N_LAYER} \
|
||||
--d_model=${D_MODEL} \
|
||||
--d_embed=${D_EMBED} \
|
||||
--n_head=${N_HEAD} \
|
||||
--d_head=${D_HEAD} \
|
||||
--d_inner=${D_INNER} \
|
||||
--dropout=0.15 \
|
||||
--dropatt=0.15 \
|
||||
--learning_rate=0.00025 \
|
||||
--warmup_steps=4000 \
|
||||
--train_steps=400000 \
|
||||
--tgt_len=${TGT_LEN} \
|
||||
--mem_len=${MEM_LEN} \
|
||||
--train_batch_size=${TRAIN_BSZ} \
|
||||
--use_tpu=True \
|
||||
--num_hosts=${NUM_HOST} \
|
||||
--num_core_per_host=${NUM_CORE} \
|
||||
--iterations=1000 \
|
||||
--save_steps=10000 \
|
||||
--do_train=True \
|
||||
--do_eval=False \
|
||||
${@:2}
|
||||
|
||||
elif [[ $1 == 'eval' ]]; then
|
||||
echo 'Run evaluation...'
|
||||
python train.py \
|
||||
--data_dir=${GSDATA}/enwik8-tfrecords \
|
||||
--record_info_dir=${LOCAL_DIR}/tfrecords/ \
|
||||
--corpus_info_path=${LOCAL_DIR}/corpus-info.json \
|
||||
--model_dir=${GSEXP}/enwik8 \
|
||||
--n_layer=${N_LAYER} \
|
||||
--d_model=${D_MODEL} \
|
||||
--d_embed=${D_EMBED} \
|
||||
--n_head=${N_HEAD} \
|
||||
--d_head=${D_HEAD} \
|
||||
--d_inner=${D_INNER} \
|
||||
--tgt_len=${TEST_TGT_LEN} \
|
||||
--mem_len=${TEST_MEM_LEN} \
|
||||
--eval_batch_size=${TEST_BSZ} \
|
||||
--num_hosts=${TEST_NUM_HOST} \
|
||||
--num_core_per_host=${TEST_NUM_CORE} \
|
||||
--use_tpu=True \
|
||||
--do_train=False \
|
||||
--do_eval_only=True \
|
||||
--eval_split=test \
|
||||
${@:2}
|
||||
else
|
||||
echo 'unknown argument 1'
|
||||
fi
|
||||
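Note for the `*_large_tpu.sh` scripts: `GSDATA` and `GSEXP` are left empty and must be set to Google Cloud Storage paths before running, since the preprocessed tfrecords are copied there with `gsutil cp` and `train.py` reads data from and writes checkpoints to those buckets. `NUM_CORE` and `TEST_NUM_CORE` should also match the TPU generation, as the inline comments indicate (8 cores per host on TPUv2, 16 on TPUv3).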
110
transformer-xl/tf/scripts/lm1b_base_gpu.sh
Normal file
@@ -0,0 +1,110 @@
#!/bin/bash
|
||||
|
||||
# Data
|
||||
DATA_ROOT=../data/one-billion-words/
|
||||
|
||||
# Model
|
||||
DIV_VAL=4
|
||||
N_LAYER=18
|
||||
D_MODEL=1024
|
||||
D_EMBED=1024
|
||||
N_HEAD=8
|
||||
D_HEAD=128
|
||||
D_INNER=4096
|
||||
|
||||
# Training
|
||||
TGT_LEN=256
|
||||
MEM_LEN=256
|
||||
|
||||
BSZ=256
|
||||
NUM_CORE=4
|
||||
|
||||
# Testing
|
||||
TEST_TGT_LEN=32
|
||||
TEST_MEM_LEN=128
|
||||
TEST_CLAMP_LEN=-1
|
||||
|
||||
TEST_BSZ=16
|
||||
TEST_NUM_CORE=1
|
||||
|
||||
|
||||
if [[ $1 == 'train_data' ]]; then
|
||||
python data_utils.py \
|
||||
--data_dir=${DATA_ROOT}/ \
|
||||
--dataset=lm1b \
|
||||
--tgt_len=${TGT_LEN} \
|
||||
--per_host_train_bsz=${BSZ} \
|
||||
--per_host_valid_bsz=${BSZ} \
|
||||
--num_passes=1 \
|
||||
--use_tpu=False \
|
||||
${@:2}
|
||||
elif [[ $1 == 'test_data' ]]; then
|
||||
python data_utils.py \
|
||||
--data_dir=${DATA_ROOT}/ \
|
||||
--dataset=lm1b \
|
||||
--tgt_len=${TEST_TGT_LEN} \
|
||||
--per_host_test_bsz=${TEST_BSZ} \
|
||||
--num_passes=1 \
|
||||
--use_tpu=False \
|
||||
${@:2}
|
||||
elif [[ $1 == 'train' ]]; then
|
||||
echo 'Run training...'
|
||||
python train_gpu.py \
|
||||
--data_dir=${DATA_ROOT}/tfrecords \
|
||||
--record_info_dir=${DATA_ROOT}/tfrecords/ \
|
||||
--corpus_info_path=${DATA_ROOT}/corpus-info.json \
|
||||
--model_dir=EXP-lm1b \
|
||||
--div_val=${DIV_VAL} \
|
||||
--untie_r=True \
|
||||
--proj_share_all_but_first=False \
|
||||
--proj_same_dim=False \
|
||||
--n_layer=${N_LAYER} \
|
||||
--d_model=${D_MODEL} \
|
||||
--d_embed=${D_EMBED} \
|
||||
--n_head=${N_HEAD} \
|
||||
--d_head=${D_HEAD} \
|
||||
--d_inner=${D_INNER} \
|
||||
--dropout=0.1 \
|
||||
--dropatt=0.0 \
|
||||
--learning_rate=0.00025 \
|
||||
--warmup_steps=0 \
|
||||
--train_steps=400000 \
|
||||
--tgt_len=${TGT_LEN} \
|
||||
--mem_len=${MEM_LEN} \
|
||||
--train_batch_size=${BSZ} \
|
||||
--num_core_per_host=${NUM_CORE} \
|
||||
--iterations=200 \
|
||||
--save_steps=4000 \
|
||||
${@:2}
|
||||
elif [[ $1 == 'eval' ]]; then
|
||||
echo 'Run evaluation...'
|
||||
python train_gpu.py \
|
||||
--data_dir=${DATA_ROOT}/tfrecords \
|
||||
--record_info_dir=${DATA_ROOT}/tfrecords/ \
|
||||
--corpus_info_path=${DATA_ROOT}/corpus-info.json \
|
||||
--model_dir=EXP-lm1b \
|
||||
--div_val=${DIV_VAL} \
|
||||
--untie_r=True \
|
||||
--proj_share_all_but_first=False \
|
||||
--proj_same_dim=False \
|
||||
--n_layer=${N_LAYER} \
|
||||
--d_model=${D_MODEL} \
|
||||
--d_embed=${D_EMBED} \
|
||||
--n_head=${N_HEAD} \
|
||||
--d_head=${D_HEAD} \
|
||||
--d_inner=${D_INNER} \
|
||||
--dropout=0.0 \
|
||||
--dropatt=0.0 \
|
||||
--tgt_len=${TEST_TGT_LEN} \
|
||||
--mem_len=${TEST_MEM_LEN} \
|
||||
--clamp_len=${TEST_CLAMP_LEN} \
|
||||
--same_length=True \
|
||||
--eval_batch_size=${TEST_BSZ} \
|
||||
--num_core_per_host=${TEST_NUM_CORE} \
|
||||
--do_train=False \
|
||||
--do_eval=True \
|
||||
--eval_split=test \
|
||||
${@:2}
|
||||
else
|
||||
echo 'unknown argument 1'
|
||||
fi
|
||||
136
transformer-xl/tf/scripts/lm1b_large_tpu.sh
Normal file
@@ -0,0 +1,136 @@
#!/bin/bash
|
||||
|
||||
# Path
|
||||
LOCAL_DIR=../data/one-billion-words/
|
||||
GSDATA=
|
||||
GSEXP=
|
||||
|
||||
# TPU setting
|
||||
NUM_HOST=32
|
||||
NUM_CORE=16 # TPUv2 -> 8 | TPUv3 -> 16
|
||||
|
||||
TEST_NUM_HOST=1
|
||||
TEST_NUM_CORE=8 # TPUv2 -> 8 | TPUv3 -> 16
|
||||
|
||||
# Model
|
||||
DIV_VAL=4
|
||||
N_LAYER=24
|
||||
D_MODEL=1280
|
||||
D_EMBED=1280
|
||||
N_HEAD=16
|
||||
D_HEAD=80
|
||||
D_INNER=8192
|
||||
|
||||
# Training
|
||||
TGT_LEN=32
|
||||
MEM_LEN=32
|
||||
TRAIN_BSZ=512
|
||||
VALID_BSZ=512
|
||||
TRAIN_BSZ_PER_HOST=$((TRAIN_BSZ / NUM_HOST))
|
||||
VALID_BSZ_PER_HOST=$((VALID_BSZ / NUM_HOST))
|
||||
|
||||
# Testing
|
||||
TEST_TGT_LEN=32
|
||||
TEST_MEM_LEN=128
|
||||
TEST_CLAMP_LEN=-1
|
||||
TEST_BSZ=8
|
||||
|
||||
if [[ $1 == 'train_data' ]]; then
|
||||
python data_utils.py \
|
||||
--data_dir=${LOCAL_DIR}/ \
|
||||
--dataset=lm1b \
|
||||
--tgt_len=${TGT_LEN} \
|
||||
--per_host_train_bsz=${TRAIN_BSZ_PER_HOST} \
|
||||
--per_host_valid_bsz=${VALID_BSZ_PER_HOST} \
|
||||
--num_core_per_host=${NUM_CORE} \
|
||||
--num_passes=10 \
|
||||
--use_tpu=True \
|
||||
${@:2}
|
||||
|
||||
SRC_PATTERN=train.bsz-${TRAIN_BSZ}.tlen-${TGT_LEN}.core-${NUM_CORE}*
|
||||
gsutil cp ${LOCAL_DIR}/tfrecords/${SRC_PATTERN} ${GSDATA}/lm1b-tfrecords/
|
||||
|
||||
SRC_PATTERN=valid.bsz-${VALID_BSZ}.tlen-${TGT_LEN}.core-${NUM_CORE}*
|
||||
gsutil cp ${LOCAL_DIR}/tfrecords/${SRC_PATTERN} ${GSDATA}/lm1b-tfrecords/
|
||||
|
||||
elif [[ $1 == 'test_data' ]]; then
|
||||
python data_utils.py \
|
||||
--data_dir=${LOCAL_DIR}/ \
|
||||
--dataset=lm1b \
|
||||
--tgt_len=${TEST_TGT_LEN} \
|
||||
--per_host_test_bsz=${TEST_BSZ} \
|
||||
--num_core_per_host=${TEST_NUM_CORE} \
|
||||
--num_passes=1 \
|
||||
--use_tpu=True \
|
||||
${@:2}
|
||||
|
||||
SRC_PATTERN=test.bsz-${TEST_BSZ}.tlen-${TEST_TGT_LEN}.core-${TEST_NUM_CORE}*
|
||||
gsutil cp ${LOCAL_DIR}/tfrecords/${SRC_PATTERN} ${GSDATA}/lm1b-tfrecords/
|
||||
|
||||
elif [[ $1 == 'train' ]]; then
|
||||
echo 'Run training...'
|
||||
python train.py \
|
||||
--data_dir=${GSDATA}/lm1b-tfrecords \
|
||||
--record_info_dir=${LOCAL_DIR}/tfrecords/ \
|
||||
--corpus_info_path=${LOCAL_DIR}/corpus-info.json \
|
||||
--model_dir=${GSEXP}/lm1b \
|
||||
--div_val=${DIV_VAL} \
|
||||
--untie_r=True \
|
||||
--proj_share_all_but_first=False \
|
||||
--proj_same_dim=False \
|
||||
--n_layer=${N_LAYER} \
|
||||
--d_model=${D_MODEL} \
|
||||
--d_embed=${D_EMBED} \
|
||||
--n_head=${N_HEAD} \
|
||||
--d_head=${D_HEAD} \
|
||||
--d_inner=${D_INNER} \
|
||||
--dropout=0.05 \
|
||||
--dropatt=0.05 \
|
||||
--init_std=0.005 \
|
||||
--learning_rate=0.0001 \
|
||||
--warmup_steps=30000 \
|
||||
--train_steps=1200000 \
|
||||
--tgt_len=${TGT_LEN} \
|
||||
--mem_len=${MEM_LEN} \
|
||||
--train_batch_size=${TRAIN_BSZ} \
|
||||
--num_hosts=${NUM_HOST} \
|
||||
--num_core_per_host=${NUM_CORE} \
|
||||
--iterations=1000 \
|
||||
--save_steps=10000 \
|
||||
--use_tpu=True \
|
||||
--do_eval=False \
|
||||
${@:2}
|
||||
|
||||
elif [[ $1 == 'eval' ]]; then
|
||||
echo 'Run evaluation...'
|
||||
python train.py \
|
||||
--data_dir=${GSDATA}/lm1b-tfrecords \
|
||||
--record_info_dir=${LOCAL_DIR}/tfrecords/ \
|
||||
--corpus_info_path=${LOCAL_DIR}/corpus-info.json \
|
||||
--model_dir=${GSEXP}/lm1b \
|
||||
--div_val=${DIV_VAL} \
|
||||
--untie_r=True \
|
||||
--proj_share_all_but_first=False \
|
||||
--proj_same_dim=False \
|
||||
--n_layer=${N_LAYER} \
|
||||
--d_model=${D_MODEL} \
|
||||
--d_embed=${D_EMBED} \
|
||||
--n_head=${N_HEAD} \
|
||||
--d_head=${D_HEAD} \
|
||||
--d_inner=${D_INNER} \
|
||||
--tgt_len=${TEST_TGT_LEN} \
|
||||
--mem_len=${TEST_MEM_LEN} \
|
||||
--clamp_len=${TEST_CLAMP_LEN} \
|
||||
--same_length=True \
|
||||
--eval_batch_size=${TEST_BSZ} \
|
||||
--num_hosts=${TEST_NUM_HOST} \
|
||||
--num_core_per_host=${TEST_NUM_CORE} \
|
||||
--use_tpu=True \
|
||||
--do_train=False \
|
||||
--do_eval_only=True \
|
||||
--eval_split=test \
|
||||
${@:2}
|
||||
|
||||
else
|
||||
echo 'unknown argument 1'
|
||||
fi
|
||||
102
transformer-xl/tf/scripts/text8_base_gpu.sh
Normal file
@@ -0,0 +1,102 @@
#!/bin/bash
|
||||
|
||||
# Data
|
||||
DATA_ROOT=../data/text8/
|
||||
|
||||
# Model
|
||||
N_LAYER=12
|
||||
D_MODEL=512
|
||||
D_EMBED=512
|
||||
N_HEAD=8
|
||||
D_HEAD=64
|
||||
D_INNER=2048
|
||||
|
||||
# Training
|
||||
TGT_LEN=512
|
||||
MEM_LEN=512
|
||||
|
||||
BSZ=24
|
||||
NUM_CORE=4
|
||||
|
||||
# Testing
|
||||
TEST_TGT_LEN=80
|
||||
TEST_MEM_LEN=2100
|
||||
TEST_CLAMP_LEN=820
|
||||
|
||||
TEST_BSZ=10
|
||||
TEST_NUM_CORE=1
|
||||
|
||||
if [[ $1 == 'train_data' ]]; then
|
||||
python data_utils.py \
|
||||
--data_dir=${DATA_ROOT}/ \
|
||||
--dataset=text8 \
|
||||
--tgt_len=${TGT_LEN} \
|
||||
--per_host_train_bsz=${BSZ} \
|
||||
--per_host_valid_bsz=${BSZ} \
|
||||
--num_passes=1 \
|
||||
--use_tpu=False \
|
||||
${@:2}
|
||||
elif [[ $1 == 'test_data' ]]; then
|
||||
python data_utils.py \
|
||||
--data_dir=${DATA_ROOT}/ \
|
||||
--dataset=text8 \
|
||||
--tgt_len=${TEST_TGT_LEN} \
|
||||
--per_host_test_bsz=${TEST_BSZ} \
|
||||
--num_passes=1 \
|
||||
--use_tpu=False \
|
||||
${@:2}
|
||||
elif [[ $1 == 'train' ]]; then
|
||||
echo 'Run training...'
|
||||
python train_gpu.py \
|
||||
--data_dir=${DATA_ROOT}/tfrecords \
|
||||
--record_info_dir=${DATA_ROOT}/tfrecords/ \
|
||||
--corpus_info_path=${DATA_ROOT}/corpus-info.json \
|
||||
--model_dir=EXP-text8 \
|
||||
--n_layer=${N_LAYER} \
|
||||
--d_model=${D_MODEL} \
|
||||
--d_embed=${D_EMBED} \
|
||||
--n_head=${N_HEAD} \
|
||||
--d_head=${D_HEAD} \
|
||||
--d_inner=${D_INNER} \
|
||||
--dropout=0.1 \
|
||||
--dropatt=0.0 \
|
||||
--learning_rate=0.00025 \
|
||||
--warmup_steps=0 \
|
||||
--train_steps=400000 \
|
||||
--tgt_len=${TGT_LEN} \
|
||||
--mem_len=${MEM_LEN} \
|
||||
--train_batch_size=${BSZ} \
|
||||
--num_core_per_host=${NUM_CORE} \
|
||||
--iterations=200 \
|
||||
--save_steps=4000 \
|
||||
--do_train=True \
|
||||
--do_eval=False \
|
||||
${@:2}
|
||||
elif [[ $1 == 'eval' ]]; then
|
||||
echo 'Run evaluation...'
|
||||
python train_gpu.py \
|
||||
--data_dir=${DATA_ROOT}/tfrecords \
|
||||
--record_info_dir=${DATA_ROOT}/tfrecords/ \
|
||||
--corpus_info_path=${DATA_ROOT}/corpus-info.json \
|
||||
--model_dir=EXP-text8 \
|
||||
--n_layer=${N_LAYER} \
|
||||
--d_model=${D_MODEL} \
|
||||
--d_embed=${D_EMBED} \
|
||||
--n_head=${N_HEAD} \
|
||||
--d_head=${D_HEAD} \
|
||||
--d_inner=${D_INNER} \
|
||||
--dropout=0.0 \
|
||||
--dropatt=0.0 \
|
||||
--tgt_len=${TEST_TGT_LEN} \
|
||||
--mem_len=${TEST_MEM_LEN} \
|
||||
--clamp_len=${TEST_CLAMP_LEN} \
|
||||
--same_length=True \
|
||||
--eval_batch_size=${TEST_BSZ} \
|
||||
--num_core_per_host=${TEST_NUM_CORE} \
|
||||
--do_train=False \
|
||||
--do_eval=True \
|
||||
--eval_split=test \
|
||||
${@:2}
|
||||
else
|
||||
echo 'unknown argument 1'
|
||||
fi
|
||||
122
transformer-xl/tf/scripts/text8_large_tpu.sh
Normal file
@@ -0,0 +1,122 @@
#!/bin/bash
|
||||
|
||||
# Path
|
||||
LOCAL_DIR=../data/text8/
|
||||
GSDATA=
|
||||
GSEXP=
|
||||
|
||||
# TPU setting
|
||||
NUM_HOST=2
|
||||
NUM_CORE=16 # TPUv2 -> 8 | TPUv3 -> 16
|
||||
|
||||
TEST_NUM_HOST=1
|
||||
TEST_NUM_CORE=8 # TPUv2 -> 8 | TPUv3 -> 16
|
||||
|
||||
# Model
|
||||
N_LAYER=24
|
||||
D_MODEL=1024
|
||||
D_EMBED=1024
|
||||
N_HEAD=8
|
||||
D_HEAD=128
|
||||
D_INNER=3072
|
||||
|
||||
# Training
|
||||
TGT_LEN=768
|
||||
MEM_LEN=768
|
||||
TRAIN_BSZ=64
|
||||
VALID_BSZ=64
|
||||
|
||||
# Testing
|
||||
TEST_TGT_LEN=128
|
||||
TEST_MEM_LEN=3800
|
||||
TEST_CLAMP_LEN=1000
|
||||
TEST_BSZ=16
|
||||
|
||||
if [[ $1 == 'train_data' ]]; then
|
||||
python data_utils.py \
|
||||
--data_dir=${LOCAL_DIR}/ \
|
||||
--dataset=text8 \
|
||||
--tgt_len=${TGT_LEN} \
|
||||
--per_host_train_bsz=${TRAIN_BSZ} \
|
||||
--per_host_valid_bsz=${VALID_BSZ} \
|
||||
--num_core_per_host=${NUM_CORE} \
|
||||
--num_passes=10 \
|
||||
--use_tpu=True \
|
||||
${@:2}
|
||||
|
||||
SRC_PATTERN=train.bsz-${TRAIN_BSZ}.tlen-${TGT_LEN}.core-${NUM_CORE}*
|
||||
gsutil cp ${LOCAL_DIR}/tfrecords/${SRC_PATTERN} ${GSDATA}/text8-tfrecords/
|
||||
|
||||
SRC_PATTERN=valid.bsz-${VALID_BSZ}.tlen-${TGT_LEN}.core-${NUM_CORE}*
|
||||
gsutil cp ${LOCAL_DIR}/tfrecords/${SRC_PATTERN} ${GSDATA}/text8-tfrecords/
|
||||
|
||||
elif [[ $1 == 'test_data' ]]; then
|
||||
python data_utils.py \
|
||||
--data_dir=${LOCAL_DIR}/ \
|
||||
--dataset=text8 \
|
||||
--tgt_len=${TEST_TGT_LEN} \
|
||||
--per_host_test_bsz=${TEST_BSZ} \
|
||||
--num_core_per_host=${TEST_NUM_CORE} \
|
||||
--num_passes=1 \
|
||||
--use_tpu=True \
|
||||
${@:2}
|
||||
|
||||
SRC_PATTERN=test.bsz-${TEST_BSZ}.tlen-${TEST_TGT_LEN}.core-${TEST_NUM_CORE}*
|
||||
gsutil cp ${LOCAL_DIR}/tfrecords/${SRC_PATTERN} ${GSDATA}/text8-tfrecords/
|
||||
|
||||
elif [[ $1 == 'train' ]]; then
|
||||
echo 'Run training...'
|
||||
python train.py \
|
||||
--data_dir=${GSDATA}/text8-tfrecords \
|
||||
--record_info_dir=${LOCAL_DIR}/tfrecords/ \
|
||||
--corpus_info_path=${LOCAL_DIR}/corpus-info.json \
|
||||
--model_dir=${GSEXP}/text8 \
|
||||
--n_layer=${N_LAYER} \
|
||||
--d_model=${D_MODEL} \
|
||||
--d_embed=${D_EMBED} \
|
||||
--n_head=${N_HEAD} \
|
||||
--d_head=${D_HEAD} \
|
||||
--d_inner=${D_INNER} \
|
||||
--dropout=0.15 \
|
||||
--dropatt=0.15 \
|
||||
--learning_rate=0.00025 \
|
||||
--warmup_steps=4000 \
|
||||
--train_steps=400000 \
|
||||
--tgt_len=${TGT_LEN} \
|
||||
--mem_len=${MEM_LEN} \
|
||||
--train_batch_size=${TRAIN_BSZ} \
|
||||
--use_tpu=True \
|
||||
--num_hosts=${NUM_HOST} \
|
||||
--num_core_per_host=${NUM_CORE} \
|
||||
--iterations=1000 \
|
||||
--save_steps=10000 \
|
||||
--do_train=True \
|
||||
--do_eval=False \
|
||||
${@:2}
|
||||
|
||||
elif [[ $1 == 'eval' ]]; then
|
||||
echo 'Run evaluation...'
|
||||
python train.py \
|
||||
--data_dir=${GSDATA}/text8-tfrecords \
|
||||
--record_info_dir=${LOCAL_DIR}/tfrecords/ \
|
||||
--corpus_info_path=${LOCAL_DIR}/corpus-info.json \
|
||||
--model_dir=${GSEXP}/text8 \
|
||||
--n_layer=${N_LAYER} \
|
||||
--d_model=${D_MODEL} \
|
||||
--d_embed=${D_EMBED} \
|
||||
--n_head=${N_HEAD} \
|
||||
--d_head=${D_HEAD} \
|
||||
--d_inner=${D_INNER} \
|
||||
--tgt_len=${TEST_TGT_LEN} \
|
||||
--mem_len=${TEST_MEM_LEN} \
|
||||
--eval_batch_size=${TEST_BSZ} \
|
||||
--num_hosts=${TEST_NUM_HOST} \
|
||||
--num_core_per_host=${TEST_NUM_CORE} \
|
||||
--use_tpu=True \
|
||||
--do_train=False \
|
||||
--do_eval_only=True \
|
||||
--eval_split=test \
|
||||
${@:2}
|
||||
else
|
||||
echo 'unknown argument 1'
|
||||
fi
|
||||
108
transformer-xl/tf/scripts/wt103_base_gpu.sh
Normal file
@@ -0,0 +1,108 @@
#!/bin/bash
|
||||
|
||||
# Data
|
||||
DATA_ROOT=../data/wikitext-103/
|
||||
|
||||
# Model
|
||||
DIV_VAL=1
|
||||
N_LAYER=16
|
||||
D_MODEL=410
|
||||
D_EMBED=410
|
||||
N_HEAD=10
|
||||
D_HEAD=41
|
||||
D_INNER=2100
|
||||
|
||||
# Training
|
||||
TGT_LEN=150
|
||||
MEM_LEN=150
|
||||
|
||||
BSZ=60
|
||||
NUM_CORE=4
|
||||
|
||||
# Testing
|
||||
TEST_TGT_LEN=64
|
||||
TEST_MEM_LEN=640
|
||||
TEST_CLAMP_LEN=400
|
||||
|
||||
TEST_BSZ=10
|
||||
TEST_NUM_CORE=1
|
||||
|
||||
|
||||
if [[ $1 == 'train_data' ]]; then
|
||||
python data_utils.py \
|
||||
--data_dir=${DATA_ROOT}/ \
|
||||
--dataset=wt103 \
|
||||
--tgt_len=${TGT_LEN} \
|
||||
--per_host_train_bsz=${BSZ} \
|
||||
--per_host_valid_bsz=${BSZ} \
|
||||
--num_passes=1 \
|
||||
--use_tpu=False \
|
||||
${@:2}
|
||||
elif [[ $1 == 'test_data' ]]; then
|
||||
python data_utils.py \
|
||||
--data_dir=${DATA_ROOT}/ \
|
||||
--dataset=wt103 \
|
||||
--tgt_len=${TEST_TGT_LEN} \
|
||||
--per_host_test_bsz=${TEST_BSZ} \
|
||||
--num_passes=1 \
|
||||
--use_tpu=False \
|
||||
${@:2}
|
||||
elif [[ $1 == 'train' ]]; then
|
||||
echo 'Run training...'
|
||||
python train_gpu.py \
|
||||
--data_dir=${DATA_ROOT}/tfrecords \
|
||||
--record_info_dir=${DATA_ROOT}/tfrecords/ \
|
||||
--corpus_info_path=${DATA_ROOT}/corpus-info.json \
|
||||
--model_dir=EXP-wt103 \
|
||||
--div_val=${DIV_VAL} \
|
||||
--untie_r=True \
|
||||
--proj_share_all_but_first=True \
|
||||
--n_layer=${N_LAYER} \
|
||||
--d_model=${D_MODEL} \
|
||||
--d_embed=${D_EMBED} \
|
||||
--n_head=${N_HEAD} \
|
||||
--d_head=${D_HEAD} \
|
||||
--d_inner=${D_INNER} \
|
||||
--dropout=0.1 \
|
||||
--dropatt=0.0 \
|
||||
--learning_rate=0.00025 \
|
||||
--warmup_steps=0 \
|
||||
--train_steps=400000 \
|
||||
--tgt_len=${TGT_LEN} \
|
||||
--mem_len=${MEM_LEN} \
|
||||
--train_batch_size=${BSZ} \
|
||||
--num_core_per_host=${NUM_CORE} \
|
||||
--iterations=200 \
|
||||
--save_steps=4000 \
|
||||
${@:2}
|
||||
elif [[ $1 == 'eval' ]]; then
|
||||
echo 'Run evaluation...'
|
||||
python train_gpu.py \
|
||||
--data_dir=${DATA_ROOT}/tfrecords \
|
||||
--record_info_dir=${DATA_ROOT}/tfrecords/ \
|
||||
--corpus_info_path=${DATA_ROOT}/corpus-info.json \
|
||||
--model_dir=EXP-wt103 \
|
||||
--div_val=${DIV_VAL} \
|
||||
--untie_r=True \
|
||||
--proj_share_all_but_first=True \
|
||||
--n_layer=${N_LAYER} \
|
||||
--d_model=${D_MODEL} \
|
||||
--d_embed=${D_EMBED} \
|
||||
--n_head=${N_HEAD} \
|
||||
--d_head=${D_HEAD} \
|
||||
--d_inner=${D_INNER} \
|
||||
--dropout=0.0 \
|
||||
--dropatt=0.0 \
|
||||
--tgt_len=${TEST_TGT_LEN} \
|
||||
--mem_len=${TEST_MEM_LEN} \
|
||||
--clamp_len=${TEST_CLAMP_LEN} \
|
||||
--same_length=True \
|
||||
--eval_batch_size=${TEST_BSZ} \
|
||||
--num_core_per_host=${TEST_NUM_CORE} \
|
||||
--do_train=False \
|
||||
--do_eval=True \
|
||||
--eval_split=test \
|
||||
${@:2}
|
||||
else
|
||||
echo 'unknown argument 1'
|
||||
fi
|
||||
134
transformer-xl/tf/scripts/wt103_large_tpu.sh
Normal file
@@ -0,0 +1,134 @@
#!/bin/bash
|
||||
|
||||
# Path
|
||||
LOCAL_DIR=../data/wikitext-103/
|
||||
GSDATA=
|
||||
GSEXP=
|
||||
|
||||
# TPU setting
|
||||
NUM_HOST=4
|
||||
NUM_CORE=16 # TPUv2 -> 8 | TPUv3 -> 16
|
||||
|
||||
TEST_NUM_HOST=1
|
||||
TEST_NUM_CORE=8 # TPUv2 -> 8 | TPUv3 -> 16
|
||||
|
||||
# Model
|
||||
DIV_VAL=4
|
||||
N_LAYER=18
|
||||
D_MODEL=1024
|
||||
D_EMBED=1024
|
||||
N_HEAD=16
|
||||
D_HEAD=64
|
||||
D_INNER=4096
|
||||
|
||||
# Training
|
||||
TGT_LEN=384
|
||||
MEM_LEN=384
|
||||
TRAIN_BSZ=128
|
||||
VALID_BSZ=128
|
||||
|
||||
# Testing
|
||||
TEST_TGT_LEN=128
|
||||
TEST_MEM_LEN=1600
|
||||
TEST_CLAMP_LEN=1000
|
||||
TEST_BSZ=8
|
||||
|
||||
if [[ $1 == 'train_data' ]]; then
|
||||
python data_utils.py \
|
||||
--data_dir=${LOCAL_DIR}/ \
|
||||
--dataset=wt103 \
|
||||
--tgt_len=${TGT_LEN} \
|
||||
--per_host_train_bsz=${TRAIN_BSZ} \
|
||||
--per_host_valid_bsz=${VALID_BSZ} \
|
||||
--num_core_per_host=${NUM_CORE} \
|
||||
--num_passes=10 \
|
||||
--use_tpu=True \
|
||||
${@:2}
|
||||
|
||||
SRC_PATTERN=train.bsz-${TRAIN_BSZ}.tlen-${TGT_LEN}.core-${NUM_CORE}*
|
||||
gsutil cp ${LOCAL_DIR}/tfrecords/${SRC_PATTERN} ${GSDATA}/wt103-tfrecords/
|
||||
|
||||
SRC_PATTERN=valid.bsz-${VALID_BSZ}.tlen-${TGT_LEN}.core-${NUM_CORE}*
|
||||
gsutil cp ${LOCAL_DIR}/tfrecords/${SRC_PATTERN} ${GSDATA}/wt103-tfrecords/
|
||||
|
||||
elif [[ $1 == 'test_data' ]]; then
|
||||
python data_utils.py \
|
||||
--data_dir=${LOCAL_DIR}/ \
|
||||
--dataset=wt103 \
|
||||
--tgt_len=${TEST_TGT_LEN} \
|
||||
--per_host_test_bsz=${TEST_BSZ} \
|
||||
--num_core_per_host=${TEST_NUM_CORE} \
|
||||
--num_passes=1 \
|
||||
--use_tpu=True \
|
||||
${@:2}
|
||||
|
||||
SRC_PATTERN=test.bsz-${TEST_BSZ}.tlen-${TEST_TGT_LEN}.core-${TEST_NUM_CORE}*
|
||||
gsutil cp ${LOCAL_DIR}/tfrecords/${SRC_PATTERN} ${GSDATA}/wt103-tfrecords/
|
||||
|
||||
elif [[ $1 == 'train' ]]; then
|
||||
echo 'Run training...'
|
||||
python train.py \
|
||||
--data_dir=${GSDATA}/wt103-tfrecords \
|
||||
--record_info_dir=${LOCAL_DIR}/tfrecords/ \
|
||||
--corpus_info_path=${LOCAL_DIR}/corpus-info.json \
|
||||
--model_dir=${GSEXP}/wt103 \
|
||||
--div_val=${DIV_VAL} \
|
||||
--untie_r=True \
|
||||
--proj_share_all_but_first=True \
|
||||
--proj_same_dim=True \
|
||||
--n_layer=${N_LAYER} \
|
||||
--d_model=${D_MODEL} \
|
||||
--d_embed=${D_EMBED} \
|
||||
--n_head=${N_HEAD} \
|
||||
--d_head=${D_HEAD} \
|
||||
--d_inner=${D_INNER} \
|
||||
--dropout=0.2 \
|
||||
--dropatt=0.2 \
|
||||
--init_std=0.005 \
|
||||
--learning_rate=0.00025 \
|
||||
--warmup_steps=16000 \
|
||||
--train_steps=4000000 \
|
||||
--tgt_len=${TGT_LEN} \
|
||||
--mem_len=${MEM_LEN} \
|
||||
--train_batch_size=${TRAIN_BSZ} \
|
||||
--num_hosts=${NUM_HOST} \
|
||||
--num_core_per_host=${NUM_CORE} \
|
||||
--iterations=1000 \
|
||||
--save_steps=10000 \
|
||||
--use_tpu=True \
|
||||
--do_eval=False \
|
||||
${@:2}
|
||||
|
||||
elif [[ $1 == 'eval' ]]; then
|
||||
echo 'Run evaluation...'
|
||||
python train.py \
|
||||
--data_dir=${GSDATA}/wt103-tfrecords \
|
||||
--record_info_dir=${LOCAL_DIR}/tfrecords/ \
|
||||
--corpus_info_path=${LOCAL_DIR}/corpus-info.json \
|
||||
--model_dir=${GSEXP}/wt103 \
|
||||
--div_val=${DIV_VAL} \
|
||||
--untie_r=True \
|
||||
--proj_share_all_but_first=True \
|
||||
--proj_same_dim=True \
|
||||
--n_layer=${N_LAYER} \
|
||||
--d_model=${D_MODEL} \
|
||||
--d_embed=${D_EMBED} \
|
||||
--n_head=${N_HEAD} \
|
||||
--d_head=${D_HEAD} \
|
||||
--d_inner=${D_INNER} \
|
||||
--tgt_len=${TEST_TGT_LEN} \
|
||||
--mem_len=${TEST_MEM_LEN} \
|
||||
--clamp_len=${TEST_CLAMP_LEN} \
|
||||
--same_length=True \
|
||||
--eval_batch_size=${TEST_BSZ} \
|
||||
--num_hosts=${TEST_NUM_HOST} \
|
||||
--num_core_per_host=${TEST_NUM_CORE} \
|
||||
--use_tpu=True \
|
||||
--do_train=False \
|
||||
--do_eval_only=True \
|
||||
--eval_split=test \
|
||||
${@:2}
|
||||
|
||||
else
|
||||
echo 'unknown argument 1'
|
||||
fi
|
||||
87
transformer-xl/tf/sota/download.sh
Normal file
@@ -0,0 +1,87 @@
#!/bin/bash
|
||||
|
||||
URL=http://curtis.ml.cmu.edu/datasets/pretrained_xl
|
||||
|
||||
DATA_ROOT=./
|
||||
|
||||
function download () {
|
||||
fileurl=${1}
|
||||
filename=${fileurl##*/}
|
||||
if [ ! -f ${filename} ]; then
|
||||
echo ">>> Download '${filename}' from '${fileurl}'."
|
||||
wget --quiet ${fileurl}
|
||||
else
|
||||
echo "*** File '${filename}' exists. Skip."
|
||||
fi
|
||||
}
|
||||
|
||||
cd $DATA_ROOT
|
||||
mkdir -p pretrained_xl && cd pretrained_xl
|
||||
|
||||
# enwik8
|
||||
mkdir -p tf_enwik8 && cd tf_enwik8
|
||||
|
||||
mkdir -p data && cd data
|
||||
download ${URL}/tf_enwiki8/data/cache.pkl
|
||||
download ${URL}/tf_enwiki8/data/corpus-info.json
|
||||
cd ..
|
||||
|
||||
mkdir -p model && cd model
|
||||
download ${URL}/tf_enwiki8/model/checkpoint
|
||||
download ${URL}/tf_enwiki8/model/model.ckpt-0.data-00000-of-00001
|
||||
download ${URL}/tf_enwiki8/model/model.ckpt-0.index
|
||||
download ${URL}/tf_enwiki8/model/model.ckpt-0.meta
|
||||
cd ..
|
||||
|
||||
cd ..
|
||||
|
||||
# text8
|
||||
mkdir -p tf_text8 && cd tf_text8
|
||||
|
||||
mkdir -p data && cd data
|
||||
download ${URL}/tf_text8/data/cache.pkl
|
||||
download ${URL}/tf_text8/data/corpus-info.json
|
||||
cd ..
|
||||
|
||||
mkdir -p model && cd model
|
||||
download ${URL}/tf_text8/model/checkpoint
|
||||
download ${URL}/tf_text8/model/model.ckpt-0.data-00000-of-00001
|
||||
download ${URL}/tf_text8/model/model.ckpt-0.index
|
||||
download ${URL}/tf_text8/model/model.ckpt-0.meta
|
||||
cd ..
|
||||
|
||||
cd ..
|
||||
|
||||
# wt103
|
||||
mkdir -p tf_wt103 && cd tf_wt103
|
||||
|
||||
mkdir -p data && cd data
|
||||
download ${URL}/tf_wt103/data/cache.pkl
|
||||
download ${URL}/tf_wt103/data/corpus-info.json
|
||||
cd ..
|
||||
|
||||
mkdir -p model && cd model
|
||||
download ${URL}/tf_wt103/model/checkpoint
|
||||
download ${URL}/tf_wt103/model/model.ckpt-0.data-00000-of-00001
|
||||
download ${URL}/tf_wt103/model/model.ckpt-0.index
|
||||
download ${URL}/tf_wt103/model/model.ckpt-0.meta
|
||||
cd ..
|
||||
|
||||
cd ..
|
||||
|
||||
# lm1b
|
||||
mkdir -p tf_lm1b && cd tf_lm1b
|
||||
|
||||
mkdir -p data && cd data
|
||||
download ${URL}/tf_lm1b/data/cache.pkl
|
||||
download ${URL}/tf_lm1b/data/corpus-info.json
|
||||
cd ..
|
||||
|
||||
mkdir -p model && cd model
|
||||
download ${URL}/tf_lm1b/model/checkpoint
|
||||
download ${URL}/tf_lm1b/model/model.ckpt-1191000.data-00000-of-00001
|
||||
download ${URL}/tf_lm1b/model/model.ckpt-1191000.index
|
||||
download ${URL}/tf_lm1b/model/model.ckpt-1191000.meta
|
||||
cd ..
|
||||
|
||||
cd ..
|
||||
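After `sota/download.sh` finishes, a quick way to confirm that each pretrained model directory contains a usable checkpoint is to query `tf.train.latest_checkpoint`. A small sketch, assuming the default `DATA_ROOT=./` and the directory layout created above:

```python
import os
import tensorflow as tf

# Model directories created by sota/download.sh under DATA_ROOT=./
for name in ["tf_enwik8", "tf_text8", "tf_wt103", "tf_lm1b"]:
    model_dir = os.path.join("pretrained_xl", name, "model")
    # Prints the checkpoint prefix (e.g. .../model.ckpt-0) or None if the download failed.
    print(name, tf.train.latest_checkpoint(model_dir))
```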
58
transformer-xl/tf/sota/enwik8.sh
Normal file
@@ -0,0 +1,58 @@
#!/bin/bash
|
||||
|
||||
# Data
|
||||
DATA_ROOT=./
|
||||
DATA_DIR=${DATA_ROOT}/pretrained_xl/tf_enwik8/data
|
||||
MODEL_DIR=${DATA_ROOT}/pretrained_xl/tf_enwik8/model
|
||||
|
||||
# Model
|
||||
N_LAYER=24
|
||||
D_MODEL=1024
|
||||
D_EMBED=1024
|
||||
N_HEAD=8
|
||||
D_HEAD=128
|
||||
D_INNER=3072
|
||||
|
||||
# Testing
|
||||
TEST_TGT_LEN=128
|
||||
TEST_MEM_LEN=3800
|
||||
TEST_CLAMP_LEN=1000
|
||||
|
||||
TEST_CKPT_PATH=${MODEL_DIR}/model.ckpt-0
|
||||
TEST_BSZ=16
|
||||
TEST_NUM_CORE=2
|
||||
|
||||
|
||||
echo 'Preprocess test set...'
|
||||
python data_utils.py \
|
||||
--data_dir=${DATA_DIR}/ \
|
||||
--dataset=enwik8 \
|
||||
--tgt_len=${TEST_TGT_LEN} \
|
||||
--per_host_test_bsz=${TEST_BSZ} \
|
||||
--num_passes=1 \
|
||||
--use_tpu=False
|
||||
|
||||
echo 'Run evaluation on test set...'
|
||||
python train_gpu.py \
|
||||
--data_dir=${DATA_DIR}/tfrecords \
|
||||
--record_info_dir=${DATA_DIR}/tfrecords/ \
|
||||
--corpus_info_path=${DATA_DIR}/corpus-info.json \
|
||||
--eval_ckpt_path=${TEST_CKPT_PATH} \
|
||||
--model_dir=EXP-enwik8 \
|
||||
--n_layer=${N_LAYER} \
|
||||
--d_model=${D_MODEL} \
|
||||
--d_embed=${D_EMBED} \
|
||||
--n_head=${N_HEAD} \
|
||||
--d_head=${D_HEAD} \
|
||||
--d_inner=${D_INNER} \
|
||||
--dropout=0.0 \
|
||||
--dropatt=0.0 \
|
||||
--tgt_len=${TEST_TGT_LEN} \
|
||||
--mem_len=${TEST_MEM_LEN} \
|
||||
--clamp_len=${TEST_CLAMP_LEN} \
|
||||
--same_length=True \
|
||||
--eval_batch_size=${TEST_BSZ} \
|
||||
--num_core_per_host=${TEST_NUM_CORE} \
|
||||
--do_train=False \
|
||||
--do_eval=True \
|
||||
--eval_split=test
|
||||
63
transformer-xl/tf/sota/lm1b.sh
Normal file
@@ -0,0 +1,63 @@
#!/bin/bash
|
||||
|
||||
# Data
|
||||
DATA_ROOT=./
|
||||
DATA_DIR=${DATA_ROOT}/pretrained_xl/tf_lm1b/data
|
||||
MODEL_DIR=${DATA_ROOT}/pretrained_xl/tf_lm1b/model
|
||||
|
||||
# Model
|
||||
DIV_VAL=4
|
||||
N_LAYER=24
|
||||
D_MODEL=1280
|
||||
D_EMBED=1280
|
||||
N_HEAD=16
|
||||
D_HEAD=80
|
||||
D_INNER=8192
|
||||
|
||||
# Testing
|
||||
TEST_TGT_LEN=32
|
||||
TEST_MEM_LEN=128
|
||||
TEST_CLAMP_LEN=-1
|
||||
|
||||
TEST_CKPT_PATH=${MODEL_DIR}/model.ckpt-1191000
|
||||
TEST_BSZ=16
|
||||
TEST_NUM_CORE=1
|
||||
|
||||
|
||||
echo 'Preprocess test set...'
|
||||
python data_utils.py \
|
||||
--data_dir=${DATA_DIR}/ \
|
||||
--dataset=lm1b \
|
||||
--tgt_len=${TEST_TGT_LEN} \
|
||||
--per_host_test_bsz=${TEST_BSZ} \
|
||||
--num_passes=1 \
|
||||
--use_tpu=False
|
||||
|
||||
echo 'Run evaluation on test set...'
|
||||
python train_gpu.py \
|
||||
--data_dir=${DATA_DIR}/tfrecords \
|
||||
--record_info_dir=${DATA_DIR}/tfrecords/ \
|
||||
--corpus_info_path=${DATA_DIR}/corpus-info.json \
|
||||
--eval_ckpt_path=${TEST_CKPT_PATH} \
|
||||
--model_dir=EXP-lm1b \
|
||||
--div_val=${DIV_VAL} \
|
||||
--untie_r=True \
|
||||
--proj_share_all_but_first=False \
|
||||
--proj_same_dim=False \
|
||||
--n_layer=${N_LAYER} \
|
||||
--d_model=${D_MODEL} \
|
||||
--d_embed=${D_EMBED} \
|
||||
--n_head=${N_HEAD} \
|
||||
--d_head=${D_HEAD} \
|
||||
--d_inner=${D_INNER} \
|
||||
--dropout=0.0 \
|
||||
--dropatt=0.0 \
|
||||
--tgt_len=${TEST_TGT_LEN} \
|
||||
--mem_len=${TEST_MEM_LEN} \
|
||||
--clamp_len=${TEST_CLAMP_LEN} \
|
||||
--same_length=True \
|
||||
--eval_batch_size=${TEST_BSZ} \
|
||||
--num_core_per_host=${TEST_NUM_CORE} \
|
||||
--do_train=False \
|
||||
--do_eval=True \
|
||||
--eval_split=test
|
||||
58
transformer-xl/tf/sota/text8.sh
Normal file
@@ -0,0 +1,58 @@
#!/bin/bash
|
||||
|
||||
# Data
|
||||
DATA_ROOT=./
|
||||
DATA_DIR=${DATA_ROOT}/pretrained_xl/tf_text8/data
|
||||
MODEL_DIR=${DATA_ROOT}/pretrained_xl/tf_text8/model
|
||||
|
||||
# Model
|
||||
N_LAYER=24
|
||||
D_MODEL=1024
|
||||
D_EMBED=1024
|
||||
N_HEAD=8
|
||||
D_HEAD=128
|
||||
D_INNER=3072
|
||||
|
||||
# Testing
|
||||
TEST_TGT_LEN=128
|
||||
TEST_MEM_LEN=3800
|
||||
TEST_CLAMP_LEN=1000
|
||||
|
||||
TEST_CKPT_PATH=${MODEL_DIR}/model.ckpt-0
|
||||
TEST_BSZ=16
|
||||
TEST_NUM_CORE=2
|
||||
|
||||
|
||||
echo 'Preprocess test set...'
|
||||
python data_utils.py \
|
||||
--data_dir=${DATA_DIR}/ \
|
||||
--dataset=text8 \
|
||||
--tgt_len=${TEST_TGT_LEN} \
|
||||
--per_host_test_bsz=${TEST_BSZ} \
|
||||
--num_passes=1 \
|
||||
--use_tpu=False
|
||||
|
||||
echo 'Run evaluation on test set...'
|
||||
python train_gpu.py \
|
||||
--data_dir=${DATA_DIR}/tfrecords \
|
||||
--record_info_dir=${DATA_DIR}/tfrecords/ \
|
||||
--corpus_info_path=${DATA_DIR}/corpus-info.json \
|
||||
--eval_ckpt_path=${TEST_CKPT_PATH} \
|
||||
--model_dir=EXP-text8 \
|
||||
--n_layer=${N_LAYER} \
|
||||
--d_model=${D_MODEL} \
|
||||
--d_embed=${D_EMBED} \
|
||||
--n_head=${N_HEAD} \
|
||||
--d_head=${D_HEAD} \
|
||||
--d_inner=${D_INNER} \
|
||||
--dropout=0.0 \
|
||||
--dropatt=0.0 \
|
||||
--tgt_len=${TEST_TGT_LEN} \
|
||||
--mem_len=${TEST_MEM_LEN} \
|
||||
--clamp_len=${TEST_CLAMP_LEN} \
|
||||
--same_length=True \
|
||||
--eval_batch_size=${TEST_BSZ} \
|
||||
--num_core_per_host=${TEST_NUM_CORE} \
|
||||
--do_train=False \
|
||||
--do_eval=True \
|
||||
--eval_split=test
|
||||
71
transformer-xl/tf/sota/wt103.sh
Normal file
@@ -0,0 +1,71 @@
#!/bin/bash
|
||||
|
||||
# Data
|
||||
DATA_ROOT=./
|
||||
DATA_DIR=${DATA_ROOT}/pretrained_xl/tf_wt103/data
|
||||
MODEL_DIR=${DATA_ROOT}/pretrained_xl/tf_wt103/model
|
||||
|
||||
# Model
|
||||
DIV_VAL=4
|
||||
N_LAYER=18
|
||||
D_MODEL=1024
|
||||
D_EMBED=1024
|
||||
N_HEAD=16
|
||||
D_HEAD=64
|
||||
D_INNER=4096
|
||||
|
||||
# Training
|
||||
TGT_LEN=256
|
||||
MEM_LEN=256
|
||||
|
||||
BSZ=16
|
||||
NUM_CORE=2
|
||||
|
||||
# Testing
|
||||
TEST_TGT_LEN=128
|
||||
TEST_MEM_LEN=1600
|
||||
TEST_CLAMP_LEN=1000
|
||||
|
||||
TEST_CKPT_PATH=${MODEL_DIR}/model.ckpt-0
|
||||
TEST_BSZ=16
|
||||
TEST_NUM_CORE=1
|
||||
|
||||
|
||||
echo 'Preprocess test set...'
|
||||
python data_utils.py \
|
||||
--data_dir=${DATA_DIR}/ \
|
||||
--dataset=wt103 \
|
||||
--tgt_len=${TEST_TGT_LEN} \
|
||||
--per_host_test_bsz=${TEST_BSZ} \
|
||||
--num_passes=1 \
|
||||
--use_tpu=False
|
||||
|
||||
|
||||
echo 'Run evaluation on test set...'
|
||||
python train_gpu.py \
|
||||
--data_dir=${DATA_DIR}/tfrecords \
|
||||
--record_info_dir=${DATA_DIR}/tfrecords/ \
|
||||
--corpus_info_path=${DATA_DIR}/corpus-info.json \
|
||||
--eval_ckpt_path=${TEST_CKPT_PATH} \
|
||||
--model_dir=EXP-wt103 \
|
||||
--div_val=${DIV_VAL} \
|
||||
--untie_r=True \
|
||||
--proj_share_all_but_first=True \
|
||||
--n_layer=${N_LAYER} \
|
||||
--d_model=${D_MODEL} \
|
||||
--d_embed=${D_EMBED} \
|
||||
--n_head=${N_HEAD} \
|
||||
--d_head=${D_HEAD} \
|
||||
--d_inner=${D_INNER} \
|
||||
--dropout=0.0 \
|
||||
--dropatt=0.0 \
|
||||
--tgt_len=${TEST_TGT_LEN} \
|
||||
--mem_len=${TEST_MEM_LEN} \
|
||||
--clamp_len=${TEST_CLAMP_LEN} \
|
||||
--same_length=True \
|
||||
--eval_batch_size=${TEST_BSZ} \
|
||||
--num_core_per_host=${TEST_NUM_CORE} \
|
||||
--do_train=False \
|
||||
--do_eval=True \
|
||||
--eval_split=test
|
||||
|
||||
3519
transformer-xl/tf/tpu_estimator.py
Normal file
File diff suppressed because it is too large
462
transformer-xl/tf/train.py
Normal file
@@ -0,0 +1,462 @@
from __future__ import absolute_import
|
||||
from __future__ import division
|
||||
from __future__ import print_function
|
||||
|
||||
import math
|
||||
import time
|
||||
|
||||
from absl import flags
|
||||
import absl.logging as _logging # pylint: disable=unused-import
|
||||
|
||||
from six.moves import xrange # pylint: disable=redefined-builtin
|
||||
|
||||
import tensorflow as tf
|
||||
from tensorflow.gfile import Exists as exists
|
||||
import model
|
||||
import data_utils
|
||||
import tpu_estimator
|
||||
|
||||
import numpy as np
|
||||
from time import sleep
|
||||
|
||||
|
||||
# TPU parameters
|
||||
flags.DEFINE_string("master", default=None,
|
||||
help="master")
|
||||
flags.DEFINE_string("tpu", default=None,
|
||||
help="The Cloud TPU to use for training. This should be either the name "
|
||||
"used when creating the Cloud TPU, or a grpc://ip.address.of.tpu:8470 url.")
|
||||
flags.DEFINE_string("gcp_project", default=None,
|
||||
help="Project name for the Cloud TPU-enabled project. If not specified, "
|
||||
"we will attempt to automatically detect the GCE project from metadata.")
|
||||
flags.DEFINE_string("tpu_zone",default=None,
|
||||
help="GCE zone where the Cloud TPU is located in. If not specified, we "
|
||||
"will attempt to automatically detect the GCE project from metadata.")
|
||||
flags.DEFINE_bool("use_tpu", default=True,
|
||||
help="Use TPUs rather than plain CPUs.")
|
||||
flags.DEFINE_integer("num_hosts", default=1,
|
||||
help="number of TPU hosts")
|
||||
flags.DEFINE_integer("num_core_per_host", default=8,
|
||||
help="number of cores per host")
|
||||
|
||||
# Experiment (data/checkpoint/directory) parameters
|
||||
flags.DEFINE_string("data_dir", default="",
|
||||
help="Path to tf-records directory.")
|
||||
flags.DEFINE_string("record_info_dir", default="",
|
||||
help="Path to local directory containing filenames.txt.")
|
||||
flags.DEFINE_string("corpus_info_path", default="",
|
||||
help="Path to corpus-info.json file.")
|
||||
flags.DEFINE_string("model_dir", default=None,
|
||||
help="Estimator model_dir.")
|
||||
flags.DEFINE_bool("do_eval", default=False,
|
||||
help="Whether to run eval on the dev set.")
|
||||
flags.DEFINE_bool("track_mean", default=True,
|
||||
help="Trace mean loss during training.")
|
||||
flags.DEFINE_string("eval_ckpt_path", None,
|
||||
help="Checkpoint path for evaluation."
|
||||
"If set, model_dir will be ignored."
|
||||
"If unset, will use the latest ckpt in model_dir.")
|
||||
flags.DEFINE_string("warm_start_path", None,
|
||||
help="Checkpoint path for warm start."
|
||||
"If set, will clear Adam states."
|
||||
"Note that the new model_dir should be different"
|
||||
" from warm_start_path.")
|
||||
|
||||
# Optimization parameters
|
||||
flags.DEFINE_float("learning_rate", default=2.5e-4,
|
||||
help="Maximum learning rate.")
|
||||
flags.DEFINE_float("clip", default=0.25,
|
||||
help="Gradient clipping value.")
|
||||
# for cosine decay
|
||||
flags.DEFINE_float("min_lr_ratio", default=0.01,
|
||||
help="Minimum ratio learning rate.")
|
||||
flags.DEFINE_integer("warmup_steps", default=0,
|
||||
help="Number of steps for linear lr warmup.")
|
||||
|
||||
# Training parameters
|
||||
flags.DEFINE_integer("train_batch_size", default=60,
|
||||
help="Size of train batch.")
|
||||
flags.DEFINE_integer("eval_batch_size", default=60,
|
||||
help="Size of valid batch.")
|
||||
flags.DEFINE_integer("train_steps", default=100000,
|
||||
help="Total number of training steps.")
|
||||
flags.DEFINE_integer("iterations", default=500,
|
||||
help="Number of iterations per repeat loop.")
|
||||
flags.DEFINE_integer("save_steps", default=10000,
|
||||
help="number of steps for model checkpointing.")
|
||||
|
||||
# Evaluation parameters
|
||||
flags.DEFINE_integer("max_eval_batch", default=-1,
|
||||
help="Set -1 to turn off. Only used in test mode.")
|
||||
flags.DEFINE_bool("do_eval_only", default=False,
|
||||
help="Run evaluation only.")
|
||||
flags.DEFINE_integer("start_eval_steps", default=10000,
|
||||
help="Which checkpoint to start with in `do_eval_only` mode.")
|
||||
flags.DEFINE_string("eval_split", "valid",
|
||||
help="Which data split to evaluate.")
|
||||
|
||||
# Model parameters
|
||||
flags.DEFINE_integer("tgt_len", default=70,
|
||||
help="Number of steps to predict")
|
||||
flags.DEFINE_integer("mem_len", default=70,
|
||||
help="Number of steps to cache")
|
||||
flags.DEFINE_bool("same_length", default=False,
|
||||
help="Same length attention")
|
||||
flags.DEFINE_integer("clamp_len", default=-1,
|
||||
help="Clamp length")
|
||||
|
||||
flags.DEFINE_integer("n_layer", default=6,
|
||||
help="Number of layers.")
|
||||
flags.DEFINE_integer("d_model", default=500,
|
||||
help="Dimension of the model.")
|
||||
flags.DEFINE_integer("d_embed", default=500,
|
||||
help="Dimension of the embeddings.")
|
||||
flags.DEFINE_integer("n_head", default=10,
|
||||
help="Number of attention heads.")
|
||||
flags.DEFINE_integer("d_head", default=50,
|
||||
help="Dimension of each attention head.")
|
||||
flags.DEFINE_integer("d_inner", default=1000,
|
||||
help="Dimension of inner hidden size in positionwise feed-forward.")
|
||||
flags.DEFINE_float("dropout", default=0.1,
|
||||
help="Dropout rate.")
|
||||
flags.DEFINE_float("dropatt", default=0.1,
|
||||
help="Attention dropout rate.")
|
||||
flags.DEFINE_bool("untie_r", default=False,
|
||||
help="untie r_w_bias and r_r_bias")
|
||||
|
||||
# Adaptive Softmax / Embedding
|
||||
flags.DEFINE_bool("tie_weight", default=True,
|
||||
help="Tie embedding and softmax weight.")
|
||||
flags.DEFINE_integer("div_val", default=1,
|
||||
help="Divide the embedding size by this val for each bin")
|
||||
flags.DEFINE_bool("proj_share_all_but_first", default=False,
|
||||
help="True to share all but first projs, False not to share.")
|
||||
flags.DEFINE_bool("proj_same_dim", default=True,
|
||||
help="Project the bin with the same dimension.")
|
||||
|
||||
# Parameter initialization
|
||||
flags.DEFINE_enum("init", default="normal",
|
||||
enum_values=["normal", "uniform"],
|
||||
help="Initialization method.")
|
||||
flags.DEFINE_float("init_std", default=0.02,
|
||||
help="Initialization std when init is normal.")
|
||||
flags.DEFINE_float("proj_init_std", default=0.01,
|
||||
help="Initialization std for embedding projection.")
|
||||
flags.DEFINE_float("init_range", default=0.1,
|
||||
help="Initialization std when init is uniform.")
|
||||
|
||||
|
||||
FLAGS = flags.FLAGS
|
||||
|
||||
def metric_fn(loss):
|
||||
"""Evaluation metric Fn which runs on CPU."""
|
||||
perplexity = tf.exp(tf.reduce_mean(loss))
|
||||
bpc = tf.reduce_mean(loss) / tf.constant(math.log(2))
|
||||
return {
|
||||
"perplexity": tf.metrics.mean(perplexity),
|
||||
"bpc": tf.metrics.mean(bpc),
|
||||
}
|
||||
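# Note: both metrics report the same mean negative log-likelihood on different
# scales: perplexity = exp(mean_loss) (base e), while bpc = mean_loss / ln(2)
# (bits), so bpc == log2(perplexity). For example, a mean loss of 0.6931
# nats per character gives perplexity ~= 2.0 and bpc ~= 1.0.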
|
||||
|
||||
def get_model_fn(n_token, cutoffs, train_bin_sizes, eval_bin_sizes):
|
||||
def model_fn(features, labels, mode, params):
|
||||
is_training = (mode == tf.estimator.ModeKeys.TRAIN)
|
||||
|
||||
|
||||
batch_size = params["batch_size"]
|
||||
|
||||
mems = params["cache"]
|
||||
inp = tf.transpose(features["inputs"], [1, 0])
|
||||
tgt = tf.transpose(features["labels"], [1, 0])
|
||||
|
||||
bin_sizes = train_bin_sizes if is_training else eval_bin_sizes
|
||||
if bin_sizes:
|
||||
inp_perms = [tf.transpose(features["inp_mask"], [1, 0])]
|
||||
tgt_perms = [tf.transpose(features["tgt_mask"], [1, 0])]
|
||||
|
||||
head_tgt = tf.transpose(features["head_labels"], [1, 0])
|
||||
|
||||
for b in range(len(bin_sizes)):
|
||||
inp_perm = tf.transpose(features["inp_perm_{}".format(b)], [1, 0, 2])
|
||||
tgt_perm = tf.transpose(features["tgt_perm_{}".format(b)], [1, 0, 2])
|
||||
|
||||
inp_perms.append(inp_perm)
|
||||
tgt_perms.append(tgt_perm)
|
||||
else:
|
||||
inp_perms, tgt_perms, head_tgt = None, None, None
|
||||
|
||||
if FLAGS.init == "uniform":
|
||||
initializer = tf.initializers.random_uniform(
|
||||
minval=-FLAGS.init_range,
|
||||
maxval=FLAGS.init_range,
|
||||
seed=None)
|
||||
elif FLAGS.init == "normal":
|
||||
initializer = tf.initializers.random_normal(
|
||||
stddev=FLAGS.init_std,
|
||||
seed=None)
|
||||
proj_initializer = tf.initializers.random_normal(
|
||||
stddev=FLAGS.proj_init_std,
|
||||
seed=None)
|
||||
|
||||
tie_projs = [False for _ in range(len(cutoffs) + 1)]
|
||||
if FLAGS.proj_share_all_but_first:
|
||||
for i in range(1, len(tie_projs)):
|
||||
tie_projs[i] = True
|
||||
|
||||
tf.logging.info("Vocab size : {}".format(n_token))
|
||||
tf.logging.info("Batch size : {}".format(batch_size))
|
||||
|
||||
loss, new_mems = model.transformer(
|
||||
dec_inp=inp,
|
||||
target=tgt,
|
||||
mems=mems,
|
||||
n_token=n_token,
|
||||
n_layer=FLAGS.n_layer,
|
||||
d_model=FLAGS.d_model,
|
||||
d_embed=FLAGS.d_embed,
|
||||
n_head=FLAGS.n_head,
|
||||
d_head=FLAGS.d_head,
|
||||
d_inner=FLAGS.d_inner,
|
||||
dropout=FLAGS.dropout,
|
||||
dropatt=FLAGS.dropatt,
|
||||
initializer=initializer,
|
||||
is_training=is_training,
|
||||
mem_len=FLAGS.mem_len,
|
||||
cutoffs=cutoffs,
|
||||
div_val=FLAGS.div_val,
|
||||
tie_projs=tie_projs,
|
||||
input_perms=inp_perms,
|
||||
target_perms=tgt_perms,
|
||||
head_target=head_tgt,
|
||||
same_length=FLAGS.same_length,
|
||||
clamp_len=FLAGS.clamp_len,
|
||||
use_tpu=FLAGS.use_tpu,
|
||||
untie_r=FLAGS.untie_r,
|
||||
proj_same_dim=FLAGS.proj_same_dim)
|
||||
|
||||
total_loss = tf.reduce_mean(loss)
|
||||
|
||||
if mode == tf.estimator.ModeKeys.EVAL:
|
||||
if FLAGS.use_tpu:
|
||||
with tf.colocate_with(total_loss):
|
||||
total_loss = tf.contrib.tpu.cross_replica_sum(total_loss) \
|
||||
/ FLAGS.num_hosts / FLAGS.num_core_per_host
|
||||
metric_loss = tf.tile(tf.reshape(total_loss, [1, 1]), [batch_size, 1])
|
||||
eval_spec = tf.contrib.tpu.TPUEstimatorSpec(
|
||||
mode=mode,
|
||||
loss=total_loss,
|
||||
eval_metrics=(metric_fn, [metric_loss]))
|
||||
|
||||
eval_spec.cache = new_mems
|
||||
|
||||
return eval_spec
|
||||
|
||||
# Configuring the optimization step.
|
||||
global_step = tf.train.get_global_step()
|
||||
|
||||
# increase the learning rate linearly
|
||||
if FLAGS.warmup_steps > 0:
|
||||
warmup_lr = tf.to_float(global_step) / tf.to_float(FLAGS.warmup_steps) \
|
||||
* FLAGS.learning_rate
|
||||
else:
|
||||
warmup_lr = 0.0
|
||||
|
||||
# number of parameters
|
||||
num_params = np.sum([np.prod(v.shape) for v in tf.trainable_variables()])
|
||||
tf.logging.info("#params: {}".format(num_params))
|
||||
|
||||
# format_str = '{{:<{0}s}}\t{{}}'.format(
|
||||
# max([len(v.name) for v in tf.trainable_variables()]))
|
||||
# for v in tf.trainable_variables():
|
||||
# tf.logging.info(format_str.format(v.name, v.get_shape()))
|
||||
|
||||
|
||||
# decay the learning rate using the cosine schedule
|
||||
decay_lr = tf.train.cosine_decay(
|
||||
FLAGS.learning_rate,
|
||||
global_step=global_step-FLAGS.warmup_steps,
|
||||
decay_steps=FLAGS.train_steps-FLAGS.warmup_steps,
|
||||
alpha=FLAGS.min_lr_ratio)
|
||||
|
||||
learning_rate = tf.where(global_step < FLAGS.warmup_steps,
|
||||
warmup_lr, decay_lr)
|
||||
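# Net effect: the learning rate ramps linearly from 0 to FLAGS.learning_rate
# over the first warmup_steps steps, then follows a cosine decay down to
# FLAGS.learning_rate * FLAGS.min_lr_ratio at train_steps.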
|
||||
if FLAGS.use_tpu:
|
||||
optimizer = tf.contrib.tpu.CrossShardOptimizer(
|
||||
tf.train.AdamOptimizer(learning_rate=learning_rate))
|
||||
# Alternatively, a plain tf.train.GradientDescentOptimizer could be used here.
|
||||
else:
|
||||
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
|
||||
|
||||
grads_and_vars = optimizer.compute_gradients(total_loss)
|
||||
gradients, variables = zip(*grads_and_vars)
|
||||
clipped, _ = tf.clip_by_global_norm(gradients, FLAGS.clip)
|
||||
train_op = optimizer.apply_gradients(
|
||||
zip(clipped, variables), global_step=tf.train.get_global_step())
|
||||
|
||||
# Constructing TPUEstimatorSpec with cache.
|
||||
train_spec = tf.contrib.tpu.TPUEstimatorSpec(
|
||||
mode=mode, loss=total_loss, train_op=train_op)
|
||||
|
||||
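# If the configured memory is shorter than the segment length, truncate each
# cached layer state to mem_len steps before attaching it to the spec below.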
if FLAGS.mem_len < FLAGS.tgt_len:
|
||||
new_mems = [mem_t[:FLAGS.mem_len] for mem_t in new_mems]
|
||||
train_spec.cache = new_mems
|
||||
|
||||
return train_spec
|
||||
|
||||
return model_fn
|
||||
|
||||
|
||||
def get_cache_fn(mem_len):
|
||||
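# Builds the initial (all-zero) memory for each of the n_layer layers; each
# entry has shape [mem_len, batch_size, d_model] (or a size-0 placeholder
# when mem_len == 0).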
|
||||
def cache_fn(batch_size):
|
||||
mems = []
|
||||
for l in xrange(FLAGS.n_layer):
|
||||
if mem_len > 0:
|
||||
mems.append(
|
||||
tf.zeros([mem_len, batch_size, FLAGS.d_model], dtype=tf.float32))
|
||||
else:
|
||||
mems.append(tf.zeros([mem_len], dtype=tf.float32))
|
||||
|
||||
return mems
|
||||
|
||||
return cache_fn
|
||||
|
||||
|
||||
def main(unused_argv):
|
||||
del unused_argv # Unused
|
||||
|
||||
tf.logging.set_verbosity(tf.logging.INFO)
|
||||
|
||||
# Get corpus info
|
||||
corpus_info = data_utils.get_corpus_info(FLAGS.corpus_info_path)
|
||||
n_token = corpus_info["vocab_size"]
|
||||
cutoffs = corpus_info["cutoffs"][1:-1]
|
||||
|
||||
if FLAGS.save_steps == 0:
|
||||
FLAGS.save_steps = None
|
||||
|
||||
if not FLAGS.do_eval_only:
|
||||
# Get train input function
|
||||
train_input_fn, train_record_info = data_utils.get_input_fn(
|
||||
record_info_dir=FLAGS.record_info_dir,
|
||||
split="train",
|
||||
per_host_bsz=FLAGS.train_batch_size // FLAGS.num_hosts,
|
||||
tgt_len=FLAGS.tgt_len,
|
||||
num_core_per_host=FLAGS.num_core_per_host,
|
||||
num_hosts=FLAGS.num_hosts,
|
||||
use_tpu=FLAGS.use_tpu)
|
||||
train_bin_sizes = train_record_info["bin_sizes"]
|
||||
num_train_batch = train_record_info["num_batch"]
|
||||
|
||||
# Get train cache function
|
||||
train_cache_fn = get_cache_fn(FLAGS.mem_len)
|
||||
else:
|
||||
train_bin_sizes = []
|
||||
num_train_batch = None
|
||||
train_cache_fn = None
|
||||
|
||||
if FLAGS.do_eval or FLAGS.do_eval_only:
|
||||
assert FLAGS.num_hosts == 1
|
||||
# Get eval input function
|
||||
eval_input_fn, eval_record_info = data_utils.get_input_fn(
|
||||
record_info_dir=FLAGS.record_info_dir,
|
||||
split=FLAGS.eval_split,
|
||||
per_host_bsz=FLAGS.eval_batch_size // FLAGS.num_hosts,
|
||||
tgt_len=FLAGS.tgt_len,
|
||||
num_core_per_host=FLAGS.num_core_per_host,
|
||||
num_hosts=FLAGS.num_hosts,
|
||||
use_tpu=FLAGS.use_tpu)
|
||||
eval_bin_sizes = eval_record_info["bin_sizes"]
|
||||
num_eval_batch = eval_record_info["num_batch"]
|
||||
|
||||
if FLAGS.max_eval_batch > 0:
|
||||
num_eval_batch = min(FLAGS.max_eval_batch, num_eval_batch)
|
||||
|
||||
# Get eval cache function
|
||||
eval_cache_fn = get_cache_fn(FLAGS.mem_len)
|
||||
model_fn = get_model_fn(n_token, cutoffs, train_bin_sizes, eval_bin_sizes)
|
||||
else:
|
||||
eval_cache_fn = None
|
||||
model_fn = get_model_fn(n_token, cutoffs, train_bin_sizes, [])
|
||||
|
||||
##### Create estimator
|
||||
# TPU Configuration
|
||||
tpu_cluster_resolver = tf.contrib.cluster_resolver.TPUClusterResolver(
|
||||
FLAGS.tpu, zone=FLAGS.tpu_zone, project=FLAGS.gcp_project)
|
||||
|
||||
per_host_input = tf.contrib.tpu.InputPipelineConfig.PER_HOST_V2
|
||||
run_config = tf.contrib.tpu.RunConfig(
|
||||
cluster=tpu_cluster_resolver,
|
||||
model_dir=FLAGS.model_dir,
|
||||
session_config=tf.ConfigProto(
|
||||
allow_soft_placement=True, log_device_placement=True),
|
||||
tpu_config=tf.contrib.tpu.TPUConfig(
|
||||
iterations_per_loop=FLAGS.iterations,
|
||||
num_shards=FLAGS.num_core_per_host * FLAGS.num_hosts,
|
||||
per_host_input_for_training=per_host_input),
|
||||
keep_checkpoint_max=100000, # effectively save all checkpoints
|
||||
save_checkpoints_secs=None,
|
||||
save_checkpoints_steps=FLAGS.save_steps
|
||||
)
|
||||
|
||||
# warm start
|
||||
warm_start_from = None
|
||||
if FLAGS.warm_start_path is not None:
|
||||
warm_start_from = tf.estimator.WarmStartSettings(
|
||||
ckpt_to_initialize_from=FLAGS.warm_start_path)
|
||||
|
||||
# TPU Estimator
|
||||
estimator = tpu_estimator.TPUEstimator(
|
||||
model_fn=model_fn,
|
||||
train_cache_fn=train_cache_fn,
|
||||
eval_cache_fn=eval_cache_fn,
|
||||
use_tpu=FLAGS.use_tpu,
|
||||
config=run_config,
|
||||
params={"data_dir":FLAGS.data_dir, "track_mean":FLAGS.track_mean},
|
||||
train_batch_size=FLAGS.train_batch_size,
|
||||
eval_batch_size=FLAGS.eval_batch_size,
|
||||
warm_start_from=warm_start_from)
|
||||
|
||||
if FLAGS.do_eval_only:
|
||||
if FLAGS.eval_ckpt_path is not None:
|
||||
ret = estimator.evaluate(input_fn=eval_input_fn, steps=num_eval_batch,
|
||||
checkpoint_path=FLAGS.eval_ckpt_path)
|
||||
tf.logging.info("=" * 200)
|
||||
log_str = "Eval results | "
|
||||
for key, val in ret.items():
|
||||
log_str += "{} {} | ".format(key, val)
|
||||
tf.logging.info(log_str)
|
||||
tf.logging.info("=" * 200)
|
||||
else:
|
||||
ckpt_state = tf.train.get_checkpoint_state(FLAGS.model_dir)
|
||||
eval_results = []
|
||||
for eval_checkpoint in ckpt_state.all_model_checkpoint_paths:
|
||||
if not exists(eval_checkpoint + ".index"): continue
|
||||
global_step = int(eval_checkpoint.split("-")[-1])
|
||||
if global_step < FLAGS.start_eval_steps or global_step > FLAGS.train_steps:
|
||||
continue
|
||||
ret = estimator.evaluate(input_fn=eval_input_fn, steps=num_eval_batch,
|
||||
checkpoint_path=eval_checkpoint)
|
||||
eval_results.append(ret)
|
||||
|
||||
eval_results.sort(key=lambda x: x["perplexity"])
|
||||
|
||||
tf.logging.info("=" * 200)
|
||||
log_str = "Best results | "
|
||||
for key, val in eval_results[0].items():
|
||||
log_str += "{} {} | ".format(key, val)
|
||||
tf.logging.info(log_str)
|
||||
tf.logging.info("=" * 200)
|
||||
else:
|
||||
if not FLAGS.do_eval:
|
||||
estimator.train(input_fn=train_input_fn, steps=FLAGS.train_steps)
|
||||
else:
|
||||
for step in range(0, FLAGS.train_steps, num_train_batch):
|
||||
train_steps = min(FLAGS.train_steps - step, num_train_batch)
|
||||
estimator.train(input_fn=train_input_fn, steps=train_steps)
|
||||
estimator.evaluate(input_fn=eval_input_fn, steps=num_eval_batch)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
tf.app.run()
|
||||
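`train.py` above is the TPUEstimator-based entry point used by the `*_large_tpu.sh` scripts; `train_gpu.py` below implements the multi-GPU training loop invoked by the `*_base_gpu.sh` and `sota/*.sh` scripts.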
475
transformer-xl/tf/train_gpu.py
Normal file
@@ -0,0 +1,475 @@
from __future__ import absolute_import
|
||||
from __future__ import division
|
||||
from __future__ import print_function
|
||||
|
||||
import os
|
||||
import math
|
||||
import time
|
||||
|
||||
from absl import flags
|
||||
import absl.logging as _logging # pylint: disable=unused-import
|
||||
|
||||
import tensorflow as tf
|
||||
import model
|
||||
import data_utils
|
||||
|
||||
from gpu_utils import assign_to_gpu, average_grads_and_vars
|
||||
|
||||
import numpy as np
|
||||
|
||||
# GPU config
|
||||
flags.DEFINE_integer("num_hosts", default=1,
|
||||
help="Number of TPU hosts")
|
||||
flags.DEFINE_integer("num_core_per_host", default=8,
|
||||
help="Number of cores per host")
|
||||
|
||||
# Experiment (data/checkpoint/directory) config
|
||||
flags.DEFINE_string("data_dir", default="",
|
||||
help="Path to tf-records directory.")
|
||||
flags.DEFINE_string("record_info_dir", default="",
|
||||
help="Path to local directory containing filenames.txt.")
|
||||
flags.DEFINE_string("corpus_info_path", default="",
|
||||
help="Path to corpus-info.json file.")
|
||||
flags.DEFINE_string("model_dir", default=None,
|
||||
help="Estimator model_dir.")
|
||||
flags.DEFINE_bool("do_train", default=True,
|
||||
help="Whether to run training.")
|
||||
flags.DEFINE_bool("do_eval", default=False,
|
||||
help="Whether to run eval on the dev set.")
|
||||
flags.DEFINE_string("eval_ckpt_path", None,
|
||||
help="Checkpoint path for do_test evaluation."
|
||||
"If set, model_dir will be ignored."
|
||||
"If unset, will use the latest ckpt in model_dir.")
|
||||
flags.DEFINE_string("warm_start_path", None,
|
||||
help="Checkpoint path for warm start."
|
||||
"If set, will clear Adam states."
|
||||
"Note that the new model_dir should be different"
|
||||
" from warm_start_path.")
|
||||
|
||||
# Optimization config
|
||||
flags.DEFINE_float("learning_rate", default=2.5e-4,
|
||||
help="Maximum learning rate.")
|
||||
flags.DEFINE_float("clip", default=0.25,
|
||||
help="Gradient clipping value.")
|
||||
# for cosine decay
|
||||
flags.DEFINE_float("min_lr_ratio", default=0.004,
|
||||
help="Minimum ratio learning rate.")
|
||||
flags.DEFINE_integer("warmup_steps", default=0,
|
||||
help="Number of steps for linear lr warmup.")
|
||||
|
||||
# Training config
|
||||
flags.DEFINE_integer("train_batch_size", default=60,
|
||||
help="Size of train batch.")
|
||||
flags.DEFINE_integer("eval_batch_size", default=60,
|
||||
help="Size of valid batch.")
|
||||
flags.DEFINE_integer("train_steps", default=100000,
|
||||
help="Total number of training steps.")
|
||||
flags.DEFINE_integer("iterations", default=500,
|
||||
help="Number of iterations per repeat loop.")
|
||||
flags.DEFINE_integer("save_steps", default=10000,
|
||||
help="number of steps for model checkpointing.")
|
||||
|
||||
# Evaluation config
|
||||
flags.DEFINE_bool("do_test", default=False,
|
||||
help="Run on the test set.")
|
||||
flags.DEFINE_integer("max_eval_batch", default=-1,
|
||||
help="Set -1 to turn off. Only used in test mode.")
|
||||
flags.DEFINE_bool("do_eval_only", default=False,
|
||||
help="Run evaluation only.")
|
||||
flags.DEFINE_integer("start_eval_steps", default=10000,
|
||||
help="Which checkpoint to start with in `do_eval_only` mode.")
|
||||
flags.DEFINE_string("eval_split", "valid",
|
||||
help="Which data split to evaluate.")
|
||||
|
||||
# Model config
|
||||
flags.DEFINE_integer("tgt_len", default=70,
|
||||
help="Number of steps to predict")
|
||||
flags.DEFINE_integer("mem_len", default=70,
|
||||
help="Number of steps to cache")
|
||||
flags.DEFINE_bool("same_length", default=False,
|
||||
help="Same length attention")
|
||||
flags.DEFINE_integer("clamp_len", default=-1,
|
||||
help="Clamp length")
|
||||
|
||||
flags.DEFINE_integer("n_layer", default=6,
|
||||
help="Number of layers.")
|
||||
flags.DEFINE_integer("d_model", default=500,
|
||||
help="Dimension of the model.")
|
||||
flags.DEFINE_integer("d_embed", default=500,
|
||||
help="Dimension of the embeddings.")
|
||||
flags.DEFINE_integer("n_head", default=10,
|
||||
help="Number of attention heads.")
|
||||
flags.DEFINE_integer("d_head", default=50,
|
||||
help="Dimension of each attention head.")
|
||||
flags.DEFINE_integer("d_inner", default=1000,
|
||||
help="Dimension of inner hidden size in positionwise feed-forward.")
|
||||
flags.DEFINE_float("dropout", default=0.1,
|
||||
help="Dropout rate.")
|
||||
flags.DEFINE_float("dropatt", default=0.1,
|
||||
help="Attention dropout rate.")
|
||||
flags.DEFINE_bool("untie_r", default=False,
|
||||
help="untie r_w_bias and r_r_bias")
|
||||
|
||||
# Adaptive Softmax / Embedding
|
||||
flags.DEFINE_bool("tie_weight", default=True,
|
||||
help="Tie embedding and softmax weight.")
|
||||
flags.DEFINE_integer("div_val", default=1,
|
||||
help="Divide the embedding size by this val for each bin")
|
||||
flags.DEFINE_bool("proj_share_all_but_first", default=False,
|
||||
help="True to share all but first projs, False not to share.")
|
||||
flags.DEFINE_bool("proj_same_dim", default=True,
|
||||
help="Project the bin with the same dimension.")
|
||||
|
||||
# Parameter initialization
|
||||
flags.DEFINE_enum("init", default="normal",
|
||||
enum_values=["normal", "uniform"],
|
||||
help="Initialization method.")
|
||||
flags.DEFINE_float("init_std", default=0.02,
|
||||
help="Initialization std when init is normal.")
|
||||
flags.DEFINE_float("proj_init_std", default=0.01,
|
||||
help="Initialization std for embedding projection.")
|
||||
flags.DEFINE_float("init_range", default=0.1,
|
||||
help="Initialization std when init is uniform.")
|
||||
|
||||
FLAGS = flags.FLAGS
|
||||
|
||||
def get_model_fn(n_token, cutoffs):
  def model_fn(inp, tgt, mems, is_training):
    inp = tf.transpose(inp, [1, 0])
    tgt = tf.transpose(tgt, [1, 0])

    if FLAGS.init == "uniform":
      initializer = tf.initializers.random_uniform(
          minval=-FLAGS.init_range,
          maxval=FLAGS.init_range,
          seed=None)
      # No dedicated projection initializer for uniform init; pass None below
      # so TensorFlow falls back to its default variable initializer instead
      # of raising an undefined-name error.
      proj_initializer = None
    elif FLAGS.init == "normal":
      initializer = tf.initializers.random_normal(
          stddev=FLAGS.init_std,
          seed=None)
      proj_initializer = tf.initializers.random_normal(
          stddev=FLAGS.proj_init_std,
          seed=None)

    tie_projs = [False for _ in range(len(cutoffs) + 1)]
    if FLAGS.proj_share_all_but_first:
      for i in range(1, len(tie_projs)):
        tie_projs[i] = True

    loss, new_mems = model.transformer(
        dec_inp=inp,
        target=tgt,
        mems=mems,
        n_token=n_token,
        n_layer=FLAGS.n_layer,
        d_model=FLAGS.d_model,
        d_embed=FLAGS.d_embed,
        n_head=FLAGS.n_head,
        d_head=FLAGS.d_head,
        d_inner=FLAGS.d_inner,
        dropout=FLAGS.dropout,
        dropatt=FLAGS.dropatt,
        initializer=initializer,
        proj_initializer=proj_initializer,
        is_training=is_training,
        mem_len=FLAGS.mem_len,
        cutoffs=cutoffs,
        div_val=FLAGS.div_val,
        tie_projs=tie_projs,
        input_perms=None,
        target_perms=None,
        head_target=None,
        same_length=FLAGS.same_length,
        clamp_len=FLAGS.clamp_len,
        use_tpu=False,
        untie_r=FLAGS.untie_r,
        proj_same_dim=FLAGS.proj_same_dim)

    # number of parameters
    num_params = sum([np.prod(v.shape) for v in tf.trainable_variables()])
    tf.logging.info('#params: {}'.format(num_params))

    # format_str = '{{:<{0}s}}\t{{}}'.format(
    #     max([len(v.name) for v in tf.trainable_variables()]))
    # for v in tf.trainable_variables():
    #   tf.logging.info(format_str.format(v.name, v.get_shape()))

    if is_training:
      all_vars = tf.trainable_variables()
      grads = tf.gradients(loss, all_vars)
      grads_and_vars = list(zip(grads, all_vars))

      return loss, new_mems, grads_and_vars
    else:
      return loss, new_mems

  return model_fn

def single_core_graph(n_token, cutoffs, is_training, inp, tgt, mems):
  model_fn = get_model_fn(
      n_token=n_token,
      cutoffs=cutoffs)

  model_ret = model_fn(
      inp=inp,
      tgt=tgt,
      mems=mems,
      is_training=is_training)

  return model_ret

def train(n_token, cutoffs, ps_device):
  ##### Get input function and model function
  train_input_fn, train_record_info = data_utils.get_input_fn(
      record_info_dir=FLAGS.record_info_dir,
      split="train",
      per_host_bsz=FLAGS.train_batch_size,
      tgt_len=FLAGS.tgt_len,
      num_core_per_host=FLAGS.num_core_per_host,
      num_hosts=1,
      use_tpu=False)

  tf.logging.info("num of batches {}".format(train_record_info["num_batch"]))

  ##### Create computational graph
  train_set = train_input_fn({
      "batch_size": FLAGS.train_batch_size,
      "data_dir": FLAGS.data_dir})

  input_feed, label_feed = train_set.make_one_shot_iterator().get_next()

  inputs = tf.split(input_feed, FLAGS.num_core_per_host, 0)
  labels = tf.split(label_feed, FLAGS.num_core_per_host, 0)

  per_core_bsz = FLAGS.train_batch_size // FLAGS.num_core_per_host

  tower_mems, tower_losses, tower_new_mems, tower_grads_and_vars = [], [], [], []

  for i in range(FLAGS.num_core_per_host):
    reuse = True if i > 0 else None
    with tf.device(assign_to_gpu(i, ps_device)), \
        tf.variable_scope(tf.get_variable_scope(), reuse=reuse):

      mems_i = [tf.placeholder(tf.float32,
                               [FLAGS.mem_len, per_core_bsz, FLAGS.d_model])
                for _ in range(FLAGS.n_layer)]

      loss_i, new_mems_i, grads_and_vars_i = single_core_graph(
          n_token=n_token,
          cutoffs=cutoffs,
          is_training=True,
          inp=inputs[i],
          tgt=labels[i],
          mems=mems_i)

      tower_mems.append(mems_i)
      tower_losses.append(loss_i)
      tower_new_mems.append(new_mems_i)
      tower_grads_and_vars.append(grads_and_vars_i)

  ## average losses and gradients across towers
  if len(tower_losses) > 1:
    loss = tf.add_n(tower_losses) / len(tower_losses)
    grads_and_vars = average_grads_and_vars(tower_grads_and_vars)
  else:
    loss = tower_losses[0]
    grads_and_vars = tower_grads_and_vars[0]
  grads, all_vars = zip(*grads_and_vars)

  ## clip gradient
  clipped, gnorm = tf.clip_by_global_norm(grads, FLAGS.clip)
  grads_and_vars = list(zip(clipped, all_vars))

  ## configure the optimizer
  global_step = tf.train.get_or_create_global_step()

  # warmup stage: increase the learning rate linearly
  if FLAGS.warmup_steps > 0:
    warmup_lr = tf.to_float(global_step) / tf.to_float(FLAGS.warmup_steps) \
        * FLAGS.learning_rate
  else:
    warmup_lr = 0.0

  # decay stage: decay the learning rate using the cosine schedule
  decay_lr = tf.train.cosine_decay(
      FLAGS.learning_rate,
      global_step=global_step - FLAGS.warmup_steps,
      decay_steps=FLAGS.train_steps - FLAGS.warmup_steps,
      alpha=FLAGS.min_lr_ratio)

  # choose warmup or decay
  learning_rate = tf.where(global_step < FLAGS.warmup_steps,
                           warmup_lr, decay_lr)

  # get the train op
  optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
  train_op = optimizer.apply_gradients(grads_and_vars, global_step)

  ##### Training loop
  tower_mems_np = [
      [np.zeros([FLAGS.mem_len, per_core_bsz, FLAGS.d_model], dtype=np.float32)
       for layer in range(FLAGS.n_layer)]
      for core in range(FLAGS.num_core_per_host)
  ]

  saver = tf.train.Saver()

  with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
    sess.run(tf.global_variables_initializer())

    if FLAGS.warm_start_path is not None:
      tf.logging.info("warm start from {}".format(FLAGS.warm_start_path))
      saver.restore(sess, FLAGS.warm_start_path)

    fetches = [loss, tower_new_mems, global_step, gnorm, learning_rate, train_op]

    total_loss, prev_step = 0., -1
    while True:
      feed_dict = {}
      for i in range(FLAGS.num_core_per_host):
        for m, m_np in zip(tower_mems[i], tower_mems_np[i]):
          feed_dict[m] = m_np

      fetched = sess.run(fetches, feed_dict=feed_dict)

      loss_np, tower_mems_np, curr_step = fetched[:3]
      total_loss += loss_np

      if curr_step > 0 and curr_step % FLAGS.iterations == 0:
        curr_loss = total_loss / (curr_step - prev_step)
        tf.logging.info("[{}] | gnorm {:.2f} lr {:8.6f} "
                        "| loss {:.2f} | pplx {:>7.2f}, bpc {:>7.4f}".format(
                            curr_step, fetched[-3], fetched[-2],
                            curr_loss, math.exp(curr_loss), curr_loss / math.log(2)))
        total_loss, prev_step = 0., curr_step

      if curr_step > 0 and curr_step % FLAGS.save_steps == 0:
        save_path = os.path.join(FLAGS.model_dir, "model.ckpt")
        saver.save(sess, save_path)
        tf.logging.info("Model saved in path: {}".format(save_path))

      if curr_step == FLAGS.train_steps:
        break

def evaluate(n_token, cutoffs, ps_device):
  ##### Get input function and model function
  eval_input_fn, eval_record_info = data_utils.get_input_fn(
      record_info_dir=FLAGS.record_info_dir,
      split=FLAGS.eval_split,
      per_host_bsz=FLAGS.eval_batch_size,
      tgt_len=FLAGS.tgt_len,
      num_core_per_host=FLAGS.num_core_per_host,
      num_hosts=1,
      use_tpu=False)

  num_batch = eval_record_info["num_batch"]
  if FLAGS.max_eval_batch > 0:
    num_batch = FLAGS.max_eval_batch
  tf.logging.info("num of batches {}".format(num_batch))

  ##### Create computational graph
  eval_set = eval_input_fn({
      "batch_size": FLAGS.eval_batch_size,
      "data_dir": FLAGS.data_dir})

  input_feed, label_feed = eval_set.make_one_shot_iterator().get_next()

  inputs = tf.split(input_feed, FLAGS.num_core_per_host, 0)
  labels = tf.split(label_feed, FLAGS.num_core_per_host, 0)

  per_core_bsz = FLAGS.eval_batch_size // FLAGS.num_core_per_host
  tower_mems, tower_losses, tower_new_mems = [], [], []

  for i in range(FLAGS.num_core_per_host):
    with tf.device(assign_to_gpu(i, ps_device)), \
        tf.variable_scope(tf.get_variable_scope(), reuse=tf.AUTO_REUSE):

      mems_i = [tf.placeholder(tf.float32,
                               [FLAGS.mem_len, per_core_bsz, FLAGS.d_model])
                for _ in range(FLAGS.n_layer)]

      loss_i, new_mems_i = single_core_graph(
          n_token=n_token,
          cutoffs=cutoffs,
          is_training=False,
          inp=inputs[i],
          tgt=labels[i],
          mems=mems_i)

      tower_mems.append(mems_i)
      tower_losses.append(loss_i)
      tower_new_mems.append(new_mems_i)

  ## average losses across towers
  if len(tower_losses) > 1:
    loss = tf.add_n(tower_losses) / len(tower_losses)
  else:
    loss = tower_losses[0]

  ##### Evaluation loop
  tower_mems_np = [
      [np.zeros([FLAGS.mem_len, per_core_bsz, FLAGS.d_model], dtype=np.float32)
       for layer in range(FLAGS.n_layer)]
      for core in range(FLAGS.num_core_per_host)
  ]

  saver = tf.train.Saver()

  with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
    sess.run(tf.global_variables_initializer())

    if FLAGS.eval_ckpt_path is None:
      eval_ckpt_path = tf.train.latest_checkpoint(FLAGS.model_dir)
    else:
      eval_ckpt_path = FLAGS.eval_ckpt_path
    tf.logging.info("Evaluate {}".format(eval_ckpt_path))
    saver.restore(sess, eval_ckpt_path)

    fetches = [loss, tower_new_mems, tf.size(label_feed)]

    format_str = "  >> processing batch {{:{0}d}}/{{:{0}d}} ..".format(
        len(str(num_batch)))

    total_loss, total_cnt = 0, 0
    for step in range(num_batch):
      if step % (num_batch // 10) == 0:
        tf.logging.info(format_str.format(step, num_batch))

      feed_dict = {}
      for i in range(FLAGS.num_core_per_host):
        for m, m_np in zip(tower_mems[i], tower_mems_np[i]):
          feed_dict[m] = m_np

      fetched = sess.run(fetches, feed_dict=feed_dict)

      loss_np, tower_mems_np, cnt_np = fetched[:3]
      total_loss += loss_np * cnt_np
      total_cnt += cnt_np

    avg_loss = total_loss / total_cnt
    tf.logging.info("| loss {:.2f} | pplx {:>7.2f}, bpc {:>7.4f}".format(
        avg_loss, math.exp(avg_loss), avg_loss / math.log(2)))

def main(unused_argv):
  del unused_argv  # Unused

  tf.logging.set_verbosity(tf.logging.INFO)

  # Get corpus info
  corpus_info = data_utils.get_corpus_info(FLAGS.corpus_info_path)
  n_token = corpus_info["vocab_size"]
  cutoffs = corpus_info["cutoffs"][1:-1]
  tf.logging.info("n_token {}".format(n_token))

  if FLAGS.do_train:
    train(n_token, cutoffs, "/gpu:0")
  if FLAGS.do_eval:
    evaluate(n_token, cutoffs, "/gpu:0")


if __name__ == "__main__":
  tf.app.run()
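The learning-rate schedule built in `train()` above applies a linear warmup for `warmup_steps` steps and then `tf.train.cosine_decay` down to `min_lr_ratio` of the peak rate. Below is a minimal NumPy sketch of the same schedule, handy for inspecting it before launching a run; the function and its defaults are illustrative, not part of this repo.

```python
import numpy as np

def lr_at(step, peak_lr=2.5e-4, warmup_steps=0, train_steps=100000,
          min_lr_ratio=0.004):
  """Mirror of the warmup + cosine-decay schedule used in train()."""
  if warmup_steps > 0 and step < warmup_steps:
    return peak_lr * step / warmup_steps  # linear warmup
  # cosine decay from peak_lr down to peak_lr * min_lr_ratio
  progress = (step - warmup_steps) / max(1, train_steps - warmup_steps)
  cosine = 0.5 * (1 + np.cos(np.pi * min(1.0, progress)))
  return peak_lr * ((1 - min_lr_ratio) * cosine + min_lr_ratio)

print([round(lr_at(s, warmup_steps=1000), 7) for s in (0, 500, 1000, 50000, 100000)])
```

The `tf.where` in the graph switches from the warmup branch to the decay branch at `warmup_steps`, which is what the `if` above reproduces.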
170
transformer-xl/tf/vocabulary.py
Normal file

@@ -0,0 +1,170 @@
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

from collections import Counter, OrderedDict

import numpy as np

import tensorflow as tf

from tensorflow.gfile import Open as open
from tensorflow.gfile import Exists as exists

class Vocab(object):
  def __init__(self, special=[], min_freq=0, max_size=None, lower_case=True,
               delimiter=None, vocab_file=None):
    self.counter = Counter()
    self.special = special
    self.min_freq = min_freq
    self.max_size = max_size
    self.lower_case = lower_case
    self.delimiter = delimiter
    self.vocab_file = vocab_file

  def tokenize(self, line, add_eos=False, add_double_eos=False):
    line = line.strip()
    # convert to lower case
    if self.lower_case:
      line = line.lower()

    # empty delimiter '' will evaluate False
    if self.delimiter == '':
      symbols = line
    else:
      symbols = line.split(self.delimiter)

    if add_double_eos:  # lm1b
      return ['<S>'] + symbols + ['<S>']
    elif add_eos:
      return symbols + ['<eos>']
    else:
      return symbols

  def count_file(self, path, verbose=False, add_eos=False):
    if verbose: print('counting file {} ...'.format(path))
    assert exists(path)

    sents = []
    with open(path, 'r') as f:
      for idx, line in enumerate(f):
        if verbose and idx > 0 and idx % 500000 == 0:
          print('    line {}'.format(idx))
        symbols = self.tokenize(line, add_eos=add_eos)
        self.counter.update(symbols)
        sents.append(symbols)

    return sents

  def count_sents(self, sents, verbose=False):
    """
      sents : a list of sentences, each a list of tokenized symbols
    """
    if verbose: print('counting {} sents ...'.format(len(sents)))
    for idx, symbols in enumerate(sents):
      if verbose and idx > 0 and idx % 500000 == 0:
        print('    line {}'.format(idx))
      self.counter.update(symbols)

  def _build_from_file(self, vocab_file):
    self.idx2sym = []
    self.sym2idx = OrderedDict()

    with open(vocab_file, 'r') as f:
      for line in f:
        symb = line.strip().split()[0]
        self.add_symbol(symb)
    self.unk_idx = self.sym2idx['<UNK>']

  def build_vocab(self):
    if self.vocab_file:
      print('building vocab from {}'.format(self.vocab_file))
      self._build_from_file(self.vocab_file)
      print('final vocab size {}'.format(len(self)))
    else:
      print('building vocab with min_freq={}, max_size={}'.format(
          self.min_freq, self.max_size))
      self.idx2sym = []
      self.sym2idx = OrderedDict()

      for sym in self.special:
        self.add_special(sym)

      for sym, cnt in self.counter.most_common(self.max_size):
        if cnt < self.min_freq: break
        self.add_symbol(sym)

      print('final vocab size {} from {} unique tokens'.format(
          len(self), len(self.counter)))

  def encode_file(self, path, ordered=False, verbose=False, add_eos=True,
                  add_double_eos=False):
    if verbose: print('encoding file {} ...'.format(path))
    assert exists(path)
    encoded = []
    with open(path, 'r') as f:
      for idx, line in enumerate(f):
        if verbose and idx > 0 and idx % 500000 == 0:
          print('    line {}'.format(idx))
        symbols = self.tokenize(line, add_eos=add_eos,
                                add_double_eos=add_double_eos)
        encoded.append(self.convert_to_nparray(symbols))

    if ordered:
      encoded = np.concatenate(encoded)

    return encoded

  def encode_sents(self, sents, ordered=False, verbose=False):
    if verbose: print('encoding {} sents ...'.format(len(sents)))
    encoded = []
    for idx, symbols in enumerate(sents):
      if verbose and idx > 0 and idx % 500000 == 0:
        print('    line {}'.format(idx))
      encoded.append(self.convert_to_nparray(symbols))

    if ordered:
      encoded = np.concatenate(encoded)

    return encoded

  def add_special(self, sym):
    if sym not in self.sym2idx:
      self.idx2sym.append(sym)
      self.sym2idx[sym] = len(self.idx2sym) - 1
      setattr(self, '{}_idx'.format(sym.strip('<>')), self.sym2idx[sym])

  def add_symbol(self, sym):
    if sym not in self.sym2idx:
      self.idx2sym.append(sym)
      self.sym2idx[sym] = len(self.idx2sym) - 1

  def get_sym(self, idx):
    assert 0 <= idx < len(self), 'Index {} out of range'.format(idx)
    return self.idx2sym[idx]

  def get_idx(self, sym):
    if sym in self.sym2idx:
      return self.sym2idx[sym]
    else:
      assert hasattr(self, 'unk_idx')
      return self.sym2idx.get(sym, self.unk_idx)

  def get_symbols(self, indices):
    return [self.get_sym(idx) for idx in indices]

  def get_indices(self, symbols):
    return [self.get_idx(sym) for sym in symbols]

  def convert_to_nparray(self, symbols):
    nparray = np.array(self.get_indices(symbols), dtype=np.int64)
    return nparray

  def convert_to_sent(self, indices, exclude=None):
    if exclude is None:
      return ' '.join([self.get_sym(idx) for idx in indices])
    else:
      return ' '.join([self.get_sym(idx) for idx in indices if idx not in exclude])

  def __len__(self):
    return len(self.idx2sym)
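For reference, here is a minimal usage sketch of the `Vocab` class defined above. The corpus path and the choice of special symbols are illustrative; in this codebase the class is driven by `tf/data_utils.py` with dataset-specific settings.

```python
from vocabulary import Vocab

vocab = Vocab(special=['<eos>'], lower_case=True)
vocab.count_file('train.txt', add_eos=True)   # hypothetical plain-text corpus
vocab.build_vocab()

# Encode the same file into one contiguous int64 array of token ids.
ids = vocab.encode_file('train.txt', ordered=True, add_eos=True)
print('vocab size:', len(vocab), 'first ids:', ids[:10])
```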