
Commit 412eadf

Merge branch 'master' into transformer
2 parents: 1b349bb + b1cd59d

20 files changed: +187 −135 lines

CHANGELOG.md

Lines changed: 4 additions & 2 deletions
@@ -79,6 +79,7 @@ To release a new version, please update the changelog as followed:
 ### Deprecated

 ### Fixed
+- RNN updates: remove warnings, fix the seq_len=0 case, add unit tests (#PR 1033)

 ### Removed

@@ -112,7 +113,8 @@ To release a new version, please update the changelog as followed:
 - Remove `private_method` decorator (#PR 1025)
 - Copy original model's `trainable_weights` and `nontrainable_weights` when initializing `ModelLayer` (#PR 1026)
 - Copy original model's `trainable_weights` and `nontrainable_weights` when initializing `LayerList` (#PR 1029)
-- remove redundant parts in `model.all_layers` (#PR 1029)
+- Remove redundant parts in `model.all_layers` (#PR 1029)
+- Replace `tf.image.resize_image_with_crop_or_pad` with `tf.image.resize_with_crop_or_pad` (#PR 1032)

 ### Removed
@@ -122,7 +124,7 @@ To release a new version, please update the changelog as followed:

 - @zsdonghao
 - @ChrisWu1997: #1010 #1015 #1025 #1030
-- @warshallrho: #1017 #1021 #1026 #1029
+- @warshallrho: #1017 #1021 #1026 #1029 #1032
 - @ArnoldLIULJ: #1023
 - @JingqingZ: #1023

README.md

Lines changed: 3 additions & 3 deletions
@@ -36,9 +36,9 @@

 TensorLayer is a novel TensorFlow-based deep learning and reinforcement learning library designed for researchers and engineers. It provides a large collection of customizable neural layers / functions that are key to build real-world AI applications. TensorLayer is awarded the 2017 Best Open Source Software by the [ACM Multimedia Society](https://twitter.com/ImperialDSI/status/923928895325442049).

-🔥📰🔥 [Deep Reinforcement Learning Model ZOO](https://github.com/tensorlayer/tensorlayer/tree/master/examples/reinforcement_learning) Release !!!
+🔥📰🔥 Reinforcement Learning Model Zoos: [Low-level APIs for Research](https://github.com/tensorlayer/tensorlayer/tree/master/examples/reinforcement_learning) and [High-level APIs for Production](https://github.com/tensorlayer/RLzoo)

-🔥📰🔥 [Sipeed](https://github.com/sipeed/Maix-EMC): Run TensorLayer models on the **low-cost AI chip** (e.g., K210) (Alpha Version)
+🔥📰🔥 [Sipeed Maxi-EMC](https://github.com/sipeed/Maix-EMC): Run TensorLayer models on the **low-cost AI chip** (e.g., K210) (Alpha Version)

 🔥📰🔥 [NNoM](https://github.com/majianjia/nnom): Run TensorLayer quantized models on the **MCU** (e.g., STM32) (Coming Soon)

@@ -201,4 +201,4 @@ If you use TensorLayer for any projects, please cite this paper:

 # License

-TensorLayer is released under the Apache 2.0 license.
+TensorLayer is released under the Apache 2.0 license. We also host TensorLayer on [iHub](https://code.ihub.org.cn/projects/328) and [Gitee](https://gitee.com/organizations/TensorLayer).

docs/user/contributing.rst

Lines changed: 5 additions & 11 deletions
@@ -4,14 +4,10 @@
 Contributing
 ===============

-TensorLayer is a major ongoing research project in Data Science Institute, Imperial College London.
-The goal of the project is to develop a compositional language while complex learning systems
+TensorLayer 2.0 is a major ongoing research project in CFCS, Peking University; the first version was established at Imperial College London in 2016. The goal of the project is to develop a compositional language in which complex learning systems
 can be built through composition of neural network modules.

-Numerous contributors come from various horizons such as: Tsinghua University, Carnegie Mellon University, University of Technology of Compiegne,
-Google, Microsoft, Bloomberg and etc.
-
-There are many functions need to be contributed such as Maxout, Neural Turing Machine, Attention, TensorLayer Mobile and etc.
+Numerous contributors come from various horizons, such as Imperial College London, Tsinghua University, Carnegie Mellon University, Stanford, the University of Technology of Compiegne, Google, Microsoft, and Bloomberg.

 You can easily open a Pull Request (PR) on `GitHub`_, every little step counts and will be credited.
 As an open-source project, we highly welcome and value contributions!
@@ -27,10 +23,9 @@ As an open-source project, we highly welcome and value contributions!
 Project Maintainers
 --------------------------

-
 The TensorLayer project was started by `Hao Dong <https://zsdonghao.github.io>`_ at Imperial College London in June 2016.

-For TensorLayer 2.x, it is now actively developing and maintaining by the following people who has more than 50 contributions*:
+For TensorLayer 2.x, it is now actively developed and maintained by the following people, each of whom has more than 50 contributions:

 - **Hao Dong** (`@zsdonghao <https://github.com/zsdonghao>`_) - `<https://zsdonghao.github.io>`_
 - **Jingqing Zhang** (`@JingqingZ <https://github.com/JingqingZ>`_) - `<https://jingqingz.github.io>`_
@@ -56,10 +51,9 @@ What to contribute
 Your method and example
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~

-If you have a new method or example in terms of Deep learning and Reinforcement learning,
-you are welcome to contribute.
+If you have a new method or example in deep learning or reinforcement learning, you are welcome to contribute it.

-* Provide your layer or example, so everyone can use it.
+* Provide your layers or examples, so everyone can use them.
 * Explain how it would work, and link to a scientific paper if applicable.
 * Keep the scope as narrow as possible, to make it easier to implement.

docs/user/get_involved.rst

Lines changed: 29 additions & 3 deletions
@@ -4,8 +4,20 @@
 Get Involved in Research
 =========================

+Ph.D. Position @ PKU
+=============================================================
+
+Hi, I am Hao Dong, a new faculty member in EECS, Peking University. I have a few Ph.D. positions open each year for international students who would like to study AI. If you or your friends are interested, please contact me as soon as possible.
+PKU is a top-30 university in the global rankings. Admission is competitive, so applying early is recommended. For next year's intake, note that the deadline for the Chinese Government Scholarship is at the end of each year; please check this `link <http://www.isd.pku.edu.cn/info/1503/5676.htm>`__ for more details.
+
+My homepage: https://zsdonghao.github.io
+
+Contact: hao.dong11[AT]imperial.ac.uk
+

-Center on Frontiers of Computing Studies, Peking University
+Faculty Position @ PKU
 =============================================================

 The Center on Frontiers of Computing Studies (CFCS), Peking University (PKU), China, is a new university initiative co-founded by Professors John Hopcroft (Turing Awardee) and Wen Gao (CAE, ACM/IEEE Fellow). The center aims at developing excellence on two fronts: research and education. On the research front, the center will provide a world-class research environment, where innovative and impactful research is the central aim, measured by professional reputation among world scholars, not by counting the number of publications and research funding. On the education front, the center is deeply involved in the Turing Class, an elite undergraduate program that draws the cream of the crop from the PKU undergraduate talent pool. New curriculum and pedagogy are designed and practiced in this program, with the aim to cultivate a new generation of computer scientists/engineers who are solid in both theory and practice.
@@ -27,12 +39,12 @@ Application for a postdoctoral position should include a curriculum vita, brief
 We conduct review of applications monthly, immediately upon receipt of all application materials at the beginning of each month. However, it is highly recommended that applicants submit complete applications sooner rather than later, as the positions are to be filled quickly.


-Data Science Institute, Imperial College London
+Postdoc Position @ ICL
 ==================================================

 Data science is therefore by nature at the core of all modern transdisciplinary scientific activities, as it involves the whole life cycle of data, from acquisition and exploration to analysis and communication of the results. Data science is not only concerned with the tools and methods to obtain, manage and analyse data: it is also about extracting value from data and translating it from asset to product.

-Launched on 1st April 2014, the Data Science Institute at Imperial College London aims to enhance Imperial's excellence in data-driven research across its faculties by fulfilling the following objectives.
+Launched on 1st April 2014, the Data Science Institute (DSI) at Imperial College London aims to enhance Imperial's excellence in data-driven research across its faculties by fulfilling the following objectives.

 The Data Science Institute is housed in purpose-built facilities in the heart of the Imperial College campus in South Kensington. Such a central location provides excellent access to collaborators across the College and across London.
@@ -48,3 +60,17 @@ and other ways to
 `get involved <https://www.imperial.ac.uk/data-science/get-involved/>`__
 , or feel free to
 `contact us <https://www.imperial.ac.uk/data-science/get-involved/contact-us/>`__.
+
+Software Engineering @ SurgicalAI.cn
+=============================================================
+SurgicalAI is a startup founded by data scientists and surgical-robot experts from Imperial College. Our objective is to democratise surgery with AI. By combining 5G, AI and cloud computing, SurgicalAI is building a platform that enables junior surgeons to perform complex procedures. As one of the most impactful startups, SurgicalAI is supported by Nvidia, AWS and top surgeons around the world.
+
+Currently based in Hangzhou, China, we are building digital solutions for cardiac surgery such as TAVR and LAA, and orthopaedics such as TKA and UNA. A demo can be found at `<http://demo5g.surgicalai.cn>`__.
+
+We are actively looking for experts in robotic navigation, computer graphics and medical image analysis to join us in building a digitalized surgical service platform for the aging world.
+
+Home Page: http://www.surgicalai.cn
+
+Demo Page: http://demo5g.surgicalai.cn
+
+Contact: liufangde@surgicalai.cn

docs/user/get_start_model.rst

Lines changed: 9 additions & 9 deletions
@@ -19,12 +19,12 @@ Static model
     def get_model(inputs_shape):
         ni = Input(inputs_shape)
         nn = Dropout(keep=0.8)(ni)
-        nn = Dense(n_units=800, act=tf.nn.relu, name="dense1")(nn)
+        nn = Dense(n_units=800, act=tf.nn.relu, name="dense1")(nn)  # "name" is optional
         nn = Dropout(keep=0.8)(nn)
         nn = Dense(n_units=800, act=tf.nn.relu)(nn)
         nn = Dropout(keep=0.8)(nn)
-        nn = Dense(n_units=10, act=tf.nn.relu)(nn)
-        M = Model(inputs=ni, outputs=nn, name="mlp")
+        nn = Dense(n_units=10, act=None)(nn)
+        M = Model(inputs=ni, outputs=nn, name="mlp")  # "name" is optional
         return M

     MLP = get_model([None, 784])
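Note: the last Dense layer now emits raw logits (act=None) rather than ReLU outputs, which is what a softmax cross-entropy loss expects. A minimal training-step sketch under that assumption, reusing the tl.cost.cross_entropy call that the tutorials in this commit use (random batch, purely illustrative):

    import numpy as np
    import tensorflow as tf
    import tensorlayer as tl

    MLP = get_model([None, 784])
    MLP.train()                                          # enable dropout
    optimizer = tf.optimizers.Adam(learning_rate=1e-4)

    x = np.random.random([32, 784]).astype(np.float32)  # hypothetical batch
    y = np.random.randint(0, 10, size=[32])

    with tf.GradientTape() as tape:
        logits = MLP(x)                                  # raw logits, since act=None
        loss = tl.cost.cross_entropy(logits, y)          # softmax is applied inside the loss
    grads = tape.gradient(loss, MLP.trainable_weights)
    optimizer.apply_gradients(zip(grads, MLP.trainable_weights))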
@@ -46,10 +46,10 @@ In this case, you need to manually input the output shape of the previous layer
         self.dropout1 = Dropout(keep=0.8)
         self.dense1 = Dense(n_units=800, act=tf.nn.relu, in_channels=784)
-        self.dropout2 = Dropout(keep=0.8)  #(self.dense1)
+        self.dropout2 = Dropout(keep=0.8)
         self.dense2 = Dense(n_units=800, act=tf.nn.relu, in_channels=800)
-        self.dropout3 = Dropout(keep=0.8)  #(self.dense2)
-        self.dense3 = Dense(n_units=10, act=tf.nn.relu, in_channels=800)
+        self.dropout3 = Dropout(keep=0.8)
+        self.dense3 = Dense(n_units=10, act=None, in_channels=800)

     def forward(self, x, foo=False):
         z = self.dropout1(x)
@@ -59,7 +59,7 @@ In this case, you need to manually input the output shape of the previous layer
         z = self.dropout3(z)
         out = self.dense3(z)
         if foo:
-            out = tf.nn.relu(out)
+            out = tf.nn.softmax(out)
         return out

     MLP = CustomModel()
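With this change the dynamic model also returns raw logits by default, and the foo flag now adds a softmax on top instead of a redundant second ReLU. A short usage sketch (random input; assumes the CustomModel defined above):

    import numpy as np

    MLP = CustomModel()
    MLP.eval()                                           # disable dropout for inference

    x = np.random.random([8, 784]).astype(np.float32)   # hypothetical batch
    logits = MLP(x)                                      # raw logits (act=None on dense3)
    probs = MLP(x, foo=True)                             # same forward pass plus tf.nn.softmax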
@@ -156,7 +156,7 @@ Print model information
     # (dropout_1): Dropout(keep=0.8, name='dropout_1')
     # (dense_1): Dense(n_units=800, relu, in_channels='800', name='dense_1')
     # (dropout_2): Dropout(keep=0.8, name='dropout_2')
-    # (dense_2): Dense(n_units=10, relu, in_channels='800', name='dense_2')
+    # (dense_2): Dense(n_units=10, None, in_channels='800', name='dense_2')
     # )

     import pprint
@@ -195,7 +195,7 @@ Print model information
     #  'name': 'dropout_3'},
     #  'class': 'Dropout',
     #  'prev_layer': ['dense_2_node_0']},
-    # {'args': {'act': 'relu',
+    # {'args': {'act': None,
     #           'layer_type': 'normal',
     #           'n_units': 10,
     #           'name': 'dense_3'},
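For orientation, the two listings above look like the output of printing the model and pretty-printing its architecture dict. A sketch of how they are plausibly produced; the Model.config attribute name is our assumption, suggested by the {'args', 'class', 'prev_layer'} records and the surrounding import pprint:

    import pprint

    print(MLP)                 # layer-by-layer summary, as in the first listing
    pprint.pprint(MLP.config)  # nested dict of layer args, as in the second listing (assumed attribute)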

examples/basic_tutorials/tutorial_cifar10_cnn_static.py

Lines changed: 4 additions & 17 deletions
@@ -1,11 +1,9 @@
 #!/usr/bin/env python3
 # -*- coding: utf-8 -*-

-import multiprocessing
 import time
-
 import numpy as np
-
+import multiprocessing
 import tensorflow as tf
 import tensorlayer as tl
 from tensorlayer.layers import (BatchNorm, Conv2d, Dense, Flatten, Input, LocalResponseNorm, MaxPool2d)
@@ -80,14 +78,11 @@ def get_model_batchnorm(inputs_shape):
 print_freq = 5
 n_step_epoch = int(len(y_train) / batch_size)
 n_step = n_epoch * n_step_epoch
-shuffle_buffer_size = 128  # 100
-# init_learning_rate = 0.1
-# learning_rate_decay_factor = 0.1
-# num_epoch_decay = 350
+shuffle_buffer_size = 128

 train_weights = net.trainable_weights
-# learning_rate = tf.Variable(init_learning_rate)
 optimizer = tf.optimizers.Adam(learning_rate)
+# looking for a decaying learning rate? see https://github.com/tensorlayer/srgan/blob/master/train.py
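The dead decay scaffolding is dropped in favour of a pointer to the srgan training script. For reference, one way to achieve what the removed FIXME asked about (learning-rate decay in eager mode) is to make the learning rate a tf.Variable and reassign it each epoch; a sketch with illustrative constants, not code from this repo:

    lr = tf.Variable(0.1, trainable=False)           # the optimizer reads this variable each step
    optimizer = tf.optimizers.Adam(learning_rate=lr)

    init_lr, decay_factor, epochs_per_decay = 0.1, 0.1, 350   # hypothetical schedule
    for epoch in range(n_epoch):                     # n_epoch as defined above
        lr.assign(init_lr * decay_factor ** (epoch // epochs_per_decay))
        # ... run the usual GradientTape training step with `optimizer` ...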
def generator_train():
@@ -127,7 +122,7 @@ def _map_fn_train(img, target):

 def _map_fn_test(img, target):
     # 1. Crop the central [height, width] of the image.
-    img = tf.image.resize_image_with_crop_or_pad(img, 24, 24)
+    img = tf.image.resize_with_crop_or_pad(img, 24, 24)
     # 2. Subtract off the mean and divide by the variance of the pixels.
     img = tf.image.per_image_standardization(img)
     img = tf.reshape(img, (24, 24, 3))
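tf.image.resize_image_with_crop_or_pad is the TF 1.x name of this op; TF 2.x exposes the same behaviour as tf.image.resize_with_crop_or_pad, which center-crops inputs larger than the target size and zero-pads smaller ones. A standalone sketch (shapes illustrative):

    import numpy as np
    import tensorflow as tf

    img = np.zeros([32, 32, 3], dtype=np.float32)                 # CIFAR-10-sized image
    print(tf.image.resize_with_crop_or_pad(img, 24, 24).shape)    # (24, 24, 3), center-cropped

    small = np.zeros([16, 16, 3], dtype=np.float32)
    print(tf.image.resize_with_crop_or_pad(small, 24, 24).shape)  # (24, 24, 3), zero-padded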
@@ -182,14 +177,10 @@ def _map_fn_test(img, target):

 # use training and evaluation sets to evaluate the model every print_freq epoch
 if epoch + 1 == 1 or (epoch + 1) % print_freq == 0:
-
     print("Epoch {} of {} took {}".format(epoch + 1, n_epoch, time.time() - start_time))
-
     print(" train loss: {}".format(train_loss / n_iter))
     print(" train acc: {}".format(train_acc / n_iter))
-
     net.eval()
-
     val_loss, val_acc, n_iter = 0, 0, 0
     for X_batch, y_batch in test_ds:
         _logits = net(X_batch)  # is_train=False, disable dropout
@@ -199,10 +190,6 @@ def _map_fn_test(img, target):
     print(" val loss: {}".format(val_loss / n_iter))
     print(" val acc: {}".format(val_acc / n_iter))

-# FIXME : how to apply lr decay in eager mode?
-# learning_rate.assign(tf.train.exponential_decay(init_learning_rate, epoch, num_epoch_decay,
-#                      learning_rate_decay_factor))
-
 # use testing data to evaluate the model
 net.eval()
 test_loss, test_acc, n_iter = 0, 0, 0

examples/basic_tutorials/tutorial_mnist_mlp_dynamic.py

Lines changed: 0 additions & 12 deletions
@@ -1,7 +1,5 @@
 import time
-
 import numpy as np
-
 import tensorflow as tf
 import tensorlayer as tl
 from tensorlayer.layers import Dense, Dropout, Input
@@ -19,7 +17,6 @@ class CustomModel(Model):

     def __init__(self):
         super(CustomModel, self).__init__()
-
         self.dropout1 = Dropout(keep=0.8)  #(self.innet)
         self.dense1 = Dense(n_units=800, act=tf.nn.relu, in_channels=784)  #(self.dropout1)
         self.dropout2 = Dropout(keep=0.8)  #(self.dense1)
@@ -52,27 +49,20 @@ def forward(self, x, foo=None):
5249
for epoch in range(n_epoch): ## iterate the dataset n_epoch times
5350
start_time = time.time()
5451
## iterate over the entire training set once (shuffle the data via training)
55-
5652
for X_batch, y_batch in tl.iterate.minibatches(X_train, y_train, batch_size, shuffle=True):
57-
5853
MLP.train() # enable dropout
59-
6054
with tf.GradientTape() as tape:
6155
## compute outputs
6256
_logits = MLP(X_batch, foo=1)
6357
## compute loss and update model
6458
_loss = tl.cost.cross_entropy(_logits, y_batch, name='train_loss')
65-
6659
grad = tape.gradient(_loss, train_weights)
6760
optimizer.apply_gradients(zip(grad, train_weights))
6861

6962
## use training and evaluation sets to evaluate the model every print_freq epoch
7063
if epoch + 1 == 1 or (epoch + 1) % print_freq == 0:
71-
7264
MLP.eval() # disable dropout
73-
7465
print("Epoch {} of {} took {}".format(epoch + 1, n_epoch, time.time() - start_time))
75-
7666
train_loss, train_acc, n_iter = 0, 0, 0
7767
for X_batch, y_batch in tl.iterate.minibatches(X_train, y_train, batch_size, shuffle=False):
7868
_logits = MLP(X_batch, foo=1)
@@ -81,7 +71,6 @@ def forward(self, x, foo=None):
             n_iter += 1
         print(" train foo=1 loss: {}".format(train_loss / n_iter))
         print(" train foo=1 acc: {}".format(train_acc / n_iter))
-
         val_loss, val_acc, n_iter = 0, 0, 0
         for X_batch, y_batch in tl.iterate.minibatches(X_val, y_val, batch_size, shuffle=False):
             _logits = MLP(X_batch, foo=1)  # is_train=False, disable dropout
@@ -90,7 +79,6 @@ def forward(self, x, foo=None):
             n_iter += 1
         print(" val foo=1 loss: {}".format(val_loss / n_iter))
         print(" val foo=1 acc: {}".format(val_acc / n_iter))
-
         val_loss, val_acc, n_iter = 0, 0, 0
         for X_batch, y_batch in tl.iterate.minibatches(X_val, y_val, batch_size, shuffle=False):
             _logits = MLP(X_batch)  # is_train=False, disable dropout

examples/basic_tutorials/tutorial_mnist_mlp_dynamic_2.py

Lines changed: 0 additions & 13 deletions
@@ -1,7 +1,5 @@
 import time
-
 import numpy as np
-
 import tensorflow as tf
 import tensorlayer as tl
 from tensorlayer.layers import Dense, Dropout, Input, LayerList
@@ -19,17 +17,14 @@ class CustomModelHidden(Model):

     def __init__(self):
         super(CustomModelHidden, self).__init__()
-
         self.dropout1 = Dropout(keep=0.8)  #(self.innet)
-
         self.seq = LayerList(
             [
                 Dense(n_units=800, act=tf.nn.relu, in_channels=784),
                 Dropout(keep=0.8),
                 Dense(n_units=800, act=tf.nn.relu, in_channels=800),
             ]
         )
-
         self.dropout3 = Dropout(keep=0.8)  #(self.seq)

     def forward(self, x):
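LayerList wraps an ordered list of layers so that a single call in forward runs them in sequence, which is why the three middle layers collapse into self.seq here. A minimal standalone sketch (assuming, as in the TL 2.x tutorials, that the Model base class lives in tensorlayer.models):

    import numpy as np
    import tensorflow as tf
    from tensorlayer.layers import Dense, Dropout, LayerList
    from tensorlayer.models import Model

    class TinyMLP(Model):
        def __init__(self):
            super(TinyMLP, self).__init__()
            self.seq = LayerList([
                Dense(n_units=64, act=tf.nn.relu, in_channels=784),
                Dropout(keep=0.8),
                Dense(n_units=10, act=None, in_channels=64),
            ])

        def forward(self, x):
            return self.seq(x)  # one call runs every wrapped layer in order

    net = TinyMLP()
    net.eval()
    out = net(np.random.random([4, 784]).astype(np.float32))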
@@ -43,7 +38,6 @@ class CustomModelOut(Model):

     def __init__(self):
         super(CustomModelOut, self).__init__()
-
         self.dense3 = Dense(n_units=10, act=tf.nn.relu, in_channels=800)

     def forward(self, x, foo=None):
@@ -74,30 +68,23 @@ def forward(self, x, foo=None):
 for epoch in range(n_epoch):  ## iterate the dataset n_epoch times
     start_time = time.time()
     ## iterate over the entire training set once (shuffle the data via training)
-
     for X_batch, y_batch in tl.iterate.minibatches(X_train, y_train, batch_size, shuffle=True):
-
         MLP1.train()  # enable dropout
         MLP2.train()
-
         with tf.GradientTape() as tape:
             ## compute outputs
             _hidden = MLP1(X_batch)
             _logits = MLP2(_hidden, foo=1)
             ## compute loss and update model
             _loss = tl.cost.cross_entropy(_logits, y_batch, name='train_loss')
-
         grad = tape.gradient(_loss, train_weights)
         optimizer.apply_gradients(zip(grad, train_weights))

     ## use training and evaluation sets to evaluate the model every print_freq epoch
     if epoch + 1 == 1 or (epoch + 1) % print_freq == 0:
-
         MLP1.eval()  # disable dropout
         MLP2.eval()
-
         print("Epoch {} of {} took {}".format(epoch + 1, n_epoch, time.time() - start_time))
-
         train_loss, train_acc, n_iter = 0, 0, 0
         for X_batch, y_batch in tl.iterate.minibatches(X_train, y_train, batch_size, shuffle=False):
             _hidden = MLP1(X_batch)
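Since one GradientTape optimizes MLP1 and MLP2 together, the train_weights used above presumably concatenates the trainable weights of both models; a sketch of that wiring, under that assumption:

    # hypothetical setup mirroring the loop above
    MLP1 = CustomModelHidden()
    MLP2 = CustomModelOut()
    train_weights = MLP1.trainable_weights + MLP2.trainable_weights
    optimizer = tf.optimizers.Adam(learning_rate=0.0001)   # illustrative learning rate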
