Adding Batch support for LSTM_AE #10
Mustapha-AJEGHRIR wants to merge 1 commit into shobrook:master from
Conversation
PR Analysis (review updated until commit cf0a261)
PR Feedback
```diff
  def forward(self, x, seq_len):
-     x = x.repeat(seq_len, 1).unsqueeze(0)
+     if len(x.shape) == 1:  # In case the batch dimension is not there
+         x = x.repeat(seq_len, 1)  # Add the sequence dimension by repeating the embedding
```
Consider using the .unsqueeze() function to add an extra dimension to the tensor instead of using .repeat(). This could potentially improve performance as it avoids creating a larger tensor. [medium]
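A minimal sketch of the trade-off the reviewer is pointing at: `.repeat()` materializes `seq_len` copies of the embedding, while `.unsqueeze()` followed by `.expand()` produces the same values as a broadcasted view without allocating a larger tensor. The embedding size and `seq_len` below are made up for illustration.

```python
import torch

seq_len = 4
x = torch.randn(8)  # 1-D embedding, no batch dimension (hypothetical size)

# Current approach in the diff: repeat allocates seq_len copies.
repeated = x.repeat(seq_len, 1)                # shape (seq_len, 8), new memory

# Suggested alternative: unsqueeze adds a size-1 dim, expand broadcasts
# it as a view over the original storage, with no copy.
expanded = x.unsqueeze(0).expand(seq_len, -1)  # shape (seq_len, 8), no copy

assert torch.equal(repeated, expanded)
```

One caveat: `.expand()` returns a non-contiguous view, so code that later requires contiguous memory (or performs in-place writes) may still need a `.contiguous()` call, at which point the copy happens anyway.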
```diff
  def forward(self, x, seq_len):
-     x = x.repeat(seq_len, 1).unsqueeze(0)
+     if len(x.shape) == 1:  # In case the batch dimension is not there
```
It's a good practice to handle tensor shape manipulations in a separate function. This would make the code more readable and maintainable. [medium]
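The refactor the reviewer suggests could look something like the sketch below. `ensure_seq_batch` is a hypothetical helper name, and the layout (features last, sequence dimension inserted at position 1) is assumed from the diff.

```python
import torch

def ensure_seq_batch(x: torch.Tensor, seq_len: int) -> torch.Tensor:
    """Hypothetical helper: normalize an embedding to (batch, seq_len, features).

    Accepts a 1-D embedding (features,) or a 2-D batch (batch, features)
    and tiles it along a new sequence dimension, as the diff does inline.
    """
    if x.dim() == 1:                 # no batch dimension yet
        x = x.unsqueeze(0)           # (1, features)
    return x.unsqueeze(1).repeat(1, seq_len, 1)  # (batch, seq_len, features)

single = torch.randn(16)             # unbatched embedding
batch = torch.randn(3, 16)           # batch of embeddings
assert ensure_seq_batch(single, 5).shape == (1, 5, 16)
assert ensure_seq_batch(batch, 5).shape == (3, 5, 16)
```

Centralizing the shape logic this way means each `forward()` reads as a single call, and any future change to the batching convention happens in one place.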
```diff
  def forward(self, x):
      seq_len = x.shape[0]
+     if len(x.shape) <= 2:  # In case the batch dimension is not there
```
Instead of checking the tensor dimensions in every forward pass, consider reshaping the input tensor to always have a batch dimension. This would simplify the forward methods and potentially improve performance. [important]
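A sketch of that suggestion: add the batch dimension once at the entry point, so downstream code can assume a 3-D `(batch, seq_len, features)` tensor and never branch on `x.dim()`. The helper name `normalize_batch` and the layout are assumptions, not part of the PR.

```python
import torch

def normalize_batch(x: torch.Tensor) -> torch.Tensor:
    """Sketch: give every input a batch dimension up front so later
    forward() code never needs a dimension check."""
    if x.dim() == 2:                 # (seq_len, features) -> add batch dim
        x = x.unsqueeze(0)           # (1, seq_len, features)
    return x

unbatched = torch.randn(10, 8)       # a single sequence
batched = torch.randn(4, 10, 8)      # a batch of sequences

assert normalize_batch(unbatched).shape == (1, 10, 8)
assert normalize_batch(batched).shape == (4, 10, 8)
```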
```diff
-     torch.rand((layer_dims[-1], out_dim), dtype=torch.float),
-     requires_grad=True
- )
+ self.dense_layer = nn.Linear(layer_dims[-1], out_dim)
```
The change from using a dense matrix to a linear layer is a good one as it makes the code more readable and leverages PyTorch's built-in functionality. However, ensure that this change doesn't affect the model's performance or results. [medium]
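One concrete difference worth checking, sketched below with made-up dimensions: a raw weight matrix (applied, presumably, via a matrix multiply) has no bias term and is initialized with `torch.rand`, whereas `nn.Linear` adds a learned bias by default and uses its own Kaiming-uniform initialization, so the two are not numerically identical even though the output shapes match.

```python
import torch
import torch.nn as nn

layer_dim, out_dim = 32, 8           # hypothetical dimensions

# Old approach from the diff: a raw weight matrix with requires_grad=True.
W = torch.rand((layer_dim, out_dim), dtype=torch.float, requires_grad=True)

# New approach: nn.Linear bundles the weight (plus a bias) and registers
# both as module parameters automatically.
dense = nn.Linear(layer_dim, out_dim)

x = torch.randn(5, layer_dim)
manual = x @ W                       # (5, out_dim), no bias
layered = dense(x)                   # (5, out_dim), includes a learned bias

assert manual.shape == layered.shape == (5, out_dim)
```

If exact backward compatibility matters, `nn.Linear(layer_dim, out_dim, bias=False)` plus a matching initialization would reproduce the old behavior more closely.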
Persistent review updated to latest commit cf0a261
```diff
  def forward(self, x):
      seq_len = x.shape[0]
+     if len(x.shape) <= 2:  # In case the batch dimension is not there
```
Consider adding comments to explain the purpose of the conditional statements checking the shape of 'x'. This would make the code more readable and easier to maintain. [medium]
```diff
+ if len(x.shape) == 1:  # In case the batch dimension is not there
+     x = x.repeat(seq_len, 1)  # Add the sequence dimension by repeating the embedding
+ else:
+     x = x.unsqueeze(1).repeat(1, seq_len, 1)  # Add the sequence dimension by repeating the embedding
```
The 'unsqueeze' method is used to add an extra dimension to the tensor. However, this is only done in the branch where 'x' already has a batch dimension. It would be good to add a comment explaining why this is necessary, to improve code readability. [medium]
```diff
  def forward(self, x, seq_len):
-     x = x.repeat(seq_len, 1).unsqueeze(0)
+     if len(x.shape) == 1:  # In case the batch dimension is not there
+         x = x.repeat(seq_len, 1)  # Add the sequence dimension by repeating the embedding
```
The 'repeat' method is used to repeat the tensor along a specified dimension. However, it's not immediately clear why this is necessary. Adding a comment to explain this would improve code readability. [medium]
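For context on why the tiling is needed at all, a small sketch: in a sequence autoencoder, the decoder consumes an input at every timestep, so the single encoded embedding is repeated `seq_len` times to form that input sequence. The values below are made up for illustration.

```python
import torch

seq_len = 3
x = torch.tensor([1.0, 2.0])   # encoded embedding, shape (2,)

# Tile the embedding once per decoder timestep.
seq = x.repeat(seq_len, 1)     # shape (3, 2): one copy per timestep
print(seq)
# tensor([[1., 2.],
#         [1., 2.],
#         [1., 2.]])
```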
```diff
-     torch.rand((layer_dims[-1], out_dim), dtype=torch.float),
-     requires_grad=True
- )
+ self.dense_layer = nn.Linear(layer_dims[-1], out_dim)
```
The dense layer is created using the 'nn.Linear' method. It would be beneficial to add a comment explaining why this change was made from using a dense matrix to a dense layer. [medium]
Hello there,
I have added batch support for the LSTM_AE. I don't know if this is 100% compatible with the rest of the library, since I only use the LSTM_AE in one of my projects, but I decided to contribute 😃.
Thanks