tensorboard-pytorch
A module for visualization with TensorBoard.
-
class tensorboard.SummaryWriter(log_dir)[source]
Writes Summary directly to event files. The SummaryWriter class provides a high-level API to create an event file in a given directory and add summaries and events to it. The class updates the file contents asynchronously, which allows a training program to call methods to add data to the file directly from the training loop without slowing down training.
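Example (a minimal sketch of the typical workflow; the log directory and tag names are illustrative, and the close() call is assumed to flush and close the event file):

import numpy as np
from tensorboard import SummaryWriter

writer = SummaryWriter('runs/exp1')                # event file is created under runs/exp1
for step in range(10):
    values = np.random.randn(1000)                 # stand-in data for this sketch
    writer.add_histogram('weights', values, step)  # see add_histogram below
writer.close()                                     # assumed to flush pending events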
-
add_audio(tag, snd_tensor, global_step=None)[source]
Add audio data to summary.
Parameters: - tag (string) – Data identifier
- snd_tensor (torch.Tensor) – Sound data
- global_step (int) – Global step value to record
- Shape:
- snd_tensor: \((1, L)\). The values should lie in \([-1, 1]\). The sample rate is currently fixed at 44100 Hz.
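Example (a hypothetical one-second 440 Hz tone; writer is the instance from the sketch above):

import math
import torch

sample_rate = 44100                                  # fixed by this version of the API
t = torch.arange(0, sample_rate) / sample_rate       # one second of time steps
snd = torch.sin(2 * math.pi * 440 * t).view(1, -1)   # shape (1, L), values in [-1, 1]
writer.add_audio('tone', snd, global_step=0)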
-
add_graph(model, lastVar)[source]
Add graph data to summary.
To draw the graph, you need a model m and an input variable t that has the correct size for m. Say you have run r = m(t); then you can use writer.add_graph(m, r) to save the graph. By default, the input tensor does not require gradient, so it will be omitted when back tracing. To draw the input node, pass an additional parameter requires_grad=True when creating the input tensor.
Parameters: - model (torch.nn.Module) – model to draw.
- lastVar (torch.autograd.Variable) – the root node to start from.
Note
This is an experimental feature. Graph drawing is based on autograd's backward tracing: it recursively follows the next_functions attribute of a variable, drawing each node it encounters. In some cases the result looks strange. See https://github.com/lanpa/tensorboard-pytorch/issues/7
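Example (a minimal sketch; the Linear model and its sizes are illustrative, and requires_grad=True makes the input node appear in the drawing):

import torch
from torch.autograd import Variable

m = torch.nn.Linear(10, 2)                            # any nn.Module works here
t = Variable(torch.randn(1, 10), requires_grad=True)  # input that will be drawn
r = m(t)                                              # forward pass builds the autograd graph
writer.add_graph(m, r)                                # trace backward from r and record the graph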
-
add_histogram(tag, values, global_step=None, bins='tensorflow')[source]
Add histogram to summary.
Parameters: - tag (string) – Data identifier
- values (numpy.array) – Values to build histogram
- global_step (int) – Global step value to record
- bins (string) – one of {'tensorflow', 'auto', 'fd', ...}; this determines how the bins are made. The other options are listed at https://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram.html
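Example (logging the distribution of a model's weights once per epoch; m and writer come from the sketches above, and the tag name is illustrative):

for epoch in range(5):
    weights = m.weight.data.numpy().flatten()      # current parameter values as a numpy array
    writer.add_histogram('fc/weight', weights, epoch, bins='auto')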
-
add_image(tag, img_tensor, global_step=None)[source]
Add image data to summary.
Parameters: - tag (string) – Data identifier
- img_tensor (torch.Tensor) – Image data
- global_step (int) – Global step value to record
- Shape:
- img_tensor: \((3, H, W)\). Using torchvision.utils.make_grid() to prepare it is a good idea.
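Example (tiling a hypothetical batch of random images into a single \((3, H, W)\) grid):

import torch
import torchvision

batch = torch.rand(16, 3, 28, 28)                  # a batch of random RGB images
grid = torchvision.utils.make_grid(batch)          # one (3, H, W) image tiling the batch
writer.add_image('samples', grid, global_step=0)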
-
tensorboard.embedding.add_embedding(mat, save_path, metadata=None, label_img=None)[source]
Add embedding data to summary.
Parameters: - mat (torch.Tensor) – A matrix in which each row is the feature vector of a data point
- save_path (string) – Save path (use writer.file_writer.get_logdir() to show the embedding along with other summaries)
- metadata (list) – A list of labels; each element will be converted to a string
- label_img (torch.Tensor) – Images corresponding to each data point
- Shape:
- mat: \((N, D)\), where N is the number of data points and D is the feature dimension
- label_img: \((N, C, H, W)\)
Note
This function needs tensorflow installed, since it invokes tensorflow to dump the data; therefore I separate it from the SummaryWriter class. Please pass writer.file_writer.get_logdir() as save_path to prevent glitches. If save_path is different from the SummaryWriter's save path, you need to pass the leaf directory to tensorboard's --logdir argument, otherwise it cannot display anything. E.g. if save_path equals 'path/to/embedding', you need to call 'tensorboard --logdir=path/to/embedding' instead of 'tensorboard --logdir=path'. Finally, this function breaks PyTorch if you have torch.nn.DataParallel in your code; use it only after training completes. See https://github.com/pytorch/pytorch/issues/2230
Examples:
from tensorboard.embedding import add_embedding
import keyword
import torch

meta = []
while len(meta) < 100:
    meta = meta + keyword.kwlist  # get some strings
meta = meta[:100]
for i, v in enumerate(meta):
    meta[i] = v + str(i)

label_img = torch.rand(100, 3, 10, 32)
for i in range(100):
    label_img[i] *= i / 100.0

add_embedding(torch.randn(100, 5), 'embedding1', metadata=meta, label_img=label_img)
add_embedding(torch.randn(100, 5), 'embedding2', label_img=label_img)
add_embedding(torch.randn(100, 5), 'embedding3', metadata=meta)