tensorboard-pytorch

A module for visualization with tensorboard

class tensorboard.SummaryWriter(log_dir)[source]

Writes Summary directly to event files. The SummaryWriter class provides a high-level API to create an event file in a given directory and add summaries and events to it. The class updates the file contents asynchronously. This allows a training program to call methods to add data to the file directly from the training loop, without slowing down training.

__init__(log_dir)[source]
Parameters:
  • log_dir (string) – Save location for the event file
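
A minimal usage sketch is shown below; the log directory 'runs/exp1' is only an example, and writer.close() is assumed to flush and close the event file when logging is done.

Examples:

from tensorboard import SummaryWriter

writer = SummaryWriter('runs/exp1')
# ... call writer.add_scalar(), writer.add_image(), etc. from the training loop ...
writer.close()
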
add_audio(tag, snd_tensor, global_step=None)[source]

Add audio data to summary.

Parameters:
  • tag (string) – Data identifier
  • snd_tensor (torch.Tensor) – Sound data
  • global_step (int) – Global step value to record
Shape:
snd_tensor: \((1, L)\). The values should be between [-1, 1]. The sample rate is currently fixed at 44100 Hz.
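
For illustration, the sketch below logs one second of a 440 Hz sine wave; the tag and log directory are placeholders.

Examples:

import math
import torch
from tensorboard import SummaryWriter

writer = SummaryWriter('runs/audio_demo')
sample_rate = 44100                                     # fixed by this version
t = torch.linspace(0.0, 1.0, sample_rate)               # one second of samples
snd = torch.sin(2 * math.pi * 440.0 * t).unsqueeze(0)   # shape (1, L), values in [-1, 1]
writer.add_audio('sine_440hz', snd, global_step=0)
writer.close()
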
add_graph(model, lastVar)[source]

Add graph data to summary.

To draw the graph, you need a model m and an input variable t that has the correct size for m. Say you have run r = m(t); then you can use writer.add_graph(m, r) to save the graph. By default, the input tensor does not require gradient, so it is omitted when tracing backward. To draw the input node, pass requires_grad=True when creating the input tensor.
Parameters:
  • model (torch.nn.Module) – Model to draw
  • lastVar (torch.autograd.Variable) – The root node to start tracing from, e.g. the output r = m(t)

Note

This is an experimental feature. Graph drawing is based on autograd’s backward tracing: it recursively follows the next_functions attribute of a variable, drawing each node it encounters. In some cases the result looks strange. See https://github.com/lanpa/tensorboard-pytorch/issues/7
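
The sketch below follows the recipe above: create an input Variable with requires_grad=True, run the model once, and pass the model together with the output to add_graph. The Linear model and the sizes are placeholders.

Examples:

import torch
from torch.autograd import Variable
from tensorboard import SummaryWriter

writer = SummaryWriter('runs/graph_demo')
m = torch.nn.Linear(10, 3)
t = Variable(torch.rand(1, 10), requires_grad=True)  # requires_grad=True so the input node is drawn
r = m(t)                                             # forward pass; r carries the traced autograd history
writer.add_graph(m, r)
writer.close()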

add_histogram(name, values, global_step=None, bins='tensorflow')[source]

Add histogram to summary.

Parameters:
  • name (string) – Data identifier
  • values (numpy.array) – Values to build the histogram from
  • global_step (int) – Global step value to record
  • bins (string) – Binning method; defaults to 'tensorflow'
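
As a sketch, the loop below logs the distribution of simulated weights at each step; passing the values as a numpy array (rather than a torch.Tensor) is an assumption of this example.

Examples:

import numpy as np
from tensorboard import SummaryWriter

writer = SummaryWriter('runs/hist_demo')
for step in range(10):
    values = np.random.normal(0.0, 1.0, size=1000)   # simulated weight distribution
    writer.add_histogram('weights', values, global_step=step)
writer.close()
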
add_image(tag, img_tensor, global_step=None)[source]

Add image data to summary.

Parameters:
  • tag (string) – Data identifier
  • img_tensor (torch.Tensor) – Image data
  • global_step (int) – Global step value to record
Shape:
img_tensor: \((3, H, W)\). It is a good idea to use torchvision.utils.make_grid() to prepare it.
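
The sketch below tiles a random batch into a single \((3, H, W)\) grid with torchvision.utils.make_grid() and logs it; the batch contents, tag, and log directory are placeholders.

Examples:

import torch
import torchvision.utils as vutils
from tensorboard import SummaryWriter

writer = SummaryWriter('runs/image_demo')
batch = torch.rand(16, 3, 32, 32)               # a batch of random RGB images
grid = vutils.make_grid(batch, normalize=True)  # single (3, H, W) image grid scaled to [0, 1]
writer.add_image('random_batch', grid, global_step=0)
writer.close()
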
add_scalar(name, scalar_value, global_step=None)[source]

Add scalar data to summary.

Parameters:
  • name (string) – Data identifier
  • scalar_value (float) – Value to save
  • global_step (int) – Global step value to record
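
A typical use is logging a loss value once per training step, as in this sketch (the decaying value stands in for a real loss, and the tag is arbitrary):

Examples:

from tensorboard import SummaryWriter

writer = SummaryWriter('runs/scalar_demo')
for step in range(100):
    loss = 1.0 / (step + 1)   # stand-in for the real training loss (a Python float)
    writer.add_scalar('train/loss', loss, global_step=step)
writer.close()
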
add_text(tag, text_string, global_step=None)[source]

Add text data to summary.

Parameters:
  • tag (string) – Data identifier
  • text_string (string) – String to save
  • global_step (int) – Global step value to record

Examples:

writer.add_text('lstm', 'This is an lstm', 0)
writer.add_text('rnn', 'This is an rnn', 10)
tensorboard.embedding.add_embedding(mat, save_path, metadata=None, label_img=None)[source]

Add embedding data to summary.

Parameters:
  • mat (torch.Tensor) – A matrix where each row is the feature vector of a data point
  • save_path (string) – Save path (use writer.file_writer.get_logdir() to show embedding along with other summaries)
  • metadata (list) – A list of labels; each element will be converted to a string
  • label_img (torch.Tensor) – Images corresponding to each data point
Shape:

mat: \((N, D)\), where N is number of data and D is feature dimension

label_img: \((N, C, H, W)\)

Note

This function needs tensorflow installed, since it invokes tensorflow to dump the data; that is why I keep it separate from the SummaryWriter class. Please pass writer.file_writer.get_logdir() as save_path to prevent glitches.

If save_path is different from SummaryWriter’s save path, you need to pass the leaf directory to tensorboard’s logdir argument, otherwise it cannot display anything. E.g. if save_path equals 'path/to/embedding', you need to call 'tensorboard --logdir=path/to/embedding' instead of 'tensorboard --logdir=path'.

Finally, this function breaks PyTorch if you have torch.nn.DataParallel in your code. Use it after training completes. See https://github.com/pytorch/pytorch/issues/2230

Examples:

from tensorboard.embedding import add_embedding
import keyword
import torch

# build 100 label strings from Python keywords
meta = []
while len(meta) < 100:
    meta = meta + keyword.kwlist  # get some strings
meta = meta[:100]

for i, v in enumerate(meta):
    meta[i] = v + str(i)

# one thumbnail image per data point, shape (N, C, H, W)
label_img = torch.rand(100, 3, 10, 32)
for i in range(100):
    label_img[i] *= i / 100.0

add_embedding(torch.randn(100, 5), 'embedding1', metadata=meta, label_img=label_img)
add_embedding(torch.randn(100, 5), 'embedding2', label_img=label_img)
add_embedding(torch.randn(100, 5), 'embedding3', metadata=meta)