Utility Functions

class cloudreg.scripts.util.S3Url(url)[source]

Parse an S3 URL into its bucket, key, and full URL.
>>> s = S3Url("s3://bucket/hello/world")
>>> s.bucket
'bucket'
>>> s.key
'hello/world'
>>> s.url
's3://bucket/hello/world'
>>> s = S3Url("s3://bucket/hello/world?qwe1=3#ddd")
>>> s.bucket
'bucket'
>>> s.key
'hello/world?qwe1=3#ddd'
>>> s.url
's3://bucket/hello/world?qwe1=3#ddd'
>>> s = S3Url("s3://bucket/hello/world#foo?bar=2")
>>> s.key
'hello/world#foo?bar=2'
>>> s.url
's3://bucket/hello/world#foo?bar=2'
Attributes:
  • bucket
  • key
  • url
cloudreg.scripts.util.aws_cli(*cmd)[source]

Run an AWS CLI command

Raises:

RuntimeError – Error running aws cli command.
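
A minimal usage sketch, assuming the command is passed as a list of CLI tokens (the bucket name is hypothetical, and the exact argument convention is not spelled out in this entry):

>>> from cloudreg.scripts.util import aws_cli
>>> # Roughly equivalent to running `aws s3 ls s3://my-bucket/` in a shell.
>>> aws_cli(["s3", "ls", "s3://my-bucket/"])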

cloudreg.scripts.util.calc_hierarchy_levels(img_size, lowest_res=1024)[source]

Compute the maximum number of mip levels for a given image size and minimum chunk size

Parameters:
  • img_size (list) – Size of image in x,y,z

  • lowest_res (int, optional) – minimum chunk size in XY. Defaults to 1024.

Returns:

Number of mips

Return type:

int
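
A usage sketch with a hypothetical image size; the exact count returned depends on how the implementation rounds:

>>> from cloudreg.scripts.util import calc_hierarchy_levels
>>> # XY extent is halved per mip level until it reaches the minimum chunk size.
>>> n_mips = calc_hierarchy_levels([8192, 8192, 1000], lowest_res=1024)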

cloudreg.scripts.util.chunks(l, n)[source]

Convert a list into n-size chunks (the last chunk may have fewer than n elements)

Parameters:
  • l (list) – List to chunk

  • n (int) – Elements per chunk

Yields:

list – n-size chunk from l (last chunk may have fewer than n elements)
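
A short sketch of the generator, assuming each yielded chunk is a plain list slice:

>>> from cloudreg.scripts.util import chunks
>>> # Split 7 items into chunks of 3; the last chunk holds the single leftover element.
>>> list(chunks(list(range(7)), 3))  # expected: [[0, 1, 2], [3, 4, 5], [6]]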

cloudreg.scripts.util.download_terastitcher_files(s3_path, local_path)[source]

Download Terastitcher files from S3

Parameters:
  • s3_path (str) – S3 path where Terastitcher files might live

  • local_path (str) – Local path to save Terastitcher files

Returns:

True if files exist at s3 path, else False

Return type:

bool
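
A usage sketch with hypothetical paths:

>>> from cloudreg.scripts.util import download_terastitcher_files
>>> # Returns False if no Terastitcher files exist at the S3 path.
>>> found = download_terastitcher_files("s3://my-bucket/stitching/", "/tmp/terastitcher")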

cloudreg.scripts.util.get_bias_field(img, mask=None, scale=1.0, niters=[50, 50, 50, 50])[source]

Correct bias field in image using the N4ITK algorithm (http://bit.ly/2oFwAun)

Parameters:
  • img (SimpleITK.Image) – Input image with bias field.

  • mask (SimpleITK.Image, optional) – If used, the bias field will only be corrected within the mask. (the default is None, which results in the whole image being corrected.)

  • scale (float, optional) – Scale at which to compute the bias correction; for example, 0.25 computes it on an image downsampled to 1/4 of its original size. (the default is 1.0, which computes the correction at full resolution.)

  • niters (list, optional) – Number of iterations per resolution. Each additional entry in the list adds an additional resolution at which the bias is estimated. (the default is [50, 50, 50, 50] which results in 50 iterations per resolution at 4 resolutions)

Returns:

Bias-corrected image that has the same size and spacing as the input image.

Return type:

SimpleITK.Image
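
A sketch using a hypothetical input file; per the entry above, the return value is the bias-corrected image:

>>> import SimpleITK as sitk
>>> from cloudreg.scripts.util import get_bias_field
>>> img = sitk.ReadImage("autofluorescence.tif")  # hypothetical input volume
>>> corrected = get_bias_field(img, scale=0.25, niters=[50, 50, 50, 50])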

cloudreg.scripts.util.get_matching_s3_keys(bucket, prefix='', suffix='')[source]

Generate the keys in an S3 bucket.

Parameters:
  • bucket (str) – Name of the S3 bucket.

  • prefix (str) – Only fetch keys that start with this prefix (optional).

  • suffix (str) – Only fetch keys that end with this suffix (optional).

Yields:

str – S3 keys matching the given prefix and suffix, if any
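
A usage sketch with a hypothetical bucket and prefix:

>>> from cloudreg.scripts.util import get_matching_s3_keys
>>> for key in get_matching_s3_keys("my-bucket", prefix="raw/", suffix=".tiff"):
...     print(key)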

cloudreg.scripts.util.get_reorientations(in_orient, out_orient)[source]

Generate the axis reordering and flips needed to convert from in_orient to out_orient

Parameters:
  • in_orient (str) – 3-letter input orientation

  • out_orient (str) – 3-letter output orientation

Raises:

Exception – Raised if in_orient or out_orient is not valid

Returns:

New axis order and whether or not each axis needs to be flipped

Return type:

tuple of lists
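
A sketch assuming 3-letter anatomical orientation codes such as "LPS" and "RAS" (the accepted codes are not spelled out in this entry):

>>> from cloudreg.scripts.util import get_reorientations
>>> axis_order, flips = get_reorientations("LPS", "RAS")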

cloudreg.scripts.util.imgResample(img, spacing, size=[], useNearest=False, origin=None, outsideValue=0)[source]

Resample image to certain spacing and size.

Parameters:
  • img (SimpleITK.Image) – Input 3D image.

  • spacing (list) – List of length 3 indicating the voxel spacing as [x, y, z]

  • size (list, optional) – List of length 3 indicating the number of voxels per dim [x, y, z] (the default is [], which will compute the appropriate size based on the spacing.)

  • useNearest (bool, optional) – If True use nearest neighbor interpolation. (the default is False, which will use linear interpolation.)

  • origin (list, optional) – The location in physical space representing the [0, 0, 0] voxel in the input image. (the default is None, which corresponds to an origin of [0, 0, 0].)

  • outsideValue (int, optional) – Value used to pad areas outside the image (the default is 0)

Returns:

Resampled input image.

Return type:

SimpleITK.Image
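
A usage sketch; the input file and spacing values are hypothetical:

>>> import SimpleITK as sitk
>>> from cloudreg.scripts.util import imgResample
>>> img = sitk.ReadImage("volume.tif")  # hypothetical 3D image
>>> iso = imgResample(img, spacing=[0.05, 0.05, 0.05])  # linear interpolation
>>> seg = imgResample(img, spacing=[0.05, 0.05, 0.05], useNearest=True)  # for label images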

cloudreg.scripts.util.run_command_on_server(command, ssh_key_path, ip_address, username='ubuntu')[source]

Run command on remote server

Parameters:
  • command (str) – Command to run

  • ssh_key_path (str) – Local path to the SSH key needed for this server

  • ip_address (str) – IP Address of server to connect to

  • username (str, optional) – Username on remote server. Defaults to “ubuntu”.

Returns:

Errors encountered on the remote server, if any

Return type:

str
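
A usage sketch with a hypothetical key path and IP address:

>>> from cloudreg.scripts.util import run_command_on_server
>>> errors = run_command_on_server("ls /data", "~/.ssh/my-key.pem", "203.0.113.10")
>>> if errors:
...     print(errors)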

cloudreg.scripts.util.start_ec2_instance(instance_id, instance_type)[source]

Start an EC2 instance

Parameters:
  • instance_id (str) – ID of EC2 instance to start

  • instance_type (str) – Type of EC2 instance to start

Returns:

Public IP address of EC2 instance

Return type:

str
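
A usage sketch with a hypothetical instance ID:

>>> from cloudreg.scripts.util import start_ec2_instance
>>> # Returns the public IP address once the instance is running.
>>> ip = start_ec2_instance("i-0123456789abcdef0", "r5.8xlarge")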

cloudreg.scripts.util.tqdm_joblib(tqdm_object)[source]

Context manager that patches joblib so jobs run through joblib.Parallel report their progress to the tqdm progress bar given as an argument
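
A usage sketch wrapping a joblib.Parallel call; the squaring task is just a stand-in workload:

>>> from joblib import Parallel, delayed
>>> from tqdm import tqdm
>>> from cloudreg.scripts.util import tqdm_joblib
>>> with tqdm_joblib(tqdm(total=100)) as progress:
...     results = Parallel(n_jobs=4)(delayed(pow)(i, 2) for i in range(100))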

cloudreg.scripts.util.upload_file_to_s3(local_path, s3_bucket, s3_key)[source]

Upload file to S3 from local storage

Parameters:
  • local_path (str) – Local path to file

  • s3_bucket (str) – S3 bucket name

  • s3_key (str) – S3 key to store file at
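
A usage sketch with hypothetical paths:

>>> from cloudreg.scripts.util import upload_file_to_s3
>>> upload_file_to_s3("/tmp/results.json", "my-bucket", "experiments/results.json")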

class cloudreg.scripts.visualization.S3Url(url)[source]

This class is documented above under cloudreg.scripts.util.S3Url; see that entry for usage examples and the bucket, key, and url attributes.

Create a viz link from S3 layer paths using Neurodata’s deployment of Neuroglancer and Neurodata’s json state server.

Parameters:
  • s3_layer_paths (list) – List of S3 paths to precomputed volumes to include in the viz link.

  • affine_matrices (list of np.ndarray, optional) – List of affine matrices associated with each layer. Affine matrices should be 3x3 for 2D data and 4x4 for 3D data. Defaults to None.

  • shader_controls (str, optional) – String of Neuroglancer-compliant shader controls. Defaults to None.

  • url (str, optional) – URL of the JSON state server used to store the Neuroglancer JSON state. Defaults to “https://json.neurodata.io/v1”.

  • neuroglancer_link (str, optional) – URL of the Neuroglancer deployment to use; by default, Neurodata’s deployment. Defaults to “https://ara.viz.neurodata.io/?json_url=”.

  • output_resolution (np.ndarray, optional) – Desired output resolution for all layers in nanometers. Defaults to np.array([1e-4] * 3) nanometers.

Returns:

viz link to data

Return type:

str
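
A usage sketch; because the function header is missing from this extract, the name create_viz_link below is a guess and the layer paths are hypothetical:

>>> from cloudreg.scripts import visualization
>>> link = visualization.create_viz_link(  # function name assumed, not confirmed above
...     ["s3://my-bucket/precomputed/autofluorescence", "s3://my-bucket/precomputed/atlas"]
... )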

cloudreg.scripts.visualization.get_layer_json(s3_layer_path, affine_matrix, output_resolution)[source]

Generate Neuroglancer JSON for single layer.

Parameters:
  • s3_layer_path (str) – S3 path to precomputed layer.

  • affine_matrix (np.ndarray) – Affine matrix to apply to current layer. Translation in this matrix is in microns.

  • output_resolution (np.ndarray) – Desired output resolution at which to visualize the layer.

Returns:

Neuroglancer JSON for single layer.

Return type:

dict
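
A usage sketch with a hypothetical layer path, an identity affine, and the default output resolution from the entry above:

>>> import numpy as np
>>> from cloudreg.scripts.visualization import get_layer_json
>>> layer = get_layer_json(
...     "s3://my-bucket/precomputed/autofluorescence",
...     np.eye(4),
...     np.array([1e-4] * 3),
... )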

cloudreg.scripts.visualization.get_neuroglancer_json(s3_layer_paths, affine_matrices, output_resolution)[source]

Generate Neuroglancer state json.

Parameters:
  • s3_layer_paths (list of str) – List of S3 paths to precomputed layers.

  • affine_matrices (list of np.ndarray) – List of affine matrices for each layer.

  • output_resolution (np.ndarray) – Resolution we want to visualize at for all layers.

Returns:

Neuroglancer state JSON

Return type:

dict
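
A usage sketch mirroring the single-layer example above, with hypothetical inputs:

>>> import numpy as np
>>> from cloudreg.scripts.visualization import get_neuroglancer_json
>>> state = get_neuroglancer_json(
...     ["s3://my-bucket/precomputed/autofluorescence"],
...     [np.eye(4)],
...     np.array([1e-4] * 3),
... )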

cloudreg.scripts.visualization.get_output_dimensions_json(output_resolution)[source]

Convert output dimensions to Neuroglancer JSON

Parameters:

output_resolution (np.ndarray) – Desired output resolution for precomputed data.

Returns:

Neuroglancer JSON for output dimensions

Return type:

dict
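
A usage sketch with the default output resolution used elsewhere on this page:

>>> import numpy as np
>>> from cloudreg.scripts.visualization import get_output_dimensions_json
>>> dims = get_output_dimensions_json(np.array([1e-4] * 3))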