Stitching¶
- cloudreg.scripts.stitching.generate_stitching_commands(stitched_dir, stack_dir, metadata_s3_bucket, metadata_s3_path, do_steps=2)[source]¶
Generate Terastitcher stitching commands given COLM metadata files.
- Parameters:
stitched_dir (str) – Path to store stitched data at.
stack_dir (str) – Path to unstitched raw data.
metadata_s3_bucket (str) – Name of S3 bucket in which metadata is located.
metadata_s3_path (str) – Specific path to metadata files in the bucket.
do_steps (int, optional) – Represents which Terastitcher steps to run. Defaults to ALL_STEPS (2).
- Returns:
Metadata and list of Terastitcher commands
- Return type:
tuple (dict, list of str)
- cloudreg.scripts.stitching.get_metadata(path_to_config)[source]¶
Get metadata from COLM config file.
- Parameters:
path_to_config (str) – Path to Experiment.ini file (COLM config file)
- Returns:
Metadata information.
- Return type:
dict
- cloudreg.scripts.stitching.get_scanned_cells(fname_scanned_cells)[source]¶
Read Scanned Cells.txt file from COLM into list
- Parameters:
fname_scanned_cells (str) – Path to scanned cells file.
- Returns:
Indicates whether or not a given location has been imaged on the COLM
- Return type:
list of lists
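As a sketch of what this parser might do: read each line of the scanned-cells file into a list of flags, yielding a list of lists. The comma-separated 0/1 layout below is an assumption for illustration; the actual Scanned Cells.txt format produced by the COLM may differ, and `read_scanned_cells` is a hypothetical stand-in, not the cloudreg implementation.

```python
import csv
import os
import tempfile

def read_scanned_cells(fname):
    # Parse each non-empty line as a row of comma-separated flags.
    # NOTE: the exact Scanned Cells.txt layout is an assumption here.
    with open(fname) as f:
        return [row for row in csv.reader(f) if row]

# Hypothetical 2x3 grid in which one location was skipped
fd, path = tempfile.mkstemp(suffix=".txt")
with os.fdopen(fd, "w") as f:
    f.write("1,1,1\n1,0,1\n")
matrix = read_scanned_cells(path)
os.remove(path)
print(matrix)  # [['1', '1', '1'], ['1', '0', '1']]
```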
- cloudreg.scripts.stitching.run_terastitcher(raw_data_path, stitched_data_path, input_s3_path, log_s3_path=None, stitch_only=False, compute_only=False)[source]¶
Run Terastitcher commands to fully stitch raw data.
- Parameters:
raw_data_path (str) – Path to raw data (VW0 folder for COLM data)
stitched_data_path (str) – Path to where stitched data will be stored
input_s3_path (str) – S3 Path to where raw data and metadata live
log_s3_path (str, optional) – S3 path to store intermediates and XML files for Terastitcher. Defaults to None.
stitch_only (bool, optional) – Do stitching only if True. Defaults to False.
compute_only (bool, optional) – Compute alignments only if True. Defaults to False.
- Returns:
Metadata associated with this sample from Experiment.ini file (COLM data)
- Return type:
dict
- cloudreg.scripts.stitching.write_import_xml(fname_importxml, scanned_matrix, metadata)[source]¶
Write xml_import file for Terastitcher based on COLM metadata
- Parameters:
fname_importxml (str) – Path to where xml_import.xml should be stored
scanned_matrix (list of lists) – List of locations that have been imaged by the microscope
metadata (dict) – Metadata associated with this COLM experiment
- cloudreg.scripts.stitching.write_terastitcher_commands(fname_ts, metadata, stitched_dir, do_steps)[source]¶
Generate Terastitcher commands from metadata
- Parameters:
fname_ts (str) – Path to bash file to store Terastitcher commands
metadata (dict) – Metadata information about experiment
stitched_dir (str) – Path to where stitched data will be stored
do_steps (int) – Indicator of which steps to run
- Returns:
List of Terastitcher commands to run
- Return type:
list of str
This program uses a main-subordinate approach to consume a queue of elaborations using teraconverter. Copyright (c) 2016: Massimiliano Guarrasi (1), Giulio Iannello (2), Alessandro Bria (2); (1): CINECA, (2): University Campus Bio-Medico of Rome. The program was made in the framework of the HUMAN BRAIN PROJECT. All rights reserved.
EXAMPLE of usage (X is the major version, Y is the minor version, Z is the patch): mpirun -np XX python paraconverterX.Y.Z.py -s=source_volume -d=destination_path --depth=DD --height=HH --width=WW --sfmt=source_format --dfmt=destination_format --resolutions=RR
where: - XX is the desired level of parallelism plus 1 (for the main process) - DD, HH, WW are the values used to partition the image for parallel execution - source and destination formats are allowed formats for teraconverter - RR are the requested resolutions (according to the convention used by teraconverter). See teraconverter documentation for more details.
* Change Log *¶
v2.3.2 2017-10-07 - added management of --isotropic option in the partition algorithm - corrected a bug in function 'collect_instructions'
v2.2.2 2017-10-07 - revisited platform-dependent instructions
v2.2.1 2017-09-19 - added option --info to display the memory needed in GBytes without performing any conversion
v2.2 2017-03-12 - the suspend/resume mechanism can be disabled by changing the value of the variable 'suspend_resume_enabled' (the mechanism is enabled if True, disabled if False) - changed the policy to manage dataset partitioning and eliminated the additional parameter to specify the desired degree of parallelism, which is now passed directly by the main process
v2.1 2017-02-06 - implemented a suspend/resume mechanism; the mechanism can slow down parallel execution if the dataset chunks are relatively small. To avoid this, a RAM disk can be used to save the status (substitute the name 'output_nae' at line 953 with the path of the RAM disk)
v2.0 2016-12-10 - dataset partitioning takes the source format into account to avoid the same image region being read by different TeraConverter instances; requires an additional parameter in the command line (see EXAMPLE of usage above)
- cloudreg.scripts.paraconverter.check_double_quote(inpstring)[source]¶
Check whether a string needs double quotes (if the string contains spaces, it must be enclosed in double quotes). E.g.: --sfmt="TIFF (unstitched, 3D)". Input:
inpstring: input string or array of strings
- Output:
newstring = new string (or array of strings) corrected by quoting if necessary
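The quoting rule above can be sketched as follows. `quote_if_spaced` is an illustrative name, not the cloudreg function; it assumes the `--flag=value` argument shape shown in the example and only quotes the value part when it contains spaces.

```python
def quote_if_spaced(arg):
    # Wrap the value part of a --flag=value argument in double quotes
    # when it contains spaces, e.g. --sfmt=TIFF (unstitched, 3D)
    # becomes --sfmt="TIFF (unstitched, 3D)".
    flag, sep, value = arg.partition("=")
    if sep and " " in value and not value.startswith('"'):
        return f'{flag}="{value}"'
    return arg

print(quote_if_spaced("--sfmt=TIFF (unstitched, 3D)"))
# --sfmt="TIFF (unstitched, 3D)"
print(quote_if_spaced("--resolutions=012"))  # unchanged
```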
- cloudreg.scripts.paraconverter.check_flag(params, string, delete)[source]¶
Check whether a parameter (string) has been declared in the command line (params) and return the associated value. If delete is True, the matched string is deleted. If string is not present, return None. Input:
params = list of parameters from the original command line
string = string to be searched for
delete = Boolean variable indicating whether the matched string must be deleted after its value is copied
- Output:
value = parameter associated to the selected string
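A minimal sketch of this lookup, under the assumption that flags look like `--name=value` (the function name `find_flag` is illustrative, not the cloudreg API):

```python
def find_flag(params, name, delete=False):
    # Return the value of a `name`-prefixed entry in params, or None
    # if absent. If delete is True, remove the matched entry in place.
    for i, p in enumerate(params):
        if p.startswith(name):
            value = p[len(name):]
            if delete:
                del params[i]
            return value
    return None

args = ["--depth=256", "--width=256"]
print(find_flag(args, "--depth=", delete=True))  # 256
print(args)  # ['--width=256']
```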
- cloudreg.scripts.paraconverter.collect_instructions(inst)[source]¶
Collect the remaining part of a list of strings into a single string. Input:
inst = Input list of strings
- Output:
results = String containing all the elements of inst
- cloudreg.scripts.paraconverter.create_commands(gi_np, info=False)[source]¶
Create commands to run in parallel. Input: Output:
first_string = String to initialize parallel computation
list_string = Dictionary of strings containing the command lines to process the data. E.g.: {i:command[i]}
len_arr = Dictionary containing elements like {index:[size_width(i),size_height(i),size_depth(i)],…}
final_string = String to merge all metadata
- cloudreg.scripts.paraconverter.create_sizes(size, wb, max_res, norest=False)[source]¶
Create a 3D array containing the size for each tile along the desired direction. Input:
start_wb = Start parameter for b
size = size (in pixels) of the input image
wb = Rough depth for the tiles in the desired direction
max_res = Maximum level of resolution available (integer)
norest = Boolean variable to check whether we need the last array element (if it differs from the previous one)
- Output:
arr = Array containing the size for each tile along the desired direction
- cloudreg.scripts.paraconverter.create_starts_end(array, start_point=0, open_dx=True)[source]¶
Create arrays containing all the starting and ending indexes for the tiles along the desired direction. Input:
array = Array containing the size for each tile along the desired direction
start_point = Starting index for the input image (optional)
open_dx = If True (the default) ==> ending indexes = subsequent starting indexes ==> open end
- Output:
star_arr = Array containing all the starting indexes for the tiles along the desired direction
end_arr = Array containing all the ending indexes for the tiles along the desired direction
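The start/end construction can be sketched from the per-tile sizes alone. This is an illustrative re-implementation (`starts_ends` is not the cloudreg function name); with half-open intervals each tile's end equals the next tile's start, matching the "open end" convention described above.

```python
def starts_ends(sizes, start_point=0, open_dx=True):
    # Turn per-tile sizes into start and end indexes along one axis.
    # With open_dx=True each tile's end equals the next tile's start
    # (half-open intervals); otherwise ends are inclusive (start+size-1).
    starts, ends, pos = [], [], start_point
    for size in sizes:
        starts.append(pos)
        pos += size
        ends.append(pos if open_dx else pos - 1)
    return starts, ends

print(starts_ends([32, 32, 32, 4]))
# ([0, 32, 64, 96], [32, 64, 96, 100])
```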
- cloudreg.scripts.paraconverter.eliminate_double_quote(inpstring)[source]¶
Check whether the string is already enclosed in quotes. Input:
inpstring: input string or array of strings
- Output:
newstring = new string (or array of strings) corrected by eliminating enclosing quotes if any
- cloudreg.scripts.paraconverter.extract_params()[source]¶
Extract parameters from the command line. Output:
params = list of parameters from original command line
- cloudreg.scripts.paraconverter.generate_final_command(input_name, output_name, wb1, wb2, wb3, sfmt, dfmt, iresolutions, max_res, params, last_string)[source]¶
Generate last command line to merge metadata Input:
input_name = Input file
output_name = Standard output directory
wb1 = Approximate depth for the tiles
wb2 = Approximate height for the tiles
wb3 = Approximate width for the tiles
sfmt = Source format
dfmt = Destination format
iresolutions = List of integer values containing all the desired levels of resolution
max_res = Maximum level of resolution available (integer)
params = Array containing instructions derived from the remaining part of the input string
last_string = Remaining part of the input string
- Output:
final_string = Command line to merge metadata
- cloudreg.scripts.paraconverter.generate_first_command(input_name, output_name, wb1, wb2, wb3, sfmt, dfmt, iresolutions, max_res, params, last_string)[source]¶
Generate first command line Input:
input_name = Input file
output_name = Standard output directory
wb1 = Approximate depth for the tiles
wb2 = Approximate height for the tiles
wb3 = Approximate width for the tiles
sfmt = Source format
dfmt = Destination format
iresolutions = List of integer values containing all the desired levels of resolution
max_res = Maximum level of resolution available (integer)
params = Array containing instructions derived from the remaining part of the input string
last_string = Remaining part of the input string
- Output:
first_string = Command line to preprocess the data
- cloudreg.scripts.paraconverter.generate_parallel_command(start_list, end_list, input_name, output_name, wb1, wb2, wb3, sfmt, dfmt, iresolutions, max_res, params, last_string)[source]¶
Generate the list of parallel command lines Input:
start_list = Ordered list of lists of starting points. E.g.: [[width_in[0], height_in[0], depth_in[0]], [width_in[1], height_in[1], depth_in[1]], … , [width_in[N], height_in[N], depth_in[N]]]
end_list = Ordered list of lists of ending points. E.g.: [[width_fin[0], height_fin[0], depth_fin[0]], [width_fin[1], height_fin[1], depth_fin[1]], … , [width_fin[N], height_fin[N], depth_fin[N]]]
input_name = Input file
output_name = Standard output directory
wb1 = Approximate depth for the tiles
wb2 = Approximate height for the tiles
wb3 = Approximate width for the tiles
sfmt = Source format
dfmt = Destination format
iresolutions = List of integer values containing all the desired levels of resolution
max_res = Maximum level of resolution available (integer)
params = Array containing instructions derived from the remaining part of the input string
last_string = Remaining part of the input string
- Output:
list_string = Dictionary of strings containing the command lines to process the data. E.G.: {i:command[i]}
- cloudreg.scripts.paraconverter.main(queue, rs_fname)[source]¶
Dispatch the work among processors. Input:
queue = list of job inputs
- cloudreg.scripts.paraconverter.opt_algo(D, w, n)[source]¶
Solves the tiling problem by partitioning the interval [0, D-1] into k subintervals of size 2^n b and one final subinterval of size r = D - k 2^n b. Input:
D = dimension of the original array
w = approximate estimate of the value for b
n = desired level of refinement (e.g.: n = 0 => maximum level of refinement; n = 1 => number of points divided by 2^1 = 2; n = 2 => number of points divided by 2^2 = 4)
- Output:
- arr_sizes = [b, r, k, itera]
b = normalized size of standard blocks (size of standard blocks = b * 2^n)
r = rest (if not equal to 0, it is the size of the last block)
k = number of standard blocks
itera = number of iterations to converge
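The arithmetic behind this partition can be illustrated with a simplified, non-iterative sketch: choose a normalized block size b near w / 2^n and split D accordingly so that D = k * (b * 2^n) + r. The real opt_algo iterates to optimize b; `tile_sizes` below is only a hypothetical illustration of the invariant.

```python
def tile_sizes(D, w, n):
    # Partition [0, D-1] into k blocks of size b * 2**n plus a
    # remainder r, so that D == k * (b * 2**n) + r. This sketch just
    # rounds w to the nearest multiple of 2**n instead of iterating.
    b = max(1, round(w / 2 ** n))   # normalized block size
    block = b * 2 ** n              # actual block size
    k = D // block                  # number of standard blocks
    r = D - k * block               # size of the last, smaller block
    return b, r, k

b, r, k = tile_sizes(100, 32, 2)
print(b, r, k)  # 8 4 3  -> 3 blocks of 32 plus a final block of 4
assert k * b * 2 ** 2 + r == 100
```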
- cloudreg.scripts.paraconverter.pop_left(dictionary)[source]¶
Removes the first element of the dictionary and returns it (key:value). Input/Output:
dictionary = Dictionary of string containing the command lines to use. After reading the dictionary the first element is deleted from the dictionary.
- Output:
first_el = first element (values) of the dictionary
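Since Python 3.7 dicts preserve insertion order, so popping the "first" element is a one-liner. A minimal sketch (illustrative, not the cloudreg source):

```python
def pop_left(d):
    # Remove and return the value stored under the first key of d
    # (dicts preserve insertion order in Python 3.7+).
    return d.pop(next(iter(d)))

jobs = {0: "cmd_a", 1: "cmd_b"}
print(pop_left(jobs))  # cmd_a
print(jobs)            # {1: 'cmd_b'}
```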
- cloudreg.scripts.paraconverter.prep_array(wb, r, k)[source]¶
Create a 1D array containing the number of elements per tile. Input:
wb = size of standard blocks
r = rest (if not equal to 0, it is the size of the last block)
k = number of standard blocks
- Output:
array = A list containing the number of elements for every tile.
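Given the (wb, r, k) triple from the tiling step, the per-tile count array is simply k standard tiles plus an optional remainder tile. A sketch (`per_tile_counts` is an illustrative name):

```python
def per_tile_counts(wb, r, k):
    # k standard tiles of wb elements, plus one tile of r elements
    # when the division leaves a remainder.
    return [wb] * k + ([r] if r else [])

print(per_tile_counts(32, 4, 3))       # [32, 32, 32, 4]
print(sum(per_tile_counts(32, 4, 3)))  # 100
```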
- cloudreg.scripts.paraconverter.read_item(input_arr, item, default, message=True)[source]¶
Read the value related to "item" from the list "input_arr" and, if the item is not present, set it to "default". Please note: the function converts the output to the same type as the "default" variable. Input:
input_arr = List of strings from the input command line
item = The item to search for
default = The default value if the item is not present
- Output:
value = Output value for the selected item
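The type-coercion behavior described above (output converted to the type of the default) can be sketched as follows; this is an illustrative re-implementation assuming `--item=value` arguments, not the cloudreg source:

```python
def read_item(input_arr, item, default):
    # Return the value following `item` in input_arr, converted to
    # the type of `default`; fall back to `default` when absent.
    for entry in input_arr:
        if entry.startswith(item):
            return type(default)(entry[len(item):])
    return default

args = ["--depth=256", "--sfmt=TIFF"]
print(read_item(args, "--depth=", 0))     # 256 (int, like the default)
print(read_item(args, "--height=", 128))  # 128 (the default)
```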
- cloudreg.scripts.paraconverter.read_params()[source]¶
Read parameters from the input string and from a file. Input: Output:
input_name = Input file
output_name = Standard output directory
wb1 = Approximate depth for the tiles
wb2 = Approximate height for the tiles
wb3 = Approximate width for the tiles
sfmt = Source format
dfmt = Destination format
iresolutions = List of integer values containing all the desired levels of resolution
max_res = Maximum level of resolution available (integer)
params = Array containing instructions derived from the remaining part of the input string
last_string = Remaining part of the input string
height = Height of the input image
width = Width of the input image
depth = Depth of the input image
- cloudreg.scripts.paraconverter.score_function(params)[source]¶
- Assigns a score value with the formula:
score = 100*N_of_voxel/max(N_of_voxel)
- Input:
params = dictionary containing {input_name : [Nx,Ny,Nz]}
- Output:
scores = dictionary containing {input_name : score}
- cloudreg.scripts.paraconverter.search_for_entry(string_2_serch, file_in, nline=0)[source]¶
Extract from the input file (file_in) up to the line number nline (if declared) the value assigned to string_2_serch. Input:
string_2_serch = string (or list of strings) containing the variable to search for (e.g. 'HEIGHT=')
file_in = name of the file containing the information we need (e.g.: prova.txt or /pico/home/prova.txt)
nline = optional, number of the final row of the file to analyze
- Output:
Output = value (or list of values) assigned to the variable contained in string_2_serch
- cloudreg.scripts.paraconverter.sort_elaborations(scores)[source]¶
Create a list of input_name sorted by score Input:
scores = dictionary of the form {input_name : score}
- Output:
scored = a list of input_name sorted by score
- cloudreg.scripts.paraconverter.sort_list(len_1, len_2, len_3)[source]¶
Create a list sorting the indexes along three directions: Input:
len_1 = Number of elements of the array for the first index
len_2 = Number of elements of the array for the second index
len_3 = Number of elements of the array for the third index
- Output:
order = An ordered list containing a sequence of lists of 3 elements (one for each direction) that identify the position in the local index
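One way to enumerate such index triples is a triple nested loop. The ordering below (third index varying fastest) is an assumption for illustration; the actual traversal order used by paraconverter is not specified here.

```python
def index_triples(len_1, len_2, len_3):
    # All [i, j, k] index triples, with the third index varying fastest.
    return [[i, j, k]
            for i in range(len_1)
            for j in range(len_2)
            for k in range(len_3)]

order = index_triples(2, 1, 2)
print(order)  # [[0, 0, 0], [0, 0, 1], [1, 0, 0], [1, 0, 1]]
```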
- cloudreg.scripts.paraconverter.sort_start_end(start_1, start_2, start_3, end_1, end_2, end_3, size_1, size_2, size_3)[source]¶
Sort start points and end points into two lists of elements. Input:
start_1 = Array containing all the starting indexes for the tiles along the Depth direction
start_2 = Array containing all the starting indexes for the tiles along the Height direction
start_3 = Array containing all the starting indexes for the tiles along the Width direction
end_1 = Array containing all the ending indexes for the tiles along the Depth direction
end_2 = Array containing all the ending indexes for the tiles along the Height direction
end_3 = Array containing all the ending indexes for the tiles along the Width direction
size_1 = Array containing the size of the tiles in the Depth direction
size_2 = Array containing the size of the tiles in the Height direction
size_3 = Array containing the size of the tiles in the Width direction
- Output:
order = An ordered list containing a sequence of lists of 3 elements (one for each direction) that identify the position in the local index
start_list = Ordered list of lists of starting points. E.g.: [[width_in[0], height_in[0], depth_in[0]], [width_in[1], height_in[1], depth_in[1]], … , [width_in[N], height_in[N], depth_in[N]]]
end_list = Ordered list of lists of ending points. E.g.: [[width_fin[0], height_fin[0], depth_fin[0]], [width_fin[1], height_fin[1], depth_fin[1]], … , [width_fin[N], height_fin[N], depth_fin[N]]]
len_arr = Dictionary containing elements like {index:[size_width(i),size_height(i),size_depth(i)],…}
- cloudreg.scripts.paraconverter.sort_work(params, priority)[source]¶
Returns a dictionary as params but ordered by score Input:
params = dictionary of the form {input_name : value}
priority = the list of input_name ordered by the score calculated by score_function
- Output:
sorted_dict = the same dictionary as params but ordered by score
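The score_function / sort_elaborations / sort_work pipeline can be sketched end to end: score each job by its voxel count relative to the largest job, then reorder the params dict by that score. The largest-first ordering is an assumption for illustration, and the helper names below are not the cloudreg API.

```python
def score(params):
    # score = 100 * N_of_voxel / max(N_of_voxel), where N_of_voxel
    # is the product of the [Nx, Ny, Nz] sizes for each input.
    voxels = {name: nx * ny * nz for name, (nx, ny, nz) in params.items()}
    top = max(voxels.values())
    return {name: 100 * v / top for name, v in voxels.items()}

def sort_by_score(params, scores):
    # Rebuild params ordered by descending score (biggest jobs first).
    priority = sorted(scores, key=scores.get, reverse=True)
    return {name: params[name] for name in priority}

params = {"a": (10, 10, 10), "b": (20, 10, 10)}
scores = score(params)
print(scores)                               # {'a': 50.0, 'b': 100.0}
print(list(sort_by_score(params, scores)))  # ['b', 'a']
```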
- cloudreg.scripts.paraconverter.worker(input_file)[source]¶
Perform the elaboration for each element of the queue. Input/Output:
input_file = command to be executed
This program uses a main-subordinate approach to consume a queue of elaborations using teraconverter. Copyright (c) 2016: Massimiliano Guarrasi (1), Giulio Iannello (2), Alessandro Bria (2); (1): CINECA, (2): University Campus Bio-Medico of Rome. The program was made in the framework of the HUMAN BRAIN PROJECT. All rights reserved.
EXAMPLE of usage (X is the major version, Y is the minor version, Z is the patch):
For align step: mpirun -np XX python ParastitcherX.Y.Z.py -2 --projin=xml_import_file --projout=xml_displcomp_file [--sV=VV] [--sH=HH] [--sD=DD] [--imin_channel=C] [ … ]
where: - XX is the desired level of parallelism plus 1 (for the main process) - VV, HH, DD are the half size of the NCC map along V, H, and D directions, respectively - C is the input channel to be used for align computation
For fusion step: mpirun -np XX python ParastitcherX.Y.Z.py -6 --projin=xml_import_file --volout=destination_folder --volout_plugin=format_string [--slicewidth=WWW] [--sliceheight=HHH] [--slicedepth=DDD] [--resolutions=RRR] [ … ]
where: - format_string is one of the formats: "TIFF (series, 2D)", "TIFF, tiled, 2D", "TIFF, tiled, 3D", "TIFF, tiled, 4D" - DDD, HHH, WWW are the values used to partition the image for parallel execution - RRR are the requested resolutions (according to the convention used by teraconverter). See teraconverter documentation for more details.
* Change Log *¶
2018-09-05. Giulio. @CHANGED on non-Windows platforms 'prefix' is automatically switched to './' if executables are not in the system path
2018-08-16. Giulio. @CHANGED command line interface: parameters for step 6 are the same as in the sequential implementation
2018-08-16. Giulio. @ADDED debug control
2018-08-07. Giulio. @CREATED from parastitcher2.0.3.py and paraconverter2.3.2.py
terastitcher -2 --projin=/Users/iannello/Home/Windows/myTeraStitcher/TestData/Ailey/blending/test_00_01_02_03/xml_import_org.xml --projout=/Users/iannello/Home/Windows/myTeraStitcher/TestData/Ailey/blending/test_00_01_02_03/xml_displcomp_seq.xml
mpirun -np 3 python /Users/iannello/Home/Windows/paratools/parastitcher2.0.3.py -2 --projin=/Users/iannello/Home/Windows/myTeraStitcher/TestData/Ailey/blending/test_00_01_02_03/xml_import_org.xml --projout=/Users/iannello/Home/Windows/myTeraStitcher/TestData/Ailey/blending/test_00_01_02_03/xml_displcomp_par2.xml
mpirun -np 3 python /Users/iannello/Home/Windows/paratools/Parastitcher3.0.0.py -2 --projin=/Users/iannello/Home/Windows/myTeraStitcher/TestData/Ailey/blending/test_00_01_02_03/xml_import_org.xml --projout=/Users/iannello/Home/Windows/myTeraStitcher/TestData/Ailey/blending/test_00_01_02_03/xml_displcomp_par2.xml
teraconverter --sfmt="TIFF (unstitched, 3D)" -s=/Users/iannello/Home/Windows/myTeraStitcher/TestData/Ailey/blending/test_00_01_02_03/xml_merging.xml --dfmt="TIFF (series, 2D)" -d=/Users/iannello/Home/Windows/myTeraStitcher/TestData/temp/result_p1 --resolutions=012 --depth=256 --width=256 --height=256
mpirun -np 3 python /Users/iannello/Home/Windows/paratools/paraconverter2.3.2.py --sfmt="TIFF (unstitched, 3D)" -s=/Users/iannello/Home/Windows/myTeraStitcher/TestData/Ailey/blending/test_00_01_02_03/xml_merging.xml --dfmt="TIFF (tiled, 3D)" -d=/Users/iannello/Home/Windows/myTeraStitcher/TestData/temp/result_p1 --resolutions=012 --depth=256 --width=256 --height=256
mpirun -np 3 python /Users/iannello/Home/Windows/paratools/Parastitcher3.0.0.py -6 --sfmt="TIFF (unstitched, 3D)" -s=/Users/iannello/Home/Windows/myTeraStitcher/TestData/Ailey/blending/test_00_01_02_03/xml_merging.xml --dfmt="TIFF (tiled, 3D)" -d=/Users/iannello/Home/Windows/myTeraStitcher/TestData/temp/result_p1 --resolutions=012 --depth=256 --width=256 --height=256
- cloudreg.scripts.parastitcher.check_double_quote(inpstring)[source]¶
Check whether a string needs double quotes (if the string contains spaces, it must be enclosed in double quotes). E.g.: --sfmt="TIFF (unstitched, 3D)". Input:
inpstring: input string or array of strings
- Output:
newstring = new string (or array of strings) corrected by quoting if necessary
- cloudreg.scripts.parastitcher.check_flag(params, string, delete)[source]¶
Check whether a parameter (string) has been declared in the command line (params) and return the associated value. If delete is True, the matched string is deleted. If string is not present, return None. Input:
params = list of parameters from the original command line
string = string to be searched for
delete = Boolean variable indicating whether the matched string must be deleted after its value is copied
- Output:
value = parameter associated to the selected string
- cloudreg.scripts.parastitcher.collect_instructions(inst)[source]¶
Collect the remaining part of a list of strings into a single string. Input:
inst = Input list of strings
- Output:
results = String containing all the elements of inst
- cloudreg.scripts.parastitcher.create_commands(gi_np, info=False)[source]¶
Create commands to run in parallel. Input: Output:
first_string = String to initialize parallel computation
list_string = Dictionary of strings containing the command lines to process the data. E.g.: {i:command[i]}
len_arr = Dictionary containing elements like {index:[size_width(i),size_height(i),size_depth(i)],…}
final_string = String to merge all metadata
- cloudreg.scripts.parastitcher.create_sizes(size, wb, max_res, norest=False)[source]¶
Create a 3D array containing the size for each tile along the desired direction. Input:
start_wb = Start parameter for b
size = size (in pixels) of the input image
wb = Rough depth for the tiles in the desired direction
max_res = Maximum level of resolution available (integer)
norest = Boolean variable to check whether we need the last array element (if it differs from the previous one)
- Output:
arr = Array containing the size for each tile along the desired direction
- cloudreg.scripts.parastitcher.create_starts_end(array, start_point=0, open_dx=True)[source]¶
Create arrays containing all the starting and ending indexes for the tiles along the desired direction. Input:
array = Array containing the size for each tile along the desired direction
start_point = Starting index for the input image (optional)
open_dx = If True (the default) ==> ending indexes = subsequent starting indexes ==> open end
- Output:
star_arr = Array containing all the starting indexes for the tiles along the desired direction
end_arr = Array containing all the ending indexes for the tiles along the desired direction
- cloudreg.scripts.parastitcher.do_additional_partition(nprocs, nrows, ncols, n_ss)[source]¶
All parameters should be float
- cloudreg.scripts.parastitcher.eliminate_double_quote(inpstring)[source]¶
Check whether the string is already enclosed in quotes. Input:
inpstring: input string or array of strings
- Output:
newstring = new string (or array of strings) corrected by eliminating enclosing quotes if any
- cloudreg.scripts.parastitcher.extract_np(inputf)[source]¶
Extract the number of slices along z from the input xml file.
- cloudreg.scripts.parastitcher.extract_params()[source]¶
Extract parameters from the command line. Output:
params = list of parameters from original command line
- cloudreg.scripts.parastitcher.find_last_slash(string)[source]¶
Search for / in a string. If one or more / is found, split the string into a list of two strings: the first contains all the characters to the left of the last / (included), and the second contains the remaining part of the text. If no / is found, the first element of the list is set to ''
- cloudreg.scripts.parastitcher.generate_final_command(input_name, output_name, wb1, wb2, wb3, sfmt, dfmt, iresolutions, max_res, params, last_string)[source]¶
Generate last command line to merge metadata Input:
input_name = Input file
output_name = Standard output directory
wb1 = Approximate depth for the tiles
wb2 = Approximate height for the tiles
wb3 = Approximate width for the tiles
sfmt = Source format
dfmt = Destination format
iresolutions = List of integer values containing all the desired levels of resolution
max_res = Maximum level of resolution available (integer)
params = Array containing instructions derived from the remaining part of the input string
last_string = Remaining part of the input string
- Output:
final_string = Command line to merge metadata
- cloudreg.scripts.parastitcher.generate_first_command(input_name, output_name, wb1, wb2, wb3, sfmt, dfmt, iresolutions, max_res, params, last_string)[source]¶
Generate first command line Input:
input_name = Input file
output_name = Standard output directory
wb1 = Approximate depth for the tiles
wb2 = Approximate height for the tiles
wb3 = Approximate width for the tiles
sfmt = Source format
dfmt = Destination format
iresolutions = List of integer values containing all the desired levels of resolution
max_res = Maximum level of resolution available (integer)
params = Array containing instructions derived from the remaining part of the input string
last_string = Remaining part of the input string
- Output:
first_string = Command line to preprocess the data
- cloudreg.scripts.parastitcher.generate_parallel_command(start_list, end_list, input_name, output_name, wb1, wb2, wb3, sfmt, dfmt, iresolutions, max_res, params, last_string)[source]¶
Generate the list of parallel command lines Input:
start_list = Ordered list of lists of starting points. E.g.: [[width_in[0], height_in[0], depth_in[0]], [width_in[1], height_in[1], depth_in[1]], … , [width_in[N], height_in[N], depth_in[N]]]
end_list = Ordered list of lists of ending points. E.g.: [[width_fin[0], height_fin[0], depth_fin[0]], [width_fin[1], height_fin[1], depth_fin[1]], … , [width_fin[N], height_fin[N], depth_fin[N]]]
input_name = Input file
output_name = Standard output directory
wb1 = Approximate depth for the tiles
wb2 = Approximate height for the tiles
wb3 = Approximate width for the tiles
sfmt = Source format
dfmt = Destination format
iresolutions = List of integer values containing all the desired levels of resolution
max_res = Maximum level of resolution available (integer)
params = Array containing instructions derived from the remaining part of the input string
last_string = Remaining part of the input string
- Output:
list_string = Dictionary of strings containing the command lines to process the data. E.G.: {i:command[i]}
- cloudreg.scripts.parastitcher.main_step2(queue)[source]¶
Dispatch the work among processors.
queue is a list of job inputs
- cloudreg.scripts.parastitcher.main_step6(queue, rs_fname)[source]¶
Dispatch the work among processors. Input:
queue = list of job inputs
- cloudreg.scripts.parastitcher.opt_algo(D, w, n)[source]¶
Solves the tiling problem by partitioning the interval [0, D-1] into k subintervals of size 2^n b and one final subinterval of size r = D - k 2^n b. Input:
D = dimension of the original array
w = approximate estimate of the value for b
n = desired level of refinement (e.g.: n = 0 => maximum level of refinement; n = 1 => number of points divided by 2^1 = 2; n = 2 => number of points divided by 2^2 = 4)
- Output:
- arr_sizes = [b, r, k, itera]
b = normalized size of standard blocks (size of standard blocks = b * 2^n)
r = rest (if not equal to 0, it is the size of the last block)
k = number of standard blocks
itera = number of iterations to converge
- cloudreg.scripts.parastitcher.partition(m, n, N)[source]¶
Return the number of partitions along V and H, respectively, that are optimal to partition a block of size m_V x n_H into at least N sub-blocks.
m: block size along V
n: block size along H
N: number of required partitions
return: p_m, p_n: the number of partitions along V and H, respectively
PRE:
- cloudreg.scripts.parastitcher.pop_left(dictionary)[source]¶
Removes the first element of the dictionary and returns it (key:value). Input/Output:
dictionary = Dictionary of string containing the command lines to use. After reading the dictionary the first element is deleted from the dictionary.
- Output:
first_el = first element (values) of the dictionary
- cloudreg.scripts.parastitcher.prep_array(wb, r, k)[source]¶
Create a 1D array containing the number of elements per tile. Input:
wb = size of standard blocks
r = rest (if not equal to 0, it is the size of the last block)
k = number of standard blocks
- Output:
array = A list containing the number of elements for every tile.
- cloudreg.scripts.parastitcher.read_input(inputf, nline=0)[source]¶
Reads the file inputf at least up to line number nline (if declared).
- cloudreg.scripts.parastitcher.read_item(input_arr, item, default, message=True)[source]¶
Read the value related to "item" from the list "input_arr" and, if the item is not present, set it to "default". Please note: the function converts the output to the same type as the "default" variable. Input:
input_arr = List of strings from the input command line
item = The item to search for
default = The default value if the item is not present
- Output:
value = Output value for the selected item
- cloudreg.scripts.parastitcher.read_params()[source]¶
Read parameters from the input string and from a file. Input: Output:
input_name = Input file
output_name = Standard output directory
wb1 = Approximate depth for the tiles
wb2 = Approximate height for the tiles
wb3 = Approximate width for the tiles
sfmt = Source format
dfmt = Destination format
iresolutions = List of integer values containing all the desired levels of resolution
max_res = Maximum level of resolution available (integer)
params = Array containing instructions derived from the remaining part of the input string
last_string = Remaining part of the input string
height = Height of the input image
width = Width of the input image
depth = Depth of the input image
- cloudreg.scripts.parastitcher.score_function(params)[source]¶
- Assigns a score value with the formula:
score = 100*N_of_voxel/max(N_of_voxel)
- Input:
params = dictionary containing {input_name : [Nx,Ny,Nz]}
- Output:
scores = dictionary containing {input_name : score}
- cloudreg.scripts.parastitcher.search_for_entry(string_2_serch, file_in, nline=0)[source]¶
Extract from the input file (file_in) up to the line number nline (if declared) the value assigned to string_2_serch. Input:
string_2_serch = string (or list of strings) containing the variable to search for (e.g. 'HEIGHT=')
file_in = name of the file containing the information we need (e.g.: prova.txt or /pico/home/prova.txt)
nline = optional, number of the final row of the file to analyze
- Output:
Output = value (or list of values) assigned to the variable contained in string_2_serch
- cloudreg.scripts.parastitcher.sort_elaborations(scores)[source]¶
Create a list of input_name sorted by score Input:
scores = dictionary of the form {input_name : score}
- Output:
scored = a list of input_name sorted by score
- cloudreg.scripts.parastitcher.sort_list(len_1, len_2, len_3)[source]¶
Create a list sorting the indexes along three directions: Input:
len_1 = Number of elements of the array for the first index
len_2 = Number of elements of the array for the second index
len_3 = Number of elements of the array for the third index
- Output:
order = An ordered list containing a sequence of lists of 3 elements (one for each direction) that identify the position in the local index
- cloudreg.scripts.parastitcher.sort_start_end(start_1, start_2, start_3, end_1, end_2, end_3, size_1, size_2, size_3)[source]¶
Sort start points and end points into two lists of elements. Input:
start_1 = Array containing all the starting indexes for the tiles along the Depth direction
start_2 = Array containing all the starting indexes for the tiles along the Height direction
start_3 = Array containing all the starting indexes for the tiles along the Width direction
end_1 = Array containing all the ending indexes for the tiles along the Depth direction
end_2 = Array containing all the ending indexes for the tiles along the Height direction
end_3 = Array containing all the ending indexes for the tiles along the Width direction
size_1 = Array containing the size of the tiles in the Depth direction
size_2 = Array containing the size of the tiles in the Height direction
size_3 = Array containing the size of the tiles in the Width direction
- Output:
order = An ordered list containing a sequence of lists of 3 elements (one for each direction) that identify the position in the local index
start_list = Ordered list of lists of starting points. E.g.: [[width_in[0], height_in[0], depth_in[0]], [width_in[1], height_in[1], depth_in[1]], … , [width_in[N], height_in[N], depth_in[N]]]
end_list = Ordered list of lists of ending points. E.g.: [[width_fin[0], height_fin[0], depth_fin[0]], [width_fin[1], height_fin[1], depth_fin[1]], … , [width_fin[N], height_fin[N], depth_fin[N]]]
len_arr = Dictionary containing elements like {index:[size_width(i),size_height(i),size_depth(i)],…}
- cloudreg.scripts.parastitcher.sort_work(params, priority)[source]¶
Returns a dictionary as params but ordered by score Input:
params = dictionary of the form {input_name : value}
priority = the list of input_name ordered by the score calculated by score_function
- Output:
sorted_dict = the same dictionary as params but ordered by score