stackSentinel: use 2-digit numbers in run_files to better sort 10 or more steps

update topsStack/README.md accordingly.

stackSentinel: suppress matplotlib DEBUG msg

stackSentinel: remove the unused imp import
LT1AB
Zhang Yunjun 2020-04-21 23:30:46 -07:00 committed by piyushrpt
parent 0f60e9f24c
commit 659a7ed6b0
2 changed files with 92 additions and 86 deletions
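
The motivation for the two-digit numbering is that run-file names are sorted as plain strings, so stacks with 10 or more steps used to interleave (run_10 sorting before run_2). A minimal Python sketch (illustration only, not part of the commit):

```python
# String sorting puts 'run_10' before 'run_2'; zero-padding restores natural order.
single = ['run_1', 'run_2', 'run_9', 'run_10']
padded = ['run_{:02d}'.format(i) for i in (1, 2, 9, 10)]
print(sorted(single))  # ['run_1', 'run_10', 'run_2', 'run_9']   -- wrong order
print(sorted(padded))  # ['run_01', 'run_02', 'run_09', 'run_10'] -- correct order
```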

topsStack/README.md

@@ -10,7 +10,7 @@ To use the sentinel stack processor, make sure to add the path of your `contrib/
The scripts provide support for Sentinel-1 TOPS stack processing. Currently supported workflows include a coregistered stack of SLCs, interferograms, offsets, and coherence.
-`stackSentinel.py` generates all configuration and run files required to be executed on a stack of Sentinel-1 TOPS data. When stackSentinel.py is executed for a given workflow (-W option), a **configs** and a **run_files** folder are generated. No processing is performed at this stage. The run_files folder contains the different run\_#\_description files, which are to be executed as shell scripts in run-number order. Each of these run scripts calls specific config files contained in the “configs” folder, which invoke ISCE in a modular fashion. The config and run files change depending on the selected workflow. To make the run_# files executable, change the file permissions accordingly (e.g., `chmod +x run_1_unpack_slc`).
+`stackSentinel.py` generates all configuration and run files required to be executed on a stack of Sentinel-1 TOPS data. When stackSentinel.py is executed for a given workflow (-W option), a **configs** and a **run_files** folder are generated. No processing is performed at this stage. The run_files folder contains the different run\_#\_description files, which are to be executed as shell scripts in run-number order. Each of these run scripts calls specific config files contained in the “configs” folder, which invoke ISCE in a modular fashion. The config and run files change depending on the selected workflow. To make the run_# files executable, change the file permissions accordingly (e.g., `chmod +x run_01_unpack_slc`).
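
For instance, to make every generated run file executable in one step (an illustrative one-liner, assuming the default run_files layout):

```bash
chmod +x run_files/run_*
```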
```bash
stackSentinel.py -H # To see workflow examples,
@@ -74,53 +74,53 @@ stackSentinel.py -s ../SLC/ -d ../DEM/demLat_N18_N20_Lon_W100_W097.dem.wgs84 -a
By running the command above, the configs and run_files folders are created. The user needs to execute each run file in order; the order is specified by the index number in the run file name. For the example above, the run_files folder includes the following files:
-- run_1_unpack_slc_topo_master
-- run_2_average_baseline
-- run_3_extract_burst_overlaps
-- run_4_overlap_geo2rdr_resample
-- run_5_pairs_misreg
-- run_6_timeseries_misreg
-- run_7_geo2rdr_resample
-- run_8_extract_stack_valid_region
-- run_9_merge
+- run_01_unpack_slc_topo_master
+- run_02_average_baseline
+- run_03_extract_burst_overlaps
+- run_04_overlap_geo2rdr_resample
+- run_05_pairs_misreg
+- run_06_timeseries_misreg
+- run_07_geo2rdr_resample
+- run_08_extract_stack_valid_region
+- run_09_merge
- run_10_grid_baseline
The generated run files are self-descriptive. Below is a short explanation of what each run_file does:
-**run_1_unpack_slc_topo_master:**
+**run_01_unpack_slc_topo_master:**
Includes commands to unpack Sentinel-1 TOPS SLCs using ISCE readers. For older SLCs, which need the antenna elevation pattern correction, the file is extracted and written to disk. For newer versions of the SLCs, which don't need the elevation antenna pattern correction, only a GDAL virtual “vrt” file (and an ISCE xml file) is generated. The “.vrt” file points to the Sentinel SLC file and reads it whenever required during processing. If a user wants to write the “.vrt” SLC file to disk, it can be done easily using gdal_translate (e.g., `gdal_translate -of ENVI File.vrt File.slc`).
-The “run_1_unpack_slc_topo_master” file also includes a command that refers to the config file of the stack master, which contains the configuration for running topo for the stack master. Note that in the pair-wise processing strategy one should run topo (mapping from range-Doppler to geo coordinates) for all pairs. However, with stackSentinel, topo needs to be run only once, for the master in the stack.
+The “run_01_unpack_slc_topo_master” file also includes a command that refers to the config file of the stack master, which contains the configuration for running topo for the stack master. Note that in the pair-wise processing strategy one should run topo (mapping from range-Doppler to geo coordinates) for all pairs. However, with stackSentinel, topo needs to be run only once, for the master in the stack.
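
As a concrete form of the inline gdal_translate example above (the file names are placeholders from the README's own example):

```bash
# Materialize a virtual ".vrt" SLC on disk as a real ENVI-format file.
gdal_translate -of ENVI File.vrt File.slc
```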
-**run_2_average_baseline:**
+**run_02_average_baseline:**
Computes the average baseline for the stack. These baselines are not used for processing anywhere; they are only an approximation and can be used for plotting purposes. A more precise baseline grid is estimated later, in run_10.
-**run_3_extract_burst_overlaps:**
+**run_03_extract_burst_overlaps:**
Burst overlaps are extracted for estimating azimuth misregistration using the NESD technique. If the coregistration method is chosen to be “geometry”, then this run file won't exist and the overlaps are not extracted.
-**run_4_overlap_geo2rdr_resample:**
+**run_04_overlap_geo2rdr_resample:**
Runs geo2rdr to estimate the geometrical offsets between the slave burst overlaps and the stack master burst overlaps. The slave burst overlaps are then resampled to the stack master burst overlaps.
-**run_5_pairs_misreg:**
+**run_05_pairs_misreg:**
Using the coregistered stack burst overlaps generated in the previous step, differential overlap interferograms are generated and used for estimating azimuth misregistration with the Enhanced Spectral Diversity (ESD) technique.
-**run_6_timeseries_misreg:**
+**run_06_timeseries_misreg:**
A time series of azimuth and range misregistration is estimated with respect to the stack master. The time series is a least-squares estimation from the pair misregistrations of the previous step.
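
As an illustration of this kind of inversion (a minimal numpy sketch with hypothetical dates and offsets, not the topsStack implementation):

```python
import numpy as np

dates = ['20200101', '20200113', '20200125', '20200206']  # hypothetical epochs
pairs = [(0, 1), (1, 2), (0, 2), (2, 3)]                  # (reference, secondary) indices
misreg = np.array([0.02, -0.01, 0.01, 0.03])              # hypothetical pair offsets

# Design matrix: each pair observes the difference between the two dates'
# misregistration; the master (index 0) is fixed to zero.
A = np.zeros((len(pairs), len(dates) - 1))
for row, (i, j) in enumerate(pairs):
    if j > 0:
        A[row, j - 1] = 1.0
    if i > 0:
        A[row, i - 1] = -1.0

ts = np.linalg.lstsq(A, misreg, rcond=None)[0]
print(dict(zip(dates[1:], ts)))  # misregistration of each date w.r.t. the master
```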
-**run_7_geo2rdr_resample:**
+**run_07_geo2rdr_resample:**
Using the orbits and DEM, the geometrical offsets between all slave SLCs and the stack master are computed. The geometrical offsets, together with the misregistration time series from the previous step, are used for precise coregistration of each burst SLC.
-**run_8_extract_stack_valid_region:**
+**run_08_extract_stack_valid_region:**
The valid region between burst SLCs at the burst overlap areas changes slightly for different acquisitions. We therefore need to keep track of these overlaps, which will be used when merging bursts. Without this knowledge, lines of invalid data may appear in the merged products at the burst overlaps.
-**run_9_merge:**
+**run_09_merge:**
Merges all bursts for the master and coregistered SLCs. The geometry files are also merged, including longitude, latitude, shadow and layover masks, line-of-sight files, etc.
@@ -166,4 +166,4 @@ stackSentinel.py -s ../SLC/ -d ../../MexicoCity/demLat_N18_N20_Lon_W100_W097.dem
This workflow is similar to the previous one. The difference is that the interferograms are not unwrapped.
-#### 5. Execute the commands in run files (run_1*, run_2*, etc.) in the "run_files" folder ####
+#### 5. Execute the commands in run files (run_01*, run_02*, etc.) in the "run_files" folder ####
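
A minimal sketch of step 5 (assuming bash and the default file names; the zero-padded numbering makes the shell glob expand in execution order):

```bash
for runfile in run_files/run_*; do
    echo "Running: ${runfile}"
    bash "${runfile}"
done
```

Within a single run file, the individual commands are independent of each other and could also be dispatched in parallel (e.g., with xargs -P).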

topsStack/stackSentinel.py

@@ -4,17 +4,25 @@
#######################
-import os, imp, sys, glob
+import os, sys, glob
import argparse
import configparser
import datetime
import time
import numpy as np
+# suppress matplotlib DEBUG message
+from matplotlib.path import Path as Path
+import logging
+mpl_logger = logging.getLogger('matplotlib')
+mpl_logger.setLevel(logging.WARNING)
import isce
import isceobj
from isceobj.Sensor.TOPS.Sentinel1 import Sentinel1
from Stack import config, run, sentinelSLC
helpstr= '''
Stack processor for Sentinel-1 data using ISCE software.
@@ -32,7 +40,7 @@ Following are required to start processing:
Note that stackSentinel.py does not process any data. It only prepares a lot of input files for processing and a lot of run files. Then you need to execute all those generated run files in order. To know what is really going on, after running stackSentinel.py, look at each run file generated by stackSentinel.py. Each run file actually has several commands that are independent from each other and can be executed in parallel. The config files for each run file include the processing options to execute a specific command/function.
-Note also that run files need to be executed in order, i.e., running run_3 needs results from run_2, etc.
+Note also that run files need to be executed in order, i.e., running run_03 needs results from run_02, etc.
##############################################
@@ -264,8 +272,7 @@ def get_dates(inps):
lons.append(lon)
-import matplotlib
-from matplotlib.path import Path as Path
# bbox = SNWE
# polygon = bbox[0] bbox[2] SW
@@ -273,7 +280,7 @@ def get_dates(inps):
# bbox[1] bbox[3] NE
# bbox[1] bbox[2] NW
-poly =Path(bbox_poly)
+poly = Path(bbox_poly)
point = (lat,lon)
in_bbox = poly.contains_point(point)
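
The containment test above uses matplotlib's point-in-polygon routine; a self-contained sketch of the same pattern (coordinates hypothetical):

```python
from matplotlib.path import Path

# Rectangle with corners at (lat, lon) = (30, 50) .. (32, 52).
bbox_poly = [[30.0, 50.0], [30.0, 52.0], [32.0, 52.0], [32.0, 50.0]]
poly = Path(bbox_poly)
print(poly.contains_point((31.0, 51.0)))  # True: inside the polygon
print(poly.contains_point((29.0, 51.0)))  # False: outside
```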
@@ -286,8 +293,8 @@ def get_dates(inps):
# If the product is still being rejected, check if the BBOX corners fall within the frame
if reject_SAFE:
for point in bbox_poly:
-frame= [[a,b] for a,b in zip(lats,lons)]
-poly=Path(frame)
+frame = [[a,b] for a,b in zip(lats,lons)]
+poly = Path(frame)
in_frame = poly.contains_point(point)
if in_frame:
reject_SAFE=False
@@ -436,19 +443,19 @@ def slcStack(inps, acquisitionDates, stackMasterDate, slaveDates, safe_dict, upd
if not updateStack:
i += 1
runObj = run()
-runObj.configure(inps, 'run_' + str(i) + "_unpack_topo_master")
+runObj.configure(inps, 'run_{:02d}_unpack_topo_master'.format(i))
runObj.unpackStackMasterSLC(safe_dict)
runObj.finalize()
i+=1
runObj = run()
-runObj.configure(inps, 'run_' + str(i) + "_unpack_slave_slc")
+runObj.configure(inps, 'run_{:02d}_unpack_slave_slc'.format(i))
runObj.unpackSlavesSLC(stackMasterDate, slaveDates, safe_dict)
runObj.finalize()
i+=1
runObj = run()
-runObj.configure(inps, 'run_' + str(i) + "_average_baseline")
+runObj.configure(inps, 'run_{:02d}_average_baseline'.format(i))
runObj.averageBaseline(stackMasterDate, slaveDates)
runObj.finalize()
@@ -456,25 +463,25 @@ def slcStack(inps, acquisitionDates, stackMasterDate, slaveDates, safe_dict, upd
if not updateStack:
i+=1
runObj = run()
-runObj.configure(inps, 'run_' + str(i) + "_extract_burst_overlaps")
+runObj.configure(inps, 'run_{:02d}_extract_burst_overlaps'.format(i))
runObj.extractOverlaps()
runObj.finalize()
i += 1
runObj = run()
-runObj.configure(inps, 'run_' + str(i) + "_overlap_geo2rdr")
+runObj.configure(inps, 'run_{:02d}_overlap_geo2rdr'.format(i))
runObj.geo2rdr_offset(slaveDates)
runObj.finalize()
i += 1
runObj = run()
-runObj.configure(inps, 'run_' + str(i) + "_overlap_resample")
+runObj.configure(inps, 'run_{:02d}_overlap_resample'.format(i))
runObj.resample_with_carrier(slaveDates)
runObj.finalize()
i+=1
runObj = run()
-runObj.configure(inps, 'run_' + str(i) + "_pairs_misreg")
+runObj.configure(inps, 'run_{:02d}_pairs_misreg'.format(i))
if updateStack:
runObj.pairs_misregistration(slaveDates, safe_dict)
else:
@@ -483,39 +490,39 @@ def slcStack(inps, acquisitionDates, stackMasterDate, slaveDates, safe_dict, upd
i+=1
runObj = run()
-runObj.configure(inps, 'run_' + str(i) + "_timeseries_misreg")
+runObj.configure(inps, 'run_{:02d}_timeseries_misreg'.format(i))
runObj.timeseries_misregistration()
runObj.finalize()
i += 1
runObj = run()
-runObj.configure(inps, 'run_' + str(i) + "_fullBurst_geo2rdr")
+runObj.configure(inps, 'run_{:02d}_fullBurst_geo2rdr'.format(i))
runObj.geo2rdr_offset(slaveDates, fullBurst='True')
runObj.finalize()
i += 1
runObj = run()
-runObj.configure(inps, 'run_' + str(i) + "_fullBurst_resample")
+runObj.configure(inps, 'run_{:02d}_fullBurst_resample'.format(i))
runObj.resample_with_carrier(slaveDates, fullBurst='True')
runObj.finalize()
i+=1
runObj = run()
-runObj.configure(inps, 'run_' + str(i) + "_extract_stack_valid_region")
+runObj.configure(inps, 'run_{:02d}_extract_stack_valid_region'.format(i))
runObj.extractStackValidRegion()
runObj.finalize()
if mergeSLC:
i+=1
runObj = run()
-runObj.configure(inps, 'run_' + str(i) + "_merge")
+runObj.configure(inps, 'run_{:02d}_merge'.format(i))
runObj.mergeMaster(stackMasterDate, virtual = 'False')
runObj.mergeSlaveSLC(slaveDates, virtual = 'False')
runObj.finalize()
i+=1
runObj = run()
-runObj.configure(inps, 'run_' + str(i) + "_grid_baseline")
+runObj.configure(inps, 'run_{:02d}_grid_baseline'.format(i))
runObj.gridBaseline(stackMasterDate, slaveDates)
runObj.finalize()
@@ -530,20 +537,20 @@ def correlationStack(inps, acquisitionDates, stackMasterDate, slaveDates, safe_d
i+=1
runObj = run()
-runObj.configure(inps, 'run_' + str(i) + "_merge_master_slave_slc")
+runObj.configure(inps, 'run_{:02d}_merge_master_slave_slc'.format(i))
runObj.mergeMaster(stackMasterDate, virtual = 'True')
runObj.mergeSlaveSLC(slaveDates, virtual = 'True')
runObj.finalize()
i+=1
runObj = run()
-runObj.configure(inps, 'run_' + str(i) + "_merge_burst_igram")
+runObj.configure(inps, 'run_{:02d}_merge_burst_igram'.format(i))
runObj.burstIgram_mergeBurst(acquisitionDates, safe_dict, pairs)
runObj.finalize()
i+=1
runObj = run()
-runObj.configure(inps, 'run_' + str(i) + "_filter_coherence")
+runObj.configure(inps, 'run_{:02d}_filter_coherence'.format(i))
runObj.filter_coherence(pairs)
runObj.finalize()
@@ -554,32 +561,32 @@ def interferogramStack(inps, acquisitionDates, stackMasterDate, slaveDates, safe
i+=1
runObj = run()
-runObj.configure(inps, 'run_' + str(i) + "_merge_master_slave_slc")
+runObj.configure(inps, 'run_{:02d}_merge_master_slave_slc'.format(i))
runObj.mergeMaster(stackMasterDate, virtual = 'True')
runObj.mergeSlaveSLC(slaveDates, virtual = 'True')
runObj.finalize()
i+=1
runObj = run()
-runObj.configure(inps, 'run_' + str(i) + "_generate_burst_igram")
+runObj.configure(inps, 'run_{:02d}_generate_burst_igram'.format(i))
runObj.generate_burstIgram(acquisitionDates, safe_dict, pairs)
runObj.finalize()
i += 1
runObj = run()
-runObj.configure(inps, 'run_' + str(i) + "_merge_burst_igram")
+runObj.configure(inps, 'run_{:02d}_merge_burst_igram'.format(i))
runObj.igram_mergeBurst(acquisitionDates, safe_dict, pairs)
runObj.finalize()
i+=1
runObj = run()
-runObj.configure(inps, 'run_' + str(i) + "_filter_coherence")
+runObj.configure(inps, 'run_{:02d}_filter_coherence'.format(i))
runObj.filter_coherence(pairs)
runObj.finalize()
i+=1
runObj = run()
-runObj.configure(inps, 'run_' + str(i) + "_unwrap")
+runObj.configure(inps, 'run_{:02d}_unwrap'.format(i))
runObj.unwrap(pairs)
runObj.finalize()
@@ -590,14 +597,14 @@ def offsetStack(inps, acquisitionDates, stackMasterDate, slaveDates, safe_dict,
i+=1
runObj = run()
-runObj.configure(inps, 'run_' + str(i) + "_merge_master_slave_slc")
+runObj.configure(inps, 'run_{:02d}_merge_master_slave_slc'.format(i))
runObj.mergeMaster(stackMasterDate, virtual = 'False')
runObj.mergeSlaveSLC(slaveDates, virtual = 'False')
runObj.finalize()
i+=1
runObj = run()
-runObj.configure(inps, 'run_' + str(i) + "_dense_offsets")
+runObj.configure(inps, 'run_{:02d}_dense_offsets'.format(i))
runObj.denseOffsets(pairs)
runObj.finalize()
@@ -763,4 +770,3 @@ if __name__ == "__main__":
# Main engine
main()