stackSentinel: use 2-digit numbers in run_files so 10 or more steps sort correctly

update topsStack/README.md accordingly.

stackSentinel: suppress matplotlib DEBUG msg

stackSentinel: remove the unused imp import
Zhang Yunjun 2020-04-21 23:30:46 -07:00 committed by piyushrpt
parent 0f60e9f24c
commit 659a7ed6b0
2 changed files with 92 additions and 86 deletions


@@ -10,7 +10,7 @@ To use the sentinel stack processor, make sure to add the path of your `contrib/
The scripts provide support for Sentinel-1 TOPS stack processing. Currently supported workflows include a coregistered stack of SLCs, interferograms, offsets, and coherence.
`stackSentinel.py` generates all configuration and run files required to process a stack of Sentinel-1 TOPS data. When stackSentinel.py is executed for a given workflow (-W option), a **configs** and a **run_files** folder are generated. No processing is performed at this stage. The run_files folder contains the run\_#\_description files, which are to be executed as shell scripts in run-number order. Each of these run scripts calls specific configure files contained in the “configs” folder, which invoke ISCE in a modular fashion. The configure and run files change depending on the selected workflow. To make run_# files executable, change the file permissions accordingly (e.g., `chmod +x run_01_unpack_slc`).
```bash
stackSentinel.py -H #To see workflow examples,
@@ -74,53 +74,53 @@ stackSentinel.py -s ../SLC/ -d ../DEM/demLat_N18_N20_Lon_W100_W097.dem.wgs84 -a
By running the command above, the configs and run_files folders are created. The user needs to execute each run file in order; the order is specified by the index number in the run file name (see the short sketch after the list below for why the zero-padded numbering matters). For the example above, the run_files folder includes the following files:
- run_01_unpack_slc_topo_master
- run_02_average_baseline
- run_03_extract_burst_overlaps
- run_04_overlap_geo2rdr_resample
- run_05_pairs_misreg
- run_06_timeseries_misreg
- run_07_geo2rdr_resample
- run_08_extract_stack_valid_region
- run_09_merge
- run_10_grid_baseline
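The switch to two-digit step numbers in this commit is what keeps these names in the right order once a workflow has ten or more steps: file listings and `sorted()` compare the names lexicographically, so `run_10_*` would otherwise sort before `run_2_*`. A quick illustration in plain Python (not part of the stack processor):

```python
# Lexicographic sorting of run-file names: 1-digit vs. 2-digit step numbers.
one_digit = ['run_1_unpack_slc_topo_master', 'run_2_average_baseline', 'run_10_grid_baseline']
two_digit = ['run_01_unpack_slc_topo_master', 'run_02_average_baseline', 'run_10_grid_baseline']

print(sorted(one_digit))  # 'run_10_...' sorts before 'run_1_...' and 'run_2_...': wrong order
print(sorted(two_digit))  # 'run_01_...', 'run_02_...', 'run_10_...': correct order
```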
The generated run files are self-descriptive. Below is a short explanation of what each run file does:
**run_01_unpack_slc_topo_master:**
Includes commands to unpack Sentinel-1 TOPS SLCs using ISCE readers. For older SLCs which need the elevation antenna pattern correction, the file is extracted and written to disk. For newer SLCs which don't need the elevation antenna pattern correction, only a gdal virtual “vrt” file (and an isce xml file) is generated. The “.vrt” file points to the Sentinel SLC file and reads it whenever required during processing. If a user wants to write the “.vrt” SLC file to disk, it can be done easily using gdal_translate (e.g., `gdal_translate -of ENVI File.vrt File.slc`).
The “run_01_unpack_slc_topo_master” run file also includes a command that refers to the config file of the stack master, which includes the configuration for running topo for the stack master. Note that in the pair-wise processing strategy one should run topo (mapping from range-Doppler to geo coordinates) for all pairs. However, with stackSentinel, topo needs to be run only once, for the master of the stack.
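As a hedged alternative to the `gdal_translate` command-line example above, the same conversion can be done from Python with the GDAL bindings (a minimal sketch; `File.vrt`/`File.slc` are the placeholder names from the example, not files created by the stack processor):

```python
# Write a virtual ".vrt" SLC to disk as an ENVI file using the GDAL Python API.
from osgeo import gdal

gdal.UseExceptions()
gdal.Translate('File.slc', 'File.vrt', format='ENVI')  # same as: gdal_translate -of ENVI File.vrt File.slc
```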
**run_02_average_baseline:**
Computes average baseline for the stack. These baselines are not used for processing anywhere. They are only an approximation and can be used for plotting purposes. A more precise baseline grid is estimated later in run_10.
**run_03_extract_burst_overlaps:**
Burst overlaps are extracted for estimating azimuth misregistration using the NESD technique. If the coregistration method is chosen to be “geometry”, then this run file won't exist and the overlaps are not extracted.
**run_04_overlap_geo2rdr_resample:**
Runs geo2rdr to estimate the geometric offsets between the slave burst overlaps and the stack master burst overlaps. The slave burst overlaps are then resampled to the stack master burst overlaps.
**run_05_pairs_misreg:**
Using the coregistered stack burst overlaps generated from the previous step, differential overlap interferograms are generated and are used for estimating azimuth misregistration using the Enhanced Spectral Diversity (ESD) technique.
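Conceptually, ESD works on the double difference of the two overlap interferograms: its phase is proportional to the azimuth misregistration through the Doppler centroid frequency difference between the two looks. A heavily simplified sketch of that idea (illustrative only, with hypothetical array names; this is not the ISCE implementation):

```python
# Conceptual ESD estimate for one burst overlap (not the ISCE code).
import numpy as np

def esd_azimuth_shift(ifg_top, ifg_bot, df_dc, dt_az):
    """ifg_top/ifg_bot: complex master x conj(slave) overlap interferograms from
    the upper- and lower-burst looks; df_dc: Doppler centroid frequency
    difference between the looks [Hz]; dt_az: azimuth line time interval [s]."""
    double_diff = ifg_top * np.conj(ifg_bot)   # double-difference interferogram
    phase = np.angle(np.nansum(double_diff))   # average phase (coherence weighting omitted)
    # delta_phi = 2*pi * df_dc * (shift_in_lines * dt_az)  ->  solve for the shift
    return phase / (2.0 * np.pi * df_dc * dt_az)
```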
**run_06_timeseries_misreg:**
A time-series of azimuth and range misregistration is estimated with respect to the stack master. The time-series is a least-squares estimation from the pair misregistrations of the previous step.
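This inversion is, at its core, an ordinary least-squares problem: each pair contributes one misregistration observation, and a design matrix maps the per-date unknowns (relative to the master) onto those observations. A minimal sketch of the idea with made-up numbers (not the ISCE implementation):

```python
# Invert pair misregistrations into a per-date time-series w.r.t. the master.
import numpy as np

dates = ['20170101', '20170113', '20170125', '20170206']  # hypothetical dates; dates[0] is the master
pairs = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]           # (reference, secondary) index pairs
d = np.array([0.02, -0.01, 0.03, 0.01, 0.02])              # pair misregistrations [pixels], made up

# One row per pair, one column per non-master date; the master is fixed to zero.
A = np.zeros((len(pairs), len(dates) - 1))
for k, (i, j) in enumerate(pairs):
    if i > 0:
        A[k, i - 1] = -1.0
    if j > 0:
        A[k, j - 1] = 1.0

ts = np.linalg.lstsq(A, d, rcond=None)[0]
print(dict(zip(dates[1:], ts)))  # misregistration of each date relative to the master
```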
**run_07_geo2rdr_resample:**
Using the orbits and DEM, the geometric offsets between all slave SLCs and the stack master are computed. The geometric offsets, together with the misregistration time-series (from the previous step), are used for precise coregistration of each burst SLC.
**run_08_extract_stack_valid_region:**
The valid region between burst SLCs at the burst overlap area changes slightly between acquisitions. Therefore we need to keep track of these overlaps, which will be used when merging bursts. Without this knowledge, lines of invalid data may appear in the merged products at the burst overlaps.
**run_09_merge:**
Merges all bursts for the master and coregistered SLCs. The geometry files are also merged, including longitude, latitude, shadow and layover masks, line-of-sight files, etc.
@@ -166,4 +166,4 @@ stackSentinel.py -s ../SLC/ -d ../../MexicoCity/demLat_N18_N20_Lon_W100_W097.dem
This workflow is similar to the previous one; the difference is that the interferograms are not unwrapped.
#### 5. Execute the commands in run files (run_01*, run_02*, etc) in the "run_files" folder ####
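A possible way to drive the run files in order is sketched below (hypothetical helper, not part of the stack processor; it assumes the files sit in ./run_files and can be run with bash). The zero-padded names are what make the plain `sorted()` call sufficient here:

```python
# Execute every generated run file in ./run_files in step order, stopping on failure.
import glob
import subprocess

for run_file in sorted(glob.glob('run_files/run_*')):
    print('running:', run_file)
    subprocess.run(['bash', run_file], check=True)
```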


@@ -4,35 +4,43 @@
#######################
import os, sys, glob
import argparse
import configparser
import datetime
import time
import numpy as np
# suppress matplotlib DEBUG message
from matplotlib.path import Path as Path
import logging
mpl_logger = logging.getLogger('matplotlib')
mpl_logger.setLevel(logging.WARNING)
import isce
import isceobj
from isceobj.Sensor.TOPS.Sentinel1 import Sentinel1
from Stack import config, run, sentinelSLC
helpstr= '''
Stack processor for Sentinel-1 data using ISCE software.
For a full list of different options, try stackSentinel.py -h
stackSentinel.py generates all configuration and run files required to be executed for a stack of Sentinel-1 TOPS data.
Following are required to start processing:
1) a folder that includes Sentinel-1 SLCs,
2) a DEM (Digital Elevation Model)
3) a folder that includes precise orbits (use dloadOrbits.py to download/update your orbit folder. Missing orbits are downloaded on the fly.)
4) a folder for Sentinel-1 Aux files (which is used for correcting the Elevation Antenna Pattern).
Note that stackSentinel.py does not process any data. It only prepares a lot of input files for processing and a lot of run files. Then you need to execute all those generated run files in order. To know what is really going on, after running stackSentinel.py, look at each run file generated by stackSentinel.py. Each run file actually has several commands that are independent from each other and can be executed in parallel. The config files for each run file include the processing options to execute a specific command/function.
Note also that run files need to be executed in order, i.e., running run_03 needs results from run_02, etc.
##############################################
@@ -60,7 +68,7 @@ stackSentinel.py -s ../SLC/ -d ../../MexicoCity/demLat_N18_N20_Lon_W100_W097.dem
%%%%%%%%%%%%%%%
Example 4:
# slc workflow that produces a coregistered stack of SLCs
stackSentinel.py -s ../SLC/ -d ../../MexicoCity/demLat_N18_N20_Lon_W100_W097.dem.wgs84 -b '19 20 -99.5 -98.5' -a ../../AuxDir/ -o ../../Orbits -C NESD -W slc
@@ -96,10 +104,10 @@ def createParser():
parser.add_argument('-a', '--aux_directory', dest='aux_dirname', type=str, required=True,
help='Directory with all aux files')
parser.add_argument('-w', '--working_directory', dest='work_dir', type=str, default='./',
help='Working directory ')
parser.add_argument('-d', '--dem', dest='dem', type=str, required=True,
help='Directory with the DEM')
@@ -128,7 +136,7 @@ def createParser():
parser.add_argument('-z', '--azimuth_looks', dest='azimuthLooks', type=str, default='3'
, help='Number of looks in azimuth for interferogram multi-looking. -- Default : 3')
parser.add_argument('-r', '--range_looks', dest='rangeLooks', type=str, default='9'
, help='Number of looks in range for interferogram multi-looking. -- Default : 9')
@@ -158,7 +166,7 @@ def createParser():
parser.add_argument('--stop_date', dest='stopDate', type=str, default=None
, help='Stop date for stack processing. Acquisitions after stop date are ignored. format should be YYYY-MM-DD e.g., 2017-02-26')
parser.add_argument('-useGPU', '--useGPU', dest='useGPU',action='store_true', default=False,
help='Allow App to use GPU when available')
@@ -183,7 +191,7 @@ def cmdLineParse(iargs = None):
####################################
def get_dates(inps):
# Given the SLC directory, this function extracts the acquisition dates
# and prepares a dictionary of sentinel slc files such that keys are
# acquisition dates and values are object instances of sentinelSLC class
# which is defined in Stack.py
@@ -205,7 +213,7 @@ def get_dates(inps):
SAFE_files = []
for line in open(inps.slc_dirname):
SAFE_files.append(str.replace(line,'\n','').strip())
else:
SAFE_files = glob.glob(os.path.join(inps.slc_dirname,'S1*_IW_SLC*zip')) # changed to zip file by Minyan Zhong
@@ -232,8 +240,8 @@ def get_dates(inps):
################################
# write down the list of SAFE files in a txt file which will be used:
f = open('SAFE_files.txt','w')
safe_count=0
safe_dict={}
bbox_poly = [[bbox[0],bbox[2]],[bbox[0],bbox[3]],[bbox[1],bbox[3]],[bbox[1],bbox[2]]]
for safe in SAFE_files:
@@ -258,14 +266,13 @@ def get_dates(inps):
for pnt in pnts:
lon = float(pnt.split(',')[0])
lat = float(pnt.split(',')[1])
# keep track of all the corners to see if the product is larger than the bbox
lats.append(lat)
lons.append(lon)
import matplotlib
from matplotlib.path import Path as Path
# bbox = SNWE
# polygon = bbox[0] bbox[2] SW
@@ -273,21 +280,21 @@ def get_dates(inps):
# bbox[1] bbox[3] NE
# bbox[1] bbox[2] NW
poly = Path(bbox_poly)
point = (lat,lon)
in_bbox = poly.contains_point(point)
# product corner falls within BBOX (SNWE)
if in_bbox:
reject_SAFE=False
# If the product is still being rejected, check if the BBOX corners fall within the frame
if reject_SAFE:
for point in bbox_poly:
frame = [[a,b] for a,b in zip(lats,lons)]
poly = Path(frame)
in_frame = poly.contains_point(point)
if in_frame:
reject_SAFE=False
@@ -328,7 +335,7 @@ def get_dates(inps):
for date in dateList:
#safe_dict[date].get_lat_lon()
safe_dict[date].get_lat_lon_v2()
#safe_dict[date].get_lat_lon_v3(inps)
S.append(safe_dict[date].SNWE[0])
N.append(safe_dict[date].SNWE[1])
@@ -339,7 +346,7 @@ def get_dates(inps):
if safe_dict[date].SNWE[0] <= bbox[0] and safe_dict[date].SNWE[1] >= bbox[1]:
safe_dict_bbox[date] = safe_dict[date]
safe_dict_bbox_finclude[date] = safe_dict[date]
elif date in includeList:
safe_dict_finclude[date] = safe_dict[date]
safe_dict_bbox_finclude[date] = safe_dict[date]
@@ -353,7 +360,7 @@ def get_dates(inps):
print (max(S),min(N),max(W),min(E))
print ("*****************************************")
if max(S) > min(N):
print ("""WARNING:
There might not be overlap between some dates""")
print ("*****************************************")
################################
@@ -396,10 +403,10 @@ def get_dates(inps):
sys.exit(1)
inps.master_date = dateList[0]
print ("The master date was not chosen. The first date is considered as master date.")
print ("")
print ("All SLCs will be coregistered to : " + inps.master_date)
slaveList = [key for key in safe_dict.keys()]
slaveList.sort()
slaveList.remove(inps.master_date)
@@ -408,7 +415,7 @@ def get_dates(inps):
print ("")
return dateList, inps.master_date, slaveList, safe_dict
def selectNeighborPairs(dateList, num_connections, updateStack=False): # should be changed to be able to account for the existing acquisitions -- Minyan Zhong
pairs = []
@@ -427,7 +434,7 @@ def selectNeighborPairs(dateList, num_connections, updateStack=False): # should
########################################
# Below are a few workflow examples.
def slcStack(inps, acquisitionDates, stackMasterDate, slaveDates, safe_dict, updateStack, mergeSLC=False):
#############################
@@ -436,19 +443,19 @@ def slcStack(inps, acquisitionDates, stackMasterDate, slaveDates, safe_dict, upd
if not updateStack:
i += 1
runObj = run()
runObj.configure(inps, 'run_{:02d}_unpack_topo_master'.format(i))
runObj.unpackStackMasterSLC(safe_dict)
runObj.finalize()
i+=1
runObj = run()
runObj.configure(inps, 'run_{:02d}_unpack_slave_slc'.format(i))
runObj.unpackSlavesSLC(stackMasterDate, slaveDates, safe_dict)
runObj.finalize()
i+=1
runObj = run()
runObj.configure(inps, 'run_{:02d}_average_baseline'.format(i))
runObj.averageBaseline(stackMasterDate, slaveDates)
runObj.finalize()
@@ -456,25 +463,25 @@ def slcStack(inps, acquisitionDates, stackMasterDate, slaveDates, safe_dict, upd
if not updateStack:
i+=1
runObj = run()
runObj.configure(inps, 'run_{:02d}_extract_burst_overlaps'.format(i))
runObj.extractOverlaps()
runObj.finalize()
i += 1
runObj = run()
runObj.configure(inps, 'run_{:02d}_overlap_geo2rdr'.format(i))
runObj.geo2rdr_offset(slaveDates)
runObj.finalize()
i += 1
runObj = run()
runObj.configure(inps, 'run_{:02d}_overlap_resample'.format(i))
runObj.resample_with_carrier(slaveDates)
runObj.finalize()
i+=1
runObj = run()
runObj.configure(inps, 'run_{:02d}_pairs_misreg'.format(i))
if updateStack:
runObj.pairs_misregistration(slaveDates, safe_dict)
else:
@@ -483,39 +490,39 @@ def slcStack(inps, acquisitionDates, stackMasterDate, slaveDates, safe_dict, upd
i+=1
runObj = run()
runObj.configure(inps, 'run_{:02d}_timeseries_misreg'.format(i))
runObj.timeseries_misregistration()
runObj.finalize()
i += 1
runObj = run()
runObj.configure(inps, 'run_{:02d}_fullBurst_geo2rdr'.format(i))
runObj.geo2rdr_offset(slaveDates, fullBurst='True')
runObj.finalize()
i += 1
runObj = run()
runObj.configure(inps, 'run_{:02d}_fullBurst_resample'.format(i))
runObj.resample_with_carrier(slaveDates, fullBurst='True')
runObj.finalize()
i+=1
runObj = run()
runObj.configure(inps, 'run_{:02d}_extract_stack_valid_region'.format(i))
runObj.extractStackValidRegion()
runObj.finalize()
if mergeSLC:
i+=1
runObj = run()
runObj.configure(inps, 'run_{:02d}_merge'.format(i))
runObj.mergeMaster(stackMasterDate, virtual = 'False')
runObj.mergeSlaveSLC(slaveDates, virtual = 'False')
runObj.finalize()
i+=1
runObj = run()
runObj.configure(inps, 'run_{:02d}_grid_baseline'.format(i))
runObj.gridBaseline(stackMasterDate, slaveDates)
runObj.finalize()
@@ -530,20 +537,20 @@ def correlationStack(inps, acquisitionDates, stackMasterDate, slaveDates, safe_d
i+=1
runObj = run()
runObj.configure(inps, 'run_{:02d}_merge_master_slave_slc'.format(i))
runObj.mergeMaster(stackMasterDate, virtual = 'True')
runObj.mergeSlaveSLC(slaveDates, virtual = 'True')
runObj.finalize()
i+=1
runObj = run()
runObj.configure(inps, 'run_{:02d}_merge_burst_igram'.format(i))
runObj.burstIgram_mergeBurst(acquisitionDates, safe_dict, pairs)
runObj.finalize()
i+=1
runObj = run()
runObj.configure(inps, 'run_{:02d}_filter_coherence'.format(i))
runObj.filter_coherence(pairs)
runObj.finalize()
@@ -554,32 +561,32 @@ def interferogramStack(inps, acquisitionDates, stackMasterDate, slaveDates, safe
i+=1
runObj = run()
runObj.configure(inps, 'run_{:02d}_merge_master_slave_slc'.format(i))
runObj.mergeMaster(stackMasterDate, virtual = 'True')
runObj.mergeSlaveSLC(slaveDates, virtual = 'True')
runObj.finalize()
i+=1
runObj = run()
runObj.configure(inps, 'run_{:02d}_generate_burst_igram'.format(i))
runObj.generate_burstIgram(acquisitionDates, safe_dict, pairs)
runObj.finalize()
i += 1
runObj = run()
runObj.configure(inps, 'run_{:02d}_merge_burst_igram'.format(i))
runObj.igram_mergeBurst(acquisitionDates, safe_dict, pairs)
runObj.finalize()
i+=1
runObj = run()
runObj.configure(inps, 'run_{:02d}_filter_coherence'.format(i))
runObj.filter_coherence(pairs)
runObj.finalize()
i+=1
runObj = run()
runObj.configure(inps, 'run_{:02d}_unwrap'.format(i))
runObj.unwrap(pairs)
runObj.finalize()
@@ -590,15 +597,15 @@ def offsetStack(inps, acquisitionDates, stackMasterDate, slaveDates, safe_dict,
i+=1
runObj = run()
runObj.configure(inps, 'run_{:02d}_merge_master_slave_slc'.format(i))
runObj.mergeMaster(stackMasterDate, virtual = 'False')
runObj.mergeSlaveSLC(slaveDates, virtual = 'False')
runObj.finalize()
i+=1
runObj = run()
runObj.configure(inps, 'run_{:02d}_dense_offsets'.format(i))
runObj.denseOffsets(pairs)
runObj.finalize()
@@ -669,7 +676,7 @@ def checkCurrentStatus(inps):
slaveDates = latestCoregSLCs + newAcquisitions
slaveDates.sort()
acquisitionDates = slaveDates.copy()
acquisitionDates.append(stackMasterDate)
acquisitionDates.sort()
@@ -710,7 +717,7 @@ def main(iargs=None):
print('')
print('**************************')
print('run_files folder exists.')
print(os.path.join(inps.work_dir, 'run_files'), ' already exists.')
print('Please remove or rename this folder and try again.')
print('')
print('**************************')
@@ -744,8 +751,8 @@ def main(iargs=None):
print ('Workflow: ', inps.workflow)
print ('*****************************************')
if inps.workflow == 'interferogram':
interferogramStack(inps, acquisitionDates, stackMasterDate, slaveDates, safe_dict, pairs, updateStack)
elif inps.workflow == 'offset':
@@ -761,6 +768,5 @@ def main(iargs=None):
if __name__ == "__main__":
# Main engine
main()