tropy.io_tools package

Submodules

tropy.io_tools.HRIT module
tropy.io_tools.HRIT.SEVIRI_channel_list()
tropy.io_tools.HRIT.bit_conversion(input_set, inbit=8, outbit=10)
    Converts an integer array based on an n-bit representation into an integer array based on an m-bit representation.

    Usage:
        output_set = bit_conversion(input_set, inbit=8, outbit=10)

    Arguments:
        input_set : numpy integer array in an n-bit representation
        inbit : number of bits for input_set (n)
        outbit : number of bits for output_set (m)

    Returns:
        output_set : numpy integer array in an m-bit representation

    Note: the size of input_set should be a multiple of inbit / outbit.
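The idea behind such a conversion can be illustrated with a small pure-Python sketch that concatenates the input words into one bit stream and cuts it into wider words. This is only an assumed, illustrative stand-in; tropy's actual `bit_conversion` operates on numpy arrays and its internals are not shown here.

```python
def repack_bits(input_set, inbit=8, outbit=10):
    """Regroup a sequence of inbit-wide integers into outbit-wide integers.

    Illustrative stand-in for tropy's bit_conversion (assumed behavior).
    """
    # Concatenate all input words into one long bit stream ...
    total = 0
    nbits = 0
    for word in input_set:
        total = (total << inbit) | (word & ((1 << inbit) - 1))
        nbits += inbit
    # ... and cut the stream into outbit-wide output words.
    assert nbits % outbit == 0, "total bit count must divide evenly"
    out = []
    for shift in range(nbits - outbit, -1, -outbit):
        out.append((total >> shift) & ((1 << outbit) - 1))
    return out

# 5 bytes of all-ones (40 bits) become four 10-bit words of value 1023.
print(repack_bits([0xFF] * 5))  # [1023, 1023, 1023, 1023]
```

This also shows where the size constraint comes from: for 8-bit input and 10-bit output, the total bit count is divisible by 10 only when the input length is a multiple of 5.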
tropy.io_tools.HRIT.channel_segment_sets(set_name)
    Outputs a predefined set of channels and segments.

    Usage:
        chan_seg = channel_segment_sets(set_name)

    Arguments:
        set_name : name of a predefined set

    Returns:
        chan_seg : dictionary of channels, each with a list of segments
tropy.io_tools.HRIT.combine_segments(seg_dict, missing=-1)
    Combines individual segments of SEVIRI scans. Segments are sorted by key.

    Usage:
        combined = combine_segments(seg_dict)

    Arguments:
        seg_dict : dictionary of segments, sorted by key from bottom to top

    Returns:
        combined : combination of all segments
tropy.io_tools.HRIT.combine_two_segments(seg1, seg2)
    Combines two segments, where seg1 is the upper and seg2 the lower segment.

    Usage:
        seg = combine_two_segments(seg1, seg2)

    Arguments:
        seg1 : upper segment
        seg2 : lower segment

    Returns:
        seg : combination of both segments
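Conceptually, combining two segments is a vertical stack of image rows, upper segment first. A minimal sketch, representing segments as plain lists of rows (tropy works on numpy arrays, where this would be `np.vstack`):

```python
def combine_two_segments(seg1, seg2):
    """Stack an upper segment on top of a lower one.

    Segments are represented here as plain lists of image rows;
    this is an illustrative sketch, not tropy's implementation.
    """
    return list(seg1) + list(seg2)

upper = [[1, 1], [2, 2]]   # two rows of the upper segment
lower = [[3, 3]]           # one row of the lower segment
print(combine_two_segments(upper, lower))  # [[1, 1], [2, 2], [3, 3]]
```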
tropy.io_tools.HRIT.divide_two_segments(seg)
    Divides one segment into two segments with half the line size each.

    Usage:
        seg1, seg2 = divide_two_segments(seg)

    Arguments:
        seg : combination of two segments

    Returns:
        seg1 : upper segment
        seg2 : lower segment
tropy.io_tools.HRIT.get_HRIT_from_arch(day, chan_seg={'IR_016': [1, 2, 3, 4, 5, 6, 7, 8], 'VIS006': [1, 2, 3, 4, 5, 6, 7, 8], 'VIS008': [1, 2, 3, 4, 5, 6, 7, 8]}, scan_type='pzs', tarname=None, arch_dir=None, out_path=None)
    Gets and decompresses the original HRIT files from the archive. A typical set of channels and segments can be chosen.

    Usage:
        file_list = get_HRIT_from_arch(day, chan_seg='nc-full', scan_type='pzs', arch_dir=None, out_path=None)

    Arguments:
        day : datetime object which includes day and time of the MSG time slot
        chan_seg : name of a channel-segment set
        scan_type : which type of MSG scan is used; allowed keys are 'pzs' and 'rss'
        arch_dir : archive directory
        out_path : directory where the HRIT files are saved

    Returns:
        file_list : list of extracted HRIT files
tropy.io_tools.HRIT.read_HRIT_data(day, chan_seg, calibrate=True, scan_type='pzs', arch_dir=None, standard_seg=(464, 3712), add_meta=False, **kwargs)
    Reads HRIT data given a time slot, satellite type and channel-segment set.

    Usage:
        combined = read_HRIT_data(day, chan_seg, calibrate=True, scan_type='pzs', arch_dir=None, add_meta=False, **kwargs)

    Arguments:
        day : datetime object which includes day and time of the MSG time slot
        chan_seg : a channel-segment set, i.e. a dictionary of channel names with lists of segments
        calibrate : flag, if slope / offset calibration should be done
        scan_type : which type of MSG scan is used; allowed keys are 'pzs' and 'rss'
        arch_dir : archive directory
        add_meta : flag, if meta data should be added
        out_path : directory where the HRIT files are saved

    Returns:
        combined : dictionary of retrieved channels with combined segments
tropy.io_tools.HRIT.read_HRIT_seg_data(hrit_file)
    Directly reads the counts data for an HRIT segment.

    Usage:
        seg_data = read_HRIT_seg_data(hrit_file)

    Arguments:
        hrit_file : HRIT file of the considered segment

    Returns:
        seg_data : integer segment data field of radiance counts
tropy.io_tools.HRIT.read_slope_offset_from_prolog(pro_file, NJUMP=387065)
    Directly reads slope and offset from an HRIT PRO file.

    Usage:
        slope, offset = read_slope_offset_from_prolog(pro_file)

    Arguments:
        pro_file : PROLOG file of the considered time slot

    Returns:
        slope : slope of the data
        offset : offset of the data

    The following transformation is applied:
        DATA = SLOPE * COUNTS + OFFSET
    if COUNTS are valid data points.

    ATTENTION: the jump position in the binary file is hard-coded!
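The linear calibration above is straightforward to sketch. The `missing` sentinel and NaN handling below are assumptions for illustration (tropy applies this to numpy count arrays):

```python
import math

def calibrate_counts(counts, slope, offset, missing=-1):
    """Apply the linear calibration DATA = SLOPE * COUNTS + OFFSET.

    Counts equal to `missing` are treated as invalid and mapped to NaN.
    Plain-Python sketch; tropy works on numpy arrays.
    """
    return [slope * c + offset if c != missing else math.nan for c in counts]

# Made-up slope / offset values, purely for illustration:
radiances = calibrate_counts([100, 200, -1], slope=0.02, offset=-1.0)
print(radiances)
```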
tropy.io_tools.HRIT.segments_for_region(region, Nrow=3712, Nseg=8, seg_size=464)
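The source gives no description for this function, but given the defaults (3712 rows split into 8 segments of 464 lines), a plausible reading is that it selects the segments overlapping a requested row range. The following is purely an assumed sketch; the actual region convention (row ordering, index base) in tropy is not documented here:

```python
def segments_for_region(row_range, Nrow=3712, Nseg=8, seg_size=464):
    """Return the 1-based segment numbers whose rows overlap row_range.

    Hypothetical illustration of segment selection; not tropy's code.
    """
    r0, r1 = row_range
    segs = []
    for iseg in range(Nseg):
        lo, hi = iseg * seg_size, (iseg + 1) * seg_size - 1
        if lo <= r1 and hi >= r0:          # segment overlaps the region
            segs.append(iseg + 1)          # HRIT segments are numbered from 1
    return segs

print(segments_for_region((1000, 2000)))  # [3, 4, 5]
```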
tropy.io_tools.HRIT.write_HRIT_seg_data(seg_data, hrit_file)
    Directly writes the counts data for an HRIT segment.

    Usage:
        write_HRIT_seg_data(seg_data, hrit_file)

    Arguments:
        seg_data : integer segment data field of radiance counts
        hrit_file : HRIT file of the considered segment
tropy.io_tools.bit_conversion module

tropy.io_tools.data_collection module
class tropy.io_tools.data_collection.DataCollection
    Bases: dict

    Methods:
        add(vname, array, setting='bt', subpath=None)
        list()
        save(fname)

class tropy.io_tools.data_collection.Dataset(name, data=None, setting={'_FillValue': 0, 'add_offset': 0, 'dtype': '|u2', 'longname': 'Brightness Temperature', 'scale_factor': 0.01, 'unit': 'K'})
    Bases: object

    The Dataset class is a building block of the DataCollection class, which is used to ease input and output of data sets.

    Methods:
        add(array)
        array()
        attrs()
        load(fname, subpath=None)
        save(fname, subpath=None, mode='overwrite', time_out=300.0)
tropy.io_tools.data_settings module

tropy.io_tools.data_settings.Dge_settings()

tropy.io_tools.data_settings.bt_settings()

tropy.io_tools.data_settings.histogram_settings()

tropy.io_tools.data_settings.refl_settings()

tropy.io_tools.data_settings.settings(vartype='bt')

tropy.io_tools.data_settings.standard_int_setting(longname='Unknown', unit='Unknown')

tropy.io_tools.data_settings.standard_real_setting(longname='Unknown', unit='Unknown')
tropy.io_tools.file_status module

tropy.io_tools.file_status.test(fname)
    Returns the status of a file:
        True : if open
        False : if not open

tropy.io_tools.find_latest_slot module

tropy.io_tools.hdf module
tropy.io_tools.hdf.dict_merge(a, b)
    Recursively merges dicts: not just a simple a['key'] = b['key']. If both a and b have a key whose value is a dict, then dict_merge is called on both values and the result is stored in the returned dictionary.

    From https://www.xormedia.com/recursively-merge-dictionaries-in-python/
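The recipe referenced above can be sketched as follows; this is an illustrative re-statement of the linked pattern, not necessarily tropy's exact code:

```python
from copy import deepcopy

def dict_merge(a, b):
    """Recursively merge dict b into a copy of dict a.

    Where both values are dicts, merge them recursively;
    otherwise the value from b wins.
    """
    if not isinstance(b, dict):
        return b
    result = deepcopy(a)
    for key, value in b.items():
        if key in result and isinstance(result[key], dict):
            result[key] = dict_merge(result[key], value)
        else:
            result[key] = deepcopy(value)
    return result

merged = dict_merge({'meta': {'unit': 'K'}, 'x': 1},
                    {'meta': {'name': 'bt'}, 'y': 2})
print(merged)  # {'meta': {'unit': 'K', 'name': 'bt'}, 'x': 1, 'y': 2}
```

The deep copy keeps the merge side-effect free, so neither input dictionary is mutated.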
tropy.io_tools.hdf.get_seviri_chan(chan_list, day, scan_type='rss', fname=None, calibrate=True, add_meta=False, add_geo=False, arch_dir=None)
    Reads MSG-SEVIRI radiance of a given channel for a given date.

    Usage:
        var = get_seviri_chan(chan_list, day, scan_type='rss', calibrate=True, add_meta=False)

    Arguments:
        chan_list : name of a channel, e.g. ir_108 or IR_108 (case does not matter), or a list of channel names
        day : datetime object which includes day and time of the MSG time slot
        scan_type : sets the scanning mode, i.e. 'rss' or 'pzs' (default: 'rss')
        calibrate : optional, decides if output is radiance (True) or counts (False)
        add_meta : flag, if meta data of the MSG-SEVIRI hdf file should be added

    Returns:
        var : dictionary including radiance / counts of channel <ch_name> at date <day>, plus meta data if chosen
tropy.io_tools.hdf.list_hdf_groups(fname)
    Makes a list of hdf groups. Only the 1st level is implemented; TBD: recursive listing.

    Usage:
        glist = list_hdf_groups(fname)

    Arguments:
        fname : filename of the hdf file

    Returns:
        glist : list of group names
tropy.io_tools.hdf.read_dict_cont(f, d)
    Recursively reads nested dictionaries.
tropy.io_tools.hdf.read_dict_from_hdf(fname)
    The content of an hdf file with arbitrary depth of subgroups is read into nested dictionaries.

    Usage:
        d = read_dict_from_hdf(fname)

    Arguments:
        fname : filename of the input hdf file

    Returns:
        d : nested dictionary which contains the data
tropy.io_tools.hdf.read_var_from_hdf(fname, vname, subpath=None)
    A specific variable is read from an hdf file. Group and subgroups can be specified.

    Usage:
        v = read_var_from_hdf(fname, vname, subpath=None)

    Arguments:
        fname : filename of the input hdf file
        vname : variable name as given in the hdf file
        subpath : group or subgroup path, separated with /

    Returns:
        v : variable as numpy array
tropy.io_tools.hdf.save_dict2hdf(fname, d, mode='w')
    The content of nested dictionaries of arbitrary depth is saved into an hdf file. The keys, subkeys, etc. are mapped onto the hdf group directory structure.

    Usage:
        save_dict2hdf(fname, d)

    Arguments:
        fname : filename of the output hdf file
        d : dictionary which contains the data

    Returns:
        None
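The mapping from nested dictionary keys to hdf group paths can be illustrated without touching an hdf file at all. This helper only computes the assumed path-to-value mapping ('/key/subkey/...'); the actual file writing in tropy goes through an hdf library:

```python
def flatten_to_paths(d, prefix=''):
    """Map nested dict keys onto hdf-style group paths.

    Illustrative sketch of the key -> '/group/subgroup/variable'
    mapping described above; no hdf file is involved.
    """
    flat = {}
    for key, value in d.items():
        path = f'{prefix}/{key}'
        if isinstance(value, dict):
            flat.update(flatten_to_paths(value, path))   # descend into subgroup
        else:
            flat[path] = value                           # leaf dataset
    return flat

d = {'ir108': {'image': [1, 2, 3], 'meta': {'unit': 'K'}}}
print(flatten_to_paths(d))
# {'/ir108/image': [1, 2, 3], '/ir108/meta/unit': 'K'}
```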
tropy.io_tools.hdf.save_dict_cont(f, d)
    Recursively saves nested dictionaries.
tropy.io_tools.hdf.update_dict_in_hdf(fname, din)
    The content of nested dictionaries of arbitrary depth is updated in an hdf file. The keys, subkeys, etc. are mapped onto the hdf group directory structure.

    Usage:
        update_dict_in_hdf(fname, din)

    Arguments:
        fname : filename of the output hdf file
        din : dictionary which contains the data

    Returns:
        None

tropy.io_tools.netcdf module
tropy.io_tools.netcdf.read_icon_2d_data(fname, var_list, itime=0)
tropy.io_tools.netcdf.read_icon_4d_data(fname, var_list, itime=0, itime2=None)
    Reads netcdf data.

    Usage:
        dset = read_icon_4d_data(fname, var_list, itime=0)

    Arguments:
        fname : netcdf filename
        var_list : variable name or list of variables
        itime (optional) : index of the 1st dimension (assumed to be time) to be read only; if itime = None, the full field is read

    Returns:
        dset : dictionary containing the fields
tropy.io_tools.netcdf.read_icon_dimension(fname, dim_name)

tropy.io_tools.netcdf.read_icon_georef(fname)

tropy.io_tools.netcdf.read_icon_time(fname, itime=0)
tropy.io_tools.netcdf.roundTime(dt=None, roundTo=60)
    Rounds a datetime object to any time lapse in seconds.

    Arguments:
        dt : datetime.datetime object; default: now
        roundTo : closest number of seconds to round to; default: 1 minute

    Author: Thierry Husson 2012 - Use it as you want but don't blame me.
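This is a well-known recipe; a sketch matching the docstring above (the exact tropy code may differ slightly):

```python
import datetime

def roundTime(dt=None, roundTo=60):
    """Round a datetime to the nearest multiple of `roundTo` seconds."""
    if dt is None:
        dt = datetime.datetime.now()
    seconds = (dt - dt.min).seconds              # seconds since midnight
    rounding = (seconds + roundTo / 2) // roundTo * roundTo
    return dt + datetime.timedelta(0, rounding - seconds, -dt.microsecond)

print(roundTime(datetime.datetime(2013, 1, 1, 12, 0, 31)))  # 2013-01-01 12:01:00
```

Rounding via seconds-since-midnight keeps the date part intact while snapping the time of day to the requested grid.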
tropy.io_tools.netcdf.save_icon_georef(fname, geopath=None)

tropy.io_tools.netcdf.save_icon_time_reference(fname, outfile=None)