
Acronym

c/s = CubeSat

Meeting minutes at NASA GSFC

Presented the mission (see slides) and discussed in particular with Gene C. Feldman, Susanne Craig and Ivona Ceticic. Overall good support from the audience (30-40 people) for this mission and how we want to operate; good questions and discussions. From their perspective, the "rapid response" idea in a multi-agent architecture is what makes this hyperspectral imaging concept/mission great, not the hyperspectral camera alone. We should leverage more strongly the idea of what we want to do with UAVs, AUVs and USVs responding to dedicated downlinked data from HYPSO (whether those are full raw images, spectral signatures or target coordinates).

Discussions after my presentation:

  • Should we do a balloon test? Absolutely! It would increase the TRL of the payload considerably if it works. There is almost no difference between Top of Atmosphere (ToA) radiance and radiance at 100,000 ft. Essential for understanding what we will see in space.
  • How will we deal with clouds?
    • We should build a historical database of weather reports (identifying cloudy days: fully covered, partially covered) at different geographic locations in Norway to map where the probability of observable conditions is highest.
    • MeteoSat, a European GEO satellite, provides a daily cloud product per region, which gives more reliability in deciding where to point our c/s.
      • Its spatial resolution is 4 km.
      • (Mission operations once launched) Look for patches that are small enough to image in near real-time.
    • Create a cloud screening algorithm and include atmospheric variables.
  • PACE – a representative working on that mission asked what spectral responses we are looking for at each wavelength/band. Is it the water-leaving spectral shape or the radiance that we want to reconstruct? Operational vs. scientific? Spectral shape algorithms – are they good enough (ref. Rick Strumpff's methods at NOAA)? Compare different algorithms and methods.
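The cloud-handling ideas above can be sketched in code. Below is a minimal, hypothetical example of the proposed historical database: given archived daily cloud-cover reports per location, estimate where the probability of observable conditions is highest. The locations, cloud fractions and the 0.5 observability threshold are all illustrative assumptions, not values from the meeting.

```python
from collections import defaultdict

# Hypothetical daily cloud-cover reports: (location, fraction of sky covered, 0.0-1.0).
# In practice these would come from archived weather reports or MeteoSat cloud products.
reports = [
    ("Trondheim", 0.9), ("Trondheim", 0.3), ("Trondheim", 1.0),
    ("Svalbard", 0.2), ("Svalbard", 0.4), ("Svalbard", 0.1),
]

def observable_probability(reports, max_cloud_fraction=0.5):
    """Estimate per-location probability of observable (clear-enough) conditions."""
    counts = defaultdict(lambda: [0, 0])  # location -> [observable days, total days]
    for location, cloud_fraction in reports:
        counts[location][1] += 1
        if cloud_fraction <= max_cloud_fraction:
            counts[location][0] += 1
    return {loc: ok / total for loc, (ok, total) in counts.items()}

print(observable_probability(reports))
```

With real archives, the same tally (run per month or season) would directly produce the map of high-probability imaging sites discussed above.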

Discussions with Gene C. Feldman:

  • L1a data should be downlinked at minimum, NOT L0 (which is useless without ancillary data). L0 and L1a cannot be reconstructed from L1b (or only with difficulty), since L1b is normalized to sensor units – the raw data is lost once the sensor calibration (sensor stability factors) has been applied in L1a->L1b processing. Gene is not happy with ESA delivering L1b products, since they are difficult to reconstruct from. L2 does not make sense when chl-a concentration is not appropriately correlated to ToA radiance; radiometric calibration, geometric calibration and atmospheric correction need to be applied going L1b->L2. L1a is a huge dataset, so we need to consider that for downlink (he suggests X-band; its maturity today is high).
  • For L1b data, do we want to recalibrate the sensor per imaging, or leave the calibration as it is until really necessary? The L1b dataset will change based on the recalibration. The essence of L1a is to characterize the imager and understand degradation of the optics, FoV coefficients, etc.; with recalibration baked in we cannot figure that out.
  • Things move after launch and during each operation; that is why we also need L1a at all times, to see how the sensor and image data change with time.
  • Ground truth is really hard for in-situ validation: instantaneous sampling on the ground and remote imaging from space will not coincide.
    • Why? Point-to-point in-situ validation on the ground is not reasonable, since it is difficult to map a sample to the pixels on the sensor, and it gives no information about how the whole image should be calibrated/corrected. Sampling across a large area can give a better clue but is difficult to do. You cannot take a point sample and say "this is how the water looks at that point" – water is far more heterogeneous from point to point than you would think.
    • Doing it across e.g. a 30x30 km grid makes sense, though it is impossible to do operationally/physically. Things change very quickly over the course of imaging operations and fine-scale sampling; we cannot get a coherent structure that way. Radiance values vary too much across pixels and imaging locations. You need many, many assets out there to get a better idea, and it will never be perfect. Algal blooms and the ocean in general are too dynamic and highly varying, especially w.r.t. currents and waves.
  • Apply deconvolution and super-resolution between L1b and L2: correct. The minimum success criterion is to do super-resolution on the ground; don't risk doing it onboard (at least in the first season of operations).
  • The slew maneuver is ambitious for achieving the perfect GSD as proposed. The baseline should be to point nadir, have 500 m spatial in-track resolution and view a 50 km line for 8 seconds. Slewing should be a full success criterion, since we don't know the achievable precision.
  • Our theoretical SNR is ok, but need to find out what it is in practice. Good that we redesigned from V4 to V6!
  • Hawkeye mission: Gene is working on this mission. They are using X-band for downlink and have deployable antennas. Suggestions:
    • S-band is OK – "touch the water before jumping into it" – and we will have an S-band ground station at NTNU and may want to downlink there too. But location really doesn't matter for downlinking "fast" in near real-time; data will be distributed quickly, so use S-band ground stations across the globe.
    • TRL for X-band is high enough today to downlink a higher amount of data – a lot more data in a shorter time – saving energy and operations complexity. Think about it for the next mission.
    • Use an RGB camera to geo-reference the HSI FoV instantaneously and on the image grid. Hawkeye uses FinderScope (an RGB camera – same principle).
  • Skepticism on the HSI, especially on the spectral binning branch. Increase SNR through more binning and accept less spectral resolution, which would already be great at 20-40 nm.
  • Processing in SeaDAS would be great. Ensure the data format is readable by it.
  • Downlink to NASA GSFC is not necessary - they can download from our website/server. 
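Gene's argument for always keeping L1a can be made concrete with a simplified sketch. The linear calibration model and all gain/offset/count values below are illustrative assumptions, not HYPSO's actual calibration: the point is only that retaining raw L1a counts allows reprocessing to L1b with updated calibration as the sensor degrades, which is not possible from L1b alone once counts are normalized to sensor units.

```python
import numpy as np

def l1a_to_l1b(raw_counts, gain, offset):
    """Apply a per-sensor linear radiometric calibration (assumed, simplified
    model): radiance = gain * (counts - offset)."""
    return gain * (raw_counts - offset)

l1a = np.array([[1020.0, 1180.0],
                [1310.0, 1475.0]])                  # raw counts, hypothetical

cal_launch  = {"gain": 0.05,  "offset": 100.0}      # calibration at launch (hypothetical)
cal_updated = {"gain": 0.052, "offset": 104.0}      # after in-flight recalibration (hypothetical)

l1b_old = l1a_to_l1b(l1a, **cal_launch)
l1b_new = l1a_to_l1b(l1a, **cal_updated)            # only possible because L1a was kept

print(l1b_old)
print(l1b_new)
```

The inverse step (recovering counts from L1b) requires knowing exactly which gain/offset was applied; if that metadata is lost or the calibration changes, L1a cannot be reconstructed, which is Gene's objection to L1b-only archives.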

Ajit’s feedback on presentation:

  • State clearly the advantages and disadvantages of c/s vs. conventional satellites if we are to argue that CubeSats are "so great". They will never replace bigger satellites, but they can provide complementary data on a smaller scale.
  • Mission success criteria shall be stated clearly and formulated better. Distinguish what is minimum and what is full success. Draw CONOPS for both (e.g. (1) no slew, downlink raw data, no complex uplink; (2) slew, downlink processed data, more complex uplink, using robotic agents).
    • Make a slide with minimum vs. full success criteria. What are the quantitative thresholds? Baseline a threshold matrix in systems engineering – the way they do it at NASA GSFC.
    • Are binned images still considered raw data? What is minimum and what is full success? 1x binning is minimum, 10x binning is full success. Differentiate between high-res, medium-res and low-res modes. Gain=0 always for raw data, but applying binning to raw data is up for discussion.
  • The idea of rapid response in a robotic platform architecture is great. Leverage that: for future missions we want to downlink spectral signatures directly to robotic agents (UAVs or ASVs). Thus, an SDR sounds like a very good option, where modulation and frequency tuning happen both on the satellite and on the ground assets. This allows even more rapid response, with the vehicles on the surface analyzing the spectral signatures and giving feedback to the satellite through uplink (inter-calibration). Target coordinates can also be sent down directly to the ground assets for corrections in planning and maneuvering towards the target area and the interesting signatures to detect. This would save considerable cost and time for the robotic assets. For the first flight we don't need to think about it, but we should present the plan for future missions (the 2nd mission, possibly?).
  • Check with Raphe on field validation/match-ups with HICO. Curt Davis, who was the PI on HICO, collaborated on that. He is uncertain whether there has been any field campaign to match up with the HICO data.
  • What, again, is the need for hyperspectral? We can do multispectral with less data and much higher SNR. Ours basically works as an MSI because we select specific reference bands – but then are we not able to bin it? Our SNR could also be much higher in general, so it would be better to use an MSI and have higher efficiency; an MSI has higher SNR at those reference bands, for example. HSI is basically a trade-off, giving flexibility at the cost of low SNR. We compromise SNR too much with HSI.
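The binning/SNR trade-off raised both by Gene and by Ajit can be sketched quantitatively. Under the usual shot-noise-limited assumption (an assumption here, not a statement about HYPSO's actual noise budget), binning N adjacent spectral pixels multiplies the signal by N and the noise by sqrt(N), so SNR grows by sqrt(N) while spectral resolution coarsens by N. The native SNR of 50 and 4 nm sampling below are hypothetical numbers for illustration.

```python
import math

def binned_snr(snr_single, bin_factor):
    """SNR after binning `bin_factor` spectral pixels, shot-noise limited:
    signal scales by N, noise by sqrt(N), so SNR scales by sqrt(N)."""
    return snr_single * math.sqrt(bin_factor)

def binned_resolution(native_resolution_nm, bin_factor):
    """Spectral resolution after binning (coarsens linearly with N)."""
    return native_resolution_nm * bin_factor

snr0, res0 = 50.0, 4.0  # hypothetical native SNR and 4 nm spectral sampling
for n in (1, 4, 10):
    print(f"{n:2d}x binning: SNR {binned_snr(snr0, n):6.1f}, "
          f"resolution {binned_resolution(res0, n):5.1f} nm")
```

This is also why a fixed-band MSI wins on SNR at its reference bands: it effectively "bins" at the hardware level, whereas the HSI trades that SNR for the flexibility to choose bands after launch.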

Ivona Ceticic & Susanne Craig (more TBD on this as I am discussing with them on email)

Main outcomes:
