


CREATING END-TO-END SATELLITE TRACKING SOFTWARE

            During my post-doctoral fellowship, I learned much more about Linux and Python. No, I will likely never be a guru in either, but at least I realized the potential of both when I considered starting something that I had always wanted to do (since 1999, anyway) but had never really gotten around to doing: creating an end-to-end satellite tracking software package from scratch.

            My journey began in 1990, when I first tried to investigate (and understand) basic orbit determination. It turned out that 1990 was far too soon for me to accomplish this. At that time, I knew little about propagating an orbit, obtaining decent observations, or tracking a satellite. CCD cameras were brand new in the amateur astronomy community, so their potential for satellite tracking would not be realized for at least several more years. In the late 1990s, I was introduced to satellite tracking, and I loved it from the start. I even designed and constructed an optical satellite tracking facility for the Canadian military. However, I always wanted to design software for the astronomy community that would introduce its members to the subject and spark their curiosity, much like my former employers had done for me over 20 years ago. I have my PhD today because of that spark, and I have never regretted the journey of space surveillance and space situational awareness (SSA) in general.

            Before I could start to write my software, I had to intricately learn the tools of satellite tracking. These included:

      1 - understanding orbit mechanics and the Two-Line Element Set (TLE) format;
      2 - creating a tracking schedule by propagating the orbit elements of desired satellites;
      3 - tracking the satellites in the optical and/or some other frequency (such as radio);
      4 - detecting the satellites amongst the noise in the collected data (for example, streak detection);
      5 - extracting the tracking data; and
      6 - conducting orbit determination using the obtained tracking data and, if required, identifying the satellite(s).

Each one of these steps involves careful consideration and intense study to fully understand its nuances and its purpose in the overall subject of satellite tracking. These parts would become the six primary components of the satellite tracking software that I would create.

            I required graduate school to obtain the final piece, which was orbit determination. In 2017, 27 years after my initial “foray” into orbital mechanics, I had finally developed everything that I required to build my first end-to-end satellite tracking software for those within the general public who were interested in the “Outer Limits” of astronomy. I don’t know if the public will even care about satellite tracking by the time I finish the software, but maybe some of the younger scientists will at least be curious about it, like I was in my younger (and underdeveloped) days. This satellite tracking software has begun with:

      1 - automatically downloading and sorting TLEs with respect to orbit period, common name and activity status;
      2 - propagating TLEs to produce observing schedules for transceivers and optical tracking facilities;
      3 - obtaining images of Low Earth Orbit (LEO) satellites using wide field CCD cameras;
      4 - automatically detecting and extracting LEO satellite streaks from optical images;
      5 - extracting tracking data from detected LEO satellite streaks contained within optical images; and
      6 - performing initial orbit determination of each detected satellite using a method that I had developed for observations conducted within 5 degrees of the local zenith. 

Right now, the code is in its early testing stage (as most code is). In this article, I have provided the specifics of what the code can currently do in each step. Much work is yet to be done; however, this is always the case for software of any kind; therefore I am not alone in this sort of endeavour.


PART 1 – OBTAINING AND SORTING TLEs 

            A “TLE” is a small text record consisting of two data lines, usually preceded by a satellite name line (hence the common three-line format), that identifies the satellite, its position within its orbit and its orbit’s orientation with respect to the Earth’s equatorial plane. In effect, a TLE identifies a satellite’s six orbit elements and the time (epoch) at which those orbit elements were most relevant.

            At first glance, extracting and sorting TLEs seems easy. Just parse some main TLE file, obtained from some website, such that specific TLEs are placed into specific bins for more convenient navigation. However, I first had to consider how I would sort the TLEs. TLE files are normally presented as one main file (containing thousands of TLEs) or are sorted according to someone else’s criteria. This is fine if only one TLE is required. However, what if you are looking to sort TLEs with respect to your own criteria in order to save processor (and your own) time? What if you have different TLE criteria for different scenarios? What if you want some TLEs for optical detection and others for radio reception? I decided that my TLE sorting software had to be more flexible than most and had to be able to sort TLEs according to specific criteria, such as specific orbit elements, common names, status (if available) and, when required, even specific amateur radio frequencies.

            When all of the possible criteria are considered, the task of sorting TLEs can be considerable. The software would certainly have to have a front end (a GUI) that asks the user for the specific sorting criteria that are desired. There would have to be a default sorting setting, one that I have defined and am using right now, based on my own specific criteria. However, the user should be able to override this default setting and pick and choose whichever criteria he/she wishes.

            Right now, my TLE sorting software automatically “scrapes” the Space-Track, Celestrak and amateur satellite tracking TLE databases. Space-Track is considered the “gold standard” of TLE repositories, containing nearly 20,000 TLEs from Vanguard 1 to the latest launches. Celestrak, managed by Dr. Thomas (T.S.) Kelso, conveniently separates the TLEs into more convenient bins, much like my software does. The amateur satellite tracking site primarily contains those TLEs that are not on the unclassified Space-Track and Celestrak sites. Unlike Space-Track, which requires a user account, both Celestrak and the amateur site are freely available. My TLE sorting software automatically asks for the user's Space-Track UserID and password, but in the future it will list Space-Track as an option rather than a default, for those who do not have a Space-Track account.
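A minimal sketch of this scraping step, written in Python, is shown below. The Space-Track login endpoint and query path and the Celestrak amateur file URL are assumptions based on my reading of the sites' public interfaces, and should be verified before use:

import getpass
import requests

SPACETRACK_LOGIN = "https://www.space-track.org/ajaxauth/login"
# Latest TLE per catalogued object; this query path is an assumption
# based on the Space-Track API documentation.
SPACETRACK_QUERY = ("https://www.space-track.org/basicspacedata/query/"
                    "class/tle_latest/ORDINAL/1/format/3le")
CELESTRAK_AMATEUR = "https://celestrak.com/NORAD/elements/amateur.txt"

def fetch_tles():
    """Download the Space-Track catalogue and Celestrak's amateur TLE list."""
    user = input("Space-Track UserID: ")
    password = getpass.getpass("Space-Track password: ")
    with requests.Session() as session:
        resp = session.post(SPACETRACK_LOGIN,
                            data={"identity": user, "password": password})
        resp.raise_for_status()                      # stop if the login failed
        spacetrack_tles = session.get(SPACETRACK_QUERY).text
    celestrak_tles = requests.get(CELESTRAK_AMATEUR).text
    return spacetrack_tles, celestrak_tles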

            Right now, my software considers the Space-Track site to be the primary TLE repository. The Celestrak site is only used for the amateur (HAM) cubesats, for those who wish to only consider those satellites that broadcast on the amateur radio bands. My software also looks at a very useful list of cubesats transmitting on the amateur frequencies, created and maintained by JE9PEL, in order to produce TLEs and schedules of only the active cubesats, as opposed to the derelict satellites that are likely no longer transmitting. A special thanks to JE9PEL for his very thorough efforts.

            TLEs are normally sorted according to the satellites’ 5-digit NORAD ID identifiers. Vanguard 1, launched in 1958 and the oldest satellite still in orbit, has the NORAD ID number 00005. The International Space Station, originally launched in 1998, has the NORAD ID number 25544. Since my TLE sorting software downloads TLEs from several different sources, it would be quite inconvenient to place TLEs into different bins without also ordering them by NORAD ID number. My software automatically does that for all TLEs downloaded from all sources, with the understanding that not all TLEs will be 100% accurate. The TLEs from the amateur sites are flagged within the TLEs themselves, in a single column of Line 1 that is not used by most (if not all) orbit propagation software.
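Grouping and ordering by NORAD ID takes only a few lines of Python once the fixed-column TLE format is kept in mind (the catalogue number occupies columns 3-7 of Line 1; the mean motion used for period binning occupies columns 53-63 of Line 2). A sketch, assuming three-line records:

def sort_tles_by_norad(tle_text):
    """Group three-line TLE records (name line plus two data lines) by
    NORAD ID and return them in ascending catalogue order."""
    lines = [ln.rstrip() for ln in tle_text.splitlines() if ln.strip()]
    tles = {}
    for i in range(0, len(lines) - 2, 3):
        name, line1, line2 = lines[i], lines[i + 1], lines[i + 2]
        if line1.startswith("1 ") and line2.startswith("2 "):
            norad = int(line1[2:7])            # columns 3-7: NORAD ID
            tles[norad] = (name, line1, line2)
    return [tles[n] for n in sorted(tles)]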

            When run, my current version of the TLE sorting software automatically performs the following tasks:

      - asks for the user’s Space-Track UserID and password;
      - creates the folder infrastructure based on my default sorting criteria;
      - extracts the latest TLEs from the Space-Track, Celestrak and the amateur satellite tracking sites;
      - sorts all of the satellite TLEs from the Space-Track and the amateur satellite tracking sites and writes a main TLE file containing all TLEs in order of NORAD ID number;
      - places satellite TLEs into specific bins, mainly according to mean motion, common name and, in some cases, status (active/inactive/payload/debris); and
      - places the amateur cubesat TLEs (from Celestrak) into a special “HAM” folder, which also contains the latest cubesat status file downloaded from the JE9PEL site. 

Of course, the software can be expanded to include other features and more conveniences, as the users need them. However, the TLE sorting software is only used to download and sort TLEs and nothing more. At this time, I am creating the foundations of each part of the software in order to carefully test each for reliability. This work will be ongoing.


PART 2 – PROPAGATING TLEs AND CREATING OBSERVING SCHEDULES 

            Orbit propagation and creating tracking schedules are much more involved than sorting TLEs. In a nutshell, the process of orbit propagation unpacks the orbit elements of a satellite in order to produce predicted coordinates at specific times, also known to astronomers as an “ephemeris”.

            Orbit propagation is such an involved and nuanced topic that I will not go into it in any great detail here. The orbit propagation model that is used depends on the accuracy desired. For example, optically observing a satellite with a narrow field of view will require more accurate orbit propagation than observing it in a wide field of view or receiving it with a transceiver facility.

            For my initial orbit propagation software, I decided to begin with the simpler route of transceiver reception. Orbit propagation that is accurate to within a few degrees will still be entirely adequate for a transceiver to receive a satellite’s transmissions. Therefore, the propagator did not have to be extremely accurate. Of course, this is also true for an optical system with a wide field of view of 10 degrees or more. This kind of accuracy can be accomplished with a standard SGP4 propagator, which underlies much of the satellite prediction software in use today.
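For readers who want to experiment, the python-sgp4 library wraps the standard SGP4 propagator; the sketch below propagates the library's sample ISS TLE to a chosen UTC time. The TLE lines are the documentation's example and would be stale for any real prediction:

from sgp4.api import Satrec, jday

# Sample ISS (NORAD 25544) TLE from the python-sgp4 documentation.
line1 = "1 25544U 98067A   19343.69339541  .00001764  00000-0  40967-4 0  9990"
line2 = "2 25544  51.6439 211.2001 0007417  17.6667  85.6398 15.50103472202482"

sat = Satrec.twoline2rv(line1, line2)
jd, fr = jday(2019, 7, 16, 0, 0, 0.0)   # UTC time to propagate to
err, r, v = sat.sgp4(jd, fr)            # TEME position (km) and velocity (km/s)
if err == 0:
    print("TEME position (km):", r)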

            First, I had to make sure that all of the orbit propagation components (modules) were reliable enough to be used again and again without large prediction inaccuracies and/or code crashes. The orbit propagation portion of the scheduling software comprises 13 modules, each one performing a specific and important task: from determining the satellite’s average radial distance from the Earth’s center to performing the coordinate transformations between the topocentric equatorial (RA/dec) and Alt-Az coordinate systems. Orbit propagation always involves Kepler’s Equation, which cannot be solved analytically. Therefore, for each TLE, Kepler’s Equation has to be solved by numerical methods within its own module.
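Kepler's Equation, M = E − e·sin E, must be inverted numerically for the eccentric anomaly E. A typical Newton-Raphson module (a generic sketch, not necessarily how my own module is written) looks like this:

import math

def solve_kepler(mean_anomaly, eccentricity, tol=1e-10, max_iter=50):
    """Solve Kepler's Equation M = E - e*sin(E) for the eccentric anomaly E
    (radians) by Newton-Raphson iteration."""
    E = mean_anomaly if eccentricity < 0.8 else math.pi   # common starting guess
    for _ in range(max_iter):
        delta = (E - eccentricity * math.sin(E) - mean_anomaly) / \
                (1.0 - eccentricity * math.cos(E))
        E -= delta
        if abs(delta) < tol:
            break
    return E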

            Second, I had to consider those specific cubesats that would be above the local horizon within the specific times indicated. My software also includes user-defined elevation (also known as the "Alt" of the Alt-Az coordinate system) constraints, with a default of 60°. This 60° default elevation constraint was defined to make sure that the cubesat was close enough to be detected by even the smaller (handheld) transceivers. However, the elevation constraint can be changed by the user, depending on the transceivers and the antennas used.

            Currently, my software downloads all of the amateur satellite TLEs from the Celestrak website, cross-references them with the latest JE9PEL file that was extracted with the TLE sorting software (using the NORAD ID), determines the activity status (active or inactive), and produces a schedule of accessibility times of the active cubesats, based on the user-defined begin and end times, as well as the desired elevation constraint. Example output text of the cubesat scheduling software is shown in Table 1. The header (first four lines) indicates the location (Saskatoon, Saskatchewan in this case), the begin and end times, and the elevation cutoff (between 60° and 90°). The next four lines identify the information contained in the remaining text.


Location: -106.631551  +52.096986  482m
From: 00:00:00 UTC 07-16-2019
To:      01:00:00 UTC 07-16-2019
Elevation: 60 to 90 degrees


NORAD Common1 (Common2)     Frequencies (MHz)
UTC hh mm ss         Az(deg) El(deg) --- 60 deg. entering target
UTC hh mm ss         Az(deg) El(deg) --- Highest Elevation
UTC hh mm ss         Az(deg) El(deg) --- 60 deg. exiting target


40911 XW-2B (XW-2B (CAS-3B)) beacon= 145.705/145.725 Rx= 435.090-435.110 Tx= 145.750-145.730
00 00 58         135.65     60.00
00 01 32           74.00     74.85
00 02 06           12.38     60.00

41842 XIAOXIANG 1 (Xiaoxiang-1 (TY-1 Tianyi-1)) beacon= 2401.600 Tx= 437.500/437.525
00 16 55         124.22     60.00
00 17 23           73.68     70.00
00 17 52           23.16     60.00

40912 KAITUO 1B (KAITUO-1B (CAS-3G DCBB)) Tx= 145.475*/437.950
00 34 52         187.66     60.00
00 35 27         256.71     78.38
00 36 02         325.73     60.00

43616 ELFIN B (ELFIN-B) Tx= 437.475
00 35 09           59.24     60.00
00 35 29           94.87     64.92
00 35 49         130.47     60.00

43615 CP-7 DAVE (DAVE (CP-7)) beacon= 437.150 Tx= 437.150
00 39 46           36.03     60.00
00 40 15           95.76     73.84
00 40 45         155.45     60.00

27848 CUBESAT XI 4 (XI-IV (CO-57)) beacon= 436.848 Tx= 437.490
00 43 37         206.46     60.00
00 44 25         256.54     69.79
00 45 13         306.60     60.00

27844 CUTE-1 (CUTE-I (CO-55)) beacon= 436.838 Tx= 437.470
00 43 48         213.48     60.00
00 44 31         256.95     67.38
00 45 14         300.41     60.00

35932 SWISSCUBE (SwissCube-1) beacon= 437.505 Tx= 437.505
00 50 29         322.28     60.00
00 50 58         290.04     64.09
00 51 26         257.81     60.00

23439 RADIO ROSTO (RS-15 (Radio Rosto)) beacon= 29.3525/29.3987
00 52 16         356.17     60.00
00 53 58           36.36     66.33
00 55 41           76.53     60.00

32788 AAUSAT CUBESAT 2 (AAUSAT-2) Tx= 437.426
00 55 43         155.16     60.00
00 56 26           74.82     84.59
00 57 10         354.57     60.00

Table 1: Example output from cubesat orbit propagation and scheduling software


    The tracking schedule in the output text file is ordered with respect to the time of first threshold elevation crossing (increasing elevation with time). Each satellite accessibility is shown using four distinct lines:

      Line 1: The satellite identification and frequencies (beacon, downlink and uplink);
      Line 2: The time and Alt-Az location at the first elevation threshold crossing (60° in the example);
      Line 3: The time and Alt-Az location at culmination (highest elevation of the pass); and
      Line 4: The time and Alt-Az location at the second (and final) elevation threshold crossing (60° in the example).

Within Line 1 (top line), the information pertaining to the satellite’s ID (NORAD and common) and its beacon, downlink (Rx) and uplink (Tx) frequencies is shown. In the future, it might be possible to use this information to automatically tune a transceiver to the required frequency and also to automatically correct for Doppler shift throughout the cubesat pass. However, testing the reliability of the foundation software is required in order to make sure that the transceiver will be reliably tuned to the correct frequency nearly 100% of the time.
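The first-order Doppler correction itself is simple; the range rate would come from the orbit propagator. A sketch:

C = 299792.458   # speed of light, km/s

def doppler_shifted_frequency(f_mhz, range_rate_km_s):
    """Frequency received on the ground from a transmitter at f_mhz, where
    range_rate is the rate of change of the observer-satellite distance
    (positive while the satellite recedes)."""
    return f_mhz * (1.0 - range_rate_km_s / C)

At 437.500 MHz, a typical LEO range rate of ±7 km/s shifts the received signal by roughly ±10 kHz, which is why continuous retuning during a pass matters.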

Eventually, the scheduling software will be accompanied by a GUI that will ask the user for specific user-defined constraints when considering the passes of the cubesats. These would include the times of consideration (start and stop), elevation (and possibly azimuth) constraints, as well as culmination constraints, if required. Graphical displays might also be used to show the user how the satellite will pass from horizon to horizon and approximately where it will be seen amongst the constellations. This is somewhat similar to the graphics of Ham Radio Deluxe and Heavens Above.

This simpler scheduling software can eventually be modified to be used for optical (and other) satellite tracking, much like Systems Tool Kit (STK) currently does, but without all of the bells, whistles and graphics that an amateur astronomer might not require. The GUIs and graphics will depend on the users’ requirements. If the user requires the specialized graphics of STK, then that tool is already available. 

            Another feature of the software is that if the user attempts to use it without updating the TLEs, the software will automatically activate the TLE sorting software to update all TLEs in the user’s database. This is especially useful if the user forgets (or neglects) to update the TLEs during his/her vacation time! Already, the two pieces of software are beginning to talk with one another.


PART 3 – OBTAINING IMAGES CONTAINING SATELLITES 

            Once a satellite tracking schedule has been produced, the user can obtain images of the desired satellites and/or receive transmissions from the active satellites. Of course, the user could produce scripts containing the ephemerides and tie them into existing telescope/CCD camera control software. However, maybe the user only requires software that simply obtains images and automatically saves them somewhere, and nothing else. In this case, the CCD can be controlled by my (eventual) software, which will use the satellite ephemerides to control the telescope and CCD camera (in the case of a guided system) to obtain images of the satellites of interest. Maybe the user has a wide field CCD camera pointed at zenith and wishes to catch whatever satellites drift into the field of view (normally called an "optical fence"). In this case, the user does not require scheduling software; just the software that obtains images, identifies streaks, performs astrometric analysis and performs orbit determination. Finally, maybe the user wishes to explicitly define what is written into every facet of the Flexible Image Transport System (FITS) image's header. My software will eventually enable the user to accomplish all of the above.

            Today, most CCD control software is bundled up with post-processing features, which is fine if you are into that. However, if a user simply wishes to obtain images and feed them into an automated streak detection algorithm, the user won’t care what the images look like, just as long as the streaks can be reliably extracted and analyzed and tracking data can be produced for further analysis. Maybe a user just wants software that automatically obtains images and saves them to some folder for future analysis. In this case, a much simpler “bare-bones” software package is required.

            This is the only part of the satellite tracking package that I have not worked on (or started) yet. This is primarily because I have used retail CCD control software to control my own CCD camera and I have plenty of CCD images that I can use to test the remaining pieces. However, I am now ready to make my own (very simple) CCD control software using Linux and Python. This will take some time, as I have no idea of how to communicate with or control a USB device using Python, but the learning process will be quite interesting. Once I acquire a better transceiver that can be computer controlled, I might also include a transceiver portion that can track transmitting cubesats. This part could become the first software that can accommodate both optical and radio satellite tracking enthusiasts, which would be quite exciting!

            Figure 1 shows an example image of two Low Earth Orbit (LEO) satellite streaks that appear to intersect with one another. This image will be used as the primary example for the streak and endpoint detection (Part 4), tracking data extraction (Part 5) and orbit determination (Part 6) steps that are described below.

Figure 1: CCD image of two satellite streaks that appear to be intersecting. The image shown is the negative of the original.
The streaks are identified as "1" (brighter streak) and "2" (fainter streak). The location of the local zenith is identified as the "+".
This image of the local zenith was taken at 01:37:00 UTC on October 28, 2018 from Beaver Creek Conservation Area, just south of Saskatoon, Saskatchewan, Canada.
The exposure time was 5 seconds. Increasing declination (north) is at top and increasing right ascension is towards the left, as indicated by the compass in the top left-hand corner.


PART 4 – SATELLITE STREAK AND ENDPOINT DETECTION FROM CCD IMAGES 

            When a satellite travels within the field of view of a CCD detector, it will leave a detected trail (also known as a “streak”) of its travels during the entire time that the shutter is open (see Figure 1 above for examples of satellite streaks). The evidence of the satellite’s travels is normally left as a streak with two endpoints that correspond to the CCD camera’s shutter opening and closing. Sometimes an inactive satellite will spin, showing a variable brightness, thereby producing a streak of varying intensity. At other times, the satellite has a consistently high brightness such that the detectable streak has a roughly uniform intensity in the image.

One of the most daunting tasks that I have faced throughout my career in space surveillance has been streak detection. Satellites are not what you would call “predictable objects”. They can be small or large, reflective or non-reflective, active or inactive, in good sunlight or in bad sunlight. Some of these characteristics can be accurately predicted and determined (for example, Iridium satellite flares). However, in most cases, satellites are unpredictable and CCD images of them are primarily based on chance. 

            The best case scenario would be to obtain a solid, bright and uniform satellite streak, which does happen regularly. However, there will be times when the satellite will not be lit well or will be displaying a darker side which reduces the possibility of detection. The satellite could be spinning, thus producing a streak with unreliable endpoints. If the timestamps are only showing the shutter open and close times, and the satellite is tumbling, one or both of the apparent endpoints might not actually correspond to the two timestamps. 

            Software that detects streaks will have to contend with such scenarios. But first, the software will have to detect whether there are any streaks in the image at all. An image of a satellite will not just contain the satellite. Any light that enters the CCD field of view will also be included within the image. This means that stars (and possibly asteroids, planets, meteor streaks, energetic particle hits and artificial artefacts, such as pixel and column defects) will also be shown in the image, and the computer does not know (or care about) the difference between stars and streaks. See Figure 1 above to see stars, streaks and background noise in an image. Subtracting flat and dark frames will reduce a lot of the artificial noise (such as the ambient background); however, the stars and other celestial citizens will remain.

            Whether using tracking or stationary detectors, we can all count on the fact that the stars will not change their apparent positions with respect to one another within a several-second exposure time. It is also likely that background noise will not significantly change in a few seconds either. Therefore, it is possible to significantly reduce the noise due to background and stars by simply subtracting one image from a subsequent image of the same area of the sky obtained a few seconds later. 

            My software uses two images obtained several seconds apart. The satellite will likely move during this time, and so the streak will be seen in two different places within the CCD field of view in the two images. Therefore, subtracting the second image from the first will eliminate much of the background noise and a lot of the stars; but not all of them. The stars will actually appear to move slightly because of atmospheric turbulence (the twinkle effect). Therefore, the evidence of the brightest stars will likely remain even after the subtraction. Figure 2 shows three images. The first (left) image is the same image shown in Figure 1. The second (middle) image is an image taken several seconds later of the same sky area. Note that Streak #1 has moved up and to the right in the image and Streak #2 has left the field of view, likely having moved down and to the right. Finally, the third (right) image is the result of the second image being subtracted from the first image.

Figure 2: Subtraction of two CCD images containing satellite streaks. The image on the right is the result of subtracting the middle image from the left image.

            Note that in the subtracted image (at right in Figure 2), the streaks from the first (left) image appear black and the streak from the second (middle) image appears white. Since these are all negative images, a white object is actually darker than a black object. Therefore, the white streak is comprised of very low (zero or nearly zero) brightness values because of the subtraction of the streak from the background in the first image. Note that the background in the subtracted image appears noisier than the backgrounds of the first and second images. This is due to increased contrast and not due to an increased background noise. Finally, note that a number of the brightest stars remain after the subtraction, as expected.

            Even after image subtraction, a background remains, consisting of a (much lower) noise floor and leftover stars. The software deals with the remaining noise floor by sampling parts of the image and averaging the results, such that an average noise floor is returned. The noise floor brightness is then subtracted from the subtracted image in order to remove much, if not all, of the background noise, as shown in Figure 3.

Figure 3: The image after background subtraction. Note that a number of the brighter stars remain but the two intersecting streaks are still prominent.
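A compact sketch of this two-stage background removal (frame differencing followed by noise-floor subtraction) is shown below; the patch-sampling strategy is my own guess at the details:

import numpy as np

def remove_background(frame1, frame2, n_samples=32, box=16):
    """Subtract a later frame from an earlier one to cancel stars and sky
    background, then estimate the residual noise floor from randomly
    sampled patches and subtract it as well."""
    diff = frame1.astype(np.int32) - frame2.astype(np.int32)
    diff[diff < 0] = 0                  # keep only streaks from the first frame
    rng = np.random.default_rng(0)
    ys = rng.integers(0, diff.shape[0] - box, n_samples)
    xs = rng.integers(0, diff.shape[1] - box, n_samples)
    floor = np.mean([diff[y:y + box, x:x + box].mean() for y, x in zip(ys, xs)])
    cleaned = diff - floor
    cleaned[cleaned < 0] = 0
    return cleaned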

            All streaks in the (first) image will be more easily seen now that the background has been subtracted out. However, the computer still does not know the difference between the (remaining) stars and the streaks. But the software has a solution for that. Most satellite streaks are elongated and the stars are normally round (and smaller in size). Therefore, the software can automatically “delete” all of the stars within the image by checking the number of contiguous pixels. If the number of contiguous pixels is very small within a specific region of the image, it is likely that the object is a star (not a streak) and therefore can be deleted. If the number of contiguous pixels is larger, then the object is likely a satellite and can be kept for analysis. Of course, choosing the threshold number can be difficult, but testing can help produce a suitable threshold value.

            Some of the detected satellites can travel very slowly across the detector's field of view. The slow motion of these satellites can produce very small and stubby streaks. These small streaks can be confused for stars and therefore might be deleted by the software. In effect, some satellites will appear to travel so slowly that they can appear to be stars in the image, in which case the software will definitely remove them from the image. This is an inherent danger of all streak detection algorithms, and so a suitable threshold will need to be chosen in order to retain most of the satellites and dispose of most (if not all) of the stars and other noise.
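The contiguous-pixel test maps naturally onto connected-component labelling; a sketch using scipy, where the 40-pixel threshold is purely illustrative and is exactly the tuning knob discussed above:

import numpy as np
from scipy import ndimage

def reject_stars(cleaned, min_pixels=40):
    """Delete any connected blob with fewer than min_pixels contiguous
    pixels: round star residuals are small, streaks are elongated."""
    mask = cleaned > 0
    labels, n = ndimage.label(mask)                      # group contiguous pixels
    sizes = ndimage.sum(mask, labels, range(1, n + 1))   # pixel count per blob
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_pixels))
    return np.where(keep, cleaned, 0)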

            This part of the software is still in the development stage, but so far the results are very encouraging. It has been able to detect two intersecting satellite streaks (shown in Figure 4) while rejecting all of the stellar and background noise, which is the ultimate aim of the software. Note that the bright streak (Streak #1) in Figure 4 appears to have a slight "kink" in it near its upper endpoint. This is due to a star that happened to be very near the streak at the time of imaging.

Figure 4: The image after final star removal, leaving only the two satellite streaks.

            Once the background noise and stars have been eliminated (or nearly so), the computer still does not know where the streaks are, including their endpoints. The software only knows all of the remaining non-zero pixel values. The streaks (and their endpoints) are contained within this data. Another problem is that even if the computer knew the exact locations of all streak endpoints, it still would not know the direction(s) of travel from a single image. Of course, if the second image also contains a streak from the satellite of interest, then the direction can be inferred from the satellite's motion between the two images. However, in some cases, the satellite's angular velocity is high enough that a streak will appear in the first image but the satellite will be outside of the field of view by the second image, thus retaining the direction ambiguity.

            The software does not solve for the direction of travel right away. The reason for this will be revealed later on. The software first needs to locate the streaks' endpoints, and this is done by using pixel scanning and angles.

            The software begins by scanning the image row by row, column by column (from the 0th row and increasing) until it sees the first non-zero pixel. In the case of our example streaks, this first pixel is located at point (344, 297), which also nearly corresponds to the upper endpoint of Streak #1. Since the stars and background have (likely) been eliminated, only satellite streaks are assumed to remain. Therefore, the first non-zero pixel that is encountered is likely one that is associated with a satellite streak endpoint. The first true endpoint is likely located close by, but not exactly on, this first non-zero pixel. The software then searches for the brightest pixel within a small (5x5 pixel) box centered on the first non-zero pixel. Assuming a “well-behaved” (meaning uniform) streak, this brightest pixel should roughly correspond to the centroid position of the satellite when the shutter opened (or closed, depending on the direction of travel). Figure 5 illustrates this endpoint detection process.

Figure 5: Endpoint detection of two satellite streaks. The pixels in red depict the first detected pixels found in the scan. The green pixels depict the actual endpoint of each streak, determined by the software.
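In numpy, this scan-and-refine step is short, because np.nonzero returns pixel coordinates in exactly the row-major order that the scan describes:

import numpy as np

def first_endpoint(img, box=2):
    """Locate the first non-zero pixel in a row-by-row scan, then refine it
    to the brightest pixel within a 5x5 box centred on it."""
    ys, xs = np.nonzero(img)             # row-major: top rows first
    if ys.size == 0:
        return None                      # no streaks in this image
    y0, x0 = int(ys[0]), int(xs[0])
    top, left = max(y0 - box, 0), max(x0 - box, 0)
    window = img[top:y0 + box + 1, left:x0 + box + 1]
    dy, dx = np.unravel_index(np.argmax(window), window.shape)
    return top + dy, left + dx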

            Once the first endpoint of the streak has been located, the software has to find both the orientation and the second endpoint of the same streak. This is done by using a radial arm, originating at the first located endpoint, with an initial length of 10 pixels. The arm is first swept between 0 and 180 degrees from the positive x-axis in the image (pivoting about the first endpoint) looking for other non-zero pixels that will likely constitute another part of the streak. Once other non-zero pixels are found at the end of the 10-pixel arm, the arm size is increased to 11 pixels and the slightly longer arm is swept through a number of angles again. However, this time the range of angles is reduced to those that resulted in the 10-pixel arm locating non-zero pixels. These steps are repeated over and over, with ever-increasing arm sizes, until the end of the radial arm cannot find any more non-zero pixels. At that point, the last detected pixel is recorded. Like the first endpoint, the second endpoint is found by locating the brightest pixel in a small (5x5 pixel) box. 
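A simplified sketch of this radial-arm sweep follows; details such as the step size and exactly how the angle window is narrowed are my assumptions:

import numpy as np

def trace_streak(img, y0, x0, r0=10, step_deg=1.0):
    """Follow a streak from its first endpoint (y0, x0) by sweeping a radial
    arm of increasing length, narrowing the swept angles to those that keep
    finding non-zero pixels; returns the approximate far endpoint."""
    angles = np.arange(0.0, 180.0, step_deg)     # initial sweep, degrees
    r, last_hit = r0, (y0, x0)
    while True:
        hits = []
        for a in np.deg2rad(angles):
            x = int(round(x0 + r * np.cos(a)))
            y = int(round(y0 + r * np.sin(a)))
            if 0 <= y < img.shape[0] and 0 <= x < img.shape[1] and img[y, x] > 0:
                hits.append((a, y, x))
        if not hits:
            return last_hit                      # arm ran off the end of the streak
        hit_deg = np.rad2deg([h[0] for h in hits])
        angles = np.arange(hit_deg.min() - step_deg,
                           hit_deg.max() + step_deg, step_deg)
        last_hit = hits[len(hits) // 2][1:]      # pixel near the streak centerline
        r += 1

As with the first endpoint, the returned pixel would then be refined to the brightest pixel within a 5x5 box.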

            Subsequent streaks in the image (if any) are located in a similar fashion to the first. However, how does the computer know not to relocate the first streak and think it is another streak? To avoid streak re-detection, the slope and intercept of the first streak (considering the first detected endpoint as the origin) are recorded. A 5-pixel zone is drawn around this linear streak. A potential second streak endpoint is only considered valid if another non-zero pixel is found at a minimum distance of more than 5 pixels from the first detected streak. If the next endpoint candidate lies within 5 pixels of the first streak, then it is considered a part of that streak. Of course, this method can be modified such that a variable-sized zone, based on the thickness of the first streak, is used instead of a constant 5-pixel zone. However, this method is suitable for testing purposes, especially if the two streaks are not located very near each other within the image.

Figure 6: Detection of the second satellite streak involves avoiding the first detected streak by placing a 5-pixel zone around it, shown by the red line. The second streak endpoint (single green pixel) was detected outside the first streak's avoidance zone and therefore was considered a detected second streak. The linear slope of the first streak is shown as the thin green line connecting the detected endpoints of Streak #1.
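The avoidance test reduces to a point-to-segment distance; a minimal sketch:

import math

def min_distance_to_streak(p, end_a, end_b):
    """Shortest distance from pixel p to the segment joining a streak's two
    detected endpoints; candidates closer than 5 pixels are treated as part
    of the already-detected streak."""
    (px, py), (ax, ay), (bx, by) = p, end_a, end_b
    dx, dy = bx - ax, by - ay
    # parameter of the perpendicular foot, clamped to the segment
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))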

            In the future, third, fourth, and even fifth, etc. streaks can be detected in the same image by using a subroutine that runs until no further streaks are detected within the image. Exhaustive testing will be required to make sure that the majority, if not all, of the streaks are detected within every image obtained over an observing session.

             Finally, it is possible that an image will contain no detectable streaks. In this event, there should be no non-zero pixels within the image and the software will simply move on to the next image to look for satellite streaks there.


PART 5 – EXTRACTING TRACKING DATA FROM THE DETECTED STREAKS

            At this stage, the endpoints of the detected streaks have only x-y pixel coordinates. They have not yet been transformed into the equatorial (right ascension / declination) coordinate system. To accomplish this, the coordinates of the image center, the rotation of the image with respect to the coordinate system, and the scale of the image (number of pixels per degree) would need to be established.

            The scale factor of the image can easily be obtained by using the CCD/optical apparatus to obtain a typical image containing a large number of stars. A number of star pairs are selected and pixel and angular separations between each pair of stars are measured. A plot of angular vs. pixel separation is produced and linear regression performed. Given enough data points, the slope and intercept of the plot will produce the scale equation in a typical linear equation form. 

            For example, my SBIG-9XE CCD camera fitted with a wide-field 50mm lens (which produced the image in Figure 1, etc.) has a linear scale equation of...

γ = m·p + b     (the numerical slope and intercept for this apparatus appeared here as an equation image)

...based on an image produced by the apparatus. The variable “p” is the pixel separation and the variable "γ" is the angular separation. Therefore, it is possible to determine the angular separation between two streak endpoints by using their pixel separation, measured directly from the image.
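Producing the scale equation is a single least-squares fit. With a handful of measured star pairs (the numbers below are illustrative placeholders, not my calibration measurements):

import numpy as np

# pixel separations and catalogue angular separations (degrees) for star pairs
pixel_sep = np.array([120.0, 254.0, 388.0, 510.0, 645.0])      # placeholder data
angular_sep = np.array([1.32, 2.79, 4.27, 5.61, 7.10])         # placeholder data

m, b = np.polyfit(pixel_sep, angular_sep, 1)   # least-squares line: gamma = m*p + b
print(f"gamma = {m:.6f} * p + {b:.6f}  (degrees)")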

            If the software knows the pixel distance (in x and y directions) of any streak endpoint from the image’s center, then a simple transformation can be used to determine the endpoint’s equatorial coordinates. When using images of the local zenith, this process becomes somewhat easier. 

            When using zenith imaging, I first level the CCD camera. This places the location of the local zenith very near the center of my field of view. Next, I orient the CCD camera in azimuth such that the top of the image is aligned to the direction of the North Celestial Pole (roughly near Polaris). This orients the image such that declination is mainly measured in the vertical (y) axis and that right ascension is measured mainly in the horizontal (x) axis of the image. My field of view (about 11.25 degrees) is assumed to be linear enough such that the declination lines are parallel to the image’s x-axis and right ascension lines are parallel to the image’s y-axis. 

            My setup places the zenith location near the center of the image with the meridian line nearly running vertically through the center of the image. The right ascension coordinate of the zenith point (and the meridian line) is the local sidereal time, which depends on the time that the image was taken. The declination coordinate of the zenith point is always the same as the observer’s geodetic latitude. Both coordinates can be easily determined and only the right ascension coordinate changes with time, assuming that the observer’s location (especially latitude) does not change. 

            Assuming that the CCD camera is suitably oriented such that increasing declination is exactly along the y-axis, endpoint coordinates can be roughly determined without relying on the background stars and astrometric analysis. In effect, the software can then extract the tracking data very quickly, which can then be used for initial orbit determination from the single image using my Zenith Method. If the zenith location is not exactly located at the image’s center, an x-y offset or astrometric analysis would be required to accurately determine each endpoint’s equatorial coordinates. The software can accept user-defined x-y offsets, if required, but only if the image contains the local zenith point.
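Under these zenith-imaging assumptions, the pixel-to-equatorial transformation is only a few lines. The sketch below also applies the 1/cos(dec) convergence of right ascension, which the flat-field assumption otherwise glosses over; the sign conventions follow Figure 1 (north up, right ascension increasing to the left):

import math

def pixel_to_equatorial(dx, dy, lst_deg, lat_deg, scale_deg_per_px):
    """Convert a pixel offset (dx right, dy up) from the zenith point at the
    image centre into equatorial coordinates (degrees)."""
    dec = lat_deg + dy * scale_deg_per_px
    # one degree on the sky spans more degrees of RA at higher declination
    ra = lst_deg - dx * scale_deg_per_px / math.cos(math.radians(dec))
    return ra % 360.0, dec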

             I do plan to use astrometric analysis using the stars that were initially removed from the image. Instead of simply throwing the stars away, I could place them into another image (minus the streaks) and use that image to determine the scale factor, rotation and location of the true zenith point and the streaks' endpoints. This technique can even be used to determine the scale factor at some specific sector of an image, rather than estimating the entire image's scale factor and assuming that it is the same throughout the image. Doing this will likely improve the astrometric accuracy of the endpoint coordinates.


PART 6a – ORBIT DETERMINATION FROM THE TRACKING DATA

            Just like orbit propagation, orbit determination is a very complicated subject. Orbit determination is the process of determining the size, shape and orientation of a celestial object’s orbit plane using tracking data obtained from observations. There are many different methods of orbit determination with varying degrees of accuracy. The required determination accuracy mainly depends on the accuracy of the propagation required from the orbit elements. For example, if someone wants to relocate a satellite at a later time, the accuracy of the predicted satellite’s location will depend on the orbit determination method used to determine the satellite’s orbit elements. 

            The amount of tracking data that can be obtained for a satellite can range from very small (several observations) to very large (hundreds or even thousands of observations). Currently, my software only considers several observations (a small amount of data) obtained at or near the local zenith. In most cases, my software will only consider a single image containing satellite streaks, each having two endpoints (two pieces of tracking data each). Most orbit determination software will only accept a minimum of three observations to conduct orbit determination; however, my software will currently accept a minimum of two observations (corresponding to two streak endpoints) in order to conduct initial orbit determination at or near the local zenith.

            MyZenith Method” of initial orbit determination begins with the assumption that the eccentricity of the satellite’s orbit is nearly 0 (a nearly circular orbit shape). This automatically means that the argument of perigee (angle between the satellite’s ascending node and the perigee point) is undefined. This also means that the Mean Anomaly (angle from perigee point) is also undefined. In effect, three of the six orbit elements are no longer valid because the orbit has been assumed to be nearly circular in shape. 

            The remaining orbit elements, the semi-major axis (a), the inclination (i) and the right ascension of the ascending node (RAAN), can all be determined using the “Zenith Method” and can also be used to identify every detected satellite, provided that a TLE exists for each of them. If required for further propagation, the Mean Anomaly can be replaced with the Argument of Latitude, the angle that the satellite has travelled from the ascending node, which is always defined regardless of the orbit eccentricity.

            The semi-major axis is determined by first measuring the streak’s angular size (transformed from the streak’s pixel size using the scale factor equation). The satellite’s angular velocity is then determined by dividing the streak's angular size by the exposure time. Assuming a nearly circular orbit, a smaller streak size will generally indicate a smaller angular velocity and a larger semi-major axis (a greater range from the observer). Since most LEOs with altitudes under 1000 km have roughly circular orbits, this assumption holds for the longer streaks in the images. Smaller streaks could indicate high-altitude LEOs or MEOs, both of which could well have more elliptical orbits. The direction of the satellite’s travel in the image does not affect the estimated value of the semi-major axis.
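For a circular orbit seen at zenith, the observed angular rate is approximately the orbital speed divided by the range, ω ≈ sqrt(μ/a)/(a − R⊕), which can be inverted numerically for a. The sketch below neglects the observer's own rotational velocity, which a more careful treatment would include:

import math

MU = 398600.4418       # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6378.137     # Earth's equatorial radius, km

def semi_major_axis_from_rate(omega_deg_s):
    """Estimate a circular orbit's semi-major axis (km) from the angular
    rate (deg/s) observed at zenith; bisection works because the rate
    decreases monotonically with altitude."""
    omega = math.radians(omega_deg_s)
    rate = lambda a: math.sqrt(MU / a) / (a - R_EARTH)
    lo, hi = R_EARTH + 100.0, R_EARTH + 40000.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if rate(mid) > omega:
            lo = mid        # rate too high: the satellite must be higher
        else:
            hi = mid
    return 0.5 * (lo + hi)

A 5-degree streak in a 5-second exposure (1 deg/s) returns a semi-major axis near 6820 km, an altitude of roughly 440 km, consistent with a low LEO pass.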

            The orbit’s inclination and RAAN are both determined by the satellite streak’s orientation within the image. For example, a perfectly polar inclination (i=90°) will exhibit a streak that is roughly oriented vertically (parallel to the y-axis) in the image, while an orbit inclination that is the same as the observer’s geodetic latitude will result in a (zenith) streak that is roughly oriented horizontally (parallel to the x-axis) in the image. A streak oriented between horizontal and vertical will indicate an inclination of between the observer's latitude (prograde orbit) and the difference of the observer's latitude from 180° (retrograde orbit). The type of orbit (prograde or retrograde) will depend on the satellite's direction of travel at the time of the image.
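For the same zenith geometry there is a compact closed form: a circular orbit whose ground track crosses the observer's zenith heading along azimuth β satisfies cos i = cos(latitude)·sin(β). A sketch:

import math

def inclination_from_heading(azimuth_deg, lat_deg):
    """Inclination (degrees) of a circular orbit passing through the local
    zenith while heading along the given azimuth (degrees east of north).
    The two travel directions along a streak give the prograde and
    retrograde candidate inclinations."""
    cos_i = math.cos(math.radians(lat_deg)) * math.sin(math.radians(azimuth_deg))
    return math.degrees(math.acos(cos_i))

Due-north travel (β = 0°) gives i = 90°, due-east travel gives i equal to the observer's latitude, and due-west travel gives 180° minus the latitude, matching the streak orientations described above.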

            Since the three orbit elements (a, i and RAAN) are being determined using a single image and a very small amount of tracking data, it is likely that these elements will not be extremely accurate. However, they may be used to provide an initial TLE for satellites that have just been launched or have just been detected for the first time and therefore have no TLE. This method can then be used to predict where the satellite will be at a (near) future time, so that it can be tracked again and a larger amount of tracking data can be obtained for a more accurate orbit determination.

            This rather simple orbit determination code (compared to the more robust and accurate orbit determination methods) can also be used to identify the detected satellites, provided that up-to-date TLEs already exist for the satellites detected.


PART 6b – IDENTIFYING THE DETECTED SATELLITES 

            Once the three orbit elements (a, i and RAAN) have been estimated from the tracking data, the software can identify the detected satellites in the image by propagating the most relevant and up-to-date TLEs and comparing the resulting predicted coordinates with the observed streak’s coordinates. The software could try each and every TLE pertaining to all catalogues of orbiting satellites; however, this would be a very time-consuming process because there are thousands of TLEs corresponding to the thousands of currently known orbiting satellites.

            However, there is a way that the software greatly reduces the number of candidate satellites and therefore the overall number of TLEs to check. Since the software has determined three of the six orbit elements using the image itself, the software can compare these determined elements with those of the official TLEs. Using a user-defined tolerance for each of the three orbit elements, only those TLEs that are within the specific inclination, RAAN and mean motion (directly related to the semi-major axis) constraints will be considered for propagation. For example, if the satellite is determined to be residing in a sun-synchronous orbit having a specific mean motion and inclination, the software will not have to propagate any TLE that is not a high-inclination LEO satellite (such as any GEO, MEO or HEO satellites). This greatly reduces the number of TLEs that are considered and therefore the number of contenders. In addition, the TLEs are selected from the TLE bins that were created using the TLE sorting software that was described in Part 1. Therefore, the software will not have to check any MEO, GEO or HEO TLEs at all, thus saving a lot of processing time.
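The element-tolerance filter is easy to express directly against the fixed TLE columns (the tolerance defaults below are illustrative, not the ones my software uses):

def filter_candidates(tles, inc_est, raan_est, mm_est,
                      inc_tol=5.0, raan_tol=10.0, mm_tol=1.0):
    """Keep only TLEs whose inclination, RAAN and mean motion fall within
    the given tolerances of the values determined from the image."""
    keep = []
    for name, line1, line2 in tles:
        inc = float(line2[8:16])       # line 2, columns 9-16: inclination (deg)
        raan = float(line2[17:25])     # columns 18-25: RAAN (deg)
        mm = float(line2[52:63])       # columns 53-63: mean motion (rev/day)
        d_raan = min(abs(raan - raan_est), 360.0 - abs(raan - raan_est))
        if (abs(inc - inc_est) <= inc_tol and d_raan <= raan_tol
                and abs(mm - mm_est) <= mm_tol):
            keep.append((name, line1, line2))
    return keep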

            After the initial TLE filtering, the software then propagates all of the remaining TLEs with the same orbit propagator used for the satellite orbit propagation and scheduling software described in Part 2. The predicted positions of all the candidate satellites at the time corresponding to the midpoint of each streak are compared to the equatorial coordinates (right ascension and declination) of the observed streaks’ midpoints. A satellite that has the equatorial coordinates closest (in angle) to a streak midpoint’s coordinates is considered the most likely identification of that particular streak in the image. Subsequent streaks in the images are identified in much the same manner. 
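Ranking the surviving candidates then comes down to the great-circle separation between each predicted position and the streak midpoint:

import math

def angular_separation(ra1, dec1, ra2, dec2):
    """Great-circle angle (degrees) between two equatorial positions, all
    arguments in degrees; the smallest separation marks the best candidate."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    cos_sep = (math.sin(dec1) * math.sin(dec2)
               + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_sep))))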

            Currently, my software can identify streaks within an image by using the determined orbit elements and propagated TLEs. However, the question of direction has not yet been addressed. My software therefore considers both directions of travel along the streak length, resulting in two inclinations (and two RAANs): one corresponding to a prograde orbit and the other corresponding to a retrograde orbit. The correct orbit (and correct direction) might be automatically ascertained from the number of satellites residing in each orbit type. For example, there is a much higher probability of a satellite residing in an orbit of inclination 65° (prograde) than in one having an inclination of 115° (retrograde). In the scenario of a more vertical streak slope, however, there is an almost equal probability of LEO orbits having inclinations of 85° (prograde) and 95° (retrograde and likely sun-synchronous). The final determination of identity also rests on which satellites were within or near the (zenith) field of view at the time the image was obtained. Therefore a definitive identification is still possible, even in a high-inclination orbit scenario.

            At present, my software might produce two (or more) candidate satellites for the same streak. In this case, it might be necessary to look at the angular separation between each predicted location and the location of the streak center. One candidate might correspond to a larger angular separation than the others, in which case the prediction with the minimum angular separation identifies the most likely satellite.

            As with all of the other parts of my current software, the orbit determination and satellite identification portions might not be 100% reliable at first; however, nothing is and my software requires a lot of testing and tweaking as it evolves. I am sure that once users begin to test it out, I will get a lot of suggestions and comments surrounding its use and its reliability.


FIRST RESULTS

            Figure 7 shows the results of my first (nearly) end-to-end satellite tracking software in its present form. The software was able to detect both of the satellite streaks in the image, estimate their endpoint locations, and determine the satellites' semi-major axes, orbit inclinations and the locations of their ascending nodes. Finally, the software was able to correctly identify both of the satellites. These are very encouraging results, which motivate me to continue to develop this software so that one day other users can successfully detect and track satellites, determine their orbits and identify them.

Figure 7: Results from my first end-to-end satellite tracking and analysis software
"LST" refers to the Local Sidereal Time (degrees) at the location and when the image (shown in Figure 1) was obtained.
"Center Equatorial" refers to the equatorial (RA and dec) coordinates of the streak's midpoint.
Two inclinations and RAANs are indicated for each satellite: one for a prograde orbit and the other for a retrograde orbit (depending on satellite travel direction).
The "Streak ID" contains the Common ID, the NORAD ID and the angular separation (degrees) between the true streak midpoint (from image) and its predicted coordinates (from TLE propagation).

 

Dr. Michael A. Earl - July 16, 2019


