A robotic telescope is a telescope that can make observations without hands-on human control. Its low-level behavior is automatic and computer-controlled. Robotic telescopes usually run under the control of a scheduler, which provides high-level control by selecting astronomical targets for observation. The scheduler itself may or may not be highly automated. Some robotic telescopes are scheduled in a simple manner, and are provided with an observing plan by a human astronomer at the start of each night. They are only robotic in the sense that individual observations are carried out automatically.
Kitt Peak Remote Controlled Telescope. Image credit: Mark Hanna/NOIRLab/NSF/AURA
The field of robotic telescopes has a long history. Around 1965, the Wisconsin Automatic Photoelectric Telescope, an 8-inch reflector, was coupled to a computer with 4 KB of memory. It was capable of running unattended for several days at a time, shutting itself down at dawn or in poor weather. At around the same time the 50-inch Kitt Peak Remote Controlled Telescope also came into operation, managed from a control center 90 km away in Tucson, Arizona. In this case scheduling was provided by an operator who would manually set the observation program at the beginning of each night, and then periodically monitor the telescope's performance as the run progressed.
Robotic telescopes have many advantages. Removing humans from the observing process allows faster response times: robotic telescopes can react to alert broadcasts from satellites and begin observing within seconds. Particularly in the field of gamma-ray bursts, such very early observations have led to significant advances in astronomers' understanding of these events. Automation also eliminates the need for an observer to be constantly present at the telescope, making observations more efficient and less costly. Many telescopes operate in remote and extreme environments such as mountain tops, deserts, and even Antarctica. Under difficult conditions like these, a robotic telescope is usually cheaper, more reliable and more efficient than an equivalent non-robotic telescope.
Serol represents the Las Cumbres Observatory scheduling software. Credit: LCO
The sheer volume of observations that a robotic telescope can acquire is usually far greater than what human observers can manage. Artificial intelligence techniques can be applied to identify trends and anomalies in the data, and in some cases the scientific return on the instruments can be maximized by deriving interesting secondary science from the data. Since no observer needs to be physically present at the telescope, there is no requirement that observations occur in a single consecutive block of time. This allows observers to gather data over a longer span of time without an increase in the total number of observations needed. Perhaps the most exciting feature of automated telescopes, whether fully robotic or not, is that when many such telescopes are connected in a network spread across two or more geographically distant sites, observations do not need to end when the Sun comes up: they can continue at a distant telescope that is still in the dark. Another advantage of a robotic telescope network is the ability to make simultaneous observations, with both similar and different instruments. For example, an astronomer might want to observe an object with a spectrograph while also acquiring images in several filters.
The main disadvantage of a robotic system is the engineering effort that automation requires: the more sophisticated the telescope's autonomy, the greater the amount of work needed to enable that functionality. Scheduling systems usually combine a number of different variables (visibility, priority, weather conditions and many more) in order to decide the best course of action for a telescope at any given time.
A scheduler needs an interface that allows astronomers to input requests for complex, multi-step observations. The scheduler then needs to prioritize among all of the observation requests and find the most efficient way to complete as many high-quality observations as possible. These systems are complicated and difficult to develop.
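The decision logic described above can be illustrated with a toy example. The sketch below is a hypothetical greedy scheduler, not the software used by any real observatory: the request fields, the multiplicative scoring weights, and the target names are all assumptions made for illustration. Real schedulers optimize over full time windows rather than picking one request at a time.

```python
from dataclasses import dataclass

@dataclass
class Request:
    """One observation request (hypothetical fields for illustration)."""
    name: str
    priority: float    # science priority assigned by the astronomer
    visibility: float  # 0..1, fraction of the night the target is observable
    weather_ok: bool   # current conditions satisfy the request's constraints

def score(req: Request) -> float:
    """Combine the scheduling variables into one figure of merit.
    The weighting (priority times visibility) is an assumed toy model."""
    if not req.weather_ok:
        return 0.0
    return req.priority * req.visibility

def pick_next(requests):
    """Greedy choice: return the highest-scoring observable request, or None."""
    best = max(requests, key=score, default=None)
    return best if best is not None and score(best) > 0 else None

# Example queue (invented targets and numbers).
queue = [
    Request("M31 imaging",    priority=3.0, visibility=0.8, weather_ok=True),
    Request("GRB follow-up",  priority=9.0, visibility=0.5, weather_ok=True),
    Request("SN spectrum",    priority=7.0, visibility=0.9, weather_ok=False),
]

print(pick_next(queue).name)  # GRB follow-up (score 4.5 beats 2.4; SN scores 0)
```

A production scheduler would re-run a decision like this continuously, since weather, target visibility, and the request pool all change during the night.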
- based on work by Eric Saunders