Upon first arriving at APO, we locate the day staff and find out what has been happening to the telescope during the day. This keeps us informed of any procedural changes that may be necessary and gives us a heads-up as to where to look if things go wrong later in the night. If there is time remaining before the daily phone-con (which establishes observing priorities for the evening), we start up the observing console, scan our email for important items, and do some quick aliveness checks of the various systems. After the phone-con, we continue our aliveness checks (per a standardized checklist) and go to the telescope to check out the systems there. Once the telescope is checked out (and the instrument changed if necessary), we continue in the control room with our standard checks and email reading. (We typically get between 30 and 60 emails a day, with subjects ranging from who will not be in today at the Solar Observatory in Sunspot to new procedures for safe imager handling.) With no problems, our standard checkout takes about one hour. Once it is close enough to twilight, we remove the enclosure from the telescope and do the few remaining setup items we could not accomplish in the enclosure. We then go back to the control room and start the telescope slewing to our first field.

During the night, we use roughly twenty-five (25) windows on the observer console. Here is a highlighted list of some of them:

SOP - Spectrograph Operating Program
IOP - Imaging Operating Program
Murmur - a continuously scrolling window with tons of useful information
Watcher - a program which aims to catch important information from the ...
Interlocks Display - an interlocks status display from the Watcher
System Status Display - shows telescope position, instrument specifics, etc.
Servers - controllers that provide the information to the Watcher from the various subsystems
TPM Display - real-time display of some of the TPM data (e.g., telescope & mirror positions)
MCPMenu - controls the telescope from the MCP (position, flat field ...
TCC - Telescope controller: another continuously scrolling display that ...
Titrax - time-tracking software for observer activities
Weather
Editor - for the night log
Email - constantly trying to catch up on the day's email traffic
QA - there are no QA tools at this time

Once we stop observing, we attempt to run the endNight [SI]OP script, which prepares the data tapes and does some other housekeeping routines. This process can often last hours. Once that is started, we put away the telescope, fill or replace the LN2 dewars as necessary, and finish the night log. Once endNight successfully completes, we prepare the tapes for shipping, finish any other details, and head home, by this time often overlapping with the day staff to exchange information on the night's work. Of course, we should also have a description here for crisis management, but although the need frequently arises, there is no telling where or when such work will be needed.

Efficiency

Most of the observing tools we have get the job done, but few are to the point where they are easy to use, convenient, or efficient. The [SI]OP programs have the ability to run custom scripts inside them, so that will be an efficiency plus once we get beyond the current round of development and bug-fixing and can devote some time to exploiting it. I want to highlight some other areas of inefficiency, some of which can be improved easily (and many are being discussed) and others that we are probably just going to have to live with:

Spectro inefficiency: high overhead due to diamondPoints, centering, focusing.
Imaging inefficiency: having to do multiple lskips.
Night Logs: we log many things which would be better done automatically.
Telescope enclosure: some engineering/setup tasks we can do in the ...
Interlocks: an important safeguard, but we often lose time figuring out which ...
Watcher: error messages are often opaque; not necessarily its fault ...
Information overload: the scrolling murmur and TCC windows are not easy to use. The Watcher ...
QA: we do not yet have sufficient tools to monitor data quality during ...
DA: the system is much more reliable now than in the past, but depending ...
Documentation: there isn't enough. (Ellyne discussed this in more detail.)

Staffing

Dark runs seem to be averaging around 18 nights. We find it necessary to interact with the day staff before each observing night, and due to the endNight process and the advantages of meeting again with the day staff at the end of the night, our "day" runs from something like 4 pm to 8 am. Thus we typically run two shifts a night of 9-10 hours each. We need two observers per shift to monitor the myriad systems mentioned above as well as to handle the night's problems. This schedule provides some overlap between the observers in each team during the shift change, but working a two-shifts-a-night schedule with three observing teams has proven quite awkward and has resulted in occasional zombie-like observers. This arrangement requires us to continuously switch from the first shift to the second shift (and back again) during a run. Many studies have shown that constant shift changes lead to increased exhaustion and employee unhappiness. We can verify this. The current arrangement is not sustainable over a five-year survey, and data uniformity could eventually suffer.

In addition to the scheduled observing run, we have needed about three nights at the beginning of each dark run to "shake out" new bugs in the systems (many of which have changed since the versions used in the previous dark run), and we are now talking about adding an additional night of testing at the end of each run. This totals about 22 nights * 4 people/night * 9 hours/person = 792 hours/run. At our current staffing level of six people working 40 hours/week, we have an available pool of approximately 960 hours/month. Thus, after the 792 hours of observing, we are left with about three (9-hour) days of time available per observer.
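The staffing arithmetic above can be checked with a short back-of-the-envelope script. All of the input figures come from the text; the round-robin rotation at the end is only an illustrative model of how three teams might cover two nightly shifts, not the actual APO roster.

```python
# Back-of-the-envelope check of the staffing figures quoted above.
# All inputs come from the text; nothing here is measured data.

nights_per_run = 22     # ~18 dark nights + ~3 shakeout nights + 1 end-of-run test
people_per_night = 4    # two shifts x two observers per shift
hours_per_shift = 9     # shifts run 9-10 hours; the low end is used here

hours_per_run = nights_per_run * people_per_night * hours_per_shift   # 792

observers = 6
pool_per_month = observers * 40 * 4   # 40 hours/week, ~4 weeks/month -> 960

days_left_per_observer = (pool_per_month - hours_per_run) / observers / hours_per_shift
print(f"{hours_per_run} hours/run, {pool_per_month}-hour monthly pool, "
      f"{days_left_per_observer:.1f} nine-hour days left per observer")

# Why two shifts covered by three teams force constant shift switching:
# label the teams 0..2 and hand out the two nightly shifts in round-robin
# order (a hypothetical schedule for illustration only).
for night in range(3):
    first, second = (2 * night) % 3, (2 * night + 1) % 3
    print(f"night {night}: first shift = team {first}, second shift = team {second}")
```

In the round-robin output, any team that works consecutive nights alternates between the first and second shift, which is exactly the constant switching described above; a fourth team (two fixed teams per shift) would remove that forcing.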
It is in these remaining three days that each observer can: a) process what went right and wrong in the last run, b) make improvements to the operating software, instruments, documentation, etc., c) check and develop quality-control and long-term monitoring projects for the data and operating systems, and, not least, d) engage in some scientific research with the data. Clearly, the hours just aren't there to allow the observers to take an active role in developing and improving the observing systems and tools. It is usually the case that the best astronomical instruments (be they hardware or software) are developed by the people who are actually going to use them. Despite having the required skill set to implement the needed last 10% of systems development, the observers simply do not have the time to do so. Most astronomers will also agree that the best data are taken by people who want to use the data, and the SDSS project therefore wisely chose to hire scientists to perform the observations. Scientists are not likely to be happy if not given the chance to actually do science. A fourth team of observers would both eliminate the need to switch shifts and allow more time for the activities mentioned above, which are simply falling through the cracks now. It is one of our biggest frustrations that we feel we have the tools and skills to improve our operating environment and contribute to the scientific output of the SDSS, but do not have the necessary amount of time to do so. While we realize everyone suffers from a lack of available time, we believe that given more time, we can greatly benefit the SDSS and improve the satisfaction we each derive from our jobs as well.
Review of Observing Systems and Survey Operations, Apache Point Observatory, April 25-27, 2000