Livepage Sources

Telescope data

The data that is shown on the telescope live pages comes from computers at the observatories. Most of the data comes from a drive PC monitoring connection that is kept open on newsmerd at Mt Pleasant and on sille at Ceduna.

On newsmerd and sille, there is a C program that:

  • opens a drive PC monitoring connection to either sys26m or sys30m
  • opens a database connection to ares
  • collects data from the drive PC every 5 seconds and enters it into the database on ares

On newsmerd this program is called telmon_interface_26, while on sille it is called telmon_interface_30. The program lives in /home/jstevens/TelMon_interface (on both computers), along with its source code. The source is heavily based on the antenna_monitor code.
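
As a rough illustration of the structure described above, the main loop amounts to something like the following sketch. This is not the actual telmon_interface source: query_drive_pc() and insert_into_ares_db() are placeholder stand-ins for the real drive PC monitoring protocol and the database insert on ares.

/* Minimal sketch of the telmon_interface poll loop (not the real source).
 * query_drive_pc() and insert_into_ares_db() are placeholders for the drive
 * PC monitoring protocol and the database insert on ares. */
#include <stdio.h>
#include <unistd.h>

struct drive_status { double az, el; int state; };    /* illustrative fields only */

static int query_drive_pc(struct drive_status *s)
{
    s->az = 0.0; s->el = 90.0; s->state = 0;           /* placeholder values */
    return 0;                                          /* 0 = success */
}

static int insert_into_ares_db(const struct drive_status *s)
{
    printf("az=%.3f el=%.3f state=%d\n", s->az, s->el, s->state);  /* placeholder */
    return 0;
}

int main(void)
{
    struct drive_status status;

    for (;;) {
        if (query_drive_pc(&status) == 0)
            insert_into_ares_db(&status);
        sleep(5);                                      /* one sample every 5 seconds */
    }
    return 0;
}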

On both computers, telmon_interface_xx is configured to start automatically when the computer boots. However, there will be occasions when this program needs to be restarted. To do this, follow this procedure:

  • log in to newsmerd or sille as observer
  • ps -u root | grep telmon ; you should see only one line. If you see more than one line, you will need to become root and kill all the processes with the command killall telmon_interface_26 (or killall telmon_interface_30, as the case may be).
  • give the command /etc/init.d/sql_monitors restart
  • ps -u root | grep telmon ; again, you should now see only one line. If more than one line appears, go back to step 2. If nothing appears, the program was not able to start and you will need to troubleshoot the problem.

In general, the live page monitors are fairly robust and will detect if there is a problem either in getting a monitoring connection (eg. the drive PC has too many active connections) or a database connection (eg. the network link back to ares is not working). In these cases, the program will attempt to reinitialise the connections every 5 seconds until it succeeds, and should not require a restart. However, there are known unresolved bugs: be mindful that many instances of telmon_interface will often appear, which can saturate the drive PC. This seems to be more prevalent on newsmerd than on sille.
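
The reinitialisation behaviour amounts to a retry loop along these lines. Again, this is only a sketch: both open_* functions are placeholders for the real connection code.

/* Sketch of the reconnect-on-failure behaviour: if either the drive PC
 * monitoring connection or the database connection to ares cannot be opened,
 * retry every 5 seconds instead of exiting. Both open_* functions are
 * placeholders. */
#include <stdio.h>
#include <unistd.h>

static int open_drive_pc_connection(void) { return 0; }   /* placeholder: 0 = ok */
static int open_ares_db_connection(void)  { return 0; }   /* placeholder: 0 = ok */

int main(void)
{
    while (open_drive_pc_connection() != 0 || open_ares_db_connection() != 0) {
        fprintf(stderr, "connection failed, retrying in 5 seconds\n");
        sleep(5);
    }
    /* ...normal 5 second polling continues from here... */
    return 0;
}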

If the livepages for both Hobart and Ceduna are not updating, then it is probably because the livepage generating script on ares isn't running. To check this, log on to ares and run ps -ef | grep livepage. If there are no processes listed (apart from the grep itself), then (as root) issue the following two commands:

  1. export LD_LIBRARY_PATH=/usr/lib:/lib:/usr/local/lib
  2. /etc/init.d/livepages

The other function that the telmon_interface code provides is that of the antenna watchdogs. It was noticed that when the telescope is not actively tracking a source (ie it is idle) but the power to the drives is on and the brakes are off, it is possible for the telescope to drift into the software limits. At Mt Pleasant this is only an inconvenience, as the observatory is manned most of the time and is only a short 20 minute drive away. At Ceduna though, Bev must drive out to the telescope and manually drive it out of the limits, a process that takes at least an hour and requires Bev to stop whatever else she is doing. To prevent this from happening, we made a watchdog program that is integrated into the live page monitor code. This watchdog automatically parks the telescope if it is left in an idle, powered-up state for 5 minutes.
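
The watchdog logic is essentially a timer that accumulates while the telescope is idle but live, as in the sketch below. This is not the actual watchdog code: is_tracking(), drives_powered(), brakes_on() and park_telescope() are placeholders for the real drive PC status checks and park command.

/* Sketch of the idle-telescope watchdog (not the actual telmon_interface
 * code). The four helper functions are placeholders. */
#include <stdio.h>
#include <unistd.h>

static int is_tracking(void)     { return 0; }   /* placeholder */
static int drives_powered(void)  { return 1; }   /* placeholder */
static int brakes_on(void)       { return 0; }   /* placeholder */
static void park_telescope(void) { fprintf(stderr, "parking telescope\n"); }

int main(void)
{
    int idle_seconds = 0;

    for (;;) {
        if (!is_tracking() && drives_powered() && !brakes_on())
            idle_seconds += 5;           /* telescope is idle but powered up */
        else
            idle_seconds = 0;            /* activity seen: reset the timer */

        if (idle_seconds >= 300) {       /* idle and powered up for 5 minutes */
            park_telescope();
            idle_seconds = 0;
        }
        sleep(5);                        /* same 5 second cadence as the monitor */
    }
    return 0;
}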

Telescope images

The telescope images from Mt Pleasant come from the webcam pointing out of the window in the main control room. It has the IP address 131.217.63.195, and is set to upload its image via FTP every 10 seconds to ares (as observer). The images are put in the directory /home/observer/public_html/ho26_live_pictures and are labelled ho26_suffix.jpg, where suffix is an auto-incrementing number between 1 and 8640, so there should always be one day's worth of pictures in this directory. When the suffix gets up to 8640, it starts overwriting the existing files, starting with ho26_1. A whole day's images are kept in case one day someone wants to make a time-lapse movie of Mt Pleasant's activities (this has been done once before, and looks quite impressive).
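
The numbering works out because one image every 10 seconds gives 86400 / 10 = 8640 images per day. The small sketch below just illustrates the wrap-around naming; the camera itself handles the actual naming and upload.

/* Illustration of the suffix wrap-around: with a 0-based running count of
 * uploads, the suffix cycles 1..8640 and then overwrites ho26_1.jpg again. */
#include <stdio.h>

int main(void)
{
    long upload;
    char name[64];

    for (upload = 8638; upload < 8643; upload++) {
        int suffix = (int)(upload % 8640) + 1;    /* 1 .. 8640, then wraps */
        snprintf(name, sizeof(name), "ho26_%d.jpg", suffix);
        printf("%s\n", name);                     /* ...8639, 8640, 1, 2, 3 */
    }
    return 0;
}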

Tick-phase information

The tick-phase (the time between the GPS clock tick and the maser's pulse-per-second) is measured by an HP counter connected to the totally accurate clock (TAC) PC. Roughly the same setup exists at both Mt Pleasant and Ceduna. The TAC PC records the measured tick-phase every minute into a directory that is shared via SMB as //tac32ho/tic_logs (Mt Pleasant) and //tac32cd/tic_logs (Ceduna). These shares are mounted on ares under the mount points /mnt/tac32ho and /mnt/tac32cd. In each of these directories you should always find yesterday's complete tick-phase log, and today's tick-phase log as it is being written. The logs are called 11HOdaynumberT.csv, where daynumber is the 3-digit UT day-of-year number.
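
For reference, the path of today's Hobart log can be built from the UT day of year like this (a sketch only; the Ceduna logs presumably follow an analogous pattern, but check the share for the exact name):

/* Build the expected path of today's Hobart tick-phase log from the UT
 * day of year (tm_yday is 0-based, hence the +1). */
#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t now = time(NULL);
    struct tm *ut = gmtime(&now);     /* UT, not local time */
    char name[64];

    snprintf(name, sizeof(name), "/mnt/tac32ho/11HO%03dT.csv", ut->tm_yday + 1);
    printf("%s\n", name);
    return 0;
}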

Quite often (especially with Ceduna) these SMB shares may become stale, so there is a shell script /etc/cron.hourly/remount_tac that unmounts both shares and remounts them every hour (at 17 minutes past the hour, to be specific).

Another cron job is run by the user jstevens (Jamie Stevens) to extract the latest data from these tick-phase logs and make two small files with the latest tick-phases: /home/jstevens/public_html/hotick.html and /home/jstevens/public_html/cdtick.html. These files look like:
<td style="vertical-align: top;">Hobart</td>
<td style="vertical-align: top;">11.3914</td>
<td style="vertical-align: top;">2008 261/01:49:50</td>

It is like this so that the data it contains can be easily incorporated into the LBA Live monitoring pages that Chris Phillips maintains.
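
Writing one of these snippet files amounts to no more than the sketch below. How the latest value and timestamp are pulled out of the tick-phase CSV is not shown (the CSV layout isn't described on this page); the values used are the ones from the example above.

/* Sketch of writing the hotick.html snippet in the format shown above.
 * The three values are hard-coded from the example; the real cron job
 * extracts them from the latest tick-phase log. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/home/jstevens/public_html/hotick.html", "w");
    if (f == NULL)
        return 1;
    fprintf(f, "<td style=\"vertical-align: top;\">%s</td>\n", "Hobart");
    fprintf(f, "<td style=\"vertical-align: top;\">%s</td>\n", "11.3914");
    fprintf(f, "<td style=\"vertical-align: top;\">%s</td>\n", "2008 261/01:49:50");
    fclose(f);
    return 0;
}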

Baldor drive information

In the drive rooms of both Mt Pleasant and Ceduna there is a Baldor monitoring machine; at Mt Pleasant this is called hobaldor (131.217.63.162), and at Ceduna this is called cdbaldor (131.217.61.180). These machines are connected via serial to the four Baldor drive controllers that move the telescope. At Mt Pleasant, we use an RS485 connection, while at Ceduna we use four separate RS232 connections.

Each machine runs the program /usr/bin/baldor_serial_logger at startup, which is compiled from the source /home/observer/network_serial/baldor_serial_logger.c. This program reads instructions from the file /home/observer/baldor_commands.list. This file specifies the type of serial control to use, as well as the devices to control and the commands to be issued. An example of this file is given here:

interval=30
control-type=RS485
control-port=/dev/ttyS1
control-dev=A1
control-name=X1
control-dev=A2
control-name=X2
control-dev=A3
control-name=Y1
control-dev=A4
control-name=Y2
O
HL
IO

Each line of this file must contain either a keyword=value pair or a command to issue to the Baldors on a line by itself. The keywords are:

  • interval: the interval (in seconds) between receiving the output from the last Baldor command and issuing the first command again
  • control-type: the type of serial control to use: should be RS485 or RS232
  • control-port: the serial port to use for control. If using RS485 then only one port needs to be given. If using RS232 then one port is required per Baldor controller.
  • control-dev: the RS485 device address for a particular Baldor.
  • control-name: the name of the drive that the preceding control-dev (RS485) or control-port (RS232) refers to.

So every interval seconds, the commands specified in the baldor_commands.list file are sent in turn to each of the drives, and the output is sent to the file /home/observer/baldor_26m_lastfile (for Mt Pleasant) or /home/observer/baldor_30m_lastfile (for Ceduna). The program then runs the script /home/observer/baldor_serial/baldor_webpage.pl. This script takes the lastfile file, extracts the pertinent information, and puts it into the drives database on ares.
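
In outline, the logger boils down to reading the commands list and then looping forever, as in the sketch below. This is not the real baldor_serial_logger.c: send_baldor_command() stands in for the actual RS485/RS232 serial exchange, only a couple of the keywords are handled, and the Mt Pleasant lastfile name is hard-coded.

/* Rough sketch of the baldor_commands.list parsing and the poll loop
 * (not the real baldor_serial_logger.c). send_baldor_command() is a
 * placeholder for the serial I/O with one Baldor controller. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define MAX_DRIVES   8
#define MAX_COMMANDS 16

static void send_baldor_command(const char *drive, const char *cmd, FILE *out)
{
    fprintf(out, "%s %s: (reply would go here)\n", drive, cmd);  /* placeholder */
}

int main(void)
{
    char line[256], names[MAX_DRIVES][16], cmds[MAX_COMMANDS][16];
    int ndrives = 0, ncmds = 0, interval = 30, i, j;

    FILE *cfg = fopen("/home/observer/baldor_commands.list", "r");
    if (cfg == NULL)
        return 1;
    while (fgets(line, sizeof(line), cfg) != NULL) {
        line[strcspn(line, "\r\n")] = '\0';
        if (strncmp(line, "interval=", 9) == 0)
            interval = atoi(line + 9);
        else if (strncmp(line, "control-name=", 13) == 0 && ndrives < MAX_DRIVES)
            snprintf(names[ndrives++], sizeof(names[0]), "%s", line + 13);
        else if (line[0] != '\0' && strchr(line, '=') == NULL && ncmds < MAX_COMMANDS)
            snprintf(cmds[ncmds++], sizeof(cmds[0]), "%s", line);
        /* control-type, control-port and control-dev are ignored in this sketch */
    }
    fclose(cfg);

    for (;;) {
        FILE *out = fopen("/home/observer/baldor_26m_lastfile", "w");
        if (out != NULL) {
            for (i = 0; i < ndrives; i++)
                for (j = 0; j < ncmds; j++)
                    send_baldor_command(names[i], cmds[j], out);
            fclose(out);
            system("/home/observer/baldor_serial/baldor_webpage.pl");
        }
        sleep(interval);
    }
    return 0;
}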

Ceduna wireless network status

On the Ceduna livepage there is a health monitor for the wireless network that connects the telescope to an ADSL connection at Bev's house. The data for the health monitor is collected by attempting to load the web pages on each of the wireless access points and the ADSL router. If a web page can be loaded (by lynx) within 5 seconds of the request, the access point is considered alive; otherwise it is considered non-responsive. This data is output to the file /var/log/ceduna_network_status.log, along with the UT date and time that the tests started. Each test is appended onto the end of this file, so the file is an effective log of the status of the network over time.
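
One round of this check could look like the sketch below. The real check is presumably a shell script around lynx; this C version shells out to lynx via coreutils timeout(1) to enforce the 5 second limit (the availability of timeout is an assumption), and the addresses and log line format here are purely illustrative, not the real ones.

/* Sketch of one round of the access point check: try each address with a
 * 5 second limit and append the results, with the UT start time, to the
 * status log. Addresses and log format are illustrative only. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    const char *hosts[] = { "10.0.0.1", "10.0.0.2", "10.0.0.3" };  /* illustrative */
    char cmd[256], stamp[32];
    time_t now = time(NULL);
    FILE *log = fopen("/var/log/ceduna_network_status.log", "a");
    size_t i;

    if (log == NULL)
        return 1;
    strftime(stamp, sizeof(stamp), "%Y-%m-%d %H:%M:%S", gmtime(&now));  /* UT start */
    fprintf(log, "%s", stamp);
    for (i = 0; i < sizeof(hosts) / sizeof(hosts[0]); i++) {
        snprintf(cmd, sizeof(cmd),
                 "timeout 5 lynx -dump http://%s/ > /dev/null 2>&1", hosts[i]);
        fprintf(log, " %s=%s", hosts[i], system(cmd) == 0 ? "alive" : "down");
    }
    fprintf(log, "\n");
    fclose(log);
    return 0;
}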