This guide is intended to help you get started with measuring asteroid lightcurves. Photometry in general is an involved subject; you should try to familiarize yourself with some of its basic principles and techniques as you go along. It's not critical that you have a deep understanding of photometry before you get started, but the more you know, the easier your work will be and the better the results.
Different projects have different equipment requirements but, generally, you need a telescope or long focal length lens (preferably 0.5m or greater) that can track at sidereal rate, and a CCD camera. It is not necessary to worry about tracking on the asteroid itself save in unusual circumstances, such as an extremely fast-moving asteroid.
You should be able to track guided or unguided for up to three minutes for most projects. The usual exposure is a bit less. See below.
It is not required that you have software to reduce the images, though much of the fun in this work is derived from measuring and producing results on your own. You can work with another observer who does have the necessary software to measure the images you take. Sometimes this is actually preferable in a collaboration with others since all data will be measured and reduced using the same software and procedures.
(Written by Richard Miles)
The specific brand of CCD is more a matter of preference and reputation of the maker. In general, the three most important factors to consider are anti-blooming, pixel size and cooling.
Anti-blooming is a feature that keeps light from very bright stars from spilling over into adjacent pixels. This helps keep those stars from overpowering nearby stars. However, extra care must be taken when using a camera having anti-blooming in order to achieve accurate photometry.
One of the great benefits of CCDs is that they have nearly perfect linear response to light at any given wavelength, i.e., for a given color of light, doubling the intensity of the light doubles the value stored by the CCD at the appropriate pixel. This linearity holds until the amount of light causes the pixel to be saturated, i.e., completely filled by electrons. If the exposure time for a CCD frame is too long, some of the pixels in the images of stars, asteroids, etc., may become saturated or may exceed the dynamic range of the CCD output (typically 65535 units), in which case the accuracy of photometry will be compromised.
For cameras equipped with anti-blooming, when operated in unbinned mode, the response will no longer be linear above some fraction of the dynamic range, typically between 50% and 70%. For photometry, this means you must be very careful not to use any star for setting the reference magnitude or as a comparison that is beyond 50% saturation. Also, the pixels making up the image of the target itself must not exceed this limit. Note that where CCD cameras with anti-blooming are operated in binned mode, the response is usually linear across the entire dynamic range provided that the images of stars or target are not "undersampled" (see below).
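The linearity check above reduces to a simple test worth automating. This is only a sketch: the 50% threshold and the 16-bit full scale are assumptions you should replace with your camera's measured values.

```python
# Sketch: flag stars whose peak pixel exceeds the linear range of an
# anti-blooming CCD. Threshold and full scale are assumed values;
# check your camera's documentation or measure them yourself.
FULL_SCALE = 65535        # typical 16-bit ADC maximum
LINEAR_FRACTION = 0.50    # conservative limit for anti-blooming chips

def is_within_linear_range(peak_adu, full_scale=FULL_SCALE,
                           linear_fraction=LINEAR_FRACTION):
    """Return True if the star's brightest pixel is safely linear."""
    return peak_adu <= full_scale * linear_fraction

# A star peaking at 20,000 ADU is usable; one at 40,000 ADU is not.
```

Apply this check to the target and to every comparison star before accepting a measurement.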
A second issue of importance when choosing a CCD camera is matching the pixel size to the focal length of the system, which also affects the Field of View (see What Field of View Do I Need?). Often you'll hear a rule of 2 arcseconds per pixel. This is not the best rule. Instead, you should have a scale such that each pixel is about one-half the Full Width at Half Maximum (FWHM) for your average seeing. For example, say the average seeing at your location is 4-5 arcseconds. In this case, 2 arcsecond pixels would be acceptable, as it would take two pixels to cover the full image of the star. On the other hand, if you live where the seeing is down around 1-2 arcseconds, not only are you lucky, you need to use much smaller pixels, those that scale to about 0.5 to 1 arcseconds.
If you have pixels that are too small, you "oversample" the image and are less efficient as the light of the star is spread over a much larger number of pixels. This increases noise and, therefore, decreases the signal-to-noise ratio (SNR).
If you have pixels that are too large, you are "undersampling" the image. In this case, you are not getting a good statistical profile of the star and so the accuracy of both astrometry and photometry suffer.
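The sampling discussion above comes down to one calculation: the pixel scale in arcseconds per pixel is 206.265 times the pixel size in microns divided by the focal length in millimeters. A minimal sketch (the example numbers are hypothetical):

```python
def pixel_scale(pixel_size_um, focal_length_mm):
    """Arcseconds per pixel: 206265 arcsec/radian, units reconciled
    for microns and millimeters."""
    return 206.265 * pixel_size_um / focal_length_mm

# Example: 9-micron pixels on a 1000 mm focal length system give
# about 1.86 arcsec/pixel, roughly half of 4-arcsecond seeing.
```

Compare the result to half your typical FWHM to judge whether you are over- or undersampling.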
A third important characteristic of a CCD camera for astronomical applications is that the CCD chip should be cooled well below the ambient temperature of the environment to ensure that the inherent noise and dark current are kept to a minimum. Cooling to at least 30 degrees C below ambient is advisable. Some cameras are thermostatically controlled to ensure the CCD chip is kept at a preset temperature. This facility is useful but not essential for accurate photometry.
Other factors to consider are size of the array (see What Field of View Do I Need?), other imaging you may want to do, availability of filter systems, ability to run the camera at long distances (so the computer can be in the house or warm room and not by the telescope), and - of course - cost.
A simple answer is that the FOV is large enough to include a sufficient number of comparison stars of similar brightness and, preferably, color. However, this is not always possible. Anything less than 5-6 arcminutes will probably make getting those stars very difficult. A field of at least 10 arcminutes is preferred and the larger the better, up to a point! Too large a field can result in poor image quality or serious vignetting that may be difficult to remove with flat fielding. This is particularly true if using a focal reducer.
If you are restricted to a small field with the native focal length of your system and can't afford a larger CCD array camera, you should conduct tests with and without the reducer to see if a small FOV overly limits your ability to find comparison stars and the effects on photometric accuracy.
No, but they are certainly preferred, especially if you're collaborating with other observers or observing an asteroid over a long period of time (several weeks or months). When you use filters designed to match the CCD to one or more of the standard magnitude bands, e.g., the Johnson V and Cousins R, all of your observations can be directly matched to those of others who also reduce their observations to the same bands.
If your goal is to simply determine the period and amplitude of the lightcurve during a given month (the amplitude can change and, surprisingly, even the period), then unfiltered observations are acceptable, especially if you're working by yourself. Unfiltered observations can be combined with data based on a standard system. It's sometimes difficult but it can be done.
The disadvantage to filtered observations is that they reduce the amount of light reaching the detector. That means your limiting magnitude goes down and, in many cases, so does the SNR. That's not always the case for the latter. If you're observing towards the red end of the visible spectrum, the sky background, being bluish, is darker than if observing through a visual or blue filter. This means the sky contributes a smaller percentage of the overall signal reaching the detector and so the SNR can be slightly higher for a given saturation of a star.
Signal-to-Noise is a statistical term that defines the ratio between the useful signal (photons from the star or asteroid) versus the total signal received (the photons from the star, sky background, inherent noise in the chip, etc). The basic formula can be stated as

SNR = Signal / Noise

Or put another way:

SNR = (counts from the target alone) / (uncertainty in the total counts received)
The larger this number, the more signal (photons) from only the target or star. A "good" value is 100, which means that the noise is about 1% of the total signal. Translated to magnitudes, a SNR of 100 means that your measurements are of about 0.01m precision (not accuracy; there is a difference).
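The translation from SNR to magnitude precision quoted above can be computed directly with the standard relation sigma = 2.5 log10(1 + 1/SNR), which for large SNR is approximately 1.0857/SNR. A small sketch:

```python
import math

def mag_precision(snr):
    """Approximate 1-sigma magnitude uncertainty for a given SNR,
    using sigma = 2.5 * log10(1 + 1/SNR)."""
    return 2.5 * math.log10(1.0 + 1.0 / snr)

# SNR 100 -> about 0.011 mag; SNR 50 -> about 0.021 mag
```

This is why SNR 100 is the usual target for lightcurves with amplitudes of 0.1m or less.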
Computing an accurate SNR can be very complex and is beyond the scope of this paper. The software you use should be able to provide a reasonably good estimate of this value so that you can determine the quality of your data.
As mentioned above, by statistical definition, a SNR of 100 translates to about 0.01m precision. When the amplitude of the lightcurve is 0.1m or less, you can see that a value of 100 becomes fairly important, as you don't want the "scatter" to be a significant portion of the lightcurve. On the other hand, if the amplitude is larger, e.g., 0.2-0.5m, you can afford a slightly noisier signal if it means the difference between getting data or not.
Practical experience has shown that one can still get good results when the SNR drops to 50 and even a little below (implying a precision of about 0.02m). However, in those cases, you'll probably need to get more data so that any data analysis can better "average" the noisy data and so derive a period and amplitude. Such was the case with 2000 DP107 when several amateurs turned in what, under other circumstances, might have been marginal data at best. Yet, some of that data helped confirm events that showed that the asteroid was a binary.
In short, get the most SNR you can but don't use a hard-and-fast rule as to what will and won't work.
This goes back to getting the necessary SNR and depends on many factors: size of scope, type of CCD, whether or not you're using filters, the sky background, how fast the asteroid is moving, the quality of your dark and flat frames, and others.
Trying to consider all that can lead to total confusion, so use the guide that Arne Henden of the U.S. Naval Observatory published in the proceedings of the first Minor Planet Amateur-Professional Workshop:
Main Belt Objects, 2 minute exposure, 100 SNR
This presumes unfiltered observations with average CCD equipment. The two-minute limit comes from keeping the asteroid image from trailing too much during the exposure; elongated images are more difficult to measure accurately. You can extend the time if you catch the asteroid near its stationary point or work an object with a lower average motion, e.g., a TNO. Near Earth Objects present many challenges in addition to their rapid motion when near earth, one of them being able to calibrate data from a series of images that don't always use the same comparison stars. You shouldn't make an NEO your first lightcurve target, unless you want to concentrate on getting the images and then work with an experienced observer to reduce the data.
Some studies have shown that if you have 50 well placed data points, you can define the curve. However, that is pressing your luck. You should get as many data points as possible, but without overkill. A good rule of thumb for an "average" asteroid is to shoot at 1-2 minute intervals, i.e., pause this amount of time from the end of one exposure to the start of the next. If you determine that the asteroid is rotating fairly slowly, i.e., its period is greater than 8 hours, you can increase the delay time to 3 minutes. Unless you know the period is considerably longer than 12 hours or even 24 hours, you should pause no longer than 3 minutes. The more data points you have, the better any "noise" averages out.
On the other hand, if you know or suspect the asteroid has a very short rotation period, i.e., < 1 hour, you should decrease the pause time. In some cases, you may want to shoot almost as fast as your system allows. This is true not only because the asteroid may be a fast rotator but if it's rapidly moving against the sky, you want as many images with the given set of comparison stars as possible.
Yes, you can. However, it does take a bit of extra effort, which is made considerably easier if you're using filtered observations and reducing your observations to a standard magnitude band. See "How Do I Deal with Different Comparison Stars from Night to Night?" and "How Do I Account for Changing Earth-Sun Distances and Phase Angles?" below for additional information.
Image a Landolt field near the meridian (zenith if possible) as if you were doing a lightcurve study, but only for an hour or so. Then reduce the data picking one of the standard stars as the lightcurve target and the others as comparison stars. Since none of the Landolt standards are variable, any variation from one image to the next is the result of systematic and random errors. By examining the magnitudes of the different stars and how they behave relative to one another, you can estimate the noise levels at various magnitudes.
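A sketch of the reduction step described above: since the Landolt standards are not variable, the scatter of each star's measured magnitudes across the series estimates your noise at that brightness. The magnitude series below is made up for illustration.

```python
def scatter(mags):
    """Sample standard deviation of one star's measured magnitudes
    across a series of images (an estimate of measurement noise)."""
    n = len(mags)
    mean = sum(mags) / n
    return (sum((m - mean) ** 2 for m in mags) / (n - 1)) ** 0.5

# One (hypothetical) standard star measured on five images:
series = [12.31, 12.29, 12.30, 12.32, 12.28]
# scatter(series) -> about 0.016 mag at this brightness
```

Repeating this for standards of different brightness maps your precision as a function of magnitude.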
In simple terms, seeing is the total effect on a star's image caused by its light passing through the atmosphere. Before a star's light reaches your telescope, it passes through layers that may not bend light exactly the same and so the position of the star changes very slightly. The net effect of these variations is to expand (blur) the image beyond what it would be if only the telescope optics were involved. Since there may also be changes in the total air mass (effective path length) over short periods of time, the intensity of the star can also change slightly in step with those changes. If you want a detailed analysis of the nature and causes of seeing, see "High Speed Photometry" by Brian Warner.
In quantitative terms, seeing is often expressed in arcseconds, meaning the size of images at Full Width Half Maximum, i.e., the width of the star's profile at a height one-half the maximum value. At the most elite sites in the world, seeing is often at 0.5 arcseconds. Mere mortals (amateurs for the most part) are fortunate if they can get 2-3 arcsecond seeing.
In poor seeing, the light from the star is spread out over more pixels and the profile of the star is no longer symmetrical. Too, as mentioned in the discussion on determining the best pixel size, since the star covers more pixels, the noise due to dark current becomes larger. All of these factors combine to decrease the precision and accuracy of your measurements. At some point, when the seeing turns stars into "fuzzballs", it's best to find something else to do.
The size of the seeing disk comes into play when choosing the size of the measuring aperture. Once again, we turn to the recognized expert, Arne Henden of USNO, for advice. He recommends using an aperture that is 4-5x the seeing disk. So, if your seeing is 2 arcseconds, use an aperture of about 10 arcseconds. If your pixel scale is about 2"/pixel, that would mean an aperture of five pixels in diameter (or 5x5 if using a square aperture).
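Henden's rule of thumb is easy to code. A minimal sketch, with the 4-5x multiplier as the only tunable:

```python
def aperture_diameter_pixels(seeing_arcsec, pixel_scale_arcsec,
                             multiplier=5.0):
    """Measuring-aperture diameter in pixels, using the rule of thumb
    that the aperture should be about 4-5x the seeing disk."""
    return (seeing_arcsec * multiplier) / pixel_scale_arcsec

# 2" seeing at 2"/pixel -> a 5-pixel aperture
```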
A bias frame is an instantaneous image of the inherent noise and variations of the CCD chip in your camera. In theory, you should take a 0-length exposure to obtain a bias frame so there is no build up of noise from a longer exposure. However, many CCD cameras and the drivers that run them do not allow 0-length exposures and so you should take the shortest possible image.
Actually, you take several of these images and then median combine them to form a "master" bias frame.
The reason you use a bias frame is to eliminate part of the overall noise that contributes to an actual image. When the bias frame (and dark frame) are subtracted from the image, then what remains is due almost exclusively to actual data from the stars, target, and sky background.
Dark frames are not the same as bias frames, though they are used in the same general manner, in that the pixel values in the dark frame are subtracted from the pixel values in the image. The difference is that dark frames are usually taken at the same temperature and of the same duration as the image. At the very least, they must be at the same temperature (see below for a discussion about scaling dark frames). For example, if you plan to take a 1-minute image with the chip at a given temperature, you would use a 1-minute dark frame taken at that same temperature.
To create a dark frame, you take several exposures of the appropriate duration and temperature but do not open the shutter. Most camera control programs allow taking dark frames.
Once you have several dark frames, you should create a master dark. Do this by taking several dark frames (9 is a good number), subtract the master bias frame from each, and then use the bias-corrected darks in a median combine to create the final master.
There is a good reason for subtracting out the bias frame before doing the median combine: doing so allows you, if your software supports it, to scale a dark frame taken at one exposure length to work properly for another. This is possible because the bias-subtracted dark current scales linearly with exposure time. This can be a great time-saver. Say you're planning 10-minute exposures. If you took nine 10-minute darks for the master, you'd use more than 90 minutes of observing time. If, however, you took nine 1-minute exposures, you could scale the master dark frame by 10 and have nearly the same result as if you had taken the nine 10-minute darks.
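The master-dark recipe above can be sketched with NumPy, assuming your frames are already loaded as arrays (frame names here are illustrative):

```python
import numpy as np

def master_dark(dark_frames, master_bias):
    """Median-combine bias-subtracted darks into a master dark."""
    stack = np.stack([d - master_bias for d in dark_frames])
    return np.median(stack, axis=0)

def scaled_dark(master, from_exposure_s, to_exposure_s):
    """Scale a bias-subtracted master dark to a different exposure
    time; valid because dark current grows linearly with time."""
    return master * (to_exposure_s / from_exposure_s)
```

For example, a master built from 60-second darks scales by 10 to calibrate a 600-second image.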
Not all pixels are created equal. Some are more sensitive, others less sensitive than the average pixel. The purpose of the flat field is to account for these variations. This is accomplished in software by multiplying the value for each pixel in the image by the value for the same pixel in the flat field, such that with a perfectly evenly illuminated field, all pixels would have the same value. All the values in the flat field are usually "normalized" before this, meaning that the "average" pixel might have a value of 1.00 while those less sensitive would have values > 1 and those more sensitive would have values < 1.
Flat fields are one of the most difficult things about good photometry, but without them one can't do good photometry. If you have an exceptionally good chip, you might be able to achieve 0.05m precision without a flat field. Some asteroid lightcurves have amplitudes at or below that level.
There are several general approaches to obtaining flat fields. Each has its own merits and demerits but all share the common concept of shooting a series of images of an evenly illuminated source. The individual images are each bias and dark frame corrected and then merged using a median combine into a single master flat.
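The same approach extends to flats. This sketch follows the normalization convention described above (the correction is multiplied into the image, so less sensitive pixels get values above 1); it assumes NumPy arrays and illustrative frame names:

```python
import numpy as np

def master_flat_correction(flat_frames, master_bias, master_dark):
    """Median-combine calibrated flats, then invert and normalize so
    the result is multiplied into images: less sensitive pixels get
    values > 1, more sensitive pixels values < 1."""
    stack = np.stack([f - master_bias - master_dark for f in flat_frames])
    combined = np.median(stack, axis=0)
    return combined.mean() / combined

def calibrate(raw, master_bias, master_dark, flat_correction):
    """Full calibration: subtract bias and dark, then flatten."""
    return (raw - master_bias - master_dark) * flat_correction
```

After calibration, a uniformly illuminated scene should come out with nearly uniform pixel values.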
What's important to understand about flat fields is that they represent the response of the entire system, meaning the telescope and camera. Response can be measured on different scales, e.g., from the overall sensitivity of the chip using a given filter down to dust on the glass cover of the chip affecting the sensitivity of a few pixels. Ever wonder what those faint "donuts" on your images were? Those are shadows of dust particles in the system. The smaller the donut, the closer the dust particle is to the chip. It won't do to have the target sitting in the donut hole and the comparison on the donut. The flat field helps eliminate this effect (of course, so does keeping the system clean, though never perfectly).
Another important point is that the system must be near the same focus as for when you image, usually infinity. If not, then the dust particles will produce different sized donuts than at focus and so affect the accuracy of your results.
Therefore, once you have a master flat, you shouldn't move the camera or change anything about the system. If you do, you'll have to repeat the process. The good news is that once you have a good master flat, you can usually use it for a few days if not weeks. Don't go too long. Those dust particles do accumulate.
The problem with this method is that the window of opportunity, i.e., the time between when the sky is dark enough and before too many stars appear in the image, can be very short, only a few minutes. If you're working with filters, that leaves very little time to get things done. One solution that some use to get around this problem is to put a piece of uniform frosted plastic in front of the system. This extends the available time because no stars will be imaged. The trick is getting a uniform piece of plastic. Many find that the diffuse plastic used by sign companies for back-illuminated signs or the plastic used for artists' tracing boxes works quite well.
One reason professionals prefer this method is that they can better control the intensity and color of the light used for the flat fields. Color does have a slight effect on the results for flat fields but, for most purposes, not enough to demand that this method be used.
A full discussion of differential photometry can take one or more chapters in a book so this discussion can only be a general overview.
Differential photometry means that one measures the difference between a comparison (or average of several comparisons) and the target. The result is sometimes called the "Delta Magnitude." Differential photometry is the easier of the two main methods (the other being All-Sky) and provides the most accuracy when measuring small variations. That's particularly important when the amplitude of some asteroid lightcurves is under 0.1m.
With a modest CCD field of view, the process becomes very simple and very effective, as the comparisons are often within the field with the target at all times. This means that all of the stars and the target have very similar air masses and so extinction effects all but cancel out, provided the stars and the target are similar in color. If you use comparisons that are distinctly different in color, e.g., blue, while working a red target (asteroids tend towards the red), you may not be able to eliminate extinction calculations entirely, especially if you are working at low altitudes. This is one reason you should avoid going below 30° altitude.
You should use at least two comparison stars. The second star, the "check star", is used just in case the primary comparison is variable. Since the CCD field likely has a goodly number of stars, don't hesitate to use more, taking the average of the individual magnitudes as the single comparison value. This helps smooth out minor errors when measuring each star. Four or five is a good number as a compromise between averaging out minor differences in the measurement and creating additional work. Of course, if the software allows automatically measuring the stars, using a larger number won't hurt.
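A minimal sketch of the differential measurement using the simple magnitude average described above (a more rigorous treatment would average fluxes rather than magnitudes; the example values are hypothetical instrumental magnitudes):

```python
def differential_mag(target_mag, comparison_mags):
    """Delta magnitude of the target against the mean of several
    comparison-star instrumental magnitudes (simple average)."""
    comp_mean = sum(comparison_mags) / len(comparison_mags)
    return target_mag - comp_mean

# Example: target at -7.80, comparisons at -8.10, -8.00, -8.20
# -> delta magnitude of +0.30 (target fainter than the ensemble)
```

Plotting this delta against time, with the check star's delta alongside, is the core of a lightcurve run.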
Be careful when having the software measure images without reviewing what the software is doing. Trailed images and targets merged with field stars can make for some interesting results that must be rechecked "manually."
Another advantage of differential photometry is that it does not require reducing the values to a standard magnitude system. The disadvantage is that the magnitude differences will not be the same as those you would get on a standard system. Too, you can state values only as differences, not as absolute magnitudes. However, there are ways to convert both unfiltered and filtered differential magnitudes to a standard system. Those are discussed below.
This is the technique used more often by the professionals, primarily because they have facilities located in places where the nighttime transparency is excellent and constant. Those living in humid, low-altitude locations almost without exception cannot use this method.
In All-Sky photometry, several stars of well-known catalog magnitudes (stars in a catalog of standard stars) are measured in widely varying locations around the sky. The measurements of these stars can be used to determine both the extinction values for the evening and the transformation values. These measurements are made not just at the beginning of the evening but at least once or twice during the run to assure conditions did not change dramatically and to provide more data points for the solution.
With the transform and extinction values, the measurements of the target and comparison stars can be reduced to absolute magnitudes on a standard system.
Literally, Air Mass is a measurement of the amount (mass) of air through which you look to see a given star. When the star is directly overhead, you're seeing it along the shortest possible path through the earth's atmosphere. When you look at a star near the horizon, you're looking through a much longer path. Since starlight is dimmed by the atmosphere, the longer the path, the more the light is dimmed. Also, as you probably know, not all colors of light are affected the same. Blue light is absorbed much more than red light by the atmosphere. That's why we have red sunsets.
There are several formulae for determining the value of the Air Mass. The simplest is:

X = 1 / cos(z) = sec(z)

where z is the so-called "zenith distance", the angular distance of the object from the overhead point. In other terms:

z = 90° - altitude
When a star is directly overhead, z = 0° (altitude = 90°), which gives a value of X = 1.00. When a star is 30° above the horizon (z = 60°), the Air Mass = 2.00 (1 / cos(60°) = 1 / 0.5).
30° is a somewhat standard "barrier." To borrow from the ancient mapmakers, below that altitude "there be dragons." The Air Mass changes rapidly below 30° altitude (60° zenith distance) and is highly subject to humidity, barometric pressure, low clouds, high thin cirrus, and pollution. If at all possible, you should confine observations to 30° altitude and higher.
The formula given above is a good approximation down to about 30° altitude but it should not be used for more critical reductions and certainly not if observations are below this altitude. The most common formula in use was developed from Bemporad's work in the early 20th century (the polynomial form quoted here is the widely used Hardie version):

X = sec(z) - 0.0018167(sec(z) - 1) - 0.002875(sec(z) - 1)^2 - 0.0008083(sec(z) - 1)^3
In this case, z is the apparent zenith distance, not the true distance. Keep in mind this formula is based on observations made almost 100 years ago; things have changed considerably since then. Even so, the formula is probably good to 0.001 Air Mass, more than sufficient for all but the most critical work.
To calculate the zenith distance for a star at a given position in the sky, use the formula:

cos(z) = sin(lat) × sin(dec) + cos(lat) × cos(dec) × cos(HA)

where lat is the observer's latitude, dec is the star's declination, and HA is the star's hour angle.
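The zenith-distance relation (cos z from latitude, declination, and hour angle) and the simple air-mass approximation can be combined into a short calculation. This is the plane-parallel approximation only, so trust it above about 30° altitude:

```python
import math

def zenith_distance_deg(lat_deg, dec_deg, ha_deg):
    """Zenith distance from cos(z) = sin(lat)sin(dec)
    + cos(lat)cos(dec)cos(HA), all angles in degrees."""
    lat, dec, ha = (math.radians(v) for v in (lat_deg, dec_deg, ha_deg))
    cz = (math.sin(lat) * math.sin(dec)
          + math.cos(lat) * math.cos(dec) * math.cos(ha))
    return math.degrees(math.acos(max(-1.0, min(1.0, cz))))  # clamp rounding

def air_mass(z_deg):
    """Plane-parallel approximation X = 1/cos(z)."""
    return 1.0 / math.cos(math.radians(z_deg))
```

For critical work below 30° altitude, substitute the Bemporad-style polynomial for the simple secant.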
When you measure the brightness of a star (or asteroid), you obtain what is called a "raw instrumental magnitude." The value is determined by the formula:

m = -2.5 × log10(I)

where I is the intensity, i.e., the count of photons (converted to electrons) received from the object.
For CCD images, this translates into the total of all the pixel values from the star times the ADU conversion factor. Chip/camera manufacturers often state this as e-/ADU; a common value is 2.3.
Note that the TotalCount is not just the sum of the pixel values within a measuring aperture but is that actually contributed by the star, i.e., the computed background sky has been subtracted out. Note, too, that the fainter a star, the closer the instrumental magnitude gets to zero, and the brighter the star, the more negative the instrumental magnitude. For example, if I = 1, m = 0; if I = 1000000 (1 million), m = -15.00. This is in keeping with the tradition of the stellar magnitude system where brighter stars have lower values than fainter stars. In the case of instrumental magnitudes, all objects have negative values.
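The worked numbers above can be verified directly with a one-line implementation of the instrumental magnitude formula:

```python
import math

def instrumental_mag(total_counts):
    """Raw instrumental magnitude: m = -2.5 * log10(I)."""
    return -2.5 * math.log10(total_counts)

# I = 1 -> m = 0.00; I = 1,000,000 -> m = -15.00
```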
If working only differential photometry, you could almost stop here (assuming your comparisons and target were similar in color). However, if you want to compare your results directly with those from other observers or to compare values over a large range of time, you should convert the magnitudes to a standard system.
There are several standard systems. The most famous is probably the Johnson UBV system developed in the 1950s. The original system has been expanded to include the R (Cousins) and I regions. There are other standard systems but the Johnson B and V and Cousins R bands are the most commonly used among amateurs. Too, the phase coefficients for asteroid magnitudes (absolute magnitude, H, and slope, G) are in the Johnson V system.
To reduce to a standard system involves removing the effects of looking through the atmosphere (extinction) and adjusting for the systematic differences between your filter/CCD system's response to given colors versus the equipment that defined the system.
Extinction is a measurement of the attenuation (dimming) of starlight caused by passing through the earth's atmosphere. It is usually expressed in units of magnitudes/Air Mass, meaning the larger the air mass, the more the extinction.
The value for extinction is not the same for all colors. Since red light is absorbed less by the earth's atmosphere than blue light, the value for extinction is lower for red light than blue. The value for visual (green) is somewhere in between.
"Extinction" is often preceded by "first order" and "second order." First order is the type of extinction just described, i.e., the value for a fixed or narrow range of colors. Second order extinction is dependent on the color of the object being measured. As the object moves towards the horizon from its highest point in the sky, the blue portion of its light is dimmed more rapidly than the red portion. Therefore, to correctly compute the true exoatmospheric magnitude of the star (the magnitude as seen outside the earth's atmosphere), one must also determine the second order extinction.
This, too, is measured in terms of magnitudes/Air Mass but this time the magnitude is the difference of magnitude as measured in different colored filters. By far, the most common second order values will be based on the known Blue-Visual (B-V) magnitude of the object. Visual-Red (V-R) is sometimes used but the value is usually so small that excluding it from calculations does not have a significant impact save when doing very critical work (millimags or low altitudes).
However, you should not assume this is the case since your filter/CCD combination for a given color may not match exactly that of the standard system. For example, if your R filter combination is slightly towards the blue side of the standard system, you have a small color correction to make. See "How Do I Determine Extinction?"
There are several ways to determine the extinction values for a given night. It's important to remember that you must determine the extinction values for each night. Too, you must also determine the "nightly zero points" (to be described later). Both of these values change whenever the transparency of the night sky changes. There may be a thin haze tonight but not tomorrow, or a volcanic eruption may have filled the atmosphere with fine dust (there were noticeable effects around the world from the Mt. St. Helens eruption for months after the actual event).
There are two principal methods for determining the extinction values for each filter (or unfiltered).
To convert raw instrumental magnitudes to a standard system requires that you determine the values (transforms) to apply to the instrumental magnitude for a given star so that the derived standard magnitude for the star matches its catalog value. This is independent of atmospheric extinction and is due to different response to light of a specific color between your system and the one that defined the standard system. For example, your system may be more sensitive overall but it may also be more sensitive towards the blue region when using a certain filter.
Part of the solution for the transforms is the zero point (the "intercept" when doing a linear least squares solution). If atmospheric conditions never changed and your system's response never changed, the zero point could be determined once and for all. However, the sky is not always the same transparency and, because of varying degrees of dust and pollution, not always consistent for all colors. If your mirrors are recoated, the system's response is different.
Therefore, you should compute the zero points for the system each night. However, it is not necessary to recompute the transform values every night. Unless you've made an actual change in the system (recoated the mirrors, changed filters, etc.), the transform values are usually constant enough to be used for days, if not weeks, at a time. You should confirm this assumption. Even some of the better professional sites have significant changes in the transform values on a seasonal basis.
If you're going to convert your measurements to a standard system, you need to use stars that have been carefully measured on that system. The stars in the original Johnson UBV are usually too bright and too far apart to do much good for the CCD imager. Far and away, the most common catalog is the one developed by Arlo Landolt. This is a series of fields near the celestial equator with well-known magnitudes in at least the B, V, and often R bands. Being on the equator makes them readily available to observers in either hemisphere. The downside is that they are often well removed from fields containing asteroids, underscoring the care you must take when determining the extinction and transform values you'll use to convert raw instrumental magnitudes into standard magnitudes.
Be sure to read the LANDOLT.NOTES file. Not all the Landolt fields are of the highest quality; some stars were measured only three times or fewer and should be avoided.
At the Lowell ftp site, you'll also find the LONEOS catalog prepared by Brian Skiff. This is a good catalog and includes most of the better Landolt stars. However, other than those stars that are marked as Landolt standards, do not use stars from the LONEOS catalog to determine the transformation values. The systematic error in the LONEOS catalog is on the order of 0.05m, which is good for many purposes but not good enough for determining transformation values.
The good news here is that you can often use the same images you used to determine the extinction values (assuming you used standard fields). This saves you time by not having to shoot different fields. The bad news is that there seem to be as many ways to use the data as there are people doing the calculations. Usually, the differences are subtle. Arne Henden's book has some excellent worked examples, but those tend to be based on methods that apply to sites where a photometric night is not uncommon.
Click here for a worked example using the Hardie method, where standard fields at disparate air masses are imaged as close in time as possible.
Once you have the extinction, transformation, and nightly zero point values, converting instrumental magnitudes into standard magnitudes is a matter of plugging the values into a few formulae. See the example in the section immediately above to see how to "put it all together."
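One common form of "putting it all together" is the single-color transformation equation V = v - k'X + Tv(B-V) + ZP. All numeric values below are hypothetical; a sketch, not the guide's specific worked example:

```python
# Sketch: converting one instrumental V-band measurement to a standard
# magnitude, assuming the model V = v - k' * X + Tv * (B - V) + ZP.
# All numbers are hypothetical illustration values.
def to_standard(v_inst, airmass, color, k_prime, Tv, ZP):
    """v_inst: instrumental magnitude; airmass: X at mid-exposure;
    color: (B-V) of the star; k_prime: extinction in mag/airmass;
    Tv: color transform; ZP: nightly zero point."""
    return v_inst - k_prime * airmass + Tv * color + ZP

V = to_standard(v_inst=-6.10, airmass=1.35, color=0.55,
                k_prime=0.25, Tv=0.04, ZP=18.20)
print(f"standard V = {V:.3f}")
```

Note that the extinction term uses the airmass of each individual image, while the transform and zero point are per-system and per-night values.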
Asteroids move. For the most part, they do not move fast enough to keep you from sitting on a given field all night with the asteroid well within the frame. On the other hand, some are moving so quickly that the same field works for only a few images. If you're doing differential photometry, this means you need to determine a common comparison star average magnitude that can be applied to all measurements.
If your observations are filtered, then the solution becomes much easier, since you can reduce all your data to standard magnitudes. That way, you can put all your observations into the solution with a minimum of difficulty (there are still some considerations; see "How Do I Account for Changing Distances and Phase Angles?"). You'll also be able to include observations from other observers who are reducing their observations to the same standard magnitude.
If you're not reducing to standard magnitudes, there are still ways to merge the data from night to night. Consider the data from each night (or common field, if you are working a fast-moving target) to be a "session." In each session, you use the same set of comparison stars to determine an average value for each observation that is used to determine the differential value. Taking the average of the averages gives you an arbitrary baseline value, or "zero point."
Once you have two or more sessions, you can try to merge the data by keeping one zero point constant and adjusting the zero points of the other sessions until the data from all sessions agree. This is usually done by matching the peaks of each session to one another. At this point, you have a close approximation of a common zero point for all observations. Each session will have its own offset to its zero point that brings it into line with the others.
In theory, the offset is the difference between the average values of any two zero points. However, keep in mind that the zero point is based on an average of averages. Through the course of the night, the average value for each observation changes due to extinction, being fainter near the horizon than near the meridian. If all sessions worked both sides of the meridian equally, the theory would be close to the truth. However, if one session worked only one side of the meridian (e.g., clouds prevented observations in the second half of the night), then the zero point for that night would be skewed. In short, don't be concerned that the offset you find is not exactly what you expect.
When you start to do data analysis, plotting the data against a fixed period (a phase plot), you'll often see that the data for a given session needs to be moved up or down slightly to better merge with the other data. Do this for each session, keeping one fixed (usually the first). Eventually, all the data merges to a good approximation, sometimes almost perfectly.
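The session-merging idea can be sketched in a few lines. This is a toy example with synthetic data: two sessions record the same portion of the curve, but session 2 was measured against a different comparison-star average, so it sits 0.35 mag away from session 1 until its zero point is shifted.

```python
# Sketch: merging two differential-photometry sessions by shifting one
# session's zero point until overlapping data agree. Data are synthetic;
# in practice the offset is refined by eye or least squares on the phase plot.
import numpy as np

phase = np.array([0.10, 0.20, 0.30, 0.40])       # shared lightcurve phases
session1 = np.array([0.05, 0.12, 0.08, -0.02])   # reference session
session2 = session1 + 0.35                       # same curve, offset zero point

# Estimate the offset from the mean difference over the shared phases,
# then bring session 2 onto session 1's zero point.
offset = np.mean(session2 - session1)
session2_merged = session2 - offset
print(f"derived offset = {offset:.2f} mag")
```

Real sessions rarely overlap this cleanly, which is why the offsets usually need small manual adjustments during the phase-plot analysis described above.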
When the only work involved is determining the period and amplitude of the curve from data covering a relatively short time, a few weeks or months, using arbitrary zero points is acceptable. However, if the average magnitude of each obtained curve is to be used to determine the absolute magnitude and slope parameter, then reducing to a standard magnitude (Johnson V) becomes mandatory. Standard magnitudes also become very important if keeping track of asteroids over long times for other reasons, e.g., pole determinations and slow rotators. In these cases, you must also take into account the changing phase angle of the asteroid and its changing distances from the earth and sun.
As the asteroid and earth move in their orbits, the distance between the two bodies changes, as does each body's distance from the sun. Also, the "phase angle" of the asteroid changes. The phase angle is the angle between the sun and earth as seen from the asteroid. At opposition, when the asteroid is opposite the sun in the sky, the phase angle is 0° or nearly so (it's rarely exactly 0°, as that would imply that the centers of the three bodies are on the same line).
The changing distances affect two critical factors: the asteroid's brightness and the timing of the observations. All other factors being equal, the brightness of the asteroid is going to change as it moves closer to or farther from the earth (assuming a fixed solar distance). Whether the asteroid is getting closer to or farther from the sun dictates whether the asteroid actually dims or brightens and by how much. All this affects the reduction of observations, especially if you're reducing the data to a standard magnitude. For example, if the asteroid is getting dimmer, you'll see a slow decline in the average brightness over time. This effect must be removed from the reduced data before attempting to determine a period.
Usually, this is done by computing the predicted magnitude of the object, using the magnitude for the first set of data (a "session," usually one night's worth of data) as the base value. All other data sets are then adjusted by the difference between the predicted value for the given session and the base value. Another approach is to use the formula
-5 * log10(SunDist * EarthDist)    (distances in AU)
This yields a magnitude that accounts for changing distances but is independent of the H and G values used to predict the asteroid's magnitude. The value for the first session is still used as the base.
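A short sketch of this distance correction, using hypothetical distances for two sessions (asteroid slightly receding from both the sun and the earth between them):

```python
# Sketch: the distance term -5 * log10(SunDist * EarthDist), distances in AU.
# Later sessions are adjusted by the difference between their term and the
# first session's term, removing the brightness trend due to changing
# distances. The distances below are hypothetical.
import math

def distance_term(sun_dist_au, earth_dist_au):
    return -5.0 * math.log10(sun_dist_au * earth_dist_au)

base = distance_term(2.10, 1.15)    # first session (the base value)
later = distance_term(2.14, 1.25)   # a later session, asteroid receding

# Add this to the later session's magnitudes: it is negative here,
# brightening the session to compensate for the extra distance.
adjustment = later - base
print(f"adjustment = {adjustment:+.3f} mag")
```

Because the asteroid moved farther away, its raw magnitudes grew fainter; the negative adjustment removes that trend so only the rotational variation remains.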
For most asteroids, it is not necessary to apply a magnitude correction per observation, i.e., an average value for all data in a given session can be used. This is not necessarily the case for an asteroid making a close pass by the earth, where the change in distance over a few hours might change the derived magnitude by several tenths of a magnitude, or even more than one full magnitude.
As an asteroid approaches or moves past opposition, at phase angles greater than about 7°, the change in magnitude with phase angle is approximately linear. At phase angles less than about 7°, an asteroid will brighten faster than pure geometry allows. This is known as the "opposition effect." The G value you see in asteroid element sets is used to predict the degree of extra brightening due to the opposition effect. Again, one can use the predicted magnitude of the asteroid based on the H and G values to determine the overall difference between the predicted magnitude for one session and the base session.
The problem with this is that the G value is not well known for very many asteroids. In fact, it's by carefully measuring lightcurves for several months, on both sides of and near opposition, that one determines the G value. This is a bit of a Catch-22. When the G value is not known, a default value of 0.15 is assigned. If this value is in error by even a small amount, it could affect your reductions. Alan Harris of JPL has compiled a detailed table of phase correction values; however, it is based on assumptions, too, and so may not apply in all cases. The "best" solution is to work with the assumption that the default value of 0.15 is correct and make it your goal to help determine the true value.
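For reference, the standard (H, G) phase function can be sketched as follows. The constants (3.33, 0.63, 1.87, 1.22) are those of the widely used two-term approximation of the H-G system; H and alpha values below are hypothetical.

```python
# Sketch: the standard (H, G) phase model used to predict an asteroid's
# magnitude as a function of phase angle, including the nonlinear
# brightening ("opposition effect") at small angles.
import math

def hg_phase_magnitude(H, G, alpha_deg):
    """Phase-dependent predicted magnitude at 1 AU from both sun and earth.
    H: absolute magnitude; G: slope parameter; alpha_deg: phase angle."""
    t = math.tan(math.radians(alpha_deg) / 2.0)
    phi1 = math.exp(-3.33 * t ** 0.63)
    phi2 = math.exp(-1.87 * t ** 1.22)
    return H - 2.5 * math.log10((1.0 - G) * phi1 + G * phi2)

# With the default G = 0.15, the asteroid is predicted to be brighter at
# small phase angles (hypothetical H = 12.0):
m2 = hg_phase_magnitude(H=12.0, G=0.15, alpha_deg=2.0)
m10 = hg_phase_magnitude(H=12.0, G=0.15, alpha_deg=10.0)
```

At a phase angle of exactly 0° the function returns H itself, and the distance term from the previous section is added separately to get the full predicted magnitude.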
Given that there are more than 100,000 known asteroids, you have a pretty wide selection! Of course, most of those are very small or distant and so out of reach of most amateur equipment. A large part of your decision should be based on the amount of time and effort you want to commit and the equipment you have.
If you don't have filters or can't or don't want to spend a large part of your observing time on asteroid lightcurves, then period determinations are your best bet. Be forewarned, however: most of the brighter, faster rotators have been worked, so if you're looking to cover only new ground, you should be prepared for a few (many) disappointments. Sometimes you'll pick an asteroid and find after a night's run that the period appears to be fairly long (> 12 hours) or that it has a very low amplitude, which will make it difficult to find a period with data from only a few nights. If you have a larger instrument (40cm and up), you'll have a better shot, since you can go deeper by a fair amount than those who have the more typical 20-25cm scope, thus giving you more targets. That's still no guarantee that you'll pick an asteroid with a period that's less than each night's run.
If you're willing to work asteroids that already have periods established to a fair degree, then your opportunities are vastly improved. Very few asteroids have been so thoroughly studied as to have their pole orientations determined. Making this determination requires getting lightcurves over several months around an opposition and at two or more oppositions. Reducing the data to a standard magnitude makes the process easier, but it is not absolutely mandatory.
If you are able to make filtered observations and reduce to a standard magnitude but still want to keep your observing time open for other targets, consider measuring the lightcurve of a given target once a month for a few months on either side of opposition, with some extra work near opposition. By plotting the corrected average magnitude, you can determine the true H and G values (absolute magnitude and phase coefficient, or slope parameter).
Taking even less time, you can do the photometry version of a "one-night stand" by providing a well-reduced set of data that can be put into a database (or that you make available) so other observers can add it to theirs for detailed analysis.
Of course, if you're entirely devoted to asteroid lightcurves and can do filtered observations, you can do all of the above and more. Slow rotators require observations that are not in great detail each night, and perhaps not even every night. There's no point getting 100 images of an asteroid each night when its rotation period is on the order of days or weeks. Of course, the problem is: how do you know which asteroids are slow rotators? Good question. A good start would be those numbered asteroids from 1 to 1000 that don't have established periods. Very possibly, some previous work indicated that there was no quick solution and the photometrist moved on to something that provided more immediate results.
Regardless of how much time you're willing to devote, you may have trouble getting a good lightcurve for some targets. The reason: the period is long or has an integral multiple very near 24 hours. Why is the latter a problem? It's called "aliasing." Say the period is just a few minutes (or seconds) over 8 hours and that the longest run you can make is 6-8 hours. Say also that you start observing at the same time each night. This means that you're observing almost exactly the same portion of the curve every night and that you never quite get 100% of the curve. You'll have a hard time getting the period locked down without following the asteroid for several days, if not weeks. If the period is close to 24 hours, the task gets much harder. What's the solution? Collaborating with other observers who are located at least three to four hours in longitude from your location. For example, if you're on the East Coast of the U.S., set up a collaboration with someone in Europe and, if possible, Hawaii or the Pacific Rim. Variable star observers have been doing this for years. The problem is that the pool of active asteroid photometrists is nowhere near as large as for variable stars.
Collaborative Asteroid Lightcurve Link (CALL)
This site provides lists of numbered asteroids that are reaching opposition in a given three-month period. All of the targets on the list have either unknown or poorly known lightcurve parameters. Choose one that is bright enough and accessible for your system and location.
The CALL site also has a "reservation" system where you can let others know you are working a given asteroid, need or want help to work an asteroid, or ask if a given asteroid is being worked. There is nothing "official" about this system, meaning anyone is free to work any asteroid. However, coordinating and not duplicating efforts allows the greatest amount of new work to be done.
Also on the CALL site is a page where you can submit a summary of your results for an asteroid, providing the period and amplitude as well as information about you and your equipment. This page should not be considered a place to formally publish your results; it's just for quick reference. Under most circumstances, the best place to publish your results is the Minor Planet Bulletin. See "Where and How Can I Publish My Results?"
This is like asking, "How long is a piece of string?" There is an almost infinite number of possibilities, many depending on the specific target and purpose of your observations. However, there are general steps that almost everyone takes, and following them helps assure that the night's run is both fun and beneficial.
For the most part, this depends on how well you've covered the entire curve. If you have data points that cover 100% of the curve, and those data points were collected over a span of 3 revolutions, you can report the period to a precision of about 1%. For example, if you observed an asteroid over two nights, each night covered the full curve, and the period was 6 hours, you could report the period to about 0.06h, since there would have been about 4 revolutions from start to finish. If you got only 1-2 hours each night, even if each run had data for the same maximum, you should not report the precision to 1%, since you don't have complete coverage of the cycle. In fact, you probably shouldn't report the period at all unless there was no chance to work the asteroid again.
To report the period to 0.1% (1/1000 of the period), you'll need to cover approximately 30 revolutions. In the example above, you would need data spread out over at least a week (about 28 revolutions) and complete coverage, i.e., no gaps in the cycle, even if the data covering a part of the curve came from only one night.
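One reading of these guidelines can be sketched as a simple lookup. The thresholds below (3 revolutions for 1%, 30 for 0.1%) come straight from the text; treating them as hard cutoffs is an assumption for illustration, not a formal error analysis, and it presumes complete phase coverage.

```python
# Sketch of the rule of thumb above: with complete phase coverage,
# ~3 revolutions spanned -> report to ~1% of the period, ~30 -> ~0.1%.
# Hard thresholds are an illustrative assumption.
def reportable_precision(period_hours, span_hours):
    """Return the reportable period precision in hours, or None if the
    data span too few revolutions to justify reporting a period."""
    revolutions = span_hours / period_hours
    if revolutions >= 30:
        return 0.001 * period_hours   # 0.1% of the period
    if revolutions >= 3:
        return 0.01 * period_hours    # 1% of the period
    return None                       # too little coverage

# The 6 h period observed over two nights (~28 h span, ~4.7 revolutions):
p = reportable_precision(6.0, 28.0)
```

For the two-night example this returns 0.06h, matching the figure quoted above; a week-long span would move it to 0.006h.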
As always, these are guidelines. In some cases, you can report a higher precision with less data, and in others, e.g., when the data is noisy, you will have to report a lower precision. An extreme example was when one observer worked an asteroid with a period of 26h and could work the asteroid only 4-5 hours a night. He caught the same minimum each night for four nights and then waited a month before observing again. Still limited to 4-5 hour runs, the timing was such that he was catching the same minimum again. Since the same minimum was caught several times with a month's separation, the period was reported to 0.01h (0.03%). This is somewhat akin to how variable star observers are able to measure the period of an eclipsing binary to a precision on the order of fractions of a second, even though each observation has a precision of only one or two minutes.
The battle cry of "More Data!" should be heard. As you accumulate more observations, not on a given night but over a span of several nights or weeks, some of the alternate solutions quickly drop out, usually to the point where you are left with one unambiguous solution. "Usually" is the key word.
Consider a pyramid-shaped asteroid: it would show the double extrema in only 180° of rotation. If the asteroid has a satellite, the curve may be complex, making the period even more difficult to resolve, since you are trying to derive two or three periods from the same data. An asteroid's period may also appear to change if you follow it for several months, going from one value to another and possibly back to the original value. This has to do with the changing aspect ("face") of the asteroid, because you're not seeing it with the pole exactly perpendicular to the line of sight.
In cases where the period is well in excess of the time you can observe each night, you could easily find an alias of the real period, i.e., a period that has an integral multiple close to the span between the starts of observing runs, or close to an integral multiple of the span between observations. Unless you can cover a significant portion of the curve in a continuous stream of time (meaning you'll need help from observers east and west of you), ambiguous solutions are always a possibility. "MORE DATA!"
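The aliasing described above can be illustrated numerically. With a once-per-night cadence, a true rotation frequency f is hard to distinguish from f plus or minus small integer multiples of one cycle per 24 hours; the sketch below lists the resulting candidate periods (the 8.05 h example period is hypothetical).

```python
# Sketch: candidate alias periods for a nightly observing cadence.
# Sampling once per ~24 h cannot easily distinguish a true frequency f
# (cycles/hour) from f +/- n * (1/24) for small integers n.
def alias_periods(true_period_hours, n_max=2):
    f = 1.0 / true_period_hours
    aliases = []
    for n in range(-n_max, n_max + 1):
        fa = f + n / 24.0               # shift by n cycles per day
        if fa > 0:                      # keep only physical (positive) rates
            aliases.append(1.0 / fa)
    return sorted(aliases)

# A true 8.05 h period sampled nightly has plausible aliases near:
for p in alias_periods(8.05):
    print(f"{p:.2f} h")
```

Continuous coverage from collaborators at other longitudes breaks this daily sampling pattern, which is exactly why such collaborations resolve ambiguous solutions.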
The best place for amateurs to publish their work is in the Minor Planet Bulletin, which is a publication of the Minor Planets Section of the Association of Lunar and Planetary Observers (ALPO).
The MPB is a refereed publication, meaning that the submitted articles are reviewed by one or more qualified jurors for quality of work. This is similar to professional journals and assures that what appears in the Bulletin has met minimum standards before publication. Note that what is not usually considered is the "correctness" of the derived period for a lightcurve. As noted in the section above, the true period can prove elusive. By publishing a period based on sound technique and analysis, one can add to the body of information available that, if needed, can be used to determine the true period or nature of the asteroid. Try to avoid what's been called the "Dusty Filing Cabinet" syndrome where data has been stuffed away in the belief it had no value when, as is often the case, it provides a missing link that leads to a breakthrough.
You can learn more about ALPO from their web site.