Calculate Missing Values

This document is 'Work In Progress' so content may not be complete.

Request for help from Wiki Readers

  • Do you understand how MX works?
  • Do you use hardware, or MX functionality, that is not yet documented? Can you begin that documenting?
  • Can you contribute simple text for novice users, examples of what you have done, correction of typing or factual errors, or supply missing details?
  • Will you make this page more useful by bringing content up-to-date as new releases change some information written for older releases?
  • Does any page need a section for novices, so they don't need to read more technical information further down that page?
  • Is there some information on this page that should be on a separate page? Can you create the new page and move the less relevant information off this page? Don't forget this page needs a link to the new page, so people who expect to find the information here know where it has moved to.

If you plan on contributing to the Wiki, then you will need an account.

  • Please use the Request Account form to apply for an account. Note that the Wiki is currently undergoing restructuring and is largely locked for editing, but please apply for an account if you wish to contribute in the future.
  • You will find help on how to contribute to this wiki at How to Edit.
  • If you need to consult others, please use the Cumulus Wiki suggestions forum.

Please be aware that information on this page may be incorrect.

This page applies to both the legacy Cumulus 1 and to MX. It is intended that this page will cover all the different ways in which you add back any missing data. This means it should cover:

  • Data that you want to add from another system for a period when you were not running Cumulus
  • How Cumulus captures archive data if it is available for a period when Cumulus was stopped
  • Derivatives that have been added since particular log files lines were stored
  • Solving problems when a line was not successfully written to the daily summary log file

How Cumulus Works

The way that Cumulus works is that:

  1. It reads what we can call source values (defined below) from your weather station (based on measurements from sensors being transmitted in some way by the station)
  2. It calculates what we can call derived values which may be either the source re-expressed in different units, or calculated by combining two or three source values to get a new derivative
  3. It tracks various extremes, and cumulative totals, by comparing the derived values against existing extremes and sums stored in various period tracking files
    • You can find out about how a rogue source value can affect extreme records derived from it, and how to correct such issues on the Correcting_Extremes page.
  4. It periodically stores the spot (current) derived values in a collection of Monthly_log_files, (and so when you move to a different device, or upgrade to a new release, then providing you copy these files to the new location you will not lose any data)
  5. At the end of each day, Cumulus logs the daily extremes and daily sums, derived from monitoring changes in each derived value, into the daily summary log file

Reading archive data

If you are using a weather station type that has an internal memory storing weather data that Cumulus can read, then when Cumulus is restarted it can read historic data from that logging memory.

This means that if you discover, soon after the event, that Cumulus has missed some data, you can rewind: stop Cumulus, optionally replace files in the data folder with earlier copies of those files from the backup folder or backup/daily folder, and restart Cumulus. Typical reasons for missing some data include power blips and problems with the interface between Cumulus and the weather station.

  • For Cumulus 1, to stop Cumulus, you select Exit from the main menu.
  • For MX, how you stop MX depends on your device, and whether running as a service, please see MX on Linux or MX on Windows OS pages as appropriate for advice.

Put simply, Cumulus stores the latest time it successfully read data from the weather station in today.ini. When Cumulus is restarted, if it is possible to read the historic data from the weather station, then any entries between the time stored and the current time will be read.
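The catch-up principle can be sketched as follows (a hypothetical Python illustration, not the actual MX code; the record structure and field names are invented):

```python
from datetime import datetime

# Hypothetical illustration of the catch-up principle: the timestamp of the
# last successful read is kept (in today.ini), and on restart every archive
# record newer than that timestamp, up to the current time, is processed.
def records_to_catch_up(archive, last_read, now):
    """Return archive records with last_read < timestamp <= now."""
    return [r for r in archive if last_read < r["time"] <= now]

# Example archive entries at 5-minute intervals (times are made up)
archive = [{"time": datetime(2021, 7, 1, 12, m), "temp": 20 + m / 10}
           for m in (0, 5, 10, 15, 20)]
missed = records_to_catch_up(archive,
                             last_read=datetime(2021, 7, 1, 12, 5),
                             now=datetime(2021, 7, 1, 12, 20))
# The three records at 12:10, 12:15 and 12:20 would be read and processed
```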

With Cumulus 1, there is some dependence on weather station type, but usually two passes are made through the external logging memory: the first pass investigates what records are available in the weather station by reading backwards in time, and the second pass works forwards through that past time, reading and processing those records.

With MX, again there will be some dependence on weather station type, and the process has not been documented by the developer; it appears just a single pass is made.

Importing data from other systems for periods when Cumulus was not running

This section needs more work on it

Essentially, what we need to do, is to compare the format of the data we have available against the format of the relevant Cumulus file.

CSV outputs from software like EasyWeather

You might want to read Transferring_past_observations_from_EasyWeather.dat_to_Cumulus, although that is now an obsolete article: EasyWeather has changed, and Cumulus MX is different from the original Cumulus described there.

The CSV output from such files has fields in the wrong order, may not match our Cumulus units, and does not have a match for each of the fields we need in our Cumulus - see importing into standard log file fields. The solution is to use a CSV file editor, or a spreadsheet, to move the fields into a better order, to apply a formula where necessary to change units, and create any missing fields.

Cumulus does not allow for "null" (field not available) values, except that it can accept shorter lines containing just the fields that applied to an earlier release. For MX, CreateMissing.exe can calculate some of the derivatives missing from earlier releases for the Standard_log_files, and that utility can also generate the necessary dayfile.txt lines.
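As an illustration of that editing, the Python sketch below re-orders fields and converts a unit while reading a CSV export (the column names and the Fahrenheit-to-Celsius conversion are assumptions chosen only to show the idea, not the actual EasyWeather or Cumulus layouts):

```python
import csv
import io

# Hypothetical sketch of re-ordering fields and converting units when
# preparing an export for import into Cumulus. The column names and the
# Fahrenheit-to-Celsius conversion are illustrative assumptions only.
def convert_row(row):
    temp_c = round((float(row["temp_f"]) - 32) * 5 / 9, 1)   # unit change
    # Re-order into the sequence the target file expects
    return [row["date"], row["time"], str(temp_c), row["humidity"]]

source = io.StringIO("date,humidity,temp_f,time\n30/06/21,55,68.0,09:00\n")
out = [convert_row(r) for r in csv.DictReader(source)]
# out[0] is ['30/06/21', '09:00', '20.0', '55']
```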

On the Support Forum, Dane offered a translation service - see here - but the last post is May 2018, so I am unsure whether it is still available as I type this in July 2021.

Weather Display

Please see Software page - Weather_Display_Converter. When you run this routine, it brings up a screen where you select the units you want the Cumulus Standard_log_files it produces to use; the units that Weather Display uses for the export are fixed.

The routine was written for Cumulus 1, so will not produce all the fields that MX uses, but CreateMissing.exe can calculate some of the derivatives missing from earlier releases, as well as producing dayfile.txt.

Weather Link log file

There is a routine, see Software page - WeatherLink_Converter, for this. When you run this routine, it brings up a screen where you select the units. Please note, this utility assumes the units you select on that screen apply both to the file (or multiple files) being converted and to the (multiple) Cumulus Standard_log_files it produces.

It was written for Cumulus 1, so will not produce all the fields that MX uses, but CreateMissing.exe can calculate some of the derivatives missing from earlier releases, as well as producing dayfile.txt.

Weather Link Live

PLEASE CAN SOMEBODY FULLY DOCUMENT THIS.

My understanding is that while MX can read current data from the WLL, you need a "pro subscription" to export past data. I don't know the format of that past data export, but can only guess it can be easily imported into Cumulus.

OTHERS????

PLEASE CAN SOMEBODY CONTRIBUTE WAYS OF IMPORTING FROM OTHER SOFTWARE HERE.

Meanwhile, people should ask for help in the support forum.

Some definitions

To make sense of explanations on this page, you need to understand the terminology used here.

Source value

A weather station sends values based on its sensors to Cumulus. If Cumulus reports such a value without recalculating it (although an offset and/or multiplier might be applied to convert it to the unit wanted by the Cumulus user), the value is described as a source value, because Cumulus is reporting something that has its source elsewhere.

There is not a single list of what weather values are called "source values", because this varies depending on the weather station, and in some cases, a Cumulus User can ask Cumulus to recalculate a value instead of using what is sent by their weather station.

However, Cumulus does include code that expects a weather station to provide a defined minimum set of source values:

  1. Current air temperature
  2. Current Relative Humidity
  3. At least one wind speed
  4. Current air pressure (absolute or sea-level)

Cumulus will stop processing any information from a weather station unless the above 4 source values are being supplied and are shown to be updating (failure is flagged after a total of 6 unsuccessful consecutive attempts to read each of these).

This requirement is a default, but it can be changed.

Cumulus also expects that your weather station can provide:

  • A rainfall counter (this could be annual rainfall, or count of rocker bucket gauge tips)

Although the lack of that rainfall counter source value will affect functionality, Cumulus will continue to process other source values that are available.

Some weather stations may also provide one, or more, of these optional source values (not a complete definitive list):

  • Dew-point Temperature
  • Wind Chill Temperature
  • Evaporation
  • Sunshine hours
  • Solar radiation
  • UV index
  • Air pollution measurement

Derived value

A dictionary will define derived as "obtained from a source", and that is the meaning adopted on this page. Steve Loft (in the Cumulus Support Forum) used the terminology "derived" for two purposes.

  1. One type of derived value takes a source value, applies any multiplier (may be both first order and second order multipliers) and/or constant that has been defined in calibration settings, and converts the output to the units selected by the Cumulus user.
  2. The other type of derived value takes more than one source value, applies a standard calculation, and outputs a new derivative
    • Because newer releases calculate more derivatives than older releases, extra fields have been added to the standard log file
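As an illustration of the first type of derived value, the sketch below applies multipliers and an offset to a raw reading and then converts the result to the user's chosen unit. This is a purely illustrative Python sketch: the polynomial calibration form, function names, and example numbers are assumptions, not the actual Cumulus code.

```python
# First type of derived value: a calibration (second-order and first-order
# multipliers plus an offset) is applied to the raw source reading, then the
# result is converted to the unit selected by the Cumulus user.
def calibrate(raw, mult2=0.0, mult=1.0, offset=0.0):
    return mult2 * raw * raw + mult * raw + offset

def c_to_f(celsius):
    return celsius * 9 / 5 + 32

raw_temp = 20.0                                          # reading from station, deg C
adjusted = calibrate(raw_temp, mult=1.02, offset=-0.5)   # calibrated: 19.9
display = c_to_f(adjusted)                               # user chose Fahrenheit: 67.82
```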

"Calculate Missing"

This also has two meanings in a Cumulus context:

  1. If a particular standard log file line has fewer fields than the latest line;
    • Calculate Missing is the process of looking at the derived values of first type above, and calculating any derivative (second type of derived value) that is missing in that particular line
  2. If a particular daily summary log file, either does not have a line for a particular meteorological date, or does not have all fields defined in a line for a particular meteorological date;
    • Please see Amending dayfile page for full details.
    • Calculate Missing is the process of scanning all the lines in the standard log file that relate to the meteorological date and recalculating approximate extremes, or sums, for the missing fields.

If you are using Cumulus MX, there is a download linked from here that does both of these. There are also editors within the admin interface for manually editing the files on a line by line basis. You can also use the PHP Hypertext Pre-processor (PHP) script specified for Cumulus 1 below, although be aware it was written for a very old PHP version.

If you are using the legacy Cumulus 1 software:

  1. For the standard log file meaning above, provided you have access to a web server that can run PHP Hypertext Pre-processor (PHP) scripts, then this post in support forum includes a script that produces a HTML form where you specify the log file name you wish to edit. The script will read that file, and output a replacement file with all possible spot derived fields populated. Please note that script was written to run on an old version of PHP that was current at the time the script was written, it will need some editing to work on latest PHP.
  2. For the daily summary log meaning above, go to the Edit menu, and select Dayfile.txt. This brings up an editor with a button labelled "Create Missing", that will not affect any existing line, but can insert missing lines, see Amending_dayfile#Create_Missing.

Accurate or Not?

This Wiki page describes some techniques for calculating and inserting values that are missing from standard log files and from daily summary log file.

Since the derived values this page is discussing are spot values, they have to be calculated from source values measured at the same time. This means that if one of your .ini files is missing some fields, these missing fields cannot be calculated from other fields. This applies to any missing extreme records for today, this month, this year, monthly-all-time, or all-time.

However, the techniques for correcting rogue values described on the Correcting_Extremes page, can be used for inserting missing values in the daily and longer period extreme records.

For the standard log files, all the fields in any one line relate to the same time; therefore, for derived values calculated from other fields in the same line, you should have the same value whether it was calculated when that line was originally stored, or calculated afterwards. I say "should" because the calculation formula is not always the same for all releases; in particular, there are differences between how Cumulus 1 and how MX calculate some derivatives.

For entries in today.ini, month.ini, year.ini, alltime.ini, and monthlyalltime.ini files, you don't have access to the source values used for the original calculation afterwards.

  • The original values are calculated as Cumulus is running
    • Depending on your weather station, Cumulus is able to read values every minute, and consequently update today.ini (and the other files listed) each minute if an extreme happens
    • Obviously, dayfile.txt is updated from today.ini, so it is just as accurate
  • Any "Calculate Missing" operation, done subsequently, does not have access to old data, it can only look in the spot values that have been logged.
    • If Cumulus is set up to only log the readings every half an hour, Create Missing is only able to see 1/30th of the per-minute data.
    • Due to this mismatch, the derived values (averages, highs, lows) this approach can store are much less accurate (hence getting missing lines from a backup is better)

Derived spot values

As Cumulus reads each source spot value, its code detects whether that source value is required for the calculation of an instant derived spot value.

Here are all the derived spot values that Cumulus can calculate (depending on Cumulus configuration settings, and what your weather station can output):

  • Dew point, a weather station might output dew point temperatures, but Cumulus can calculate it from source values for outdoor temperature and outdoor humidity. The original legacy Cumulus 1, and CumulusMX, use different formulae to calculate dew point, so there is a continuity break if some of your data logs were created by the original Cumulus software and some by CumulusMX.
  • Wet Bulb is not calculated by CumulusMX
  • Wind Chill, again this might be output by your weather station, but Cumulus can calculate it from outdoor temperature and average wind speed.
  • Canadian Humidity Index (Humidex), USA Heat Index, and Apparent Temperature are not output by your weather station, but both the original Cumulus 1 and the newer Cumulus MX will derive these spot values for you (except if you are running a very old release)
    • The implementation of these by Cumulus software is briefly mentioned here.
    • The calculation formulae used for these may not be consistent for all releases, so again there is a possibility a data log might have continuity breaks.
  • Feels Like Temperature is calculated by CumulusMX only, the actual calculation formula has varied in different releases.
  • Heating Degree Days and Cooling Degree Days; these are further examples of derived values that most versions of Cumulus will calculate for you (from all processed outdoor temperatures in a day)

The links above will take you to where the derived values are explained in the Category:Terminology pages of this Wiki; however, at the time of writing this page, many of those links have very little information, so you may wish to search online (for example, in Wikipedia) to find more.

There are some configuration settings where you can decide whether to use a weather station supplied dew point temperature and whether to use a weather station supplied wind chill temperature, please see the interface and Cumulus.ini pages for how to find the settings.
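For readers who want to see what a "combine two source values" derivative looks like, here is the widely published Magnus approximation for dew point. This is an illustration only: as noted above, Cumulus 1 and CumulusMX each use their own formulae, which differ from this one and from each other.

```python
import math

# Magnus approximation: derive dew point (deg C) from air temperature
# (deg C) and relative humidity (%). Not the formula Cumulus itself uses.
def dew_point(temp_c, rel_humidity):
    a, b = 17.62, 243.12
    gamma = math.log(rel_humidity / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

dew_point(20.0, 60.0)    # roughly 12 deg C
dew_point(20.0, 100.0)   # at 100% humidity, dew point equals air temperature
```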

Field Count Variations

When the standard data logging file was introduced it only had 16 (or fewer?) fields. As time has gone by, extra fields have been added to the file. At time of writing, 29 fields have been in the file since release 3.6.12 (build 3088), and currently the "To Do" database does not include any suggestions that would add more fields.

When the daily summary log file was introduced it had 15 fields. As time has gone by, extra fields have been added to the file. At release 3.6.12 there were 54 fields, but at earlier and later releases there are fewer fields. At the last update of this page (release 3.7.0) there were 52 fields. The number of fields in a line of the file might be changed in a future release.

When you use Cumulus to edit any of these files, it expects the file to have the number of fields defined in the release you are using. If an existing line in the file has fewer fields, Cumulus can still read it, but Cumulus will add trailing field separators if the file line is edited.
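The trailing-separator behaviour can be illustrated like this (a Python sketch; the comma separator and the field count of 6 are assumptions for illustration, as the separator in real Cumulus files depends on your locale settings):

```python
# Pad a short (older-format) line with trailing field separators so that it
# has the field count expected by the current release. Separator and field
# count are illustrative assumptions.
def pad_line(line, expected_fields, sep=","):
    have = line.count(sep) + 1
    return line + sep * (expected_fields - have)

pad_line("30/06/21,09:00,20.0", 6)   # '30/06/21,09:00,20.0,,,'
```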

Consequently, those people who have used Cumulus for a while may have files that include some lines with fewer fields stored than their latest lines.

Why do "Calculate Missing"?

Most functionality in Cumulus is concerned with current data or extremes/sums that are derived for an hour, a day, or longer periods. For these contexts, you might encounter an odd rogue value that needs to be corrected as described on the Correcting_Extremes page. You are unlikely to worry about missing past values.

However, if you want to be sure that your all-time extremes, or monthly-all-time extremes, are correct, then this table shows how the start date for these extremes varies. You might want to achieve better consistency by adding missing fields to earlier lines in the log files; if so, you want to do a "Calculate Missing".

If you are using the Historic Charts feature introduced from release 3.9.2 - b3097 (7 December 2020), you may notice that these new charts have gaps in available data, and the dates with/without data vary depending on what is being plotted. Again, you might want to achieve better consistency by adding missing fields to earlier lines in the log files; if so, you want to do a "Calculate Missing".


How to do "Calculate Missing"

As mentioned earlier, there are a number of options, here are the detailed instructions for each option.

CreateMissing.exe

Mark Crossley has written a utility that can add any of the following fields to your MMMYYlog.txt lines:

The primary purpose of the utility is however to create a new dayfile.txt populating each field as summarised in table below.

Please note the developer does not fully describe his utility at his GitHub page, so the author of this Wiki update cannot guarantee the detailed documentation here is correct.


Requirements for using Create Missing utility

This utility is only for those who have already installed Cumulus MX:

Although the developer works on "CreateMissing.exe" independently of the "CumulusMX.exe" code (so they can get out of step), normally the latest release of the former works with the latest release of the latter.

In a Microsoft Windows environment, Create Missing uses the .NET software that is normally already available.

In a UNIX-derived operating system (e.g. a computer running Linux or Raspberry Pi operating system), there is a need to install MONO to run either "CreateMissing.exe" or "CumulusMX.exe".


Obtaining the Create Missing Utility

There is a download link on Software#Create_Missing page for the latest release of the utility. The MX 3.20.0 release zip also includes a copy of the utility; it was not included in release packages for earlier MX releases, and I can't predict whether it will be included in any subsequent MX release.

All releases are available at Github release download, but be aware that Create Missing version 1.0.2 definitely had a bug as mentioned by developer in the forum, and the forum reports issues found by some users over handling the first day of any month with some other releases. Each release works only with specific "CumulusMX.exe" releases as described (with links to details) in previous subsection.

  1. Run whatever package your computer uses to extract/unzip packages.
  2. The package should list 2 or 3 components found in the zip file
  3. Extract those components to a new folder in your download area
  4. Copy/install the "CreateMissing.exe" and "CreateMissing.exe.config" files into the same folder as CumulusMX.exe.
  5. If there is an "Updates.txt" included, you will need to rename that file before copying, as it conflicts with a file of that same name issued with "CumulusMX.exe" and "ExportToMySQL.exe" release downloads.

If you are installing it into a UNIX environment (e.g. a computer running Linux or Raspberry Pi operating system), the CreateMissing.exe file may need to be given execute access (see Preparing_your_Linux_computer_for_MX#chmod).

Preparing to run the Create Missing Utility

  1. Open up the MX interface in a browser, and navigate to Settings menu -> Station Settings -> General Settings -> Advanced Options -> Records Began Date
    • The date that is shown there is the date where "Create Missing" will start by default, so if you have MMMYYlog.txt log files with an earlier date, edit the date here
      1. Ensure any new date you enter there uses exactly the same format as the date that was there
      2. Click Save Settings button if you have made a change to this date
  2. Close your browser, and (if using an interactive screen for your computer) open up your file manager, or (if using a terminal session for access to your computer) navigate to your CumulusMX folder
  3. Navigate to your data sub-folder
  4. If there is a file there called dayfile.txt.sav, rename that file to dayfile.txt.sav.bak (or any other name that does not already exist)

Optionally, you may wish to take a backup of the existing contents of the "data" directory onto a separate storage device, because the utility edits files in there, and if something goes wrong mid-edit (e.g. power cut, storage device failure) you could lose your valuable data.


Running the Create Missing Utility

The utility can be run while "CumulusMX.exe" is running (see below) either interactively or as a service, or while MX is stopped.

On Microsoft Windows operating systems:

  1. First change the path to your Cumulus MX root folder
  2. Start a command window, Powershell window, or Terminal window (whichever is available when you right click in the folder or on the "Start" icon)
  3. Now type CreateMissing (or type "CreateMissing.exe", both will work)


On Unix-derived operating systems (such as Linux):

  1. First change your command line path to your Cumulus MX root folder (cd CHOSEN PATH/CumulusMX)
  2. Now type sudo mono CreateMissing.exe
    • If the user you are using already has execute access it is possible to leave out 'sudo' (the present writer has not tested this)
    • If mono is already running, it has been suggested it might be possible to leave out 'mono' (the present writer has not tested this)


It is important to understand that when you start MX interactively, or as a service, it reads the contents of dayfile.txt into an internal array (held in random access memory - RAM).

  • The only other time that "CumulusMX.exe" accesses the dayfile.txt is at rollover, when it copies what it generates internally for the day just ended into the file
  • You should never run CreateMissing.exe near to your rollover time
  • At any other time, if you run CreateMissing.exe while CumulusMX.exe is running, any updates to the file are not seen by MX until that software is restarted.
  • Release 3.20.0 (beta build 3199 onwards) adds a new "Utils" menu with a new option to refresh the internally held values without restarting MX.
  • The internally held values, not the contents of dayfile.txt itself, are displayed when you use any of the extreme record editors


How the Create Missing Utility works

The utility program will output to any open terminal session, and also (with further detail) to a file saved in the MXdiags directory.

This utility program looks in Cumulus.ini for:

  1. The Cumulus start date in "StartDate=" parameter, which defaults to the date you first ran Cumulus (as mentioned above it can be edited to another date, to include imported earlier data or to exclude data that relates to a former location).
    • That will be the earliest date the utility program processes.
    • However, if a dayfile.txt file exists and that has an earlier date, then "Create Missing" will only continue if you accept that earlier date.
  2. The meteorological day start time in "RolloverHour=" and "Use10amInSummer=" parameters.
    • This identifies which standard log lines belong to each day by checking against date and time of that line.
  3. The thresholds for Heating Degree Days, Cooling Degree Days, and Chill Hours
  4. The starting month for Chill Hours Season
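The meteorological-day matching described in item 2 above can be sketched as follows (a hypothetical Python illustration; MX's actual handling, including the "Use10amInSummer=" daylight-saving adjustment, is more involved):

```python
from datetime import datetime, timedelta

# Assign a log-file line to a meteorological day: with rollover at 09:00,
# lines timed before 09:00 count towards the previous calendar day.
# A rollover hour of 0 makes the meteorological day match the calendar day.
def met_day(timestamp, rollover_hour=9):
    if timestamp.hour < rollover_hour:
        return (timestamp - timedelta(days=1)).date()
    return timestamp.date()

met_day(datetime(2021, 7, 2, 8, 55))   # belongs to 1 July 2021
met_day(datetime(2021, 7, 2, 9, 0))    # belongs to 2 July 2021
```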

This utility program looks in the data sub-folder:

  1. If there is a file there called dayfile.txt.sav, the utility aborts
  2. If there is a file there called dayfile.txt, the utility renames that to dayfile.txt.sav


The sequence followed by the utility is:

  1. The utility will not run if a file called dayfile.txt.sav already exists in data folder
  2. The utility reads Cumulus.ini and displays the 'record start date' it finds
  3. It asks if this date is correct
  4. If you answer with anything other than 'Y' or 'y', the utility aborts
  5. If you answer with 'Y' or 'y', the utility continues
  6. It creates an internal array to hold the equivalent of dayfile.txt in random access memory - RAM
  7. If there is an existing dayfile.txt, its contents are used to populate that two-dimensional array
  8. It renames any existing 'dayfile.txt' to dayfile.txt.sav in your data sub-folder
  9. It then works through the MMMYYlog.txt files that have lines for dates after that 'record start date'
  10. For each of these Standard log files there is another sequence:
    1. The inner sequence involves opening the file (the log in MXDiags records when each file is opened, files may be opened more than once)
    2. The dates being processed are shown in the terminal window; as each output is followed by just a "Line Feed" character the lines will overwrite each other in a Microsoft Windows environment (which expects "Carriage Return" and "Line Feed" in sequence to terminate a line), will be on successive lines in a Linux environment (where the normal line terminator is "Line Feed"), and will be on one long line in a Mac operating system (where the normal line terminator is "Carriage Return")
    3. If a line read in the source file does not include a particular derived value (see list above), then the utility will calculate that required value from the spot source field values in the same line, and update that line
    4. The exact calculation done for each item in dayfile.txt is listed in the table #How the utility creates a dayfile.txt line below, but here are the basic principles
    5. For solar data, it examines lines in the file with times between two successive midnights
      • If the array already holds a "sun hours" figure for the date quoted in those lines, the utility skips to next midnight to midnight period
      • If the array is missing a "sun hours" figure for the date quoted in those lines, the utility stores the sun hours recorded in last entry of that period in the part of the internal array with matching date
    6. From release 1.3.0, CreateMissing.exe will also calculate the maximum rainfall in a 24-hour period. To do this, it takes the rain counter for the line in the source file it is currently processing, looks up the rain counter in the line nearest to 24 hours later (if the time interval between the lines is not exactly 24 hours, then the difference figure reported will be for whatever period is available), and tracks the maximum difference; the date on which that maximum difference ends determines where in the array the figure is stored. If the difference in rain counter yields a negative figure, it is ignored
    7. For other daily data, the lines examined will start at rollover time and continue until just before the next rollover time (with allowance for any DST change)
      • If the daily data is an extreme, then the highest or lowest (source or derived) value seen in that range of lines is added to the array if an entry does not already exist
      • If the daily data is cumulative (over a day or over a season), then if the internal array does not already have a value, that from the last value in the range of lines is stored
      • If the daily data is not directly related to fields held in the source file (e.g. cumulative chill hours), then it will be calculated from what is available. For chill hours: if the recorded temperature in the line being examined is below the threshold, then the time interval passed since the previous entry is added to an internally held count. Note this MX calculation is slightly different to the Steve Loft approach, which required the average of the temperature in a particular line and the temperature in the previous line to be below the threshold
    8. When all lines in a particular file have been examined, that file is closed (the log in MXDiags records when each file is closed, files may be closed more than once)
    9. The next file (chronologically) is opened (the log in MXDiags records when each file is opened, files may be opened more than once) and the inner sequence continues
  11. When all the source files have been processed, the utility continues
  12. The utility creates a new file, naming it dayfile.txt
  13. The utility copies what is stored in its internal array to the new file
  14. To prevent the terminal screen closing, the utility ends with a "Press any key to continue" prompt
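The 24-hour rainfall principle in the inner sequence above can be sketched as follows (a hedged Python illustration: the data structures and function name are invented, and the real utility's handling of line selection and ties may differ):

```python
from datetime import datetime, timedelta

# For each log line, find the line closest to 24 hours later, take the
# difference in the cumulative rain counter, ignore negative differences
# (counter resets), and track the maximum over the whole data set.
def max_rain_24h(lines):
    """lines: list of (timestamp, rain_counter) tuples in time order."""
    best = 0.0
    for i, (t0, c0) in enumerate(lines):
        later = lines[i + 1:]
        if not later:
            break
        target = t0 + timedelta(hours=24)
        t1, c1 = min(later, key=lambda lc: abs(lc[0] - target))
        diff = c1 - c0
        if diff > best:
            best = diff
    return best

lines = [(datetime(2021, 7, 1, 9), 100.0),
         (datetime(2021, 7, 1, 21), 104.0),
         (datetime(2021, 7, 2, 9), 112.5),
         (datetime(2021, 7, 2, 21), 113.0)]
max_rain_24h(lines)   # 12.5 (from 09:00 on 1 July to 09:00 on 2 July)
```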


How the utility creates a dayfile.txt line

dayfile.txt field Standard log file fields Description
Daily derivative Preferred field First source Second source Third source (how calculated)
date Day-Month-Year Hour-Minute From processing lines linked with that Meteorological day.
Highest wind gust speed Cumulus Gust wind speed Stores highest value of that log file field in that Meteorological day.
Bearing of highest wind gust Average wind bearing (in degrees) Stores the bearing recorded at same time as maximum value in previous field
Time of highest wind gust Hour-Minute Stores the time in log file line used in two previous fields
Minimum temperature Current temperature Stores the lowest value of that log file field in that Meteorological day.
Time of minimum temperature Hour-Minute Stores the time in log file line used in the previous field
Maximum temperature Current temperature Stores highest value of that log file field in that Meteorological day.
Time of maximum temperature Hour-Minute Stores the time in log file line used in the previous field
Minimum sea level pressure Current sea level pressure Stores the lowest value of that log file field in that Meteorological day.
Time of minimum pressure Hour-Minute Stores the time in log file line used in the previous field
Maximum sea level pressure Current sea level pressure Stores highest value of that log file field in that Meteorological day.
Time of maximum pressure Hour-Minute Stores the time in log file line used in the previous field
Maximum rainfall rate Current rainfall rate Stores highest value of that log file field in that Meteorological day.
Time of maximum rainfall rate Hour-Minute Stores the time in log file line used in the previous field
Total rainfall for the day Total rainfall today so far Stores the entry in the last log file field in that Meteorological day.
Average temperature for the day Hour-Minute Current temperature Loop through every log file pair of fields in that Meteorological day:
  1. Work out interval time in minutes obtained by subtracting previous "Hour-Minute" field from current "Hour-Minute" field
  2. Work out product of above interval time times "Current temperature" field
  3. Sum the interval times in step 1 for whole day
  4. Sum the products in step 2 for whole day
  5. When the loop completes, store the sum in step 4 divided by the sum in step 3
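The time-weighted averaging described above can be sketched in Python. This is an illustration only, not the utility's actual code; the entry layout (minutes since midnight paired with temperature) is an assumption. Note that the sum of the interval-times-temperature products is divided by the sum of the intervals.

```python
# Sketch of the time-weighted daily mean temperature. Each entry pairs the
# log line's "Hour-Minute" (as minutes since midnight) with its
# "Current temperature" field.
def daily_mean_temperature(entries):
    total_minutes = 0.0   # sum of intervals (step 3)
    total_product = 0.0   # sum of interval x temperature products (step 4)
    previous = None
    for minutes, temperature in entries:
        if previous is not None:
            interval = minutes - previous              # step 1
            total_minutes += interval
            total_product += interval * temperature    # step 2
        previous = minutes
    return total_product / total_minutes               # divide products by intervals

print(daily_mean_temperature([(0, 10.0), (30, 12.0), (60, 14.0)]))  # → 13.0
```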
Daily wind run Hour-Minute Cumulus moving 'Average' of wind speed measurements over a particular period Loop through every log file pair of fields in that Meteorological day:
  1. Work out interval time in hours obtained by subtracting previous "Hour-Minute" field from current "Hour-Minute" field
  2. Work out product of above interval time times "Current average wind speed" field
  3. Sum the products in step 2 for whole day
  4. When the loop completes, store the sum from step 3
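The wind run accumulation can be sketched similarly. Again this is an illustrative example under assumed entry layout, not the utility's code:

```python
# Sketch of the daily wind run: distance travelled by the wind, i.e. the
# sum of (interval in hours x average wind speed) across the day.
def daily_wind_run(entries):
    # entries: (hours_since_midnight, average_wind_speed) pairs
    run = 0.0
    previous = None
    for hours, speed in entries:
        if previous is not None:
            run += (hours - previous) * speed
        previous = hours
    return run

print(daily_wind_run([(0.0, 5.0), (0.5, 4.0), (1.0, 6.0)]))  # → 5.0
```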
Highest Average Wind Speed Cumulus moving 'Average' of wind speed measurements over a particular period Stores highest value of that log file field in that Meteorological day.
Time of Highest Avg. Wind speed Hour-Minute Stores the time in log file line used in the previous field
Lowest humidity Current relative humidity Stores the lowest value of that log file field in that Meteorological day.
Time of lowest humidity Hour-Minute Stores the time in log file line used in the previous field
Highest humidity Current relative humidity Stores highest value of that log file field in that Meteorological day.
Time of highest humidity Hour-Minute Stores the time in log file line used in the previous field
Total evapotranspiration Evapotranspiration Stores highest value of that log file field in that Meteorological day.
Total hours of sunshine Hours of sunshine so far today Stores highest value of that log file field in that calendar day (i.e. midnight to midnight)
High USA Heat index Heat Index Current relative humidity Current temperature The heat index is a derived value. If the "preferred field" does not contain a valid number, then that field is populated for each line linked with that Meteorological day using the values in the fields named in the other columns of this table. When every preferred field in the day has a value, the highest is stored.
Time of high heat index Hour-Minute Stores the time in log file line used in the previous field
High Apparent temperature Apparent temperature Current relative humidity Current temperature Apparent temperature is a derived value. If the "preferred field" does not contain a valid number, then that field is populated for each line linked with that Meteorological day using the values in the fields named in the other columns of this table. When every preferred field in the day has a value, the highest is stored.
Time of high apparent temperature Hour-Minute Stores the time in log file line used in the previous field
Low apparent temperature Apparent temperature Current relative humidity Current temperature Apparent temperature is a derived value. If the "preferred field" does not contain a valid number, then that field is populated for each line linked with that Meteorological day using the values in the fields named in the other columns of this table. When every preferred field in the day has a value, the lowest is stored.
Time of low apparent temperature Hour-Minute Stores the time in log file line used in the previous field
High hourly rain Total rainfall today so far High hourly rain is a derived value. Loop through every log file field in that Meteorological day, build up a series of hourly values (total rainfall in this entry minus total rainfall an hour earlier), find maximum of all those hourly values, and store that.
Time of high hourly rain Hour-Minute Stores the time in log file line used in the previous field
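The rolling hourly difference described for "High hourly rain" can be sketched as follows. This is a hypothetical illustration; the handling of the first partial hour (assuming zero rain before the first reading) is an assumption:

```python
# Sketch of the "high hourly rain" scan. Each log line carries the
# cumulative rain total for the day, so the rain in the last hour is that
# total minus the total from (roughly) an hour earlier.
def high_hourly_rain(entries):
    # entries: (minutes_since_midnight, cumulative_rain) pairs, ascending
    best, best_time = 0.0, None
    for i, (minutes, rain) in enumerate(entries):
        earlier = 0.0  # before any reading an hour back, assume zero rain
        for m2, r2 in entries[:i]:
            if m2 <= minutes - 60:
                earlier = r2  # latest reading at least an hour earlier
        hourly = rain - earlier
        if hourly > best:
            best, best_time = hourly, minutes
    return best, best_time

print(high_hourly_rain([(0, 0.0), (30, 1.0), (60, 3.0), (90, 3.5)]))  # → (3.0, 60)
```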
Greatest wind chill (high wind speed, low temperature) Wind chill Cumulus moving 'Average' of wind speed measurements over a particular period Current temperature Wind Chill can be reported by the weather station or it can be derived. If the "preferred field" does not contain a valid number, then that field is populated for each line linked with that Meteorological day using the values in the fields named in the other columns of this table. When every preferred field in the day has a value, the lowest value (the most severe wind chill) is stored.
Time of greatest wind chill Hour-Minute Stores the time in log file line used in the previous field
High dew point Current dew point Dew Point can be reported by weather station or it can be derived. However, all Cumulus releases have this log file field. Stores highest value of that log file field in that Meteorological day.
Time of high dew point Hour-Minute Stores the time in log file line used in the previous field
Low dew point Current dew point Dew Point can be reported by weather station or it can be derived. However, all Cumulus releases have this log file field. Stores lowest value of that log file field in that Meteorological day.
Time of low dew point Hour-Minute Stores the time in log file line used in the previous field
Today's dominant/average wind direction Cumulus moving 'Average' of wind speed measurements over a particular period Average wind bearing (in degrees) The dominant/average wind direction is a derived value.
  1. Loop through every log file pair of fields in that Meteorological day:
    • Calculate increment in X as product of wind speed times sine of bearing, and sum those increments
    • Calculate increment in Y as product of wind speed times cosine of bearing, and sum those increments
  2. Convert final X and Y coordinates back to a bearing in degrees
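The vector averaging in those steps can be sketched in Python. This is an illustration under assumed entry layout, not the utility's code; note that compass bearings are measured clockwise from north, hence the argument order in `atan2`:

```python
import math

# Sketch of the speed-weighted vector average used for the dominant wind
# direction. Entries are (average_wind_speed, bearing_degrees) pairs.
def dominant_wind_direction(entries):
    # Sum X (east) and Y (north) components, weighting each bearing by speed
    x = sum(speed * math.sin(math.radians(bearing)) for speed, bearing in entries)
    y = sum(speed * math.cos(math.radians(bearing)) for speed, bearing in entries)
    # atan2(x, y), not atan2(y, x): bearings are measured from north
    return math.degrees(math.atan2(x, y)) % 360

print(round(dominant_wind_direction([(10, 90)])))  # → 90
```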
Heating degree days (HDD) Hour-Minute Current temperature Loop through every log file pair of fields in that Meteorological day:
  1. Work out interval time in days obtained by subtracting previous "Hour-Minute" field from current "Hour-Minute" field
  2. Work out increment in HDD by subtracting current temperature from HDD threshold (a negative result, when the temperature is above the threshold, contributes nothing)
  3. Work out product multiplying result in step 1 by result in step 2, and sum those products
  4. At end of loop store the final sum
Cooling degree days (CDD) Hour-Minute Current temperature Loop through every log file pair of fields in that Meteorological day:
  1. Work out interval time in days obtained by subtracting previous "Hour-Minute" field from current "Hour-Minute" field
  2. Work out increment in CDD by subtracting CDD threshold from current temperature (a negative result, when the temperature is below the threshold, contributes nothing)
  3. Work out product multiplying result in step 1 by result in step 2, and sum those products
  4. At end of loop store the final sum
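Both degree-day loops can be sketched with one helper. This is an illustration, not the utility's code; the `max(0, ...)` clamp (ignoring readings on the wrong side of the threshold) is an assumption about how negative increments are handled:

```python
# Sketch of the degree-day accumulation. Intervals are fractions of a day.
def degree_days(entries, threshold, heating=True):
    # entries: (day_fraction, temperature) pairs in time order
    total = 0.0
    previous = None
    for day_fraction, temperature in entries:
        if previous is not None:
            interval = day_fraction - previous
            if heating:
                total += interval * max(0.0, threshold - temperature)
            else:
                total += interval * max(0.0, temperature - threshold)
        previous = day_fraction
    return total

print(degree_days([(0.0, 10.0), (0.5, 10.0), (1.0, 10.0)], 15.5))  # → 5.5
```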
High solar radiation current solar radiation Stores highest value of that log file field in that Meteorological day.
Time of high solar radiation Hour-Minute Stores the time in log file line used in the previous field
High UV Index UV Index Stores highest value of that log file field in that Meteorological day.
Time of high UV Index Hour-Minute Stores the time in log file line used in the previous field
High Feels Like temperature Feels Like temperature Current relative humidity Cumulus moving 'Average' of wind speed measurements over a particular period Current temperature Feels Like temperature is a derived value. If the "preferred field" does not contain a valid number, then that field is populated for each line linked with that Meteorological day using the values in the fields named in the other columns of this table. When every preferred field in the day has a value, the highest is stored.
Time of high feels like temperature Hour-Minute Stores the time in log file line used in the previous field
Low Feels Like temperature Feels Like temperature Current relative humidity Cumulus moving 'Average' of wind speed measurements over a particular period Current temperature Feels Like temperature is a derived value. If the "preferred field" does not contain a valid number, then that field is populated for each line linked with that Meteorological day using the values in the fields named in the other columns of this table. When every preferred field in the day has a value, the lowest is stored.
Time of low feels like temperature Hour-Minute Stores the time in log file line used in the previous field
High Canadian Humidity Index or Humidex Humidex Current relative humidity Current temperature The Canadian Humidity Index is a derived value. If the "preferred field" does not contain a valid number, then that field is populated for each line linked with that Meteorological day using the values in the fields named in the other columns of this table. When every preferred field in the day has a value, the highest is stored.
Time of high Humidex Hour-Minute Stores the time in log file line used in the previous field
Cumulative Seasonal Chill Hours Current temperature Hour-Minute "Chill Hours" is a derived value, loop through every log file field in that Meteorological day:
  1. Work out interval time in hours obtained by subtracting previous "Hour-Minute" field from current "Hour-Minute" field
  2. Work out if there is increment in Chill hours by seeing if "Current temperature" field is below Chill Hours threshold temperature
  3. If there is an increment, sum value from step 1
  4. At the end of the loop, store the final value of the sum after adding it to the value for the previous day (except on the first day of the month specified as the start of the Chill Hours season, when the cumulative count restarts)
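Those steps can be sketched in Python. This is MX style (the current line's temperature is compared with the threshold); the threshold value and season handling are illustrative assumptions, not the utility's code:

```python
# Sketch of the cumulative chill-hours calculation.
def chill_hours_for_day(entries, threshold, previous_total, season_start=False):
    # entries: (hours_since_midnight, temperature) pairs in time order
    hours = 0.0
    previous = None
    for hour, temperature in entries:
        if previous is not None and temperature < threshold:
            hours += hour - previous  # add the interval to the count
        previous = hour
    # the cumulative count restarts on the first day of the chill season
    return hours if season_start else previous_total + hours

print(chill_hours_for_day([(0, 5.0), (1, 8.0), (2, 3.0)], 7.0, 10.0))  # → 11.0
```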
Maximum rainfall in a 24 hour period ending on a particular day Rainfall Counter Two log file lines, approximately 24 hours apart, are examined. The rainfall counter value in the earlier line is subtracted from that in the later line. If the difference is negative, it is ignored. If the difference is zero or positive, then it is inserted into the "Rain 24 hours" field for the relevant Meteorological day date, for the time in the later line, unless it is lower than a number already there
Time of maximum rainfall in a 24 hour period Hour-Minute Stores the time in later of the two log file lines used in the previous field
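The 24-hour rainfall scan just described can be sketched as follows. This is a hypothetical illustration; the choice of the reading closest to 24 hours earlier is an assumption about what "approximately 24 hours apart" means:

```python
# Sketch of the "maximum rainfall in any 24 hours" scan using the rainfall
# counter field (timestamps here are minutes since an arbitrary start).
def max_rain_24h(entries):
    # entries: (minutes, rain_counter) pairs in chronological order
    best, best_time = 0.0, None
    for i, (t_later, c_later) in enumerate(entries):
        # pick the reading closest to, but at least, 24 hours (1440 min) back
        candidates = [(t, c) for t, c in entries[:i] if t_later - t >= 1440]
        if not candidates:
            continue
        t_earlier, c_earlier = candidates[-1]
        diff = c_later - c_earlier
        if diff >= 0 and diff > best:  # a negative difference is ignored
            best, best_time = diff, t_later
    return best, best_time

print(max_rain_24h([(0, 100.0), (1440, 105.0), (2880, 106.0)]))  # → (5.0, 1440)
```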

After running Create Missing

If MX was running while Create Missing was run, then MX will remain unaware that "dayfile.txt" has been updated, and that can cause a problem at rollover when MX appends a new line to the file.

If you are running an MX release up to 3.19.3, then you should stop and restart MX, so it loads the contents of the new file into its internal array (see #Running the Create Missing Utility). From release 3.20.0 (introduced in beta build 3199, released as build 3202), there is a Utils menu in the interface: select the Reload Dayfile menu item, and on the web page subsequently displayed click the Reload Dayfile button.

Create Missing changes the content of your MMMYYlog.txt files and your dayfile.txt, so you may then wish to work through all the extreme record editors and decide if you want to update any entries.


How the utility reports progress

Here is a short section of typical output (from version 1.0.2, which had a bug and never processed the 1st day of a month) in a log that was stored in MXdiags:

2021-06-08 19:35:44.108 Loading log file - data/Jul20log.txt
2021-06-08 19:35:44.191 01/07/2020 : No monthly data was found, not updating this record
2021-06-08 19:35:44.688 Date: 02/07/2020 : Adding missing data
2021-06-08 19:35:44.705 Date: 03/07/2020 : Adding missing data
2021-06-08 19:35:44.719 Date: 04/07/2020 : Entry is OK
2021-06-08 19:35:44.720 Date: 05/07/2020 : Entry is OK
2021-06-08 19:35:44.720 Date: 06/07/2020 : Entry is OK
2021-06-08 19:35:44.720 Date: 07/07/2020 : Entry is OK
2021-06-08 19:35:44.720 Date: 08/07/2020 : Entry is OK
2021-06-08 19:35:44.720 Date: 09/07/2020 : Entry is OK
2021-06-08 19:35:44.720 Date: 10/07/2020 : Entry is OK
2021-06-08 19:35:44.721 Date: 11/07/2020 : Adding missing data
2021-06-08 19:35:44.777 Date: 12/07/2020 : Adding missing data
2021-06-08 19:35:44.791 Date: 13/07/2020 : Adding missing data
2021-06-08 19:35:44.805 Date: 14/07/2020 : Adding missing data
2021-06-08 19:35:44.819 Date: 15/07/2020 : Adding missing data
2021-06-08 19:35:44.834 Date: 16/07/2020 : Adding missing data
2021-06-08 19:35:44.848 Date: 17/07/2020 : Adding missing data
2021-06-08 19:35:44.863 Date: 18/07/2020 : Adding missing data
2021-06-08 19:35:44.877 Date: 19/07/2020 : Adding missing data
2021-06-08 19:35:44.892 Date: 20/07/2020 : Adding missing data
2021-06-08 19:35:44.905 Date: 21/07/2020 : Adding missing data
2021-06-08 19:35:44.919 Date: 22/07/2020 : Adding missing data
2021-06-08 19:35:44.933 Date: 23/07/2020 : Adding missing data
2021-06-08 19:35:44.948 Date: 24/07/2020 : Adding missing data
2021-06-08 19:35:44.962 Date: 25/07/2020 : Adding missing data
2021-06-08 19:35:44.977 Date: 26/07/2020 : Adding missing data
2021-06-08 19:35:44.992 Date: 27/07/2020 : Adding missing data
2021-06-08 19:35:45.006 Date: 28/07/2020 : Entry is OK
2021-06-08 19:35:45.006 Date: 29/07/2020 : Entry is OK
2021-06-08 19:35:45.006 Date: 30/07/2020 : Entry is OK
2021-06-08 19:35:45.141 Date: 31/07/2020 : Adding missing data
2021-06-08 19:35:45.156 Finished processing log file - data/Jul20log.txt
2021-06-08 19:35:45.156 Loading log file - data/Aug20log.txt

Using a PHP script on your web server

If you have access to a web server that can run PHP Hypertext Pre-processor (PHP) scripts, then this post in the support forum includes a script that produces an HTML form where you specify the log file name you wish to edit. The script will read that file, and output a replacement file with all possible spot derived fields populated.

That might sound a bit technical, so here are some step by step instructions:

  1. Download the processStandardLog.php script from here.
  2. File transfer (or copy for local web server) that script to your web server
  3. File transfer (or copy for local web server) all the standard log files from the data sub-folder in your Cumulus installation to a suitable holding folder (you may need to create it) on your web server
  4. Open PATH/processStandardLog.php in a browser, replacing PATH with the path to the script as defined from the root of your web server
  5. This loads a web page where you have a field asking you to enter a path and file name for the data log you want to process
  6. Continue to follow the instructions on the web page
  7. When it has created a replacement file, you can enter details for another data log, and continue until all your data logs have been processed
  8. Take a backup of your existing Cumulus installation (you should be doing that on a regular basis anyway, so I will not give instructions here)
  9. Carefully delete any non-current data log in your data sub-folder that you have a replacement for, and file transfer (or copy back) the replacement data logs from your web server into the local data sub-folder, noting that the file extension will need to be changed from .csv to .txt.

Using the data log editor provided in MX

  This document was written for the (legacy) Cumulus 1 software. It has been updated to cover MX, but for an MX release that is no longer the latest!

When this section was written, the number of lines shown was fixed at a maximum of 10; later releases have given the option to display different numbers of lines, and there may be other changes still to be documented here.

In the MX admin interface go to the Data Logs menu tab, and select the Data Logs page.

There is a box for selecting the data log you want to edit. Once you have loaded that, the first (up to) 10 lines are shown. Navigation links let you select 'First', 'Previous', 'Next', and 'Last' pages, also a small number of pages can be selected directly.

Once you select a line, an Edit button is enabled, click that and you can manually input the missing values for that line. Save that edit, and you can select another line. Once you have edited all the lines on that page, you can select another page, and repeat the process. Then you can select another log, and repeat the process.

It is a long-winded way to edit, and the MX editor does not even validate what you have entered. An alternative is to edit each log file externally, and you can read how to do that in the "Work around for standard log files" section below.

Some readers of the Cumulus support forum will know that a third-party replacement for the MX editor was worked on, but never incorporated into MX. The idea was to replace the alt_editor software used by Mark Crossley with a standard HTML form script. This allowed in-line editing; it allowed the derived values to be calculated and displayed (so you simply decided whether to accept the suggested replacements for the various fields); and it applied validation to each field to ensure any manual edit inserted a value within the allowed range. The main reason it was not adopted into the public MX was the complex way in which the different files included in the admin interface interact, and the consequent knock-on effect that changes made for this replacement had on other pages in the admin interface. The author could not afford the time to redesign the whole admin interface so the proposed replacement could be integrated.


Lack of editor in Cumulus 1

Cumulus 1 provides a viewer for the data logs; it does not permit editing of the file.

On the View menu, select 'Data Logs', then enter the file name you want to view and load it. You can scroll left to right through the fields, and you can scroll up and down through the lines. The viewer shows a header row so you know which field is which. You cannot do any editing.

If you find that this viewer cannot load a data log, it is probably because you ignored the read me that is part of the Cumulus 1 installation procedure, see FAQ: I can't find my data files. If the displayed headings do not match the data shown, you have not read the caution on the screen, which says the viewer is only for standard data logs, not extra sensor data logs, nor the daily summary log.

Cumulus 1 does not provide any functionality to edit the standard data logs, whether to correct a rogue value, or to add a missing derivative.

Work around for standard log files

An option is to edit the file outside Cumulus using a comma separated value file editor, a plain text editor, or a spreadsheet program (like the free open source Libre Office Calc or the commercial Microsoft Excel).

Note: Cumulus 1 applies an exclusive lock to the current standard log file, and conflicts can happen if another process seeks to access this file. Consequently, don't let antivirus scans access this file, nor try to edit it outside Cumulus while the original Cumulus software is running. A full discussion of the problems with conflicting access to the standard log file can be found in this support forum topic.

If you decide to edit the current log outside Cumulus, then remember that, if you leave Cumulus running, it will continue to append new lines. Therefore, you either need to close Cumulus while you are doing the edit; or (if you are able to merge two files) close Cumulus while you replace its file with a merge of what you have edited and the extra lines added since you took a copy away to edit.

It is best practice to take a back-up copy of your whole Cumulus installation before starting any editing. It is also best practice to take a further copy of any file you want to amend, and to do your edit on that copy, so you do not edit any Cumulus file directly. The original full backup will preserve the existing file, so you can regress to it, should Cumulus find an error in your edit.

Also note that these log files do not include a header line, and should not be edited to include one. All flavours of Cumulus provide, in the data sub-folder, a file called monthlyfileheader.txt which contains the headers appropriate to the release you are running; you can add that temporarily to your spreadsheet if it helps you with editing, but don't forget to delete it before saving the file ready to make it available to Cumulus.

Here are some other rules to follow when editing the standard log files:

  • You can't edit any log file with a word processor, as they add control characters, and other information, that Cumulus cannot understand.
  • Editing is straightforward if you use a specialised comma separated value file editor. Such editors split the content by field, so it is easy to ensure you only amend field content and do not accidentally change a field divider. They also do not add additional content to any line, as they can cope with the number of fields varying between lines and do not change the encoding.
  • If you want to use a text editor, it is best if you choose one designed for computer programmers or developers. Such an editor will allow you to select the encoding (Cumulus will be confused by any Byte Order Mark, so select the encoding type without BOM).
  • If you choose to use a spreadsheet, ensure that all columns are treated as plain text. Do not let the spreadsheet recognise that the first field contains a date (don't accept the Excel default), as it will convert that column into a number (e.g. days since 1900 or days since 1970). For example, in Libre Office make sure that "Detect special numbers" is not selected. Many spreadsheets will offer a CSV option for saving the file (in Libre Office tick "Edit Filter Settings" on "Save As...").
  • If you amend a field, ensure that the replacement is in the same format as the original (same decimal separator if not an integer).
  • Ensure no blank lines are introduced by your editing.
  • Ensure that all lines continue to have date and time information at the start of the line, and that the format of that identifier is not changed (same sequence, same character(s) separating each element of the date, a colon separating hours and minutes, and no seconds element added to the time).
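Some of these rules can be checked automatically before handing an edited file back to Cumulus. Here is a minimal sketch (a hypothetical helper, not part of Cumulus; it assumes a two-digit dd/mm/yy date with a consistent slash or hyphen separator):

```python
import re

# Minimal check of a standard log file line against some of the rules
# above: no blank lines, and the line must start with "date,time," in a
# consistent dd/mm/yy (or dd-mm-yy) hh:mm format with no seconds.
LINE_START = re.compile(r"^\d{2}([/-])\d{2}\1\d{2},\d{2}:\d{2},")

def check_line(line):
    problems = []
    if not line.strip():
        problems.append("blank line")
    elif not LINE_START.match(line):
        problems.append("line does not start with date,time in the expected format")
    return problems

print(check_line("25/06/21,09:00,12.3,85,10.9"))  # → []
print(check_line("Date,Time,Temp"))  # reports the bad line start
```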


General External Editing Rules

  • Take a copy of the file that can be reverted to if there is a subsequent problem, and you have messed up the file that Cumulus (1 or MX) is now trying to use.
  • Take another copy and use that for your editing, don't edit the actual file being used by the software.
    • This prevents any conflicts between access by the software and access by your script or tool being used to modify the file.
    • It also means that you can always go back to the last working copy, because you never touch your "revert" copy.
  • The file must never be edited with a word processor, as they store many control and identification characters that prevent Cumulus correctly reading the values.
  • Generally, it is easiest if you use either a specialised "Comma Separated Value" file editor or a text editor.
    • These tools have the advantage that they can cope with different lines having a different number of fields depending on which version number of Cumulus created each line.
  • You can use a spreadsheet application, but if you do, there may be a number of settings to change from their defaults to ensure the file remains in a readable format for Cumulus.
    • You need to ensure that your spreadsheet treats every column as plain text, don't let it recognise dates or times and convert them into another format, don't let it convert any numeric field into another format
    • If you do use a spreadsheet, extra field separators may be added at the end of shorter lines, as the spreadsheet makes all lines end up with the same number of fields.
  • Don't remove any figures from fields where figures currently exist, simply overtype one entry with another entry in same format.
  • If your file has previously been edited by the relevant editor in MX, a field that looks empty may actually contain one or two space characters.
  • If you are editing a field which was previously empty, remember that Cumulus has no concept of nulls (do not enter -999 or "Null"); there is nothing that can be used as a place-holder when the correct figure is not known, and an empty field is not permitted if any subsequent field in the same line is not empty.
    • Beware - if you do insert zero or an obviously wrong extreme value, Cumulus will display those in any editing screen where you wish to update the all-time, monthly-all-time, this month, or this year, extremes. This can make editing by picking values in logs harder.
    • Cumulus itself will use zero for any parameters (e.g. solar) not provided by your station, and will repeat the last valid value up to 6 times if the station fails to send a value it should provide (normally six successive readings occur between entries in the standard data log, so repeated values are less likely to affect log files).
  • The character (or in a few locales, two characters) used for separating the day of the month from the month, and the month from the year, must be consistent throughout the whole file (and must not be a single space). Normally, the separator will be either "-" or "/". Whether Cumulus expects a hyphen or a slash is determined by the locale; you must keep to the same locale for the whole file, and cannot change it when you do an edit, nor when you update the device running Cumulus. Although use of a comma or point for separating parts of the date occurs in some locales, and is therefore allowed by Cumulus, those locale settings are not recommended as these date separators can cause issues for subsequent edits.
  • USA date format with month before day of month, and finally year, is not permitted for log files.
  • All figures must be within the range of sensible figures for that field (no hour 24 or higher, no signed numbers when accepted values must be positive, don't put in 200 for a relative humidity)
  • Make sure that any editing does not create any blank lines in the file. Cumulus assumes an empty line means end of processing.
  • Don't add a header line to the file, Cumulus expects all lines to be data lines.
  • Be aware that different devices use different line terminators, so ensure that after editing a file, the line terminator is correct for the device that is running Cumulus:
    • The single character representing line feed (in most encodings, LF is binary equivalent of a decimal 10) is used for both UNIX and Linux devices (including Raspberry Pi Operating System)
    • The single character representing carriage return (in most encodings, CR is binary equivalent of a decimal 13) is used for Apple operating systems (like Mac)
    • The two character sequence first CR then LF is used to terminate lines in all Windows operating systems (part of Microsoft's determination to be different)
    • Problems with terminating characters are normally intercepted by the operating system before the contents of a line reach any software like Cumulus. However, if partial editing or merging has produced a file with mixed line terminators, there is a high possibility this will stop any software understanding the resulting file, so be careful if you edit the file on a different device to the one running Cumulus.
    • Finally, if you are going to use a script (such as JavaScript or PHP) to read a Cumulus file, that script might recognise a different line terminator to the one your device's operating system recognises. Most likely, with processing on a Windows device, the script will treat one of the terminating characters (CR) as part of the adjacent field's text, and only treat the LF as a line terminator.
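If you need to fix line terminators after editing a file on a different device, a small script can normalise them. This is a generic illustration, not a Cumulus tool; take a backup of the file first:

```python
# Normalise a string's line endings for the device that will run Cumulus.
def normalise_line_endings(text, target="\n"):
    # Convert CRLF first, then any bare CR, so mixed files come out clean
    return text.replace("\r\n", "\n").replace("\r", "\n").replace("\n", target)

windows_text = "line1\r\nline2\r\nline3\r\n"
print(normalise_line_endings(windows_text) == "line1\nline2\nline3\n")  # → True
```

Pass `target="\r\n"` to produce Windows-style terminators, or `target="\r"` for classic Mac style.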