The Table of Contents file needs to be updated with any additional contributions found in the database after the initial export in July. Vittorio needs to communicate this to bob.
The Table of Contents must be checked against the papers in our database. Somebody has to make sure a paper copy of each contribution is printed (if an existing paper copy is used instead of a fresh print of everything, it should be compared to the electronic PS or PDF output at least for author, title, and page number). The copies must then be ordered to match the table of contents listing, and the author list, title, and number of pages of each must be checked against the information in the table of contents. [Can we pay a student to help with this? Vahe will supervise this in September.]
The font size and style in the Plenary and Parallel page-separator file [mg9parallelpages.tex] must be fixed.
The break points for separating these roughly 2,500 pages into three volumes must be chosen, and those break points must be inserted into the table of contents. Only the first volume should have the MG prize talks in the front matter. Only the last volume should have the back matter (list of participants and author index). A final page count should be made taking into account the repeated front matter.
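One way to pick candidate break points is to accumulate page counts per contribution and split where the running total crosses an equal share of the whole. The following is a hedged sketch only; the (title, pages) data structure and the equal-share criterion are assumptions, and the real breaks will also need manual adjustment so volumes don't split a session.

```python
# Sketch: choose volume break points so each of the three volumes gets
# roughly an equal share of the total page count. The contribution list
# format (title, pages) is illustrative, not the real ToC data.

def volume_breaks(contributions, n_volumes=3):
    """Return indices of the last contribution in each volume but the final one."""
    total = sum(pages for _, pages in contributions)
    target = total / n_volumes
    breaks, running, vol = [], 0, 1
    for i, (_, pages) in enumerate(contributions):
        running += pages
        if running >= vol * target and vol < n_volumes:
            breaks.append(i)  # contribution i closes volume `vol`
            vol += 1
    return breaks
```

For example, six 10-page contributions split three ways would break after the second and fourth.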
Once all outstanding plenary talks are in and the table of contents file is updated, the ordered author index file must be created from the unordered output file produced by LaTeXing the table of contents file, using a simple file-ordering procedure plus manual editing to delete repeated names. bob will do this as described on the proceedings editor macro page.
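The "simple ordering file manipulation" could look like the sketch below: sort the unordered index lines case-insensitively and drop exact duplicates. This is an assumption about the file format (one author entry per line); near-duplicates with spelling variants still need the manual pass described above.

```python
# Hedged sketch of the author-index ordering step: sort the unordered
# lines from LaTeXing the ToC file, removing exact repeats.

def order_author_index(lines):
    seen, out = set(), []
    for line in sorted(lines, key=str.lower):
        if line not in seen:  # keep only the first copy of a repeated name
            seen.add(line)
            out.append(line)
    return out
```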
The participant name / city / COUNTRY listing for the back matter must be generated by a database export, some massaging, and bob's LaTeXing. Vittorio is working on this.
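The massaging step might be as small as the sketch below, which assumes a semicolon-delimited export with name, city, country columns (the delimiter and column order are assumptions about the database export) and emits sorted "name / city / COUNTRY" lines.

```python
# Illustrative massaging of a participant export into back-matter lines.
# Assumed export format: one "name; city; country" record per line.

import csv
import io

def participants_to_lines(export_text):
    out = []
    for name, city, country in csv.reader(io.StringIO(export_text), delimiter=";"):
        # Uppercase the country to match the name / city / COUNTRY convention.
        out.append(f"{name.strip()} / {city.strip()} / {country.strip().upper()}")
    return sorted(out)
```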
We are still waiting for the plenary contributions whose titles begin with ?? in the table of contents listing, and for the preface from remo.
Finally, we must figure out how to create an index of the zipped LaTeX/figure files or MS Word files for each contribution so that World Scientific can produce high-quality output rather than photoreproducing the paper copies we print for accuracy purposes. Everything should then go on a CD to be sent to the publisher. Perhaps the publisher should be contacted for input on this question...
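One possible shape for that index, pending the publisher's input, is a plain-text manifest of the per-contribution archives on the CD. The directory layout and file extensions below are assumptions for illustration only.

```python
# Sketch: walk a directory of per-contribution source archives (zip of
# LaTeX + figures, or a Word file) and list them with their sizes.

import os

def build_index(root):
    entries = []
    for dirpath, _, files in os.walk(root):
        for f in sorted(files):
            if f.lower().endswith((".zip", ".doc")):  # assumed extensions
                full = os.path.join(dirpath, f)
                rel = os.path.relpath(full, root)
                entries.append(f"{rel}\t{os.path.getsize(full)} bytes")
    return entries
```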
The PDF files must all be regenerated since the robot quality is unacceptable. One can then simply use the existing robot search software to access these contributions...? Is there a global way to mark all the contributions as "published" so that they can be seen by the public?
[We have discussed the possibility of using the old "DVI2PDF converter" (the program that originally created the PDFs but crashed the robot, so we had to replace it with a more stable one that produces poorer-quality PDFs) to generate the new PDFs, using a DOS batch file to process all the files in one pass. I have forgotten how to use these PC non-graphical interfaces, but I'm sure that Vic, Carlo, or Stefano could do it in a few minutes. -- CHCH]
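The "process all the files in one pass" idea could be sketched as below: generate the DOS batch file from the list of .dvi files rather than typing each command by hand. The converter name `dvi2pdf.exe` is a placeholder assumption standing in for the actual tool; whoever runs this should substitute the real program name and flags.

```python
# Hedged sketch: emit a DOS batch file that runs the converter once per
# .dvi file. `dvi2pdf.exe` is an assumed stand-in for the real converter.

def make_batch(dvi_files, converter="dvi2pdf.exe"):
    # DOS line endings; quote names in case of spaces.
    return "\r\n".join(f'{converter} "{name}"' for name in sorted(dvi_files))
```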
This is the lowest priority and will require some thought about how to index and search the PDF/PS files (long and short??) of all contributions. This may require some paid consulting.
---bob
August 3, 2001