Emulation of Ancient Greek Music Using Sound Synthesis and Historical Notation

Introduction

In recent years several efforts have been recorded in Greece and elsewhere to reconstruct Ancient Greek Music instruments, both physically and with physical modeling techniques (Halaris 1992; Tsahalinas 1997; Politis, Vandikas and Margounakis 2005; Hagel 2007). Moreover, software for Ancient Greek Music (henceforth: AGM), especially for educational purposes, has been designed mostly by Greek researchers, as will be described in a later section. The current project is a new contribution to the field of AGM instrumentation, since it presents a software application ("ARION") that can be used as an editor, composer and synthesizer for AGM at the same time. No such electronic instrument has been presented before; ARION is the first of its kind. Its main advantage is that it provides researchers with a user interface that alters scales, accents and pitch assignments, helping them experiment with music forms and scales that have an inherent fuzziness. The challenge of the project is to remain consistent with the source material and create an AGM composer with scientific accuracy, and at the same time to produce a synthesizing instrument with an easy-to-use interface targeting users who are not computer science experts.

But how can one faithfully reproduce ancient music that no one today has ever heard? The only safe way is to follow the work of experts in the field and the actual musical scores. But even these are usually incomplete. Also, the instruments used at the time were very different from modern or even medieval ones. Moreover, the true Ancient Greek accent is different both from the Modern Greek one and from the one used by foreigners today, the so-called Erasmian (Devine and Stephens 1994), so extensive research had to be carried out on the vocal reproduction of the lyrics.

For the vocal reproduction, techniques for synthesis of the singing voice have been used, as described later in greater detail. The synthesis of the singing voice is a research area that has evolved over the last 30 years. This interdisciplinary field involves musical acoustics, signal processing, linguistics, artificial intelligence, music perception and cognition, music information retrieval and performance systems (Georgaki 2004).

As far as resources about AGM are concerned, researchers and pioneers like West (1992) and Pöhlmann (2001) have managed to collect and organize a very large body of documents, some accompanied by actual music scores, and have given scientific insight into a music system over 2000 years old. This project takes their work and tries to make a connection between that music and prevailing modern Western Music.

Two software modules are produced: ARION and ORPHEUS. ARION provides a unique interface for AGM composition and reproduction. It can accurately reproduce Ancient Greek melodies, using the sound of the aulos, as well as vocal elements. The function of the application is based on the mapping and conversion of each AGM symbol to the modern Western notation system. The reverse process (conversion of Western Music to ancient Greek symbolism) is also feasible. The user can experiment with the various scales, symbols and frequencies, having total freedom to "imagine" and hear how AGM really was. ARION has the potential to be synthesis software for professional music researchers.
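As an illustration of the mapping idea just described, the short Python sketch below pairs each AGM symbol with a modern note name and a frequency and supports conversion in both directions, much like the Association Table presented later in this article. It is a minimal sketch, not ARION's actual code; the symbol names, note assignments and frequencies are placeholders rather than historical values.

```python
# Minimal sketch of a bidirectional AGM <-> Western mapping (placeholder data).

AGM_TO_WESTERN = {
    "vocal_symbol_1": ("E4", 329.63),   # hypothetical entries, not West's tables
    "vocal_symbol_2": ("F4", 349.23),
    "vocal_symbol_3": ("G4", 392.00),
}

# Reverse lookup: modern note name back to an AGM symbol.
WESTERN_TO_AGM = {note: sym for sym, (note, _freq) in AGM_TO_WESTERN.items()}

def transcribe_to_western(symbols):
    """Convert a sequence of AGM symbols to (note name, frequency) pairs."""
    return [AGM_TO_WESTERN[s] for s in symbols]

def transcribe_to_agm(notes):
    """The reverse process: map modern note names back to AGM symbols."""
    return [WESTERN_TO_AGM[n] for n in notes]

print(transcribe_to_western(["vocal_symbol_1", "vocal_symbol_3"]))
print(transcribe_to_agm(["F4"]))
```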
ORPHEUS is an interactive presentation for demonstrating AGM instruments. The virtual environment of ORPHEUS allows experimentation with the use and the sounds of the modeled ancient instruments. The Ancient Greek Guitar ("Kithára"), which was the first modeled instrument, can be virtually strummed using the mouse or the keyboard, producing ancient Greek melodies. The application, which is accompanied by information about the history of AGM and a picture gallery related to the Ancient Greek instruments, has a mainly educational character. Its main purpose is to demonstrate the Ancient Greek musical instruments to the audience.

The applications ORPHEUS and ARION were presented to large audiences during the International Fair of Thessaloniki, September 8-17, 2006. They have also been presented at international conferences, in nationwide radio broadcasts, and in newspaper and magazine articles.

The next section provides an overview of AGM, while section "Related Work" presents previous related literature. The following two sections describe in greater detail the applications ORPHEUS and ARION. Finally, the last section outlines the objectives of future work on the project.

Overview of AGM

In this section we provide an overview of music in Ancient Greece. Readers interested in greater detail should refer to (Anderson 1994; Landels 1999; West 1992).

General Characteristics of AGM

A first elementary observation extracted from research on AGM is that the singer held the main role in a musical performance. The soloist's voice was the basic "instrument" in a performance. The melody came indispensably from singing. A musical instrument accompanied the sung Greek poetry. Ancient Greek poetry and tragedy were inseparable from music (Borzacchini and Minnuni 2001). The term "lyric" stems from the word "lyra", or lyre.

Figure 1. Singers and performers in Ancient Greece.

In ancient Greece, the roles of composers and performers intertwined (see Figure 1). The reason why not many handwritten scores from this era exist today is that performers used to improvise on the musical instrument while the soloist was singing the melody, rather than read notes from paper. In a time when papyri, inscriptions, and other written sources were not readily available, people used to recall from memory vast amounts of information, unthinkable for contemporary scholars or performers. In general, the performer followed the singer's tempo and sound, but he also tried to achieve heterophony (by improvising). So the performer was also the composer at the same time. As can easily be conceived from the above, the nature of AGM was not harmonic. Aristides Quintilianus states: "Music is the science of melody and all elements having to do with melody" (Winnington-Ingram 1932). This definition of music fits the monophonic and melodic structure of AGM.

Pitch System

In ancient Greek theory, there are three basic types of genus: the diatonic ("stretched out"), the chromatic ("colorful"), and the enharmonic ("harmonious"). The diatonic genus had a characteristic interval of approximately a "tone" at the top of the tetrachord, then two successive intervals of approximately a "tone" and a semitone at the bottom, making up a 4/3 "perfect 4th". The chromatic genus had a characteristic interval of approximately a minor 3rd at the top of the tetrachord, then two successive intervals of approximately a semitone at the bottom, making up the 4/3 perfect 4th.
Finally, the enharmonic genus had a characteristic interval of approximately a "major 3rd" at the top of the tetrachord, then two successive intervals of approximately a quarter-tone at the bottom, making up a 4/3 "perfect 4th". Instead of using ratios, Aristoxenus divided the tetrachord into 30 parts, of which, in his diatonic syntonon, each tone has 12 parts and each semitone 6 (Barbour 1972). It should be noted here that the tuning systems are described in terms of ratios of string lengths or pipe lengths (which correspond to frequency ratios), so we do not know the absolute frequencies (although they could be calculated). Ancient Greek music theorists, like Archytas, Eratosthenes, Didymus and Ptolemy, propose exact ratios for the intervals of non-diatonic AGM systems, and even versions of the diatonic with microtonal modifications (Franklin 2005). Table 1 contains different tuning ratios of tetrachords for the three genera according to the literature (a short numeric illustration of these ratios is given after the Notation subsection below).

Table 1. Tunings of different genera, as described by AGM theorists (frequency ratios)

Pythagorean diatonic: 256:243, 9:8, 9:8
Pythagorean chromatic: 256:243, 2187:2048, 32:27
Didymos chromatic: 16:15, 25:24, 6:5
Eratosthenes chromatic: 20:19, 19:18, 6:5
Ptolemy soft chromatic: 28:27, 15:14, 6:5
Ptolemy intense chromatic: 22:21, 12:11, 7:6
Archytas enharmonic: 28:27, 36:35, 5:4

Researchers trained in Western Music, with its diatonicism and tempered scales, need additional training to understand the chromatic (Barski 1996; Politis, Margounakis, and Mokos 2004; Politis and Margounakis 2003) and enharmonic background of AGM (West 1992). Several writers, like Otto Gombosi (1939), have managed to interpret and recognize the microtonal nature of Ancient Greek music theory and practice.

Sources

Although sources about Eastern Music, the successor of AGM, are scattered and not as thoroughly indexed as those of its counterpart, Western Music, we know quite a lot about AGM. There are four main types of evidence by which we know about AGM:

(1) Textual. Although not many handwritten scores of AGM have been preserved, there are (luckily) abundant sources about AGM theory. Numerous treatises in Greek, Latin and Arabic have survived which, mingled with the study of other material, became integrated into the cultures of all Western peoples, the heirs of Hellenic learning (Harmonia Mundi 1979). We know a great deal about the rhythms and the tempo of the music, since these are reflected in the metrical patterns of Greek verse (Pöhlmann and West 2001). Adequate knowledge has been gathered about the musical system, that is, how the scales were conceived and the like, since the works of several Greek musical theorists survive, like those of Aristoxenus and Archytas, which are dated to the 4th century BC (Winnington-Ingram 1932; Burkert 1972; Barker 1989).

(2) Notational. Over 60 melodies, most of them fragmentary, have survived as stone inscriptions or musical papyri (scraps of papyrus, the ancient equivalent of paper) containing musical notation. For instance, the song of Seikilos is one of the preserved compositions of AGM. It is engraved on a grave pillar that was found in 1883 and is dated between 200 BC and 100 AD (see Figure 2). Pöhlmann and West (2001) have collected and present 61 original fragments of AGM. While the actual sound is certainly lost, recent research has satisfactorily deciphered AGM notation and rhythm.

(3) Organological.
We can infer much about the instruments, using as evidence surviving fragments of ancient instruments (Halaris 1992; Tsahalinas 1997). Unfortunately, the instruments found at excavation sites are not well enough preserved to reproduce music. The sound examples and recordings we have (Halaris 1992; Harmonia Mundi 1979; ECCD 1999) come from reconstructed instruments, bearing an inherent fuzziness and bias from contemporary construction techniques and materials.

(4) Iconographic. Depictions of musicians and musical events in vase and wall painting and in sculpture provide valuable information about the kinds of instruments that were preferred and how they were actually played (see Figure 1). The interested reader may refer to (Anderson 1994; Bundrick 2005).

Figure 2. The song of Seikilos: AGM notation and its transcription to modern Western music.

Musical Instruments

There are several references to the musical instruments used in AGM, among them the lyra, the aulos, the kithara, the hydraulis (Figure 3), the monochordon and the trichordon. The monochordon, the lyra, the kithara and the trichordon are examples of ancient stringed instruments. The monochordon (or monochord) was a rectangular sound box of arbitrary length with a single string, which could be divided by a movable bridge (Rieger 1996). The kithara was a plucked string instrument and consisted of a square wooden box that extended at one end into heavy arms (Hagel 2007). Originally it had five strings, but additional strings were later added, bringing the total to seven and finally eleven. These were stretched from the sound box across a bridge and up to a crossbar fastened to the arms.

Figure 3. The archaeological finds of a hydraulis (courtesy: the Dion Archaeological Museum) and the reconstructed instrument by the European Cultural Centre of Delphi (ECCD). Photograph by G. Ventouris. Courtesy: ECCD.

The aulos and its variations were a kind of wind instrument. In this first version of ARION, the sound of the musical instrument that accompanies the Ancient Greek Singer is an approximation of the sound of the aulos, while the ancient kithara is the first instrument that was modeled in ORPHEUS.

Notation

In AGM scripts, above each line of Greek there is notation that looks mostly like Greek letters but is in fact vocal musical notation. Interestingly, the Greeks had two systems of musical notation, which correspond note for note with each other: one for the vocal and one for the instrumental melody (West 1992). The instrumental system of notation consists of numerous distinct signs probably derived from an archaic alphabet, while the vocal system is based on the 24 letters of the Ionian alphabet. Some of these symbols can be seen in Figure 4. The whole system covers a little over three octaves; in particular, it contains notes between Eb-3 and G-6 (West 1992). The symbols form groups of three. The first symbol (from the left) in each triad represents a "natural" note on a diatonic scale, the symbol in the middle represents the sharpening of the "natural" note, and the third symbol represents its flattening. For example, the first triad in the instrumental repertory in Figure 4 corresponds (roughly) to the notes E3, E#3, Eb3, and the second triad to the notes F3, F#3 and Fb3.

Figure 4. Notes for instrumental and vocal performance, chosen from a pool of symbols comprising the instrumental and vocal repertory.
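To make the interval ratios of Table 1 concrete, the short Python sketch below builds the four notes of a tetrachord from one row of the table, checks that the three steps together span a perfect fourth (4:3), and expresses each step in cents. The choice of the lowest-note frequency is arbitrary, since, as noted above, absolute pitch is not known from the sources; the sketch is illustrative and not part of the original software.

```python
from fractions import Fraction
from math import log2

def tetrachord(ratios, lowest_hz):
    """Successive frequencies of a tetrachord built from three step ratios."""
    freqs = [lowest_hz]
    for r in ratios:
        freqs.append(freqs[-1] * float(r))
    return freqs

# Archytas' enharmonic division from Table 1, read as successive steps.
archytas_enharmonic = [Fraction(28, 27), Fraction(36, 35), Fraction(5, 4)]

span = Fraction(1)
for r in archytas_enharmonic:
    span *= r
assert span == Fraction(4, 3)          # the genus fills a perfect fourth

notes = tetrachord(archytas_enharmonic, lowest_hz=293.66)   # arbitrary anchor pitch
cents = [round(1200 * log2(float(r)), 1) for r in archytas_enharmonic]
print([round(f, 2) for f in notes])    # [293.66, 304.54, 313.24, 391.55]
print(cents)                           # two microtonal steps and a major third:
                                       # approx. [63.0, 48.8, 386.3]
```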
Related Work

Research on the synthesis of the singing voice has evolved remarkably over the last decades. The efforts of scientists have focused on the differences between the speaking and the singing voice, taking into account the special characteristics of the singing voice (e.g. precision in frequency and rhythm, use of vibrato, etc.). There are plenty of works worldwide on singing synthesis (Carlson, Ternstrom, and Sundberg 1991; Chowning 1981; Cook 1993; Sundberg 2007). A special tribute should be paid to IRCAM's CHANT project (Rodet, Potard, and Barrière 1984). Originally conceived for singing voice synthesis, CHANT turned out to be well-suited to simulating other instruments as well, and rich for synthesis in general. A variant of CHANT dealing with diphone synthesis (Rodet and Lefevre 1997) is close to how ARION interprets and performs the lyrics of composed AGM melodies.

There is also some related work on Greek music. Two text-to-speech/singing projects on Greek music are IGDIS (Cook, Kamarotos, Diamantopoulos and Philippis 1993) and AOIDOS (Xydas and Kouroupetroglou 2001). The latter is a virtual Greek singer (vocal synthesizer) for the analysis and synthesis of Greek singing.

In recent years several efforts have been recorded in Greece and elsewhere to reconstruct AGM instruments, both physically and with physical modeling techniques (Tsahalinas 1997; Politis, Vandikas and Margounakis 2005; Hagel 2007). The most notable was the reconstruction of the ancient hydraulis by the European Cultural Centre of Delphi in 1999. A wide range of other instruments has also been presented in exhibitions and live performances. For example, Halaris (1992) has reconstructed AGM instruments, has exhibited them, and his ensemble performs with them. Fragments of AGM instruments found in excavations, as well as descriptions of them in papyri, have been used as prototypes for this restoration.

ORPHEUS

According to Greek mythology, Orpheus (son of Apollo and the muse Calliope) was a poet and musician. After his wife Eurydice's death, he went to the underworld to ask for her return. His beautiful singing and lyre playing put the guard dog Cerberus to sleep and moved Hades to let Eurydice go. Arion was a historical figure, a famous musician at the court of Periander, king of Corinth. It is said that he played the lyre and sang while on a boat, fascinating some dolphins, one of which subsequently saved him from drowning.

Overview

The first of the two AGM applications concerns the modeling and presentation of the AGM instruments to the audience. ORPHEUS is a multimedia application, designed with Macromedia Flash MX, which also incorporates Microsoft Agents technology. ORPHEUS has a mainly educational character. It can be downloaded from the project site www.seearchweb.net (Arion/Orpheus section). The application provides an interactive interface where the Ancient Greek instruments are presented. The first modeled instrument (more instruments are planned to be added to ORPHEUS in the future) is the ancient Greek kithara, presented here (for more information about the kithara see section Musical Instruments). The electronic visualization of the ancient guitar is based on information extracted from writings and vase paintings that have survived from that era. The user is able to "touch" the strings of the guitar (either using the mouse or the keyboard shortcuts) and in that way create ancient Greek melodies and sounds.
Figure 5. The Ancient Greek Guitar ("Kithara"). On the right is visible the "talking parrot", a Microsoft text-to-speech agent that orally instructs on the use of the kithara.

Kithara Sample Playback

As Figure 5 shows, the modeled kithara is historically the most recent version of the instrument, having 11 strings. The sounds of the ancient guitar's strings have been implemented according to the correspondence of the Ancient Greek symbols to the modern notes (West 1992), which can be seen in Table 2.

Table 2. Mapping of AGM symbols to modern notes for the Ancient Greek Guitar.
AGM symbol: (eleven instrumental symbols, one per string)
Modern note: C, D, E, F, G, A, B, C, D, E, F

The users of ORPHEUS become accustomed to the sound of the kithara, the arrangement of the AGM symbols for instrumental performance, and the correspondence of AGM notes with contemporary ones. The strings of the kithara can be triggered in various ways, depending on the performance preferences or the learning style of the user. As for the timbre of the sound, segments from recordings of performances on reconstructed instruments were used as a basis. Using DSP techniques, sound fonts were created. These are sound data, along with instructions on how to co-articulate the sounds when a chain of notes of variable durations has to be performed. In other words, a wavetable synthesizer is formed.

ARION

Overview of purpose and features

ARION provides a unique interface for Ancient Greek Music composition and reproduction. ARION can accurately reproduce Ancient Greek melodies, using the sound of the aulos, as well as vocal elements. The application runs on Windows XP, Vista or any newer version with .NET Framework 1.1 or a later release (which can be found on the application CD). Apart from the obvious functions that are described in the next sections, ARION provides some more advanced features for the professional researcher. First of all, there is full support of microtonality, allowing any possible frequencies for the notes, since the user is free to edit all the parameters of each note. Also, all the idioms of the Ancient Greek language can be used. The option Synchronize copies the notes from the Instrumental field and pastes them into the Vocal field, or vice versa. Finally, there is the capability of isolating the music or the voice in the final WAV file. In order to demonstrate the capabilities of the application, ARION comes with three ancient songs as presets: "The Song of Seikilos", "Invocation of the Muse" and "Invocation of Calliope and Apollo".

Csound Instrument

The sound of the instruments that the ARION project performs was made with the use of Csound. Csound is an open-source programming language designed and optimized for the signal processing of music files (Vercoe 1986; Boulanger 2000). The language consists of over 450 opcodes, the operational codes that the sound designer uses to build "instruments" or patches. For the instrumental performance of ARION, a new instrument was devised. Taking into account recordings from reconstructed AGM instruments, along with existing Csound patches, a new opcode was created for the .orc (orchestra) file that is used with ARION's distribution. The aulos, a wind instrument, was simulated, with a sound resembling that of a modern flute. The Csound module is pipelined with ARION's Graphical User Interface (GUI). When triggered, Csound reads a text-based score file (.sco) and renders the sound of the aulos. However, as we will see, the vocal performance is rendered outside of Csound by another software module, the Phoneme Modeler.
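The original paper does not include code, but the following minimal Python sketch illustrates one way a GUI can drive Csound as described above: write a text score of i-statements (.sco) and invoke the csound command line to render a WAV file in non-real time. The orchestra file name, the instrument number and the p-field layout (start, duration, amplitude, frequency) are assumptions for illustration, not ARION's actual ones.

```python
import subprocess

def write_score(notes, path="melody.sco"):
    """notes: list of (start_sec, dur_sec, freq_hz) tuples."""
    with open(path, "w") as sco:
        for start, dur, freq in notes:
            sco.write(f"i1 {start} {dur} 0.5 {freq}\n")   # one i-statement per note
        sco.write("e\n")                                   # end-of-score statement
    return path

def render(score_path, orc_path="arion.orc", out_wav="aulos.wav"):
    """Non-real-time rendering of the orchestra + score to a WAV file."""
    subprocess.run(["csound", "-o", out_wav, orc_path, score_path], check=True)

# Two notes of an assumed aulos instrument: A4 for one second, then G4.
render(write_score([(0.0, 1.0, 440.0), (1.0, 0.5, 392.0)]))
```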
Since the two voices, that of the aulos and that of the male singer, have to be heard synchronously, ARION's Csound module undertakes the task of merging the two rendered sound files "on the fly" into one file. The quality of the final synthetic emulation proves adequate compared to commercial recordings of AGM. The handling of the instructions and directives to the Csound processes running underneath ARION's GUI is carried out by ARION itself, without bothering the user with any Csound internals. As a matter of fact, only a very experienced user may notice that Csound is used at all. It is true that alternative tools could have been used for the performance of virtual instruments, like the PW and PWGL visual interfaces that the Sibelius Academy in Helsinki has developed to be used along with the ENP notation system (Laurson et al. 2005). However, AGM has its own musical symbol repertory, completely different from the Common Music Notation paradigm, and therefore new interfaces had to be devised from scratch for the score-writing process.

Figure 6. The articulation procedure and the physical modeling for voice reproduction.

Phoneme Modeler

ARION uses 32 synthesized phonemes for the voice production of the Ancient Greek Singer. The default phoneme configuration of ARION was designed in a special interface built for this purpose: the Phoneme Modeler. The Phoneme Modeler is a TCL/TK interface for the modeling and processing of phonemes. It makes use of the Syntmono server, which manipulates SKINI messages and is part of the Synthesis Toolkit (STK) in C++ (Cook 1993; Scavone and Cook 2004). This interface simulates the human voice production mechanism. The excitations produced by the air flow from the lungs via the vocal cords are the primary sources of articulation (Figure 6). When producing vowels, a strong blow from the lungs serves as the first excitation, while for consonants a noisy outburst is used. By shaping the vocal cords, the various vowels and consonants are produced. However, the vocal cords are not able to differentiate the subtle variations of similar consonants, for instance the plosives /p/ and /b/. Therefore, the pharynx, the soft palate, the oral cavity, the tongue and even the teeth or the nasal cavity undertake the task of filtering the airflow accordingly and articulating the desired phoneme.

Although the phonemes are the basic units of speech, they are not the atomic entities of the server module. The server uses two basic vocal sources, one for consonants (a white noise generator) and one for vowels (Keller 1995). Following Gold and Rabiner (1968), the reproduction procedure is described by a digital oscillator formed by the two zero and pole filter banks used to produce the resonance and anti-resonance characteristics of the vocal tract. The mathematical analysis leads to programming modules (Klatt 1980) described in schematic terms with the blocks seen in Figure 7.

Figure 7. The diagram of a cascaded formant synthesizer: a pulse generator feeding programmable variable formant filters (F1, F2, F3) with gain control for the voiced path, and a white noise generator feeding pole and zero filters for the unvoiced path. The top part synthesizes vowels and the bottom part consonants.

The shaping of the vowels takes place by passing through a filter bank of at least 3 digital filters with the characteristics of the naturally uttered phonemes (Figure 8).

Figure 8. Formant shaping (amplitude in dB versus frequency in kHz for formants F1, F2, F3).
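As a rough illustration of the cascade structure of Figures 7 and 8, the sketch below passes a crude pulse-train source through second-order resonators connected in series, one per formant. It is a simplified Python sketch, not ARION's or STK's code, and the formant frequencies and bandwidths are generic illustrative values rather than the Phoneme Modeler's settings.

```python
import numpy as np

SR = 22050  # sample rate in Hz; an arbitrary choice for this sketch

def resonator(x, freq, bw):
    """Apply a two-pole digital resonator (one formant) to signal x."""
    r = np.exp(-np.pi * bw / SR)
    theta = 2 * np.pi * freq / SR
    a1, a2 = -2.0 * r * np.cos(theta), r * r
    g = 1.0 + a1 + a2                  # rough gain normalisation
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = g * x[n] - a1 * y[n - 1] - a2 * y[n - 2]
    return y

def vowel(pitch_hz=110.0, dur=0.5, formants=((700, 90), (1200, 110), (2600, 160))):
    """Impulse-train source passed through a cascade of formant resonators."""
    n = int(SR * dur)
    src = np.zeros(n)
    src[::int(SR / pitch_hz)] = 1.0    # crude glottal pulse train (voiced source)
    y = src
    for f, bw in formants:             # resonators in series: the cascade
        y = resonator(y, f, bw)
    return y / np.max(np.abs(y))

samples = vowel()                      # an /a/-like vowel, 0.5 s long
```

A consonant path would replace the pulse train with white noise and add an anti-resonance (zero) filter, mirroring the lower branch of Figure 7.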
The cascaded formant synthesizer, along with the formant shaping and the filter banks, simulates the actual procedures of voice production depicted in Figure 6, justifying the term physical modeling (Cook 1996). Indeed, the entities that are exchanged in the client-server multimedia framework of the Syntmono server are the phonemes of the uttered melody. These messages are created in the Phoneme Modeler application and are sent to the server using TCP/IP sockets for communication. Once communication is established, the processing of these SKINI messages, which are compatible to some extent with MIDI messages, results in the audible reproduction of the phoneme.

Figure 9. The Graphical User Interface of the Phoneme Modeler.

The Phoneme Modeler GUI we have devised can be seen in Figure 9. The interface represents the attributes of each phoneme with sliders. The user may define the central frequency, the bandwidth and the relative position of the 1st, 2nd, 3rd and 4th formant for each of the 32 phonemes. Users accustomed to Sundberg's KTH synthesis-of-singing modules (Sundberg 2006) will feel at home modeling the computer-produced voice by altering the shapes of the formants involved.

GUI

The graphical user interface of ARION can be seen in Figure 10.

Figure 10. The Graphical User Interface of ARION.

The application consists of three major interfaces: the Symbol Repertory Interface, the Ancient Greek Music Interface and the Modern Music Interface. The Symbol Repertory is the container of all AGM symbols used by the application. It holds the Instrumental and the Vocal symbols. While browsing through the symbols, the user can see, as a tooltip, each symbol's frequency and the corresponding modern note.

Figure 11. AGM interfaces: notes for vocal and instrumental melodic scripting along with the lyrics (AGM time duration symbols, AGM notes, vocal performance, instrumental performance, lyrics).

The AGM Drawing Interface consists of three fields: Vocal Symbols, Instrumental Symbols and Lyrics (Figure 11). The user can either drag and drop a symbol from the Symbol Repertory to the corresponding field, or use the Text Tool (located in the Toolbar) to edit each field. The Modern Music Interface has two modes, the Vocal Mode and the Instrumental Mode. The user can interact with only one mode at a time (Figure 12). By right-clicking on a note, the Edit Modern Note dialog box is invoked, where the user can modify the note's duration and frequency, shifting it anywhere between double flat and double sharp (Figure 13).

Figure 12. Transcription to the Modern Music Interface.

Figure 13. Altering an AGM note's pitch and duration.

Many notes of AGM cannot be easily related to a counterpart in modern Western music. Especially in AGM modes like the Phrygian and the Lydian, a substrate for the subsequent development of oriental music systems can be detected. Since often no accurate correspondence can be made, the software gives users the flexibility to experiment by assigning different tunings to notes. This can help resolve uncertainty about scales in a trial-and-error manner, by hearing the notes and deciding which tuning sounds most correct. If the user wishes to change one or more attributes of a single note, he/she can right-click on the note in the Symbol Repertory.
In the 'Edit Note Attributes' window (Figure 14), he/she can modify the note's frequency, the double sharp and double flat thresholds, the octave, the corresponding Western note, as well as the predefined type (sharp/flat) of the note.

Figure 14. The Edit Note Attributes dialog box.

To keep things simple, if the user enters the frequency of a note, the double flat and the double sharp values can be calculated using the corresponding button. The calculation follows the equal-temperament relation f' = f × 2^(±2/12), i.e. the entered frequency shifted two semitones up (double sharp) or down (double flat). By clicking OK, the changes are stored permanently in the Association Table and the user is prompted to update the notes already inserted in his composition. The Association Table is the table from which the mapping function reads the data and maps AGM notes to Western Music notes (Figure 15). Each note has the following properties: ID (the unique index of each note), CharacterID (the code that defines the note's symbol), Frequency, Vocal/Instrumental (the note's category), Double Flat Frequency, Double Sharp Frequency, Type (the predefined type, e.g. sharp), Note (the corresponding modern Western note), Octave, and CorrespondingID (the ID of the corresponding Vocal or Instrumental note, used for synchronization). It should be mentioned here that the attributes Note and Octave only define the position of the note on the staff and do not affect the sound of the note in the final produced sound file.

For more extensive modifications, or when changes in the attributes ID, CorrespondingID, CharacterID or Vocal/Instrumental are necessary, the user may edit the Association Table directly from the menu "Tools → Edit Association Table".

Figure 15. Editing the Association Table.

The 'Edit Association Table' window contains the full list of notes and allows the user to delete, add, or modify notes using the corresponding buttons. Editing the "Note" column makes the "Frequency" column change to the standard equal-temperament frequency of the new note name. After the changes, the Association Table can be stored by choosing 'Save Changes and Close', while the button 'Close without saving' discards any changes. Moreover, the Association Table can be exported to an .aat (Arion Association Table) file from the menu "Advanced Settings", so that it can be reused at a later time. Finally, the predefined Association Table of ARION can be restored at any time.

Figure 16. Adding new notes.

Figure 17. Defining the attributes of the file to be produced (dialog box).

The last function creates an audio representation of the current music document. By clicking it, a dialog box about the status of the output file appears (Figure 17). The user can configure the final audio output by choosing which musical elements it should contain: instrumental, vocal and lyrics. He can also define the tempo of the song (the default value is 60). By clicking on "Export", a WAV file is created in the current working directory in a non-real-time manner and a message about successful creation appears on the screen. The user can then listen to the file with any audio player on his/her system. The audio file is produced using Csound's rendering processes along with those of the Syntmono server.

Phonemic Parser

One unique characteristic of ARION is that the user can enter the lyrics of an AGM composition in their original format, i.e. in the polytonic system of writing the Ancient Greek language. Of course, the tool also gives the option to write directly in Modern Greek (see Figure 11).
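To illustrate the kind of rule-based grapheme-to-phoneme transcription that the Phonemic Parser performs (described in the following paragraphs), here is a toy Python sketch. The digraph rules, the letter map and the phoneme symbols are simplified assumptions and do not reproduce ARION's actual rule set.

```python
import unicodedata

DIGRAPH_RULES = [("γγ", "ng"), ("γκ", "nk")]       # hidden nasal before gamma/kappa
LETTERS = {"α": "a", "γ": "g", "ε": "e", "η": "ɛː", "ι": "i",
           "λ": "l", "ο": "o", "σ": "s", "ς": "s"}

def to_phonemes(word):
    """Very rough polytonic-Greek-to-phoneme transcription (illustrative only)."""
    decomposed = unicodedata.normalize("NFD", word)
    has_rough_breathing = "\u0314" in decomposed   # U+0314: combining rough breathing
    base = "".join(c for c in decomposed if not unicodedata.combining(c))
    for src, dst in DIGRAPH_RULES:                 # context-sensitive rules first
        base = base.replace(src, dst)
    out = "".join(LETTERS.get(c, c) for c in base) # then a plain letter mapping
    return ("h" if has_rough_breathing else "") + out

print(to_phonemes("ἄγγελος"))   # -> "angelos"
print(to_phonemes("ἥλιος"))     # -> "hɛːlios"
```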
The polytonic system (with accent and breathing marks) was devised by Aristophanes of Byzantium around 200 BC in order to help foreign students of the Ancient Greek language read and spell it correctly, since the Ancient Greek accent was musical and tonal (Spyridis and Efstratiou 1989), which means that the vowels were pronounced in a very different way from nowadays. The Help section of ARION contains explicit instructions on how to install a polytonic Ancient Greek font. As previously mentioned, ancient Greek pronunciation is quite different from the modern one, although the letters used are the same. On the web there are sites, like that of S. Hagel (http://www.oeaw.ac.at/kal/agp), with sound examples of how classical texts are thought to have been properly pronounced or how AGM melodies should be performed (Hagel 2007). Accordingly, ARION has a parser module that transforms words into their phonemic equivalents, using a set of rules. For instance, when the word "ἄγγελος" (messenger) is met, a naive parser would transliterate it to /a//g//g//e/-/l//o//s/. However, the rule-based parser detects that the digraph γγ implies a hidden /n/, and transcribes the word as /a/-/n//g//e/-/l//o//s/, which explains, among other things, why the English version of the word is "angel". In the same vein, when the word "ἥλιος" (sun) is met, it is rendered as /h//ɛ:/-/l//i/-/o//s/, since its first letter is a long vowel with a rough breathing. Again, this explains its approximate English derivatives: Helios, helium, etc.

Future Work

The next version of ARION will contain far better sound reproduction in terms of the singer's voice. The sound quality will be much improved, so that the voice sounds more 'natural' and less digitized. Our efforts also focus on an even more exact approach to the ancient Greek accent and the precise ascription of the lyrics. Finally, the next version will have important changes in the user interface, so that the tool is highly functional and satisfies all of its users' needs. The objective here is to go beyond purely educational purposes and create a very realistic synthesis result. Part of our future work is also the enhancement of ORPHEUS with more musical instruments, like the lyre, the monochordon and the trichordon. This task requires the electronic modeling of the instruments and of the corresponding sounds each of them produces.

Acknowledgements

ORPHEUS has been supported by the "HERMES" project (Program "Science and Technology Week", Action 4.4.5 "HERMES" 2006, funded by the General Secretariat for Research and Technology, Greek Ministry for Development), while ARION is under the auspices of the SEEArchWeb project ("SEEArchWeb - South Eastern Europe Archaeology Web": an interactive web-based presentation of Southeastern European archaeology; a MINERVA-SOCRATES EU-funded project with code no. 110665-CP-1-2003-1-GR-MINERVA-M). Articles, broadcasts, sound archives, and multimedia presentations on ORPHEUS and ARION can be found at the project's site, www.seearchweb.net. The authors would like to express their gratitude to Professor D. Pantermalis (the Dion Archaeological Museum and the New Acropolis Museum) for granting permission to publish the images of the excavation findings of the hydraulis of Dion. Many thanks go to Professor Ch. Giallouridis (European Cultural Centre of Delphi) for providing permission to publish photos of the reconstructed hydraulis.

References
Anderson, W.D. 1994. Music and Musicians in Ancient Greece, Cornell University Press, Ithaca.
Barbour, J.M. 1972. Tuning and Temperament: a Historical Survey, Da Capo Press, New York.
Barker, A. 1989. Greek Musical Writings II, Cambridge.
Barski, V. 1996. Chromaticism, Harwood Academic Publishers, Netherlands.
Borzacchini, L., and Minnuni, D. 2001. A Mathematical notebook about ancient Greek music and mathematics, University of Bari.
Boulanger, R. 2000. The Csound Book, MIT Press, Cambridge, Mass. (with a CD-ROM of data and applications).
Burkert, W. 1972. Lore and Science in Ancient Pythagoreanism, Cambridge, Mass.
Bundrick, S.D. 2005. Music and Image in Classical Athens, Cambridge University Press, New York.
Carlson, C., Ternstrom, S., and Sundberg, J. 1991. "A new Digital System for Singing Synthesis allowing Expressive Control", Proceedings of the 1991 International Computer Music Conference, Montreal: International Computer Music Association.
Chowning, J. 1981. "Computer Synthesis of Singing Voice", Proceedings of the 1981 International Computer Music Conference, La Trobe University, Melbourne: International Computer Music Association.
Cook, P. 1993. "Spasm, a real-time Vocal Tract Physical Model Controller, and Singer, the companion Software Synthesis System", Computer Music Journal, 17(1): 30-44.
Cook, P.R., Kamarotos, D., Diamantopoulos, T., and Philippis, G. 1993. "IGDIS: A Modern Greek Text to Speech/Singing Program for the SPASM/Singer Instrument", Proceedings of the 1993 International Computer Music Conference, Tokyo: International Computer Music Association.
Cook, P. 1996. "Singing voice synthesis: History, current work, and future directions", Computer Music Journal, 20(3): 38-46.
Devine, A.M., and Stephens, L.D. 1994. The Prosody of Greek Speech, Oxford University Press, New York.
Franklin, J.C. 2005. "Hearing Greek Microtones", in Hagel, S. and Harrauer, Ch. (eds.), Ancient Greek Music in Performance, Wiener Studien Beiheft 29, Vienna.
Georgaki, A. 2004. "Virtual Voices on Hands: Prominent Applications on the Synthesis and Control of the Singing Voice", SMC 2004, Sound and Music Computing Conference, Paris, France, 20-22 October 2004.
Gold, B., and Rabiner, L.R. 1968. "Analysis of Digital and Analog Formant Synthesizers", IEEE Transactions on Audio and Electroacoustics, AU-16: 81-94.
Gombosi, O.J. 1939. "New Light on Ancient Greek Music", International Congress of Musicology, New York, p. 168.
Hagel, S. 2007. Ancient Greek Music, Commission for Ancient Literature, Austrian Academy of Sciences. http://www.oeaw.ac.at/kal/agm/
Halaris, C. 1992. Music of Ancient Greece, booklet and CD.
Harmonia Mundi – Paniagua, G. 1979. Musique de la Grèce Antique, booklet and CD, HMA 1951015, France.
Klatt, D. 1980. "Software for a formant synthesizer", J. Acoust. Soc. Am., 67(3): 972-995.
Keller, E. 1995. Fundamentals of speech synthesis and speech recognition, John Wiley and Sons Ltd., Chichester, UK.
Landels, J. 1999. Music in Ancient Greece and Rome, Routledge, London.
Laurson, M., Norilo, V., and Kuuskankare, M. 2005. "PWGLSynth: A Visual Synthesis Language for Virtual Instrument Design and Control", Computer Music Journal, 29(3): 29-45.
Politis, D., and Margounakis, D. 2003. "Determining the Chromatic Index of Music", WEDELMUSIC 2003, 3rd International Conference on Web Delivering of Music, Leeds, 15-17 September 2003, pp. 93-100.
Politis, D., Margounakis, D., and Mokos, K. 2004. "Visualizing the Chromatic Index of Music", WEDELMUSIC 2004, 4th International Conference on Web Delivering of Music, Barcelona, 13-15 September 2004.
Politis, D., Vandikas, K., and Margounakis, D. 2005. "Notation-Based Ancient Greek Music Synthesis with ARION", Proceedings of the 2005 International Computer Music Conference, Barcelona: International Computer Music Association, pp. 475-478.
Pöhlmann, E., and West, M.L. 2001. Documents of Ancient Greek Music, Oxford.
Rieger, M. 1996. "Music before and after Solesmes", in STS Working Papers, Penn State University.
Rodet, X., Potard, Y., and Barrière, J.B. 1984. "The CHANT Project: From Synthesis of the Singing Voice to Synthesis in General", Computer Music Journal, 8(3): 15-31.
Rodet, X., and Lefevre, A. 1997. "The Diphone program: New features, new synthesis methods, and experience of musical use", Proceedings of the 1997 International Computer Music Conference, Thessaloniki: International Computer Music Association, pp. 418-421.
Scavone, G., and Cook, P. 2004. "Synthesis Toolkit in C++ (STK)", Audio Anecdotes, Volume 2, K. Greenebaum and R. Barzel (eds.), A.K. Peters Press.
Spyridis, H.C., and Efstratiou, S.N. 1989. "Computer approach to the music of ancient Greek speech", Acustica, 69(5): 211-217.
Sundberg, J. 2006. "The KTH synthesis of singing", Advances in Cognitive Psychology, Special Issue on Music Performance, 2(3): 145-162.
Sundberg, J. 2007. "Synthesizing Singing", SMC'07, 4th Sound and Music Computing Conference, 11-13 July 2007, Lefkada, Greece, pp. 9-13.
Tsahalinas, K. 1997. "Physical Modeling Simulation of the Ancient Greek Auloi", Proceedings of the 1997 International Computer Music Conference, Thessaloniki: International Computer Music Association, pp. 454-457.
Vercoe, B. 1986. Csound: A Manual for the Audio Processing System and Supporting Programs with Tutorials, program documentation, MIT Media Lab.
West, M.L. 1992. Ancient Greek Music, Clarendon Press, Oxford.
Winnington-Ingram, R.P. 1932. "Aristoxenos and the intervals of Greek music", The Classical Quarterly, 26(3-4): 195-208.
Xydas, G., and Kouroupetroglou, G. 2001. "The DEMOSTHENES Speech Composer", Proceedings of the 4th ISCA Tutorial and Research Workshop on Speech Synthesis, Perthshire, Scotland, August 29 - September 1, 2001, pp. 167-172.