As computer science advances, it intersects intriguingly with music acoustics, particularly in enhancing piano performance through technological means. This paper presents an innovative approach to piano learning and creation, focusing on the nuances of emotional expression. We have devised a system capable of precise musical tone recognition and sound-quality evaluation, adopting Mel-frequency cepstral coefficients (MFCCs) for the nuanced extraction of piano sounds and integrating dynamic fuzzy neural networks. Our findings show an impressive accuracy rate, with musical-tone misidentification below 2.58% and sound-quality assessment errors within a 5% margin. This work not only sets a new benchmark in piano performance analysis but also paves the way for revolutionary teaching methods in music education, with profound implications for artistic instruction and emotional expression.
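The MFCC front end mentioned above can be sketched from first principles. The frame size, hop length, and filter counts below are illustrative defaults, not the paper's settings, and the 440 Hz test tone is a stand-in for a recorded piano sound:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

def mfcc(signal, sr, n_fft=512, hop=256, n_filters=26, n_coeffs=13):
    # Frame the signal, apply a Hamming window, take the power spectrum.
    frames = [signal[s:s + n_fft] * np.hamming(n_fft)
              for s in range(0, len(signal) - n_fft + 1, hop)]
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Mel filterbank energies, then log compression.
    log_mel = np.log(power @ mel_filterbank(n_filters, n_fft, sr).T + 1e-10)
    # DCT-II decorrelates the log-mel energies; keep the first n_coeffs.
    n = np.arange(n_filters)
    basis = np.cos(np.pi * np.outer(np.arange(n_coeffs), (2 * n + 1) / (2 * n_filters)))
    return log_mel @ basis.T

sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440.0 * t)  # synthetic A4 test tone
coeffs = mfcc(tone, sr)
print(coeffs.shape)  # one 13-coefficient vector per frame
```

The resulting per-frame coefficient vectors are the kind of feature that a classifier such as a dynamic fuzzy neural network would consume.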
Computer-assisted music composition has been an active research area since the mid-1900s. In this paper, we apply the VOGUE model to designing musical sequences of bandish notations of raga Bhairav, a form of classical Indian music. VOGUE, a Variable Order and Gapped hidden Markov model for Unstructured Elements, can capture variable-length dependencies with variable gaps in sequential data. In most ragas, particular patterns repeat, separated by variable-length gaps. VOGUE mines these frequent patterns with their different gap lengths, and the mined patterns are used to build VOGUE models for Indian music ragas. Furthermore, we analyze the benefits of the VOGUE model over the standard HMM. To the best of the authors' knowledge, this is the first attempt to model Indian classical music with a variable-order gapped HMM.
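As a rough illustration of the variable-gap dependencies VOGUE mines, the sketch below counts ordered symbol pairs separated by a bounded gap in a note sequence. The sargam sequence and gap bound are invented for illustration; this is not the VOGUE model itself, only the kind of gapped-pattern statistic it builds on:

```python
from collections import Counter

def gapped_pairs(seq, max_gap):
    """Count ordered pairs (a, b) occurring with 0..max_gap symbols
    between them -- the variable-gap dependencies VOGUE mines."""
    counts = Counter()
    for i, a in enumerate(seq):
        # j ranges over positions whose gap (j - i - 1) is at most max_gap.
        for j in range(i + 1, min(i + 2 + max_gap, len(seq))):
            counts[(a, seq[j])] += 1
    return counts

# Hypothetical Bhairav-style sargam sequence; illustrative only.
notes = ["S", "r", "G", "m", "P", "d", "N", "S", "r", "G", "m", "G", "r", "S"]
freq = gapped_pairs(notes, max_gap=2)
print(freq.most_common(3))
```

Pairs that recur frequently across many gap lengths are the candidates a variable-order gapped HMM would promote to higher-order states, where a fixed-order HMM would miss them.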
This interactive iPad artwork unifies color and musical pitch. Ordinary people, particularly children, can play imaginatively with combinations of the two. In the artistic field, composers with synaesthesia, such as Scriabin and Messiaen, created pieces and conceptual models of instruments based on such combinations. This application aims to give the user two experiences: the artistic experience of these great composers, and the freedom to combine colors and pitches on their own. It is developed for iPad in the Swift language.
The Spatial-Numerical Association of Response Codes (SNARC) effect suggests an association between number magnitude and response position, with faster left-key responses to small numbers and faster right-key responses to large numbers. The attentional SNARC effect (Att-SNARC) suggests that perceiving numbers can also affect the allocation of spatial attention, causing a leftward (vs. rightward) target-detection advantage after perceiving small (vs. large) numbers. Considering previous findings that revealed similar spatial association effects for both numbers and musical note values (i.e., the relative duration of notes), the aim of this study is to investigate whether presenting note values instead of numbers causes a spatial shift of attention in musicians. The results show an advantage in detecting a leftward (vs. rightward) target after perceiving small (vs. large) musical note values. The fact that musical note values cause a spatial shift of attention strongly suggests that musicians process numbers and note values in a similar manner.
Songs play a vital role in our day-to-day life. A song basically contains two components: vocals and background music. The characteristics of the vocals depend on the singer, while the background music involves a mixture of musical instruments such as piano, guitar, and drums. Extracting the characteristics of a song is important for various objectives such as learning, teaching, and composing. This project takes a song as input, extracts its features, and detects and identifies the notes, each with a duration. First the song is recorded, and digital-signal-processing algorithms are used to identify its characteristics. The experiment is first conducted on several piano songs whose notes are already known, and the identified notes are compared with the original notes until the detection rate improves. The proposed algorithm is then applied to piano songs with unknown notes.
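A minimal sketch of FFT-based note identification of the kind described, assuming a simple spectral-peak pitch estimator rather than the authors' full algorithm; real piano tones with strong harmonics would need a more robust pitch detector:

```python
import numpy as np

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def freq_to_note(freq):
    """Map a frequency in Hz to the nearest equal-tempered note name."""
    midi = int(round(69 + 12 * np.log2(freq / 440.0)))  # MIDI 69 = A4
    return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)

def detect_note(frame, sr):
    """Estimate the dominant pitch of one audio frame via the FFT peak."""
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed))
    peak_bin = int(np.argmax(spectrum[1:])) + 1  # skip the DC bin
    return freq_to_note(peak_bin * sr / len(frame))

sr = 44100
t = np.arange(sr // 2) / sr          # half a second of audio
a4 = np.sin(2 * np.pi * 440.0 * t)   # synthetic stand-in for a piano tone
note = detect_note(a4, sr)
print(note)  # A4
```

Splitting a recording into successive frames and running `detect_note` on each yields the note sequence; counting consecutive frames with the same label gives each note's duration.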
The execution time of hitting instrument buttons in human play was identified using time-frequency analysis and peak detection, in order to define a time range that can be tolerated as neither too early nor too late; the result of this analysis was then used to parameterize the randomization of an approximate time at which to play a note. Automatically hitting an instrument button to play a note should not be executed exactly at its time target; rather, it should imitate human play, in which a button is hit at an approximate time. Peak detection identifies the time value at which a human hits an instrument button; this value is then compared with the time target to set a tolerated range of time (RT), which is used to randomize the approximate time (AT) at which to play a note. The natural automatic musical-note player was developed by analyzing how a musician estimates execution time relative to the time target of the tempo. The analysis applied the fast Fourier transform (FFT) to remove noise, followed by peak detection to find the exact time value of an approximate time in the gamelan musician's play.
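The RT/AT mechanism can be sketched as follows. The onset values are hypothetical stand-ins for measurements obtained by FFT denoising and peak detection, and deriving RT as the min/max deviation is an assumed simplification of the paper's analysis:

```python
import random

def tolerated_range(onsets, targets):
    """Derive the tolerated range of time (RT) from measured human onsets:
    the earliest and latest deviation from the metronomic time targets."""
    devs = [o - t for o, t in zip(onsets, targets)]
    return min(devs), max(devs)

def approximate_time(target, rt, rng=random):
    """Randomize an approximate time (AT) within RT around the time target,
    so automatic playback sounds humanized rather than metronomically exact."""
    lo, hi = rt
    return target + rng.uniform(lo, hi)

# Hypothetical onset times (seconds) from a gamelan recording; illustrative.
targets = [0.0, 0.5, 1.0, 1.5]
onsets = [0.01, 0.48, 1.03, 1.49]
rt = tolerated_range(onsets, targets)
at = approximate_time(2.0, rt, random.Random(0))
print(rt, round(at, 3))
```

Each scheduled note then fires at its own randomized AT, keeping the automatic player inside the deviations a human listener already tolerates.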
Data hiding techniques for steganography, which embed secret data in multimedia imperceptibly, are useful for protecting information security. By taking advantage of the popularity of MIDI files on the Internet, a new data hiding method via MIDI files is proposed, which modifies the velocities of musical note sequences to embed secret data. Initially, musical note sequences with monotonic pitches, each consisting of at least three consecutive notes with pitches either entirely non-decreasing or entirely non-increasing, are found in an input MIDI file. Next, for each such musical note sequence, a reference velocity is generated for each non-end note in the sequence by a linear interpolation scheme. Then, a number of data bits are embedded into each non-end note by adding the decimal value of the bits to, or subtracting it from, the corresponding reference velocity to yield a new velocity for the note. The new velocity does not differ much from the original one and fits the velocity trend of the musical note sequence, so the resulting stego-MIDI file does not yield abnormal note strengths and the musical expression is preserved. Moreover, a melody humanization scheme is proposed for modifying the velocity values in strength-invariant MIDI channels to create data embeddability without producing unreasonable melodies. The original MIDI file size is also kept unchanged after data embedding, avoiding attracting attention from hackers. Experimental results show the feasibility of the proposed method. Also, a comparison with five other methods shows that the proposed method has the merit of reducing the resulting melody distortion or file-size change while yielding a reasonable secret-bit embedding rate.
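A minimal sketch of the velocity-embedding step for one monotonic-pitch sequence. The alternating add/subtract convention is an illustrative assumption (the abstract does not specify how the sign is chosen), and the velocities are invented:

```python
def reference_velocities(velocities):
    """Linear interpolation between the two end notes of a monotonic-pitch
    sequence gives a reference velocity for each non-end note."""
    n = len(velocities)
    v0, v1 = velocities[0], velocities[-1]
    return [round(v0 + (v1 - v0) * i / (n - 1)) for i in range(1, n - 1)]

def embed_bits(velocities, bits_per_note, payload):
    """Embed payload bits into non-end notes by offsetting each reference
    velocity by the decimal value of the next bit group."""
    refs = reference_velocities(velocities)
    out = [velocities[0]]
    for i, ref in enumerate(refs):
        chunk = payload[i * bits_per_note:(i + 1) * bits_per_note]
        value = int(chunk, 2) if chunk else 0
        # Illustrative sign convention, not the paper's rule:
        # add on even-indexed notes, subtract on odd-indexed ones.
        out.append(ref + value if i % 2 == 0 else ref - value)
    out.append(velocities[-1])
    return out

# Hypothetical monotonic-pitch note run with velocities 60..72.
vels = [60, 64, 65, 70, 72]
stego = embed_bits(vels, bits_per_note=2, payload="011011")
print(stego)  # [60, 64, 64, 72, 72]
```

Because each new velocity stays within a few units of its interpolated reference, the embedded notes follow the sequence's velocity trend, which is what keeps the stego-MIDI file free of abnormal note strengths.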