Stored procedures in database management systems are often used to implement complex business logic. The correctness of these procedures is critical for the correct functioning of the system. However, testing them remains difficult due to the many possible database states and the constraints on data, so in practice they are mostly tested manually. Newer tools offer automated execution for unit testing of stored procedures, but the test cases are still written manually. We propose an approach that uses dynamic symbolic execution to automatically generate test cases and corresponding database states for stored procedures. Treating the values in database tables as symbolic, we model the constraints on data imposed by the schema and by the SQL statements, and use an SMT solver to find values that drive the stored procedure down a particular execution path. We instrument the internal execution plans generated by PostgreSQL to extract constraints and use the Z3 SMT solver to generate test cases consisting of table data and procedure inputs. Our evaluation on stored procedures from a large business application and several GitHub repositories demonstrates the effectiveness of the technique: the generated test cases expose schema constraint violations and user-defined exceptions.
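To make the idea concrete, here is a minimal sketch of the test-generation loop for a hypothetical withdraw() stored procedure guarded by a CHECK (balance >= 0) constraint. All names are illustrative, and a brute-force search stands in for the Z3 solving step described above.

```python
# Sketch of symbolic test generation for a hypothetical stored procedure.
# A real implementation hands the path constraint to an SMT solver such as Z3;
# here a brute-force search over small values stands in for the solver.

def path_constraint_negate_branch(balance, amount):
    """Constraint for the path where a hypothetical withdraw() procedure
    would violate CHECK (balance >= 0) and raise an exception."""
    return amount > 0 and balance - amount < 0

def solve(constraint, domain):
    """Stand-in for an SMT solver: enumerate candidate table/input values."""
    for balance in domain:
        for amount in domain:
            if constraint(balance, amount):
                # A test case = initial table data + procedure input
                return {"accounts.balance": balance, "p_amount": amount}
    return None

test_case = solve(path_constraint_negate_branch, range(0, 10))
print(test_case)  # {'accounts.balance': 0, 'p_amount': 1}
```

The generated test case seeds the table with `balance = 0` and calls the procedure with `amount = 1`, driving execution down the constraint-violating path.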
This Innovative Practice Full Paper presents a framework for generating computer-based exams for complex engineering systems (such as cache memories) that can be machine graded while still offering partial credit to students. Complex, multi-faceted engineering systems often require long, multi-part problems to fully assess students' understanding of those systems. Cache memories represent one such system in computer architecture courses. Traditionally, we assessed students' understanding of caches using comprehensive, multi-part questions in a paper-based exam. Grading these exams was time-consuming and subjective. To cope with rising enrollment, we sought to address these issues by developing machine-administered, machine-gradable exams that did not rely heavily on multiple-choice questions or exact numerical responses. Additionally, this system needed to provide partial credit, a common expectation of our students. We developed a cache simulator to serve as a back-end for our questions and used it to develop exam questions and new homework assignments that help students practice cache memory concepts. To give students access to fair partial credit, we allowed multiple submissions for the exam questions with limited feedback. We also awarded partial credit for answers within a certain tolerance of the correct answer, with the credit awarded decreasing as the deviation from the correct answer increased. Consequently, students could recover from minor mistakes or propagating errors, which are common reasons for awarding partial credit. To evaluate the effect of the switch from paper-based to computerized exams, we ported questions from one of our paper-based exams to a computerized exam. We compared student performance on the paper-based and computerized versions of the questions and found mixed results, with students performing comparably or better on the computer-based exam than on the paper-based one.
We also surveyed students about their experience with the computer-based exam. Students overwhelmingly indicated a preference for the computer-based exam. We believe that ideas from our work can be used to automate generation, administration, and grading of complex multi-part questions in engineering disciplines beyond computer architecture.
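The tolerance-based partial-credit policy described above can be sketched as a small grading function. The linear decay schedule and the 10% tolerance are illustrative assumptions; the paper does not specify the exact formula.

```python
def partial_credit(answer, correct, tolerance=0.10, floor=0.0):
    """Award credit that decays linearly as the answer deviates from the
    correct value; no credit outside the tolerance band.
    Illustrative policy, not the paper's exact schedule."""
    if correct == 0:
        return 1.0 if answer == 0 else floor
    deviation = abs(answer - correct) / abs(correct)
    if deviation >= tolerance:
        return floor
    return 1.0 - deviation / tolerance

print(partial_credit(100, 100))  # 1.0  exact answer, full credit
print(partial_credit(95, 100))   # 0.5  5% off with a 10% tolerance
print(partial_credit(80, 100))   # 0.0  outside the tolerance band
```

A grader like this lets a student who made a small arithmetic slip, or who carried a propagating error forward, still earn most of the credit.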
A Modular Assessment for Cache Memories Mahmood, Suleman; Herman, Geoffrey L.
Proceedings of the 52nd ACM Technical Symposium on Computer Science Education,
03/2021
Conference Proceeding
We construct and evaluate a modular assessment of students' knowledge about CPU cache memories. Caches play a key role in the performance of modern computing, yet they are difficult for students to learn, and we have little conceptual or empirical evidence about why. Building on prior frameworks, we propose six underlying knowledge components that we believe students need in order to robustly evaluate how a cache can affect the performance of code on a processor. We constructed a modular assessment from these components that can be used as a diagnostic instrument to find the concepts students are struggling to understand. Because different institutions teach caches at varying depths of detail, individual modules of the assessment can be used by instructors and researchers as appropriate for their context. We evaluated the assessment using a combination of Classical Test Theory, Exploratory Factor Analysis, and Confirmatory Factor Analysis. Our results suggest that the assessment is reliable and can be used modularly to assess various components of students' knowledge about caches, though future work is needed to evaluate the validity of these modules at different institutions. This assessment can help instructors and researchers design more precisely targeted instructional interventions to help students learn caches. The creation of similar modular assessments may help improve instruction in other difficult topics in computing.
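Classical Test Theory reliability is commonly summarized with Cronbach's alpha. As an illustration only (the abstract does not name the exact statistic used), here is a self-contained computation of alpha from a respondents-by-items score matrix.

```python
def cronbach_alpha(scores):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum(item variances)/total variance).
    scores: one row per respondent, one column per assessment item."""
    k = len(scores[0])  # number of items

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Toy data: two items that move in lockstep give perfect internal consistency.
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # 1.0
```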
Mutation testing is widely used in research for evaluating the effectiveness of test suites. Multiple mutation tools perform mutation at different levels, including traditional mutation testing at the level of source code (SRC) and more recent mutation testing at the level of compiler intermediate representation (IR). This paper presents an extensive comparison of mutation testing at the SRC and IR levels, specifically at the level of the C programming language and of the LLVM compiler IR. We use a mutation testing tool called SRCIROR that implements conceptually the same mutation operators at both levels. We also employ automated techniques to account for equivalent and duplicated mutants, and to determine minimal and surface mutants. We carry out our study on 15 programs from the Coreutils library. Overall, we find mutation testing to be better at the SRC level: it produces far fewer mutants and is thus less expensive, yet it generates a similar number of minimal and surface mutants, and the mutation scores at the two levels are closely correlated. We also perform a case study on the Space program to evaluate which level's mutation score correlates better with the actual fault-detection capability of test suites sampled from Space's test pool. We find that the mutation score at neither level correlates strongly with the actual fault-detection capability of the test suites.
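The mutation score being compared above can be illustrated with a toy SRC-level example: mutate the source of a function, re-run the tests, and count the killed mutants. SRCIROR operates on real C code and LLVM IR; this Python stand-in only shows the scoring logic.

```python
# Toy SRC-level mutation testing: apply operator mutations to a function's
# source, run the test suite against each mutant, and compute the score.

ORIGINAL = "def add(a, b): return a + b"
MUTANTS = [
    "def add(a, b): return a - b",  # arithmetic operator replacement
    "def add(a, b): return a * b",  # arithmetic operator replacement
    "def add(a, b): return a",      # operand deletion-style mutant
]

def passes_tests(source, tests):
    ns = {}
    exec(source, ns)  # compile the (possibly mutated) function
    return all(t(ns["add"]) for t in tests)

tests = [lambda add: add(2, 3) == 5,
         lambda add: add(0, 4) == 4]

killed = sum(1 for m in MUTANTS if not passes_tests(m, tests))
print(f"mutation score: {killed}/{len(MUTANTS)}")  # 3/3: all mutants killed
```

Equivalent and duplicated mutants, which the paper detects automatically, would be excluded from the denominator before computing the score.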
Test-suite reduction (TSR) speeds up regression testing by removing redundant tests from the test suite, thus running fewer tests in future builds. To decide whether to use TSR, a developer needs some way to predict how well the reduced test suite will detect real faults in the future compared to the original test suite. Prior research evaluated the cost of TSR using only program versions with seeded faults, but such evaluations do not explicitly predict the effectiveness of the reduced test suite in future builds.
We perform the first extensive study of TSR using real test failures from (failed) builds triggered by real code changes. We analyze 1478 failed builds from 32 GitHub projects that run their tests on Travis. Because each failed build can contain multiple faults, we propose a family of mappings from test failures to faults. We use these mappings to compute Failed-Build Detection Loss (FBDL): the percentage of failed builds in which the reduced test suite fails to detect all the faults detected by the original test suite. We find that FBDL can be as high as 52.2%, which is higher than traditional TSR metrics suggest. Moreover, traditional TSR metrics are not good predictors of FBDL, making it difficult for developers to decide whether to use reduced test suites.
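The FBDL metric defined above is straightforward to compute once each failed build is mapped to the fault sets detected by the original and reduced suites. A minimal sketch with hypothetical builds:

```python
def fbdl(builds):
    """Failed-Build Detection Loss: percentage of failed builds in which the
    reduced suite misses at least one fault that the original suite detects.
    builds: list of (faults_detected_by_original, faults_detected_by_reduced).
    """
    missed = sum(1 for orig, red in builds if not set(orig) <= set(red))
    return 100.0 * missed / len(builds)

builds = [
    ({"f1"}, {"f1"}),        # reduced suite still detects the fault
    ({"f1", "f2"}, {"f1"}),  # f2 missed -> detection loss
    ({"f3"}, set()),         # f3 missed -> detection loss
]
print(fbdl(builds))  # 66.66... (2 of 3 failed builds suffer detection loss)
```

In the paper, the mapping from test failures to faults is what produces these per-build fault sets; different mappings in the family yield different FBDL values.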
Distributed applications, in particular web applications, often depend on a centralized database. The results of database operations depend on the state of the database at the time of execution and often also on the order in which concurrent clients perform their operations. Verifying such applications requires modeling all of these possible orders so that the user can determine which orderings are incorrect and prevent them with transactions or business logic. However, straightforward exploration leads to state-space explosion. Partial order reduction prunes orderings that are equivalent to orderings already explored. We present Effective Partial Order Reduction (EPOR), a novel technique for model checking Java applications that share database state. EPOR improves upon prior work by performing a more precise analysis and by supporting many more operations. The key idea behind EPOR is that monitoring the effect of database operations inside the database implementation gives a more precise view of operation dependencies than can be achieved from an external view. Like prior work, EPOR relies on the Java PathFinder model checker to model check the Java application; unlike prior work, it adds instrumentation inside the database that enables our precise analysis and allows supporting more constructs. Our results improve upon prior work by achieving a significant reduction in the number of states explored, thus enabling more effective model checking of database applications with concurrent operations.
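Partial order reduction hinges on an independence check between operations: two operations whose read/write sets do not conflict produce the same state in either order, so only one order needs to be explored. The sketch below uses the standard external read/write-set view; EPOR's in-database monitoring computes these dependencies more precisely than this coarse approximation.

```python
def independent(op1, op2):
    """Two database operations are independent (their order cannot matter)
    if neither writes a row that the other reads or writes.
    Each op is a pair (reads, writes) of sets of (table, row_id) keys."""
    r1, w1 = op1
    r2, w2 = op2
    return not (w1 & (r2 | w2) or w2 & r1)

select_a = ({("users", 1)}, set())   # SELECT on users row 1
update_a = (set(), {("users", 1)})   # UPDATE on users row 1
update_b = (set(), {("orders", 7)})  # UPDATE on an unrelated table

print(independent(select_a, update_b))  # True: orders are interchangeable
print(independent(select_a, update_a))  # False: read/write conflict
```

A model checker only needs to explore both interleavings of the second pair; the first pair's two orders collapse into one explored state.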
This study investigated the association between emotional intelligence and academic success among undergraduates of Kohat University of Science & Technology (KUST), Pakistan. A sample of 186 students enrolled between the Fall 2015 and Spring 2018 semesters was selected through random sampling. Cross-sectional, descriptive, and correlational research methods were employed. A standardized tool, the Emotional Intelligence Scale, was used to collect information from the undergraduates, and students' Cumulative Grade Point Average (CGPA) served as the measure of academic success. Data were collected through personal visits. Statistical tools, i.e., simple percentages, means, standard deviations, ANOVA, Pearson's product-moment correlation, and multiple linear regression, were employed to reach the desired research outcomes. The findings revealed a strong positive relationship (r = 0.880) between emotional intelligence and academic success among undergraduate students. The multiple linear regression analysis showed that self-development (Beta = 0.296), emotional stability (Beta = 0.197), managing relations (Beta = 0.170), altruistic behaviour (Beta = 0.145), and commitment (Beta = 0.117) positively predict the academic success of undergraduates. The findings suggest that further developing the emotional intelligence of undergraduate students may enhance their academic performance.
Emotional intelligence is indispensable in leadership positions, as leaders expect everyone to fulfill their responsibilities and obligations effectively, while job satisfaction is directly associated with the productivity and efficiency of an organization as well as with individual success. Therefore, this cross-sectional study examined the relationship between emotional intelligence and job satisfaction among secondary school heads in Khyber Pakhtunkhwa. A total of 402 out of 884 secondary school heads were sampled using a multistage sampling technique. The study was correlational, descriptive, and quantitative in nature, and a survey research design was used to collect information from the participants. Statistical tools, i.e., means, standard deviations, Pearson's product-moment correlation, multiple linear regression, and analysis of variance, were applied. The findings showed a moderate positive correlation between emotional intelligence and job satisfaction. Additionally, there was a moderate positive correlation between each subdimension of emotional intelligence and job satisfaction, except for emotional stability, where the correlation was also positive but weak in effect size. Furthermore, five dimensions of emotional intelligence, namely managing relations, emotional stability, self-development, integrity, and altruistic behavior, were found to be significant predictors of job satisfaction. It is therefore imperative to concentrate on practices that promote emotional intelligence among secondary school heads.
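Both studies above rest on Pearson's product-moment correlation, which can be computed directly from paired scores. The data below are hypothetical and serve only to show the calculation; they are not the studies' data.

```python
def pearson_r(xs, ys):
    """Pearson's product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical emotional-intelligence scores vs CGPA for five students
ei   = [60, 72, 75, 81, 90]
cgpa = [2.4, 2.9, 3.0, 3.3, 3.7]
print(round(pearson_r(ei, cgpa), 3))  # close to 1: strong positive correlation
```

An r near 0.88, as reported in the first study, would indicate a similarly strong (though not near-perfect) positive linear relationship.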
Bag filters are commonly used to remove fine particles in off-gas purification. In a bag filter, dust-laden gas permeates through a permeable filter medium, starting at a lower pressure drop limit and depositing dust (the filter cake) on the filter medium. Filter cake formation is influenced by many factors, including filtration velocity, dust concentration, pressure drop limits, and filter media resistance. The effect of these parameters is investigated experimentally in a pilot-scale pulse-jet bag filter test facility, where limestone dust is separated from air at ambient conditions. The results reveal that filtration velocity significantly affects the filter pressure drop as well as the cake properties: cake density and specific cake resistance. Cake density is slightly affected by dust concentration. The specific resistance of the filter cake increases with velocity, is slightly affected by dust concentration, changes inversely with the upper pressure drop limit, and decreases over prolonged use (aging). The specific resistance of the filter media is independent of the upper pressure drop limit and increases linearly over prolonged use.
Graphical abstract: specific resistance increases with increasing filtration velocity at constant dust concentration.
Specific resistance of the filter media is independent of the upper pressure drop limit, while that of the cake decreases with it.
Specific resistance of the filter media increases linearly with aging, while that of the filter cake decreases.
► Specific cake resistance depends on velocity, pressure drop limit, and aging.
► Pressure drop limit influences cake formation at higher levels only.
► Dust settling decreases at higher velocity and vice versa.
► Filter media resistance is independent of the upper pressure drop limit.
► Filter media resistance increases linearly with aging.
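The pressure-drop behavior studied above is often described by the standard two-term filtration model ΔP = K1·v + K2·W·v, where K1 reflects filter-media resistance, K2 is the specific cake resistance, and W = c·v·t is the areal dust load. The sketch below uses this textbook form with illustrative (not measured) coefficient values; the paper determines such coefficients experimentally.

```python
def pressure_drop(v, t, c, K1, K2):
    """Two-term bag-filter pressure drop model (textbook form):
        dP = K1*v + K2*W*v,  with areal dust load W = c*v*t
    v  : filtration velocity [m/s]
    t  : time since the last cleaning pulse [s]
    c  : dust concentration [kg/m^3]
    K1 : filter-media resistance coefficient [Pa*s/m]
    K2 : specific cake resistance [1/s]
    """
    W = c * v * t  # dust mass deposited per unit filter area [kg/m^2]
    return K1 * v + K2 * W * v

# Illustrative values only: media term 1000 Pa plus cake term 1200 Pa.
print(pressure_drop(v=0.02, t=600, c=0.005, K1=5e4, K2=1e6))  # 2200.0 Pa
```

Because W itself grows with v and t, raising the filtration velocity increases the cake term quadratically, which is consistent with the strong velocity effect the study reports.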