In the High-Luminosity Large Hadron Collider (HL-LHC) era, one of the most challenging computational problems is expected to be finding and fitting charged-particle tracks during event reconstruction. The methods currently in use at the LHC are based on the Kalman filter. Such methods have been shown to be robust and to provide good physics performance, both in the trigger and offline. To improve computational performance, we explored Kalman-filter-based methods for track finding and fitting, adapted for many-core SIMD (single instruction, multiple data) and SIMT (single instruction, multiple thread) architectures. Our adapted Kalman-filter-based software has obtained significant parallel speedups on such processors, e.g., Intel Xeon Phi, Intel Xeon SP (Scalable Processors), and (to a limited degree) NVIDIA GPUs. Recently, an effort has started to integrate our software into the CMS software framework, in view of its exploitation in Run 3 of the LHC. Prior reports have shown that our software achieves significant improvements in computational performance over the existing framework, with comparable physics performance, even when applied to realistic detector configurations and event complexity. Here, we demonstrate that, in such conditions, physics performance can be further improved with respect to our prior reports, while retaining the gains in computational performance, by exploiting knowledge of the detector and its geometry.
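The per-layer operation that this software vectorizes is the Kalman filter's predict/update cycle. The following is a minimal one-dimensional sketch of that cycle, purely illustrative: all function names, model parameters, and measurement values are hypothetical and are not taken from the CMS tracking code, which operates on multi-dimensional track states.

```python
# Minimal 1D Kalman filter predict/update step: an illustrative sketch of the
# core operation applied once per detector layer during track fitting.
# All names and numerical values below are hypothetical.

def kalman_step(x, P, z, F=1.0, Q=0.01, H=1.0, R=0.1):
    """One predict + update cycle for a scalar state.
    x: state estimate, P: state variance, z: new measurement,
    F: transport model, Q: process noise, H: measurement model, R: measurement noise."""
    # Predict: propagate the state and its uncertainty to the next layer.
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update: blend prediction and measurement via the Kalman gain.
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# Fit a noisy sequence of "hits" along a hypothetical track near x = 1.
x, P = 0.0, 1.0
for z in [0.9, 1.1, 1.0, 0.95, 1.05]:
    x, P = kalman_step(x, P, z)
```

Because each step involves only small, fixed-size arithmetic, many tracks can be processed in lockstep, which is what makes the algorithm amenable to SIMD and SIMT parallelization.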
We describe the dynamic discourse interactions between a teacher and her students in a third-grade science classroom. We focused on how the teacher and students initiate, prompt, respond, and provide feedback; how they use questioning and power strategies; and how questions are associated with power dynamics. We relate the consequences of the teacher's use of power to students' engagement with subject matter. Two classroom sessions were observed, and teacher-student interactions were audio recorded. Data were transcribed, and a method was developed for analyzing teacher-student interactions, power dynamics, and the types of questions asked. Results revealed that teacher talk was twice as frequent as student talk; questions were primarily closed-ended and task-oriented; and students asked few questions. The teacher exercised power by keeping activities organized and conventional and by utilizing subject matter. The developed methods revealed the complexity of question and power dynamics in classroom discourse and have implications for professional development and research.
Giving Students the Power to Engage with Learning
Cochran, Kathryn F.; Reinsvold, Lori A.; Hess, Chelsie A.
Research in Science Education (Australasian Science Education Research Association), 12/2017, Volume 47, Issue 6
Journal Article, Peer reviewed
This critical discourse analysis study identifies and describes power relationships in elementary classrooms that support science engagement by providing students time to think, ask questions, and find their voices to talk about subject matter. The first analyses involved identification and description of classroom episodes showing high levels of student power and engagement associated with learning science. Classroom episodes were grouped into seven power patterns: use of questions, teacher sharing authority, giving students credit for knowledge, legitimate digressions, enhanced feedback, and writing opportunities. The second analyses documented the manner in which these patterns formed more complex classroom engagement processes called power clusters. These examples further our understanding of the dynamics of classroom discourse and the relationships between student power and engagement in subject matter.
In high energy physics (HEP), analysis metadata comes in many forms—from theoretical cross-sections, to calibration corrections, to details about file processing. Correctly applying metadata is a crucial and often time-consuming step in an analysis, but designing analysis metadata systems has historically received little direct attention. Among other considerations, an ideal metadata tool should be easy to use by new analysers, should scale to large data volumes and diverse processing paradigms, and should enable future analysis reinterpretation. This document, which is the product of community discussions organised by the HEP Software Foundation, categorises types of metadata by scope and format and gives examples of current metadata solutions. Important design considerations for metadata systems, including sociological factors, analysis preservation efforts, and technical factors, are discussed. A list of best practices and technical requirements for future analysis metadata systems is presented. These best practices could guide the development of a future cross-experimental effort for analysis metadata tools.
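The categorisation of metadata by scope can be illustrated with a small sketch: some values attach to a whole dataset, while others apply only within a run interval. This is a hypothetical toy, not any experiment's actual metadata system; all names and numbers are illustrative.

```python
# A toy illustration of metadata scoping: per-dataset values versus
# per-run-interval values. All names and numbers are hypothetical.

# Dataset-scoped metadata: one cross-section per simulated sample (in pb).
CROSS_SECTIONS_PB = {"ttbar": 830.0, "drell_yan": 6025.0}

# Run-interval-scoped metadata: (first_run, last_run) -> energy-scale correction.
CALIBRATIONS = [
    ((1, 999), 1.002),
    ((1000, 1999), 0.998),
]

def calibration_for_run(run):
    """Return the correction whose run interval contains `run`."""
    for (lo, hi), corr in CALIBRATIONS:
        if lo <= run <= hi:
            return corr
    raise KeyError(f"no calibration covers run {run}")
```

Even this toy shows why scope matters for reinterpretation: a dataset-level value can be looked up once per sample, whereas interval-scoped values must be resolved per event, and a preserved analysis must record both.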
In this paper we document the current analysis software training and onboarding activities in several High Energy Physics (HEP) experiments: ATLAS, CMS, LHCb, Belle II and DUNE. Fast and efficient onboarding of new collaboration members is increasingly important for HEP experiments as analyses and the related software become ever more complex with growing datasets. A meeting series was held by the HEP Software Foundation (HSF) in 2022 for experiments to showcase their initiatives. Here we document and analyse these in an attempt to determine a set of key considerations for future experiments.
Abstract
Three searches are presented for signatures of physics beyond the standard model (SM) in ττ final states in proton-proton collisions at the LHC, using a data sample collected with the CMS detector at √s = 13 TeV, corresponding to an integrated luminosity of 138 fb⁻¹. Upper limits at 95% confidence level (CL) are set on the products of the branching fraction for the decay into τ leptons and the cross sections for the production of a new boson ϕ, in addition to the H(125) boson, via gluon fusion (ggϕ) or in association with b quarks, ranging from O(10 pb) for a mass of 60 GeV to 0.3 fb for a mass of 3.5 TeV each. The data reveal two excesses for ggϕ production with local p-values equivalent to about three standard deviations at m_ϕ = 0.1 and 1.2 TeV. In a search for t-channel exchange of a vector leptoquark U₁, 95% CL upper limits are set on the dimensionless U₁ leptoquark coupling to quarks and τ leptons, ranging from 1 for a mass of 1 TeV to 6 for a mass of 5 TeV, depending on the scenario. In the interpretations of the M_h^125 and M_h,EFT^125 minimal supersymmetric SM benchmark scenarios, additional Higgs bosons with masses below 350 GeV are excluded at 95% CL.
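The 95% CL upper limits quoted in these abstracts come from likelihood-based statistical procedures far more elaborate than can be shown here. As a hedged sketch of the underlying idea, the following toy computes a frequentist upper limit for a simple Poisson counting experiment; function names and the scan step are illustrative choices, not the methods used in the actual analyses.

```python
import math

# Toy counting-experiment upper limit: find the smallest signal strength s
# such that observing n_obs or fewer events is sufficiently unlikely given
# signal plus background. Illustrative only; real analyses use full likelihoods.

def poisson_cdf(n, mean):
    """P(k <= n) for a Poisson distribution with the given mean."""
    return math.exp(-mean) * sum(mean**k / math.factorial(k) for k in range(n + 1))

def upper_limit(n_obs, background, cl=0.95, step=0.001):
    """Smallest signal s with P(k <= n_obs | s + background) <= 1 - cl."""
    s = 0.0
    while poisson_cdf(n_obs, s + background) > 1.0 - cl:
        s += step
    return s

# With zero observed events and zero background, the classic 95% CL upper
# limit on the expected signal yield is about 3 events.
limit = upper_limit(n_obs=0, background=0.0)
```

Dividing such a yield limit by luminosity and efficiency is what turns an event count into the cross-section limits (e.g., "0.3 fb") reported above.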
Abstract
A search for heavy neutral leptons (HNLs), the right-handed Dirac or Majorana neutrinos, is performed in final states with three charged leptons (electrons or muons) using proton-proton collision data collected by the CMS experiment at √s = 13 TeV at the CERN LHC. The data correspond to an integrated luminosity of 138 fb⁻¹. The HNLs could be produced through mixing with standard model neutrinos ν. For small values of the HNL mass (< 20 GeV) and of the square of the HNL-ν mixing parameter (10⁻⁷–10⁻²), the decay length of these particles can be large enough that the secondary vertex of the HNL decay can be resolved with the CMS silicon tracker. The selected final state consists of one lepton emerging from the primary proton-proton collision vertex and two leptons forming a displaced, secondary vertex. No significant deviations from the standard model expectations are observed, and constraints are obtained on the HNL mass and coupling strength parameters, excluding previously unexplored regions of parameter space in the mass range 1–20 GeV, with squared mixing parameter values as low as 10⁻⁷.
Abstract
A direct search for electroweak production of charginos and neutralinos is presented. Events with three or four leptons, with up to two hadronically decaying τ leptons, or with two same-sign light leptons are analyzed. The data sample consists of 137 fb⁻¹ of proton-proton collisions at a center-of-mass energy of 13 TeV, recorded with the CMS detector at the LHC. The results are interpreted in terms of several simplified models that represent a broad range of production and decay scenarios for charginos and neutralinos. A parametric neural network is used to target several of the models with large backgrounds. In addition, results using orthogonal search regions are provided for all the models, simplifying alternative theoretical interpretations of the results. Depending on the model hypotheses, charginos and neutralinos with masses up to values between 300 and 1450 GeV are excluded at 95% confidence level.