
hps's Issues

Compare ABCD analyses when requiring different clusters to have a track

Notes

  • positron - electron0 (no absolute value; the readout window is asymmetric) for the timing cut
  • Before the signal peak in time is "safer" than after
    • the positron time is more trustworthy since positrons are rarer
    • BUT a positron is easier for a photon to fake
  • Require cluster X to have a track, for X in the trident clusters
    • Do ABCD with the time of cluster Y relative to cluster X and the E sum, for Y in the trident clusters without X (a minimal sketch of the estimate follows this list)
  • We really don't expect accidental positrons, so expect to require the positron to have a track
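
Since the ABCD estimate comes up repeatedly in these notes, here is a minimal sketch of its mechanics (the helper name and the region counts are made up purely for illustration; the two cut variables follow the note above):

def abcd_estimate(n_b, n_c, n_d):
    # If the two cut variables (Y-cluster time relative to the X cluster,
    # and E sum) are uncorrelated for background, then N_A / N_B = N_C / N_D,
    # so the expected background in the signal region A is:
    return n_b * n_c / n_d

# Made-up counts in the three control regions:
print(abcd_estimate(n_b=120, n_c=45, n_d=300))  # -> 18.0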

More Plots

  • Track kinematics when we have a matched track
  • Cluster positions for unmatched clusters

Tag with the Two Trigger Clusters

  • Find the trigger clusters
  • Match trigger clusters to reco clusters based on seed crystal position
  • Require these two reco clusters to have tracks
  • Does the third trident cluster have a track?

HPS 2016 p vs tanLambda slope resolution

Let's try to remove the p vs tanLambda slope by (slightly) adjusting the spacing between modules. PF has outlined a procedure for this based on his work with the 2019 detector.

  • Use ST tracks to ensure that the constraints are functioning
  • Apply the beam-spot and momentum constraints
  • Only run over events with a single ECal cluster with energy > 2 GeV, to focus on the FEE peak
  • Free tu, rw, and tw of both the axial and stereo sensors, but with additional constraints to keep the tw of the hole/slot sensors from moving separately

Constraints

https://www.desy.de/~kleinwrt/MP2/doc/html/draftman_page.html#sssec_consinf

Expect each module (pair of sensors in a layer) to have a constraint like

Constraint 0.
113XX 1.
113YY -1.

where XX is the ID number of the hole-side sensor and YY is the ID number of the slot-side sensor.
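
Since each module needs an identical block, it is probably easiest to generate them. A minimal sketch, assuming the 113XX/113YY ID scheme above (the hole/slot ID pairs and the output file name are placeholders):

# Generate one pede constraint block per module.
module_id_pairs = [("01", "02"), ("03", "04")]  # placeholder hole/slot IDs

with open("constraints.txt", "w") as f:
    for hole, slot in module_id_pairs:
        f.write("Constraint 0.\n")
        f.write(f"113{hole} 1.\n")
        f.write(f"113{slot} -1.\n")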

iDM Phase Space

The default parameters that came with the MG model assume a much higher beam energy than what HPS uses. I am studying the 2016 data set with a 2.3 GeV electron beam hitting tungsten (at rest, with a mass set to 174 GeV). This has pushed me to lower the DM masses substantially (mZP = 1 GeV, mChi = 0.1 GeV, and dMChi = 0.001 GeV), but even in this low-mass configuration, MG is still struggling to find any available phase space. In fact, I am only getting zero-event, zero-cross-section runs. Is this just a difficult phase space to access, and I should be more patient? Am I changing the wrong parameters?

Near the end of running, I get this error.

Survey return zero cross section. 
   Typical reasons are the following:
   1) A massive s-channel particle has a width set to zero.
   2) The pdf are zero for at least one of the initial state particles
      or you are using maxjetflavor=4 for initial state b:s.
   3) The cuts are too strong.
   Please check/correct your param_card and/or your run_card.
Zero result detected: See https://cp3.irmp.ucl.ac.be/projects/madgraph/wiki/FAQ-General-14

Occasionally Corrupted mille data files

Noticed this while trying to read a large batch (200) of mille data files: two of them were corrupted and gave an "unexpected EoF" error.

  Record      2200000
 PEREAD: file          121 read the first time, found       19491  records
readC: problem with stream or EOF skipping doubles
 Read error for binary Cfile         122 record       16443 :         -32
STOP PEREAD: stopping due to read errors

Going to use this issue to keep notes on why this is happening.

stdhep file format

My deduction notes...

Indices start at 1 so that 0 can be reserved as a null value.

File

  • ilbl: (relic of "lumi block") seems to be a label of some kind, like an ID number to separate one file from another
  • the list of events

Event

  • nevhep: event number
  • nhep: number of particles in this event
  • the list of particles

Particle

  • isthep: "stage" of the particle in the event; in some sense, particles of the same "stage" exist at the same "time"
  • idhep: PDG ID as parsed from the LHE
  • jmohep: indices of the particles that are the mothers of this particle (two; they can be the same)
  • jdahep: indices of the particles that are the daughters of this particle (two; only the same if there are no daughters)
  • phep: length-five array [px, py, pz, E, m] in GeV
  • vhep: length-four array [x, y, z, t] in mm
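
For reference, here is my reading of the layout above written as plain Python containers (a sketch of my deductions, not an official binding; the class names are mine):

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Particle:
    isthep: int              # "stage" of the particle within the event
    idhep: int               # PDG ID
    jmohep: Tuple[int, int]  # mother indices (1-based, 0 = none)
    jdahep: Tuple[int, int]  # daughter indices (1-based, 0 = none)
    phep: Tuple[float, float, float, float, float]  # px, py, pz, E, m [GeV]
    vhep: Tuple[float, float, float, float]         # x, y, z, t [mm]

@dataclass
class Event:
    nevhep: int                # event number
    particles: List[Particle]  # nhep == len(particles)

@dataclass
class StdhepFile:
    ilbl: int            # file label / ID number
    events: List[Event]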

Run hps-mc

From Cam:
Here is my bashrc; you don't need to use the whole thing, but you can find the parts you need in there:

/sdf/group/hps/users/bravo/setup/bashrc.sh

you can make a ~/.hpsmc and put this in it:

[MG4]
madgraph_dir = /sdf/group/hps/users/bravo/src/hps-mc/generators/madgraph4/src

[MG5]
madgraph_dir = /sdf/group/hps/users/bravo/src/hps-mc/generators/madgraph5/src

[EGS5]
egs5_dir = /sdf/group/hps/users/bravo/src/hps-mc/generators/egs5

[StdHepConverter]
egs5_dir = /sdf/group/hps/users/bravo/src/hps-mc/generators/egs5

[SLIC]
slic_dir = /sdf/group/hps/users/bravo/src/slic/install
hps_fieldmaps_dir = /sdf/group/hps/users/bravo/src/hps-fieldmaps
detector_dir = /sdf/group/hps/users/bravo/src/hps-java/detector-data/detectors

[JobManager]
hps_java_bin_jar = /sdf/home/b/bravo/.m2/repository/org/hps/hps-distribution/5.1-SNAPSHOT/hps-distribution-5.1-SNAPSHOT-bin.jar
java_args = -Xmx3g -XX:+UseSerialGC

[FilterBunches]
hps_java_bin_jar = /sdf/home/b/bravo/.m2/repository/org/hps/hps-distribution/5.1-SNAPSHOT/hps-distribution-5.1-SNAPSHOT-bin.jar
conditions_url = jdbc:sqlite:/sdf/group/hps/users/bravo/db/hps_conditions_test.db

[ExtractEventsWithHitAtHodoEcal]
hps_java_bin_jar = /sdf/home/b/bravo/.m2/repository/org/hps/hps-distribution/5.1-SNAPSHOT/hps-distribution-5.1-SNAPSHOT-bin.jar
conditions_url = jdbc:sqlite:/sdf/group/hps/users/bravo/db/hps_conditions_test.db

[HPSTR]
hpstr_install_dir = /sdf/group/hps/users/bravo/src/hpstr/install
hpstr_base = /sdf/group/hps/users/bravo/src/hpstr

[LCIOCount]
lcio_bin_jar = /sdf/home/b/bravo/.m2/repository/org/lcsim/lcio/2.7.4-SNAPSHOT/lcio-2.7.4-SNAPSHOT-bin.jar

[LCIOMerge]
lcio_bin_jar = /sdf/home/b/bravo/.m2/repository/org/lcsim/lcio/2.7.4-SNAPSHOT/lcio-2.7.4-SNAPSHOT-bin.jar

[EvioToLcio]
hps_java_bin_jar = /sdf/home/b/bravo/.m2/repository/org/hps/hps-distribution/5.1-SNAPSHOT/hps-distribution-5.1-SNAPSHOT-bin.jar
java_args = -Xmx3g -XX:+UseSerialGC

though you won't be able to access my hps-java or lcio, so install those into your own home directory by downloading the hps-java code and running alias mvnclbd='mvn clean install -DskipTests=true -Dcheckstyle.skip'

I sent that as an alias because I use it so much

you also want to run the same command in hps-lcio

export JAVA_HOME=/sdf/group/hps/users/bravo/src/jdk-15.0.1
export PATH=/sdf/group/hps/users/bravo/src/jdk-15.0.1/bin:/sdf/group/hps/users/bravo/src/apache-maven-3.6.3/bin:$PATH

those two commands in my bashrc will get the java env set up for you
you can just use my slic install, which is what the .hpsmc will take care of for you
you also might want to get rid of the conditions_url lines in my .hpsmc
those are there because I was testing some calibration stuff I put into our conditions database

hps-mille automation

Desired Features

  • Histogram merger
  • Batch running: run tracking on a detector in parallel, then merge histograms and run MP once all jobs are done
  • Apply deduced MP solution to a new iteration of the detector
  • Build MP steering file and run MP

Calculating chi2 Width

Eq. (33) from Dark Sectors at the Fermilab SeaQuest Experiment is

$$ \Gamma(\chi_2\to\chi_1\ell^+\ell^-) \approx \frac{4\epsilon^2\alpha_{em}\alpha_D\Delta^5m_1^5}{15\pi m_{A'}^4} $$

I could use this but it explicitly states that this width is in the limit that $m_{A'} \gg m_1$, $\Delta \ll 1$, and $m_\ell \approx 0$. The first and last requirements seem appropriate, especially since they use $m_{A'}=3m_1$ in their own sensitivity plots, but the middle requirement is tough. The maximum $\Delta$ they have is $0.1$ but HPS needs $\Delta$ near its cosmological maximum of $\sim 0.6$ to have any sort of acceptance.

I think for now I can use this equation as an approximate value for $\Gamma$, but we may need to return to it if there is a large dependence on the decay width.
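
A quick numeric check of Eq. (33), as a sketch (the parameter values below are illustrative, not the ones I will actually use):

import math

def gamma_chi2(eps, alpha_D, delta, m1, mAp, alpha_em=1 / 137.035999):
    # Eq. (33), valid for mAp >> m1, delta << 1, m_ell ~ 0.
    # Masses in GeV, width returned in GeV.
    return (4 * eps**2 * alpha_em * alpha_D * delta**5 * m1**5
            / (15 * math.pi * mAp**4))

# Illustrative point with mAp = 3*m1 as in the paper's sensitivity plots:
m1 = 0.1
print(gamma_chi2(eps=1e-3, alpha_D=0.1, delta=0.1, m1=m1, mAp=3 * m1))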

iDM in MadGraph5

Got an initial copy of the MG5 model Stefania was using to generate iDM dark brem events. Going to use this issue to document my attempts to use it.

Initial Attempt

  1. Download MG5 v3.4.1 from https://launchpad.net/mg5amcnlo and unpack it.
  2. Symlink the MG5 model DarkPhotonIDM into the models subdirectory of MG5.
  3. Try to launch MG from its root with ./bin/mg5_aMC; this failed because the Python version on our desktops is too old. Going to do a local install of CPython v3.11.1 to try to fix this.

Now That KF-based Alignment is Functional...

Updates

  • Special run number: 1194550 for 2019 MC
  • Plot $\chi^2$ distribution of tracks that are being included in alignment

Next Steps

  • Single sensor, single translation
  • Single sensor, simple rotation around w
    • calculate the expected residual = lever arm * rotation
    • the lever arm comes from knowledge of the sensor: the location in the insensitive direction v
    • the rotation is the known misalignment introduced into the detector
    • a plot of ures vs v should have a linear trend with slope corresponding to the rotation (see the sketch after this list)
  • Plot track-parameter derivatives
  • Include n-hit-13 KF tracks in alignment
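
A sketch of the expected-residual check for the rotation-around-w case (the geometry and the 2 mrad rotation below are placeholders, not our detector's values):

import numpy as np

d_gamma = 2e-3                   # known rotation around w [rad]
v = np.linspace(-20.0, 20.0, 9)  # placeholder hit positions along v [mm]

# A small rotation about w moves a hit at lever arm v by v * d_gamma
# along u, so u residual vs v is a line with slope d_gamma.
u_res = v * d_gamma

slope = np.polyfit(v, u_res, 1)[0]
print(f"fitted slope = {slope:.2e} rad (expect {d_gamma:.2e})")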

iDM Model Restriction to Improve Speed

At the beginning of running, I get a warning from MG.

WARNING: The optimizer detects that you have coupling evaluated to zero:
GC_76
This will slow down the computation. Please consider using restricted model:
https://answers.launchpad.net/mg5amcnlo/+faq/2312

I have tried following the instructions from the linked site, but the restricted model (while it does run faster) does not produce outgoing kinematic distributions similar to the original model's. To make the comparison between restricted and unrestricted easier, I increased the energy of the electron beam so I could use the default mass parameters that came with the model.

Look Into Starting Alignment

SDF

/sdf/group/hps/users/bravo/run/ali21
  • KF steering file: alignmentDriver_2021.lcsim
  • GBL/ST momentum constraint: alignmentDriver_PC.lcsim
  • GBL/ST alignment plots: alignmentDriver_gblplots.lcsim

Email Nathan Balt*, Stepan, Matt G, and Cam about getting added to ifarm at JLab.

iDM in MadGraph

Necessities

  • enable the e- N > e- N zp process; the current model does p p > j zp
    • this involves figuring out how to define N and how N interacts with e- and zp
  • #14

Wants

  • Nuclear properties, DM masses, etc. are all run-time parameters (not compile-time)
    • I think these parameters (except the nuclear ones) are all run-time at the moment, but I need a way to check
  • Converter to a DataFrame/awkward array for easier analysis
  • Custom pre-compiled "gridpack" that takes DM masses, nuclear properties, and electron beam energy and produces events
    • current gridpacks only allow varying the beam energy, but I don't know if that is a CLI issue or a deeper issue
  • speed up the survey time by adding a restriction to the model so MG can eliminate parameters that we know don't matter
    • we are doing e-N interactions with written-in form factors, so including quarks of varying masses adds nothing and only slows down the computation
  • put the form-factor constants calculated from A and Z in as "internal" parameters so that they are calculated automatically
  • #15

Helpful Details

Plot Comments

  • Distributions of variables I'm cutting on (after all previous cuts)
  • Show ECal crystal occupancy (may need to apply a 30 MeV hit threshold to match readout threshold)
  • Remove min E sum cut

Other

  • Make sure counts have E sum and time cuts
  • Use trigger cluster #5 as reference time(s)
  • Fiducial: will need to open possibility of "dead/off" crystals as part of non-fiducial
  • Call Step 1 Preselection

KF-based Alignment Service Work

Copied over from #7

Service Task

  • to debug and fully test the Millepede alignment framework for Kalman filter tracking on 2016 data and Monte Carlo, where we have a well-understood GBL alignment and a fully working detector. PF has written code to enable Millepede alignment with KF tracks, but the code does not work, for reasons he has been unable to determine after working on it periodically over many months. The deliverables here are
    • benchmarking of KF-based alignment against the existing GBL alignment, demonstrating equivalent functionality and consistent results.
    • demonstrating that KF-based alignment works correctly when data is missing from some sensors, which is the key motivation for this tool.
  • to work with the experts on the 2019 and 2021 alignments to deploy this tool on the 2019 and 2021 datasets and assist in using it to improve those alignments. The deliverable here is not to produce these alignments per se, but to provide those performing them with support in using the KF-based Millepede alignment, including studies on data and MC that are required to understand its performance on these detectors.

Service To Dos

  • debug KF-based alignment
  • benchmark KF-based alignment against ST-based alignment, demonstrating equivalent functionality and consistent results
  • demonstrate that KF-based alignment works correctly when data is missing from some sensors
  • translation (50 µm) + rotation (2 mrad) of a single sensor
  • alignment without informing pede which parameters were actually changed
  • deploy KF-based alignment to the 2019 and 2021 datasets
  • add the current mille/align as an "extension" of hps-mc, so it can use hps-mc under the hood for running batch hps-java jobs
  • try the procedure on a 2016 data run just to see

mille data file format tester

We need a way to check whether a generated mille data file is corrupted; a 4/200 ~ 2% corruption rate is way too high to check manually each time. I'm thinking of a C executable using the same functions that pede uses, or a Python executable based on the readMilleBinary.py script shipped with MillepedeII.
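
As a first pass at the python option, here is a minimal sketch. It assumes the C-binary layout that readMilleBinary.py parses (per record: an int32 word count 2n, then n float32 values, then n int32 values); that assumption should be double-checked against readMilleBinary.py before trusting any verdicts:

import struct
import sys

def check_mille_file(path):
    # Count complete records, raising IOError on a truncated or bad one.
    n_records = 0
    with open(path, "rb") as f:
        while True:
            header = f.read(4)
            if not header:  # clean end of file
                return n_records
            if len(header) < 4:
                raise IOError(f"dangling bytes after record {n_records}")
            (nwords,) = struct.unpack("<i", header)
            if nwords <= 0 or nwords % 2 != 0:
                raise IOError(f"bad word count {nwords} in record {n_records}")
            n = nwords // 2
            payload = f.read(8 * n)  # n float32 then n int32
            if len(payload) != 8 * n:
                raise IOError(f"unexpected EoF in record {n_records}")
            n_records += 1

if __name__ == "__main__":
    for path in sys.argv[1:]:
        try:
            print(path, check_mille_file(path), "records")
        except IOError as err:
            print(path, "CORRUPTED:", err)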


Thanks for looking into this more. I think your solution of just rerunning these partitions is reasonable, but we need a nice way of finding the corrupted files and deleting them. hps-mc has a way to only submit jobs with missing output files, so after you remove the corrupted files this could be used to easily submit the jobs that are needed.

Originally posted by @cbravo135 in #12 (comment)
