
DOI: 10.22661/AAPPSBL.2020.30.5.58
Why is LaMET an Effective Field Theory for Partonic Structure?
Xiangdong Ji^{ 1,2,3, }*
^{1 }Center for Nuclear Femtography, SURA, 1201 New York Ave. NW, Washington, DC 20005, USA
^{2 }Maryland Center for Fundamental Physics, Department of Physics,
University of Maryland, College Park, Maryland 20742, USA
^{3 }Tsung-Dao Lee Institute, Shanghai Jiao Tong University, Shanghai, 200240, China
* xji@umd.edu
Partons are effective degrees of freedom describing the structure of hadrons involved in high-energy collisions. Familiar theories of partons are QCD light-front quantization and soft-collinear effective theory, both of which are intrinsically Minkowskian and appear unsuitable for classical Monte Carlo simulations. A "new" form of the parton theory has been formulated in terms of the old-fashioned infinite-momentum external states. The partonic structure of hadrons is then related to the matrix elements of static (equal-time) correlators in the state |P^{z} = ∞⟩. This representation lays the foundation of large-momentum effective theory (LaMET), which approximates parton physics through a systematic M/P^{z} expansion of lattice QCD matrix elements at a finite but large momentum P^{z}, and removes the residual logarithmic P^{z} dependence by the standard effective-field-theory matching and running.
INTRODUCTION
In 2013, I wrote a paper titled "Parton Physics on a Euclidean Lattice" [1], describing a new method to directly calculate parton distribution functions (PDFs) and other parton observables using Euclidean quantum chromodynamics (QCD), which can be implemented in lattice field theory. This paper has generated much interest in the community of parton physics: many follow-up works have been published, and a few reviews have appeared describing the rapid progress in both theory and lattice QCD simulations [2, 3].
In 2014, I realized that the method in fact implies an effective field theory (EFT) approach to calculating the partonic structure of a hadron, allowing a controlled systematic approximation to almost any parton property. Therefore, I wrote another paper explaining the basic principle behind the previous one [4]. This paper did not get much attention, at least judging by the number of citations. A more fundamental reason perhaps is that many, including my friends, may consider this to be a dubious EFT; therefore they have kept silent about it, quoting my method only as the quasi-PDF approach. A few critics either don't care, or only raise questions through referee reports. This is understandable because EFT is nowadays synonymous with "systematic and fundamental," differing from uncontrolled "models." I understand perfectly well that the word should not be abused. Otherwise, we would have more EFTs than the number of theorists in the field. Actually, a long list of EFTs is given on a website by I. Stewart for his famous online course on EFT [5]; see also [6, 7]. A former smart postdoc of mine has done a lot of important work in this new field, but never mentioned LaMET in his papers.
Troubled by this, I finally asked him why he did not quote my LaMET paper. He honestly replied, "I never understood why it is an EFT. Where is the effective Lagrangian?"
This is indeed a good question! LaMET reverses the logic of a standard EFT and uses the full QCD Lagrangian as the effective one. It follows from a "new" insight that partons can be generated by an infinite-momentum external state. It seeks an approximate solution through a systematic expansion at finite but large momentum, similar to the way that discrete points are used to approximate continuous spacetime in lattice QCD. Why I still think of it as an effective theory has been explained in a recent review paper I co-authored [3]. However, the review is long, and many people don't have the patience to read it through, except some poor graduate students whose advisors assign it as reading material. On the other hand, perhaps only a student with good training in quantum field theory (QFT) can finally work through the logic in the paper.
In my view, LaMET provides, for the first time, a practical and systematic theoretical framework to calculate parton physics through Euclidean lattice QCD simulations, a goal that the late Ken Wilson and others tried to achieve through directly solving Minkowskian light-front QCD [8, 9], a well-known theory of partons. In fact, recent works on transverse-momentum-dependent (TMD) parton distributions demonstrate this point quite clearly [10-13]. The upcoming Electron-Ion Collider [14] will allow measuring a very broad range of observables with unprecedented accuracy. LaMET may provide the unique tool to link most of these unambiguously to the partonic structure of the proton within QCD.
For all the reasons above, I wrote this article based on a seminar I gave. The article is aimed at beginning graduate students, perhaps after two semesters of a QFT course.
WHAT IS AN EFFECTIVE THEORY?
When I took my first physics course in middle school, I remember one comment from a teacher very clearly: physicists always consider idealized concepts so that a problem can be simplified to the point that it has a simple solution. Hence we have "point" particles and "frictionless" surfaces in Newton's mechanics, "ideal" gases in thermodynamics, "ideal" fluids in fluid dynamics, etc. These simplifications make the main physics points clear and fun, and represent a methodology as well as an art which has contributed significantly to the great success of physics.
As an undergraduate, after learning calculus, I suddenly realized that this is called a Taylor expansion. Imagine some physical quantity f(x, ϵ, δ, ...) depending on the variable x and many other parameters ϵ, δ, ... (e.g., the radius of the Earth when studying its rotation around the Sun); one can simplify the problem by expanding around the ideal limit ϵ = δ = ... = 0,

f(x, ϵ, δ, …) = f(x, 0, 0, …) + ϵ ∂f/∂ϵ|_{ϵ=δ=…=0} + δ ∂f/∂δ|_{ϵ=δ=…=0} + …  (1)

The first term is what we learn in middle and high schools, and much frontier research is about understanding highorder terms in the series. This way of doing physics may be called an effective approach. So an effective theory is in a certain sense about a Taylor expansion, which may lead to Nobel prizes in precision measurements, e.g., the magnetic moment of the electron, the pulseperiod variation in a neutron star binary, etc.
There are many such examples in college physics. The most famous one is probably the multipole expansion in electrostatics. If a charge system has a size R, the electric potential of such a system at large r can be worked out as an expansion in R/r, with the first term coming from the total electric charge Q, followed by the dipole and quadrupole potentials, etc.
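In formulas, the multipole expansion can be written as (a standard result; Q, p, and Q_{ij} denote the total charge, dipole moment, and quadrupole tensor of the charge distribution ρ):

```latex
\Phi(\mathbf{r}) \;=\; \frac{1}{4\pi\varepsilon_0}
\left[\, \frac{Q}{r}
\;+\; \frac{\mathbf{p}\cdot\hat{\mathbf{r}}}{r^{2}}
\;+\; \frac{1}{2}\sum_{i,j} Q_{ij}\,\frac{\hat{r}_i \hat{r}_j}{r^{3}}
\;+\; \mathcal{O}\!\left(\frac{R^{3}}{r^{4}}\right) \right],
\qquad
Q_{ij} \;=\; \int d^3 r'\, \rho(\mathbf{r}')\left(3 r'_i r'_j - r'^{2}\delta_{ij}\right).
```

Each successive term is suppressed by another power of R/r, exactly in the spirit of Eq. (1).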
Many practical quantum mechanics problems cannot be solved without effective theory concepts. Usually, the Hilbert spaces are infinite-dimensional, so the eigenvalue problems are almost impossible to solve exactly except for, e.g., the well-known harmonic oscillator or hydrogen atom. One can usually write a Hamiltonian as

H = H_{0} + H',  (2)

where H_{0} is diagonal, and H' is not. Without loss of generality, let us assume all matrix elements of H' have the same size, |H'|. If H_{0} contains a cluster of states that have similar eigenvalues and span a subspace P of dimension d_{P}, then the eigenstates of H with the largest overlaps with P can be obtained through an effective Hamiltonian,

H_{eff} = PHP + PH'Q (E − QHQ)^{−1} QH'P,  (3)

which has a finite dimension d_{P}, and can often be diagonalized on a computer. H_{eff} has a Taylor expansion in terms of the energy ratio ϵ = |H'| / ΔE (ΔE is the typical energy difference between the states in P and the rest) which, when P is properly chosen, could be a small parameter (if not, you are dealing with a strongly coupled system and your luck runs out!). P is sometimes called the model space, and the complementary infinite-dimensional space Q = 1 − P has been summed or "integrated out" in Eq. (3), as explicitly seen in the second term. An excellent example of the above is degenerate perturbation theory. When d_{P} = 1, one simply recovers the non-degenerate perturbation theory, which is ubiquitous in quantum problems.
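As a sketch of the d_P = 1 case, the toy calculation below (my own illustrative example; the numbers are not from the text) compares the exact ground-state energy of a two-level Hamiltonian with the second-order effective-Hamiltonian estimate −|H'|²/ΔE; the two agree up to corrections of relative order ϵ² = (|H'|/ΔE)².

```python
import math

def exact_ground(delta, g):
    """Exact ground-state energy of the 2x2 Hamiltonian [[0, g], [g, delta]]."""
    return 0.5 * (delta - math.sqrt(delta**2 + 4 * g**2))

def eff_ground(delta, g):
    """Second-order effective Hamiltonian in the 1-dimensional P space:
    H_eff = 0 + g * (E - delta)^(-1) * g, evaluated at E ~ 0."""
    return -g**2 / delta

delta, g = 10.0, 1.0            # Delta E and |H'|; epsilon = g/delta = 0.1
e_exact = exact_ground(delta, g)
e_eff = eff_ground(delta, g)
print(e_exact, e_eff)           # the mismatch is of order g^4/delta^3
```

With ΔE = 10 and |H'| = 1, the exact shift is about −0.09902 against the effective estimate −0.1, a difference of order g⁴/ΔE³ = 10⁻³, as power counting predicts.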
One important point is that the effective wave functions obtained by diagonalizing H_{eff} cannot be directly used to calculate matrix elements of a physical observable O. One has to "integrate out" contributions in O to get an effective operator O_{eff}, which is used together with the effective wave functions. This yields concepts such as "effective" charge and mass. Thus an effective theory in quantum mechanics requires an effective Hamiltonian and matched effective observables. The physics in the model P-space is presumably simpler to understand. For example, all kinds of quasiparticles in condensed matter physics, including anyons and Majorana zero modes, are effective objects arising from ordinary Coulomb interactions. The effective theory strategy has also been widely used in solving nuclear quantum many-body problems [15].
The success of effective theory concepts is closely related to the separation of energy scales in nature. From elementary particles, to nuclear physics, to atomic and molecular physics, and to chemistry and biology, it appears that a set of approximate rules can often be found in each domain without worrying about those at the next level. It surely is an important gift from nature to us scientists!
WHAT IS AN EFFECTIVE FIELD THEORY?
By name, an EFT is a field theory with some effective degrees of freedom (dof's), while the others of the full QFT have been integrated out. Some examples are:
• The standard model EFT integrates out all unknown physics above the electroweak scale, which might be a grand unification theory or string theory. ϵ = M_{ew }/Λ_{NP}, with M_{ew} as the electroweak scale and Λ_{NP} as the new physics scale.
• QCD perturbation theory (pQCD) keeps all high-momentum dof's active, parametrizing the physics of infrared dof's with nonperturbative matrix elements (e.g., parton distributions). ϵ = Λ_{QCD}/Q, where Λ_{QCD} is the nonperturbative QCD scale and Q is the hard scattering scale.
• Chiral perturbation theory keeps Goldstone bosons as low-energy dof's, and parametrizes the high-energy physics in terms of "low-energy" (which should really be called high-energy!) constants. ϵ = p/M, where p is a low momentum scale and M is a hadron mass scale.
• Lattice QCD is an EFT with high-momentum dof's, k > 𝜋/a (here a is the lattice spacing), integrated out, and ϵ = aΛ_{QCD}.
• Heavy quark effective theory (HQET) considers an expansion around the quark mass m_{Q} = ∞, and ϵ = Λ_{QCD }/m_{Q}.
Thus by nature, an EFT is not much different from the simple Taylor expansion learned as a junior undergraduate, except there is a catch: ultraviolet (UV) divergences!
A QFT contains infinitely many dof's, with low- and high-momentum particles or fields needed to maintain Lorentz symmetry. With local interactions, high-momentum dof's will produce infinite contributions to physical observables. Since very high-momentum physics cannot be described by the theory, it is integrated out and its effects have to be parameterized in terms of some "high-energy" constants. If the number of such constants is finite, the theory is called renormalizable. Thus the renormalization process of a QFT consists of constructing an EFT with only low-energy dof's! The same is in a sense true for a cutoff theory. Therefore some would say that all QFTs are EFTs!
UV divergences, however, sometimes make Taylor expansions not so straightforward. Let's consider a function f(x, ϵ, ..., Λ), which now contains a UV cutoff scale Λ. If one does a Taylor expansion around ϵ = 0, one finds an ambiguity: either one expands after finishing the full calculation, or one takes ϵ = 0 beforehand. There is a difference, because taking ϵ → 0 does not commute with Λ → ∞, and the function f(x, ϵ, ..., Λ) is non-analytic at the point ϵ = 0! HQET in QCD is such an example. Feynman integrals with m_{Q} finite and with m_{Q} = ∞ have completely different UV behaviors [16], which makes the quark-mass dependence of a physical quantity non-analytic!
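A toy integral (my own illustration, not from the text) makes the non-commutativity concrete:

```latex
f(\epsilon,\Lambda) \;=\; \int_0^{\Lambda}\frac{k\,dk}{k^{2}+\epsilon^{2}}
\;=\; \frac{1}{2}\,\ln\frac{\Lambda^{2}+\epsilon^{2}}{\epsilon^{2}}\, .
```

Expanding the integrand in ϵ before integrating produces ∫₀^Λ dk/k, which diverges at k = 0, while the full result contains ln ϵ² and is manifestly non-analytic at ϵ = 0: the limits ϵ → 0 and Λ → ∞ do not commute. The difference between the two orders of limits is a pure logarithm, which is exactly the kind of term that matching is designed to supply.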
Thus, an EFT often deals with a Taylor expansion around a singular point of the relevant parameters, which is a bit tricky. It requires the skill of a good graduate student, or some smart undergraduates. In fact, in other physical problems, physicists have already encountered examples where naive perturbative expansions fail, and more sophisticated expansion technologies are required, such as the Lindstedt-Poincaré, Krylov-Bogoliubov-Mitropolsky, and multiple-scales methods, and renormalization theory, etc. [17-19].
The standard EFT methodology is to take ϵ = 0 before doing any computation. An effective Lagrangian is constructed to evaluate f(x, ϵ = 0, ..., Λ), and this calculation is presumably simpler. However, this does not give the right answer f(x, ϵ → 0, ..., Λ). One needs to figure out what their difference is, and this is very important! This difference is quite often independent of the other parameters x. So if one does a calculation for some specific values of x and figures out the difference, the result can be used for all x. Once an effective theory calculation is done, one can get the right Taylor series by adding back the difference. This is called EFT matching! Matching is needed to get the effective Lagrangian as well as effective operators.
The UV behavior of an EFT at ϵ = 0 is very different from the full theory, and this difference can be exploited for useful purposes. It can help to sum the so-called large logarithms in the coupling-constant expansion of the full theory through renormalization-group running in the EFT. So many people working in EFTs are doing matching and running, matching and running, much like the life of an adult male among the Yi people in China. Again, all this is nothing but a sophisticated Taylor expansion. Armed with these clarifications, I can talk about partons.
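As a minimal sketch of why running matters (my own toy example; b stands in for a one-loop beta-function coefficient, and all numbers are illustrative), one can compare the RG-resummed one-loop coupling with its fixed-order expansion in the large logarithm L = ln(Q²/μ²):

```python
def a_resummed(a_mu, b, L):
    """One-loop RG-resummed coupling: a(Q) = a(mu) / (1 + b * a(mu) * L),
    with L = ln(Q^2 / mu^2)."""
    return a_mu / (1.0 + b * a_mu * L)

def a_fixed_order(a_mu, b, L, n_terms):
    """The same quantity expanded to n_terms in a(mu): a geometric series
    a * sum_k (-b*a*L)^k, which converges slowly when b*a*L ~ 1."""
    return a_mu * sum((-b * a_mu * L) ** k for k in range(n_terms))

a_mu, b, L = 0.1, 0.7, 10.0      # b*a*L = 0.7: a "large logarithm"
print(a_resummed(a_mu, b, L))    # RG running sums the series exactly
print(a_fixed_order(a_mu, b, L, 3))
```

With b·a·L = 0.7, the three-term fixed-order series misses the resummed value by roughly a third, while the RG solution sums the whole geometric tower of logarithms at once; this is the payoff of "running" after "matching."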
LIGHT-FRONT EFFECTIVE THEORY OF PARTONS AND WHY IT IS HARD TO SOLVE
Partons, introduced by Feynman, are a fundamental concept in high-energy physics, and are now a standard topic of textbooks in QFT and high-energy physics [20, 21]. Partons are dof's in a hadron moving at infinite momentum P^{z} = ∞, assumed along the z-direction. In reality, no hadron can travel at the speed of light, even though the proton at the LHC travels at v = 0.999999999c. Therefore, partons are an idealized theoretical concept according to the middle-school teacher. However, without such a beautiful concept, it is hard to imagine how to describe high-energy collisions of two protons at the LHC with thousands of particles produced!
All partons have infinite momentum (zero wavelength!), carrying a fraction of the hadron momentum, x = lim_{P^{z}→∞} k^{z}/P^{z}. They are a part of the dof's in QCD (forming a P-space). It is possible to single out these dof's to write down an effective theory. The other dof's, which have finite momentum along the z-direction, carry a zero fraction of the hadron momentum, and hence are called "zero modes" (Q-space), including those making up you and me!
No one had much experience working with a quantum mechanical system travelling at the speed of light. Weinberg considered a scalar theory in 1966 and found a set of simple rules for doing perturbation theory by eliminating the ubiquitous kinematic infinities [22]. After the advent of Feynman's parton model, further studies found that Weinberg's rules can simply be reproduced by the so-called light-front quantization (LFQ) [23-25], a form of dynamics proposed by Dirac as early as 1949 [26]. LFQ naturally uses the Hamiltonian formalism, bringing the EFT of partons into a form similar to a non-relativistic many-body problem.
Many probably do not recognize that LFQ of a theory is in fact an EFT. The effects from all finite-momentum modes cannot be directly calculated in LFQ with a small-x cutoff. There is now an infinite number of "low-energy" constants that cannot be determined from the theory itself, including all the properties of the physical vacuum, as well as the mass and the spin of the hadrons the theory aims to describe [27].
Solving LF-quantized QCD has been exceedingly hard, if not impossible. Ken Wilson thought of a weak-coupling approach like that in atomic physics [8]. However, the idea has not paid off so far. Despite much progress [9], a systematic approximation in LFQ for the parton structure of QCD bound states has yet to be found. Recent progress in quantum computation has generated new hope that the problem may ultimately be solved using quantum computers.
Inspired by the LF formalism, infrared parton modes in pQCD are represented not by infinite momenta, but in terms of LF correlations. More specifically, the quark PDFs, which describe the probability distributions of quarks, are defined through correlation functions,
f(x, μ) = ∫ dλ/(4π) e^{−iλx} ⟨P| Ψ̄(λn) γ·n W(λn, 0) Ψ(0) |P⟩,  (4)

with W(λn, 0) a light-like Wilson line (gauge link) ensuring gauge invariance, and
where Ψ is a full-QCD quark field, n^{μ} is a LF four-vector with n^{2} = 0, λ is the LF distance, and |P⟩ is a hadron state of arbitrary momentum. LF correlation operators (or correlators) automatically select the parton dof's, which in turn project the hadron state into the effective space through the matrix elements as in Eq. (4). Therefore, partons can be defined and studied without the EFT machinery [21]. An explicit separation of parton modes in the pQCD Lagrangian has been made in soft-collinear effective theory (SCET), where parton dof's are represented by LF collinear fields [28-30]. Since the external hadron states are constructed in the full QCD theory, the formulation of SCET is different from the LFQ program, where states and operators are all manifestly in the effective space.
Embedding the parton modes of full QCD in the collinear LF fields makes the parton physics manifestly covariant. Eq. (4) is then amenable to the Feynman path-integral formalism of QFT. However, it is still impossible to calculate on a classical computer because it involves the physical time explicitly. As such, we say the problem is Minkowskian (the same is true for LFQ), and it has the so-called "sign problem," which is known to be "NP-hard" in the language of computation theory.
I am going to argue that directly solving the parton structure either in LFQ or with LF correlators in full QCD is actually not a good idea. The P^{z} = ∞ limit of a proton is very similar to critical points in condensed matter systems, at which the correlation length diverges and infinite long-range correlations make many degrees of freedom strongly coupled. [The correlation length ξ is usually defined through an exponential behavior exp(−λ/ξ).] It is hard to make theoretical approximations, as one has learned from many-body systems at critical points.
Where is the infinite-range correlation in parton physics? The answer is at small x. As x → 0, parton distributions grow like x^{−α}, where α is less than or equal to 1 from the unitarity constraint, but generally positive. If one Fourier-transforms this to coordinate space, one finds the following correlation behavior,

h(λ) ~ 1/λ^{1−α},  λ → ∞,  (5)

where λ is the conjugate variable to x, the same as the LF distance mentioned above. Thus the correlation functions decay only algebraically with the LF distance, an indication that the system is at a critical point. In condensed matter physics, no theorist actually tries to understand critical phenomena by directly calculating at T = T_{c}!
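The algebraic decay can be verified numerically. The sketch below (my own check; α = 1/2 is an arbitrary illustrative choice) evaluates h(λ) = ∫₀¹ x^{−α} cos(λx) dx after substituting x = u^{1/(1−α)} to tame the integrable singularity, and compares it with the known asymptotic form Γ(1−α) sin(πα/2) λ^{α−1}:

```python
import math

def h(lam, alpha=0.5, n=200_000):
    """h(lambda) = int_0^1 x^(-alpha) cos(lambda*x) dx, computed by the
    trapezoid rule after the substitution x = u^(1/(1-alpha)), which makes
    the integrand bounded on [0, 1]."""
    p = 1.0 / (1.0 - alpha)
    s = 0.0
    for i in range(n + 1):
        u = i / n
        w = 0.5 if i in (0, n) else 1.0   # trapezoid end-point weights
        s += w * math.cos(lam * u ** p)
    return p * s / n

def h_asym(lam, alpha=0.5):
    """Leading large-lambda behavior: Gamma(1-alpha) sin(pi*alpha/2) lam^(alpha-1)."""
    return math.gamma(1 - alpha) * math.sin(math.pi * alpha / 2) * lam ** (alpha - 1)

for lam in (50.0, 200.0):
    print(lam, h(lam), h_asym(lam))   # ratio approaches 1 as lambda grows
```

The ratio of the numerical integral to the asymptotic form approaches 1 as λ grows, and the measured fall-off between λ = 50 and λ = 200 is close to λ^{−1/2}, far slower than the 1/λ decay of a smooth integrand.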
The only known systematic approach to solving nonperturbative QCD is through Wilson's lattice gauge theory. However, it only works for Euclidean QCD, where no physical time is involved. Many ideas have been proposed over the years to calculate parton physics on a Euclidean lattice; the most fruitful one so far is to calculate the first few moments of PDFs, which are given by matrix elements of time-independent local operators. However, this is far from solving the complete parton structure.
A "NEW" PARTON THEORY WITH INFINITE-MOMENTUM HADRONS
Facing this impasse, it is useful to go back to Feynman's original idea about partons. Feynman proposed the concept not in the context of a QFT, but based on intuition and experience with atomic physics, along with basic notions of relativity. He argued that as a hadron moves at high energy or momentum, the interactions between constituents slow down due to Lorentz time dilation. This slowing down is dramatized by the infinite-momentum limit P^{z} = ∞, where the hadron is now made of incoherent, non-interacting constituents. A key property that characterizes the state of non-interacting partons is their longitudinal momentum distribution, f(x).
Feynman arrived at PDFs from the ordinary one-dimensional momentum distribution (the other dimensions being integrated out), f(k^{z}, P^{z}), of the constituents in the hadron moving with center-of-mass momentum P^{z}. Assuming the limit P^{z} = ∞ is analytic, one can Taylor-expand the momentum distribution around it, and the famous parton distribution is just the first term of the expansion,

f(k^{z}, P^{z}) = f(x) + f^{(2)}(x) M^{2}/(P^{z})^{2} + f^{(4)}(x) M^{4}/(P^{z})^{4} + …,  (6)

where x = k^{z}/P^{z} and M is a hadron mass scale. The fact that the momentum distribution of a system depends on the center-of-mass frame is unfamiliar in non-relativistic systems. However, it becomes very important when relativity is at play: as a composite system travels faster, the internal dynamics will change, as the Hamiltonian is not invariant under Lorentz boosts.
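Eq. (6) suggests a simple numerical strategy, sketched below with synthetic data (the values of f(x), the coefficient c2, M, and the momenta are made up for illustration; a real LaMET analysis also requires perturbative matching): compute f(x, P^z) at several momenta and fit a straight line in 1/(P^z)² to extract the P^z = ∞ intercept.

```python
def fit_intercept(pzs, values):
    """Least-squares fit of values = f_inf + slope * (1/pz^2); returns f_inf.
    Closed-form simple linear regression in the variable t = 1/pz^2."""
    ts = [1.0 / p**2 for p in pzs]
    n = len(ts)
    tbar = sum(ts) / n
    vbar = sum(values) / n
    slope = sum((t - tbar) * (v - vbar) for t, v in zip(ts, values)) \
        / sum((t - tbar) ** 2 for t in ts)
    return vbar - slope * tbar    # intercept = value at 1/pz^2 = 0

# Synthetic lattice-like data at fixed x: f(x, Pz) = f(x) + c2 * M^2/Pz^2
f_true, c2, M = 0.35, 0.8, 1.0    # illustrative numbers only
pzs = [2.0, 3.0, 4.0, 5.0]        # momenta in GeV
data = [f_true + c2 * (M / p) ** 2 for p in pzs]
print(fit_intercept(pzs, data))   # recovers f_true
```

Because the synthetic data are exactly linear in 1/(P^z)², the fit recovers the intercept to machine precision; on real lattice data, the residual spread around the line estimates the size of the next power correction.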
Feynman's original idea can be implemented in field theory to get a "new formulation" of partons. For example, the quark PDFs can now be regarded as the momentum distribution of a system travelling with infinite momentum, and can be expressed as a Fourier transformation of the spatial correlation [4],
f(x, P^{z}) = ∫ dz/(4π) e^{ixP^{z}z} ⟨P| Ψ̄(z) γ^{z} W(z, 0) Ψ(0) |P⟩,  (7)

with W(z, 0) a straight Wilson line along the z-direction, keeping λ = lim_{P^{z}→∞, z→0} zP^{z} finite. The above correlation does not involve the physical time and can be formulated as a calculation in Euclidean field theory. It is not difficult to see that the parton correlations in the LF formalism, Eq. (4), and the above differ by precisely an infinite Lorentz transformation! The LF distance λ in Eq. (4) corresponds to the infinite-momentum limit of the Euclidean distance zP^{z}, as shown in Fig. 1.
Fig. 1: The connection between two pictures of parton physics: the LF formalism and Feynman's infinite-momentum picture. Through a Lorentz boost, the correlation along the z-direction in the frame of a large-momentum hadron is equivalent to a correlation of length ~ γz close to the LF in the hadron state at zero momentum. As γ → ∞, the latter becomes exactly the LF correlation in Eq. (4).
The relations between the "old" and "new" parton formulations can be further clarified. In the LF formalism, the parton dof's are selected through the LF collinear fields in Eq. (4) and hence are intrinsically Minkowskian. This is analogous to the Heisenberg picture in quantum mechanics, where the time dependence is incorporated in the operators. On the other hand, in the "new" representation in Eq. (7), partons are filtered through the infinite-momentum external states, which allows Euclidean correlation functions. This is like the Schrödinger picture, in which operators are time-independent [3].
Feynman, however, did not realize that P^{z} = ∞ is not well-defined in field theories with UV divergences, and the difficulty can only be solved in an asymptotically free theory. The LF parton formalism is a result of taking P^{z} → ∞ before one does any calculation! The resulting parton PDFs are f(x, μ) with −1 < x < 1 (negative x corresponding to antiparticles), where μ is a renormalization scale. This is different from a physical momentum distribution f(y = k^{z}/P^{z}, P^{z}), where −∞ < y < ∞ even in the P^{z} = ∞ limit. The good news is that the difference between the limits is related to high-momentum modes only, and they can be matched through pQCD.
The mismatch between the two P^{z} → ∞ limits is indeed due to UV divergences. Imagine a parton at x = 0.9 in a proton with finite physical momentum P^{z}; it can radiate a gluon going backward with momentum fraction −0.2, ending up with momentum 1.1P^{z}. This is perfectly fine in the full theory because, no matter how large the proton momentum is, QCD always allows a parton to carry momentum bigger than P^{z}, as the UV cutoff is supposed to be ≫ P^{z}. Where is the effect of this type of parton in physical cross sections? It has been taken care of through pQCD radiative corrections. The parton EFT is supposed to include only the low-energy-scale physics, and therefore |x| is limited to 1 by the EFT construction.
LARGE-MOMENTUM EFFECTIVE THEORY
While computing f(x) in the "new" formulation of partons is now a Euclidean problem, it still seems a mission impossible: we don't know how to build a hadron state with infinite momentum. However, Eq. (6) puts the problem again into the context of a Taylor expansion! The idea is that one can approximate P^{z} = ∞ by a finite but large P^{z}, and systematically correct for any mistakes [1], keeping in mind, though, that P^{z} = ∞ is actually singular.
The quark and gluon momentum distributions f(k^{z}, P^{z}) are quantities that can routinely be simulated in lattice QCD for a moderately large momentum P^{z}. The only question is how large a P^{z} is needed to approximate ∞. The answer depends on the expansion parameter ϵ = (M/P^{z})^{2}. One would naively expect that, since M is on the order of 1 GeV, P^{z} ~ 2 GeV already gives ϵ = 0.25, a reasonably small parameter, a situation similar to the charm quark when using HQET. With the coming of exascale computing, one can simulate in lattice QCD a proton at P^{z} ~ (3-5) GeV, making ϵ as small as 0.03.
To make practical and systematic use of the above observation is the main subject of large-momentum effective theory [4]. A number of important observations can be made about this theory:
• For a large-momentum hadron, one can calculate many of its static properties or correlation functions on a Euclidean lattice [1]. Besides the momentum distributions, one can also calculate transverse-momentum-dependent (TMD) distributions, generalized momentum distributions with momentum transfer, and static correlations of quark and gluon fields between the hadron and the QCD vacuum, etc. All of these physical properties (or quasi-parton distributions) can be used to extract the partonic physics of bound states, yielding the generalized parton distributions (GPDs), TMDPDFs, LF wave functions, etc. [3].
• Although naive power counting suggests the expansion parameter to be ϵ = M^{2}/(P^{z})^{2}, a more careful examination yields M^{2}/(k^{z})^{2} and M^{2}/(P^{z} − k^{z})^{2}, where k^{z} is the parton momentum [4, 31, 32]. Therefore, the expansion does not converge uniformly for all x. Just as in an experiment for which the center-of-mass energy limits the smallest accessible x, the hadron momentum on the lattice sets a limit on the smallest-x partons that one can calculate, x_{min} ~ Λ_{QCD}/P^{z} [1]. The range of a reliable LaMET calculation is [x_{min}, x_{max} ~ 1 − x_{min}], which goes to [0, 1] in the P^{z} → ∞ limit.
• Matching between the finite-momentum properties and parton physics is a pQCD problem, free of infrared (IR) physics to all orders in pQCD (two-loop calculations have appeared recently [33, 34]). The physical origin of this is that a boost does not change the IR properties of a matrix element [4]. It has also been verified explicitly through pQCD analysis [31].
• The momentum distribution f(k^{z}, P^{z}) contains large logarithms of P^{z}, and one can write down an evolution equation in P^{z} [4]. This momentum evolution corresponds exactly to the renormalization-group (RG) equation of PDFs, apart from a simple field renormalization. Thus the parton RG evolution has its physical origin in the change of the momentum distributions due to the change of the Hamiltonian and states under Lorentz boosts, and it helps to sum the large-P^{z} logarithms in f(k^{z}, P^{z}).
• Higher order (power) corrections in the Taylor expansion can be worked out systematically, and can be calculated through lattice simulations as well. They help to improve the precision of a LaMET calculation.
• In critical phenomena, one can have nominally different physical systems sharing the same critical-point properties. Examples include spontaneous magnetization and the liquid-gas critical point. Similarly, in LaMET, one can use different Euclidean operators to get the same PDFs [35]. This is guaranteed by the projection through large-momentum external states and is a universality phenomenon. Therefore, apart from the static physical distributions, one can also calculate PDFs using many other Euclidean correlations, including current-current correlators, etc. [36, 37].
• LaMET provides a general recipe for calculating light-like correlations entering the factorizations of high-energy processes. One can replace the light-like Wilson lines in operators by large-momentum hadron external states when appropriate, and if the result is time-independent, it can be calculated in a lattice simulation. An excellent example is the soft function appearing in TMD factorization [38].
Fig. 2: The parton EFT is an infinite-momentum limit of Euclidean QCD after proper matching. Similar to critical phenomena, the parton limit P^z = ∞ is like a critical point and corresponds to a fixed point of the momentum renormalization group. The solid line shows the region of convergence of the LaMET expansion, i.e., the critical region or "basin of attraction" of the renormalization-group flow.
Having discussed how the LaMET formalism actually works, I am in a position to discuss the nature of this approach in light of the properties of EFTs discussed in Sec. III.
• Approximating an infinite momentum by a finite momentum in LaMET is not new, and is exactly what has been used in lattice QCD. The highest momentum on a lattice is 𝜋/a, which is supposed to be taken to infinity in the end. One can handle this limit through a systematic expansion in aΛ_{QCD}.
• Going back to Eq. (1), one can invert the expansion:

f(x, 0, 0, …) = f(x, ϵ, δ, …) − ϵ ∂f/∂ϵ|_{ϵ=δ=…=0} − δ ∂f/∂δ|_{ϵ=δ=…=0} − …  (8)

Now one can regard f(x, ϵ, δ, …) as an effective description of f(x, 0, 0, …)! Therefore, the magic word "effective" is not absolute, but mutual, analogous to a mirror symmetry. In one sense, LaMET uses calculations with the full QCD Lagrangian, the right-hand side in Eq. (8), to simulate the parton physics on the left, and QCD on the lattice is an effective theory for the LF theory of partons.
• On the other hand, one may regard the emergent parton physics as an EFT describing the physical properties of hadrons at large P^{z} in full QCD [4]. This interpretation is analogous to HQET, which uses an infinitely heavy quark to describe the physics of a heavy quark. Whereas the usual effective theories use matching and running to get the effective Lagrangian and effective operators and then use them for calculations, LaMET does matching and running for all physical observables.
• Following Sec. V, P^{z} = ∞ is like a critical point in condensed matter systems. At critical points, long-wavelength modes dominate, and the critical phenomena are studied through systems very close to, but not exactly at, T_{c}. LaMET follows the same spirit [4], as shown in Fig. 2. The correlation length ξ is finite in the critical region, and is proportional to P^{z}/Λ_{QCD}.
For all these reasons, the name LaMET has been given to the strategy of solving parton physics through simulations of Euclidean QCD at a large hadron momentum. It is an effective theory for parton structure without directly dealing with the parton dof's themselves, as in LF quantization.
INSTEAD OF A CONCLUSION: TO CONTROL OR NOT TO CONTROL?
Partons provide a powerful language, used every day by thousands of physicists to describe high-energy collisions. Although conceptually simple, the mathematics of describing the parton interactions forming a high-energy proton is very challenging. Therefore, it is important that any approach to calculating PDFs must have systematic control of errors.
The infinite-momentum-state representation in Eq. (7) provides a starting point for applying approximation methods. The LaMET formalism begins with lattice QCD data generated with large hadron momenta, and offers a method to extract parton structure through a sophisticated Taylor expansion. Since the most important hallmark of an EFT is power counting, i.e., uncertainties can be quantified in terms of the sizes of expansion parameters, LaMET meets this criterion as an EFT, although calculating the power corrections is by no means a small task.
Fig. 3: Comparison between the momentum and coordinate expansions used to analyze LaMET data. The latter can be applied only to a subset of parton observables, for which the concept of a short-distance expansion is valid.
The Euclidean correlator in Eq. (7), introduced in Ref. [1], has also been considered in coordinate-space factorization (CSF) [39], introduced in an early work on the meson distribution amplitude with a current-current correlator [36]; see also [37]. The correlator can be factorized in terms of LF correlations with the expansion parameter (zΛ_{QCD})^{2}. The formalism is naturally suited to calculating moments of PDFs or short-distance LF correlations. To obtain the full parton physics, however, one has to simultaneously satisfy a constraint on the external momentum,
λ = zP^{z} ≫ 1, with zΛ_{QCD} ≪ 1.  (9)

This is identical to the observation in Ref. [4]: one must use large momenta to capture the full dynamical range of PDFs, which requires information on long-range correlations in λ. Despite this complete equivalence [32, 40], there is a tendency in the literature to translate every momentum-expansion paper under the Sun into an equivalent CSF form, although some analytical matching calculations may indeed be more conveniently done in coordinate space. Not surprisingly, the same LaMET lattice data are needed for a CSF analysis to obtain PDFs. [Nominally, CSF can also admit data at small P^{z}, but the same information is already contained in large-P^{z} data at smaller z.]
Interestingly, however, when the CSF methodology is applied to large-P^{z} data, it has not yet produced a controlled expansion scheme for x-dependent PDFs. The CSF reinterpretation of the correlation functions prohibits using large-z data, and the Fourier transformation to momentum space becomes incomplete at limited P^{z}. As a consequence, one has to parametrize the physical PDFs and convert them to coordinate space in order to fit the lattice data at z ≪ 1/Λ_{QCD}. This process generates uncontrolled systematics through model parametrization and fitting. Alternatively, one might try to model the high-order (zΛ_{QCD})^{2} contributions and subtract them at large z; such modeling will again introduce uncontrolled systematics. In reality, a short-distance expansion of large-P^{z} matrix elements is unnecessary unless one is interested in moments of PDFs: the momentum expansion automatically quantifies the high-order power contributions in the large-z data through Fourier analysis. The relationship between momentum- and coordinate-space analyses of LaMET data is shown in Fig. 3; the reader is referred to Ref. [41] for a more complete discussion.
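The incompleteness of the truncated Fourier transform is easy to see numerically. The following sketch uses a toy model, q(x) = 6x(1−x), standing in for lattice data; the function name, grids, and cutoff values are illustrative assumptions. It reconstructs the PDF from an Ioffe-time distribution known only up to |λ| ≤ λ_max, where λ = zP^{z}:

```python
import numpy as np

def max_reconstruction_error(lam_max, n_lam=2001, n_x=1001):
    """Worst-case error in q(x) when the Ioffe-time range is |lam| <= lam_max."""
    x = np.linspace(0.0, 1.0, n_x)
    dx = x[1] - x[0]
    q = 6.0 * x * (1.0 - x)          # toy PDF, not lattice data

    lam = np.linspace(-lam_max, lam_max, n_lam)
    dlam = lam[1] - lam[0]

    # Ioffe-time distribution Q(lam) = \int_0^1 dx q(x) e^{i lam x}
    Q = np.exp(1j * np.outer(lam, x)) @ q * dx

    # Inverse transform truncated at |lam| <= lam_max:
    # q_rec(y) = (1/2pi) \int_{-lam_max}^{lam_max} dlam Q(lam) e^{-i lam y}
    y = np.linspace(0.05, 0.95, 19)
    q_rec = (np.exp(-1j * np.outer(y, lam)) @ Q).real * dlam / (2.0 * np.pi)

    return np.max(np.abs(q_rec - 6.0 * y * (1.0 - y)))

print(max_reconstruction_error(5.0))    # larger error with a short lambda range
print(max_reconstruction_error(20.0))   # error shrinks as lam_max (i.e. Pz) grows
```

Enlarging λ_max, i.e., raising the accessible P^{z} at fixed z, visibly shrinks the reconstruction error, mirroring the statement that large-λ information is indispensable for the x-dependence.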
This may seem strange at first, because one would ordinarily expect that, so long as P^{z} is large enough, the bulk of the parton physics will be reproduced by short-distance data. From the above discussion, however, it is clear that physics and power-counting systematics naturally lead to a momentum expansion, not to CSF, which unavoidably introduces unnecessary cuts on LaMET data. In more sophisticated LaMET applications, such as TMDPDFs and LF wave functions, the coordinate-space correlation data may not even have an apparent CSF interpretation, and the momentum expansion may be the only game left to play [3].
The major difference between models and EFTs is the control of systematics. In 1998, H. Georgi was invited to a conference at JLab, "Quark Confinement and the Hadron Spectrum III". He gave an after-dinner talk in which he listed ten reasons for attending the meeting. Reason No. 7 read, "You are not a control freak, so you don't like controlled approximations." He went on to say, "The difference between the nuclear physics and particle physics traditions always amazes me here. Trained as a particle physicist, I tend to like to control my approximations even if it means distorting the physics. A lot of the speakers here clearly didn't care. I don't know whether this is good or bad." I had been educated as a nuclear theorist by an excellent Ph.D. advisor; the title of my thesis was "Shell-Model Effective Interactions for N=50 Nuclei." So I had been trained as a control freak!
Acknowledgments: I thank J. W. Chen, Y. Z. Liu, J. P. Ma, A. Schäfer, W. Wang, B. W. Xiao, F. Yuan, J. H. Zhang and Y. Zhao for helpful discussions and comments related to the subject of this paper. I particularly thank Y. Q. Ma for many discussions and correspondence that helped make the paper readable to non-experts, and A. Schäfer for a careful reading of the manuscript and editing suggestions. I also thank Y. Zhao for help with the figures. This work is partially supported by the U.S. Department of Energy under Contract No. DE-SC0020682, and by the Center for Nuclear Femtography, operated by the Southeastern Universities Research Association in Washington, DC.
References
[1] X. Ji, Phys. Rev. Lett. 110, 262002 (2013), arXiv:1305.1539 [hep-ph].
[2] K. Cichy and M. Constantinou, Adv. High Energy Phys. 2019, 3036904 (2019), arXiv:1811.07248 [hep-lat].
[3] X. Ji, Y.-S. Liu, Y. Liu, J.-H. Zhang, and Y. Zhao, (2020), arXiv:2004.03543 [hep-ph].
[4] X. Ji, Sci. China Phys. Mech. Astron. 57, 1407 (2014), arXiv:1404.6680 [hep-ph].
[5] https://ocw.mit.edu/courses/physics/8-851-effective-field-theory-spring-2013/
[6] A. V. Manohar, in Les Houches Summer School: EFT in Particle Physics and Cosmology (2018), arXiv:1804.05863 [hep-ph].
[7] S. Davidson, P. Gambino, M. Laine, M. Neubert, and C. Salomon, eds., Effective Field Theories in Particle Physics and Cosmology, Lecture Notes of the Les Houches Summer School (Oxford University Press, 2020).
[8] K. G. Wilson, T. S. Walhout, A. Harindranath, W.-M. Zhang, R. J. Perry, and S. D. Glazek, Phys. Rev. D 49, 6720 (1994), arXiv:hep-th/9401153.
[9] S. J. Brodsky, H.-C. Pauli, and S. S. Pinsky, Phys. Rept. 301, 299 (1998), arXiv:hep-ph/9705477.
[10] X. Ji, L.-C. Jin, F. Yuan, J.-H. Zhang, and Y. Zhao, Phys. Rev. D 99, 114006 (2019), arXiv:1801.05930 [hep-ph].
[11] M. A. Ebert, I. W. Stewart, and Y. Zhao, JHEP 09, 037 (2019), arXiv:1901.03685 [hep-ph].
[12] X. Ji, Y. Liu, and Y.-S. Liu, (2019), arXiv:1911.03840 [hep-ph].
[13] A. A. Vladimirov and A. Schäfer, Phys. Rev. D 101, 074517 (2020), arXiv:2002.07527 [hep-ph].
[14] A. Accardi et al., Eur. Phys. J. A 52, 268 (2016), arXiv:1212.1701 [nucl-ex].
[15] C. Drischler, W. Haxton, K. McElvain, E. Mereghetti, A. Nicholson, P. Vranas, and A. Walker-Loud (2019), arXiv:1910.07961 [nucl-th].
[16] A. V. Manohar and M. B. Wise, Camb. Monogr. Part. Phys. Nucl. Phys. Cosmol. 10, 1 (2000).
[17] C. M. Bender and S. A. Orszag, Advanced Mathematical Methods for Scientists and Engineers I (Springer, 1999).
[18] L. Y. Chen, N. Goldenfeld, and Y. Oono, Phys. Rev. Lett. 73, 1311 (1994), arXiv:cond-mat/9407024.
[19] T. Kunihiro, Prog. Theor. Phys. 94, 503 (1995), [Erratum: Prog. Theor. Phys. 95, 835 (1996)], arXiv:hep-th/9505166.
[20] R. Ellis, W. Stirling, and B. Webber, QCD and Collider Physics, Vol. 8 (Cambridge University Press, 2011).
[21] J. Collins, Camb. Monogr. Part. Phys. Nucl. Phys. Cosmol. 32, 1 (2011).
[22] S. Weinberg, Phys. Rev. 150, 1313 (1966).
[23] S.-J. Chang and S.-K. Ma, Phys. Rev. 180, 1506 (1969).
[24] J. B. Kogut and D. E. Soper, Phys. Rev. D 1, 2901 (1970).
[25] S. D. Drell and T.-M. Yan, Annals Phys. 66, 578 (1971), [Annals Phys. 281, 450 (2000)].
[26] P. A. M. Dirac, Rev. Mod. Phys. 21, 392 (1949).
[27] X. Ji, (2020), arXiv:2003.04478 [hep-ph].
[28] C. W. Bauer, S. Fleming, D. Pirjol, and I. W. Stewart, Phys. Rev. D 63, 114020 (2001), arXiv:hep-ph/0011336.
[29] C. W. Bauer and I. W. Stewart, Phys. Lett. B 516, 134 (2001), arXiv:hep-ph/0107001.
[30] C. W. Bauer, D. Pirjol, and I. W. Stewart, Phys. Rev. D 65, 054022 (2002), arXiv:hep-ph/0109045.
[31] Y.-Q. Ma and J.-W. Qiu, Phys. Rev. D 98, 074021 (2018), arXiv:1404.6860 [hep-ph].
[32] T. Izubuchi, X. Ji, L. Jin, I. W. Stewart, and Y. Zhao, Phys. Rev. D 98, 056004 (2018), arXiv:1801.03917 [hep-ph].
[33] L.-B. Chen, W. Wang, and R. Zhu, (2020), arXiv:2006.14825 [hep-ph].
[34] Z.-Y. Li, Y.-Q. Ma, and J.-W. Qiu, (2020), arXiv:2006.12370 [hep-ph].
[35] Y. Hatta, X. Ji, and Y. Zhao, Phys. Rev. D 89, 085030 (2014), arXiv:1310.4263 [hep-ph].
[36] V. Braun and D. Müller, Eur. Phys. J. C 55, 349 (2008), arXiv:0709.1348 [hep-ph].
[37] Y.-Q. Ma and J.-W. Qiu, Phys. Rev. Lett. 120, 022003 (2018), arXiv:1709.03018 [hep-ph].
[38] X. Ji, Y. Liu, and Y.-S. Liu, (2019), arXiv:1910.11415 [hep-ph].
[39] A. Radyushkin, Phys. Rev. D 96, 034025 (2017), arXiv:1705.01488 [hep-ph].
[40] X. Ji, J.-H. Zhang, and Y. Zhao, Nucl. Phys. B 924, 366 (2017), arXiv:1706.07416 [hep-ph].
[41] X. Ji, Y. Liu, A. Schäfer, W. Wang, Y.-B. Yang, J.-H. Zhang, and Y. Zhao, (2020), arXiv:2008.03886 [hep-ph].
