question: Long-Time Asymptotics for the Toda Lattice in the Soliton Region. Helge Krüger and Gerald Teschl 1 2

Introduction

In this paper we want to compute the long-time asymptotics for the doubly infinite Toda lattice, which in Flaschka's variables reads (see e.g. )
\[
\begin{aligned}
\dot b(n,t) &= 2\big(a(n,t)^2 - a(n-1,t)^2\big),\\
\dot a(n,t) &= a(n,t)\big(b(n+1,t) - b(n,t)\big),
\end{aligned}
\qquad (n,t)\in\mathbb{Z}\times\mathbb{R}. \tag{tl}
\]
Here the dot denotes differentiation with respect to time. We will consider solutions $(a,b)$ satisfying
\[
\sum_{n\in\mathbb{Z}} |n|^l \Big( \big|a(n,t) - \tfrac{1}{2}\big| + |b(n,t)| \Big) < \infty \tag{decay}
\]
for every $l\in\mathbb{N}$ for one (and hence for all, see ) $t\in\mathbb{R}$. It is well known that this initial value problem has unique global solutions which can be computed via the inverse scattering transform.

The long-time asymptotics for this problem were first given by Novokshenov and Habibullin and were later made rigorous by Kamvissis, however only in the case without solitons. The purpose of the present paper is to finally fill this gap and show how to include solitons. As in , our approach is based on the nonlinear steepest descent analysis for oscillatory Riemann–Hilbert problems from Deift and Zhou. It turns out that in the case of solitons two new phenomena enter the scene, which require significant adaptations of the original method of Deift and Zhou. Of course our technique also applies to other soliton equations, e.g., the Korteweg–de Vries equation.

First of all, it is well known that there is a subtle nonuniqueness issue for the involved Riemann–Hilbert problems (see e.g. ). In fact, in certain exceptional sets the corresponding vanishing Riemann–Hilbert problem has a nontrivial solution, and hence by Fredholm theory the corresponding matrix Riemann–Hilbert problem has no solution at all. This problem does not affect the similarity region, since it is easy to see that there it does not happen for sufficiently large times. However, in the soliton region this occurs precisely in the neighborhoods of the single solitons. To avoid this problem we will work directly with the vector Riemann–Hilbert problem and impose a symmetry condition in order to ensure uniqueness. This also has the advantage that it eliminates the step of going back and forth between the vector and the matrix Riemann–Hilbert problem. It should be pointed out here that even the symmetry condition alone does not guarantee uniqueness; rather we will need existence of a certain solution with an additional property. We will also demonstrate that this additional property is in fact necessary.

Secondly, in the regions of the single solitons the zeroth-order asymptotics are not equal to zero but are given by a one-soliton solution. Hence, for the usual perturbation argument based on the second resolvent identity to work, a uniform bound for the inverse of the singular integral equation associated with the one-soliton solution is needed. Unfortunately, such a bound cannot be easily obtained. To overcome this problem we will shift the leading asymptotics from the one-soliton solution to the inhomogeneous part of the singular integral equation and craft our Cauchy kernel in such a way that it preserves the pole conditions for this single soliton.
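As a concrete illustration of the equations (tl) above (this sketch is not part of the paper), here is a minimal explicit-Euler time step for the Toda lattice in Flaschka's variables on a truncated window; the window size, step size, perturbation, and the free-background boundary treatment are hypothetical choices made only for the example.

```java
/**
 * Minimal sketch (not from the paper): one explicit Euler step of the Toda equations
 * in Flaschka's variables on a truncated window, using the free background a = 1/2,
 * b = 0 outside the window. Step size and window are illustrative only.
 */
public class TodaFlowSketch {

    static void eulerStep(double[] a, double[] b, double dt) {
        int N = a.length;
        double[] da = new double[N], db = new double[N];
        for (int n = 0; n < N; n++) {
            double aPrev = (n > 0) ? a[n - 1] : 0.5;  // free background outside the window
            double bNext = (n + 1 < N) ? b[n + 1] : 0.0;
            db[n] = 2 * (a[n] * a[n] - aPrev * aPrev); // dot b(n) = 2(a(n)^2 - a(n-1)^2)
            da[n] = a[n] * (bNext - b[n]);             // dot a(n) = a(n)(b(n+1) - b(n))
        }
        for (int n = 0; n < N; n++) {
            a[n] += dt * da[n];
            b[n] += dt * db[n];
        }
    }

    public static void main(String[] args) {
        int N = 64;
        double[] a = new double[N], b = new double[N];
        java.util.Arrays.fill(a, 0.5);
        a[N / 2] += 0.1;                               // small hypothetical perturbation
        for (int k = 0; k < 1000; k++) eulerStep(a, b, 0.01);
        System.out.printf("a[N/2]=%.4f  b[N/2]=%.4f%n", a[N / 2], b[N / 2]);
    }
}
```

In practice a higher-order integrator would be used; the point is only to make the form of (tl) and the decay condition (decay) concrete.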
To state our main result, we begin by recalling that the sequences $a(n,t)$, $b(n,t)$, $n\in\mathbb{Z}$, for fixed $t\in\mathbb{R}$, are uniquely determined by their scattering data, that is, by the right reflection coefficient $R_+(z,t)$, $|z|=1$, and the eigenvalues $\lambda_j\in(-\infty,-1)\cup(1,\infty)$, $j=1,\dots,N$, together with the corresponding right norming constants $\gamma_{+,j}(t)>0$, $j=1,\dots,N$.

Rather than in the complex plane, we will work on the unit disc using the usual Joukowski transformation
\[
\lambda = \frac{1}{2}\Big(z + \frac{1}{z}\Big), \qquad z = \lambda - \sqrt{\lambda^2-1}, \qquad \lambda\in\mathbb{C},\ |z|\le 1.
\]
In these new coordinates the eigenvalues $\lambda_j\in(-\infty,-1)\cup(1,\infty)$ will be denoted by $\zeta_j\in(-1,0)\cup(0,1)$. The continuous spectrum $[-1,1]$ is mapped to the unit circle $\mathbb{T}$. Moreover, the phase of the associated Riemann–Hilbert problem is given by
\[
\Phi(z) = z - z^{-1} + 2\frac{n}{t}\log(z),
\]
and the stationary phase points, $\Phi'(z)=0$, are denoted by
\[
z_0 = -\frac{n}{t} - \sqrt{\Big(\frac{n}{t}\Big)^2-1}, \qquad
z_0^{-1} = -\frac{n}{t} + \sqrt{\Big(\frac{n}{t}\Big)^2-1}, \qquad
\lambda_0 = -\frac{n}{t}.
\]
For $\frac{n}{t}<-1$ we have $z_0\in(0,1)$, for $-1\le\frac{n}{t}\le 1$ we have $z_0\in\mathbb{T}$ (and hence $z_0^{-1}=\overline{z_0}$), and for $\frac{n}{t}>1$ we have $z_0\in(-1,0)$. For $|\frac{n}{t}|>1$ we will also need the value $\zeta_0\in(-1,0)\cup(0,1)$ defined via $\mathrm{Re}(\Phi(\zeta_0))=0$, that is,
\[
\frac{n}{t} = -\frac{\zeta_0-\zeta_0^{-1}}{2\log(|\zeta_0|)}.
\]
We will set $\zeta_0=-1$ if $|\frac{n}{t}|\le 1$ for notational convenience. A simple analysis shows that for $\frac{n}{t}<-1$ we have $0<\zeta_0<z_0<1$ and for $\frac{n}{t}>1$ we have $-1<z_0<\zeta_0<0$.

Furthermore, recall that the transmission coefficient $T(z)$, $|z|\le 1$, is time independent and can be reconstructed using the Poisson–Jensen formula. In particular, we define the partial transmission coefficient with respect to $z_0$ by
\[
T(z,z_0) =
\begin{cases}
\prod\limits_{\zeta_k\in(\zeta_0,0)} |\zeta_k|\, \dfrac{z-\zeta_k^{-1}}{z-\zeta_k}, & z_0\in(-1,0),\\[3mm]
\left(\prod\limits_{\zeta_k\in(-1,0)} |\zeta_k|\, \dfrac{z-\zeta_k^{-1}}{z-\zeta_k}\right)
\exp\left(\dfrac{1}{2\pi\mathrm{i}} \int\limits_{\overline{z_0}}^{z_0} \log(|T(s)|)\, \dfrac{s+z}{s-z}\, \dfrac{ds}{s}\right), & |z_0|=1,\\[3mm]
\left(\prod\limits_{\zeta_k\in(-1,0)\cup(\zeta_0,1)} |\zeta_k|\, \dfrac{z-\zeta_k^{-1}}{z-\zeta_k}\right)
\exp\left(\dfrac{1}{2\pi\mathrm{i}} \int\limits_{\mathbb{T}} \log(|T(s)|)\, \dfrac{s+z}{s-z}\, \dfrac{ds}{s}\right), & z_0\in(0,1).
\end{cases} \tag{def:Tzz0}
\]
Here, in the case $z_0\in\mathbb{T}$, the integral is to be taken along the arc $\Sigma(z_0)=\{z\in\mathbb{T} \mid \mathrm{Re}(z)<\mathrm{Re}(z_0)\}$, oriented counterclockwise. For $z_0\in(-1,0)$ we set $\Sigma(z_0)=\emptyset$ and for $z_0\in(0,1)$ we set $\Sigma(z_0)=\mathbb{T}$. Then $T(z,z_0)$ is meromorphic for $z\in\mathbb{C}\backslash\Sigma(z_0)$. Observe that $T(z,z_0)=T(z)$ once $z_0\in(0,1)$ and $(0,\zeta_0)$ contains no eigenvalues. Moreover, $T(z,z_0)$ can be computed in terms of the scattering data since $|T(z)|^2 = 1 - |R_+(z,t)|^2$.
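For the reader's convenience (this short computation is not spelled out in the text), the stationary phase points quoted above follow directly from the formula for $\Phi$:
\[
\Phi'(z) = 1 + \frac{1}{z^2} + \frac{2n}{t}\,\frac{1}{z} = 0
\quad\Longleftrightarrow\quad
z^2 + 2\frac{n}{t}\, z + 1 = 0,
\]
so the two critical points are $z = -\frac{n}{t} \pm \sqrt{(\frac{n}{t})^2-1}$. Their product equals $1$, which is why they come as a pair $z_0$, $z_0^{-1}$, and for $|\frac{n}{t}|\le 1$ they are complex conjugate points on the unit circle.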
Moreover, we set
\[
T_0(z_0) = T(0,z_0) =
\begin{cases}
\prod\limits_{\zeta_k\in(\zeta_0,0)} |\zeta_k|^{-1}, & z_0\in(-1,0),\\[3mm]
\left(\prod\limits_{\zeta_k\in(-1,0)} |\zeta_k|^{-1}\right)
\exp\left(\dfrac{1}{2\pi\mathrm{i}} \int\limits_{\overline{z_0}}^{z_0} \log(|T(s)|)\, \dfrac{ds}{s}\right), & |z_0|=1,\\[3mm]
\left(\prod\limits_{\zeta_k\in(-1,0)\cup(\zeta_0,1)} |\zeta_k|^{-1}\right)
\exp\left(\dfrac{1}{2\pi\mathrm{i}} \int\limits_{\mathbb{T}} \log(|T(s)|)\, \dfrac{ds}{s}\right), & z_0\in(0,1),
\end{cases}
\]
and
\[
T_1(z_0) = \frac{\partial}{\partial z}\log T(z,z_0)\Big|_{z=0} =
\begin{cases}
\sum\limits_{\zeta_k\in(\zeta_0,0)} (\zeta_k^{-1}-\zeta_k), & z_0\in(-1,0),\\[3mm]
\sum\limits_{\zeta_k\in(-1,0)} (\zeta_k^{-1}-\zeta_k) + \dfrac{1}{\pi\mathrm{i}} \int\limits_{\overline{z_0}}^{z_0} \log(|T(s)|)\, \dfrac{ds}{s^2}, & |z_0|=1,\\[3mm]
\sum\limits_{\zeta_k\in(-1,0)\cup(\zeta_0,1)} (\zeta_k^{-1}-\zeta_k) + \dfrac{1}{\pi\mathrm{i}} \int\limits_{\mathbb{T}} \log(|T(s)|)\, \dfrac{ds}{s^2}, & z_0\in(0,1).
\end{cases}
\]

Theorem [thm:asym]. Assume (decay) and abbreviate by
\[
c_k = -\frac{\zeta_k-\zeta_k^{-1}}{2\log(|\zeta_k|)}
\]
the velocity of the $k$'th soliton, determined by $\mathrm{Re}(\Phi(\zeta_k))=0$. Then the asymptotics in the soliton region, $|n/t|\ge 1 + C\log(t)^2/t$ for some $C>0$, are as follows.

Let $\varepsilon>0$ be sufficiently small such that the intervals $[c_k-\varepsilon, c_k+\varepsilon]$, $1\le k\le N$, are disjoint and lie inside $(-\infty,-1)\cup(1,\infty)$. If $|\frac{n}{t}-c_k|<\varepsilon$ for some $k$, one has
\[
\begin{aligned}
\prod_{j=n}^\infty (2a(j,t)) &= T_0(z_0)\left(\sqrt{\frac{1-\zeta_k^2+\gamma_k(n,t)}{1-\zeta_k^2+\gamma_k(n,t)\zeta_k^2}} + O(t^{-l})\right),\\
\sum_{j=n+1}^\infty b(j,t) &= \frac{1}{2} T_1(z_0) - \frac{\gamma_k(n,t)\,\zeta_k(\zeta_k^2-1)}{2\big((\gamma_k(n,t)-1)\zeta_k^2+1\big)} + O(t^{-l}),
\end{aligned}
\]
for any $l\ge 1$, where
\[
\gamma_k(n,t) = \gamma_k\, T\big(\zeta_k,\, -c_k-\sqrt{c_k^2-1}\big)^{-2}\, \mathrm{e}^{t(\zeta_k-\zeta_k^{-1})}\, \zeta_k^{2n}.
\]
If $|\frac{n}{t}-c_k|\ge\varepsilon$ for all $k$, one has
\[
\begin{aligned}
\prod_{j=n}^\infty (2a(j,t)) &= T_0(z_0)\left(1 + O(t^{-l})\right),\\
\sum_{j=n+1}^\infty b(j,t) &= \frac{1}{2} T_1(z_0) + O(t^{-l}),
\end{aligned}
\]
for any $l\ge 1$.

In particular, we recover the well-known fact that the solution splits into a sum of independent solitons, where the presence of the other solitons and of the radiation part corresponding to the continuous spectrum manifests itself in phase shifts given by $T(\zeta_k, -c_k-\sqrt{c_k^2-1})^{-2}$. Indeed, notice that for $\zeta_k\in(-1,0)$ this term just contains the product over the Blaschke factors corresponding to solitons $\zeta_j$ with $\zeta_k<\zeta_j$. For $\zeta_k\in(0,1)$ we have the product over the Blaschke factors corresponding to solitons $\zeta_j\in(-1,0)$, the integral over the full unit circle, plus the product over the Blaschke factors corresponding to solitons $\zeta_j$ with $\zeta_k<\zeta_j$ (those in $(\zeta_0,1)$), as in the definition (def:Tzz0).

The proof will be given at the end of Section 4. Furthermore, in the remaining regions the analysis in Section 4 also shows that the Riemann–Hilbert problem reduces to one without solitons. In fact, away from the soliton region the asymptotics are given by
\[
\begin{aligned}
\prod_{j=n}^\infty (2a(j,t)) &= T_0(-1) \prod_{j=n}^\infty (2\tilde a(j,t)) \left(1 + O(t^{-l})\right),\\
\sum_{j=n+1}^\infty b(j,t) &= \frac{1}{2} T_1(-1) + \sum_{j=n+1}^\infty \tilde b(j,t) + O(t^{-l}),
\end{aligned}
\]
where $\tilde a(n,t)$, $\tilde b(n,t)$ are the solutions corresponding to the case without solitons and with $R_+(z,0)$ replaced by $\tilde R_+(z,0) = T(z,-1)^{-2} R_+(z,0)$.
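Before moving on, a one-line check (not in the original) of the velocity formula used in the theorem: since $\zeta_k$ is real, $\zeta_k-\zeta_k^{-1}$ is real and $\mathrm{Re}\log(\zeta_k)=\log|\zeta_k|$, so
\[
\mathrm{Re}\,\Phi(\zeta_k) = \zeta_k - \zeta_k^{-1} + 2\frac{n}{t}\log|\zeta_k| = 0
\quad\Longleftrightarrow\quad
\frac{n}{t} = -\frac{\zeta_k-\zeta_k^{-1}}{2\log|\zeta_k|} = c_k,
\]
which also explains the absolute value appearing in the definitions of $c_k$ and $\zeta_0$.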
Note that the Blaschke product T(z,-1)= prodlimits_{zeta_kin(-1,0)} |zeta_k| frac{z-zeta_k^{-1}}{z-zeta_k} satisfies |T(z,-1)|=1 for zinmathbb{T}. Hence everything is reduced to the case studied in. Finally we remark that the same method can be used to handle solitons on a periodic background (cf. also,, ). The Inverse scattering transform and the Riemann–Hilbert problem In this section we want to derive the Riemann–Hilbert problem from scattering theory. The special case without eigenvalues was first given in Kamvissis. The eigenvalues will be added by appropriate pole conditions which are then turned into jumps following Deift, Kamvissis, Kriecherbauer, and Zhou. For the necessary results from scattering theory respectively the inverse scattering transform for the Toda lattice we refer to,,. Associated with a(t), b(t) is a self-adjoint Jacobi operator label{defjac} H(t) = a(t) S^+ + a^-(t) S^- + b(t) in ell^2(mathbb{Z}), where S^pm f(n) = f^pm(n)= f(npm1) are the usual shift operators and ell^2(mathbb{Z}) denotes the Hilbert space of square summable (complex-valued) sequences over mathbb{Z}. By our assumption [decay] the spectrum of H consists of an absolutely continuous part [-1,1] plus a finite number of eigenvalues lambda_kinmathbb{R}backslash[-1,1], 1le k le N. In addition, there exist two Jost functions psi_pm(z,n,t) which solve the recurrence equation H(t) psi_pm(z,n,t) = frac{z+z^{-1}}{2} psi_pm(z,n,t), qquad |z|le 1, and asymptotically look like the free solutions lim_{n to pm infty} z^{mp n} psi_{pm}(z,n,t) =1. Both psi_pm(z,n,t) are analytic for 0<|z|<1 with smooth boundary values for |z|=1. The asymptotics of the two projections of the Jost function are label{eq:psiasym} psi_pm(z,n,t) = frac{z^{pm n}}{A_pm(n,t)} Big(1 + 2 B_pm(n,t) z + O(z^2) Big), as z to 0, where label{defAB} aligned A_+(n,t) &= prod_{j=n}^{infty} 2 a(j,t), quad B_+(n,t)= -sum_{j=n+1}^infty b(j,t), A_-(n,t) &= !!prod_{j=- infty}^{n-1}! 2 a(j,t), quad B_-(n,t) = -sum_{j=-infty}^{n-1} b(j,t). endaligned One has the scattering relations label{relscat} T(z) psi_mp(z,n,t) = overline{psi_pm(z,n,t)} + R_pm(z,t) psi_pm(z,n,t), qquad |z|=1, where T(z), R_pm(z,t) are the transmission respectively reflection coefficients. The transmission and reflection coefficients have the following well-known properties: The transmission coefficient T(z) has a meromorphic extension to the interior of the unit circle with simple poles at the images of the eigenvalues zeta_k. The residues of T(z) are given by label{eq:resT} res_{zeta_k} T(z) = - zeta_k frac{gamma_{+,k}(t)}{mu_k(t)} = - zeta_k gamma_{-,k}(t) mu_k(t), where gamma_{pm,k}(t)^{-1} = sum_{ninmathbb{Z}} |psi_pm(zeta_k,n,t)|^2 and psi_- (zeta_k,n,t) = mu_k(t) psi_+(zeta_k,n,t). Moreover, label{reltrpm} T(z) overline{R_+(z,t)} + overline{T(z)} R_-(z,t)=0, qquad |T(z)|^2 + |R_pm(z,t)|^2=1. In particular one reflection coefficient, say R(z,t)=R_+(z,t), and one set of norming constants, say gamma_k(t)= gamma_{+,k}(t), suffices. Moreover, the time dependence is given by: The time evolutions of the quantities R_+(z,t), gamma_{+,k}(t) are given by begin{aligned} R(z,t) &= R(z) mathrm{e}^{t (z - z^{-1})} gamma_k(t) &= gamma_k mathrm{e}^{t (zeta_k - zeta_k^{-1})},end{aligned} where R(z)=R(z,0) and gamma_k=gamma_k(0). 
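To make the definition of $H(t)$ concrete, here is a minimal sketch (not from the paper) of the Jacobi operator acting on a finitely supported sequence; the truncation to a finite window with zero padding outside is a hypothetical simplification for illustration only.

```java
/**
 * Minimal sketch (not from the paper): the Jacobi operator H = a S^+ + a^- S^- + b
 * applied to a finitely supported sequence f, truncated to indices 0..N-1 with
 * zero padding outside the window (a hypothetical boundary treatment).
 */
public class JacobiOperatorSketch {

    // (H f)(n) = a(n) f(n+1) + a(n-1) f(n-1) + b(n) f(n)
    static double[] applyH(double[] a, double[] b, double[] f) {
        int N = f.length;
        double[] g = new double[N];
        for (int n = 0; n < N; n++) {
            double right = (n + 1 < N) ? a[n] * f[n + 1] : 0.0;
            double left = (n > 0) ? a[n - 1] * f[n - 1] : 0.0;
            g[n] = right + left + b[n] * f[n];
        }
        return g;
    }

    public static void main(String[] args) {
        int N = 8;
        double[] a = new double[N], b = new double[N], f = new double[N];
        java.util.Arrays.fill(a, 0.5); // free background a = 1/2, b = 0
        f[N / 2] = 1.0;                // a delta sequence
        System.out.println(java.util.Arrays.toString(applyH(a, b, f)));
    }
}
```

For the free background $a\equiv\frac{1}{2}$, $b\equiv 0$ this is the discrete operator whose spectrum is exactly $[-1,1]$, consistent with the absolutely continuous spectrum mentioned above.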
Now we define the sectionally meromorphic vector label{defm} m(z,n,t)= left{begin{array}{c@{quad}l} begin{pmatrix} T(z) psi_-(z,n,t) z^n & psi_+(z,n,t) z^{-n} end{pmatrix}, & |z|<1, begin{pmatrix} psi_+(z^{-1},n,t) z^n & T(z^{-1}) psi_-(z^{-1},n,t) z^{-n} end{pmatrix}, & |z|>1. end{array}right. We are interested in the jump condition of m(z,n,t) on the unit circle mathbb{T} (oriented counterclockwise). To formulate our jump condition we use the following convention: When representing functions on mathbb{T}, the lower subscript denotes the non-tangential limit from different sides, m_pm(z) = lim_{ zetato z,; |zeta|^{pm 1}<1} m(zeta), qquad |z|=1. In general, for an oriented contour Sigma, m_+(z) (resp. m_-(z)) will denote the limit of m(zeta) as zetato z from the positive (resp. negative) side of Sigma. Using the notation above implicitly assumes that these limits exist in the sense that m(z) extends to a continuous function on the boundary. [thm:vecrhp] Let mathcal{S}_+(H(0))={ R(z),; |z|=1; : (zeta_k, gamma_k), : 1le k le N } the right scattering data of the operator H(0). Then m(z)=m(z,n,t) defined in [defm] is meromorphic away from the unit circle with simple poles at zeta_k, zeta_k^{-1} and satisfies: 1. The jump condition label{eq:jumpcond} m_+(z)=m_-(z) v(z), qquad v(z)=begin{pmatrix} 1-|R(z)|^2 & - overline{R(z)} mathrm{e}^{-tPhi(z)} R(z) mathrm{e}^{tPhi(z)} & 1 end{pmatrix}, for z inmathbb{T}, 2. the pole conditions label{eq:polecond} aligned res_{zeta_k} m(z) &= lim_{ztozeta_k} m(z) begin{pmatrix} 0 & 0 - zeta_k gamma_k mathrm{e}^{tPhi(zeta_k)} & 0 end{pmatrix}, res_{zeta_k^{-1}} m(z) &= lim_{ztozeta_k^{-1}} m(z) begin{pmatrix} 0 & zeta_k^{-1} gamma_k mathrm{e}^{tPhi(zeta_k)} 0 & 0 end{pmatrix}, endaligned 3. the symmetry condition label{eq:symcond} m(z^{-1}) = m(z) begin{pmatrix} 0 & 1 1 & 0 end{pmatrix} 4. and the normalization label{eq:normcond} m(0) = (m_1quad m_2),quad m_1 cdot m_2 = 1quad m_1 > 0. Here the phase is given by Phi(z)=z-z^{-1}+2 frac{n}{t} log , z. The jump condition [eq:jumpcond] is a simple calculation using the scattering relations [relscat] plus [reltrpm]. The pole conditions follow since T(z) is meromorphic in |z| <1 with simple poles at zeta_k and residues given by [eq:resT]. The symmetry condition holds by construction and the normalization [eq:normcond] is immediate from the following lemma. Observe that the pole condition at zeta_k is sufficient since the one at zeta_k^{-1} follows by symmetry. Moreover, we have the following asymptotic behaviour near z=0: The function m(z,n,t) defined in [defm] satisfies label{eq:AB} m(z,n,t) = begin{pmatrix} A(n,t) (1 - 2 B(n-1,t) z) & frac{1}{A(n,t)}(1 + 2 B(n,t) z ) end{pmatrix} + O(z^2). Here A(n,t)= A_+(n,t) and B(n,t)= B_+(n,t) are defined in [defAB]. This follows from [eq:psiasym] and T(z)= A_+ A_- ( 1 - 2(B_+ - b +B_-)z+ O(z^2)). For our further analysis it will be convenient to rewrite the pole condition as a jump condition and hence turn our meromorphic Riemann–Hilbert problem into a holomorphic Riemann–Hilbert problem following. Choose varepsilon so small that the discs |z-zeta_k|<varepsilon are inside the unit circle and do not intersect. 
Then redefine m in a neighborhood of zeta_k respectively zeta_k^{-1} according to label{eq:redefm} m(z) = begin{cases} m(z) begin{pmatrix} 1 & 0 frac{zeta_k gamma_k mathrm{e}^{tPhi(zeta_k)} }{z-zeta_k} & 1 end{pmatrix}, & |z-zeta_k|< varepsilon, m(z) begin{pmatrix} 1 & -frac{z gamma_k mathrm{e}^{tPhi(zeta_k)} }{z-zeta_k^{-1}} 0 & 1 end{pmatrix}, & |z^{-1}-zeta_k|< varepsilon, m(z), & text{else}.end{cases} Then a straightforward calculation using res_zeta m = lim_{ztozeta} (z-zeta)m(z) shows [lem:pctoj] Suppose m(z) is redefined as in [eq:redefm]. Then m(z) is holomorphic away from the unit circle and satisfies [eq:jumpcond], [eq:symcond], [eq:normcond] and the pole conditions are replaced by the jump conditions label{eq:jumpcond2} aligned m_+(z) &= m_-(z) begin{pmatrix} 1 & 0 frac{zeta_k gamma_k mathrm{e}^{tPhi(zeta_k)}}{z-zeta_k} & 1 end{pmatrix},quad |z-zeta_k|=varepsilon, m_+(z) &= m_-(z) begin{pmatrix} 1 & frac{z gamma_k mathrm{e}^{tPhi(zeta_k)}}{z-zeta_k^{-1}} 0 & 1 end{pmatrix},quad |z^{-1}-zeta_k|=varepsilon, endaligned where the small circle around zeta_k is oriented counterclockwise and the one around zeta_k^{-1} is oriented clockwise. Next we turn to uniqueness of the solution of this vector Riemann–Hilbert problem. This will also explain the reason for our symmetry condition. We begin by observing that if there is a point z_1inmathbb{C}, such that m(z_1)=begin{pmatrix} 0 & 0 end{pmatrix}, then n(z)=frac{1}{z-z_1} m(z) satisfies the same jump and pole conditions as m(z). However, it will clearly violate the symmetry condition! Hence, without the symmetry condition, the solution of our vector Riemann–Hilbert problem will not be unique in such a situation. Moreover, a look at the one soliton solution verifies that this case indeed can happen. [lem:singlesoliton] Suppose there is only one eigenvalue and a vanishing reflection coefficient, that is, mathcal{S}_+(H(t))={ R(z)equiv 0,; |z|=1; : (zeta, gamma) } with zetain(-1,0)cup(0,1) and gammage0. Then the Riemann–Hilbert problem [eq:jumpcond]–[eq:normcond] has a unique solution is given by begin{aligned} label{eq:oss} m_0(z) &= begin{pmatrix} f(z) & f(1/z) end{pmatrix} nonumber f(z) &= frac{1}{sqrt{1 - zeta^2 + gamma(n,t)} sqrt{1 - zeta^2 + zeta^2 gamma(n,t)}} left(gamma(n,t) zeta^2 frac{z-zeta^{-1}}{z - zeta} + 1 - zeta^2right),end{aligned} where gamma(n,t)=gammamathrm{e}^{tPhi(zeta)}. In particular, A_+(n,t) = sqrt{frac{1-zeta^2 + gamma(n,t)}{1 - zeta^2 + gamma(n,t) zeta^2}}, qquad B_+(n,t) = frac{gamma(n,t) zeta (zeta ^2-1)}{2 (1 - zeta^2 + gamma(n,t) zeta^2)}. By symmetry, the solution must be of the form m_0(z) = begin{pmatrix} f(z) & f(1/z) end{pmatrix}, where f(z) is meromorphic in mathbb{C}cup{infty} with the only possible pole at zeta. Hence f(z) = frac{1}{A} left( 1+ 2 frac{B}{z - zeta}right), where the unknown constants A and B are uniquely determined by the pole condition res_zeta f(z) = -zeta gamma(n,t) f(zeta^{-1}) and the normalization f(0) f(infty)=1, f(0)>0. In fact, observe f(z_1)=f(z_1^{-1})=0 if and only if z_1=pm 1 and gamma=pm(zeta^{-1}-zeta). Furthermore, even in the general case m(z_1)=begin{pmatrix} 0 & 0 end{pmatrix} can only occur at z_1=pm1 as the following lemma shows. [lem:resonant] If m(z_1) = begin{pmatrix} 0 & 0 end{pmatrix} for m defined as in [defm], then z_1 = pm 1. Moreover, the zero of at least one component is simple in this case. 
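Before turning to the proof of Lemma [lem:resonant] below, here is a small numerical sketch (not part of the paper) that evaluates the explicit one-soliton formulas from Lemma [lem:singlesoliton] and recovers the lattice variables via $2a(n,t)=A_+(n,t)/A_+(n+1,t)$ and $b(n,t)=B_+(n,t)-B_+(n-1,t)$, which follow directly from [defAB]; the values of $\zeta$, $\gamma$, $t$ and the printed window are hypothetical example data.

```java
/**
 * Sketch (not from the paper): evaluate the one-soliton A_+(n,t), B_+(n,t) from
 * Lemma [lem:singlesoliton] and recover a(n,t), b(n,t) via
 * 2a(n,t) = A_+(n,t)/A_+(n+1,t) and b(n,t) = B_+(n,t) - B_+(n-1,t).
 * zeta, gamma, t and the printed window are illustrative only.
 */
public class OneSolitonSketch {

    // gamma(n,t) = gamma * exp(t*(zeta - 1/zeta)) * zeta^{2n}  (a nonnegative quantity)
    static double gammaNT(double zeta, double gamma, int n, double t) {
        return gamma * Math.exp(t * (zeta - 1.0 / zeta)) * Math.pow(zeta * zeta, n);
    }

    // A_+(n,t) = sqrt((1 - zeta^2 + g) / (1 - zeta^2 + g*zeta^2))
    static double aPlus(double zeta, double g) {
        return Math.sqrt((1 - zeta * zeta + g) / (1 - zeta * zeta + g * zeta * zeta));
    }

    // B_+(n,t) = g*zeta*(zeta^2 - 1) / (2*(1 - zeta^2 + g*zeta^2))
    static double bPlus(double zeta, double g) {
        return g * zeta * (zeta * zeta - 1) / (2 * (1 - zeta * zeta + g * zeta * zeta));
    }

    public static void main(String[] args) {
        double zeta = -0.6, gamma = 1.0, t = 5.0; // hypothetical scattering data
        for (int n = -10; n <= 10; n++) {
            double a = aPlus(zeta, gammaNT(zeta, gamma, n, t))
                     / (2 * aPlus(zeta, gammaNT(zeta, gamma, n + 1, t)));
            double b = bPlus(zeta, gammaNT(zeta, gamma, n, t))
                     - bPlus(zeta, gammaNT(zeta, gamma, n - 1, t));
            System.out.printf("n=%3d  a=%.6f  b=%+.6f%n", n, a, b);
        }
    }
}
```

As $n\to\pm\infty$ the printed values approach the free background $a=\frac{1}{2}$, $b=0$, consistent with the decay assumption (decay).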
By [defm] the condition m(z_1) = begin{pmatrix} 0 & 0 end{pmatrix} implies that either the Jost solutions psi_-(z_1,n) and psi_+(z_1,n) are linearly dependent or T(z_1)=0. This can only happen, at a band edge, z_1 = pm 1, or at an eigenvalue z_1=zeta_j. We begin with the case z_1=zeta_j. In this case the derivative of the Wronskian W(z)=a(n)(psi_+(z,n)psi_-(z,n+1)-psi_+(z,n+1)psi_-(z,n)) does not vanish frac{d}{dz} W(z) |_{z=z_1} ne 0 (). Moreover, the diagonal Green’s function g(lambda,n)= W(z)^{-1} psi_+(z,n) psi_-(z,n) is Herglotz and hence can have at most a simple zero at z=z_1. Hence, if psi_+(zeta_j,n) = psi_-(zeta_j,n) =0, both can have at most a simple zero at z=zeta_j. But T(z) has a simple pole at zeta_j and hence T(z) psi_-(z,n) cannot vanish at z=zeta_j, a contradiction. It remains to show that one zero is simple in the case z_1=pm 1. In fact, one can show that frac{d}{dz} W(z) |_{z=z_1} ne 0 in this case as follows: First of all note that psi_pm'(z) (where prime denotes the derivative with respect to z) again solves Hpsi_pm'(z_1) = lambda_1 psi_pm'(z_1) if z_1=pm1. Moreover, by W(z_1)=0 we have psi_+(z_1) = c psi_-(z_1) for some constant c (independent of n). Thus we can compute begin{aligned} W'(z_1) &= W(psi_+'(z_1),psi_-(z_1)) + W(psi_+(z_1),psi_-'(z_1)) &= c^{-1} W(psi_+'(z_1),psi_+(z_1)) + c W(psi_-(z_1),psi_-'(z_1))end{aligned} by letting nto+infty for the first and nto-infty for the second Wronskian (in which case we can replace psi_pm(z_1) by z_1^{pm n}), which gives W'(z_1) = frac{c+c^{-1}}{2}. Hence the Wronskian has a simple zero. But if both functions had more than simple zeros, so would the Wronskian, a contradiction. Finally, it is interesting to note that the assumptions zetain(-1,0)cup(0,1) and gammage0 are crucial for uniqueness. Indeed, if we choose gamma= zeta^2-1<0, then every solution is a multiple of f(z)=zzeta^{-1}(z-zeta)^{-1} which cannot be normalized at 0. A uniqueness result for symmetric vector Riemann–Hilbert problems In this section we want to investigate uniqueness for the holomorphic vector Riemann–Hilbert problem begin{aligned} nonumber & m_+(z) = m_-(z) v(z), qquad zin Sigma, label{eq:rhp4m} & m(z^{-1}) = m(z) begin{pmatrix} 0 & 1 1 & 0 end{pmatrix}, nonumber & m(0) = begin{pmatrix} 1 & m_2end{pmatrix}.end{aligned} where Sigma is a nice oriented contour (see Hypothesis [hyp:sym]), symmetric with respect to zmapsto z^{-1}, and v is continuous satisfying det(v)=1 and v(z^{-1}) = begin{pmatrix} 0 & 1 1 & 0 end{pmatrix}v(z)^{-1} begin{pmatrix} 0 & 1 1 & 0 end{pmatrix},quad zinSigma. The normalization used here will be more convenient than [eq:normcond]. In fact, [eq:normcond] will be satisfied by m_2^{-1/2} m(z). Now we are ready to show that the symmetry condition in fact guarantees uniqueness. Suppose there exists a solution m(z) of the Riemann–Hilbert problem [eq:rhp4m] for which m(z)=begin{pmatrix} 0 & 0end{pmatrix} can happen at most for z=pm1 in which case limsup_{ztopm 1} frac{zmp 1}{m_j(z)} is bounded from any direction for j=1 or j=2. Then the Riemann–Hilbert problem [eq:rhp4m] with norming condition replaced by label{eq:rhp4ma} m(0) = begin{pmatrix} alpha & m_2end{pmatrix} for given alphainmathbb{C}, has a unique solution m_alpha(z) = alpha, m(z). Let m_alpha(z) be a solution of [eq:rhp4m] normalized according to [eq:rhp4ma]. Then we can construct a matrix valued solution via M=(m, m_alpha) and there are two possible cases: Either det M(z) is nonzero for some z or it vanishes identically. We start with the first case. 
Since the determinant of our Riemann–Hilbert problem has no jump and is bounded at infinity, it is constant. But taking determinants in M(z^{-1}) = M(z) begin{pmatrix} 0 & 1 1 & 0 end{pmatrix}. gives a contradiction. It remains to investigate the case where det(M)equiv 0. In this case we have m_alpha(z) = delta(z) m(z) with a scalar function delta. Moreover, delta(z) must be holomorphic for zinmathbb{C}backslashSigma and continuous for zinSigma except possibly at the points where m(z_0) = begin{pmatrix} 0 & 0 end{pmatrix}. Since it has no jump across Sigma, delta_+(z) m_+(z) = m_{alpha,+}(z) = m_{alpha,-}(z) v(z) = delta_-(z) m_-(z) v(z) = delta_-(z) m_+(z), it is even holomorphic in mathbb{C}backslash{pm 1} with at most simple poles at z=pm 1. Hence it must be of the form delta(z) = A + frac{B}{z-1} + frac{C}{z +1}. Since delta has to be symmetric, delta(z) = delta(z^{-1}), we obtain B = C = 0. Now, by the normalization we obtain delta(z) = A = alpha. This finishes the proof. Furthermore, note that the requirements cannot be relaxed to allow (e.g.) second order zeros in stead of simple zeros. In fact, if m(z) is a solution for which both components vanish of second order at, say, z=+1, then tilde{m}(z)=frac{z}{(z-1)^2} m(z) is a nontrivial symmetric solution of the vanishing problem (i.e. for alpha=0). By Lemma [lem:resonant] we have [cor:unique] The function m(z,n,t) defined in [defm] is the only solution of the vector Riemann–Hilbert problem [eq:jumpcond]–[eq:normcond]. Observe that there is nothing special about the point z=0 where we normalize, any other point would do as well. However, observe that for the one soliton solution [eq:oss], f(z) vanishes at z = zeta frac{1+gamma-zeta^2}{(gamma-1) zeta ^2+1} and hence the Riemann–Hilbert problem normalized at this point has a nontrivial solution for alpha=0 and hence, by our uniqueness result, no solution for alpha=1. This shows that uniqueness and existence connected, a fact which is not surprising since our Riemann–Hilbert problem is equivalent to a singular integral equation which is Fredholm of index zero (see Appendix 5). Solitons and the soliton region This section demonstrates the basic method of passing from a Riemann–Hilbert problem involving solitons to one without. Furthermore, the asymptotics inside the soliton region are computed. Solitons are represented in a Riemann–Hilbert problem by pole conditions, for this reason we will further study how poles can be dealt with in this section. For easy reference we note the following result which can be checked by a straightforward calculation. [lem:conjug] Assume that widetilde{Sigma}subseteqSigma. Let D be a matrix of the form D(z) = begin{pmatrix} d(z)^{-1} & 0 0 & d(z) end{pmatrix}, where d: mathbb{C}backslashwidetilde{Sigma}tomathbb{C} is a sectionally analytic function. Set tilde{m}(z) = m(z) D(z), then the jump matrix transforms according to tilde{v}(z) = D_-(z)^{-1} v(z) D_+(z). If d satisfies d(z^{-1}) = d(z)^{-1} and d(0) > 0. Then the transformation tilde{m}(z) = m(z) D(z) respects our symmetry, that is, tilde{m}(z) satisfies [eq:symcond] if and only if m(z) does. In particular, we obtain tilde{v} = begin{pmatrix} v_{11} & v_{12} d^{2} v_{21} d^{-2} & v_{22} end{pmatrix}, qquad zinSigmabackslashwidetilde{Sigma}, respectively tilde{v} = begin{pmatrix} frac{d_-}{d_+} v_{11} & v_{12} d_+ d_- v_{21} d_+^{-1} d_-^{-1} & frac{d_+}{d_-} v_{22} end{pmatrix}, qquad zinSigmacapwidetilde{Sigma}. In order to remove the poles there are two cases to distinguish. 
If lambda_k >frac{1}{2}(zeta_0+zeta_0^{-1}) the jump is exponentially decaying and there is nothing to do. Otherwise we use conjugation to turn the jumps into exponentially decaying ones, again following Deift, Kamvissis, Kriecherbauer, and Zhou. It turns out that we will have to handle the poles at zeta_k and zeta_k^{-1} in one step in order to preserve symmetry and in order to not add additional poles elsewhere. [lem:twopolesinc] Assume that the Riemann–Hilbert problem for m has jump conditions near zeta and zeta^{-1} given by aligned m_+(z)&=m_-(z)begin{pmatrix}1&0 frac{gammazeta}{z-zeta}&1end{pmatrix}, && |z-zeta|=varepsilon, m_+(z)&=m_-(z)begin{pmatrix}1& frac{gamma z}{z-zeta^{-1}}0&1end{pmatrix}, && |z^{-1}- zeta|=varepsilon. endaligned Then this Riemann–Hilbert problem is equivalent to a Riemann–Hilbert problem for tilde{m} which has jump conditions near zeta and zeta^{-1} given by begin{aligned} tilde{m}_+(z)&= tilde{m}_-(z)begin{pmatrix}1& frac{(zeta z-1)^2}{zeta (z-zeta) gamma} 0 &1end{pmatrix}, && |z-zeta|=varepsilon, tilde{m}_+(z)&= tilde{m}_-(z)begin{pmatrix}1& 0 frac{(z-zeta)^2}{zeta z (zeta z-1) gamma}&1end{pmatrix}, && |z^{-1}- zeta|=varepsilon,end{aligned} and all remaining data conjugated (as in Lemma [lem:conjug]) by D(z) = begin{pmatrix} frac{z - zeta}{zeta z-1} & 0 0 & frac{zeta z-1}{z-zeta} end{pmatrix}. To turn gamma into gamma^{-1}, introduce D by D(z) = begin{cases} begin{pmatrix} 1 & frac{1}{gamma} frac{z-zeta}{zeta} - gammafrac{zeta}{z-zeta} & 0 end{pmatrix} begin{pmatrix} frac{z - zeta}{zeta z-1} & 0 0 & frac{zeta z-1}{z-zeta} end{pmatrix}, & |z-zeta|<varepsilon, begin{pmatrix} 0 & gammafrac{z zeta}{z zeta -1} -frac{1}{gamma} frac{z zeta -1}{z zeta} & 1 end{pmatrix} begin{pmatrix} frac{z - zeta}{zeta z-1} & 0 0 & frac{zeta z-1}{z-zeta} end{pmatrix}, & |z^{-1}-zeta|<varepsilon, begin{pmatrix} frac{z - zeta}{zeta z-1} & 0 0 & frac{zeta z-1}{z-zeta} end{pmatrix}, & text{else}, end{cases} and note that D(z) is analytic away from the two circles. Now set tilde{m}(z) = m(z) D(z), which is again symmetric by D(z^{-1})= big(begin{smallmatrix} 0 & 1 1 & 0end{smallmatrix}big)D(z) big(begin{smallmatrix} 0 & 1 1 & 0end{smallmatrix}big). The jumps along |z-zeta|=varepsilon and |z^{-1}- zeta|=varepsilon follow by a straightforward calculation and the remaining jumps follow from Lemma [lem:conjug]. Now we are ready to prove our main result: We begin by observing that the partial transmission coefficient T(z,z_0) introduced in [def:Tzz0] satisfies the following scalar meromorphic Riemann–Hilbert problem: 1. T(z,z_0) is meromorphic in mathbb{C}backslashSigma(z_0), where Sigma(z_0) is the arc given by Sigma(z_0) = {z inmathbb{T}| mathrm{Re}(z)<mathrm{Re}(z_0)}, with simple poles at zeta_k and simple zeros at zeta_k^{-1} for all k with lambda_k < frac{1}{2}(zeta_0+zeta_0^{-1}), 2. T_+(z,z_0) = T_-(z,z_0) (1 - |R(z)|^2) for zinSigma(z_0), 3. T(z^{-1},z_0) = T(z,z_0)^{-1}, zinmathbb{C}backslashSigma(z_0), and T(0,z_0)>0. Note also overline{T(z,z_0)}=T(overline{z},z_0) and in particular T(z,z_0) is real-valued for zinmathbb{R}. 
Next introduce D(z) = begin{cases} begin{pmatrix} 1 & frac{z-zeta_k}{zeta_k gamma_k mathrm{e}^{tPhi(zeta_k)}} - frac{zeta_k gamma_k mathrm{e}^{tPhi(zeta_k)}}{z-zeta_k} & 0 end{pmatrix} D_0(z), & |z-zeta_k|<varepsilon, : lambda_k < frac{1}{2}(zeta_0+zeta_0^{-1}), begin{pmatrix} 0 & frac{z zeta_k gamma_k mathrm{e}^{tPhi(zeta_k)}}{z zeta_k -1} -frac{z zeta_k -1}{z zeta_k gamma_k mathrm{e}^{tPhi(zeta_k)}} & 1 end{pmatrix} D_0(z), & |z^{-1}-zeta_k|<varepsilon, : lambda_k < frac{1}{2}(zeta_0+zeta_0^{-1}), D_0(z), & text{else}, end{cases} where D_0(z) = begin{pmatrix} T(z,z_0)^{-1} & 0 0 & T(z,z_0) end{pmatrix}. Note that we have D(z^{-1})= begin{pmatrix} 0 & 1 1 & 0 end{pmatrix}D(z) begin{pmatrix} 0 & 1 1 & 0 end{pmatrix}. Now we conjugate our vector m(z) defined in [defm] respectively [eq:redefm] using D(z), tilde{m}(z)=m(z) D(z). Since T(z,z_0) is either nonzero and continuous near z=pm1 (if pm 1 notinSigma(z_0)) or it has the same behaviour as T(z) near z=pm 1 (if pm 1 inSigma(z_0)), the new vector tilde{m}(z) is again continuous near z=pm 1 (even if T(z) vanishes there). Then using Lemma [lem:conjug] and Lemma [lem:twopolesinc] the jump corresponding lambda_k <frac{1}{2}(zeta_0+zeta_0^{-1}) (if any) is given by aligned tilde{v}(z) &= begin{pmatrix}1& frac{z-zeta_k}{zeta_k gamma_k T(z,z_0)^{-2} mathrm{e}^{tPhi(zeta_k)} } 0 &1end{pmatrix}, qquad |z-zeta_k|=varepsilon, tilde{v}(z) &= begin{pmatrix}1& 0 frac{zeta_k z -1}{zeta_k z gamma_k T(z,z_0)^2 mathrm{e}^{tPhi(zeta_k)}}&1end{pmatrix}, qquad |z^{-1}- zeta_k|=varepsilon, endaligned and corresponding lambda_k ge frac{1}{2}(zeta_0+zeta_0^{-1}) (if any) by aligned tilde{v}(z) &= begin{pmatrix} 1 & 0 frac{zeta_k gamma_k T(z,z_0)^{-2} mathrm{e}^{tPhi(zeta_k)}}{z-zeta_k} & 1 end{pmatrix}, qquad |z-zeta_k|=varepsilon, tilde{v}(z) &= begin{pmatrix} 1 & frac{z gamma_k T(z,z_0)^2 mathrm{e}^{tPhi(zeta_k)}}{z-zeta_k^{-1}} 0 & 1 end{pmatrix}, qquad |z^{-1}-zeta_k|=varepsilon. endaligned In particular, an investigation of the sign of mathrm{Re}(Phi(z)) shows that all off-diagonal entries of these jump matrices, except for possibly one if zeta_{k_0}=zeta_0 for some k_0, are exponentially decreasing. In this case we will keep the pole condition which now reads aligned res_{zeta_{k_0}} tilde{m}(z) &= lim_{ztozeta_{k_0}} tilde{m}(z) begin{pmatrix} 0 & 0 - zeta_{k_0} gamma_{k_0} T(zeta_{k_0},z_0)^{-2} mathrm{e}^{tPhi(zeta_{k_0})} & 0 end{pmatrix}, res_{zeta_{k_0}^{-1}} tilde{m}(z) &= lim_{ztozeta_{k_0}^{-1}} tilde{m}(z) begin{pmatrix} 0 & zeta_{k_0}^{-1} gamma_{k_0} T(zeta_{k_0},z_0)^{-2} mathrm{e}^{tPhi(zeta_{k_0})} 0 & 0 end{pmatrix}. endaligned Furthermore, the jump along mathbb{T} is given by tilde{v}(z) = begin{cases} tilde{b}_-(z)^{-1} tilde{b}_+(z), qquad lambda(z)> lambda_0, tilde{B}_-(z)^{-1} tilde{B}_+(z), qquad lambda(z)< lambda_0, end{cases} where tilde{b}_-(z) = begin{pmatrix} 1 & frac{R(z^{-1}) mathrm{e}^{-tPhi(z)}}{T(z^{-1},z_0)^2} 0 & 1 end{pmatrix}, quad tilde{b}_+(z) = begin{pmatrix} 1 & 0 frac{R(z) mathrm{e}^{tPhi(z)}}{T(z,z_0)^2}& 1 end{pmatrix}, and tilde{B}_-(z) = begin{pmatrix} 1 & 0 - frac{T_-(z,z_0)^{-2}}{1-|R(z)|^2} R(z) mathrm{e}^{tPhi(z)} & 1 end{pmatrix}!, quad tilde{B}_+(z) = begin{pmatrix} 1 & - frac{T_+(z,z_0)^2}{1-|R(z)|^2} overline{R(z)} mathrm{e}^{-tPhi(z)} 0 & 1 end{pmatrix}!. 
Now the jump along mathbb{T} can also be made arbitrarily small following the nonlinear steepest descent method developed by Deift and Zhou : Split the Fourier transform of the reflection coefficient into a part which has an analytic extension to a neighborhood of the unit circle plus a small error. Move the analytic part away from the unit circle using the factorizations from above. Since the Fourier coefficients decay faster than any polynomial, the errors from both parts can be made O(t^{-l}) for any linmathbb{N}. We refer to respectively for details. Hence we can apply Theorem [thm:remcontour] as follows: If |frac{n}{t} - c_k|>varepsilon for all k we can choose gamma_0=0 and w_0^tequiv 0. Since the error between w^t and w_0^t is exponentially small, this proves the second part of Theorem [thm:asym] upon comparing m(z) = hat{m}(z) begin{pmatrix} T(z,z_0) & 0 0 & T(z,z_0)^{-1} end{pmatrix} with [eq:AB]. Otherwise, if |frac{n}{t} - c_k|<varepsilon for some k, we choose gamma_0^t=gamma_k(n,t) and w_0^t equiv 0. Again we conclude that the error between w^t and w_0^t is exponentially small, proving the first part of Theorem [thm:asym]. Singular integral equations In this section we show how to transform a meromorphic vector Riemann–Hilbert problem with simple poles at zeta, zeta^{-1}, begin{aligned} nonumber & m_+(z) = m_-(z) v(z), qquad zin Sigma, label{eq:rhp5m} & res_{zeta} m(z) = lim_{ztozeta} m(z) begin{pmatrix} 0 & 0 - zeta gamma& 0 end{pmatrix},quad res_{zeta^{-1}} m(z) &= lim_{ztozeta^{-1}} m(z) begin{pmatrix} 0 & zeta^{-1} gamma 0 & 0 end{pmatrix}, nonumber & m(z^{-1}) = m(z) begin{pmatrix} 0 & 1 1 & 0 end{pmatrix}, nonumber & m(0) = begin{pmatrix} 1 & m_2end{pmatrix},end{aligned} where zetain(-1,0)cup(0,1) and gammage 0, into a singular integral equation. Since we require the symmetry condition [eq:symcond] for our Riemann–Hilbert problems we need to adapt the usual Cauchy kernel to preserve this symmetry. Moreover, we keep the single soliton as an inhomogeneous term which will play the role of the leading asymptotics in our applications. [hyp:sym] Let Sigma consist of a finite number of smooth oriented finite curves in mathbb{C} which intersect at most finitely many times with all intersections being transversal. Assume that the contour Sigma does not contain 0, zeta and is invariant under zmapsto z^{-1}. It is oriented such that under the mapping zmapsto z^{-1} sequences converging from the positive sided to Sigma are mapped to sequences converging to the negative side. Moreover, suppose the jump matrix v can be factorized according to v = b_-^{-1} b_+ = (mathbb{I}-w_-)^{-1}(mathbb{I}+w_+), where w_pm = pm(b_pm-mathbb{I}) are continuous and satisfy label{eq:wpmsym} w_pm(z^{-1}) = begin{pmatrix} 0 & 1 1 & 0 end{pmatrix}w_mp(z) begin{pmatrix} 0 & 1 1 & 0 end{pmatrix},quad zinSigma. The classical Cauchy-transform of a function f:Sigmato mathbb{C} which is square integrable is the analytic function C f: mathbb{C}backslashSigmatomathbb{C} given by C f(z) = frac{1}{2pimathrm{i}} int_{Sigma} frac{f(s)}{s - z} ds,qquad zinmathbb{C}backslashSigma. Denote the non-tangential boundary values from both sides (taken possibly in the L^2-sense — see e.g. ) by C_+ f respectively C_- f. Then it is well-known that C_+ and C_- are bounded operators L^2(Sigma)to L^2(Sigma), which satisfy C_+ - C_- = mathbb{I} and C_+ C_- = 0 (see e.g. ). 
Moreover, one has the Plemelj–Sokhotsky formula () C_pm = frac{1}{2} (mathrm{i}H pm mathbb{I}), where H f(t) = frac{1}{pi} mathchoice{{setbox 0=hbox{displaystyle{textstyle-}{int}} vcenter{hbox{textstyle-}}kern-.5wd 0}}%{{setbox 0=hbox{textstyle{scriptstyle-}{int}} vcenter{hbox{scriptstyle-}}kern-.5wd 0}}%{{setbox 0=hbox{scriptstyle{scriptscriptstyle-}{int}} vcenter{hbox{scriptscriptstyle-}}kern-.5wd 0}}%{{setbox 0=hbox{scriptscriptstyle{scriptscriptstyle-}{int}} vcenter{hbox{scriptscriptstyle-}}kern-.5wd 0}}% !int_Sigma frac{f(s)}{t-s} ds,qquad tinSigma, is the Hilbert transform and mathchoice{{setbox 0=hbox{displaystyle{textstyle-}{int}} vcenter{hbox{textstyle-}}kern-.5wd 0}}%{{setbox 0=hbox{textstyle{scriptstyle-}{int}} vcenter{hbox{scriptstyle-}}kern-.5wd 0}}%{{setbox 0=hbox{scriptstyle{scriptscriptstyle-}{int}} vcenter{hbox{scriptscriptstyle-}}kern-.5wd 0}}%{{setbox 0=hbox{scriptscriptstyle{scriptscriptstyle-}{int}} vcenter{hbox{scriptscriptstyle-}}kern-.5wd 0}}% !int denotes the principal value integral. In order to respect the symmetry condition we will restrict our attention to the set L^2_{s}(Sigma) of square integrable functions f:Sigmatomathbb{C}^{2} such that label{eq:sym} f(z^{-1}) = f(z) begin{pmatrix} 0 & 1 1 & 0 end{pmatrix}. Clearly this will only be possible if we require our jump data to be symmetric as well (i.e., Hypothesis [hyp:sym] holds). Next we introduce the Cauchy operator (C f)(z) = frac{1}{2pimathrm{i}} int_Sigma f(s) Omega_zeta(s,z) acting on vector-valued functions f:Sigmatomathbb{C}^{2}. Here the Cauchy kernel is given by Omega_{zeta}(s,z) = begin{pmatrix} frac{z-zeta^{-1}}{s-zeta^{-1}} frac{1}{s-z} & 0 0 & frac{z-zeta}{s-zeta} frac{1}{s-z} end{pmatrix} ds = begin{pmatrix} frac{1}{s-z} - frac{1}{s-zeta^{-1}} & 0 0 & frac{1}{s-z} - frac{1}{s-zeta} end{pmatrix} ds, for some fixed zetanotinSigma. In the case zeta=infty we set Omega_{infty}(s,z) = begin{pmatrix} frac{1}{s-z} - frac{1}{s} & 0 0 & frac{1}{s-z} end{pmatrix} ds. and one easily checks the symmetry property: label{eq:symC} Omega_zeta(1/s,1/z) = begin{pmatrix} 0 & 1 1 & 0 end{pmatrix}Omega_zeta(s,z) begin{pmatrix} 0 & 1 1 & 0 end{pmatrix}. The properties of C are summarized in the next lemma. Assume Hypothesis [hyp:sym]. The Cauchy operator C has the properties, that the boundary values C_pm are bounded operators L^2_s(Sigma) to L^2_s(Sigma) which satisfy label{eq:cpcm} C_+ - C_- = mathbb{I} and label{eq:Cnorm} (Cf)(zeta^{-1}) = (0quadast), qquad (Cf)(zeta) = (astquad 0). Furthermore, C restricts to L^2_{s}(Sigma), that is (C f) (z^{-1}) = (Cf)(z) begin{pmatrix} 0 & 1 1 & 0 end{pmatrix},quad zinmathbb{C}backslashSigma for fin L^2_{s}(Sigma) and if w_pm satisfy (H.[hyp:sym]) we also have label{eq:symcpm} C_pm(f w_mp)(1/z) = C_mp(f w_pm)(z) begin{pmatrix} 0 & 1 1 & 0 end{pmatrix},quad zinSigma. Everything follows from [eq:symC] and the fact that C inherits all properties from the classical Cauchy operator. We have thus obtained a Cauchy transform with the required properties. Following Section 7 and 8 of, we can solve our Riemann–Hilbert problem using this Cauchy operator. 
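The two displayed forms of the kernel $\Omega_\zeta(s,z)$, and the evaluation property [eq:Cnorm] above, both rest on the elementary partial-fraction identity (spelled out here for convenience; it is not written out in the original text):
\[
\frac{z-\zeta^{-1}}{s-\zeta^{-1}}\,\frac{1}{s-z} = \frac{1}{s-z} - \frac{1}{s-\zeta^{-1}},
\]
as one checks by bringing the right-hand side to a common denominator (and similarly with $\zeta^{-1}$ replaced by $\zeta$ in the second entry). In particular, the first entry of $\Omega_\zeta(s,z)$ vanishes at $z=\zeta^{-1}$ and the second at $z=\zeta$, which is exactly why $(Cf)(\zeta^{-1})$ has vanishing first component and $(Cf)(\zeta)$ vanishing second component.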
Introduce the operator C_w: L_s^2(Sigma)to L_s^2(Sigma) by C_w f = C_+(fw_-) + C_-(fw_+),quad fin L^2_s(Sigma) and recall from Lemma [lem:singlesoliton] that the unique solution corresponding to vequiv mathbb{I} is given by m_0(z)= begin{pmatrix} f(z) & f(frac{1}{z}) end{pmatrix}, quad f(z) = frac{1}{1 - zeta^2 + gamma} left(gamma zeta^2 frac{z-zeta^{-1}}{z - zeta} + 1 - zeta^2right) Observe that for gamma=0 we have f(z)=1 and for gamma=infty we have f(z)= zeta^2 frac{z-zeta^{-1}}{z - zeta}. In particular, m_0(z) is uniformly bounded away from zeta for all gammain[0,infty]. Then we have the next result. [thm:cauchyop] Assume Hypothesis [hyp:sym]. Suppose m solves the Riemann–Hilbert problem [eq:rhp5m]. Then label{eq:mOm} m(z) = (1-c_0) m_0(z) + frac{1}{2pimathrm{i}} int_Sigma mu(s) (w_+(s) + w_-(s)) Omega_zeta(s,z), where mu = m_+ b_+^{-1} = m_- b_-^{-1} quadmbox{and}quad c_0= left( frac{1}{2pimathrm{i}} int_Sigma mu(s) (w_+(s) + w_-(s)) Omega_zeta(s,0) right)_{!1}. Here (m)_j denotes the j’th component of a vector. Furthermore, mu solves label{eq:sing4muc} (mathbb{I}- C_w) mu = (1-c_0) m_0(z). Conversely, suppose tilde{mu} solves label{eq:sing4mu} (mathbb{I}- C_w) tilde{mu} = m_0(z), and tilde{c}_0= left( frac{1}{2pimathrm{i}} int_Sigma tilde{mu}(s) (w_+(s) + w_-(s)) Omega_zeta(s,0) right)_{!1} ne -1, then m defined via [eq:mOm], with (1-c_0)=(1+tilde{c}_0)^{-1} and mu=(1+tilde{c}_0)^{-1}tilde{mu}, solves the Riemann–Hilbert problem [eq:rhp5m] and mu= m_pm b_pm^{-1}. This can be shown as in the non-symmetric case (cf. e.g. ). Note that in the special case gamma=0 we have m_0(z)= begin{pmatrix} 1 & 1 end{pmatrix} and we can choose zeta as we please, say zeta=infty such that c_0=tilde{c}_0=0 in the above theorem. Hence we have a formula for the solution of our Riemann–Hilbert problem m(z) in terms of (mathbb{I}- C_w)^{-1} m_0 and this clearly raises the question of bounded invertibility of mathbb{I}- C_w. This follows from Fredholm theory (cf. e.g. ): Assume Hypothesis [hyp:sym]. The operator mathbb{I}-C_w is Fredholm of index zero, mathrm{ind}(mathbb{I}-C_w) =0. By the Fredholm alternative, it follows that to show the bounded invertibility of mathbb{I}-C_w we only need to show that ker (mathbb{I}-C_w) =0. The latter being equivalent to unique solvability of the corresponding vanishing Riemann–Hilbert problem in the case gamma=0 (where we can choose zeta=infty such that c_0=tilde{c}_0=0). Assume Hypothesis [hyp:sym]. A unique solution of the Riemann–Hilbert problem [eq:rhp5m] with gamma=0 exists if and only if the corresponding vanishing Riemann–Hilbert problem, where the normalization condition is replaced by m(0)= begin{pmatrix} 0 & m_2end{pmatrix}, with m_2 arbitrary, has at most one solution. We are interested in comparing two Riemann–Hilbert problems associated with respective jumps w_0 and w with |w-w_0|_infty small, where |w|_infty= |w_+|_{L^infty(Sigma)} + |w_-|_{L^infty(Sigma)}. For such a situation we have the following result: [thm:remcontour] Assume that for some data zeta_0, gamma_0^t, w_0^t the operator mathbb{I}-C_{w_0^t}: L^2_s(Sigma) to L^2_s(Sigma) has a bounded inverse, where the bound is independent of t, and suppose that the corresponding tilde{c}_{0,0}^t is away from -1 again uniformly in t. Furthermore, let zeta=zeta_0, gamma^t=gamma_0^t and assume w^t satisfies |w^t - w_0^t|_infty leq alpha(t) for some function alpha(t) to 0 as ttoinfty. 
Then (mathbb{I}-C_{w^t})^{-1}: L^2_s(Sigma)to L^2_s(Sigma) also exists for sufficiently large t and the associated solutions of the Riemann–Hilbert problems [eq:rhp5m] only differ by O(alpha(t)) uniformly in z away from Sigma cup {zeta,zeta^{-1}}. By boundedness of the Cauchy transform, one has |(C_{w^t} - C_{w_0^t})| leq const |w|_infty. Thus, by the second resolvent identity, we infer that (mathbb{I}-C_{w^t})^{-1} exists for large t and |(mathbb{I}-C_{w^t})^{-1}-(mathbb{I}-C_{w_0^t})^{-1}| = O(alpha(t)). From which the claim follows since this implies |tilde{c}_0^t - tilde{c}_{0,0}^t| = O(alpha(t)) respectively |mu^t - mu_0^t|_{L^2} = O(alpha(t)) and thus m^t(z) - m_0^t(z) = O(alpha(t)) uniformly in z away from Sigma cup {zeta,zeta^{-1}}. We thank S. Kamvissis for several helpful discussions and I. Egorova, K. Grunert and A. Mikikits-Leitner for pointing out errors in a previous version of this article. 1. Research supported by the Austrian Science Fund (FWF) under Grant No. Y330.↩ 2. Math. Z. 262, 585–602 (2009)↩ What is the purpose of introducing the partial transmission coefficient T(z,z_0)?
answer:Based on the document, the purpose of introducing the partial transmission coefficient T(z,z_0) is: 1) It satisfies a scalar meromorphic Riemann-Hilbert problem that isolates the poles and zeros corresponding to solitons with velocities less than a threshold set by zeta_0. This allows handling these solitons separately from the others. 2) It is used to construct the transformation matrix D(z) that conjugates the vector Riemann-Hilbert problem to one without poles. Specifically, it cancels the poles/zeros in the original problem while introducing exponentially decaying jumps away from the unit circle. 3) After this conjugation, the solution corresponding to solitons with velocities greater than the threshold is kept as a pole condition. The remaining problem no longer has solitons below the velocity threshold. So in summary, T(z,z_0) serves to separate faster solitons from slower ones, so that the slower solitons can be fully eliminated from the problem using conjugation. This appears to be a key step in the asymptotic analysis.
question: Among the real numbers $\sqrt{3}$, $-\frac{1}{3}$, $-2$, and $0$, the smallest real number is ( ). A: $\sqrt{3}$ B: $-\frac{1}{3}$ C: $-2$ D: $0$
answer: To determine the smallest real number among $\sqrt{3}$, $-\frac{1}{3}$, $-2$, and $0$, we compare them based on their values on the number line.

1. First, we observe that $\sqrt{3}$ is a positive number greater than 1, and $0$ is neutral, meaning it is neither positive nor negative.
2. Next, we compare the negative numbers $-\frac{1}{3}$ and $-2$. To understand their relationship, we consider the absolute values of these numbers. The absolute value of $-\frac{1}{3}$ is $\frac{1}{3}$, and the absolute value of $-2$ is $2$.
3. We know that $\frac{1}{3} < 2$, which implies that when considering their positions on the number line as negative numbers, $-\frac{1}{3} > -2$, because a number with a smaller absolute value is closer to $0$ on the number line.
4. Therefore, we can arrange the numbers in ascending order as follows: $-2 < -\frac{1}{3} < 0 < \sqrt{3}$.
5. From this arrangement, it is clear that $-2$ is the smallest number among the given options.

Thus, the smallest real number is $-2$. Therefore, the correct answer is $\boxed{C}$.

**Analysis:** This solution relies on understanding the properties of real numbers, specifically how their values compare on the number line. Positive numbers are greater than 0, and among negative numbers, the one with the greater absolute value is actually smaller when comparing their positions on the number line.

**Comments:** This question tests the ability to compare real numbers. The key to solving the problem is recognizing that among two negative numbers, the one with the greater absolute value is actually smaller, which is a fundamental concept in understanding the order of real numbers.
question:How can I write a Java program to find the second largest element in an array using a generic method?
answer: You can achieve this by implementing a generic method in Java. Here's an example that demonstrates how to find the second largest element in an array:

```java
public class SecondLargestElement {

    static <T extends Comparable<T>> void findSecondLargest(T[] arr) {
        T first = arr[0];   // largest element seen so far
        T second = null;    // second largest seen so far (none yet)

        for (int i = 1; i < arr.length; i++) {
            if (arr[i].compareTo(first) > 0) {
                // New maximum: the old maximum becomes the second largest.
                second = first;
                first = arr[i];
            } else if (arr[i].compareTo(first) < 0
                    && (second == null || arr[i].compareTo(second) > 0)) {
                // Strictly smaller than the maximum but larger than the current runner-up.
                second = arr[i];
            }
        }

        if (second == null) {
            System.out.println("There is no second largest element");
        } else {
            System.out.println("The second largest element is " + second);
        }
    }

    public static void main(String[] args) {
        // Example usage with an array of integers
        Integer[] arr = {1, 3, 2, 6, 5, 7};
        findSecondLargest(arr);
    }
}
```

In this code, we define a generic method `findSecondLargest` that takes an array `arr` as its parameter. The generic type `T` must extend the `Comparable` interface, allowing us to compare elements using the `compareTo` method (using `==` would compare object references rather than values).

The method initializes `first` with the first element of the array and `second` with `null`, meaning no second-largest candidate has been found yet. It then iterates over the remaining elements. If an element is greater than `first`, it becomes the new `first` and the previous `first` becomes the new `second`. If an element is strictly smaller than `first` but greater than the current `second` (or `second` is still `null`), it becomes the new `second`. Elements equal to the maximum are skipped, so duplicates of the largest value do not count as the second largest.

After the iteration, if `second` is still `null`, then all elements were equal (or the array had only one element) and there is no second largest element. Otherwise, we print the value of `second`.

In the provided example usage, we create an array of integers and pass it to the `findSecondLargest` method, which prints 6 as the second largest element.
question:What is the index of 7 in the list [1, 2, 3, 4, 5, 6, 8, 9, 10] and the index of 11 in the same list?
answer:<tool_call>[{name: binary_search, arguments: {arr: [1, 2, 3, 4, 5, 6, 8, 9, 10], target: 7}}, {name: binary_search, arguments: {arr: [1, 2, 3, 4, 5, 6, 8, 9, 10], target: 11}}]</tool_call>