New procedure to generate multipliers in complex Neumann problem and effective Kohn algorithm

The purpose of this paper is threefold. (i) To explain the effective Kohn algorithm for multipliers in the complex Neumann problem and its difference from the full-real-radical Kohn algorithm, especially in the context of an example of Catlin-D'Angelo concerning the ineffectiveness of the latter. (ii) To extend the techniques of multiplier ideal sheaves for the complex Neumann problem to general systems of partial differential equations. (iii) To present a new procedure of generation of multipliers in the complex Neumann problem as a special case of the multiplier ideal sheaf techniques for general systems of partial differential equations.


Introduction
The purpose of this paper is threefold. (i) To explain the effective Kohn algorithm for multipliers in the complex Neumann problem and its difference from the full-real-radical Kohn algorithm, especially in the context of an example of Catlin-D'Angelo concerning the ineffectiveness of the latter. (ii) To extend the techniques of multiplier ideal sheaves for the complex Neumann problem to general systems of partial differential equations. (iii) To present a new procedure of generation of multipliers in the complex Neumann problem as a special case of the multiplier ideal sheaf techniques for general systems of partial differential equations.
For a priori estimates in the theory of partial differential equations, some of the standard techniques are the following. (i) Using integration by parts to get L² estimates of derivatives in certain directions; for example, L² estimates of all first-order partial derivatives of a function with compact support follow from applying integration by parts to its inner product with its Laplacian. (ii) Using the Lie bracket of two vector fields to conclude, from given derivative estimates of fractional orders along each of the two vector fields, derivative estimates of lower fractional order along their Lie bracket; for example, Hörmander's work on the sum of squares of vector fields [6].
The technique of multiplier ideal sheaves introduced by Kohn for the complex Neumann problem is a new method to conclude, from L² estimates of derivatives along certain complex-valued vector fields, derivative estimates of lower fractional orders in all directions by introducing the notion of multipliers. In a system of partial differential equations where the estimate is for a vector-valued test function with components ψ_ν, for given complex-valued vector fields Y_j, when the estimates for several linear combinations ∑_{j,ν} ρ_{j,ν} Y_j ψ_ν with some given smooth functions ρ_{j,ν} are known, the multipliers are the smooth coefficients a_ν of a linear combination ∑_ν a_ν ψ_ν such that there is an estimate of the Sobolev L² norm of ∑_ν a_ν ψ_ν of some positive fractional order. The goal is to derive differential relations among the multipliers to obtain some geometric condition to solve the regularity problem for the given system of partial differential equations, by using the differential relations and some initial multipliers to conclude that the function which is identically 1 is a scalar multiplier.
For Kohn's original setting of a bounded weakly pseudoconvex domain Ω in C^n, the vector fields Y_j are the vector fields L_1, . . . , L_{n−1} of type (1, 0) tangential to the boundary ∂Ω of Ω together with their complex conjugates L̄_1, . . . , L̄_{n−1}. The test function is a (0, 1)-form in the domains of ∂̄ and ∂̄* whose n − 1 tangential components are φ_1, . . . , φ_{n−1}. The given linear combinations ∑_{j,ν} ρ_{j,ν} Y_j ψ_ν are the (n − 1)² + 1 linear combinations L̄_j φ_ν (for 1 ≤ j, ν ≤ n − 1) and ∑_{ν=1}^{n−1} L_ν φ_ν.

The structure of this note is as follows. We start out with the background and motivation for the technique of multiplier ideal sheaves. We then discuss the two Kohn algorithms of generating multipliers. One is the full-real-radical Kohn algorithm. The other is the effective Kohn algorithm. We explain the algebraic geometric techniques in the effective Kohn algorithm, with special attention paid to the effectiveness of the orders of subellipticity in each step. The effective Kohn algorithm is then applied to Catlin-D'Angelo's example to highlight the difference in effectiveness between the full-real-radical Kohn algorithm and the effective Kohn algorithm. We present the generalization of the technique of multipliers to general systems of partial differential equations. Finally, we apply the generalized techniques to the complex Neumann problem to obtain a new procedure to generate vector multipliers from matrix multipliers for special domains.
The notations N, R, and C mean respectively the set of all positive integers, all real numbers, and all complex numbers. The notations O_{C^n,P} and m_{C^n,P} mean respectively the ring of all holomorphic germs on C^n at P and the maximum ideal of the local ring O_{C^n,P}. Unless specified otherwise, ∥·∥ denotes the L² norm and (·,·) denotes the L² inner product. The notations ∥·∥_{L²(U)} and (·,·)_{L²(U)} are also used to specify more clearly that they are the L² norm and the L² inner product over U. The notation L²_k means the Sobolev norm defined by using the L² norms of derivatives up to order k. The notations ∂_j and ∂̄_j mean respectively ∂/∂z_j and ∂/∂z̄_j, where z_1, . . . , z_n are the coordinates of C^n.

Background and motivation for multiplier ideal sheaves
In regularity problems of local or global systems of partial differential equations, multiplier ideal sheaves describe the location and the extent of the failure of a priori estimates. There are two ways to introduce such multiplier ideal sheaves. The first way is from the continuity method of solving partial differential equations (which are usually nonlinear partial differential equations defined on compact Riemannian manifolds), where a multiplier ideal sheaf arises as the limit of rescaling of local coordinate charts to make possible the use of Ascoli-Arzela techniques for convergence. The second way is just to directly introduce multiplier ideal sheaves as factors required in the integral norms to make a priori estimates hold for the regularity problem. We very briefly describe both.

Multiplier ideal sheaves from limit of rescaling of local coordinate charts
One method to solve nonlinear partial differential equations is the continuity method, which uses a family of partial differential equations parameterized by t ∈ [0, 1] so that (i) t = 1 is the given partial differential equation, (ii) t = 0 can be solved (solvability of the partial differential equation at the initial parameter), and (iii) from the solution for t = t_0 < 1 it is possible to solve for t < t_0 + ε for some ε > 0 (the openness property).
The difficulty is to prove the closedness property: that for 0 < t* ≤ 1, solutions s_ν for the parameter values t = t_ν with t_ν ↗ t* can be used to construct a solution s for the parameter value t = t*.
The natural approach is to obtain s by taking the limit of s_{ν_k} for some subsequence {ν_k} of the sequence {ν}. Usually boundedness in some weak norm for s_ν, for example L², can be derived from the setup of the given partial differential equation. The method of Ascoli-Arzela calls for boundedness in some stronger norm for s_ν, for example the norm L²_1 defined by the first-order derivatives being L². In the setting of a manifold the derivative involved in L²_1 depends on the choice of local coordinates, so that L²_1 is determined only up to equivalence (of sandwiching between its products with two constants). One can always cheat by using a different rescaling of local coordinates for each s_ν, but one has to pay the price that at the end the variable rescaling of local coordinates results in a factor (or multiplier), which is the limit of the Jacobian determinants of the coordinate changes in the integral for L².
The limit s of s_{ν_k} from the Ascoli-Arzela argument is L² only after the insertion of the multiplier in its L² integral. This can be interpreted as transforming the requirement in the method of Ascoli-Arzela for two norms L² and L²_1 (with a difference in the orders of differentiation in their definitions) to two norms which are the L² norm and the multiplier-modified L²_1 norm. Multipliers form an ideal sheaf, called the multiplier ideal sheaf. Global conditions, such as topological conditions, can be used to conclude that the multiplier ideal sheaf must be the full structure sheaf, giving the solvability of the partial differential equation from global conditions. Examples are (i) the existence of Hermitian-Einstein metrics for stable holomorphic vector bundles over compact algebraic (or Kähler) manifolds, where the method of limit of rescaling of local coordinates was first applied by Donaldson [5] for the surface case, and (ii) the existence of Kähler-Einstein metrics for certain Fano manifolds, where the method of multiplier ideal sheaves defined by taking limits of metrics of the anti-canonical line bundle was first applied by Nadel [11].
We now look at the second way of introducing multiplier ideal sheaves, which is best described by using, as an example, the problem of the regularity of the Kohn solution for the complex Neumann problem on a bounded weakly pseudoconvex domain in C n with smooth boundary.

Regularity problem of Kohn solution for complex Neumann problem on weakly pseudoconvex domain
Let Ω be a bounded weakly pseudoconvex domain in C^n with smooth boundary ∂Ω. It means that there is a smooth function r defined on some open neighborhood W of ∂Ω in C^n such that Ω ∩ W = {r < 0}, the differential dr is nowhere zero on ∂Ω, and the Levi form ∂∂̄r is positive semidefinite on T^{1,0}(∂Ω), which is the bundle of all tangent vectors ξ of C^n of type (1, 0) at points of ∂Ω which are tangential to ∂Ω (in the sense that ξ(r) = 0).
For notational simplicity, we will consider only the complex Neumann problem in the case of (0, 1)-forms (instead of (0, p)-forms for 1 ≤ p ≤ n). For a ∂̄-closed (0, 1)-form f on Ω which is smooth on the closure Ω̄ of Ω, the Kohn solution u for the ∂̄-equation ∂̄u = f on Ω means the unique smooth function u on Ω such that ∂̄u = f on Ω and u is perpendicular to all L² holomorphic functions on Ω with respect to the usual Euclidean volume form of C^n.
The regularity problem of the Kohn solution for the complex Neumann problem on a bounded weakly pseudoconvex domain with smooth boundary is to study under what additional assumption on Ω the Kohn solution u is always smooth on Ω̄ when the given (0, 1)-form f is smooth on Ω̄.

Regularity from subelliptic estimate
The subelliptic estimate of order ε > 0 is said to hold at a point P of ∂Ω if there exist an open neighborhood U of P in C^n and a positive constant C satisfying

∥|φ|∥²_ε ≤ C (∥∂̄φ∥² + ∥∂̄*φ∥² + ∥φ∥²)

for all smooth (0, 1)-forms φ on U ∩ Ω̄ with compact support which belong to the domain of the actual adjoint ∂̄* of ∂̄ (with respect to the usual L² inner product), where (i) ∥|·|∥_ε is the Sobolev L² norm on Ω involving derivatives up to order ε in the boundary tangential directions of Ω, and (ii) ∥·∥ is the usual L² norm on Ω without involving any derivatives. See [9, p. 92, (3.4)] for a detailed definition of the Sobolev norm ∥|·|∥_ε. In this note, we also use the notation Λ^s introduced in [9, p. 92, (3.3)], which is the pseudo-differential operator corresponding to the (s/2)-th power of 1 plus the Laplacian in tangential coordinates of the boundary of Ω.
We would like to comment on the reason for the use of the Sobolev norm ∥|·|∥_ε in whose definition only derivatives along the tangent directions of ∂Ω are used. The condition for a smooth (0, 1)-form g on Ω̄ to belong to the domain of the actual adjoint ∂̄* of ∂̄ (instead of the formal adjoint of ∂̄) is that the component of g normal to ∂Ω vanishes at all points of ∂Ω (which, in other words, means that the pointwise inner product of g and ∂̄r vanishes at every point of ∂Ω). When g is in the domain of ∂̄*, the derivative of g along tangent directions of ∂Ω still belongs to the domain of ∂̄*, but if g is differentiated in the normal direction of ∂Ω, the result in general will no longer belong to the domain of ∂̄*. That is the reason why we would like to avoid using differentiation along the normal direction of ∂Ω in the Sobolev norm of order ε adopted in the definition of the subelliptic estimate.
Sobolev norms involving derivatives are used to enable us to conclude the order of differentiability of the solution u of the ∂̄-equation ∂̄u = f from the order of differentiability of the right-hand side f of the equation. Though we do not include the differentiation in the normal direction of ∂Ω in the Sobolev norm used, from the order of differentiability of the solution u along the tangent directions of ∂Ω we can still conclude the order of differentiability of u along the normal direction of ∂Ω, because the equation ∂̄u = f itself provides us directly the differentiability of u along the real normal direction of ∂Ω from the differentiability of u along the (0, 1) component of the complex normal direction of ∂Ω.
It was proved by Kohn and Nirenberg [10, p. 458, Theorem 4] in 1965 that if for some ε > 0 the subelliptic estimate of order ε holds at every point of ∂Ω, then the smoothness on Ω̄ of the Kohn solution u of ∂̄u = f follows from the smoothness of the ∂̄-closed (0, 1)-form f on Ω̄.

Multipliers to measure location and extent of failure of a priori estimates
A smooth function germ F at a point P of ∂Ω (defined on some open neighborhood U_F of P in C^n) is called a scalar multiplier if for some positive number ε_F > 0 and some positive constant C_F the subelliptic estimate of order ε_F, modified by the factor F,

∥|Fφ|∥²_{ε_F} ≤ C_F (∥∂̄φ∥² + ∥∂̄*φ∥² + ∥φ∥²),    (1.1)

holds for all smooth (0, 1)-forms φ on U_F ∩ Ω̄ with compact support which belong to the domain of the actual adjoint ∂̄* of ∂̄. We say that the order of subellipticity for the scalar multiplier F is greater than or equal to ε_F. We are only interested in an effective lower bound for the order of subellipticity ε_F and will not study the supremum of all possible such ε_F. The collection of smooth function germs at P which are scalar multipliers at P forms an ideal. This ideal is called the multiplier ideal at P and is denoted by I_P. The precise definition of the multiplier ideal makes precise the intuitive motivation that the direction and the order of vanishing of the multiplier ideal measure the location and extent of the failure of the a priori subelliptic estimate.
The test function φ which is multiplied by the scalar multiplier F to yield the modified subelliptic estimate (1.1) is not a scalar; it is a (0, 1)-form with n components (of which the normal component is 0). It is possible to get more information by using vector multipliers instead of just scalar multipliers. A smooth germ of (1, 0)-form θ at a point P of ∂Ω (defined on some open neighborhood U_θ of P in C^n) is called a vector multiplier if for some positive number ε_θ > 0 and some positive constant C_θ the subelliptic estimate of order ε_θ for ∥|θ·φ|∥², modified by the dot product with θ (that is, (1.1) with Fφ replaced by θ·φ and (ε_F, C_F) by (ε_θ, C_θ)), holds for all smooth (0, 1)-forms φ on U_θ ∩ Ω̄ with compact support which belong to the domain of the actual adjoint ∂̄* of ∂̄. Here, the dot product θ·φ means the pointwise inner product ⟨φ, θ̄⟩ of the two (0, 1)-forms φ and θ̄ with respect to the usual Euclidean Hermitian inner product of C^n. In other words, if θ = ∑_{j=1}^n θ_j dz_j and φ = ∑_{j=1}^n φ_j dz̄_j (where z_1, . . . , z_n are the global coordinates of C^n), then θ·φ = ∑_{j=1}^n θ_j φ_j. The convention to introduce a vector multiplier θ as a (1, 0)-form and the dot product θ·φ, instead of introducing a (0, 1)-form ψ and the pointwise inner product ⟨φ, ψ⟩, is chosen so that, in the case of a special domain described in (2.1) below, one needs only consider scalar multipliers and vector multipliers which are holomorphic (see Subsection 2.9.3 below).
We say that the order of subellipticity for the vector multiplier θ is greater than or equal to ε θ . Again we are only interested in an effective lower bound for the order of subellipticity ε θ and will not study the supremum of all possible such ε θ .
The collection of smooth germs of (1, 0)-forms at P which are vector multipliers at P forms a module over the algebra of all smooth function germs at P. This module is called the module of vector multipliers at P and is denoted by A_P.

Generation of multipliers in Kohn's algorithm
It is easy to define multipliers to describe the location and the extent of failure of subelliptic estimates, but it is difficult to use multipliers to find easily verifiable conditions on the weakly pseudoconvex domain Ω to obtain subelliptic estimates and thereby the smoothness of the Kohn solution u on Ω̄ from the smoothness of the right-hand side f on Ω̄. The most important part of the theory of multipliers for the complex Neumann problem is the generation of scalar and vector multipliers by Kohn's algorithm, which makes it possible to study subelliptic estimates from the geometric condition (known as finite type) of the finiteness of the maximum normalized order of contact between the boundary ∂Ω of Ω and local holomorphic curves f : ∆ → C^n (where ∆ is the open unit disk in C). Here, the normalized order of contact means the vanishing order of the pullback by f of the defining function r of ∂Ω divided by the vanishing order of f. The precise definition of the finite type condition and its history will be given in Subsection 2.3 below.

Kohn's algorithm to generate scalar and vector multipliers
The following three procedures constitute the Kohn algorithm of generating scalar and vector multipliers at a boundary point P of ∂Ω.
(A) Initial multipliers.
(i) The function r belongs to the ideal of multipliers I_P at P. The order of subellipticity for the scalar multiplier r is greater than or equal to 1.
(ii) For any germ of smooth vector field ξ = ∑_{k=1}^n a_k ∂/∂z_k at P of type (1, 0) which is tangential to ∂Ω, the (1, 0)-form (∂∂̄r)⌟ξ̄ (which is the interior product of the (1, 1)-form ∂∂̄r and the (0, 1)-vector field ξ̄) belongs to the module A_P of vector multipliers at P. The order of subellipticity for the vector multiplier (∂∂̄r)⌟∂̄_j is greater than or equal to 1/2 for 1 ≤ j ≤ n − 1 at points P where ∂r is normalized to be dz_n (see [9, p. 97, (4.29)]).
(B) Generation of new multipliers.
(i) If F belongs to I_P with order of subellipticity greater than or equal to ε, then ∂F belongs to A_P with order of subellipticity greater than or equal to ε/2.
(ii) If θ_1, . . . , θ_{n−1} belong to A_P with orders of subellipticity greater than or equal to ε_1, . . . , ε_{n−1} respectively, then the coefficient of θ_1 ∧ · · · ∧ θ_{n−1} ∧ ∂r belongs to I_P with order of subellipticity greater than or equal to min(ε_1, . . . , ε_{n−1}).
(C) Real radical property. If g ∈ I_P and |f|^m ≤ |g| for some m ∈ N, then f ∈ I_P. The order of subellipticity for the scalar multiplier f is greater than or equal to ε/m if the order of subellipticity of g is greater than or equal to ε. For the purpose of discussing the effectiveness of Kohn's algorithm later, we introduce now two terms concerning the radicals of ideals. For an ideal J of smooth function germs at P, we call the ideal of all smooth function germs f such that |f|^m ≤ |g| for some m ∈ N and some g ∈ J the full real radical of J. If q ∈ N is given, we call the ideal of all smooth function germs f such that |f|^q ≤ |g| for some g ∈ J the real radical of root order q.
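For monomial germs the root-order condition can be checked combinatorially: |z^α|^q ≤ |z^β| holds on a neighborhood of 0 exactly when qα_i ≥ β_i for every coordinate i (restricting to the coordinate axes gives the necessity). The following small sketch, with hypothetical helper names not taken from the paper, illustrates the difference between a fixed root order q and the full real radical, where q may be taken arbitrarily large:

```python
def in_real_radical(alpha, beta, q):
    """Monomial test: z^alpha lies in the real radical of root order q of the
    ideal (z^beta) iff |z^alpha|^q <= |z^beta| near 0, i.e. q*alpha >= beta
    componentwise."""
    return all(q * a >= b for a, b in zip(alpha, beta))

def in_full_real_radical(alpha, beta):
    """Full real radical: some root order q in N works; for monomials this only
    requires that z^beta involve no variable absent from z^alpha."""
    return all(a > 0 or b == 0 for a, b in zip(alpha, beta))

# z1*z2 lies in the real radical of root order 2 of (z1^2 * z2^2) ...
print(in_real_radical((1, 1), (2, 2), 2))    # True
# ... but not of root order 1, although it is in the full real radical:
print(in_real_radical((1, 1), (2, 2), 1))    # False
print(in_full_real_radical((1, 1), (2, 2)))  # True
```

The gap between the two tests is precisely what the effectiveness discussion below is about: membership in the full real radical says nothing about how large the root order must be taken.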

Key features of Kohn's algorithm
The subelliptic estimate holds at a point P of ∂Ω when there is a scalar multiplier which is nonzero at P. Kohn's algorithm seeks to reduce the vanishing order of scalar multipliers by differentiation and by root-taking. The procedure of differentiation described in (B) in Subsection 2.1 above allows only certain differential operators to lower the vanishing order of multipliers: only (1, 0)-differentiation is allowed, and only the determinants of coefficients of (1, 0)-differentials of scalar multipliers (from Cramer's rule) can be used to produce new scalar multipliers. The procedure of root-taking described in (C) in Subsection 2.1 above identifies a smooth function germ as a scalar multiplier when a positive integral power of its absolute value is dominated by the absolute value of some known scalar multiplier.

Condition of finite type
The type m at a point P of the boundary of the weakly pseudoconvex domain Ω is the supremum of the normalized touching order ord_0(r∘f)/ord_0(f) to ∂Ω over all local holomorphic curves f : ∆ → C^n with f(0) = P, where ∆ is the open unit 1-disk and ord_0 denotes the vanishing order at the origin 0. The notion of finite type was first introduced by Kohn [7, p. 525, Definition 2.3] in 1972 for the case n = 2, where the formulation is in terms of the nonvanishing of ∂r on the iterated Lie brackets of tangential vector fields to ∂Ω of types (1, 0) and (0, 1). It was extended to the case of a general n by D'Angelo [2, p. 59] in terms of finite algebraic obstructions to the existence of a nontrivial holomorphic complex curve germ in C^n tangential to ∂Ω, and then in the formulation in terms of the normalized touching order in [3, p. 625, Definitions 2.16 and 2.18].
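For a defining function of the special form r = Re w + ∑_j |F_j(z)|² considered later, the normalized touching order of a given curve can be computed symbolically, since along a holomorphic curve f the pullback of |F_j|² vanishes to exactly twice the order of F_j ∘ f. A minimal sketch (the data F_1 = z_1², F_2 = z_2³ and the curve are hypothetical illustrations, not taken from the paper):

```python
import sympy as sp

t, z1, z2 = sp.symbols('t z1 z2')

def ord0(p):
    """Vanishing order at t = 0 of a polynomial in t; None if identically zero."""
    p = sp.expand(p)
    if p == 0:
        return None
    return min(m[0] for m in sp.Poly(p, t).monoms())

def normalized_contact_order(F_list, curve):
    """ord_0(r o f) / ord_0(f) for r = Re(w) + sum_j |F_j(z)|^2 along the
    holomorphic curve f(t) = (z(t), w = 0); |F_j o f|^2 vanishes to twice the
    order of F_j o f, and None encodes infinite order of contact."""
    pulled = [ord0(F.subs([(z1, curve[0]), (z2, curve[1])])) for F in F_list]
    finite = [2 * o for o in pulled if o is not None]
    ord_f = min(o for o in map(ord0, curve) if o is not None)
    return sp.Rational(min(finite), ord_f) if finite else None

# The curve f(t) = (t^3, t^2) against F1 = z1^2, F2 = z2^3:
print(normalized_contact_order([z1**2, z2**3], (t**3, t**2)))  # 6
```

Here the curve (t³, t²) pulls both |z_1²|² and |z_2³|² back to order 12 while itself vanishing to order 2, giving normalized order 6; the type is the supremum of such quotients over all curves.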

Kohn's conjecture
The goal of the theory of multipliers for the complex Neumann problem is to use the procedures of generating new multipliers to prove that the function identically 1 is generated as a multiplier, so that for a bounded weakly pseudoconvex domain Ω of finite type m the subelliptic estimate of some positive order ε > 0 holds, and as a consequence the Kohn solution u of the ∂̄-equation ∂̄u = f on Ω with the right-hand side f smooth on Ω̄ is also smooth on Ω̄. Moreover, there is effectiveness in the use of the procedures to generate new multipliers, so that ε is some explicit function of the type m of the domain Ω and its complex dimension n.

Full-real-radical Kohn algorithm
If the question of effectiveness is to be set aside, the following algorithm can be used for the generation of new multipliers in I P .
(i) We start with the initial member r of I_P and denote by I_P^(0) the ideal generated by this initial scalar multiplier r. Likewise, we start out with the initial members (∂∂̄r)⌟ξ̄ of A_P for all choices of smooth (1, 0)-vector fields ξ tangential to ∂Ω and denote by A_P^(0) the module generated by these initial vector multipliers.
(ii) We use induction on the nonnegative integer ν to define I_P^(ν) and A_P^(ν). We add to A_P^(ν) all the vector multipliers ∂F for F ∈ I_P^(ν), denote by Î_P^(ν) the ideal generated by I_P^(ν) and the scalar multipliers produced from A_P^(ν) by the determinant procedure (B)(ii), and define I_P^(ν+1) as the full real radical of Î_P^(ν). We then hope that if the pseudoconvex domain Ω is of finite type m, the above construction of I_P^(ν) by induction on ν will result in I_P^(ν_m) containing the function 1 for some ν_m which effectively depends on m and n. In other words, the algorithm terminates. In particular, the subelliptic estimate of some positive order ε > 0 holds. However, since there is no control on the order of the root-taking used in going from the ideal Î_P^(ν) to I_P^(ν+1), there is no way for us to conclude that the order ε of the subellipticity proved depends effectively on m and n. For the purpose of our discussion, we call this algorithm of using induction on ν to construct I_P^(ν) with the goal of ending up with 1 ∈ I_P^(ν_m) the full-real-radical Kohn algorithm.

Effective Kohn algorithm
In order to end up with an effective order ε of subellipticity, it will turn out that one needs to modify the full-real-radical Kohn algorithm to use the following effective Kohn algorithm.
(i) The starting point for the effective Kohn algorithm is the same as that for the full-real-radical Kohn algorithm. Again we start with the initial member r of I_P and denote by I_P^(0) the ideal generated by this initial scalar multiplier r. Likewise, we start out with the initial members (∂∂̄r)⌟ξ̄ of A_P for all choices of smooth (1, 0)-vector fields ξ tangential to ∂Ω and denote by A_P^(0) the module generated by these initial vector multipliers.
(ii) The difference in the effective Kohn algorithm is that we introduce a positive integer q_ν for every nonnegative integer index ν, so that we take the real radical of root order q_ν instead of the full real radical in the step of going from ν − 1 to ν. First we set q_0 = 1 and set I_P^(0,q_0) = I_P^(0) and A_P^(0,q_0) = A_P^(0). For the induction step from ν to ν + 1 we denote by Î_P^(ν,q_ν) the ideal generated by I_P^(ν,q_ν) and the scalar multipliers produced from A_P^(ν,q_ν) by the determinant procedure (B)(ii), and define I_P^(ν+1,q_{ν+1}) as the real radical of root order q_{ν+1} of the ideal Î_P^(ν,q_ν). We add to A_P^(ν,q_ν) all the vector multipliers ∂F for F ∈ I_P^(ν+1,q_{ν+1}) and then use the resulting collection of vector multipliers to generate the module A_P^(ν+1,q_{ν+1}) for the next induction step.
We then hope that if the pseudoconvex domain Ω is of finite type m, the above construction of I_P^(ν,q_ν) and A_P^(ν,q_ν) by induction on ν will result in I_P^(ν_m,q_{ν_m}) containing the function 1 for some ν_m which effectively depends on m and n. Moreover, we hope that each q_ν for 0 ≤ ν ≤ ν_m depends also effectively on m and n, so that the subelliptic estimate of some positive order ε > 0 holds with ε depending effectively on m and n.
Note that I_P^(ν,µ) and A_P^(ν,µ) are defined only when µ = q_ν. Instead of using the notation I_P^(ν,q_ν) and A_P^(ν,q_ν) we could have used notation which depends only on ν, for example Ĩ_P^(ν) and Ã_P^(ν). We prefer the clumsier notation I_P^(ν,q_ν) and A_P^(ν,q_ν) to highlight the effective choice of q_ν.
In the effective Kohn algorithm described above, no procedure is given to determine the sequence q_ν. The sequence q_ν is obtained by a rather complicated algebraic geometric argument. Since the purpose is to achieve 1 ∈ I_P^(ν_m,q_{ν_m}), it suffices to describe explicitly how to use algebraic geometric techniques to determine the procedures of differentiation and root-taking of order q_ν to construct scalar multipliers and vector multipliers from some initial scalar multipliers and vector multipliers to achieve 1 ∈ I_P^(ν_m,q_{ν_m}).

Kohn's papers [8,9] discussed relations between subelliptic estimates, the finite type property, and the termination of the full-real-radical Kohn algorithm for smooth weakly pseudoconvex domains. The relations can be summarized in the following Kohn conjecture, formulated separately for the full-real-radical Kohn algorithm and the effective Kohn algorithm.
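To see how the procedures (B)(i), (B)(ii), and (C) of Subsection 2.1 interact with the root orders q_ν, one can trace the generation of multipliers by hand on a toy special domain in C³ with the hypothetical choice F_1 = z_1², F_2 = z_2³ (an illustration only, not an example from the paper). Each determinant step below stands for procedure (B)(ii) applied to two holomorphic (1, 0)-forms in z_1, z_2, and each root-taking step records the root order actually needed:

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')

def grad(f):
    """Procedure (B)(i): the coefficient pair of the vector multiplier dF."""
    return (sp.diff(f, z1), sp.diff(f, z2))

def det_of(u, v):
    """Procedure (B)(ii) for the special domain: coefficient determinant of the
    two holomorphic (1,0)-forms u = u0*dz1 + u1*dz2 and v = v0*dz1 + v1*dz2."""
    return sp.expand(u[0] * v[1] - u[1] * v[0])

# Initial vector multipliers dF1 = 2*z1*dz1 and dF2 = 3*z2^2*dz2.
g0 = det_of(grad(z1**2), grad(z2**3))   # 6*z1*z2**2, a scalar multiplier
# (C) with root order 3: |z1*z2|^3 <= |z1*z2^2| near 0, so z1*z2 is a multiplier.
g1 = det_of(grad(z1**2), grad(z1*z2))   # 2*z1**2
# (C) with root order 2 gives z1; then
g2 = det_of(grad(z1), grad(z2**3))      # 3*z2**2
# (C) with root order 2 gives z2; finally
g3 = det_of(grad(z1), grad(z2))         # 1 -- the constant 1 is generated
print(g0, g1, g2, g3)
```

In this toy run the root orders stay bounded by 3. The point of the effectiveness question is whether such bounds on the q_ν can always be given in terms of m and n alone.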

Conjecture on full-real-radical Kohn algorithm
For a bounded weakly pseudoconvex domain Ω in C^n with smooth boundary and of finite type m and for a point P of the boundary of Ω, in the ascending chain of multiplier ideals I_P^(ν) for ν ∈ N ∪ {0}, the multiplier ideal I_P^(ν*) contains the constant function 1 for some ν* ∈ N. In other words, the chain of multiplier ideals I_P^(ν) terminates. If the positive integer ν* depends effectively on m and n, then we say that the full-real-radical Kohn algorithm terminates effectively.

Conjecture on effective Kohn algorithm
For a bounded weakly pseudoconvex domain Ω in C^n with smooth boundary and of finite type m and for a point P of the boundary of Ω, there exist ν̄ ∈ N and a sequence of positive integers q_1, . . . , q_ν̄ such that in the ascending chain of multiplier ideals I_P^(ν,q_ν) for 0 ≤ ν ≤ ν̄, the multiplier ideal I_P^(ν̄,q_ν̄) contains the constant function 1, with ν̄ and q_1, . . . , q_ν̄ depending effectively on m and n.

For the conjecture on the full-real-radical Kohn algorithm, when the boundary ∂Ω of the bounded weakly pseudoconvex domain Ω is assumed real-analytic, the finite type condition becomes a conclusion instead of an assumption. Kohn first showed that if the chain of multiplier ideals I_P^(ν) for ν ∈ N ∪ {0} does not terminate, then the boundary ∂Ω contains a local real-analytic subvariety of holomorphic dimension greater than or equal to 1 (see [8, p. 2215, Lemma 20] and [9, p. 113, Proposition 6.20]). Then Diederich and Fornaess proved that the real-analytic boundary of a bounded weakly pseudoconvex domain cannot contain a local real-analytic subvariety of holomorphic dimension greater than or equal to 1 (see [4, p. 373, Lemma 2] and [4, p. 374, Theorem 3]). This result of Kohn [8,9] and Diederich and Fornaess [4] holds not only for the case of (0, 1)-forms but for the general case of (0, q)-forms for 1 ≤ q ≤ n. Since their method of proof is by contradiction, there is no effectiveness in the termination of the chain of multiplier ideals I_P^(ν).

For the conjecture on Kohn's effective algorithm, Siu [12] introduced algebraic geometric techniques to study the problem by looking first at the special case of Ω being a special domain. For notational convenience we now consider a domain Ω in C^{n+1} instead of C^n. A special domain Ω in C^{n+1} (with coordinates w, z_1, . . . , z_n) is a bounded domain given by

Re w + ∑_{j=1}^N |F_j(z_1, . . . , z_n)|² < 0,    (2.1)

where each F_j(z_1, . . . , z_n), defined on some open neighborhood of Ω̄ in C^{n+1}, depends only on the variables z_1, . . . , z_n and is holomorphic in z_1, . . . , z_n for 1 ≤ j ≤ N.
In [12], the verification of the conjecture on Kohn's effective algorithm for the case of n = 2 (which means for special domains of complex dimension 3) was given in detail, with only indications for the case of special domains of general complex dimension. A rough outline was given there for the extension of the method of algebraic geometric techniques first to the general real-analytic case and then the general smooth case. We carry out here the effective Kohn algorithm, which was introduced in [12], in a way which keeps track of the order of subellipticity in each step, in the context of comparing the full-real-radical Kohn algorithm and the effective Kohn algorithm.
In the chain of multiplier ideals I_P^(ν) in the conjecture for the full-real-radical Kohn algorithm, if for every ν there is an effective positive integer p_ν (i.e., one dependent only on m and n) such that I_P^(ν) is already contained in the real radical of root order p_ν of the ideal Î_P^(ν−1), then we can use q_ν = p_ν, and the proof of the conjecture on Kohn's effective algorithm is simply reduced to the conjecture on the full-real-radical Kohn algorithm with effective termination. Unfortunately, there are simple examples, even for special domains of complex dimension 3, with no effective p_ν. This means that the conjecture for the effective Kohn algorithm is different from the conjecture for the full-real-radical Kohn algorithm with effective termination. A simple example of this kind was given by Catlin and D'Angelo [1], which we will discuss in Section 4 below.

Algebraic geometric techniques in effective Kohn algorithm
We now explain how the algebraic geometric techniques precisely work to provide naturally the positive integers q_ν which make the process effective. The reason for considering special domains in C^{n+1} is that instead of ideals of smooth function germs on C^{n+1} we need only consider ideals of holomorphic function germs on C^n. That is the reason why for notational convenience we suddenly consider domains in C^{n+1} instead of domains in C^n. We focus on the case where each F_j vanishes at the origin of C^n (for 1 ≤ j ≤ N), so that the origin belongs to the boundary of Ω. We are concerned only with the problem of the subelliptic estimate at the origin.
The new notion of pre-multipliers needs to be introduced. A holomorphic function germ f(z_1, . . . , z_n) on C^n at the origin is a pre-multiplier if its differential df is a vector multiplier at the origin. We now verify that the holomorphic function germs F_1, . . . , F_N are pre-multipliers and that the order of subellipticity of each dF_j is greater than or equal to 1/4.

Levi form and initial vector multiplier for special domain
First, for a special domain we write down explicit expressions for a tangent vector, its Levi form, and a smooth test (0, 1)-form in the domain of the actual adjoint ∂̄* of ∂̄. Since the defining function for the special domain Ω is

r = Re w + ∑_{j=1}^N |F_j(z_1, . . . , z_n)|²,

it follows that

∂r = (1/2) dw + ∑_{j=1}^N F̄_j dF_j  and  ∂∂̄r = ∑_{j=1}^N dF_j ∧ dF̄_j,

and, since F_1, . . . , F_N all vanish at the origin, all the (1, 0)-vectors tangential to ∂Ω at the origin are of the form

η = ∑_{ν=1}^n η_ν ∂/∂z_ν.

The value of the Levi form ∂∂̄r = ∑_{j=1}^N dF_j ∧ dF̄_j at η ∧ η̄ for such an element η is ∑_{j=1}^N |η(F_j)|² ≥ 0, which means that a special domain is always weakly pseudoconvex and is strictly pseudoconvex at a boundary point if and only if dF_{j_1}, . . . , dF_{j_n} are C-linearly independent at that point for some 1 ≤ j_1 < · · · < j_n ≤ N. For an open neighborhood U of a point of ∂Ω in C^{n+1}, if φ = ∑_{ν=1}^n φ_ν dz̄_ν + φ_w dw̄ is a smooth test (0, 1)-form on U ∩ Ω̄ with compact support which belongs to the domain of the actual adjoint ∂̄* of ∂̄, then the normal component of φ vanishes on U ∩ ∂Ω, which means that

(1/2) φ_w + ∑_{j=1}^N F̄_j (dF_j · φ) = 0 on U ∩ ∂Ω, where dF_j · φ = ∑_{ν=1}^n (∂_ν F_j) φ_ν.

It will turn out that in the case of a special domain the use of multipliers and vector multipliers can be limited to those which are holomorphic in z_1, . . . , z_n and are independent of w, and as a consequence in the study of the subelliptic estimate for a special domain the component φ_w of a test (0, 1)-form φ actually plays no role. (See Subsection 2.9.3 below.)
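The identity ∂∂̄r = ∑_j dF_j ∧ dF̄_j underlying this computation can be checked symbolically: since each F_j is holomorphic, the complex Hessian of |F_j|² is the rank-one matrix (∂_µ F_j)(∂̄_ν F̄_j), so the Levi form is a sum of positive semidefinite rank-one terms. A small check with a hypothetical F (treating the conjugate variables as independent symbols, a standard device in such computations):

```python
import sympy as sp

z1, z2, zb1, zb2 = sp.symbols('z1 z2 zb1 zb2')  # zb* stand in for conjugates

# Hypothetical holomorphic F on C^2; |F|^2 = F * Fbar with Fbar = F(zb).
F = z1**2 + z1 * z2
Fb = F.subs([(z1, zb1), (z2, zb2)])

zs, zbs = [z1, z2], [zb1, zb2]
# Complex Hessian of |F|^2: d_mu dbar_nu (F * Fbar).
hess = sp.Matrix(2, 2, lambda m, n: sp.diff(F * Fb, zs[m], zbs[n]))
# Rank-one matrix (d_mu F)(dbar_nu Fbar), the coefficient matrix of dF ∧ dFbar.
rank1 = sp.Matrix(2, 2, lambda m, n: sp.diff(F, zs[m]) * sp.diff(Fb, zbs[n]))

print((hess - rank1).expand() == sp.zeros(2, 2))  # True
```

Since every rank-one matrix v v* is positive semidefinite, the sum over j of such Hessians is positive semidefinite, which is the weak pseudoconvexity of the special domain observed above.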

Initial pre-multiplier for special domain
To verify that dF_j is a vector multiplier with order of subellipticity greater than or equal to 1/4, we take an open neighborhood U of 0 in C^{n+1} on which the holomorphic functions F_1, . . . , F_N of (z_1, . . . , z_n) ∈ C^n are defined. For 0 < ε ≤ 1/2 and any smooth (0, 1)-form φ = ∑_{ν=1}^n φ_ν dz̄_ν + φ̃ dw̄ on U ∩ Ω̄ with compact support which belongs to the domain of the actual adjoint ∂̄* of ∂̄, we have an estimate ‖Λ^{1/4}(∑_{ν=1}^n (∂_ν F_j)φ_ν)‖^2 ≤ C_1 Q(φ, φ) + C_2 ‖φ‖^2, where C_1 and C_2 are positive constants independent of φ (but depending on U and ε), because each of the components involved is dominated by a vector multiplier whose order of subellipticity is greater than or equal to 1/2 for 1 ≤ ν ≤ n. This finishes the verification that each dF_j = ∑_{ν=1}^n (∂_ν F_j) dz_ν is a vector multiplier whose order of subellipticity is greater than or equal to 1/4.

Holomorphic multipliers for special domain
We start out with the pre-multipliers F_1, . . . , F_N and the vector multipliers dF_1, . . . , dF_N, which are holomorphic 1-forms in the variables z_1, . . . , z_n obtained from them. For the special domain Ω, we will only work with vector multipliers which are holomorphic 1-forms in the variables z_1, . . . , z_n. When we have n such vector multipliers G^{(j)} = ∑_{ν=1}^n G^{(j)}_ν(z_1, . . . , z_n) dz_ν of order of subellipticity greater than or equal to ε_j for 1 ≤ j ≤ n, and when we apply the procedure (B)(ii) in Subsection 2.1 above to generate new multipliers from them, the determinant det(G^{(j)}_ν)_{1≤ν,j≤n} of their coefficient matrix, as a holomorphic function of the variables z_1, . . . , z_n, is a scalar multiplier whose order of subellipticity is greater than or equal to min(ε_1, . . . , ε_n). So for the special domain Ω, when we start out only with the vector multipliers dF_1, . . . , dF_N (whose orders of subellipticity are all greater than or equal to 1/4) and use only the procedures in (B)(i)-(B)(ii) and (C) described in Subsection 2.1 above to generate new scalar and vector multipliers, we need only work with scalar and vector multipliers which are holomorphic functions or holomorphic 1-forms in the variables z_1, . . . , z_n.
We now translate the algorithm from the language of analysis to the language of algebraic geometry. The procedures in the algorithm now read as follows.

Algebraic geometric formulation of Kohn algorithm for special domain
The multiplicity q of the ideal generated by F_1, . . . , F_N, given by q = dim_C O_{C^n,0}/(F_1, . . . , F_N), is related to the type m of the special domain Ω at the origin by two-sided explicit bounds; see [12, Lemmas (I.3) and (I.4)]. In particular, the special domain is of finite type at 0 if and only if 0 is an isolated point of the common zero-set of F_1, . . . , F_N. Assume that the multiplicity q of the ideal generated by F_1, . . . , F_N is finite. The N pre-multipliers F_1, . . . , F_N (with order of subellipticity of each dF_j at least 1/4) are all that we start out with. For a special domain of complex dimension n + 1 there are only the following two procedures from Kohn's algorithm.
(i) If holomorphic function germs g_1, . . . , g_n at 0 on C^n are pre-multipliers (which automatically include all multipliers), then the coefficient of dz_1 ∧ · · · ∧ dz_n in dg_1 ∧ · · · ∧ dg_n is a multiplier. In other words, the Jacobian determinant ∂(g_1, . . . , g_n)/∂(z_1, . . . , z_n) of the holomorphic functions g_1, . . . , g_n with respect to the variables z_1, . . . , z_n is a multiplier. If the order of subellipticity of each of g_1, . . . , g_n is greater than or equal to η, then the order of subellipticity of their Jacobian determinant is greater than or equal to η/2. (ii) If g and f are holomorphic function germs at 0 on C^n with f^m = g for some positive integer m and if g is a multiplier whose order of subellipticity is greater than or equal to η, then f is also a multiplier whose order of subellipticity is greater than or equal to η/m. Note that the set of all multipliers forms an ideal in the ring of all holomorphic function germs and the set of all vector multipliers forms a module over the ring of all holomorphic function germs, but in general the set of all pre-multipliers does not form a module over the ring of all holomorphic function germs, because though the differential dF of a pre-multiplier F is a vector multiplier, yet for any holomorphic function germ g the differential d(gF) is equal to g dF + F dg and the term F dg, unlike the term g dF, is in general not a vector multiplier.
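Procedure (i) is easy to exercise in a computer-algebra session; the germs g_1, g_2 below are sample choices of ours (they happen to be the defining functions of the Catlin-D'Angelo example of Section 4 with M = 2, N = 3, K = 5), not a computation taken from the paper.

```python
# Procedure (i) in miniature: the Jacobian determinant of two holomorphic
# germs with respect to z1, z2, i.e. the coefficient of dz1 ^ dz2 in dg1 ^ dg2.
from sympy import symbols, Matrix, expand

z1, z2 = symbols('z1 z2')
g1, g2 = z1**2, z2**3 + z2*z1**5           # sample pre-multiplier germs on C^2

jac = Matrix([[g1.diff(z1), g1.diff(z2)],
              [g2.diff(z1), g2.diff(z2)]]).det()
assert expand(jac - 2*z1*(3*z2**2 + z1**5)) == 0   # jac = 2 z1 (3 z2^2 + z1^5)
```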
We now discuss the algebraic geometric formulations of the steps in both the full-real-radical Kohn algorithm and the effective Kohn algorithm. We will describe first the steps in the full-real-radical Kohn algorithm. Then we will describe the effective Kohn algorithm but only in the case when the special domain is of complex dimension 3.

Steps in full-real-radical Kohn algorithm
We start out with the C-vector space V 0 of initial pre-multipliers generated by F 1 , . . . , F N . Let J 0 be the ideal generated by all the Jacobian determinants of any n elements g 1 , . . . , g n of V 0 . Let I 0 be the radical of J 0 .
Inductively, we construct the C-vector space V_ν, the ideal J_ν, and the ideal I_ν for any nonnegative integer ν as follows. For the step of going from ν to ν + 1, we let V_{ν+1} be the C-vector space generated by the elements of V_ν and all elements of the ideal I_ν. Let J_{ν+1} be the ideal generated by all the Jacobian determinants of any n elements of V_{ν+1}. Let I_{ν+1} be the radical of the ideal J_{ν+1}. This finishes the construction by induction.
Let p_ν be the smallest positive integer such that (I_ν)^{p_ν} is contained in J_ν. Note that each element of I_ν is a multiplier, but each element of V_ν is only a pre-multiplier.
The full-real-radical Kohn algorithm for F_1, . . . , F_N terminates if there exists some nonnegative integer ν̄ such that I_ν̄ is the unit ideal, which means the entire ring of all holomorphic function germs. In that case we choose ν̄ to be the smallest such nonnegative integer.
We say that the full-real-radical Kohn algorithm for F_1, . . . , F_N terminates effectively if ν̄ and each p_ν for 0 ≤ ν ≤ ν̄ are bounded by explicit functions of n and q.
The order of subellipticity for the multiplier 1 is at least a quantity determined explicitly by ν̄ and by p_0, . . . , p_ν̄. This order of subellipticity from the termination of the original Kohn algorithm is effective only when ν̄ is effective and each p_ν is effective for 0 ≤ ν ≤ ν̄.

Ideal containing an effective power of its radical in effective Kohn algorithm
The key difference between the full-real-radical Kohn algorithm (which in general is not effective) and the effective Kohn algorithm is to replace the taking of the radical I_ν of the ideal J_ν by an appropriately chosen sub-ideal Ĩ_ν of I_ν with the property that an effective power (Ĩ_ν)^{s_ν} of Ĩ_ν is contained in J_ν. The choice of the sub-ideal Ĩ_ν of I_ν and of the positive integer s_ν involves rather complicated algebraic geometric techniques. In order to facilitate the explicit comparison of the full-real-radical Kohn algorithm with the effective Kohn algorithm in a concrete example (such as the example of Catlin and D'Angelo [1]), in the description of the steps in the effective Kohn algorithm for special domains we will confine ourselves to special domains of complex dimension 3, which means the case of n = 2.

Orders of subellipticity in algebraic geometric techniques for 3-dimensional special domain
We now describe the steps in the effective Kohn algorithm for special domains of complex dimension 3. Again we start with holomorphic function germs F_1, . . . , F_N at 0 on C^2 (which define the special domain in C^3 and which generate an ideal in O_{C^2,0} of multiplicity less than or equal to q).

Step one
We take two generic C-linear combinations F̃_1, F̃_2 of F_1, . . . , F_N of vanishing order less than or equal to q at 0 such that the Jacobian determinant h_2^* = ∂(F̃_1, F̃_2)/∂(z_1, z_2) has vanishing order less than or equal to q at 0 as a holomorphic function germ at 0 on C^2. An order of subellipticity of h_2^* as a multiplier at the origin is at least 1/4. Let h_2^* = (h_{2,1}^*)^{k_1} · · · (h_{2,ℓ_2}^*)^{k_{ℓ_2}} be the factorization into irreducible holomorphic function germs h_{2,1}^*, . . . , h_{2,ℓ_2}^* with k_1 ≥ k_2 ≥ · · · ≥ k_{ℓ_2} ≥ 1. Since the vanishing order of h_2^* is less than or equal to q, we have k_1 + · · · + k_{ℓ_2} ≤ q and, in particular, k_1 ≤ q. The holomorphic function germ (h_{2,1}^* · · · h_{2,ℓ_2}^*)^{k_1} contains h_2^* as a factor and is therefore a multiplier whose order of subellipticity is greater than or equal to 1/4. Let ĥ_2 = h_{2,1}^* · · · h_{2,ℓ_2}^*. Since the k_1-th power of ĥ_2 is a multiplier, it follows from k_1 ≤ q that ĥ_2 is a multiplier whose order of subellipticity is greater than or equal to 1/(4q). The construction of ĥ_2 from h_2^* is to make sure that the divisor of ĥ_2 is reduced (which means that its multiplicity at any of its regular points is 1, though it is possibly reducible with many branches). Up to this point the only goal accomplished is to produce a reduced holomorphic function germ ĥ_2 at 0, with vanishing order less than or equal to q at 0, which is a multiplier whose order of subellipticity is greater than or equal to 1/(4q). This step of construction of ĥ_2 from h_2^* is included only for the sake of convenience and is not actually absolutely necessary.
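The passage from h_2^* to the reduced germ ĥ_2 is exactly the extraction of the square-free part; the sample germ below is hypothetical and serves only to illustrate the operation.

```python
# Step one in miniature: drop the multiplicities k_j of the irreducible
# factors of a sample germ h2* to get the reduced germ h2^ (square-free part).
from sympy import symbols, sqf_part, expand

z1, z2 = symbols('z1 z2')
h2_star = z1**3 * (z2**2 + z1)**2          # hypothetical h2* with k_1 = 3, k_2 = 2
h2_hat = sqf_part(h2_star)                 # product of the distinct irreducible factors
assert expand(h2_hat - z1*(z2**2 + z1)) == 0
```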

Step two
Take a generic C-linear combination h_1 of F_1, . . . , F_N and a generic C-linear coordinate system w_1 and w_2 of C^2 such that the following three conditions (i)-(iii) are satisfied. Here, generic means that h_1 = ∑_{j=1}^N a_j F_j and w_k = ∑_{ℓ=1}^2 g_{kℓ} z_ℓ with the element (a_j, g_{kℓ})_{1≤j≤N, 1≤k,ℓ≤2} of C^{N+4} chosen outside some proper subvariety Z of C^{N+4}. The proper subvariety Z of C^{N+4} can be obtained as the union of three proper subvarieties of C^{N+4}, one for each of the three conditions (i)-(iii).
(i) The origin 0 of C^2 is an isolated point of the common zero-set of w_1 and the holomorphic function germ h_1.
(ii) The multiplicity at 0 of the ideal generated by h_1 and ĥ_2 is less than or equal to q^2.
(iii) The multiplicity at 0 of the ideal generated by ĥ_2 and (∂h_1/∂w_1)_{w_2=const} is less than or equal to 3q^2. The reason why a generic C-linear combination h_1 of F_1, . . . , F_N satisfies Condition (ii) is that the multiplicity of the ideal generated by F_1, . . . , F_N at the origin is less than or equal to q. The number q^2 in Condition (iii) is the product of the vanishing order of ĥ_2 at 0, which is at most q, and the multiplicity q of the ideal generated by F_1, . . . , F_N at 0.
A choice of a generic C-linear combination h 1 of F 1 , . . . , F N and a generic C-linear coordinate system w 1 and w 2 of C 2 satisfying Condition (iii) is obtained from the following statement.

Statement
Statement 3.1. For each holomorphic function germ f on C^n at 0 which vanishes at 0, the function germ f^{n+1} on C^n at 0 belongs to the ideal generated by ∂f/∂z_j for 1 ≤ j ≤ n (where z_1, . . . , z_n are the coordinates of C^n).
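Statement 3.1 can be checked in examples by an ideal-membership computation; the germ f below is a sample choice of ours, and the reduction is done modulo a Gröbner basis of the gradient ideal.

```python
# A finite check of Statement 3.1 for one sample germ on C^2 (n = 2):
# f^{n+1} = f^3 lies in the ideal generated by df/dz1 and df/dz2.
from sympy import symbols, groebner, reduced, expand

z1, z2 = symbols('z1 z2')
f = z1*z2 + z1**3                          # sample germ vanishing at 0
G = groebner([f.diff(z1), f.diff(z2)], z1, z2, order='grevlex')
coeffs, remainder = reduced(expand(f**3), list(G.exprs), z1, z2, order='grevlex')
assert remainder == 0                      # so f^3 is in (df/dz1, df/dz2)
```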
The above statement is a consequence of Skoda's result on ideal generation [13] (see [12, p. 1232, Proposition (A.2)]) and can be considered as a generalization, to general holomorphic function germs, of Euler's formula expressing a homogeneous polynomial in terms of its first-order partial derivatives. We apply the statement to f = h_1 to conclude that (h_1)^3 belongs to the ideal generated by ∂h_1/∂z_1 and ∂h_1/∂z_2. Condition (ii) implies that a generic C-linear combination H = ∑_{k=1}^2 c_k ∂h_1/∂z_k satisfies the condition that the multiplicity at 0 of the ideal generated by H and ĥ_2 is less than or equal to 3q^2. We can choose a generic C-linear coordinate system w_1 and w_2 of C^2 such that H = (∂h_1/∂w_1)_{w_2=const}. We are now ready to construct more multipliers from the choice of h_1, w_1 and w_2. Since h_1 vanishes at 0, it follows from Condition (iii) that there exist holomorphic function germs α and β at 0 on C^2 such that (h_1)^{3q^2} = α ĥ_2 + β (∂h_1/∂w_1)_{w_2=const}. Though h_1, which is a C-linear combination of the pre-multipliers F_1, . . . , F_N, may not be a multiplier, the long key argument given below is to show that actually h_1 is a multiplier. We need the following statement concerning Weierstrass polynomials.

Statement 3.2.
Let ζ_1 and ζ_2 be holomorphic function germs at 0 on C^2 vanishing at 0 such that the origin 0 of C^2 is an isolated point of the common zero-set of ζ_1 and ζ_2. Let H be a holomorphic function germ at 0 on C^2 vanishing at 0 such that the origin 0 of C^2 is an isolated point of the common zero-set of H and ζ_1. Let ℓ be a positive integer. If (ζ_2)^ℓ belongs to the ideal generated by H and ζ_1, then some holomorphic function germ on C^2 at 0 of the form of a Weierstrass polynomial (ζ_2)^ℓ + ∑_{j=0}^{ℓ−1} θ_j(ζ_1)(ζ_2)^j (with ζ_1 and ζ_2 as variables) contains H as a factor, where θ_j is a holomorphic function germ on C at 0 which vanishes at 0 for 0 ≤ j ≤ ℓ − 1.
For the proof of Statement 3.2 on Weierstrass polynomials, first we observe that, for the special case where ζ_1 and ζ_2 are the coordinate functions z_1 and z_2 of C^2 and the restriction of H to {z_1 = 0} is equal to (z_2)^ℓ as a holomorphic function germ on C with z_2 as coordinate, the statement is simply the usual factorization of a holomorphic function germ H as a product of a nowhere zero holomorphic function germ and a Weierstrass polynomial of degree ℓ in the variable z_2. For the proof of the general case, we consider the germ at 0 of the holomorphic map π : C^2 → C^2 defined by (z_1, z_2) → (ζ_1, ζ_2). Since the origin 0 of C^2 is an isolated point of the common zero-set of ζ_1 and ζ_2, the map π is an analytic branched cover (as the germ of a holomorphic map). Let C be the divisor of H and C̃ be the image of C under π (with multiplicities counted), and let H̃ be a holomorphic function germ on C^2 at 0 whose divisor is C̃. Since the origin is an isolated point of the common zero-set of H and ζ_1 and since (ζ_2)^ℓ belongs to the ideal generated by ζ_1 and H, it follows that the restriction of H̃ to {ζ_1 = 0} is equal to (ζ_2)^{ℓ̃} as a holomorphic function germ on C with ζ_2 as coordinate for some positive integer ℓ̃ ≤ ℓ. The general case now follows from applying the special case with H replaced by (ζ_2)^{ℓ−ℓ̃} H̃ and the coordinates z_1 and z_2 replaced by ζ_1 and ζ_2.

Step three
Because of Condition (ii) in Subsection 3.2, we can now apply the second part of Statement 3.2 on Weierstrass polynomials to the case of H = ĥ_2 and ζ_1 = h_1 and ζ_2 = w_2 to get a holomorphic function germ h_2 of the form h_2 = (w_2)^{q^2} + ∑_{j=0}^{q^2−1} θ_j(h_1)(w_2)^j which contains ĥ_2 as a factor. (The property of ĥ_2 being a reduced holomorphic function germ means that, in applying the above statement on Weierstrass polynomials, the divisor C in the proof of Statement 3.2 is reduced and we do not have to worry about multiplicities of its branches, but this point, though offering some convenience, is not essential.) Since h_2 contains as a factor the multiplier ĥ_2 whose order of subellipticity is greater than or equal to 1/(4q), it follows that h_2 is a multiplier whose order of subellipticity is greater than or equal to 1/(4q). Let h_{2,0} = h_2 and for 1 ≤ ν ≤ q^2 let h_{2,ν} = ((∂/∂w_2)^ν h_2)_{h_1=const}, which is obtained by differentiating ν times the function h_2 with respect to w_2 with h_1 fixed when h_2 is regarded as a function of h_1 and w_2. Then dh_{2,ν} = η_ν dh_1 + h_{2,ν+1} dw_2 for 0 ≤ ν ≤ q^2 − 1 for a holomorphic function germ η_ν, which is the partial derivative of h_{2,ν} with respect to h_1 with w_2 fixed when h_{2,ν} is regarded as a function of h_1 and w_2. Let h̃_{2,1} = (∂h_1/∂w_1)_{w_2=const} h_{2,1}, which is the Jacobian determinant of h_1 and h_2 with respect to w_1 and w_2; it follows that h̃_{2,1} is a multiplier whose order of subellipticity is greater than or equal to 1/(8q). Since ĥ_2 h_{2,1}, being a multiple of the multiplier ĥ_2, is itself a multiplier, it follows that the linear combination of the two multipliers ĥ_2 h_{2,1} and h̃_{2,1} with coefficients which are holomorphic function germs at 0 on C^2 is a multiplier whose order of subellipticity is greater than or equal to 1/(8q).

Recursive argument in Step three
Now we repeat the above argument with h_{2,ν} replacing ĥ_2 in the following way to conclude by induction on ν that (h_1)^{3q^2 ν} h_{2,ν} is a multiplier with order of subellipticity greater than or equal to 1/(2^{ν+2} q) for 1 ≤ ν ≤ q^2.
The case of ν = 1 was just proved. Suppose we have proved the step up to some ν < q^2 and we would like to prove the next step for ν + 1. From dh_{2,ν} = η_ν dh_1 + h_{2,ν+1} dw_2 and the Jacobian determinant of h_1 and (h_1)^{3q^2 ν} h_{2,ν} with respect to w_1 and w_2, it follows that (∂h_1/∂w_1)_{w_2=const} (h_1)^{3q^2 ν} h_{2,ν+1} is a multiplier whose order of subellipticity is greater than or equal to 1/(2^{ν+3} q).
Since ĥ_2 (h_1)^{3q^2 ν} h_{2,ν+1}, being a multiple of the multiplier ĥ_2, is itself a multiplier, it follows that the linear combination of the two multipliers ĥ_2 (h_1)^{3q^2 ν} h_{2,ν+1} and β (∂h_1/∂w_1)_{w_2=const} (h_1)^{3q^2 ν} h_{2,ν+1} with coefficients which are holomorphic function germs at 0 on C^2, namely (h_1)^{3q^2 (ν+1)} h_{2,ν+1}, is a multiplier with order of subellipticity greater than or equal to 1/(2^{ν+3} q). This finishes the proof by induction on ν that (h_1)^{3q^2 ν} h_{2,ν} is a multiplier with order of subellipticity greater than or equal to 1/(2^{ν+2} q) for 1 ≤ ν ≤ q^2. Since h_{2,q^2} is equal to (q^2)!, it follows that (h_1)^{3q^4} is a multiplier whose order of subellipticity is greater than or equal to 1/(2^{q^2+2} q). By the real radical property of multipliers we conclude that h_1 is a multiplier whose order of subellipticity is greater than or equal to 1/(3q^5 2^{q^2+2}). Since by Condition (ii) of Subsection 3.2 the ideal generated by h_1 and ĥ_2 contains the q^2-th power of the maximum ideal m_{C^2,0} of C^2 at 0, it follows that w_1 and w_2 are multipliers whose orders of subellipticity are greater than or equal to 1/(3q^7 2^{q^2+2}). By taking the Jacobian determinant of w_1 and w_2, we end up with 1 being a multiplier whose order of subellipticity is 1/(3q^7 2^{q^2+3}).
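The endpoint h_{2,q^2} = (q^2)! of the recursion is just the fact that differentiating a monic polynomial of degree d in w_2 exactly d times yields d!; a quick symbolic check with sample coefficients θ_j(h_1) of our own choosing:

```python
# Differentiating a monic Weierstrass polynomial of degree d in w2, d times
# with h1 held fixed, yields the constant d! (here d stands in for q^2).
from sympy import symbols, diff, factorial

w2, h1 = symbols('w2 h1')
d = 4                                      # stand-in for q**2
h2 = w2**d + sum(h1**(j + 1) * w2**j for j in range(d))  # sample theta_j(h1) = h1^(j+1)
assert diff(h2, w2, d) == factorial(d)
```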

Remark
Remark 3.3. For use in Subsection 4.3 below we would like to remark that the above arguments work in the same way when the holomorphic function germ h_2 is chosen to be the product of (h_1)^r and a Weierstrass polynomial in w_2 whose coefficients are holomorphic function germs in h_1, if r is effective in the sense that r is bounded by an explicit function of q. Of course, the effective lower bound of the order of subellipticity of the constant function 1 as a multiplier needs to be correspondingly modified to 1/(3q^7 2^{q^2+r+3}).

Effective Kohn algorithm applied to Catlin-D'Angelo's example
We now apply the algebraic geometric techniques in the effective Kohn algorithm to the example of Catlin and D'Angelo given in [1] for which the full-real-radical Kohn algorithm is ineffective.

Catlin-D'Angelo's example of ineffectiveness of full-real-radical Kohn algorithm
Let K > M ≥ 2 and N ≥ 3. The special domain Ω in C^3 is defined by the two holomorphic functions F_1(z_1, z_2) = z_1^M and F_2(z_1, z_2) = z_2^N + z_2 z_1^K on C^2. The origin of C^3 is the boundary point of Ω whose scalar and vector multipliers we consider. The following is reproduced from [1, pp. 81-82] in the notation and terminology used in this note. By Weierstrass division (applied to the Weierstrass polynomial which is the product of F_2 and a nowhere zero holomorphic function germ), modulo F_2 every element of O_{C^2,0} is equal to ∑_{j=0}^{N−1} a_j(z_1) z_2^j for some holomorphic function germs a_0, . . . , a_{N−1} on C at 0. Modulo F_1 each a_j(z_1) can moreover be written as a polynomial in z_1 of degree less than M. Hence, the multiplicity q given by q = dim_C O_{C^2,0}/(F_1, F_2) is less than or equal to M N.
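For sample values of M, N, K (our own choice, not taken from [1]) the multiplicity q can be computed as the number of standard monomials of a Gröbner basis of (F_1, F_2); the bound q ≤ MN is attained here.

```python
# q = dim_C O_{C^2,0}/(F1, F2), computed as the count of monomials outside the
# leading-term ideal of a Groebner basis; sample values M, N, K = 2, 3, 5.
from sympy import symbols, groebner
from sympy.polys.orderings import grevlex

z1, z2 = symbols('z1 z2')
M, N, K = 2, 3, 5                          # K > M >= 2 and N >= 3
F1, F2 = z1**M, z2**N + z2*z1**K

G = groebner([F1, F2], z1, z2, order='grevlex')
lead = [max(p.monoms(), key=grevlex) for p in G.polys]   # leading exponents

box = range(M*N + K + 2)                   # box large enough to hold all standard monomials
q = sum(1 for i in box for j in box
        if not any(i >= a and j >= b for a, b in lead))
assert q == M * N                          # multiplicity 6 = M*N for these values
```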
The full-real-radical Kohn algorithm proceeds as follows in this example. Let g be the Jacobian determinant of F_1 and F_2 with respect to z_1 and z_2. We use the notation in Subsection 2.10. The ideal J_0 is the principal ideal with the function germ g as the generator and its radical I_0 is the same as J_0. The ideal J_1 is generated by the Jacobian determinants formed from pairs out of g, F_1 and F_2. Since g^2 modulo ∂(F_1, g)/∂(z_1, z_2) is equal, up to a nonzero constant factor, to z_1^{2(K+M−1)}, it follows from K + M − 1 ≥ 1 that the holomorphic function germ z_1 at 0 belongs to the radical I_1 of J_1. From N ≥ 3 we conclude that modulo (z_2)^2 the three holomorphic function germs g, F_1 and F_2 become respectively a nonzero constant times z_1^{K+M−1}, z_1^M and z_2 z_1^K. Hence the ideal J_1 generated by the three Jacobian determinants formed from pairs out of g, F_1 and F_2 is contained in the ideal generated by z_1^{M+K−2} and z_2. This means that z_1^m cannot be in J_1 for m < M + K − 2, for otherwise z_1^m would belong to the ideal generated by z_1^{M+K−2} and z_2, which is a contradiction. Since the holomorphic function germ z_1 belongs to I_1, this means that the smallest positive integer p_1 satisfying (I_1)^{p_1} ⊂ J_1 must be greater than or equal to M + K − 2 ≥ K. Thus, the full-real-radical Kohn algorithm is not effective, because K is arbitrary and there cannot be any function of M and N which bounds K.
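The congruence used above for g^2 modulo ∂(F_1, g)/∂(z_1, z_2) can be verified symbolically; the values of M, N, K below are sample choices for illustration.

```python
# For M, N, K = 2, 3, 5: g^2 - M^2 z1^(2(K+M-1)) is divisible by the Jacobian
# determinant of F1 and g, which is why z1 lies in the radical I_1 of J_1.
from sympy import symbols, Matrix, div, expand

z1, z2 = symbols('z1 z2')
M, N, K = 2, 3, 5
F1, F2 = z1**M, z2**N + z2*z1**K

def jac(f, g):
    return Matrix([[f.diff(z1), f.diff(z2)],
                   [g.diff(z1), g.diff(z2)]]).det()

g = jac(F1, F2)                            # g = 2 z1 (3 z2^2 + z1^5)
quotient, r = div(expand(g**2 - M**2 * z1**(2*(K + M - 1))),
                  expand(jac(F1, g)), z1, z2)
assert r == 0                              # exact divisibility, no remainder
```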

Remark on Subsection 4.1
In the above subsection, when we carried out the full-real-radical Kohn algorithm for Catlin-D'Angelo's example, we stopped after showing the algorithm to be ineffective. For later comparison, we now carry out the remaining steps of the algorithm until we produce the constant function 1 as a multiplier. We have seen that z_1 belongs to I_1. Since all three holomorphic function germs g, ∂g/∂z_1 and ∂g/∂z_2 contain z_1 as a factor, it follows that J_1 is contained in the principal ideal generated by z_1 and I_1 must be equal to the principal ideal generated by z_1. The function germ z_2^N = F_2 − z_2 z_1^K is a pre-multiplier in V_2. The Jacobian determinant ∂(z_1, z_2^N)/∂(z_1, z_2) = N z_2^{N−1} belongs to J_2. Hence z_2 belongs to I_2 and we can conclude that I_2 is the maximum ideal m_{C^2,0} of C^2 at 0. By taking the Jacobian determinant of the elements z_1 and z_2 of I_2, we conclude that 1 is a multiplier. To get to the multiplier 1 from F_1 and F_2, we have to perform differentiation 4 times in the construction of Jacobian determinants.

Effective Kohn algorithm for Catlin-D'Angelo's example
We now carry out concretely the steps in the effective Kohn algorithm for Catlin-D'Angelo's example to illustrate the difference between the full-real-radical Kohn algorithm and the effective Kohn algorithm.
The key point in the effective Kohn algorithm is to construct a Weierstrass polynomial h_2 in one coordinate w_2 such that (i) h_2 contains as a factor a multiplier ĥ_2 which is obtained in a procedure involving the Jacobian determinant of two C-linear combinations of the defining holomorphic functions F_1, . . . , F_N of the special domain and (ii) the coefficients of h_2 are holomorphic function germs of some C-linear combination h_1 of F_1, . . . , F_N. Then by using induction on ν we show, with effectiveness, that the Jacobian determinant of h_1 and the function (h_1)^{m_ν} (∂^ν h_2/∂w_2^ν)_{h_1=const} is a multiplier (for some effective positive integer m_ν), resulting finally in the conclusion that h_1 is a multiplier. In the key argument the Weierstrass polynomial h_2 can be replaced by the product of an effective power of h_1 and a Weierstrass polynomial (see Remark 3.3).
In the example of Catlin-D'Angelo, where the defining functions for the special domain in C^3 are the two holomorphic functions F_1 = z_1^M and F_2 = z_2^N + z_2 z_1^K on C^2, we take h_1 = F_1 = z_1^M and take as h_2 the product of h_1 and the Weierstrass polynomial F_2 = z_2^N + z_2 (h_1)^m in the variable z_2, where m = K/M, which we assume for the time being to be a positive integer. It turns out that the argument used in the effective Kohn algorithm works in the same way without the assumption that m = K/M is a positive integer. This assumption in the setup merely motivates the steps of the argument.
For Catlin-D'Angelo's example, the induction on ν to show, with effectiveness, that the Jacobian determinant of h_1 and the function (h_1)^{m_ν} (∂^ν h_2/∂w_2^ν)_{h_1=const} is a multiplier (for some effective positive integer m_ν) is translated (after obvious modifications) into verifying by induction on j that each H_j defined by H_j = z_1^{(j+1)(M−1)} z_2^{N−j} is a multiplier for 1 ≤ j ≤ N, because of the form taken by the ν-th derivative of h_2 with respect to z_2 with h_1 fixed for 1 ≤ ν ≤ N − 1. At this point we can forget that the use of H_j = z_1^{(j+1)(M−1)} z_2^{N−j} for 1 ≤ j ≤ N is motivated by the steps of the effective Kohn algorithm. We now simply carry out the induction on j to verify that H_j is a multiplier whose order of subellipticity is greater than or equal to 1/2^{j+2} for 1 ≤ j ≤ N.
Since the Jacobian determinant of the pre-multipliers F_1 and F_2 is a multiplier whose order of subellipticity is greater than or equal to 1/4, one further application of the Jacobian-determinant procedure yields a multiplier whose order of subellipticity is greater than or equal to 1/8, which means that H_1 is a multiplier whose order of subellipticity is greater than or equal to 1/8. Suppose H_j has been verified to be a multiplier whose order of subellipticity is greater than or equal to 1/2^{j+2} for some 1 ≤ j < N. Then the Jacobian determinant of F_1 and H_j, which is a nonzero constant multiple of H_{j+1}, is a multiplier whose order of subellipticity is greater than or equal to 1/2^{j+3}, which means that H_{j+1} is a multiplier whose order of subellipticity is greater than or equal to 1/2^{j+3}. This finishes the induction argument and we know that H_N = z_1^{(N+1)(M−1)} is a multiplier whose order of subellipticity is greater than or equal to 1/2^{N+2}. By taking the effective (N+1)(M−1)-th root, we conclude that the holomorphic function germ z_1 is a multiplier whose order of subellipticity is greater than or equal to 1/(2^{N+2}(N+1)(M−1)), and the holomorphic function germ z_2 z_1^K, which contains z_1 as a factor, is a multiplier whose order of subellipticity is greater than or equal to 1/(2^{N+2}(N+1)(M−1)). Hence z_2^N = F_2 − z_2 z_1^K is a pre-multiplier whose differential has order of subellipticity greater than or equal to 1/(2^{N+3}(N+1)(M−1)). Since both z_1 and z_2^N are pre-multipliers whose differentials have order of subellipticity greater than or equal to 1/(2^{N+3}(N+1)(M−1)), their Jacobian determinant ∂(z_1, z_2^N)/∂(z_1, z_2) = N z_2^{N−1} is a multiplier whose order of subellipticity is greater than or equal to 1/(2^{N+4}(N+1)(M−1)). By taking the effective (N−1)-th root, we conclude that z_2 is a multiplier whose order of subellipticity is greater than or equal to 1/(2^{N+4}(N+1)(M−1)(N−1)), and finally the Jacobian determinant ∂(z_1, z_2)/∂(z_1, z_2) = 1 is a multiplier whose order of subellipticity is greater than or equal to 1/(2^{N+6}(N+1)(M−1)(N−1)).
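The inductive step used above can be checked symbolically: for H_j = z_1^{(j+1)(M−1)} z_2^{N−j}, the Jacobian determinant of F_1 = z_1^M and H_j equals the nonzero constant M(N−j) times H_{j+1}. The values of M and N below are our own sample choice.

```python
# Verify the induction step: Jacobian(F1, H_j) = M (N - j) H_{j+1} for
# H_j = z1^((j+1)(M-1)) z2^(N-j), with sample values M = 4, N = 5.
from sympy import symbols, Matrix, expand

z1, z2 = symbols('z1 z2')
M, N = 4, 5

def H(j):
    return z1**((j + 1)*(M - 1)) * z2**(N - j)

F1 = z1**M
for j in range(1, N):
    J = Matrix([[F1.diff(z1), F1.diff(z2)],
                [H(j).diff(z1), H(j).diff(z2)]]).det()
    assert expand(J - M*(N - j)*H(j + 1)) == 0
```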

Remark
In carrying out above the effective Kohn algorithm for Catlin-D'Angelo's example, to get to the multiplier 1 from F_1 and F_2, we have to perform differentiation N + 2 times in the construction of Jacobian determinants. Compared with the full-real-radical Kohn algorithm, which requires only 4 differentiations to terminate, to avoid ineffectiveness in the taking of roots the effective Kohn algorithm chooses the option of performing more, but still an effective number of, differentiations.

Geometric reason for ineffectiveness of full-real-radical Kohn algorithm for Catlin-D'Angelo's example
The above discussion shows by computation why in Catlin-D'Angelo's example the full-real-radical Kohn algorithm is ineffective while the effective Kohn algorithm gives effectiveness. Now we would like to analyze geometrically why such a phenomenon occurs. When F_2 = z_2^N + z_2 z_1^K is regarded as a polynomial in z_2, its degree N is effective (in the sense of being bounded by an explicit function of q = M N), but its discriminant, obtained by eliminating z_2 from F_2 and ∂F_2/∂z_2, vanishes as a function germ in z_1 at z_1 = 0 to an order which is a function of K and is not effective. In other words, the N roots (in z_2) of F_2 = 0, as N functions of z_1, are close together near z_1 = 0 to an order which is a function of K and is not effective. The discriminant of F_2 and the closeness of the N roots of F_2 = 0 enter the picture because F_1 = z_1^M depends only on z_1 and the Jacobian determinant of F_1 and F_2 is the first multiplier in the algorithm. Because of the ineffectiveness of the vanishing order of the discriminant of F_2, as a function germ in z_1, at z_1 = 0, the step of root-taking is ineffective. On the other hand, the effective Kohn algorithm replaces the ineffective root-taking of the discriminant of a Weierstrass polynomial by differentiating the Weierstrass polynomial with respect to its variable as many times as its degree, so as to avoid the ineffective root-taking.
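The geometric point can be made quantitative with a short symbolic computation (N is fixed at the sample value 3 below, and the values of K are our own choices): the z_2-degree of F_2 stays at the effective value N while the vanishing order of its discriminant in z_1 grows linearly in the arbitrary K.

```python
# The degree of F2 in z2 is the fixed effective number N, but its discriminant
# in z2 vanishes in z1 to the order N*K, which grows with the arbitrary K.
from sympy import symbols, discriminant, degree

z1, z2 = symbols('z1 z2')
N = 3
for K in (5, 9, 14):
    F2 = z2**N + z2 * z1**K
    disc = discriminant(F2, z2)            # for N = 3 this is -4*z1**(3*K)
    assert degree(disc, z1) == N * K       # vanishing order at z1 = 0
```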

Multipliers in more general setting
We now discuss the generalization of Kohn's technique of multipliers to more general systems of partial differential equations.

Generalization of Kohn's technique of multiplier ideal sheaves to a more general setting
The generalization of Kohn's technique of multiplier ideal sheaves to a more general setting comes from looking at Kohn's technique for the complex Neumann problem from the following perspective. The subelliptic estimate for a bounded smooth weakly pseudoconvex domain Ω in C^n at its boundary point P seeks to estimate ‖|φ|‖_ε^2 = ‖Λ^ε φ‖^2 by a constant C_ε times Q(φ, φ) = ‖∂̄φ‖^2 + ‖∂̄*φ‖^2 + ‖φ‖^2 for some ε > 0, for all smooth test (0, 1)-forms φ on U ∩ Ω̄ with compact support which belong to the domain of the actual adjoint ∂̄* of ∂̄ (where U is an open neighborhood of P in C^n). For the convenience of discussion, we simply say that Λ^ε φ is estimable on U when ‖Λ^ε φ‖^2 ≤ C_ε Q(φ, φ) for all such φ. In general, we say that some expression ψ defined from φ is estimable on U (or simply estimable) if ‖ψ‖^2 ≤ C Q(φ, φ) for some constant C independent of φ (which is smooth on U ∩ Ω̄ with compact support). The starting point is the basic identity Q(φ, φ) = ‖∇̄φ‖^2 + ∫_{∂Ω} Levi_{∂Ω}(φ, φ) + ‖φ‖^2, where ∇̄ is the (covariant) differentiation of φ in the (0, 1)-direction and Levi_{∂Ω} is the Levi form ∂∂̄r of the boundary ∂Ω of Ω when Ω is locally defined by r < 0 with |dr| ≡ 1 on ∂Ω. In particular, ∇̄φ is estimable. Together with ‖φ‖^2 ≤ Q(φ, φ), this means that both ∇̄φ and ∂̄*φ, as well as φ, are estimable. The expressions ∇̄φ are linear combinations of first-order partial derivatives of the components of φ. A multiplier F means the estimability of Λ^ε(F φ) and a vector multiplier θ means the estimability of Λ^ε(θ · φ). Kohn's technique is to use the estimability of ∇̄φ, ∂̄*φ and φ and apply algebraic manipulations and integration by parts to construct, from the estimability of Λ^ε(F φ) and Λ^ε(θ · φ), other F′ and θ′ with estimable Λ^ε(F′ φ) and Λ^ε(θ′ · φ). For such manipulations it does not matter what the meaning of Q(φ, φ) is. Moreover, the operations of integration by parts are along the tangent directions of the boundary, because Λ^ε is the pseudo-differential operator corresponding to the (ε/2)-th power of 1 plus the Laplacian in tangential coordinates of the boundary.

Formulation of more general setting
For our generalization of the technique of multiplier ideal sheaves, we use the following simple setting which highlights the core argument of the technique. Fix an integer q ≥ 2. Let Ω be an open neighborhood of 0 in R^m and Y_j^ν be complex-valued smooth differential operators on Ω for 1 ≤ j ≤ N and 1 ≤ ν ≤ q. For any q-tuple φ = (φ_1, . . . , φ_q) of smooth complex-valued functions with compact support on Ω, let Q(φ, φ) = ∑_{j=1}^N ‖∑_{ν=1}^q Y_j^ν φ_ν‖^2 + ∑_{ν=1}^q ‖φ_ν‖^2, where ‖ · ‖ means the L^2 norm on Ω. An expression ψ of φ of the form ∑_{ν=1}^q Z_ν φ_ν (where each Z_ν is a pseudo-differential operator on Ω) is said to be estimable on an open neighborhood U of 0 in Ω (or simply estimable) if there is a positive constant C such that ‖ψ‖^2 ≤ C Q(φ, φ) for all q-tuples φ = (φ_1, . . . , φ_q) of smooth complex-valued test functions with compact support on U. When ψ is vector-valued instead of scalar-valued, the estimability of ψ on U means the estimability of each of its components on U. When we have two such expressions ψ and ψ̃, we say that the inner product (ψ, ψ̃) is estimable on U if |(ψ, ψ̃)| ≤ C Q(φ, φ) for some positive constant C and all q-tuples φ = (φ_1, . . . , φ_q) of smooth complex-valued test functions with compact support on U. We refer to C as the constant of estimability of ψ or of (ψ, ψ̃).

Multipliers in more general setting
Let Λ^ε be the pseudo-differential operator which is the (ε/2)-th power of 1 plus the Laplacian in the coordinates of R^m. We introduce three kinds of multipliers: (i) scalar multipliers, (ii) vector multipliers, and (iii) matrix multipliers.
The germ at 0 of a smooth function α is a scalar multiplier at 0 with order of subellipticity greater than or equal to ε if Λ^ε(α φ_ν) (for 1 ≤ ν ≤ q) is estimable on U for some ε > 0 and some open neighborhood U of 0 in Ω on which α is defined. Clearly, the product of a scalar multiplier with any smooth function is again a scalar multiplier with no change in the order of subellipticity. By considering the commutator [Λ^ε, α] of pseudo-differential operators, we conclude that for 0 < ε ≤ 1 the estimability of Λ^ε(α φ_ν) on U is equivalent to the estimability of α(Λ^ε φ_ν) on U, because ‖φ‖^2 is estimable on U.
The germ at 0 of a q-tuple ⃗a = (a_1, . . . , a_q) of smooth complex-valued functions is called a vector multiplier at 0 with order of subellipticity greater than or equal to ε if Λ^ε(∑_{ν=1}^q a_ν φ_ν) is estimable on U for some ε > 0 and some open neighborhood U of 0 in Ω on which ⃗a is defined. Clearly the product of a vector multiplier with any smooth function is again a vector multiplier with no change in the order of subellipticity. Again, for 0 < ε ≤ 1 the estimability of Λ^ε(∑_{ν=1}^q a_ν φ_ν) on U is equivalent to the estimability of ∑_{ν=1}^q a_ν(Λ^ε φ_ν) on U. A q × q matrix a = (a_{jk})_{1≤j,k≤q} is called a matrix multiplier at 0 with order of subellipticity greater than or equal to ε if every one of its rows ⃗a_j = (a_{j1}, . . . , a_{jq}) (for 1 ≤ j ≤ q) is a vector multiplier at 0 with order of subellipticity greater than or equal to ε. Clearly, a matrix multiplier multiplied on the left by a q × q matrix with smooth functions as entries yields a matrix multiplier with no change in the order of subellipticity.
Some simple relations among scalar multipliers, vector multipliers, and matrix multipliers are as follows. The product of a scalar multiplier with any row q-vector with smooth functions as components is a vector multiplier. The product of a scalar multiplier with any q × q matrix with smooth functions as entries is a matrix multiplier. Any vector multiplier (as a row vector) multiplied on the left by a column q-vector with smooth functions as components yields a matrix multiplier. By Cramer's rule the determinant of a matrix multiplier is a scalar multiplier. Any matrix multiplier multiplied on the left by a row q-vector with smooth functions as components yields a vector multiplier.
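The algebraic identity behind the Cramer's-rule remark is adj(a) · a = det(a) · I_q, which expresses det(a)φ_ν as a smooth-coefficient combination of the expressions ∑_ν a_{jν}φ_ν. This identity can be checked in exact arithmetic; the following sketch is our own illustration (the function names det and adjugate are ours), not part of the paper.

```python
from fractions import Fraction

def det(m):
    # determinant by cofactor expansion along the first row (exact arithmetic)
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(n))

def adjugate(m):
    # adjugate (classical adjoint): adj[p][l] is the cofactor of m[l][p],
    # i.e. the transpose of the cofactor matrix, so that adj(m) m = det(m) I
    n = len(m)
    return [[(-1) ** (l + p) * det([[m[r][c] for c in range(n) if c != p]
                                    for r in range(n) if r != l])
             for l in range(n)] for p in range(n)]

# verify adj(a) a = det(a) I for a concrete 3x3 matrix
a = [[Fraction(x) for x in row] for row in [[2, 1, 0], [1, 3, 1], [0, 1, 4]]]
d = det(a)
adj = adjugate(a)
for i in range(3):
    for k in range(3):
        assert sum(adj[i][l] * a[l][k] for l in range(3)) == (d if i == k else 0)
```

Since each entry of adj(a) is a polynomial in the entries of a (hence a smooth function), multiplying the matrix multiplier a on the left by adj(a) stays within the multiplier calculus, which is why det(a) becomes a scalar multiplier.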

Real radical property of multipliers in more general setting
Just like the real radical property (C) of Kohn's multipliers in Subsection 2.1, scalar multipliers here enjoy the same real radical property: if α is a scalar multiplier at 0 with order of subellipticity greater than or equal to ε (for some 0 < ε ≤ 1) and β is a smooth complex-valued function germ at 0 such that |β|^σ ≤ |α| for some σ ∈ N, then β is a scalar multiplier with order of subellipticity greater than or equal to ε/σ. The proof is completely analogous to the proof of [9, p. 98, Lemma 4.3.4] and is as follows. Let U be an open neighborhood of 0 in Ω such that both α and β are represented by smooth functions on U and φ is a test q-tuple of smooth functions on U with compact support. Let η = ε/σ. It suffices to prove the statement (5.1)_τ for 1 ≤ τ ≤ σ, and (5.1)_τ follows from descending induction on τ for 1 ≤ τ ≤ σ, because C and C̃ are constants independent of φ. In particular, if α is a scalar multiplier at 0 with order of subellipticity greater than or equal to ε (for some 0 < ε ≤ 1), then its complex conjugate ᾱ is also a scalar multiplier at 0 with order of subellipticity greater than or equal to ε.
A very important part of the technique of multiplier ideal sheaves is the differential relations among the scalar multipliers, vector multipliers, and matrix multipliers. These relations are embodied in two procedures involving differentiation. The first procedure produces a new vector multiplier from a matrix multiplier. The second procedure produces a new vector multiplier from a scalar multiplier. The second procedure is similar to the procedure (B)(i) for Kohn's multipliers for the complex Neumann problem in Subsection 2.1. The first procedure is a new one, even in the special case of Kohn's multipliers for the complex Neumann problem. The following theorem presents a unified version which yields both procedures as special cases.

Theorem (generation of vector multiplier from matrix multiplier or scalar multiplier)
Theorem 5.1. Let X_1, . . . , X_q be complex-valued smooth first-order differential operators on Ω whose adjoint operators with respect to the L² inner product on Ω are X*_1, . . . , X*_q, such that each X*_j φ is estimable on Ω for 1 ≤ j ≤ q. Let Γ_{kℓ} be a smooth complex-valued function on Ω for 1 ≤ k, ℓ ≤ q such that ∑_{1≤k,ℓ≤q} Γ_{kℓ} X_k φ_ℓ is estimable on Ω. Let ε_1 and ε_2 be positive numbers less than or equal to 1. Let a = (a_{jk})_{1≤j,k≤q} be a matrix of multipliers at 0 so that each of its rows ⃗a_j = (a_{jk})_{1≤k≤q} is a vector multiplier at 0 with order of subellipticity greater than or equal to ε_1 for 1 ≤ j ≤ q. Let α be a scalar multiplier at 0 with order of subellipticity greater than or equal to ε_2. Let (A_{jk})_{1≤j,k≤q} be a matrix of smooth complex-valued function germs at 0 such that ∑_{ℓ=1}^q A_{jℓ} a_{ℓk} equals the Kronecker delta δ_{jk} times α for 1 ≤ j, k ≤ q.
Let

b_j = ∑_{1≤p,k≤q} Γ_{pk} ∑_{ℓ=1}^q A_{kℓ} (X_p a_{ℓj}) for 1 ≤ j ≤ q

and ⃗b = (b_j)_{1≤j≤q}. Then ⃗b is a vector multiplier at 0 whose order of subellipticity is greater than or equal to (1/2) min(ε_1, ε_2). In particular, the following two special cases hold. (i) Let

c_j = ∑_{1≤p,k≤q} Γ_{pk} ∑_{ℓ=1}^q (adj(a))_{kℓ} (X_p a_{ℓj}) for 1 ≤ j ≤ q

and ⃗c = (c_j)_{1≤j≤q}, where adj(a) is the adjoint matrix of a (so that the matrix product of adj(a) and a is equal to det(a) times the identity matrix of order q). Then ⃗c is a vector multiplier at 0 whose order of subellipticity is greater than or equal to ε_1/2. (ii) Let d_j = ∑_{k=1}^q Γ_{kj} X_k α for 1 ≤ j ≤ q and ⃗d = (d_j)_{1≤j≤q}. Then ⃗d is a vector multiplier at 0 whose order of subellipticity is greater than or equal to ε_2/2.
Proof. Let ε = min(ε_1, ε_2). Let U be an open neighborhood of 0 in Ω such that α and the vector multipliers ⃗a_j = (a_{jk})_{1≤k≤q} (for 1 ≤ j ≤ q) are defined and smooth on U and Λ^ε(∑_{ν=1}^q a_{jν} φ_ν) is estimable on U for smooth test functions φ = (φ_1, . . . , φ_q) on U with compact support. Let ψ be a linear combination of φ_j (for 1 ≤ j ≤ q) with smooth functions on U as coefficients, which we will specify more precisely later. By the Cauchy-Schwarz inequality, for 1 ≤ p, ℓ ≤ q the inner product

( Λ^ε ( ∑_{ν=1}^q a_{ℓν} φ_ν ), X*_p ψ )

is estimable on U, because ⃗a_ℓ is a vector multiplier for 1 ≤ ℓ ≤ q and X*_p ψ is estimable (from the estimability of X*_p φ) for 1 ≤ p ≤ q. Note that the constant of estimability depends on the smooth coefficient functions in the linear combination ψ of φ_1, . . . , φ_q which are yet to be specified. Integration by parts applied to X_p (by switching X_p over to X*_p in the inner product) yields the estimability of

( Λ^ε ( ∑_{ν=1}^q X_p (a_{ℓν} φ_ν) ), ψ )    (5.2)

on U for 1 ≤ p, ℓ ≤ q after we take care of the error terms from the commutators of pseudodifferential operators in the standard way. Now we apply ∑_{ℓ=1}^q A_{kℓ} to (5.2) to get the estimability of

( Λ^ε ( ∑_{ℓ=1}^q ∑_{ν=1}^q A_{kℓ} X_p (a_{ℓν} φ_ν) ), ψ )    (5.3)

on U for 1 ≤ k ≤ q, which is the same as

( Λ^ε ( ∑_{ℓ=1}^q ∑_{ν=1}^q A_{kℓ} (X_p a_{ℓν}) φ_ν + α X_p φ_k ), ψ ),

because ∑_{ℓ=1}^q A_{jℓ} a_{ℓk} = α δ_{jk} for 1 ≤ j, k ≤ q. We apply ∑_{1≤p,k≤q} Γ_{pk} to (5.3) to get the estimability of

( Λ^ε ( ∑_{1≤p,k≤q} Γ_{pk} ∑_{ℓ=1}^q ∑_{ν=1}^q A_{kℓ} X_p (a_{ℓν} φ_ν) ), ψ )

on U, which is the same as

( Λ^ε ( ∑_{ν=1}^q b_ν φ_ν ), ψ ) + ( Λ^ε ( α ∑_{1≤p,k≤q} Γ_{pk} X_p φ_k ), ψ )

up to estimable error terms from the commutators of pseudodifferential operators. Since ᾱ is a scalar multiplier at 0 (on account of α being a scalar multiplier at 0), from the estimability of ∑_{1≤p,k≤q} Γ_{pk} (X_p φ_k) on U and the Cauchy-Schwarz inequality we conclude that ( Λ^ε ( α ∑_{1≤p,k≤q} Γ_{pk} X_p φ_k ), ψ ) is estimable on U. Hence

( Λ^ε ( ∑_{ν=1}^q b_ν φ_ν ), ψ )

is estimable on U.
We can now choose ψ = ∑_{ν=1}^q b_ν φ_ν, so that the estimability of ( Λ^ε ( ∑_{ν=1}^q b_ν φ_ν ), ψ ) yields the estimability of ∥ Λ^{ε/2} ( ∑_{ν=1}^q b_ν φ_ν ) ∥² up to estimable error terms from the commutators of pseudodifferential operators. This means that ⃗b is a vector multiplier at 0 whose order of subellipticity is greater than or equal to ε/2. We now look at the two special cases. The special case (i) follows from setting (A_{jk})_{1≤j,k≤q} to be the adjoint matrix adj(a) of the matrix a and setting α to be det(a) with ε_2 = ε_1. The special case (ii) follows from setting A_{jk} to be the Kronecker delta δ_{jk} for 1 ≤ j, k ≤ q and setting a_{jk} to be αδ_{jk} for 1 ≤ j, k ≤ q with ε_1 = ε_2.
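As a purely algebraic consistency check of Theorem 5.1, one can verify in exact arithmetic that the expression b_j = ∑_{1≤p,k≤q} Γ_{pk} ∑_{ℓ=1}^q A_{kℓ} (X_p a_{ℓj}) reduces to ⃗d in special case (ii), and that A = adj(a), α = det(a) satisfies the hypothesis ∑_ℓ A_{jℓ} a_{ℓk} = αδ_{jk} in special case (i). The sketch below is our own illustration under those stated readings; the values X_p a_{ℓj} and X_p α are treated as free numeric data, and all names are ours.

```python
from fractions import Fraction
import random

random.seed(0)
q = 3
rnd = lambda: Fraction(random.randint(-3, 3))

def det(m):
    # determinant by cofactor expansion along the first row (exact arithmetic)
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def adjugate(m):
    # adjugate: adj[p][l] is the cofactor of m[l][p], so adj(m) m = det(m) I
    n = len(m)
    return [[(-1) ** (l + p) * det([[m[r][c] for c in range(n) if c != p]
                                    for r in range(n) if r != l])
             for l in range(n)] for p in range(n)]

def b_vec(Gamma, A, D):
    # b_j = sum_{p,k} Gamma_{pk} sum_l A_{kl} (X_p a_{lj});
    # D[p][l][j] stands for the numeric value of X_p a_{lj} at a point
    return [sum(Gamma[p][k] * A[k][l] * D[p][l][j]
                for p in range(q) for k in range(q) for l in range(q))
            for j in range(q)]

Gamma = [[rnd() for _ in range(q)] for _ in range(q)]

# special case (ii): a = alpha * I and A = I, so X_p a_{lj} = delta_{lj} X_p alpha
X_alpha = [rnd() for _ in range(q)]          # stands for the values X_p alpha
I = [[Fraction(int(i == j)) for j in range(q)] for i in range(q)]
D2 = [[[X_alpha[p] if l == j else Fraction(0) for j in range(q)]
       for l in range(q)] for p in range(q)]
d = [sum(Gamma[k][j] * X_alpha[k] for k in range(q)) for j in range(q)]
assert b_vec(Gamma, I, D2) == d              # b reduces to d in case (ii)

# hypothesis of special case (i): adj(a) a = det(a) I
a = [[rnd() for _ in range(q)] for _ in range(q)]
A = adjugate(a)
for i in range(q):
    for k in range(q):
        assert sum(A[i][l] * a[l][k] for l in range(q)) == (det(a) if i == k else 0)
```

The check for case (i) is immediate, since setting A = adj(a) in b_j literally gives c_j; the reduction to d_j in case (ii) collapses the ℓ-sum through the Kronecker deltas, as the assertions confirm.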

Remark
Though Theorem 5.1 is presented as involving interior estimates, the same argument works also in boundary situations like the complex Neumann problem where, for the argument, integration by parts is needed only for the boundary tangential directions, which do not affect the condition for the test forms to be in the domain of the actual adjoint ∂̄* of ∂̄. The special case (i) of Theorem 5.1, after modification for application to the situation of the complex Neumann problem for special domains, gives a procedure to generate a vector multiplier from a matrix multiplier. In Subsection 6.5 below, computations of examples are given to show that this procedure is a new procedure of generating vector multipliers for special domains in C^n with n ≥ 4.

Estimable linear combinations and initial multipliers
The goal of the technique of multiplier ideal sheaves is to use the differential relations among the multipliers and some initial multipliers to conclude, under some geometric conditions, that the function which is identically 1 is a scalar multiplier. An increase in the differential relations among the multipliers facilitates the achievement of the goal. Theorem 5.1 uses the collection (Γ_{kℓ})_{1≤k,ℓ≤q} of smooth functions on Ω to construct a new vector multiplier from a matrix multiplier. The condition on the collection (Γ_{kℓ})_{1≤k,ℓ≤q} of smooth functions on Ω is that ∑_{1≤k,ℓ≤q} Γ_{kℓ} X_k φ_ℓ is estimable on Ω. For that reason we refer to the collection (Γ_{kℓ})_{1≤k,ℓ≤q} of smooth functions on Ω as an estimable linear combination. To facilitate the construction of new vector multipliers, we can use a family of such estimable linear combinations (Γ^{(λ)}_{kℓ})_{1≤k,ℓ≤q} indexed by 1 ≤ λ ≤ λ* instead of a single one. There remains the crucial question of geometric conditions which guarantee the solution of the regularity problem. Such a condition (which is similar to the condition of finite type for the complex Neumann problem) should be a condition on the family of estimable linear combinations (Γ^{(λ)}_{kℓ})_{1≤k,ℓ≤q} for 1 ≤ λ ≤ λ* and the choice of initial scalar multipliers α^{(σ)} (for 1 ≤ σ ≤ σ*) and initial vector multipliers ⃗a^{(τ)} (for 1 ≤ τ ≤ τ*). This question has not yet been satisfactorily answered.

New procedure to generate vector multiplier from matrix multiplier in complex Neumann problem of special domain
We now modify the argument in the special case (i) of Theorem 5.1 to apply to the complex Neumann problem, obtaining a new procedure of generating a vector multiplier from a matrix multiplier. This new procedure works for any bounded weakly pseudoconvex domain with smooth boundary, but we will carry out the modification only for a special domain, because the notation for a special domain has already been introduced here and makes the argument easier to present. Then we show by explicit computation for some special domains in C^4 that this new way of generating a vector multiplier cannot be derived from Kohn's procedures in Subsection 2.1.

New procedure to generate vector multiplier from matrix multiplier for the complex Neumann problem
Let Ω be a special domain in C^{n+1} (with coordinates w, z_1, . . . , z_n) defined by holomorphic functions F_j(z_1, . . . , z_n) on some open neighborhood of Ω̄, as described in Subsection 2.8.1. For the complex Neumann problem for the special domain Ω in C^{n+1}, the roles of the vector fields X_1, . . . , X_q are played by ∂_j = ∂/∂z_j for 1 ≤ j ≤ n, the roles of X*_1, . . . , X*_q are played by ∂̄_j = ∂/∂z̄_j for 1 ≤ j ≤ n, and the role of Γ_{jk} for 1 ≤ j, k ≤ q is played by the Kronecker delta δ_{jk} for 1 ≤ j, k ≤ n. Let a = (a_{jk})_{1≤j,k≤n} be a matrix whose entry a_{jk} is a holomorphic function of z_1, . . . , z_n defined on an open neighborhood of 0 in C^n for 1 ≤ j, k ≤ n such that each row vector ⃗a_j = (a_{jk})_{1≤k≤n} is a vector multiplier at 0 with order of subellipticity greater than or equal to η (for some 0 < η ≤ 1). Let U be an open neighborhood of 0 in C^{n+1} such that each a_{jk}, as a holomorphic function of z_1, . . . , z_n, w which is independent of w, is defined on U.
Let φ = ∑_{j=1}^n φ_j dz_j + φ dw be a smooth test (1, 0)-form on Ω̄ ∩ U with compact support which is in the domain of the actual adjoint ∂̄* of ∂̄. Let ψ be a scalar function which is a linear combination of the φ_j with smooth functions as coefficients, which we will specify more precisely later. Let 0 < 2ε ≤ η. The L² inner product

( Λ^{2ε} ( ∑_{j=1}^n a_{ℓj} φ_j ), ∂̄_p ψ )

is estimable on U by the Cauchy-Schwarz inequality, from the assumption that ⃗a_ℓ = ∑_{j=1}^n a_{ℓj} dz_j is a vector multiplier at 0 with order of subellipticity greater than or equal to η ≥ 2ε for 1 ≤ ℓ ≤ n and the assumption that the L² norm of ∂̄_p ψ is estimable on U for 1 ≤ p ≤ n (from the estimability of ∂̄_p φ_j on U for any 1 ≤ p, j ≤ n). Integration by parts applied to ∂̄_p (by switching ∂̄_p over to ∂_p in the inner product) yields the estimability of

( Λ^{2ε} ( ∑_{j=1}^n ∂_p (a_{ℓj} φ_j) ), ψ )    (6.1)

on U after we take care of the error terms from the commutators of operators in the standard way. Let (A_{qℓ})_{1≤q,ℓ≤n} be the adjoint matrix of a so that ∑_{j=1}^n A_{ij} a_{jk} = (det a) δ_{ik} for 1 ≤ i, k ≤ n (where δ_{jk} is the Kronecker delta). Now for 1 ≤ p ≤ n we apply ∑_{ℓ=1}^n A_{pℓ} to (6.1) to get the estimability of

( Λ^{2ε} ( ∑_{ℓ,j=1}^n A_{pℓ} (∂_p a_{ℓj}) φ_j + A_{pℓ} a_{ℓj} (∂_p φ_j) ), ψ )    (6.2)

on U, which is the same as

( Λ^{2ε} ( ∑_{ℓ,j=1}^n A_{pℓ} (∂_p a_{ℓj}) φ_j + (det a)(∂_p φ_p) ), ψ ),

because ∑_{j=1}^n A_{ij} a_{jk} = (det a) δ_{ik} for 1 ≤ i, k ≤ n. We now sum (6.2) over 1 ≤ p ≤ n to get the estimability of

( Λ^{2ε} ( ∑_{p,ℓ,j=1}^n A_{pℓ} (∂_p a_{ℓj}) φ_j ), ψ ) + ( ∑_{p=1}^n ∂_p φ_p , Λ^{2ε} ((det a) ψ) ).
As a determinant whose rows are vector multipliers with order of subellipticity greater than or equal to η ≥ 2ε, the determinant det(a) (as well as its complex conjugate) is a scalar multiplier with order of subellipticity greater than or equal to η ≥ 2ε, so that the second term ( ∑_{p=1}^n ∂_p φ_p , Λ^{2ε} ((det a) ψ) ) is estimable on U. With ψ chosen to be ∑_{j=1}^n ( ∑_{1≤p,ℓ≤n} A_{pℓ} (∂_p a_{ℓj}) ) φ_j, the first term ( Λ^{2ε} ( ∑_{p,ℓ,j=1}^n A_{pℓ} (∂_p a_{ℓj}) φ_j ), ψ ) is equal to ∥ Λ^ε ψ ∥²_{L²(Ω)} up to estimable error terms. This means that the (1, 0)-form ∑_{j=1}^n ( ∑_{1≤p,ℓ≤n} A_{pℓ} (∂_p a_{ℓj}) ) dz_j is a vector multiplier with order of subellipticity greater than or equal to η/2. This is a new process of producing vector multipliers from a matrix of vector multipliers. We now summarize the result in the following theorem.
Theorem 6.1. Let Ω be a special domain in C^{n+1} (with coordinates w, z_1, . . . , z_n) defined by (2.1). Assume that 0 is a boundary point of Ω. Let a = (a_{jk})_{1≤j,k≤n} be a matrix whose entry a_{jk} is a holomorphic function of z_1, . . . , z_n defined on an open neighborhood of 0 in C^n for 1 ≤ j, k ≤ n such that each row vector ⃗a_j = (a_{jk})_{1≤k≤n} is a vector multiplier at 0 with order of subellipticity greater than or equal to η (for some 0 < η ≤ 1). Let adj(a) be the adjoint matrix of a. Then the holomorphic (1, 0)-form

∑_{j=1}^n ( ∑_{1≤p,ℓ≤n} (adj(a))_{pℓ} (∂_p a_{ℓj}) ) dz_j    (6.3)

is a vector multiplier at 0 with order of subellipticity greater than or equal to η/2, where (adj(a))_{pℓ} is the entry of adj(a) in the p-th row and the ℓ-th column.

Comparison with known procedure of generating vector multiplier from matrix multiplier
In the case of a special domain, the known procedure (B)(i)-(B)(ii) in Subsection 2.1 to generate a vector multiplier from a given matrix multiplier a is to first use (B)(ii) in Subsection 2.1 to get the determinant det(a) of a as a scalar multiplier and then use (B)(i) in Subsection 2.1 to get the (1, 0)-form ∂(det(a)) as a vector multiplier. Here, we use the same notation as in Subsection 6.2.
We would like to compare ∂(det(a)) with the vector multiplier from Theorem 6.1. Since the differential of a determinant satisfies d(det(a)) = tr(adj(a) da), we have

∂(det(a)) = ∑_{1≤j,k≤n} (adj(a))_{kj} (∂ a_{jk}),    (6.4)

which is different from the vector multiplier (6.3): in (6.3) the index j of a_{ℓj} is used as the index for the component of the vector multiplier, whereas in (6.4) it is the subscript j of ∂_j that is the index for the component of the vector multiplier. As shown in the computations given below in Subsections 6.4 and 6.5, it turns out that in the case of n = 2 the old procedure (B)(i)-(B)(ii) in Subsection 2.1 produces the same result as the new procedure given in Theorem 6.1, but in the case of n ≥ 3 the new procedure indeed gives some new vector multipliers different from those produced by the procedures (B)(i)-(B)(ii).
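To make the comparison concrete, the following sketch evaluates (6.3) and (6.4) exactly for the particular 2 × 2 matrix a = [[z_1, z_2], [z_2, z_1]] (a toy example of our own choosing, not taken from the paper; its rows happen to be exact holomorphic 1-forms). For this matrix the two procedures produce the same form, consistent with the n = 2 case discussed above. Polynomials are represented as sparse dictionaries from exponent tuples to coefficients; all function names are ours.

```python
from fractions import Fraction

# sparse polynomials in (z1, z2): dict mapping exponent pairs to coefficients
def pmul(f, g):
    h = {}
    for e1, c1 in f.items():
        for e2, c2 in g.items():
            e = (e1[0] + e2[0], e1[1] + e2[1])
            h[e] = h.get(e, Fraction(0)) + c1 * c2
    return {e: c for e, c in h.items() if c != 0}

def padd(f, g):
    h = dict(f)
    for e, c in g.items():
        h[e] = h.get(e, Fraction(0)) + c
    return {e: c for e, c in h.items() if c != 0}

def pneg(f):
    return {e: -c for e, c in f.items()}

def pderiv(f, i):
    # partial derivative with respect to z_{i+1}
    h = {}
    for e, c in f.items():
        if e[i] > 0:
            e2 = (e[0] - 1, e[1]) if i == 0 else (e[0], e[1] - 1)
            h[e2] = h.get(e2, Fraction(0)) + c * e[i]
    return h

def psum(terms):
    out = {}
    for t in terms:
        out = padd(out, t)
    return out

z1, z2 = {(1, 0): Fraction(1)}, {(0, 1): Fraction(1)}
a = [[z1, z2], [z2, z1]]                                     # toy matrix
adj = [[a[1][1], pneg(a[0][1])], [pneg(a[1][0]), a[0][0]]]   # 2x2 adjugate
det = padd(pmul(a[0][0], a[1][1]), pneg(pmul(a[0][1], a[1][0])))

# (6.3): j-th component is sum_{p,l} adj[p][l] * d a[l][j] / d z_p
new = [psum(pmul(adj[p][l], pderiv(a[l][j], p))
            for p in range(2) for l in range(2)) for j in range(2)]
# (6.4): j-th component of d(det a), i.e. d(det a) / d z_j
old = [pderiv(det, j) for j in range(2)]

assert new == old   # here both give 2 z1 dz1 - 2 z2 dz2
```

For a matrix whose rows are not closed 1-forms the raw expressions (6.3) and (6.4) need not coincide, which is what the explicit computations of Subsections 6.4 and 6.5 analyze within the multiplier module.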

New procedure gives no new vector multipliers for 3-dimensional special domain
We explicitly compute (6.3) and (6.4) in the case of a special domain Ω in C^3 to determine whether the result (6.3) from the new procedure is different from the result (6.4) from the old procedure. We need only consider holomorphic functions and holomorphic 1-forms on C^2 as scalar and vector multipliers. Let ⃗a_j = a_{j1} dz_1 + a_{j2} dz_2 (for j = 1, 2) be two holomorphic 1-forms which are vector multipliers. For the matrix multiplier a = (a_{jk})_{1≤j,k≤2}, the adjoint matrix adj(a) is

( a_{22}  −a_{12} ; −a_{21}  a_{11} ).

New procedure gives more vector multipliers for 4-dimensional special domain
The new procedure of generating vector multipliers already gives vector multipliers different from those generated by the procedures (B)(i)-(B)(ii) in Subsection 2.1.