The laws of large numbers for Pareto-type random variables under sub-linear expectation

Binxia CHEN, Qunying WU

Front. Math. China, 2022, Vol. 17, Issue 5: 783–796. DOI: 10.1007/s11464-022-1026-x

RESEARCH ARTICLE

Abstract

In this paper, some laws of large numbers are established for random variables satisfying a Pareto-type distribution, thereby extending the corresponding results in the traditional probability space to the sub-linear expectation space. Based on the Pareto-type distribution, we obtain weak and strong laws of large numbers for weighted sums of certain sequences of independent random variables.

Keywords

Sub-linear expectation / Pareto-type distribution / laws of large numbers / independent and identically distributed

Cite this article

Binxia CHEN, Qunying WU. The laws of large numbers for Pareto-type random variables under sub-linear expectation. Front. Math. China, 2022, 17(5): 783–796. DOI: 10.1007/s11464-022-1026-x


1 Introduction

Since the beginning of the 20th century, significant progress has been made in the theory of classical probability spaces, driving the development of fields such as mathematical statistics and econometrics. In a classical probability space, both probabilities and expectations are additive, yet many real-world problems are not. A new framework is therefore needed to analyze the probability and distribution uncertainties hidden behind problems such as risk management and quantum mechanics. To this end, Peng [10-12] introduced the notion of a sub-linear expectation space, in which probability and expectation are replaced by capacity and sub-linear expectation; phenomena lacking additivity in various domains can then be described by capacity and sub-linear expectation, remedying the shortcomings of the traditional probabilistic framework when applied to problems with uncertainty. Since then, a growing literature on the theory and applications of sub-linear expectation spaces has emerged, enriching and generalizing the corresponding results in classical probability spaces. For example, Zhang [16-18] established a series of important inequalities, such as exponential inequalities and Rosenthal's inequality, as well as the strong law of large numbers (SLLN).

In classical probability spaces, the weak law of large numbers (WLLN) and the SLLN have been studied extensively and have found important applications in many fields. While the limit theory in the classical setting is by now mature, the study of limit theorems under sub-linear expectation is considerably harder: the capacity and the sub-linear expectation are not additive, and many moment and exponential inequalities are difficult to establish. In recent years, more and more scholars have studied the WLLN and SLLN in sub-linear expectation spaces, and the body of results is growing rapidly. Representative work includes the following: Chen [3] obtained a WLLN for sequences of independent random variables in the sub-linear expectation space; Peng [13] and Zhang [17] obtained Kolmogorov-type SLLNs under different conditions; Chen et al. [4,5] obtained three different SLLNs in the sub-linear expectation space. Building on Chen [4], Hu [6,7] extended some of these results under general moment conditions. Wu and Jiang [14] carried out a systematic study of the SLLN under sub-linear expectations and extended the results of Chen [4], Hu [6], Marinacci [9], and Zhang [17] to the general case. Ma and Wu [8] obtained SLLNs for weighted sums of sequences of extended negatively dependent (END) random variables in the sub-linear expectation space using the method of Wu and Jiang [14].

In the classical probability space, Yang et al. [15] established laws of large numbers for Pareto-type random variables and obtained WLLNs and SLLNs for weighted sums of negatively superadditive-dependent (NSD) random variables. Similarly, in the sub-linear expectation space we can establish laws of large numbers for random variables obeying a Pareto-type distribution, thereby generalizing the relevant conclusions from the classical probability space to the sub-linear expectation space. Specifically, for all $x > 0$, we study the convergence in capacity of weighted sums of a sequence of random variables whose tails satisfy

$$\mathbb{V}(X_n > x) = \frac{1}{x + c_n}, \tag{1.1}$$

where $\{c_n\}$ is a non-decreasing sequence of constants with $c_n \ge 1$ satisfying

$$C_n := \sum_{j=1}^{n}\frac{1}{c_j} \to \infty. \tag{1.2}$$

The symbol “$\to$” denotes convergence as $n \to \infty$.

Moreover, Yang et al. [15] studied the SLLN mainly through the tails of the Pareto probability distribution. Similarly, we mainly use the form of the tail capacity of the Pareto distribution when studying the almost sure convergence (with respect to the capacity) of weighted sums of a sequence of Pareto-type random variables in the sub-linear expectation space. A random variable $X$ is said to obey the Pareto-type distribution if its tail capacities satisfy

$$\mathbb{V}(X > x) \sim \frac{1}{x}, \qquad \mathbb{V}(X < -x) \sim \frac{1}{x}, \qquad x > 1. \tag{1.3}$$

The symbol “$\sim$” denotes asymptotic equivalence, i.e., $a_x \sim b_x$ means $\lim_{x \to \infty} a_x / b_x = 1$.
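For orientation, here is a concrete classical instance of (1.3), stated under the additional assumption that $\mathbb{V}$ is an ordinary probability measure (the paper itself works with general capacities): the two-tailed Pareto law with density $f(x) = x^{-2}$ for $|x| \ge 2$ and $f(x) = 0$ otherwise satisfies
$$\mathbb{V}(X > x) = \mathbb{V}(X < -x) = \int_x^{\infty} t^{-2}\,dt = \frac{1}{x}, \qquad x \ge 2,$$
so both tail conditions in (1.3) hold with equality for large $x$. Under a genuinely sub-linear capacity the two tails need not be tied together in this way, since $\mathbb{V}$ is not additive.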

2 Preliminaries

We use the framework and notation of Peng [10-12]. Let $(\Omega, \mathcal{F})$ be a given measurable space and let $\mathcal{H}$ be a linear space of real functions defined on $(\Omega, \mathcal{F})$ such that if $X_1, \ldots, X_n \in \mathcal{H}$ then $\varphi(X_1, \ldots, X_n) \in \mathcal{H}$ for each $\varphi \in C_{l,\mathrm{Lip}}(\mathbb{R}^n)$, where $C_{l,\mathrm{Lip}}(\mathbb{R}^n)$ denotes the linear space of local Lipschitz functions $\varphi$ satisfying

$$|\varphi(x) - \varphi(y)| \le c\,(1 + |x|^m + |y|^m)\,|x - y|, \qquad \forall\, x, y \in \mathbb{R}^n,$$

for some $c > 0$ and $m \in \mathbb{N}$ depending on $\varphi$. $\mathcal{H}$ is considered as a space of random variables; in this case we write $X \in \mathcal{H}$.

Definition 2.1 [11]. A sub-linear expectation is a functional $\widehat{\mathbb{E}}: \mathcal{H} \to \mathbb{R}$ satisfying the following properties: for all $X, Y \in \mathcal{H}$,

1. Monotonicity: if $X \ge Y$, then $\widehat{\mathbb{E}}(X) \ge \widehat{\mathbb{E}}(Y)$;

2. Constant preserving: $\widehat{\mathbb{E}}(c) = c$;

3. Sub-additivity: $\widehat{\mathbb{E}}(X + Y) \le \widehat{\mathbb{E}}(X) + \widehat{\mathbb{E}}(Y)$;

4. Positive homogeneity: $\widehat{\mathbb{E}}(\lambda X) = \lambda\,\widehat{\mathbb{E}}(X)$ for $\lambda > 0$.

The triple $(\Omega, \mathcal{H}, \widehat{\mathbb{E}})$ is called a sub-linear expectation space, and $\widehat{\mathbb{E}}$ the sub-linear expectation. The conjugate expectation $\widehat{\varepsilon}$ of $\widehat{\mathbb{E}}$ is defined by

$$\widehat{\varepsilon}(X) := -\widehat{\mathbb{E}}(-X), \qquad \forall X \in \mathcal{H}.$$

It is easily shown that

$$\widehat{\varepsilon}(X) \le \widehat{\mathbb{E}}(X), \quad \widehat{\mathbb{E}}(X + c) = \widehat{\mathbb{E}}(X) + c, \quad \big|\widehat{\mathbb{E}}(X - Y)\big| \le \widehat{\mathbb{E}}|X - Y|, \quad \widehat{\mathbb{E}}(X - Y) \ge \widehat{\mathbb{E}}(X) - \widehat{\mathbb{E}}(Y), \qquad \forall X, Y \in \mathcal{H}.$$

If $\widehat{\mathbb{E}}(Y) = \widehat{\varepsilon}(Y)$, then

$$\widehat{\mathbb{E}}(X + aY) = \widehat{\mathbb{E}}(X) + a\,\widehat{\mathbb{E}}(Y), \qquad \forall\, a \in \mathbb{R}.$$
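A standard example in this literature (given here only as an illustration): if $\{P_\theta : \theta \in \Theta\}$ is a family of probability measures on $(\Omega, \mathcal{F})$, then the upper expectation
$$\widehat{\mathbb{E}}(X) := \sup_{\theta \in \Theta} E_{P_\theta}[X], \qquad X \in \mathcal{H},$$
satisfies properties 1–4 of Definition 2.1: monotonicity and constant preserving are inherited from each $E_{P_\theta}$, sub-additivity follows from $\sup_\theta(a_\theta + b_\theta) \le \sup_\theta a_\theta + \sup_\theta b_\theta$, and positive homogeneity follows by pulling $\lambda > 0$ out of each linear expectation. Its conjugate expectation is $\widehat{\varepsilon}(X) = -\widehat{\mathbb{E}}(-X) = \inf_{\theta \in \Theta} E_{P_\theta}[X]$, which makes the inequality $\widehat{\varepsilon}(X) \le \widehat{\mathbb{E}}(X)$ above transparent.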

Definition 2.2 [11]. Let $\mathcal{G} \subset \mathcal{F}$. A function $V: \mathcal{G} \to [0, 1]$ is called a capacity if it satisfies the following properties:

1. $V(\emptyset) = 0$, $V(\Omega) = 1$;

2. $V(A) \le V(B)$ whenever $A \subset B$, $A, B \in \mathcal{G}$.

It is called sub-additive if $V(A \cup B) \le V(A) + V(B)$ for all $A, B \in \mathcal{G}$.

In the sub-linear expectation space $(\Omega, \mathcal{H}, \widehat{\mathbb{E}})$, we define a pair $(\mathbb{V}, \nu)$ of capacities by

$$\mathbb{V}(A) := \inf\big\{\widehat{\mathbb{E}}(\xi) : I(A) \le \xi,\ \xi \in \mathcal{H}\big\}, \qquad \nu(A) := 1 - \mathbb{V}(A^c), \qquad A \in \mathcal{F},$$

where $I(\cdot)$ is the indicator function and $A^c$ is the complement of $A$. From the definition,

$$\nu(A) \le \mathbb{V}(A), \qquad \forall A \in \mathcal{F}.$$

If $I(A) \in \mathcal{H}$, then

$$\mathbb{V}(A) = \widehat{\mathbb{E}}\big(I(A)\big), \qquad \nu(A) = \widehat{\varepsilon}\big(I(A)\big).$$

If $f \le I(A) \le g$ with $f, g \in \mathcal{H}$, then

$$\widehat{\mathbb{E}}(f) \le \mathbb{V}(A) \le \widehat{\mathbb{E}}(g), \qquad \widehat{\varepsilon}(f) \le \nu(A) \le \widehat{\varepsilon}(g).$$

Therefore, Markov’s inequality

$$\mathbb{V}(|X| \ge x) \le \frac{\widehat{\mathbb{E}}(|X|^p)}{x^p}, \qquad \forall X \in \mathcal{H},\ x > 0,\ p > 0, \tag{2.1}$$

can be derived from $I(|X| \ge x) \le |X|^p / x^p \in \mathcal{H}$.
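Spelled out, the derivation is: $0 \le I(|X| \ge x) \le |X|^p / x^p$ pointwise, so by the definition of $\mathbb{V}$ (an infimum over dominating elements of $\mathcal{H}$) and positive homogeneity,
$$\mathbb{V}(|X| \ge x) \le \widehat{\mathbb{E}}\Big(\frac{|X|^p}{x^p}\Big) = \frac{\widehat{\mathbb{E}}(|X|^p)}{x^p}.$$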

Definition 2.3 [11]. We define the Choquet integral/expectation by

$$C_V(X) := \int_0^{\infty} V(X > x)\,dx + \int_{-\infty}^{0}\big(V(X > x) - 1\big)\,dx, \tag{2.2}$$

where $V$ may be taken to be the upper capacity $\mathbb{V}$ or the lower capacity $\nu$, yielding $C_{\mathbb{V}}$ and $C_{\nu}$ respectively.
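One consequence worth recording (not stated explicitly in the paper, but it is the sub-linear analogue of the infinite-mean setting of Yang et al. [15]): if $X$ satisfies (1.3) (and similarly if $X_n$ satisfies (1.1)), then $\mathbb{V}(|X| > x) \ge \mathbb{V}(X > x) \ge \frac{1}{2x}$ for all sufficiently large $x$, so
$$C_{\mathbb{V}}(|X|) = \int_0^{\infty}\mathbb{V}(|X| > x)\,dx = \infty.$$
This is why the results of Section 3 use growing normalizers ($b_n = C_n \log C_n$ in Theorem 3.1 and $\log^{\beta} n$ in Theorem 3.2) rather than centering at a finite mean.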

Definition 2.4 [11]. 1. $\widehat{\mathbb{E}}$ is called countably sub-additive if $X \le \sum_{n=1}^{\infty} X_n$, with $X, X_n \in \mathcal{H}$, $X \ge 0$, $X_n \ge 0$, $n \ge 1$, implies $\widehat{\mathbb{E}}(X) \le \sum_{n=1}^{\infty}\widehat{\mathbb{E}}(X_n)$.

2. If $V$ satisfies
$$V\Big(\bigcup_{n=1}^{\infty} A_n\Big) \le \sum_{n=1}^{\infty} V(A_n), \qquad \forall A_n \in \mathcal{F},$$
then we call $V$ countably sub-additive.

Definition 2.5 [11]. 1. Identical distribution: Let $(\Omega_1, \mathcal{H}_1, \widehat{\mathbb{E}}_1)$ and $(\Omega_2, \mathcal{H}_2, \widehat{\mathbb{E}}_2)$ be two sub-linear expectation spaces and $X_i \in \mathcal{H}_i$, $i = 1, 2$. $X_1$ and $X_2$ are said to be identically distributed, denoted by $X_1 \overset{d}{=} X_2$, if

$$\widehat{\mathbb{E}}_1[\varphi(X_1)] = \widehat{\mathbb{E}}_2[\varphi(X_2)], \qquad \forall \varphi \in C_{l,\mathrm{Lip}}(\mathbb{R}^n).$$

2. Independence: Let $(\Omega, \mathcal{H}, \widehat{\mathbb{E}})$ be a sub-linear expectation space, $Y = (Y_1, \ldots, Y_n)$ with $Y_i \in \mathcal{H}$, and $X = (X_1, \ldots, X_m)$ with $X_i \in \mathcal{H}$. $Y$ is said to be independent of $X$ under $\widehat{\mathbb{E}}$ if, for each test function $\varphi \in C_{l,\mathrm{Lip}}(\mathbb{R}^m \times \mathbb{R}^n)$, we have $\widehat{\mathbb{E}}[\varphi(X, Y)] = \widehat{\mathbb{E}}\big[\widehat{\mathbb{E}}[\varphi(x, Y)]\big|_{x = X}\big]$ whenever $\overline{\varphi}(x) := \widehat{\mathbb{E}}[|\varphi(x, Y)|] < \infty$ for all $x$ and $\widehat{\mathbb{E}}[|\overline{\varphi}(X)|] < \infty$.

Definition 2.6 [14]. A sequence of random variables $\{X_n, n \ge 1\}$ is said to converge to $X$ almost surely with respect to $V$ (written $X_n \to X$ a.s. $V$) if $V(X_n \not\to X) = 0$. Here $V$ may be taken to be $\mathbb{V}$ or $\nu$.

For every $\varepsilon > 0$, we have
$$X_n \to X \ \text{a.s. } \mathbb{V} \iff \nu(X_n \to X) = 1 \iff \mathbb{V}\big(|X_n - X| > \varepsilon,\ \text{i.o.}\big) = 0.$$

Proof. Since $\nu(A) + \mathbb{V}(A^c) = 1$, we know that $\mathbb{V}(X_n \not\to X) = 0 \iff \nu(X_n \to X) = 1$. To prove that $\nu(X_n \to X) = 1 \iff \mathbb{V}(|X_n - X| > \varepsilon,\ \text{i.o.}) = 0$ for every $\varepsilon > 0$, first note that

$$\Big\{\lim_{m \to \infty} X_m = X\Big\} = \big\{\forall \varepsilon > 0,\ \exists m,\ \forall n > m,\ |X_n - X| < \varepsilon\big\} = \bigcap_{\varepsilon > 0}\bigcup_{m=1}^{\infty}\bigcap_{n=m}^{\infty}\big(|X_n - X| < \varepsilon\big).$$

According to $\nu(A) + \mathbb{V}(A^c) = 1$, we have

$$
\begin{aligned}
\nu(X_n \to X) = 1 &\iff \nu\Big\{\lim_{m \to \infty} X_m = X\Big\} = 1 \iff \nu\Big\{\bigcap_{\varepsilon > 0}\bigcup_{m=1}^{\infty}\bigcap_{n=m}^{\infty}\big(|X_n - X| < \varepsilon\big)\Big\} = 1 \\
&\iff \mathbb{V}\Big\{\bigcup_{\varepsilon > 0}\bigcap_{m=1}^{\infty}\bigcup_{n=m}^{\infty}\big(|X_n - X| \ge \varepsilon\big)\Big\} = 0 \iff \mathbb{V}\Big\{\bigcap_{m=1}^{\infty}\bigcup_{n=m}^{\infty}\big(|X_n - X| \ge \varepsilon\big)\Big\} = 0, \quad \forall \varepsilon > 0 \\
&\iff \mathbb{V}\big\{|X_n - X| \ge \varepsilon,\ \text{i.o.}\big\} = 0, \quad \forall \varepsilon > 0.
\end{aligned}
$$

Hence $\nu(X_n \to X) = 1 \iff \mathbb{V}(|X_n - X| > \varepsilon,\ \text{i.o.}) = 0$ for every $\varepsilon > 0$.

Definition of convergence in capacity: if, for every $\varepsilon > 0$,
$$\lim_{n \to \infty}\mathbb{V}\big(|X_n - X| \ge \varepsilon\big) = 0,$$
then the sequence $\{X_n, n \ge 1\}$ is said to converge to $X$ in capacity $\mathbb{V}$, denoted by $X_n \overset{\mathbb{V}}{\to} X$. Moreover, $X_n \overset{\mathbb{V}}{\to} X^{+}$ stands for $\lim_{n \to \infty}\mathbb{V}(X_n - X \ge \varepsilon) = 0$ for every $\varepsilon > 0$. Here $\mathbb{V}$ may be replaced by $\nu$.

Throughout the rest of this paper, $\log x = \ln(\max(x, e))$. The symbol $c$ denotes a positive constant independent of $n$, which may take different values at different places. “$\to$” denotes the limit as $n \to \infty$; $a_x \sim b_x$ means $\lim_{x \to \infty} a_x / b_x = 1$; and “$a_n \lesssim b_n$” means that there exists a constant $c > 0$ such that $a_n \le c\, b_n$ for sufficiently large $n$.

3 Main results

Theorem 3.1. Let $\{X_n, n \ge 1\}$ be a sequence of non-negative independent random variables satisfying (1.1) and (1.2), and assume that both $\widehat{\mathbb{E}}$ and $\mathbb{V}$ are countably sub-additive. Set

$$X_{nj} = X_j I(X_j \le b_n c_j) + b_n c_j I(X_j > b_n c_j), \qquad 1 \le j \le n. \tag{3.1}$$

Let $b_n = C_n \log C_n$. Then

$$\frac{1}{b_n}\sum_{j=1}^{n} c_j^{-1}\big(X_j - \widehat{\mathbb{E}} X_{nj}\big) \overset{\mathbb{V}}{\longrightarrow} 0^{+}. \tag{3.2}$$

Corollary 3.1. If $\{X_n, n \ge 1\}$ satisfies the conditions of Theorem 3.1 with $c_n = n$, $n \ge 1$, then

$$\frac{1}{\log n \log\log n}\sum_{j=1}^{n} j^{-1}\big(X_j - \widehat{\mathbb{E}} X_{nj}\big) \overset{\mathbb{V}}{\longrightarrow} 0^{+}.$$
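To make Corollary 3.1 concrete, the following is a minimal numerical sketch (not from the paper) of the classical special case in which the capacity $\mathbb{V}$ is an ordinary probability measure, so that (1.1) reads $P(X_j > x) = 1/(x + j)$ for $x > 0$. Then $X_j$ can be sampled by inverse transform as $\max(0, 1/U - j)$ with $U$ uniform on $(0, 1]$, and the truncated mean in (3.1) has the closed form $\widehat{\mathbb{E}} X_{nj} = E\min(X_j, b_n c_j) = \log(b_n + 1)$. The function name corollary_3_1_sketch is ours; since $b_n \sim \log n \log\log n$ grows very slowly, the normalized sums remain of moderate size for feasible $n$, and the sketch mainly shows how the quantities in the corollary fit together.

```python
import numpy as np


def corollary_3_1_sketch(n=100_000, reps=10, seed=0):
    """Monte Carlo sketch of Corollary 3.1 in the classical special case
    (an assumption of this illustration, not the paper's sub-linear setting):
    the capacity is an ordinary probability and P(X_j > x) = 1/(x + j), x > 0.
    With c_j = j we have C_n = sum_{j<=n} 1/j, b_n = C_n log C_n, and the
    truncated mean E[min(X_j, b_n * j)] = log(b_n + 1) in closed form."""
    rng = np.random.default_rng(seed)
    j = np.arange(1, n + 1, dtype=float)
    C_n = np.sum(1.0 / j)                 # C_n ~ log n
    b_n = C_n * np.log(C_n)               # b_n ~ log n * log log n
    trunc_mean = np.log(b_n + 1.0)        # E[min(X_j, b_n * c_j)] = log(b_n + 1)
    out = []
    for _ in range(reps):
        u = 1.0 - rng.random(n)           # uniform on (0, 1], avoids division by zero
        x = np.maximum(0.0, 1.0 / u - j)  # inverse transform: P(X_j > t) = 1/(t + j)
        out.append(np.sum((x - trunc_mean) / j) / b_n)
    return np.array(out)


if __name__ == "__main__":
    vals = corollary_3_1_sketch()
    print("normalized weighted sums:", np.round(vals, 3))
```

Occasional large positive values in the output reflect the heavy Pareto tail; the corollary asserts only that upward deviations beyond any fixed $\varepsilon$ become rare as $n \to \infty$.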

Theorem 3.2. Let $\{X_n, n \ge 1\}$ be a sequence of independent and identically distributed random variables, and assume that both $\widehat{\mathbb{E}}$ and $\mathbb{V}$ are countably sub-additive. If $X_1$ satisfies (1.3), then for $\beta > 0$ we have

$$\limsup_{n \to \infty}\frac{1}{\log^{\beta} n}\sum_{j=1}^{n}\frac{\log^{\beta-2} j}{j}\,X_j \le \frac{1}{\beta} \quad \text{a.s. } \mathbb{V} \tag{3.3}$$

and

$$\liminf_{n \to \infty}\frac{1}{\log^{\beta} n}\sum_{j=1}^{n}\frac{\log^{\beta-2} j}{j}\,X_j \ge -\frac{1}{\beta} \quad \text{a.s. } \mathbb{V}. \tag{3.4}$$

Note. Based on the Pareto-type distribution, we obtain the WLLN and the SLLN for weighted sums of sequences of independent random variables. Theorem 3.1 and Corollary 3.1 generalize the WLLN results of Yang et al. [15] and Adler [2], respectively, from the probability space to the sub-linear expectation space, yielding a WLLN for sequences of independent random variables in that setting. Theorem 3.2 generalizes the SLLN of Adler [1] from the probability space to the sub-linear expectation space, yielding an SLLN for sequences of independent and identically distributed random variables.

4 Proof of main result

Since $\widehat{\mathbb{E}}$ is defined only for $X \in \mathcal{H}$, we use the extension $\mathbb{E}^{*}(X) = \inf\{\widehat{\mathbb{E}}[Y] : X \le Y,\ Y \in \mathcal{H}\}$, which is defined for all random variables $X$.

Lemma 4.1 [16]. $\mathbb{E}^{*}(X)$ is a sub-linear expectation on the space of all random variables and has the following properties:

$$\mathbb{E}^{*}[X] = \widehat{\mathbb{E}}[X], \quad \forall X \in \mathcal{H}; \qquad \mathbb{V}(A) = \mathbb{E}^{*}[I_A], \quad \forall A \in \mathcal{F}.$$

Lemma 4.2 [16, 17]. For $k = 1, \ldots, n-1$, suppose that $X_k$ and $(X_{k+1}, \ldots, X_n)$ are independent of each other, $\widehat{\mathbb{E}} X_k \le 0$ for $k = 1, \ldots, n$, and $S_k = \sum_{i=1}^{k} X_i$. Then

$$\widehat{\mathbb{E}}\Big(\big|\max_{k \le n} S_k\big|^p\Big) \le 2^{2-p}\sum_{k=1}^{n}\widehat{\mathbb{E}} X_k^2, \qquad 1 \le p \le 2, \tag{4.1}$$

$$\mathbb{V}(S_n \ge x) \le \frac{c\sum_{k=1}^{n}\widehat{\mathbb{E}} X_k^2}{x^2}, \qquad \forall x > 0. \tag{4.2}$$

Lemma 4.3 [17] (Borel–Cantelli lemma). Let $\{A_n, n \ge 1\}$ be a sequence of events in $\mathcal{F}$ and let $V$ be a countably sub-additive capacity. If $\sum_{n=1}^{\infty} V(A_n) < \infty$, then

$$V(A_n\ \text{i.o.}) = 0, \qquad \text{where } \{A_n\ \text{i.o.}\} = \bigcap_{n=1}^{\infty}\bigcup_{i=n}^{\infty} A_i.$$

Lemma 4.4 [16]. If $\widehat{\mathbb{E}}$ is countably sub-additive, then $\widehat{\mathbb{E}}(|X|) \le C_{\mathbb{V}}(|X|)$ for all $X \in \mathcal{H}$.

Lemma 4.5 [16] (Hölder's inequality). Let $p, q > 1$ be two real numbers satisfying $p^{-1} + q^{-1} = 1$. Then for random variables $X, Y \in \mathcal{H}$, we have

$$\widehat{\mathbb{E}}(|XY|) \le \big(\widehat{\mathbb{E}}|X|^p\big)^{1/p}\big(\widehat{\mathbb{E}}|Y|^q\big)^{1/q}.$$

Proof of Theorem 3.1. The quantity $\frac{1}{b_n}\sum_{j=1}^{n} c_j^{-1}(X_j - \widehat{\mathbb{E}} X_{nj})$ can be decomposed as

$$\frac{1}{b_n}\sum_{j=1}^{n} c_j^{-1}\big(X_j - \widehat{\mathbb{E}} X_{nj}\big) = \frac{1}{b_n}\sum_{j=1}^{n} c_j^{-1}\big(X_j - X_{nj}\big) + \frac{1}{b_n}\sum_{j=1}^{n} c_j^{-1}\big(X_{nj} - \widehat{\mathbb{E}} X_{nj}\big) =: I_1 + I_2.$$

To prove that (3.2) holds, it is only necessary to prove that

$$I_1 = \frac{1}{b_n}\sum_{j=1}^{n} c_j^{-1}\big(X_j - X_{nj}\big) \overset{\mathbb{V}}{\longrightarrow} 0 \tag{4.3}$$

and

$$I_2 = \frac{1}{b_n}\sum_{j=1}^{n} c_j^{-1}\big(X_{nj} - \widehat{\mathbb{E}} X_{nj}\big) \overset{\mathbb{V}}{\longrightarrow} 0^{+}. \tag{4.4}$$

First, from (1.1) and (1.2), we have

$$\mathbb{V}\Big(\frac{1}{b_n}\Big|\sum_{j=1}^{n} c_j^{-1}(X_j - X_{nj})\Big| > \varepsilon\Big) \le \mathbb{V}\Big(\bigcup_{j=1}^{n}\big(X_j \ne X_{nj}\big)\Big) \le \sum_{j=1}^{n}\mathbb{V}(X_j > b_n c_j) = \sum_{j=1}^{n}\frac{1}{b_n c_j + c_j} = \frac{1}{b_n + 1}\sum_{j=1}^{n}\frac{1}{c_j} = \frac{C_n}{1 + C_n\log C_n} \to 0, \qquad \forall \varepsilon > 0.$$

Therefore (4.3) holds.

Next, we prove (4.4). Lemma 4.5 shows that $(\widehat{\mathbb{E}} X_j)^2 \le \widehat{\mathbb{E}} X_j^2$, so we have

$$\widehat{\mathbb{E}}\big(X_j - \widehat{\mathbb{E}} X_j\big)^2 \le 2\,\widehat{\mathbb{E}}\big(X_j^2 + (\widehat{\mathbb{E}} X_j)^2\big) \lesssim \widehat{\mathbb{E}} X_j^2. \tag{4.5}$$

From (1.1), we have

$$c_j^{-2}\int_0^{b_n^2 c_j^2}\mathbb{V}\big(X_j^2 > t\big)\,dt = c_j^{-2}\int_0^{b_n^2 c_j^2}\frac{1}{t^{1/2} + c_j}\,dt \ \ (\text{let } t^{1/2} = x) = c_j^{-2}\int_0^{b_n c_j}\frac{2x}{x + c_j}\,dx \lesssim c_j^{-1}\big(b_n - \ln(b_n + 1)\big). \tag{4.6}$$

Since $\widehat{\mathbb{E}}$ is countably sub-additive, it follows from Lemma 4.4 that $\widehat{\mathbb{E}}|X_j| \le C_{\mathbb{V}}(|X_j|)$. Then, by $X_j \in \mathcal{H}$ and Lemma 4.1, we have $\widehat{\mathbb{E}}(X_j) = \mathbb{E}^{*}(X_j)$. Since $\{c_j^{-1}(X_{nj} - \widehat{\mathbb{E}} X_{nj}),\ 1 \le j \le n\} \subset \mathcal{H}$ is an independent sequence with zero mean, we can apply (4.2) of Lemma 4.2 to $c_j^{-1}(X_{nj} - \widehat{\mathbb{E}} X_{nj})$. Then, by (1.2), (2.2), (3.1), (4.5) and (4.6), we have

$$
\begin{aligned}
\mathbb{V}\Big(\frac{1}{b_n}\sum_{j=1}^{n} c_j^{-1}\big(X_{nj} - \widehat{\mathbb{E}} X_{nj}\big) \ge \varepsilon\Big)
&\lesssim \frac{1}{b_n^2}\sum_{j=1}^{n} c_j^{-2}\,\widehat{\mathbb{E}}\big(X_{nj} - \widehat{\mathbb{E}} X_{nj}\big)^2
\lesssim \frac{1}{b_n^2}\sum_{j=1}^{n} c_j^{-2}\,\widehat{\mathbb{E}} X_{nj}^2
= \frac{1}{b_n^2}\sum_{j=1}^{n} c_j^{-2}\,\mathbb{E}^{*} X_{nj}^2 \\
&\le \frac{1}{b_n^2}\sum_{j=1}^{n} c_j^{-2}\Big(\mathbb{E}^{*} X_j^2 I(X_j \le b_n c_j) + b_n^2 c_j^2\,\mathbb{V}(X_j > b_n c_j)\Big) \\
&= \frac{1}{b_n^2}\sum_{j=1}^{n}\Big(c_j^{-2}\,\mathbb{E}^{*} X_j^2 I(X_j \le b_n c_j) + b_n^2\,\mathbb{V}(X_j > b_n c_j)\Big) \\
&\le \frac{1}{b_n^2}\sum_{j=1}^{n}\Big(c_j^{-2}\,C_{\mathbb{V}}\big(X_j^2 I(X_j \le b_n c_j)\big) + b_n^2\,\mathbb{V}(X_j > b_n c_j)\Big) \\
&\le \frac{1}{b_n^2}\sum_{j=1}^{n}\Big(c_j^{-2}\int_0^{b_n^2 c_j^2}\mathbb{V}(X_j^2 > t)\,dt + \frac{b_n^2}{b_n c_j + c_j}\Big) \\
&\lesssim \sum_{j=1}^{n}\frac{1}{b_n^2}\Big(\frac{1}{c_j}\big(b_n - \ln(b_n + 1)\big) + \frac{b_n^2}{c_j(b_n + 1)}\Big)
\lesssim \frac{C_n}{b_n} \to 0.
\end{aligned}
$$

Therefore (4.4) holds. Theorem 3.1 is proved.

Proof of Corollary 3.1. When $c_n = n$, $C_n = \sum_{j=1}^{n} 1/c_j = \sum_{j=1}^{n} 1/j \sim \log n$, so $c_n = n$ satisfies the conditions of Theorem 3.1. Since $b_n = C_n \log C_n$, we have $b_n \sim \log n \log\log n$, and hence

$$\frac{1}{\log n \log\log n}\sum_{j=1}^{n} j^{-1}\big(X_j - \widehat{\mathbb{E}} X_{nj}\big) \overset{\mathbb{V}}{\longrightarrow} 0^{+}.$$

Proof of Theorem 3.2. For $j \ge 1$, let $a_j = \frac{\log^{\beta-2} j}{j}$, $b_j = \log^{\beta} j$, and $c_j = j\log^{\alpha} j$ for some $1 < \alpha < 2$. Let

$$\widetilde{X}_j = -c_j I(X_j < -c_j) + X_j I(|X_j| \le c_j) + c_j I(X_j > c_j). \tag{4.7}$$

Note that
$$\frac{1}{\log^{\beta} n}\sum_{j=1}^{n}\frac{\log^{\beta-2} j}{j}\,X_j = \frac{1}{b_n}\sum_{j=1}^{n} a_j X_j = \frac{1}{b_n}\sum_{j=1}^{n} a_j\big(X_j - \widetilde{X}_j\big) + \frac{1}{b_n}\sum_{j=1}^{n} a_j\big(\widetilde{X}_j - \widehat{\mathbb{E}}\widetilde{X}_j\big) + \frac{1}{b_n}\sum_{j=1}^{n} a_j\,\widehat{\mathbb{E}}\widetilde{X}_j =: I_1 + I_2 + I_3. \tag{4.8}$$

According to (4.8), to prove that (3.3) holds, we only need to prove that

$$I_1 = \frac{1}{b_n}\sum_{j=1}^{n} a_j\big(X_j - \widetilde{X}_j\big) \to 0 \quad \text{a.s. } \mathbb{V}; \tag{4.9}$$

$$\limsup_{n \to \infty} I_2 = \limsup_{n \to \infty}\frac{1}{b_n}\sum_{j=1}^{n} a_j\big(\widetilde{X}_j - \widehat{\mathbb{E}}\widetilde{X}_j\big) \le 0 \quad \text{a.s. } \mathbb{V}; \tag{4.10}$$

$$\limsup_{n \to \infty} I_3 = \limsup_{n \to \infty}\frac{1}{b_n}\sum_{j=1}^{n} a_j\,\widehat{\mathbb{E}}\widetilde{X}_j \le \frac{1}{\beta}. \tag{4.11}$$

Take $0 < \mu < 1$ and let $g(x) \in C_{l,\mathrm{Lip}}(\mathbb{R})$ be an even function with $0 \le g(x) \le 1$ for all $x$, $g(x) = 1$ when $|x| \le \mu$, and $g(x) = 0$ when $|x| > 1$. Then the following holds:

$$I(|x| \le \mu) \le g(|x|) \le I(|x| \le 1), \qquad I(|x| > 1) \le 1 - g(|x|) \le I(|x| > \mu). \tag{4.12}$$

Therefore, we have

$$\sum_{j=1}^{\infty}\mathbb{V}\big(|X_j| > c_j\big) \le \sum_{j=1}^{\infty}\widehat{\mathbb{E}}\Big(1 - g\Big(\frac{|X_j|}{c_j}\Big)\Big) = \sum_{j=1}^{\infty}\widehat{\mathbb{E}}\Big(1 - g\Big(\frac{|X_1|}{c_j}\Big)\Big) \le \sum_{j=1}^{\infty}\mathbb{V}\big(|X_1| > \mu c_j\big).$$

From (1.3) and (4.12), we have

$$\sum_{j=1}^{\infty}\mathbb{V}\big(|X_j| > c_j\big) \le \sum_{j=1}^{\infty}\mathbb{V}\big(|X_1| > \mu c_j\big) \le \sum_{j=1}^{\infty}\mathbb{V}\big(X_1 > \mu c_j\big) + \sum_{j=1}^{\infty}\mathbb{V}\big(X_1 < -\mu c_j\big) \lesssim \sum_{j=1}^{\infty}\frac{1}{\mu c_j} \lesssim \sum_{j=1}^{\infty}\frac{1}{j\log^{\alpha} j} < \infty.$$

Combining (4.7), we have

$$\sum_{j=1}^{\infty}\mathbb{V}\big(X_j \ne \widetilde{X}_j\big) = \sum_{j=1}^{\infty}\mathbb{V}\big(|X_j| > c_j\big) < \infty.$$

Since V is countably sub-additive, by Lemma 4.3, we have

$$\mathbb{V}\big(X_j \ne \widetilde{X}_j,\ \text{i.o.}\big) = 0.$$

Further, according to $\lim_{n \to \infty}\frac{1}{b_n} = \lim_{n \to \infty}\frac{1}{\log^{\beta} n} = 0$, we have

$$|I_1| = \frac{1}{b_n}\Big|\sum_{j=1}^{n} a_j\big(X_j - \widetilde{X}_j\big)\Big| \le \frac{1}{b_n}\sum_{j=1}^{n}\big|a_j\big(X_j - \widetilde{X}_j\big)\big| \to 0 \quad \text{a.s. } \mathbb{V}.$$

Therefore (4.9) holds.

Next, we prove (4.10). First, for every $n$ there exists $k$ such that $2^{k-1} < n \le 2^{k}$. For sufficiently large $n$, we have $\frac{b_{2^k}}{b_{2^{k-1}}} = \frac{\log^{\beta} 2^{k}}{\log^{\beta} 2^{k-1}} \to 1$. Further, we have

$$I_2 = \frac{1}{b_n}\sum_{j=1}^{n} a_j\big(\widetilde{X}_j - \widehat{\mathbb{E}}\widetilde{X}_j\big) \le \frac{1}{b_{2^{k-1}}}\Big|\max_{2^{k-1} < n \le 2^{k}}\sum_{j=1}^{n} a_j\big(\widetilde{X}_j - \widehat{\mathbb{E}}\widetilde{X}_j\big)\Big| \le \Big|\max_{1 < n \le 2^{k}}\sum_{j=1}^{n} a_j\big(\widetilde{X}_j - \widehat{\mathbb{E}}\widetilde{X}_j\big)\Big|\frac{1}{b_{2^{k}}}\cdot\frac{b_{2^{k}}}{b_{2^{k-1}}} \lesssim \Big|\max_{1 < n \le 2^{k}}\sum_{j=1}^{n} a_j\big(\widetilde{X}_j - \widehat{\mathbb{E}}\widetilde{X}_j\big)\Big|\frac{1}{b_{2^{k}}}. \tag{4.13}$$

From (4.13), to prove (4.10) it is only necessary to prove that

$$\limsup_{n \to \infty}\frac{1}{b_{2^{n}}}\Big|\max_{1 < j \le 2^{n}}\sum_{i=1}^{j} a_i\big(\widetilde{X}_i - \widehat{\mathbb{E}}\widetilde{X}_i\big)\Big| = 0 \quad \text{a.s. } \mathbb{V}. \tag{4.14}$$

Next, since V is countably sub-additive, applying Lemma 4.3, it is sufficient to prove that

$$\sum_{n=1}^{\infty}\mathbb{V}\Big(\frac{1}{b_{2^{n}}}\Big|\max_{1 < j \le 2^{n}}\sum_{i=1}^{j} a_i\big(\widetilde{X}_i - \widehat{\mathbb{E}}\widetilde{X}_i\big)\Big| > \varepsilon\Big) < \infty, \qquad \forall \varepsilon > 0. \tag{4.15}$$

Once (4.15) is verified, (4.14) follows.

First, since $\widehat{\mathbb{E}}$ is countably sub-additive, by Lemma 4.4 we have $\widehat{\mathbb{E}}|X_j| \le C_{\mathbb{V}}(|X_j|)$. From $X_j \in \mathcal{H}$ and Lemma 4.1, we have $\widehat{\mathbb{E}}(X_j) = \mathbb{E}^{*}(X_j)$. Further, the following can be derived from (1.3), (2.2) and (4.12):

$$
\begin{aligned}
\widehat{\mathbb{E}}\widetilde{X}_j^2 &= \widehat{\mathbb{E}}\big(c_j^2 I(|X_j| > c_j) + X_j^2 I(|X_j| \le c_j)\big) = \mathbb{E}^{*}\big(c_j^2 I(|X_j| > c_j) + X_j^2 I(|X_j| \le c_j)\big) \\
&\le c_j^2\,\mathbb{V}(|X_j| > c_j) + \mathbb{E}^{*} X_j^2 I(|X_j| \le c_j)
\le c_j^2\,\mathbb{V}(|X_1| > \mu c_j) + \mathbb{E}^{*} X_1^2 I(|X_1| \le c_j) \\
&\le c_j^2\,\mathbb{V}(|X_1| > \mu c_j) + C_{\mathbb{V}}\big(X_1^2 I(|X_1| \le c_j)\big)
\lesssim c_j + \int_{1}^{c_j^2}\mathbb{V}\big(|X_1| > t^{1/2}\big)\,dt
\lesssim c_j + \int_{1}^{c_j^2}\frac{1}{t^{1/2}}\,dt \lesssim c_j.
\end{aligned}
\tag{4.16}
$$

Therefore, from (4.16), we have

$$\sum_{j=1}^{2^n} a_j^2\,\widehat{\mathbb{E}}\widetilde{X}_j^2 \lesssim \sum_{j=1}^{2^n} a_j^2\,c_j = \sum_{j=1}^{2^n}\frac{\log^{2\beta-4} j}{j^2}\,j\log^{\alpha} j = \sum_{j=1}^{2^n}\frac{\log^{2\beta+\alpha-4} j}{j} \lesssim \log^{2\beta+\alpha-3} 2^{n} \lesssim n^{2\beta+\alpha-3}. \tag{4.17}$$

From Definition 2.5, $\{\widetilde{X}_j - \widehat{\mathbb{E}}\widetilde{X}_j,\ j \ge 1\} \subset \mathcal{H}$ and $\{a_j(\widetilde{X}_j - \widehat{\mathbb{E}}\widetilde{X}_j),\ j \ge 1\} \subset \mathcal{H}$ are also sequences of independent random variables with zero mean. The following can be deduced from (2.1), (4.1), (4.5) and (4.17):

$$
\begin{aligned}
\mathbb{V}\Big(\frac{1}{b_{2^n}}\Big|\max_{1 < j \le 2^n}\sum_{i=1}^{j} a_i\big(\widetilde{X}_i - \widehat{\mathbb{E}}\widetilde{X}_i\big)\Big| > \varepsilon\Big)
&\le \frac{1}{\varepsilon^2 b_{2^n}^2}\,\widehat{\mathbb{E}}\Big|\max_{1 < j \le 2^n}\sum_{i=1}^{j} a_i\big(\widetilde{X}_i - \widehat{\mathbb{E}}\widetilde{X}_i\big)\Big|^2 \\
&\lesssim \frac{1}{b_{2^n}^2}\sum_{j=1}^{2^n}\widehat{\mathbb{E}}\big(a_j(\widetilde{X}_j - \widehat{\mathbb{E}}\widetilde{X}_j)\big)^2
\lesssim \frac{1}{\log^{2\beta} 2^n}\sum_{j=1}^{2^n} a_j^2\,\widehat{\mathbb{E}}\widetilde{X}_j^2 \\
&\lesssim \frac{n^{2\beta+\alpha-3}}{n^{2\beta}} = \frac{1}{n^{3-\alpha}}, \qquad \forall \varepsilon > 0.
\end{aligned}
\tag{4.18}
$$

From (4.18), (4.15) holds when $3 - \alpha > 1$. Since $1 < \alpha < 2$, (4.15) holds. Then, according to Lemma 4.3, (4.14) holds. Further, (4.10) also holds according to (4.13).

Next, we prove (4.11), i.e., we prove that

$$\limsup_{n \to \infty} I_3 = \limsup_{n \to \infty}\frac{1}{b_n}\sum_{j=1}^{n} a_j\,\widehat{\mathbb{E}}\widetilde{X}_j = \limsup_{n \to \infty}\frac{1}{b_n}\sum_{j=1}^{n} a_j\Big(-c_j\,\mathbb{V}(X_j < -c_j) + \mathbb{E}^{*} X_j I(|X_j| \le c_j) + c_j\,\mathbb{V}(X_j > c_j)\Big) \le \frac{1}{\beta}.$$

According to (1.3) and (4.12), we have

$$
\begin{aligned}
H &:= \frac{1}{b_n}\Big|\sum_{j=1}^{n} a_j\big[-c_j\,\mathbb{V}(X_j < -c_j) + c_j\,\mathbb{V}(X_j > c_j)\big]\Big| \lesssim \frac{1}{b_n}\sum_{j=1}^{n}\log^{\alpha+\beta-2} j\,\mathbb{V}\big(|X_j| > c_j\big) \\
&\le \frac{1}{b_n}\sum_{j=1}^{n}\log^{\alpha+\beta-2} j\,\mathbb{V}\big(|X_1| > \mu c_j\big) \lesssim \frac{1}{\log^{\beta} n}\sum_{j=1}^{n}\frac{1}{j\log^{2-\beta} j} \lesssim
\begin{cases}
\dfrac{\log\log n}{\log n} \to 0, & \beta = 1,\\[2mm]
\dfrac{1}{\log^{\min(\beta,1)} n} \to 0, & \beta \ne 1.
\end{cases}
\end{aligned}
$$

Therefore, it can be proved that

$$H \to 0. \tag{4.19}$$

Furthermore, since $\widehat{\mathbb{E}}$ is countably sub-additive, we have $\widehat{\mathbb{E}}|X_j| \le C_{\mathbb{V}}(|X_j|)$ by Lemma 4.4. Then, by $X_j \in \mathcal{H}$ and Lemma 4.1, we have $\widehat{\mathbb{E}}(X_j) = \mathbb{E}^{*}(X_j)$. Combining this with (1.3) and (2.2), we have

$$\big|\mathbb{E}^{*} X_n I(|X_n| \le c_n)\big| \le \mathbb{E}^{*}|X_n| I(|X_n| \le c_n) = \mathbb{E}^{*}|X_1| I(|X_1| \le c_n) \le C_{\mathbb{V}}\big(|X_1| I(|X_1| \le c_n)\big) \le \int_0^{c_n}\mathbb{V}\big(|X_1| > t\big)\,dt \lesssim 1 + \int_1^{c_n}\frac{1}{t}\,dt = 1 + \log c_n \lesssim \log n.$$

Thus,

$$\sum_{j=1}^{n}\frac{a_j}{b_n}\big|\mathbb{E}^{*} X_j I(|X_j| \le c_j)\big| \lesssim \sum_{j=1}^{n}\frac{a_j}{b_n}\log j = \frac{1}{\log^{\beta} n}\sum_{j=1}^{n}\frac{\log^{\beta-1} j}{j} \to \frac{1}{\beta}. \tag{4.20}$$

Combining (4.19) and (4.20), we have

$$\limsup_{n \to \infty} I_3 \le \frac{1}{\beta}.$$

In summary, (4.9)‒(4.11) all hold, so (3.3) holds.

Obviously, $\{-X_j;\ j \ge 1\}$ also satisfies the conditions of Theorem 3.2. Replacing $\{X_j;\ j \ge 1\}$ in (3.3) with $\{-X_j;\ j \ge 1\}$, we obtain (3.4). Theorem 3.2 is proved.

References

[1]

Adler A. Laws of large numbers for two tailed Pareto random variables. Probab Math Statist 2008; 28(1): 121–128

[2]

Adler A. An exact weak law of large numbers. Bull Inst Math Acad Sin (N S) 2012; 7(3): 417–422

[3]

Chen J. Limit theory for sub-linear expectations and its applications. Ph.D. Thesis, Jinan: Shandong University, 2014 (in Chinese)

[4]

Chen Z J. Strong laws of large numbers for sub-linear expectations. Sci China Math 2016; 59(5): 945–954

[5]

Chen Z J, Wu P Y, Li B M. A strong law of large numbers for non-additive probabilities. Internat J Approx Reason 2013; 54(3): 365–377

[6]

Hu C. A strong law of large numbers for sub-linear expectation under a general moment condition. Statist Probab Lett 2016; 119: 248–258

[7]

Hu C. Weak and strong laws of large numbers for sub-linear expectation. Comm Statist Theory Methods 2020; 49(2): 430–440

[8]

Ma X C, Wu Q Y. On some conditions for strong law of large numbers for weighted sums of END random variables under sub-linear expectations. Discrete Dyn Nat Soc, 2019: 7945431 (8 pp)

[9]

Marinacci M. Limit laws for non-additive probabilities and their frequentist interpretation. J Econom Theory 1999; 84(2): 145–195

[10]

Peng S G. G-expectation, G-Brownian motion and related stochastic calculus of Itô type. In: Stoch Anal Appl, Abel Symp, Vol 2, Berlin: Springer, 2007: 541–567

[11]

Peng S G. Multi-dimensional G-Brownian motion and related stochastic calculus under G-expectation. Stochastic Process Appl 2008; 118(12): 2223–2253

[12]

Peng S G. Survey on normal distributions, central limit theorem, Brownian motion and the related stochastic calculus under sub-linear expectations. Sci China Ser A 2009; 52(7): 1391–1411

[13]

Peng S G. Law of large numbers and central limit theorem under nonlinear expectations. Probab Uncertain Quant Risk, 2019, 4: 4 (8 pp)

[14]

Wu Q Y, Jiang Y Y. Strong law of large numbers and Chover’s law of the iterated logarithm under sub-linear expectations. J Math Anal Appl 2018; 460(1): 252–270

[15]

Yang W Z, Yang L, Wei D, Hu S H. The laws of large numbers for Pareto-type random variables with infinite means. Comm Statist Theory Methods 2019; 48(12): 3044–3054

[16]

Zhang L X. Rosenthal’s inequalities for independent and negatively dependent random variables under sub-linear expectations with applications. Sci China Math 2016; 59(4): 751–768

[17]

Zhang L X. Strong limit theorems for extended independent random variables and extended negatively dependent random variables under sub-linear expectations. Acta Math Sci 2022; 42: 467–490

[18]

Zhang L X. Exponential inequalities under the sub-linear expectations with applications to laws of the iterated logarithm. Sci China Math 2016; 59(12): 2503–2526

RIGHTS & PERMISSIONS

Higher Education Press 2022
