
Improving YOLOv11: Design and Implementation of an Attention-Based EVA Module Enhanced for Dense Small Objects

With the rapid development of computer vision, balancing real-time performance and detection accuracy has become a central topic in object detection research. The YOLO (You Only Look Once) family has long led the real-time detection field thanks to its efficiency and accuracy. However, although the latest versions perform well in general scenes, they still face significant challenges in dense small-object detection, such as vehicles in remote-sensing imagery, cells in medical images, or pedestrians in autonomous-driving scenes. The core problems are: (1) the local features of dense small objects are easily swamped by background clutter, blurring their feature representations; (2) the model pays insufficient attention to key regions and struggles to distinguish the relationships between high-density objects; (3) traditional feature-pyramid architectures fuse multi-scale information inefficiently and capture little small-object detail. To address this, this work proposes an improved YOLOv11 built around an attention mechanism and an EVA (Enhanced Vision Attention) module tailored to dense small objects. On the attention side, the improved model uses a hybrid spatial-channel attention mechanism to dynamically strengthen the spatial saliency of target regions and the importance weights across channels, compensating for the weak attention that small objects receive because of their tiny area. For dense small-object scenes, the EVA module combines: (1) a multi-scale dense feature-enhancement path that uses lightweight convolutions and cross-layer feature interaction (CFA) to sharpen small-object boundary detail; (2) adaptive feature-pyramid reassembly, which uses a Transformer mechanism to strengthen long-range dependencies and alleviate occlusion and confusion between densely packed objects; (3) a dynamic-weight loss function that adaptively re-weights the loss to counter the class imbalance caused by under-sampling of dense small objects during training, improving robustness on small objects.

Paper: https://arxiv.org/pdf/2210.02093
Code: https://github.com/QY1994-0919/CFPNet

1. Overall Architecture

EVAblock is a new module in YOLO11 designed specifically for dense and small objects. Its core idea is to improve the model's adaptability to complex scenes through efficient multi-scale feature fusion and a local-global attention mechanism.

1.1 Multi-Scale Feature Fusion

EVAblock extracts multi-scale features through a pyramid structure and uses depthwise separable convolutions to reduce computational cost. The steps are as follows, with an illustrative sketch after the list:

  1. Feature grouping: split the input feature map into several sub-feature maps, each corresponding to a different receptive field.
  2. Multi-branch processing: each sub-feature map is processed by an independent convolution path, capturing object information at a different scale.
  3. Feature fusion: the branch outputs are fused by weighted summation into a unified multi-scale feature representation.
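The article gives no standalone reference implementation of this fusion path, so the following is only a minimal illustrative sketch of the three steps above; the class name MultiScaleFusion, the kernel sizes, and all other parameters are assumptions made for illustration, not code from the repository:

import torch
import torch.nn as nn


class MultiScaleFusion(nn.Module):
    """Illustrative sketch: split the channels into groups, run each group through a
    depthwise-separable conv with a different kernel size, then fuse with learned weights."""

    def __init__(self, channels, kernel_sizes=(3, 5, 7)):
        super().__init__()
        assert channels % len(kernel_sizes) == 0, "channels must split evenly into groups"
        self.group_ch = channels // len(kernel_sizes)
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(self.group_ch, self.group_ch, k, padding=k // 2, groups=self.group_ch),  # depthwise
                nn.Conv2d(self.group_ch, self.group_ch, 1),                                        # pointwise
                nn.BatchNorm2d(self.group_ch),
                nn.SiLU(),
            )
            for k in kernel_sizes
        )
        self.fuse_weights = nn.Parameter(torch.ones(len(kernel_sizes)))  # learnable fusion weights
        self.proj = nn.Conv2d(self.group_ch, channels, 1)  # restore the original channel count

    def forward(self, x):
        groups = torch.chunk(x, len(self.branches), dim=1)                    # 1. feature grouping
        outs = [branch(g) for branch, g in zip(self.branches, groups)]        # 2. multi-branch processing
        w = torch.softmax(self.fuse_weights, dim=0)
        fused = sum(wi * oi for wi, oi in zip(w, outs))                        # 3. weighted fusion
        return self.proj(fused)

For example, MultiScaleFusion(192) maps a [1, 192, 80, 80] tensor to another [1, 192, 80, 80] tensor, with each third of the channels having passed through a different receptive field.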

1.2 Local-Global Attention Mechanism

To further strengthen feature representation, EVAblock introduces a Local-Global Attention Mechanism (LGAM). The mechanism has two parts:

  1. Local attention: a sliding-window operation captures contextual information within local regions. The local attention branch preserves fine object detail and is particularly well suited to small-object detection.
  2. Global attention: global pooling produces a descriptor of the whole feature map. The global attention branch models long-range dependencies between objects and is better suited to dense scenes.

Through an adaptive weight-allocation strategy, LGAM dynamically balances the contributions of local and global attention, striking a good trade-off between accuracy and efficiency.
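As with the fusion path, the following is only an illustrative sketch of LGAM as described above; the class name LocalGlobalAttention, the 7x7 window, and the squeeze ratio are assumptions. The local branch gates features with window-level context, the global branch with a pooled channel descriptor, and a learned scalar balances the two:

import torch
import torch.nn as nn


class LocalGlobalAttention(nn.Module):
    """Illustrative LGAM sketch: a window-based local branch preserves small-object detail,
    a pooled global branch captures long-range context, and a learned gate balances them."""

    def __init__(self, channels, window=7):
        super().__init__()
        # Local attention: a depthwise conv over a sliding window gathers local context.
        self.local = nn.Sequential(
            nn.Conv2d(channels, channels, window, padding=window // 2, groups=channels),
            nn.Sigmoid(),
        )
        # Global attention: squeeze-and-excitation style descriptor from global pooling.
        self.global_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1), nn.SiLU(),
            nn.Conv2d(channels // 4, channels, 1), nn.Sigmoid(),
        )
        # Adaptive weighting between the local and global responses.
        self.alpha = nn.Parameter(torch.tensor(0.5))

    def forward(self, x):
        local = x * self.local(x)         # emphasize fine local structure
        glob = x * self.global_att(x)     # emphasize globally important channels
        a = torch.sigmoid(self.alpha)     # keep the balance factor in (0, 1)
        return a * local + (1 - a) * glob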

2. Adding EVC to YOLO11

2.1 EVC Code

Step 1: Paste the following code into /ultralytics/ultralytics/nn/modules/block.py.

# Additional imports needed near the top of block.py (torch, torch.nn as nn,
# torch.nn.functional as F and Conv are already imported there in recent Ultralytics versions):
from functools import partial

from timm.models.layers import DropPath, trunc_normal_  # requires the timm package


class Encoding(nn.Module):
    def __init__(self, in_channels, num_codes):
        super(Encoding, self).__init__()
        # init codewords and smoothing factor
        self.in_channels, self.num_codes = in_channels, num_codes
        num_codes = 64
        std = 1. / ((num_codes * in_channels) ** 0.5)
        # [num_codes, channels]
        self.codewords = nn.Parameter(
            torch.empty(num_codes, in_channels, dtype=torch.float).uniform_(-std, std), requires_grad=True)
        # [num_codes]
        self.scale = nn.Parameter(torch.empty(num_codes, dtype=torch.float).uniform_(-1, 0), requires_grad=True)

    @staticmethod
    def scaled_l2(x, codewords, scale):
        num_codes, in_channels = codewords.size()
        b = x.size(0)
        expanded_x = x.unsqueeze(2).expand((b, x.size(1), num_codes, in_channels))
        # --- reshape the codebook (num_codes, c1)
        reshaped_codewords = codewords.view((1, 1, num_codes, in_channels))
        # reshape scale from (1, num_codes) so it broadcasts over (batch, N, num_codes)
        reshaped_scale = scale.view((1, 1, num_codes))  # N, num_codes
        # --- compute r_ik = z_i - d_k  # b, N, num_codes
        scaled_l2_norm = reshaped_scale * (expanded_x - reshaped_codewords).pow(2).sum(dim=3)
        return scaled_l2_norm

    @staticmethod
    def aggregate(assignment_weights, x, codewords):
        num_codes, in_channels = codewords.size()
        # --- reshape the codebook
        reshaped_codewords = codewords.view((1, 1, num_codes, in_channels))
        b = x.size(0)
        # --- reshape the feature vectors x: b, c1, N
        expanded_x = x.unsqueeze(2).expand((b, x.size(1), num_codes, in_channels))
        # reshape the assignment weights to b, N, num_codes, 1
        assignment_weights = assignment_weights.unsqueeze(3)  # b, N, num_codes,
        # --- compute e_k; this must happen after the assignment weights are ready
        encoded_feat = (assignment_weights * (expanded_x - reshaped_codewords)).sum(1)
        return encoded_feat

    def forward(self, x):
        assert x.dim() == 4 and x.size(1) == self.in_channels
        b, in_channels, w, h = x.size()
        # [batch_size, height x width, channels]
        x = x.view(b, self.in_channels, -1).transpose(1, 2).contiguous()
        # assignment_weights: [batch_size, channels, num_codes]
        assignment_weights = torch.softmax(self.scaled_l2(x, self.codewords, self.scale), dim=2)
        # aggregate
        encoded_feat = self.aggregate(assignment_weights, x, self.codewords)
        return encoded_feat


class Mlp(nn.Module):
    """Implementation of MLP with 1*1 convolutions. Input: tensor with shape [B, C, H, W]"""

    def __init__(self, in_features, hidden_features=None,
                 out_features=None, act_layer=nn.GELU, drop=0.):
        super().__init__()
        out_features = out_features or in_features
        hidden_features = hidden_features or in_features
        self.fc1 = nn.Conv2d(in_features, hidden_features, 1)
        self.act = act_layer()
        self.fc2 = nn.Conv2d(hidden_features, out_features, 1)
        self.drop = nn.Dropout(drop)
        self.apply(self._init_weights)

    def _init_weights(self, m):
        if isinstance(m, nn.Conv2d):
            trunc_normal_(m.weight, std=.02)
            if m.bias is not None:
                nn.init.constant_(m.bias, 0)

    def forward(self, x):
        x = self.fc1(x)
        x = self.act(x)
        x = self.drop(x)
        x = self.fc2(x)
        x = self.drop(x)
        return x


#  1*1 3*3 1*1
class ConvBlock(nn.Module):
    def __init__(self, in_channels, out_channels, stride=1, res_conv=False, act_layer=nn.SiLU, groups=1,
                 norm_layer=partial(nn.BatchNorm2d, eps=1e-6)):
        super(ConvBlock, self).__init__()
        self.in_channels = in_channels
        expansion = 4
        c = out_channels // expansion
        self.conv1 = Conv(in_channels, c, act=nn.SiLU())
        self.conv2 = Conv(c, c, k=3, s=stride, g=groups, act=nn.SiLU())
        self.conv3 = Conv(c, out_channels, 1, act=False)
        self.act3 = act_layer(inplace=True)
        if res_conv:
            self.residual_conv = nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0, bias=False)
            self.residual_bn = norm_layer(out_channels)
        self.res_conv = res_conv

    def zero_init_last_bn(self):
        nn.init.zeros_(self.bn3.weight)

    def forward(self, x, return_x_2=True):
        residual = x
        x = self.conv1(x)
        x2 = self.conv2(x)  # if x_t_r is None else self.conv2(x + x_t_r)
        x = self.conv3(x2)
        if self.res_conv:
            residual = self.residual_conv(residual)
            residual = self.residual_bn(residual)
        x += residual
        x = self.act3(x)
        if return_x_2:
            return x, x2
        else:
            return x


class Mean(nn.Module):
    def __init__(self, dim, keep_dim=False):
        super(Mean, self).__init__()
        self.dim = dim
        self.keep_dim = keep_dim

    def forward(self, input):
        return input.mean(self.dim, self.keep_dim)


class LVCBlock(nn.Module):
    def __init__(self, in_channels, out_channels, num_codes, channel_ratio=0.25, base_channel=64):
        super(LVCBlock, self).__init__()
        self.out_channels = out_channels
        self.num_codes = num_codes
        num_codes = 64

        self.conv_1 = ConvBlock(in_channels=in_channels, out_channels=in_channels, res_conv=True, stride=1)

        self.LVC = nn.Sequential(
            Conv(in_channels, in_channels, 1, act=nn.SiLU()),
            Encoding(in_channels=in_channels, num_codes=num_codes),
            nn.BatchNorm1d(num_codes),
            nn.SiLU(inplace=True),
            Mean(dim=1))
        self.fc = nn.Sequential(nn.Linear(in_channels, in_channels), nn.Sigmoid())

    def forward(self, x):
        x = self.conv_1(x, return_x_2=False)
        en = self.LVC(x)
        gam = self.fc(en)
        b, in_channels, _, _ = x.size()
        y = gam.view(b, in_channels, 1, 1)
        x = F.relu_(x + x * y)
        return x


class GroupNorm(nn.GroupNorm):
    """Group Normalization with 1 group. Input: tensor in shape [B, C, H, W]"""

    def __init__(self, num_channels, **kwargs):
        super().__init__(1, num_channels, **kwargs)


class DWConv_LMLP(nn.Module):
    """Depthwise Conv + Conv"""

    def __init__(self, in_channels, out_channels, ksize, stride=1, act="silu"):
        super().__init__()
        self.dconv = Conv(in_channels, in_channels, k=ksize, s=stride, g=in_channels)
        self.pconv = Conv(in_channels, out_channels, k=1, s=1, g=1)

    def forward(self, x):
        x = self.dconv(x)
        return self.pconv(x)


# LightMLPBlock
class LightMLPBlock(nn.Module):
    def __init__(self, in_channels, out_channels, ksize=1, stride=1, act="silu",
                 mlp_ratio=4., drop=0., act_layer=nn.GELU,
                 use_layer_scale=True, layer_scale_init_value=1e-5, drop_path=0.,
                 norm_layer=GroupNorm):  # act_layer=nn.GELU,
        super().__init__()
        self.dw = DWConv_LMLP(in_channels, out_channels, ksize=1, stride=1, act="silu")
        self.linear = nn.Linear(out_channels, out_channels)  # learnable position embedding
        self.out_channels = out_channels
        self.norm1 = norm_layer(in_channels)
        self.norm2 = norm_layer(in_channels)
        mlp_hidden_dim = int(in_channels * mlp_ratio)
        self.mlp = Mlp(in_features=in_channels, hidden_features=mlp_hidden_dim, act_layer=nn.GELU, drop=drop)
        self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
        self.use_layer_scale = use_layer_scale
        if use_layer_scale:
            self.layer_scale_1 = nn.Parameter(layer_scale_init_value * torch.ones(out_channels), requires_grad=True)
            self.layer_scale_2 = nn.Parameter(layer_scale_init_value * torch.ones(out_channels), requires_grad=True)

    def forward(self, x):
        if self.use_layer_scale:
            x = x + self.drop_path(self.layer_scale_1.unsqueeze(-1).unsqueeze(-1) * self.dw(self.norm1(x)))
            x = x + self.drop_path(self.layer_scale_2.unsqueeze(-1).unsqueeze(-1) * self.mlp(self.norm2(x)))
        else:
            x = x + self.drop_path(self.dw(self.norm1(x)))
            x = x + self.drop_path(self.mlp(self.norm2(x)))
        return x


# EVCBlock
class EVCBlock(nn.Module):
    def __init__(self, in_channels, out_channels, channel_ratio=4, base_channel=16):
        super().__init__()
        expansion = 2
        ch = out_channels * expansion
        # Stem stage: get the feature maps by conv block (copied from resnet.py), applied before the conformer framework
        self.conv1 = Conv(in_channels, in_channels, k=3, act=nn.SiLU())
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=1, padding=1)  # 1 / 4 [56, 56]
        # LVC
        self.lvc = LVCBlock(in_channels=in_channels, out_channels=out_channels, num_codes=64)  # c1 value not fixed yet
        # LightMLPBlock
        self.l_MLP = LightMLPBlock(in_channels, out_channels, ksize=3, stride=1, act="silu", act_layer=nn.GELU,
                                   mlp_ratio=4., drop=0., use_layer_scale=True, layer_scale_init_value=1e-5,
                                   drop_path=0., norm_layer=GroupNorm)
        self.cnv1 = nn.Conv2d(ch, out_channels, kernel_size=1, stride=1, padding=0)

    def forward(self, x):
        x1 = self.maxpool(self.conv1(x))
        # LVCBlock
        x_lvc = self.lvc(x1)
        # LightMLPBlock
        x_lmlp = self.l_MLP(x1)
        # concat
        x = torch.cat((x_lvc, x_lmlp), dim=1)
        x = self.cnv1(x)
        return x
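Before wiring the block into a yaml, it is worth sanity-checking it in isolation. A minimal sketch, assuming the code above (and the extra imports it mentions) has been saved into block.py; the channel count and input size are arbitrary choices:

import torch
from ultralytics.nn.modules.block import EVCBlock

# EVCBlock keeps the spatial resolution and maps in_channels to out_channels.
block = EVCBlock(256, 256)
x = torch.randn(1, 256, 40, 40)  # [batch, channels, height, width]
print(block(x).shape)            # expected: torch.Size([1, 256, 40, 40])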

2.2 Modify the __init__.py File

Step 2: Edit the __init__.py file in the modules folder. First import the function,


then declare the function in __all__ below; a text sketch of both edits follows.
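Since the screenshots are not reproduced here, in text form the two edits to ultralytics/nn/modules/__init__.py look roughly as follows; the surrounding import list and __all__ entries differ between Ultralytics versions, so treat them as placeholders:

# ultralytics/nn/modules/__init__.py

# 1. Import the new block alongside the existing ones
#    (in practice, append EVCBlock to the existing "from .block import (...)" list):
from .block import EVCBlock

# 2. Expose it in __all__ so that wildcard imports and the yaml parser can find it:
__all__ = (
    # ... existing entries ...
    "EVCBlock",
)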

2.3 Add the YAML File

Step 3: Create a new file named yolo11_EVC.yaml under /ultralytics/ultralytics/cfg/models/11 and paste the following content.

  • Object detection
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2, [256, False, 0.25]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2, [512, False, 0.25]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 2, C2PSA, [1024]] # 10

# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [-1, 1, EVCBlock, [1024]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2, [512, False]] # 14
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2, [256, False]] # 17 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 14], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2, [512, False]] # 20 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2, [1024, True]] # 23 (P5/32-large)
  - [[17, 20, 23], 1, Detect, [nc]] # Detect(P3, P4, P5)
  • Instance segmentation
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 instance segmentation model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/segment

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2, [256, False, 0.25]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2, [512, False, 0.25]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 2, C2PSA, [1024]] # 10

# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [-1, 1, EVCBlock, [1024]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2, [512, False]] # 14
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2, [256, False]] # 17 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 14], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2, [512, False]] # 20 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2, [1024, True]] # 23 (P5/32-large)
  - [[17, 20, 23], 1, Segment, [nc, 32, 256]] # Segment(P3, P4, P5)
  • Oriented object detection (OBB)
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 Oriented Bounding Boxes (OBB) model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/obb

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2, [256, False, 0.25]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2, [512, False, 0.25]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 2, C2PSA, [1024]] # 10

# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [-1, 1, EVCBlock, [1024]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2, [512, False]] # 14
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2, [256, False]] # 17 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 14], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2, [512, False]] # 20 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2, [1024, True]] # 23 (P5/32-large)
  - [[17, 20, 23], 1, OBB, [nc, 1]] # Detect(P3, P4, P5)

This article only adds the module on top of the base yolo11 configuration. To target a specific variant (yolo11n/s/m/l/x), simply specify the corresponding depth_multiple, width_multiple, and max_channel values:

# YOLO11n
depth_multiple: 0.50  # model depth multiple
width_multiple: 0.25  # layer channel multiple
max_channel: 1024

# YOLO11s
depth_multiple: 0.50  # model depth multiple
width_multiple: 0.50  # layer channel multiple
max_channel: 1024

# YOLO11m
depth_multiple: 0.50  # model depth multiple
width_multiple: 1.00  # layer channel multiple
max_channel: 512

# YOLO11l
depth_multiple: 1.00  # model depth multiple
width_multiple: 1.00  # layer channel multiple
max_channel: 512

# YOLO11x
depth_multiple: 1.00  # model depth multiple
width_multiple: 1.50  # layer channel multiple
max_channel: 512
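With the scale-aware yolo11_EVC.yaml above, these multipliers normally do not need to be edited by hand: recent Ultralytics releases read the scale letter from the model name and apply the matching entry of the scales: section. A minimal sketch, assuming the single yolo11_EVC.yaml created in step 3 (the n/l names below are derived names, not extra files you must create):

from ultralytics import YOLO

# Ultralytics strips the scale letter to locate yolo11_EVC.yaml, then applies the matching
# [depth, width, max_channels] entry from its `scales:` section.
model_n = YOLO('ultralytics/cfg/models/11/yolo11n_EVC.yaml')  # nano:  depth 0.50, width 0.25
model_l = YOLO('ultralytics/cfg/models/11/yolo11l_EVC.yaml')  # large: depth 1.00, width 1.00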

2.4 Register the Module in tasks.py

Step 4: Register the module in the parse_model function of tasks.py.

First import the function in tasks.py:
Then find the parse_model function in tasks.py and add a branch for EVCBlock (a sketch of both edits follows).

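As the screenshots are not reproduced here, a text sketch of the two edits follows. The exact placement of the branch inside parse_model differs between Ultralytics versions, so treat this as an assumption about where the code goes, not as the upstream implementation:

# ultralytics/nn/tasks.py

# 1. Make EVCBlock visible to parse_model, e.g. by adding it to the existing module imports:
from ultralytics.nn.modules import EVCBlock

# 2. Inside parse_model(), in the loop over the yaml layer list, add a branch next to the
#    other elif branches so that input/output channels are resolved for EVCBlock:
#
#        elif m is EVCBlock:
#            c1 = ch[f]                 # input channels come from the source layer
#            c2 = args[0]               # requested output channels from the yaml
#            if c2 != nc:
#                c2 = make_divisible(min(c2, max_channels) * width, 8)
#            args = [c1, c2, *args[1:]]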

2.5 Train with train.py

Step 5: Create a new train.py and point the model argument at the path of yolo11_EVC.yaml.

import warnings
warnings.filterwarnings('ignore')
from ultralytics import YOLO

if __name__ == '__main__':
    model = YOLO(model=r'ultralytics/cfg/models/11/yolo11_EVC.yaml')
    # model.load('yolo11n.pt')  # load pretrained weights; not recommended when testing an improvement or
    #                           # running comparison experiments, since pretraining gives no clear overall gain here
    model.train(data=r'/root/lanyun-tmp/yolov8-crops-app/dataset/data.yaml',
                imgsz=640,
                epochs=300,
                batch=32,
                workers=16,
                device='0',
                optimizer='SGD',
                close_mosaic=10,
                resume=False,
                project='run/train1',
                name='exp',
                single_cls=False,
                cache=False,
                )
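Before launching a 300-epoch run, it can be worth confirming that the yaml parses and that the EVCBlock was registered correctly; a small sketch:

from ultralytics import YOLO

model = YOLO('ultralytics/cfg/models/11/yolo11_EVC.yaml')
# detailed=True prints the per-layer table; an EVCBlock entry should appear right after
# the first nn.Upsample in the head.
model.info(detailed=True)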


3. Network Structure Diagram

(Figure: network structure of the improved YOLO11 with the EVCBlock in the head.)

Future research directions include:

  1. Further reducing the computational complexity of the attention mechanism.
  2. Exploring more efficient multi-scale feature-fusion strategies.
  3. Applying YOLO11 to more real-world scenarios, such as autonomous driving and UAV surveillance.
