## Affinity Scheduling

Affinity scheduling is a more flexible strategy than pinning a Pod to a node directly (with `nodeName` or `nodeSelector`). You define a set of rules, and the scheduler tries to place the Pod on the node that best satisfies them; for the "soft" variants, if no node fully matches, the Pod can still be scheduled onto another node.

The three main affinity types and their use cases:

- **Node Affinity** — rules for which nodes a Pod may run on, selected by node labels (for example, nodes with particular hardware or in a particular zone).
  - `requiredDuringSchedulingIgnoredDuringExecution` (hard limit): the rules must be satisfied before the Pod can be scheduled.
    - `nodeSelectorTerms`: the list of node selector terms.
    - `matchFields`: selector requirements expressed against node fields.
    - `matchExpressions`: selector requirements expressed against node labels (recommended).
  - `preferredDuringSchedulingIgnoredDuringExecution` (soft limit): prefer nodes that satisfy the rules, but fall back to other nodes if none do.
    - `preference`: a node selector term associated with a weight.
    - `weight`: the preference weight, in the range 1–100.
- **Pod Affinity** — rules for co-locating a new Pod in the same topology domain as specified existing Pods. Useful for components that interact frequently, to reduce communication latency. It has the same `required…` variant (must be co-located with the referenced Pods) and `preferred…` variant (prefer co-location, fall back to other domains otherwise).
- **Pod Anti-Affinity** — rules for keeping a new Pod *out* of the topology domains of specified existing Pods. Spreads replicas of an application across domains to improve availability and fault tolerance. It likewise has `required…` and `preferred…` variants.

Every affinity type supports these two modes:

- `requiredDuringSchedulingIgnoredDuringExecution`: the rule must be satisfied at scheduling time; if no node matches, the Pod is not scheduled. If a node's labels later change so the rule no longer matches, the Pod nevertheless stays on the node ("ignored during execution").
- `preferredDuringSchedulingIgnoredDuringExecution`: the rule is only a preference at scheduling time; if no node matches, the Pod can still be scheduled onto another node.

## NodeAffinity

NodeAffinity lets you specify, based on node labels, which nodes a Pod must (or should prefer to) be scheduled onto.

Available configuration:

- `requiredDuringSchedulingIgnoredDuringExecution` (hard limit)
  - `nodeSelectorTerms`: the node selector list; the specified rules must be satisfied for the Pod to be scheduled.
    - `matchFields`: a list of selector requirements by node field.
    - `matchExpressions`: a list of selector requirements by node label, each with:
      - `key`: the label key
      - `values`: the label values
      - `operator`: the relation, one of `In`, `NotIn`, `Exists`, `DoesNotExist`, `Gt`, `Lt`
- `preferredDuringSchedulingIgnoredDuringExecution` (soft limit)
  - `preference`: a node selector term associated with a weight, containing the same `matchFields` / `matchExpressions` structure as above.
  - `weight`: the preference weight, in the range 1–100.

### Hard-limit example

No node carries a matching `nodeenv` label yet, so scheduling fails:

```yaml
# vim pod-nodeaffinity-required.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-nodeaffinity-required
  namespace: test
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: nodeenv
            operator: In
            values: [test,xxx]
```

```
[root@k8s-master ~]# kubectl create ns test
namespace/test created
[root@k8s-master ~]# kubectl apply -f pod-nodeaffinity-required.yaml
pod/pod-nodeaffinity-required created
[root@k8s-master ~]# kubectl get pods pod-nodeaffinity-required -n test
NAME                        READY   STATUS    RESTARTS   AGE
pod-nodeaffinity-required   0/1     Pending   0          22s
[root@k8s-master ~]# kubectl describe pods pod-nodeaffinity-required -n test
Name:         pod-nodeaffinity-required
Namespace:    test
Priority:     0
Node:         <none>
Labels:       <none>
Annotations:  <none>
Status:       Pending
IP:
IPs:          <none>
Containers:
  nginx:
    Image:        nginx:1.17.1
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5f6rd (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  kube-api-access-5f6rd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  34s   default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity/selector.
  Warning  FailedScheduling  33s   default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity/selector.
```

Next, label the nodes and test again:

```
[root@k8s-master ~]# kubectl label nodes k8s-node1 nodeenv=dev
node/k8s-node1 labeled
[root@k8s-master ~]# kubectl label nodes k8s-node2 nodeenv=test
node/k8s-node2 labeled
[root@k8s-master ~]# kubectl delete -f pod-nodeaffinity-required.yaml
pod "pod-nodeaffinity-required" deleted
[root@k8s-master ~]# kubectl apply -f pod-nodeaffinity-required.yaml
pod/pod-nodeaffinity-required created
[root@k8s-master ~]# kubectl describe pods pod-nodeaffinity-required -n test
Name:         pod-nodeaffinity-required
Namespace:    test
Priority:     0
Node:         k8s-node2/192.168.58.233
Start Time:   Thu, 16 Jan 2025 04:14:35 -0500
Labels:       <none>
Annotations:  cni.projectcalico.org/containerID: eb576e210ed0daf158fc97706a7858428fdcbce61d89936cd60323c184bf65d7
              cni.projectcalico.org/podIP: 10.244.169.130/32
              cni.projectcalico.org/podIPs: 10.244.169.130/32
Status:       Running
IP:           10.244.169.130
IPs:
  IP:  10.244.169.130
Containers:
  nginx:
    Container ID:   docker://b58aa001a6b25893a091a726ede2ea57d96e6209c11a8c17d269d78087db505e
    Image:          nginx:1.17.1
    Image ID:       docker-pullable://nginx@sha256:b4b9b3eee194703fc2fa8afa5b7510c77ae70cfba567af1376a573a967c03dbb
    State:          Running
      Started:      Thu, 16 Jan 2025 04:14:38 -0500
    Ready:          True
    Restart Count:  0
...
Events:
  Type    Reason     Age      From               Message
  ----    ------     ----     ----               -------
  Normal  Scheduled  11s      default-scheduler  Successfully assigned test/pod-nodeaffinity-required to k8s-node2
  Normal  Pulled     invalid  kubelet            Container image "nginx:1.17.1" already present on machine
  Normal  Created    invalid  kubelet            Created container nginx
  Normal  Started    invalid  kubelet            Started container nginx
```
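The hard and soft forms can also be combined in one Pod spec: the required term first filters the candidate nodes, and the preferred weights then rank whatever remains. A hypothetical sketch, not part of the demo above (the `disktype` label key and the Pod name are assumptions for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-nodeaffinity-combined   # hypothetical name
  namespace: test
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:
    nodeAffinity:
      # hard: only nodes with nodeenv in (test, dev) are candidates at all
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: nodeenv
            operator: In
            values: [test,dev]
      # soft: among those candidates, prefer nodes labeled disktype=ssd
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50
        preference:
          matchExpressions:
          - key: disktype          # assumed label, for illustration only
            operator: In
            values: [ssd]
```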
```
[root@k8s-master ~]# kubectl get pods pod-nodeaffinity-required -n test
NAME                        READY   STATUS    RESTARTS   AGE
pod-nodeaffinity-required   1/1     Running   0          21s
```

What the fields mean:

- `requiredDuringSchedulingIgnoredDuringExecution`: a mandatory rule — the Pod may only be scheduled onto a node that satisfies the conditions below. The rule is enforced at scheduling time; if the node's labels change while the Pod is running, the rule is ignored.
- `nodeSelectorTerms`: one or more node selector terms. Multiple terms are ORed: a node matching any single term is eligible.
- `matchExpressions`: one or more match expressions, each consisting of a key, an operator, and one or more values. Within a single term the expressions are ANDed: the node must satisfy all of them.

In this manifest, `matchExpressions` defines one condition:

- `key: nodeenv`: the node label key to match is `nodeenv`.
- `operator: In`: the label value must appear in the given list.
- `values: [test,xxx]`: the label value must be `test` or `xxx`.

So the affinity rule requires the Pod to be scheduled onto a node whose `nodeenv` label has the value `test` or `xxx`.

### Soft-limit example

A soft limit only prefers nodes carrying the requested labels; if none exist, the Pod is still scheduled onto some other node:

```yaml
# vim pod-nodeaffinity-preferred.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-nodeaffinity-preferred
  namespace: test
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: nodeenv
            operator: In
            values: [xxx,yyy]
```

```
[root@k8s-master ~]# kubectl apply -f pod-nodeaffinity-preferred.yaml
pod/pod-nodeaffinity-preferred created
[root@k8s-master ~]# kubectl get pods pod-nodeaffinity-preferred -n test
NAME                         READY   STATUS              RESTARTS   AGE
pod-nodeaffinity-preferred   0/1     ContainerCreating   0          32s
[root@k8s-master ~]# kubectl get pods pod-nodeaffinity-preferred -n test -w
NAME                         READY   STATUS              RESTARTS   AGE
pod-nodeaffinity-preferred   0/1     ContainerCreating   0          36s
pod-nodeaffinity-preferred   1/1     Running             0          37s
[root@k8s-master ~]# kubectl describe pods pod-nodeaffinity-preferred -n test
Name:         pod-nodeaffinity-preferred
Namespace:    test
Priority:     0
Node:         k8s-node1/192.168.58.232
Start Time:   Thu, 16 Jan 2025 04:28:24 -0500
Labels:       <none>
Annotations:  cni.projectcalico.org/containerID: eab55d3f2b78987484123e4f4b21434f4f1323620026e3946e5fe77476e4a761
              cni.projectcalico.org/podIP: 10.244.36.71/32
              cni.projectcalico.org/podIPs: 10.244.36.71/32
Status:       Running
IP:           10.244.36.71
IPs:
  IP:  10.244.36.71
Containers:
  nginx:
    Container ID:   docker://56be94e1afb802e91e86faf21ccce1925fa7f4204b418e6c5b8ac11024f75fc2
    Image:          nginx:1.17.1
    Image ID:       docker-pullable://nginx@sha256:b4b9b3eee194703fc2fa8afa5b7510c77ae70cfba567af1376a573a967c03dbb
    State:          Running
      Started:      Thu, 16 Jan 2025 04:29:00 -0500
    Ready:          True
    Restart Count:  0
...
Events:
  Type    Reason     Age      From               Message
  ----    ------     ----     ----               -------
  Normal  Scheduled  49s      default-scheduler  Successfully assigned test/pod-nodeaffinity-preferred to k8s-node1
  Normal  Pulling    invalid  kubelet            Pulling image "nginx:1.17.1"
  Normal  Pulled     invalid  kubelet            Successfully pulled image "nginx:1.17.1" in 32.615153362s
  Normal  Created    invalid  kubelet            Created container nginx
  Normal  Started    invalid  kubelet            Started container nginx
```

Even though no node carries `nodeenv` with value `xxx` or `yyy`, the Pod was still scheduled (here, onto k8s-node1).

## PodAffinity

PodAffinity uses already-running Pods as the reference point: a new Pod is scheduled into the same topology domain as the reference Pods.

Available configuration:

- `requiredDuringSchedulingIgnoredDuringExecution` (hard limit)
  - `namespaces`: the namespaces of the reference Pods.
  - `topologyKey`: the scope that counts as "the same domain", e.g. `kubernetes.io/hostname` (per node) or `beta.kubernetes.io/os` (per node operating system).
  - `labelSelector`: a label selector matching the reference Pods.
    - `matchExpressions`: a list of requirements by Pod label, each with a `key`, `values`, and an `operator` (one of `In`, `NotIn`, `Exists`, `DoesNotExist`).
    - `matchLabels`: a map equivalent to a set of `matchExpressions`.
- `preferredDuringSchedulingIgnoredDuringExecution` (soft limit)
  - `weight`: the priority of this preference, in the range 1–100.
  - `podAffinityTerm`: contains the same `namespaces`, `topologyKey`, and `labelSelector` (with `matchExpressions` / `matchLabels`) as above.

`topologyKey` specifies the scheduling scope:

- `kubernetes.io/hostname`: each node is its own domain.
- `beta.kubernetes.io/os`: domains are distinguished by the node's operating system.

### Hard-limit example

First create the reference Pod, pinned to k8s-node1:

```yaml
# vim pod-podaffinity-target.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-podaffinity-target
  namespace: test
  labels:
    podenv: pro
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  nodeName: k8s-node1
```

```
[root@k8s-master ~]# kubectl apply -f pod-podaffinity-target.yaml
pod/pod-podaffinity-target created
[root@k8s-master ~]# kubectl describe pods pod-podaffinity-target -n test
Name:         pod-podaffinity-target
Namespace:    test
Priority:     0
Node:         k8s-node1/192.168.58.232
Start Time:   Thu, 16 Jan 2025 04:58:54 -0500
Labels:       podenv=pro
Annotations:  cni.projectcalico.org/containerID: 48a68cbe52064a7eb4c3be9db7e24dff3176382ed16d18e9ede5d30312e6425f
              cni.projectcalico.org/podIP: 10.244.36.72/32
              cni.projectcalico.org/podIPs: 10.244.36.72/32
Status:       Running
IP:           10.244.36.72
Containers:
  nginx:
    Container ID:   docker://681c85e860b8e04189abd25d42de0e377cc297d73ef7965871631622704ecd19
    Image:          nginx:1.17.1
    Image ID:       docker-pullable://nginx@sha256:b4b9b3eee194703fc2fa8afa5b7510c77ae70cfba567af1376a573a967c03dbb
    State:          Running
      Started:      Thu, 16 Jan 2025 04:58:58 -0500
    Ready:          True
    Restart Count:  0
...
Events:
  Type    Reason   Age      From     Message
  ----    ------   ----     ----     -------
  Normal  Pulled   invalid  kubelet  Container image "nginx:1.17.1" already present on machine
  Normal  Created  invalid  kubelet  Created container nginx
  Normal  Started  invalid  kubelet  Started container nginx
```

Now create pod-podaffinity-required:

```yaml
# vim pod-podaffinity-required.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-podaffinity-required
  namespace: test
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:                     # affinity settings
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: podenv         # select Pods carrying the podenv label
            operator: In
            values: [xxx,yyy]   # with value xxx or yyy
        topologyKey: kubernetes.io/hostname
```

```
[root@k8s-master ~]# kubectl apply -f pod-podaffinity-required.yaml
pod/pod-podaffinity-required created
[root@k8s-master ~]# kubectl describe pod pod-podaffinity-required -n test
Name:         pod-podaffinity-required
Namespace:    test
Priority:     0
Node:         <none>
Labels:       <none>
Annotations:  <none>
Status:       Pending
IP:
IPs:          <none>
...
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  39s   default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match pod affinity rules, 2 node(s) didn't match pod affinity/anti-affinity rules.
  Warning  FailedScheduling  38s   default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match pod affinity rules, 2 node(s) didn't match pod affinity/anti-affinity rules.
```
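PodAffinity also has the soft form. A hypothetical sketch (the Pod name is an assumption) that merely prefers landing next to `podenv=pro` Pods instead of requiring it — with a soft rule, the Pod would not stay Pending even when nothing matches:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-podaffinity-soft    # hypothetical name
  namespace: test
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:
    podAffinity:
      # soft: prefer nodes already running a podenv=pro Pod
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: podenv
              operator: In
              values: [pro]
          topologyKey: kubernetes.io/hostname
```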
Edit the manifest so the values include `pro`, the label the target Pod actually carries:

```yaml
# vim pod-podaffinity-required.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-podaffinity-required
  namespace: test
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:                     # affinity settings
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: podenv         # select Pods carrying the podenv label
            operator: In
            values: [pro,yyy]   # with value pro or yyy
        topologyKey: kubernetes.io/hostname
```

```
[root@k8s-master ~]# kubectl delete -f pod-podaffinity-required.yaml
pod "pod-podaffinity-required" deleted
[root@k8s-master ~]# kubectl apply -f pod-podaffinity-required.yaml
pod/pod-podaffinity-required created
[root@k8s-master ~]# kubectl describe pod pod-podaffinity-required -n test
Name:         pod-podaffinity-required
Namespace:    test
Priority:     0
Node:         k8s-node1/192.168.58.232
Start Time:   Thu, 16 Jan 2025 05:09:42 -0500
Labels:       <none>
Annotations:  cni.projectcalico.org/containerID: c459af771605b41fd74ae294344118acbdc2cd8fed3ae242982506c8eda9ad31
              cni.projectcalico.org/podIP: 10.244.36.73/32
              cni.projectcalico.org/podIPs: 10.244.36.73/32
Status:       Running
IP:           10.244.36.73
IPs:
  IP:  10.244.36.73
Containers:
  nginx:
    Container ID:   docker://501cb02e356ddb23e7e11fd48ac0403f83221afbba9d18c608f3415533fe4290
    Image:          nginx:1.17.1
    Image ID:       docker-pullable://nginx@sha256:b4b9b3eee194703fc2fa8afa5b7510c77ae70cfba567af1376a573a967c03dbb
    State:          Running
      Started:      Thu, 16 Jan 2025 05:09:45 -0500
    Ready:          True
    Restart Count:  0
...
Events:
  Type    Reason     Age      From               Message
  ----    ------     ----     ----               -------
  Normal  Scheduled  6s       default-scheduler  Successfully assigned test/pod-podaffinity-required to k8s-node1
  Normal  Pulled     invalid  kubelet            Container image "nginx:1.17.1" already present on machine
  Normal  Created    invalid  kubelet            Created container nginx
  Normal  Started    invalid  kubelet            Started container nginx
```

Why the first attempt stayed Pending:

- **Taints on nodes**: one node in the cluster (the master) carries the taint `node-role.kubernetes.io/master:`, so no Pod is scheduled there unless it tolerates that taint. Our Pod has no matching toleration, so the master is ruled out.
- **Pod affinity rules**: the required pod affinity rule could not be satisfied — per the events, the 2 remaining nodes didn't match, meaning neither worker was running a Pod with a `podenv` label valued `xxx` or `yyy`.

## PodAntiAffinity

PodAntiAffinity is the inverse of PodAffinity: it ensures that Pods carrying particular labels are *not* scheduled into the same topology domain. It suits components that are naturally mutually exclusive, or Pods that should be spread apart for fault tolerance and performance.

PodAntiAffinity likewise uses running Pods as the reference point, but places the new Pod in a different domain from them. Its configuration options are identical to PodAffinity's.

```yaml
# vim pod-podantiaffinity-required.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-podantiaffinity-required
  namespace: test
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: podenv
            operator: In
            values: [pro]
        topologyKey: kubernetes.io/hostname
```

```
[root@k8s-master ~]# kubectl apply -f pod-podantiaffinity-required.yaml
pod/pod-podantiaffinity-required created
[root@k8s-master ~]# kubectl get pod pod-podantiaffinity-required -n test -o wide
NAME                           READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
pod-podantiaffinity-required   1/1     Running   0          9s    10.244.169.131   k8s-node2   <none>           <none>
[root@k8s-master ~]# kubectl get pod pod-podantiaffinity-required -n test -o wide --show-labels
NAME                           READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES   LABELS
pod-podantiaffinity-required   1/1     Running   0          19s   10.244.169.131   k8s-node2   <none>           <none>            <none>
[root@k8s-master ~]# kubectl get nodes --show-labels
NAME         STATUS   ROLES                  AGE   VERSION    LABELS
k8s-master   Ready    control-plane,master   21d   v1.21.10   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8s-node1    Ready    <none>                 21d   v1.21.10   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1,kubernetes.io/os=linux,nodeenv=pro
k8s-node2    Ready    <none>                 21d   v1.21.10   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node2,kubernetes.io/os=linux,nodeenv=test
```

The new Pod must not share a node with any Pod labeled `podenv=pro`. Since the reference Pod (`podenv=pro`) runs on k8s-node1, the new Pod is placed on k8s-node2.
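In practice, PodAntiAffinity is most often written into a Deployment's Pod template so that replicas spread themselves apart: each Pod repels other Pods carrying its own label. A hypothetical sketch (the Deployment name and `app` label are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-spread              # hypothetical name
  namespace: test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-spread
  template:
    metadata:
      labels:
        app: nginx-spread
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: nginx-spread   # each replica repels its siblings
            topologyKey: kubernetes.io/hostname
```

On a cluster with two schedulable workers, the two replicas land on different nodes; a third replica would stay Pending under this hard rule, whereas the `preferred…` form would let it share a node.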