
I haven't written anything in a long while. With some free time today, I'll jot down a few of my own thoughts. They're only my humble opinions, and as the saying goes, different people see things differently. I'm just sharing my own views and my own way of thinking, and I hope they help someone.
First, my view of the overall Taobao environment.
Taobao is a platform; never assume it is fair. Fairness is always relative, and it's survival of the fittest. Selling on Taobao really comes down to two modes: selling on margin per item, or selling on volume. A shop's style matters, of course, but the fundamental point of doing business is to make money, not to gamble; using a little to win a lot is what counts. Some people say, "the losses I take now are for better growth later." To me that's just empty talk: if you keep growing while losing money, you end up with nothing. So beyond chasing the targets we set, we also have to think hard about how to actually keep some money. My advice: grow according to your own plan and don't let Taobao's big moves scramble your thinking. There are plenty of opportunities; you don't need to excel at everything, and you couldn't anyway. What we can do is slowly build up our internal strength. People who achieve big things learn patience instead of staking everything on one throw. Remember: however Taobao changes, don't lose yourself; keep thinking back to why you started on Taobao in the first place. That's how you last longer.
Next, some thoughts on promotion and marketing.
The core of running a Taobao shop is promotion and marketing; put more bluntly, it is how you capture traffic and raise conversion. Many people chase whatever Taobao is promoting at the moment, but you really don't have to. Think it through: in a Chinese market of over 1.3 billion people, can you possibly take the whole market share? All you need is one part in a million, or one in ten million. So with all that traffic on Taobao, act within your means and don't overreach. "Small and beautiful" applies not only to the shop but to traffic as well: do as much as your money allows.
Don't stare at how much other shops sell in a month; what you need is steady growth. Don't pour in all your capital. Two questions matter here: how long can you hold out, and where are your strengths? My feeling is that rather than casting a wide net, you should attack one point. Say you have real experience with Zhitongche (Taobao's pay-per-click ads); then play that strength first. Don't dabble in Zhitongche today, Weitao tomorrow and WeChat the day after, only to realize in the end that doing one traffic channel well beats everything else. Pushing the skill you're best at to its limit is what matters. Promotion, plainly put, is deploying troops: the right formation for the right terrain is the way to win, and you can't use the same formation on water as on land. Promotion means investment, and investment requires caution, a plan and a strategy; never follow blindly, or you'll get hurt without even knowing how. Small sellers have small sellers' tricks; grab hold of one point and making money is not a problem.
Third, doing Taobao takes connections (updated Nov 7)
Many people aren't good at talking to others and just grind away alone at research and earning, but on Taobao connections matter a lot. When your circle grows, as the saying goes, three cobblers beat one Zhuge Liang; regularly discussing Taobao with others will pay off. Back in '07, a few of us doing Taobao already believed combining online and offline was the future. By 2009 we felt that on Taobao, if you don't move forward you fall behind; in 2010 we got together again and agreed that if you don't move forward you die; by 2013 we thought that to make money on Taobao, diversification and cooperation might be the trend. Only by grasping the right direction will you get the results you want later, and building connections takes time to settle. On diversification: it means running different product categories, because the model and process are the same. I don't know if you've noticed, but the first shop someone opens rarely takes off, while the second grows quickly, simply because of the experience from the first time. Diversification isn't just more products and more shops; it's about integrating resources and people sensibly. I remember a crown-rated shop that did well last year and hired eight people at once; this year, caught up in Taobao's changes, it's back to being a husband-and-wife store. Don't expand blindly; use your resources and people wisely. We all know the economy is poor this year, even hitting the wholesale markets. To me that's an opportunity: if the economy were good, which wholesaler or factory would give you a second look? Because times are hard, we get many openings. Looking at the same problem differently gives different answers, and the lesson for day-to-day shop management is to learn to think in reverse; that's how you find the breakthrough and spot tricks and problems others miss.
Fourth, some lessons about growing a shop. (updated Nov 7, 09:23)
Running a shop takes long-term, continuous improvement; every shop has plenty of shortcomings, and the shortage of money and people is the crux. For example, someone says "the designer I hired is poor, the images never look right." People have asked me about this, but as I see it the designer's job is only to turn your ideas into practice; if you, the shop owner or operator, never even tell the designer your thinking, how can she produce the perfect image? So duties and responsibilities have to be divided sensibly across roles. Whether a shop grows well depends mainly on whether its key person can really steer it; that person needs strong all-round ability and years of hard-won experience, and without a good key person the shop will end up eliminated. Next, what is hardest to manage? You won't have good business every month of the year, and that raises a problem. An example: suppose your yearly promotion budget is about 120,000 yuan. Some people spend 10,000 a month, peak season or slow season alike; the only outcome is making nothing for the year, and the best case is breaking even. If instead we identify the shop's peak and slow seasons and use the money accordingly, the result is different: if March to May is my peak, I put in 50,000 and try to make a good profit; June to October is my slow season, so I pull back and keep the shop ticking over on roughly 20,000; from November to January the peak returns and I invest another 50,000, a sure win. Knowing when to push and when to hold back is the way to do it. Remember one truth: whatever the situation, the goal is to make money; a shop that doesn't make money is just messing around. Only with more capital can you talk about growth; with just 100,000 in hand, what big expansion can you talk about? So earning and saving come first. Everything needs investment, and without money nothing works.
For a newly opened shop, if you're doing it full time, I suggest you walk through every step yourself, from sourcing and purchasing to photography and so on, to find your weaknesses and gaps, then fill them in gradually rather than all at once, because every step costs time and effort. Learn the basics properly, and don't expect to make a million in the first year; figuring out how to survive comes first. I remember starting out in '07 with 500 yuan in hand. The problem in front of me was survival, so I first found small ways to earn, but I never gave up on Taobao. I sold ornamental stones, for instance, to make money, and the money was there to keep the Taobao shop alive, because I knew exactly what I wanted and what I was holding on for. You can't relax the "prepare for danger in times of peace" mindset for a single day. Many people think, "I make 100,000 this year, so next year I'll surely make 200,000," but how can you predict how a market will shift? What we can do is stay prepared: open up more ground and sow more seeds. When one place is making money, be thinking about starting something else; don't get lost in your own fairy tale. The market is brutal, and only by continually sowing seeds do you stay undefeated. Look at the big bosses who used to do machining and now also move into hotels, real estate and other projects; the logic is the same. On Taobao this caution is a must: one ranking penalty or one shop closure can be a fatal blow, but if you have other projects running you won't die in one stroke. So my conclusion: while you're making money, keep nurturing a project that is still growing. Better to be prepared.
Fifth, the outlook and trends for those of us doing Taobao (updated Nov 7, 10:08)
After doing Taobao this long, honestly it is tiring, and starting from nothing is especially hard. Plenty of chances have brushed past me; sometimes I regret it, but missed is missed, and all you can hold is the present. No shop grows easily; every shop's growth is a history of blood and tears. Not many people have kept at it this long; of the people I know, few are still doing Taobao, but those who stuck with it are all doing fine now, with their own teams and their own businesses. I keep thinking: if you spend ten years or longer on one thing, you become an expert in that field, like a marksman who has fired so many rounds he shoots by feel and doesn't need to aim deliberately. Time is one of the tests of ability. Some say, "I've done Taobao all this time, made no money, and the years were wasted." I don't see it that way: some things you only understand once you've lived them, and even one year on Taobao puts you well ahead of a beginner. What does e-commerce lack right now? People. I believe that if you keep learning, there will be more than enough companies and owners who need such people, and the pay is not bad. Last year I took on a young apprentice I happened to meet; he was earning around 3,000 running an online shop. After we met he kept learning and summing up his experience and gradually matured; he now runs operations for a Tmall store, leads a team of twenty-odd people, and earns a monthly salary of over 30,000 plus a year-end bonus. I'm not telling this to show off but to make a point: if you commit to something, study every day, and keep summing up and exploring, then even if you don't run your own shop you can get the package you want working for someone else. Don't complain that heaven is unfair; find your own strong points, and instead of thinking in a straight line, try thinking in reverse; you may give yourself a satisfying answer. I predict that within three years the companies and owners needing experienced operators and shop managers will not be few. Think about it: anyone with real operating skill is working for themselves, and you rarely hear of experienced, skilled people working for others; that is exactly why e-commerce hiring is so hard now. So doing Taobao is definitely good for personal growth. Sometimes I think: if Taobao took no skill and anyone could do it well, the business wouldn't be worth doing, because China has a lot of people and whatever you can do, others can too. But Taobao is different: opening a shop is easy, running it well is not something achieved overnight. Do you feel the same?
Nov 7 (updated 14:00)
Continuing the update: some people imagine running an online shop is easy, pleasant work, when in fact I see it as a test of endurance: up at 8 in the morning, to bed at 12 at night, basically 16-hour days, over a long, drawn-out stretch. If it weren't for a small dream, and for the people who love us and the people we love, who would willingly take this on? So you endure the grind of daily life and the hammering of the business alike. As a shop owner you need the ability of a trader at the desk: see the whole board and know what is urgent and what can wait. Someone will say, "I've just opened and know nothing, how can I do well?" From the day you start, the word "persistence" runs through your whole Taobao career. Don't wait for a windfall from the sky; build your all-round ability. It's like three years back when I coached a batch of students (free of charge, mind you). When I met them they were at one or two hearts (Taobao's entry-level seller rating); when we talked on YY about teams and management, some asked how they could possibly afford a customer-service hire. A year later each of them had their own customer service and their own designer. So what I want to say is: if you want to build a shop up fast, the best path is not "brushing" (faking orders) but finding your benefactor or mentor. With their experience handed to you, you get onto the right track quickly; at the very least you make some money and see hope. If someone runs a shop for three months, earns nothing and loses a few thousand or tens of thousands, will they keep going? No one needs to talk them out of it; they'll talk themselves out of it. So learning the basics and earning the first pot of gold quickly is what counts. Go and look for your mentor; that is the only shortcut, preferably a free one! When I started, I also met two older sisters, one in Shanghai and one in Hebei, who had started earlier and knew more than I did. With their patient guidance and direction I took far fewer wrong turns; that is also why I was able to make money quickly.
One of my views after all these years: many beginners think today's Taobao is too complicated. I don't think it is complicated so much as grown up. Old-timers once told me that back in 2003, Taobao's staff gave one-on-one help; if you had a problem, a support staffer would fix it for you remotely while you just watched. So it isn't that Taobao gave us no opportunities; we simply weren't there for them. Beginners keep asking me how to decorate the shop and where traffic comes from. I tell them my own take: in 2003 Taobao itself was a primary-school pupil like us, and sellers, support staff and the platform grew up together. Now that Taobao has become a university, how could you, a primary-school pupil, understand so much all at once? What you need to do is hurry up and cram the Taobao knowledge you're missing. Don't dream of shortcuts before you've learned anything and forget the fundamentals. The road of growth is hard and nobody can walk it for you; take a shortcut and sooner or later you'll face the same problems. There is no way around them.
Nov 8 (updated 08:45)
Nearly bedtime again; lights out at 23:00 sharp. I remember that as a beginner I stayed up until one or three every night, and strangely woke at eight sharp feeling great. More than six years have flown by and my energy isn't what it was. Although I keep reminding myself of the original dream, age creeps up day by day, so my feeling is that young people have plenty of opportunity. A university graduate once came to ask me about doing Taobao, and one sentence made him give up. I asked: "If you earned nothing for five years, could you keep going?" He was silent for a while and quit. Honestly, if a young person has no boldness or courage, what can they do well? Would you really earn nothing in five years? It also shows something: many people now do Taobao not to temper themselves but because they think money comes easily there, so with a starting point like that, giving up is the right call for them. Here is something many people may not grasp: once you put your heart into something, the process doesn't stand still; as your experience and insight grow, intangible wealth gathers around you, and late bloomers all go through this. I sometimes think: when your experience and every part of your shop are in excellent shape, throwing the combination punches to lift sales is easy: Zhitongche, Zuanzhan banner ads, display ads, campaigns and so on. With ample money and experience, standing up is not hard. What is hardest to control is your own desire: the moment you treat something as a gamble, you are close to failure. The money is never all earned and the business is never all done; only by controlling your desires can you run your shop with room to move. Look across Taobao: who are the people making good money and living well now? The ones who started in 2003 to 2007. Extend the logic and you know in which year you should have gotten up. Everything is hard at the start; what we are competing on now is stamina, not capital. Hold on and the chance will come.
Nov 8 (updated 20:11)
Another update. People say Taobao is getting harder and harder, and ask whether a newcomer still has a chance to turn things around. A few angles: first, since the day I touched Taobao I have never felt it was easy. As a beginner you have to learn basic image editing, shop decoration, listing products, the platform rules and so on, going to bed at two or three every night; even now I still study something new every day. Tiredness is a given; the moment you strike out on your own, the hard slog begins. Newcomers also shouldn't envy those doing well now: with weak foundations the ground shakes, it is not that simple. It is like looking at our parents' generation and feeling the '80s and '90s were an era of picking up money, but every era has its opportunities. We cannot change the overall environment, but we can change ourselves. So never write off your own potential: I used to be a security guard, then started on Taobao and ground through the hard years too, so I understand exactly how tough it is for a beginner. But being new has its advantages: big sellers have little time to study, while a beginner's business is slow, so there is more time to learn. Make the most of that chance; one day when business gets busy you won't have that time any more.
To be continued...
I'll keep updating bit by bit and try to cover everything.
For sure, more updates coming! Lots to say! Haha!
Write something concrete; also waiting for updates~
Experience posts are my favorite.
Thanks for the support! What I wrote is a bit general, but some people will certainly know what I'm talking about; it's all plain truth, and anyone who has been through it can see that.
OP, your title sure is eye-catching.
Very practical. A small seller starting out can't expect to master all eighteen martial arts. Don't try this today and tinker with that tomorrow and end up good at nothing; concentrate your limited resources, attack one point, and forge one signature skill. That's the way.
Following the OP for updates!
OP, after all these years you must have areas you're especially good at; post them and share with everyone.
That makes a lot of sense.
That's what I think too.
Thanks for the support, everyone. Updated!
Well said, OP; it's just that feeling your way forward slowly takes time.
"Rather than casting a wide net, attack one point; one traffic channel done well beats everything." I really agree with this.
Good material, a bit dry though, and a little tiring to read; OP, please summarize more clearly.
Looks like the OP is slowly grinding it out, just like me.
Well said, very practical. Learned something.
The Taobao platform really is survival of the fittest. Prepare for danger in times of peace.
Very good; we do need to keep thinking.
What I take from this: keep at it, put your heart into it to do it well, and keep thinking to grow strong. Still learning, and getting ready to do Taobao operations.
A very good summary.
Looking forward to updates. I've only just started on Taobao and there is still so much to learn.
Well written, OP, very mature. A thumbs up!
Very good. E-commerce people, especially on Taobao, are awfully restless these days; not many keep a clear line of thinking like the OP.
Can you mentor me?? I've been doing Tmall for a year... would love to exchange experience.
Reading what the OP says, I want to push harder and do better. What difficulty or obstacle could there be?
Truly instructive! Many things need time to wash over them. I haven't been in e-commerce long: I graduated in June 2012, stumbled along restlessly for a while, and entered e-commerce in June 2013; at my company I just study Zhitongche. The OP's words sound a lot like what I tell myself every day, haha. OP, do you have QQ? I'd like to take you as my teacher.
OP, please take me under your wing, a new operations person! Hoping to exchange experience!
Read the whole thing; very real, and some of it spoke straight to my heart. Thanks for sharing. As you said, finding a good mentor really matters; I've always believed that, and I'm still looking.
"Prepare for danger in times of peace." Well said, OP. Haven't finished reading yet; bumping the thread first.
I'm starting in e-commerce in 2014; please give me plenty of pointers. Taking you as a teacher would work too.
I'll keep updating bit by bit and try to cover everything.
Much obliged. If I may ask: is the 伊雪娜 flagship store the one you're running, OP?
Very down to earth; looking forward to more of your sharing. Also, are you still taking apprentices?
Well said. I'm feeling lost right now and would really appreciate some guidance. Thank you.
So, still taking apprentices? Haha.
Thanks for the effort. I do O2O mother-and-baby products. Keep it up!
I'll keep updating bit by bit and try to cover everything.
Makes sense; waiting for updates.
Very well written, OP; I'll keep following! I'm a raw recruit who knows nothing; all I know is to keep learning, learning and learning.
Thanks for your hard work, senior. I've hit thirty, no partner, no kids, and I want to make a proper go of Taobao.
Just passing by, haha.
Haha, that's funny. What I don't get is: you trained an apprentice like that? 30,000 a month? Impressive. Which store, 韩都 or 茵曼?
Senior OP, taking apprentices or not?
Well said, OP; I feel the same about many things.
OP put it so well,
I'm feeling lost right now,
hoping to exchange experience.
Very well said and deeply felt: every bit of growth is a history of blood and tears. There is no shortcut to doing Taobao well, only wholehearted persistence.
Orders plunged right before Double 11. Anyone else?
1. First, clear the accumulated wait statistics (run this before the business window you want to measure):
DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR);
2. Around 8 PM, run the following statement to collect the Top 10 wait-type statistics.
WITH [Waits]
AS ( SELECT
[wait_type] ,
[wait_time_ms] / 1000.0 AS [WaitS] ,
( [wait_time_ms] - [signal_wait_time_ms] ) / 1000.0 AS [ResourceS] ,
[signal_wait_time_ms] / 1000.0 AS [SignalS] ,
[waiting_tasks_count] AS [WaitCount] ,
100.0 * [wait_time_ms] / SUM([wait_time_ms]) OVER ( ) AS [Percentage] ,
ROW_NUMBER() OVER ( ORDER BY [wait_time_ms] DESC ) AS [RowNum]
FROM
sys.dm_os_wait_stats
WHERE
[wait_type] NOT IN (
N'CLR_SEMAPHORE',
N'LAZYWRITER_SLEEP',
N'RESOURCE_QUEUE',
N'SQLTRACE_BUFFER_FLUSH',
N'SLEEP_TASK',
N'SLEEP_SYSTEMTASK',
N'WAITFOR',
N'HADR_FILESTREAM_IOMGR_IOCOMPLETION',
N'CHECKPOINT_QUEUE',
N'REQUEST_FOR_DEADLOCK_SEARCH',
N'XE_TIMER_EVENT',
N'XE_DISPATCHER_JOIN',
N'LOGMGR_QUEUE',
N'FT_IFTS_SCHEDULER_IDLE_WAIT',
N'BROKER_TASK_STOP',
N'CLR_MANUAL_EVENT',
N'CLR_AUTO_EVENT',
N'DISPATCHER_QUEUE_SEMAPHORE',
N'TRACEWRITE',
N'XE_DISPATCHER_WAIT',
N'BROKER_TO_FLUSH',
N'BROKER_EVENTHANDLER',
N'FT_IFTSHC_MUTEX',
N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP',
N'DIRTY_PAGE_POLL',
N'SP_SERVER_DIAGNOSTICS_SLEEP' )
)
SELECT
[W1].[wait_type] AS [WaitType] ,
CAST ([W1].[WaitS] AS DECIMAL(14, 2)) AS [Wait_S] ,
CAST ([W1].[ResourceS] AS DECIMAL(14, 2)) AS [Resource_S] ,
CAST ([W1].[SignalS] AS DECIMAL(14, 2)) AS [Signal_S] ,
[W1].[WaitCount] AS [WaitCount] ,
CAST ([W1].[Percentage] AS DECIMAL(4, 2)) AS [Percentage] ,
CAST (( [W1].[WaitS] / [W1].[WaitCount] ) AS DECIMAL(14, 4)) AS [AvgWait_S] ,
CAST (( [W1].[ResourceS] / [W1].[WaitCount] ) AS DECIMAL(14, 4)) AS [AvgRes_S] ,
CAST (( [W1].[SignalS] / [W1].[WaitCount] ) AS DECIMAL(14, 4)) AS [AvgSig_S]
FROM
[Waits] AS [W1]
INNER JOIN [Waits] AS [W2] ON [W2].[RowNum] <= [W1].[RowNum]
GROUP BY [W1].[RowNum] ,
[W1].[wait_type] ,
[W1].[WaitS] ,
[W1].[ResourceS] ,
[W1].[SignalS] ,
[W1].[WaitCount] ,
[W1].[Percentage]
HAVING
SUM([W2].[Percentage]) - [W1].[Percentage] < 95; -- percentage threshold
GO
3. Extract the information
The query results give the following ranking of wait types:
1. CXPACKET
2. LATCH_EX
3. IO_COMPLETION
4. SOS_SCHEDULER_YIELD
5. ASYNC_NETWORK_IO
6. PAGELATCH_XX
7/8. PAGEIOLATCH_XX
Grouped by the main resource they relate to, the waits are:
CPU-related: CXPACKET and SOS_SCHEDULER_YIELD
IO-related: PAGEIOLATCH_XX, IO_COMPLETION
Memory-related: PAGELATCH_XX, LATCH_EX
Further analysis of the top wait types
The current top three are CXPACKET, LATCH_EX and IO_COMPLETION; let's analyze the cause behind each of them in turn.
CXPACKET wait analysis
CXPACKET ranks 1st and SOS_SCHEDULER_YIELD 4th, accompanied by PAGEIOLATCH_XX waits in positions 7 and 8, which means parallel workers are being blocked. Likely reasons:
Large-range table scans exist.
Some parallel threads run for too long; look at PAGEIOLATCH_XX together with the non-page latch ACCESS_METHODS_DATASET_PARENT (more on this below).
The execution plans may be unreasonable.
First, look at the time spent on signal (execution) waits versus resource waits.
Does PAGEIOLATCH_XX exist? PAGEIOLATCH_SH waits would indicate large-range scans.
Are there ACCESS_METHODS_DATASET_PARENT or ACCESS_METHODS_SCAN_RANGE_GENERATOR latch waits at the same time?
Are the execution plans reasonable?
Information extracted:
Get the share of CPU time spent on signal waits versus resource waits.
Run the following statements:
--CPU Wait Queue (threshold: >= 6)
select
scheduler_id,idle_switches_count,context_switches_count,current_tasks_count, active_workers_count
from
sys.dm_os_schedulers
where scheduler_id < 255
select
sum(signal_wait_time_ms) as total_signal_wait_time_ms,
sum(wait_time_ms-signal_wait_time_ms) as total_resource_wait_time_ms,
sum(signal_wait_time_ms)*1.0/sum(wait_time_ms)*100 as signal_wait_percent,
sum(wait_time_ms-signal_wait_time_ms)*1.0/sum(wait_time_ms)*100 as resource_wait_percent
from
SYS.dm_os_wait_stats
Conclusion: the collected figures show that CPU time goes mainly to resource waits, while signal (execution) waits account for only a small share, so we cannot simply conclude that CPU capacity is insufficient.
Possible causes:
Missing clustered indexes, inaccurate execution plans, parallel threads running too long, possible implicit conversions, and TempDB resource contention.
Solutions:
The focus is on reducing the CPU time spent waiting on resources:
Set MAXDOP for queries to a value appropriate for the number of CPU cores, to avoid the "shortest plank" effect when many CPUs process in parallel (a configuration sketch follows this list).
Check the "cost threshold for parallelism" value and set it to something more reasonable.
Reduce full table scans: build suitable clustered and nonclustered indexes.
Inaccurate execution plans: steer queries toward better-optimized plans.
Statistics: make sure statistics are up to date.
Add multiple TempDB data files to reduce latch contention; best practice: with more than 8 cores, add 4 or 8 equally sized data files.
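A minimal configuration sketch for the two parallelism settings just discussed; the values 4 and 15 are the ones this article recommends and should be tuned to your own hardware:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 4;        -- MAXDOP
EXEC sp_configure 'cost threshold for parallelism', 15;  -- only plans whose estimated cost exceeds 15 may go parallel
RECONFIGURE;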
LATCH_EX wait analysis
LATCH_EX ranks 2nd.
There are many non-page latch waits. First confirm which latch has the longest wait time and whether CXPACKET waits occur at the same time.
Querying all latch wait information shows that ACCESS_METHODS_DATASET_PARENT waits the longest; the available references indicate it comes from reading large volumes of data from disk into the buffer cache. Combined with the earlier Perfmon results, the judgment is that large-scale scans are taking place.
SELECT * FROM sys.dm_os_latch_stats
Information extracted:
Causes:
There are many parallel-processing waits and IO page waits, which further suggests large-range table scans.
The developers confirmed that the stored procedures use temporary tables heavily, and monitoring shows the workload frequently uses temporary tables and scalar-valued functions and keeps creating user objects; when TempDB handles the memory-allocation pages PFS, GAM and SGAM, there are many latch waits caused by contention for these internal resources.
Solutions:
Optimize TempDB (a file-adding sketch follows this list).
Create nonclustered indexes to reduce scans.
Update statistics.
If the above still does not solve the problem, move the affected data to a faster IO subsystem and consider adding memory.
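A hedged sketch of adding TempDB data files to relieve PFS/GAM/SGAM latch contention; the file names, sizes and the T:\ path are placeholders, and the file count should follow the core-count guidance above:
ALTER DATABASE tempdb
ADD FILE (NAME = N'tempdev2', FILENAME = N'T:\TempDB\tempdev2.ndf', SIZE = 4GB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb
ADD FILE (NAME = N'tempdev3', FILENAME = N'T:\TempDB\tempdev3.ndf', SIZE = 4GB, FILEGROWTH = 512MB);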
IO_COMPLETION wait analysis
IO_COMPLETION ranks 3rd.
This is an IO latency problem: data waits a long time to travel from disk into memory.
Analyze which database files read and write least efficiently, and combine the result with the "CXPACKET wait analysis" above.
TempDB IO read/write efficiency:
The average IO stall on the TempDB data files is around 80 ms, well above normal values; TempDB is suffering serious delays.
Read latency on the disk hosting TempDB is 65 ms, also higher than normal.
Run the script:
--Database file read/write IO performance
SELECT DB_NAME(fs.database_id) AS [Database Name], CAST(fs.io_stall_read_ms/(1.0 + fs.num_of_reads) AS NUMERIC(10,1)) AS [avg_read_stall_ms],
CAST(fs.io_stall_write_ms/(1.0 + fs.num_of_writes) AS NUMERIC(10,1)) AS [avg_write_stall_ms],
CAST((fs.io_stall_read_ms + fs.io_stall_write_ms)/(1.0 + fs.num_of_reads + fs.num_of_writes) AS NUMERIC(10,1)) AS [avg_io_stall_ms],
CONVERT(DECIMAL(18,2), mf.size/128.0) AS [File Size (MB)], mf.physical_name, mf.type_desc, fs.io_stall_read_ms, fs.num_of_reads,
fs.io_stall_write_ms, fs.num_of_writes, fs.io_stall_read_ms + fs.io_stall_write_ms AS [io_stalls], fs.num_of_reads + fs.num_of_writes AS [total_io]
FROM sys.dm_io_virtual_file_stats(null,null) AS fs
INNER JOIN sys.master_files AS mf WITH (NOLOCK)
ON fs.database_id = mf.database_id
AND fs.[file_id] = mf.[file_id]
ORDER BY avg_io_stall_ms DESC OPTION (RECOMPILE);
--Per-drive disk IO statistics
SELECT [Drive],
CASE WHEN num_of_reads = 0 THEN 0
ELSE (io_stall_read_ms/num_of_reads)
END AS [Read Latency],
CASE WHEN io_stall_write_ms = 0 THEN 0
ELSE (io_stall_write_ms/num_of_writes)
END AS [Write Latency],
CASE WHEN (num_of_reads = 0 AND num_of_writes = 0) THEN 0
ELSE (io_stall/(num_of_reads + num_of_writes))
END AS [Overall Latency],
CASE WHEN num_of_reads = 0 THEN 0
ELSE (num_of_bytes_read/num_of_reads)
END AS [Avg Bytes/Read],
CASE WHEN io_stall_write_ms = 0 THEN 0
ELSE (num_of_bytes_written/num_of_writes)
END AS [Avg Bytes/Write],
CASE WHEN (num_of_reads = 0 AND num_of_writes = 0) THEN 0
ELSE ((num_of_bytes_read + num_of_bytes_written)/(num_of_reads + num_of_writes))
END AS [Avg Bytes/Transfer]
FROM (SELECT LEFT(mf.physical_name, 2) AS Drive, SUM(num_of_reads) AS num_of_reads,
SUM(io_stall_read_ms) AS io_stall_read_ms, SUM(num_of_writes) AS num_of_writes,
SUM(io_stall_write_ms) AS io_stall_write_ms, SUM(num_of_bytes_read) AS num_of_bytes_read,
SUM(num_of_bytes_written) AS num_of_bytes_written, SUM(io_stall) AS io_stall
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
INNER JOIN sys.master_files AS mf WITH (NOLOCK)
ON vfs.database_id = mf.database_id AND vfs.file_id = mf.file_id
GROUP BY LEFT(mf.physical_name, 2)) AS tab
ORDER BY [Overall Latency] OPTION (RECOMPILE);
Information extracted:
Per-data-file IO/CPU/buffer usage shows TempDB's share of IO (its IO rank) is above 53%.
Solutions:
Add multiple TempDB data files to reduce latch contention; best practice: with more than 8 cores, add 4 or 8 equally sized data files.
The wait types also show very high values for the IO-related PAGEIOLATCH_XX waits: the database performs many table scans, the buffer cache cannot satisfy the queries, and data has to be read from disk, producing IO waits.
Solutions:
Create suitable nonclustered indexes to reduce scans and update the statistics (a statistics-refresh sketch follows this list).
If that still does not solve it, consider moving the affected data to a faster IO subsystem and adding memory.
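A hedged sketch of refreshing statistics; dbo.SomeLargeTable is a placeholder name:
UPDATE STATISTICS dbo.SomeLargeTable WITH FULLSCAN;  -- one heavily scanned table
EXEC sp_updatestats;                                 -- or refresh the whole database with sampled statistics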
IV. Optimization plan
Based on the monitoring and analysis above, start the actual optimization work following this order and these principles.
Database configuration optimization
Rationale: lowest cost; the monitoring results show that configuration changes alone can buy a fair amount of headroom.
Index optimization
Rationale: indexes do not touch the tables and other structures tied closely to the business, so there is no risk at the business level.
Steps: given that the database contains very large tables (over 100 GB), index optimization also has to proceed in stages, in this order: unused indexes -> duplicate indexes -> adding missing indexes -> clustered indexes -> index defragmentation.
Statement optimization
Rationale: statement tuning has to be combined with the business and worked through closely with the developers before the final approach is chosen.
Steps: the DBA captures the TOP SQL statements and stored procedures by execution time, CPU, IO and memory, hands them to the developers, and helps find optimizations such as adding indexes or rewriting the statements.
The whole diagnosis and optimization plan is first tried in the test environment; only what passes and is confirmed there is rolled out step by step to production.
Database configuration optimization
1. The database server has more than 24 cores, but MAXDOP is currently 0. This is unreasonable and causes long parallel waits during scheduling (the "shortest plank" effect).
Recommendation: change the MAXDOP value; as a best practice with more than 8 cores, start by setting it to 4.
2. COST THRESHOLD FOR PARALLELISM is at its default value of 5 (an optimizer cost estimate, often loosely described as seconds).
Recommendation: raise COST THRESHOLD FOR PARALLELISM so that only plans whose estimated cost exceeds 15 are allowed to run in parallel.
3. Monitoring shows the workload frequently uses temporary tables and scalar-valued functions and keeps creating user objects; when TempDB handles the PFS, GAM and SGAM allocation pages there are many latch waits, which hurts performance.
Recommendation: add multiple TempDB data files to reduce latch contention; best practice: with more than 8 cores, add 4 or 8 equally sized data files.
4. Enable "optimize for ad hoc workloads".
5. Enable "Ad Hoc Distributed Queries" for ad hoc distributed query access (a configuration sketch for items 4 and 5 follows).
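A minimal sketch of enabling the two options above with sp_configure (both are instance-wide advanced options):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;  -- cache only a small plan stub for single-use ad hoc plans
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;     -- allow OPENROWSET/OPENDATASOURCE ad hoc access
RECONFIGURE;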
1. Unused-index cleanup
The database currently contains a large number of unused indexes. Use the scripts below to find them and drop them, cutting the cost of index maintenance and improving update performance. In addition, for tables whose read ratio is below 1%, review their indexes with the business before deciding whether to drop them (a hedged DROP INDEX sketch follows the reference queries).
For the detailed list see: 性能调优数据收集_索引.xlsx (unused-indexes sheet).
Unused indexes, reference queries:
SELECT
OBJECT_NAME(i.object_id) AS table_name ,
COALESCE(i.name, SPACE(0)) AS index_name ,
ps.partition_number ,
ps.row_count ,
CAST(( ps.reserved_page_count * 8 ) / 1024. AS DECIMAL(12, 2)) AS size_in_mb ,
COALESCE(ius.user_seeks, 0) AS user_seeks ,
COALESCE(ius.user_scans, 0) AS user_scans ,
COALESCE(ius.user_lookups, 0) AS user_lookups ,
i.type_desc
FROM
sys.all_objects t
INNER JOIN sys.indexes i ON t.object_id = i.object_id
INNER JOIN sys.dm_db_partition_stats ps ON i.object_id = ps.object_id
AND i.index_id = ps.index_id
LEFT OUTER JOIN sys.dm_db_index_usage_stats ius ON ius.database_id = DB_ID()
AND i.object_id = ius.object_id
AND i.index_id = ius.index_id
WHERE
i.type_desc NOT IN ( 'HEAP', 'CLUSTERED' )
AND i.is_unique = 0
AND i.is_primary_key = 0
AND i.is_unique_constraint = 0
AND COALESCE(ius.user_seeks, 0) <= 0
AND COALESCE(ius.user_scans, 0) <= 0
AND COALESCE(ius.user_lookups, 0) <= 0
ORDER BY OBJECT_NAME(i.object_id);
--1. Finding unused non-clustered indexes.
SELECT OBJECT_SCHEMA_NAME(i.object_id) AS SchemaName ,
OBJECT_NAME(i.object_id) AS TableName ,
ius.user_seeks ,
ius.user_scans ,
ius.user_lookups ,
ius.user_updates
FROM sys.dm_db_index_usage_stats AS ius
JOIN sys.indexes AS i ON i.index_id = ius.index_id
AND i.object_id = ius.object_id
WHERE ius.database_id = DB_ID()
AND i.is_unique_constraint = 0 -- no unique indexes
AND i.is_primary_key = 0
AND i.is_disabled = 0
AND i.type > 1 -- don't consider heaps/clustered index
AND ( ( ius.user_seeks + ius.user_scans +
ius.user_lookups ) < ius.user_updates
OR ( ius.user_seeks = 0
AND ius.user_scans = 0
AND ius.user_lookups = 0 ) );
Table read/write ratio, reference query:
DECLARE @dbid int
SELECT @dbid = db_id()
SELECT TableName = object_name(s.object_id),
Reads = SUM(user_seeks + user_scans + user_lookups), Writes = SUM(user_updates),CONVERT(BIGINT,SUM(user_seeks + user_scans + user_lookups))*100/( SUM(user_updates)+SUM(user_seeks + user_scans + user_lookups))
FROM sys.dm_db_index_usage_stats AS s
INNER JOIN sys.indexes AS i
ON s.object_id = i.object_id
AND i.index_id = s.index_id
WHERE objectproperty(s.object_id,'IsUserTable') = 1
AND s.database_id = @dbid
GROUP BY object_name(s.object_id)
ORDER BY writes DESC
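Once the queries above confirm that an index is never read, it can be dropped; a hedged sketch with placeholder object names:
DROP INDEX IX_Orders_Unused ON dbo.Orders;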
2. Remove and merge duplicate indexes
Many indexes in the system overlap; merging them reduces index maintenance cost and so improves update performance (a merge sketch follows the reference query).
Duplicate indexes, reference query:
WITH MyDuplicate AS (SELECT
Sch.[name] AS SchemaName,
Obj.[name] AS TableName,
Idx.[name] AS IndexName,
INDEX_Col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 1) AS Col1,
INDEX_Col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 2) AS Col2,
INDEX_Col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 3) AS Col3,
INDEX_Col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 4) AS Col4,
INDEX_Col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 5) AS Col5,
INDEX_Col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 6) AS Col6,
INDEX_Col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 7) AS Col7,
INDEX_Col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 8) AS Col8,
INDEX_Col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 9) AS Col9,
INDEX_Col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 10) AS Col10,
INDEX_Col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 11) AS Col11,
INDEX_Col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 12) AS Col12,
INDEX_Col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 13) AS Col13,
INDEX_Col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 14) AS Col14,
INDEX_Col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 15) AS Col15,
INDEX_Col(Sch.[name] + '.' + Obj.[name], Idx.index_id, 16) AS Col16
FROM sys.indexes Idx
INNER JOIN sys.objects Obj ON Idx.[object_id] = Obj.[object_id]
INNER JOIN sys.schemas Sch ON Sch.[schema_id] = Obj.[schema_id]
WHERE index_id > 0 AND
Obj.[name]='DOC_INVPLU')
SELECT
MD1.SchemaName, MD1.TableName, MD1.IndexName,
MD2.IndexName AS OverLappingIndex,
MD1.Col1, MD1.Col2, MD1.Col3, MD1.Col4,
MD1.Col5, MD1.Col6, MD1.Col7, MD1.Col8,
MD1.Col9, MD1.Col10, MD1.Col11, MD1.Col12,
MD1.Col13, MD1.Col14, MD1.Col15, MD1.Col16
FROM MyDuplicate MD1
INNER JOIN MyDuplicate MD2 ON MD1.tablename = MD2.tablename
AND MD1.indexname <> MD2.indexname
AND MD1.Col1 = MD2.Col1
AND (MD1.Col2 IS NULL OR MD2.Col2 IS NULL OR MD1.Col2 = MD2.Col2)
AND (MD1.Col3 IS NULL OR MD2.Col3 IS NULL OR MD1.Col3 = MD2.Col3)
AND (MD1.Col4 IS NULL OR MD2.Col4 IS NULL OR MD1.Col4 = MD2.Col4)
AND (MD1.Col5 IS NULL OR MD2.Col5 IS NULL OR MD1.Col5 = MD2.Col5)
AND (MD1.Col6 IS NULL OR MD2.Col6 IS NULL OR MD1.Col6 = MD2.Col6)
AND (MD1.Col7 IS NULL OR MD2.Col7 IS NULL OR MD1.Col7 = MD2.Col7)
AND (MD1.Col8 IS NULL OR MD2.Col8 IS NULL OR MD1.Col8 = MD2.Col8)
AND (MD1.Col9 IS NULL OR MD2.Col9 IS NULL OR MD1.Col9 = MD2.Col9)
AND (MD1.Col10 IS NULL OR MD2.Col10 IS NULL OR MD1.Col10 = MD2.Col10)
AND (MD1.Col11 IS NULL OR MD2.Col11 IS NULL OR MD1.Col11 = MD2.Col11)
AND (MD1.Col12 IS NULL OR MD2.Col12 IS NULL OR MD1.Col12 = MD2.Col12)
AND (MD1.Col13 IS NULL OR MD2.Col13 IS NULL OR MD1.Col13 = MD2.Col13)
AND (MD1.Col14 IS NULL OR MD2.Col14 IS NULL OR MD1.Col14 = MD2.Col14)
AND (MD1.Col15 IS NULL OR MD2.Col15 IS NULL OR MD1.Col15 = MD2.Col15)
AND (MD1.Col16 IS NULL OR MD2.Col16 IS NULL OR MD1.Col16 = MD2.Col16)
ORDER BY MD1.SchemaName, MD1.TableName, MD1.IndexName
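A hedged sketch of merging two overlapping indexes: rebuild the index being kept so it also covers the extra columns, then drop the redundant one. The index and column names below are placeholders (DOC_INVPLU is the table used in the query above):
CREATE NONCLUSTERED INDEX IX_DOC_INVPLU_Keep
ON dbo.DOC_INVPLU (Col1, Col2)
INCLUDE (Col3)
WITH (DROP_EXISTING = ON);
DROP INDEX IX_DOC_INVPLU_Redundant ON dbo.DOC_INVPLU;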
3. Add missing indexes
Create the missing indexes based on statement frequency and the tables' read/write ratios, confirmed against the business.
Missing indexes, reference query:
-- Missing Indexes in current database by Index Advantage
SELECT
user_seeks * avg_total_user_cost * ( avg_user_impact * 0.01 ) AS [index_advantage] ,
migs.last_user_seek ,
mid.[statement] AS [Database.Schema.Table] ,
mid.equality_columns ,
mid.inequality_columns ,
mid.included_columns ,
migs.unique_compiles ,
migs.user_seeks ,
migs.avg_total_user_cost ,
migs.avg_user_impact ,
N'CREATE NONCLUSTERED INDEX [IX_' + SUBSTRING(mid.statement,
CHARINDEX('.',
mid.statement,
CHARINDEX('.',
mid.statement) + 1) + 2,
LEN(mid.statement) - 3
- CHARINDEX('.',
mid.statement,
CHARINDEX('.',
mid.statement) + 1) + 1) + '_'
+ REPLACE(REPLACE(REPLACE(CASE WHEN mid.equality_columns IS NOT NULL
AND mid.inequality_columns IS NOT NULL
AND mid.included_columns IS NOT NULL
THEN mid.equality_columns + '_'
+ mid.inequality_columns
+ '_Includes'
WHEN mid.equality_columns IS NOT NULL
AND mid.inequality_columns IS NOT NULL
AND mid.included_columns IS NULL
THEN mid.equality_columns + '_'
+ mid.inequality_columns
WHEN mid.equality_columns IS NOT NULL
AND mid.inequality_columns IS NULL
AND mid.included_columns IS NOT NULL
THEN mid.equality_columns + '_Includes'
WHEN mid.equality_columns IS NOT NULL
AND mid.inequality_columns IS NULL
AND mid.included_columns IS NULL
THEN mid.equality_columns
WHEN mid.equality_columns IS NULL
AND mid.inequality_columns IS NOT NULL
AND mid.included_columns IS NOT NULL
THEN mid.inequality_columns
+ '_Includes'
WHEN mid.equality_columns IS NULL
AND mid.inequality_columns IS NOT NULL
AND mid.included_columns IS NULL
THEN mid.inequality_columns
END, ', ', '_'), ']', ''), '[', '') + '] '
+ N'ON ' + mid.[statement] + N' (' + ISNULL(mid.equality_columns, N'')
+ CASE WHEN mid.equality_columns IS NULL
THEN ISNULL(mid.inequality_columns, N'')
ELSE ISNULL(', ' + mid.inequality_columns, N'')
END + N') ' + ISNULL(N'INCLUDE (' + mid.included_columns + N');',
';') AS CreateStatement
FROM
sys.dm_db_missing_index_group_stats AS migs WITH ( NOLOCK )
INNER JOIN sys.dm_db_missing_index_groups AS mig WITH ( NOLOCK ) ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details AS mid WITH ( NOLOCK ) ON mig.index_handle = mid.index_handle
WHERE
mid.database_id = DB_ID()
ORDER BY index_advantage DESC;
4. Index defragmentation
Index fragmentation needs to be cleaned up (with ALTER INDEX REORGANIZE/REBUILD, rather than DBCC checks) to make queries more efficient; a maintenance sketch follows the report query below.
Note: many tables in this database are large (over 50 GB), so rebuilding their indexes can take a long time; a 1 TB database generally needs more than 8 hours. Draw up a detailed plan and defragment table by table.
Index fragmentation, reference query:
SELECT '[' + DB_NAME() + '].[' + OBJECT_SCHEMA_NAME(ddips.[object_id],
DB_ID()) + '].['
+ OBJECT_NAME(ddips.[object_id], DB_ID()) + ']' AS [statement] ,
i.[name] AS [index_name] ,
ddips.[index_type_desc] ,
ddips.[partition_number] ,
ddips.[alloc_unit_type_desc] ,
ddips.[index_depth] ,
ddips.[index_level] ,
CAST(ddips.[avg_fragmentation_in_percent] AS SMALLINT)
AS [avg_frag_%] ,
CAST(ddips.[avg_fragment_size_in_pages] AS SMALLINT)
AS [avg_frag_size_in_pages] ,
ddips.[fragment_count] ,
ddips.[page_count]
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL,
NULL, NULL, 'limited') ddips
INNER JOIN sys.[indexes] i ON ddips.[object_id] = i.[object_id]
AND ddips.[index_id] = i.[index_id]
WHERE ddips.[avg_fragmentation_in_percent] > 15
AND ddips.[page_count] > 500
ORDER BY ddips.[avg_fragmentation_in_percent] ,
OBJECT_NAME(ddips.[object_id], DB_ID()) ,
i.[name];
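A hedged sketch of acting on the fragmentation report; the index and table names are placeholders, and the thresholds follow common guidance rather than anything mandated by this article:
ALTER INDEX IX_SomeIndex ON dbo.SomeLargeTable REORGANIZE;  -- light fragmentation (roughly 5-30%)
ALTER INDEX IX_SomeIndex ON dbo.SomeLargeTable REBUILD;     -- heavy fragmentation (above ~30%); add WITH (ONLINE = ON) on Enterprise edition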
5. Review tables without a clustered index or primary key
Many tables in this database have no clustered index. Investigate whether that is a deliberate business requirement; if there is no special reason, add one. A quick query for finding such heaps follows.
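A simple check, using only the standard catalog views, that lists the user tables stored as heaps (no clustered index):
SELECT OBJECT_SCHEMA_NAME(i.object_id) AS SchemaName,
       OBJECT_NAME(i.object_id) AS TableName
FROM sys.indexes AS i
JOIN sys.tables AS t ON t.object_id = i.object_id
WHERE i.type_desc = 'HEAP'   -- index_id = 0
ORDER BY SchemaName, TableName;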
Query/statement optimization
From the history the database already keeps, use the DMVs to obtain:
the Top 100 queries by elapsed time
the Top 100 stored procedures by elapsed time
the Top 100 statements by I/O
Reference queries for the Top 100 statements:
--Statements with the longest elapsed time
SELECT TOP 100
execution_count,
total_worker_time / 1000 AS total_worker_time,
total_logical_reads,
total_logical_writes,max_elapsed_time,
[text]
FROM
sys.dm_exec_query_stats qs
CROSS APPLY
sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY
max_elapsed_time DESC
--Statements consuming the most CPU
SELECT TOP 100
execution_count,
total_worker_time / 1000 AS total_worker_time,
total_logical_reads,
total_logical_writes,
[text]
FROM
sys.dm_exec_query_stats qs
CROSS APPLY
sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY
total_worker_time DESC
--Statements with the most IO reads
SELECT TOP 100
execution_count,
total_worker_time / 1000 AS total_worker_time,
total_logical_reads,
total_logical_writes,
[text]
FROM
sys.dm_exec_query_stats qs
CROSS APPLY
sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY
total_logical_reads DESC
--Statements with the most IO writes
SELECT TOP 100
execution_count,
total_worker_time / 1000 AS total_worker_time,
total_logical_reads,
total_logical_writes,
[text]
FROM
sys.dm_exec_query_stats qs
CROSS APPLY
sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY
total_logical_writes DESC
--Average IO per statement
SELECT TOP 100
[Total IO] = (qs.total_logical_writes+qs.total_logical_reads)
, [Average IO] = (qs.total_logical_writes+qs.total_logical_reads) /
qs.execution_count
, qs.execution_count
, SUBSTRING (qt.text,(qs.statement_start_offset/2) + 1,
((CASE WHEN qs.statement_end_offset = -1
THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
ELSE qs.statement_end_offset
END - qs.statement_start_offset)/2) + 1) AS [Individual Query]
, qt.text AS [Parent Query]
, DB_NAME(qt.dbid) AS DatabaseName
, qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) as qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
WHERE DB_NAME(qt.dbid)='tyyl_sqlserver' and execution_count>3 AND qs.total_logical_writes+qs.total_logical_reads>10000
--and qt.text like '%POSCREDIT%'
ORDER BY [Average IO] DESC
--Average logical reads per statement
SELECT TOP 100
deqs.execution_count,
deqs.total_logical_reads/deqs.execution_count as "Avg Logical Reads",
deqs.total_elapsed_time/deqs.execution_count as "Avg Elapsed Time",
deqs.total_worker_time/deqs.execution_count as "Avg Worker Time",SUBSTRING(dest.text, (deqs.statement_start_offset/2)+1,
((CASE deqs.statement_end_offset
WHEN -1 THEN DATALENGTH(dest.text)
ELSE deqs.statement_end_offset
END - deqs.statement_start_offset)/2)+1) as query,dest.text AS [Parent Query],
qp.query_plan
FROM sys.dm_exec_query_stats deqs
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) dest
CROSS APPLY sys.dm_exec_query_plan(deqs.plan_handle) qp
WHERE dest.encrypted=0
--AND dest.text LIKE'%INCOMINGTRANS%'
"Avg Logical Reads"
--Average logical writes per statement
SELECT TOP 100
[Total WRITES] = (qs.total_logical_writes)
, [Average WRITES] = (qs.total_logical_writes) /
qs.execution_count
, qs.execution_count
, SUBSTRING (qt.text,(qs.statement_start_offset/2) + 1,
((CASE WHEN qs.statement_end_offset = -1
THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
ELSE qs.statement_end_offset
END - qs.statement_start_offset)/2) + 1) AS [Individual Query]
, qt.text AS [Parent Query]
, DB_NAME(qt.dbid) AS DatabaseName
, qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) as qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
WHERE DB_NAME(qt.dbid)='DRSDataCN'
and qt.text like '%POSCREDIT%'
ORDER BY [Average WRITES] DESC
--Average CPU time per statement
SELECT SUBSTRING(dest.text, (deqs.statement_start_offset/2)+1,
((CASE deqs.statement_end_offset
WHEN -1 THEN DATALENGTH(dest.text)
ELSE deqs.statement_end_offset
END - deqs.statement_start_offset)/2)+1) as query,
deqs.execution_count,
deqs.total_logical_reads/deqs.execution_count as "Avg Logical Reads",
deqs.total_elapsed_time/deqs.execution_count as "Avg Elapsed Time",
deqs.total_worker_time/deqs.execution_count as "Avg Worker Time"
,deqs.last_execution_time,deqs.creation_time
FROM sys.dm_exec_query_stats deqs
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) dest
WHERE dest.encrypted=0
AND deqs.total_logical_reads/deqs.execution_count>50
ORDER BY
query, [Avg Worker Time] DESC
Capture the statements executed during the business peak in real time
Collection tool:
SQL Trace or Extended Events are recommended; Profiler is not (an Extended Events sketch follows this subsection).
What to collect:
Statement-level events
Analysis tool:
ClearTrace is recommended (it is free); see my other article for how to use it.
Every statement collected in the two steps above has to be analyzed one by one, using execution-plan analysis to find a better-performing version of the statement.
For analyzing a single statement's execution plan, the tool is Plan Explorer; see my other article for an introduction.
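A hedged sketch of an Extended Events session for the peak-hour capture; the session name, file path and the one-second duration filter are placeholders to adjust before use:
CREATE EVENT SESSION [PeakHourStatements] ON SERVER
ADD EVENT sqlserver.sql_statement_completed
    (ACTION (sqlserver.sql_text, sqlserver.database_name)
     WHERE duration > 1000000)   -- microseconds on recent versions: keep only statements longer than 1 second
ADD TARGET package0.event_file
    (SET filename = N'D:\XEvents\PeakHourStatements.xel', max_file_size = (256))
WITH (MAX_MEMORY = 16 MB, EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS);
GO
ALTER EVENT SESSION [PeakHourStatements] ON SERVER STATE = START;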
This round of optimization pays special attention to the following performance killers in the current database:
Implicit conversion (see 宋大侠's blog post; a small illustration follows this list)
Parameter sniffing (see 桦仔's blog post)
Missing clustered indexes
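A hedged illustration of the implicit-conversion problem; dbo.PosCredit, CardNo and CreditId are placeholder names (POSCREDIT appears only as a search string in the scripts above):
-- Anti-pattern: CardNo is VARCHAR but the parameter is NVARCHAR, so data-type precedence forces
-- CONVERT_IMPLICIT onto the column side and the index on CardNo can no longer be seeked.
DECLARE @CardNoBad NVARCHAR(20) = N'12345';
SELECT CreditId FROM dbo.PosCredit WHERE CardNo = @CardNoBad;
-- Fix: declare the parameter with the same type as the column.
DECLARE @CardNoGood VARCHAR(20) = '12345';
SELECT CreditId FROM dbo.PosCredit WHERE CardNo = @CardNoGood;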
V. Results of the optimization
Statements averaging more than 30,000 ms of CPU time dropped from 20 to 3.
Statement executions using more than 10,000 ms of CPU dropped from about 1,500 to 500.
CPU stays around 20%, reaching 40%-60% at peak, occasionally above 60% and very rarely near 80%.
Batch Requests/sec rose from about 1,500 to 4,000.
To close, a before-and-after comparison shows a clear performance improvement, though it only addresses the immediate bottleneck.
Tuning the database is only one layer. It may relieve the current resource bottleneck, but many of the problems uncovered are database architecture and design issues that, constrained by the business, we cannot touch; the work has to stop with this article, and that seems to be a common situation. This experience also raises another thought: only when a performance crisis breaks out do companies rush to find someone to put out the fire, and after the fire is out there is... apparently no "afterwards". Turn the thinking around: if monitoring, early warning and good practices were part of daily operations, there would be far less firefighting.
Thanks, 2016!