Advice about performance tuning
We have a ticket open with Cireson about high memory usage and we are looking at various things to improve performance. We have already reviewed and implemented the Cireson article on this:
We have set the application pool to recycle at 7am daily, but we still get spikes in memory usage that grow through the day (as shown below). I just wondered whether other customers are having the same experience and whether you have gone further than the steps in the tuning article. Have you set recycling to run regularly? It does tend to clear down the memory usage.
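For anyone wanting to reproduce that daily recycle, it can be set from an elevated prompt with appcmd. This is a sketch only: the pool name "CiresonPortal" is an assumption, so substitute whatever your portal's application pool is actually called in IIS Manager.

```shell
REM Recycle the portal's application pool at 07:00 every day.
REM "CiresonPortal" is a placeholder pool name - check IIS Manager for yours.
%windir%\system32\inetsrv\appcmd.exe set apppool "CiresonPortal" /+recycling.periodicRestart.schedule.[value='07:00:00']
```

The same setting is available in IIS Manager under the application pool's Recycling conditions ("Specific time(s)").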
It hasn't always been this bad for us. Looking at the same chart over 400 days you can see the change:
We are currently on the 9.0.15 baseline after recently upgrading. We were on 8.9.5 for a long time, and this memory situation has been evident in both versions. As I say, we are looking at a number of aspects of our environment at the moment and do not have a definitive root cause yet. Just curious what others have seen or done.
Thanks.
Best Answer
-
Adam_Dzyacky Product Owner Contributor Monkey ✭✭✭✭✭
Regular time intervals are, I believe, the IIS default; we turned that off in favor of controlled times like you have above. However, we are leveraging private memory usage limits.
The line of thinking here was similar to managing memory on SQL: we wanted to set a limit on the amount of memory that could be consumed, leaving something for the OS while still containing any potential leaks.
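Both changes described here can be made with appcmd. A minimal sketch, with assumptions: the pool name "CiresonPortal" is a placeholder, and the 5,000,000 KB cap is the figure mentioned later in this thread, not a general recommendation.

```shell
REM Turn off the regular-interval recycle (setting the interval to
REM 00:00:00 disables it) in favour of the scheduled daily recycle.
%windir%\system32\inetsrv\appcmd.exe set apppool "CiresonPortal" /recycling.periodicRestart.time:00:00:00

REM Cap private memory: the pool recycles once it exceeds this limit (in KB).
%windir%\system32\inetsrv\appcmd.exe set apppool "CiresonPortal" /recycling.periodicRestart.privateMemory:5000000
```

In IIS Manager these correspond to "Regular time interval (minutes)" and "Private memory usage (KB)" under the pool's Recycling conditions.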
Answers
Did you create and publish any views with an "advanced" type projection lately?
We had some similar troubles (for the past 2 years -.- ) with our Mom.Sdk.Service randomly hitting 99% RAM, which made the Cireson Portal inaccessible.
After opening several cases with Cireson and Microsoft, we were asked to run the following query against our SCSM database:
-- List views in unsealed management packs whose type projection is flagged "(advanced)"
select lttp.LTValue as ProjectionName,
       ltf.LTValue  as FolderName,
       lt.LTValue   as ViewName
from TypeProjection tp
left join LocalizedText lttp on tp.TypeProjectionId = lttp.LTStringId
inner join Views v           on v.ConfigurationXML like '%' + tp.TypeProjectionName + '%'
inner join FolderItem fi     on v.ViewId = fi.MPElementId
inner join Folder f          on fi.FolderId = f.FolderId
left join LocalizedText lt   on v.ViewId = lt.LTStringId
left join LocalizedText ltf  on f.FolderId = ltf.LTStringId
inner join ManagementPack mp on v.ManagementPackId = mp.ManagementPackId
where lttp.LanguageCode = 'ENU' and lttp.LTStringType = 1
  and lttp.LTValue like '%(%advanced%)%'
  and lt.LanguageCode = 'ENU' and lt.LTStringType = 1
  and ltf.LanguageCode = 'ENU' and ltf.LTStringType = 1
  and mp.MPIsSealed = 0
order by 1, 2, 3
We eliminated or rebuilt every view returned by the query, especially those published in the Cireson Portal, and our problems were gone.
Quote from our Microsoft Call:
"Can you please ensure that the query below doesn’t return any row? It doesn’t matter if it is included in Cireson or not, because opening such “advanced” view in SM console would also cause the target SDK’s memory usage to increase out of control."
Thanks for replying! It has helped us rule that one out. It returned no values. Thanks again!
Has anyone got experience of using these areas:
Any KAs or advice welcome - anything that would be a short-term fix without too much impact on users.
Thanks
@Adam_Dzyacky Thank you. What values have you opted for in the private memory usage limits? I just wondered how many portal servers you have deployed (with how many cores and RAM)?
Of course!
Two portal servers that are 4x14 (CPU cores x RAM in GB). Going off the minimum requirements for Service Manager itself, which are 4x8, I've capped IIS's private memory usage at 5,000,000 KB over the last few years.
Trending memory usage with SCOM, our portal averages around 2 GB, so that's roughly 3 GB of breathing room for heavy volume or some truly unexpected event.
Slightly stale thread, but the 2019 version definitely feels more sensitive than 2012: Application Initialization after a recycle is flaky, preload tends to crash, we've run out of memory a few times, and CPU runs away after a reboot.
I'm all for a challenge, but these servers can wear you out!! :D