
klaxian
Status: Contributor
Joined: 18 Mar 2011
Posts: 70
Location: New York, USA
I haven't been able to find any mention of this problem in the upstream mainline kernel bug reports. Is it possible that some tuning needs to be adjusted for Liquorix?
gelabs
Status: Interested
Joined: 02 May 2017
Posts: 16
I can confirm the higher load average. It should be near 0, but it rises to 1 within a few minutes of booting.
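It's easy to watch with the standard tools, e.g.:

    cat /proc/loadavg    # first three fields are the 1-, 5-, and 15-minute load averages
    uptime               # the same averages in a human-readable line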
damentz
Status: Assistant
Joined: 09 Sep 2008
Posts: 1135
:: stevenpusser wrote ::
I just tested that "intel_pstate=disable" boot cheat with my own Jessie-backported (uses gcc-4.9) Liquorix 4.10-3, and it did work as advertised and fell back to the cpufreq governors. That seems a simple enough workaround.


Yep, this appears to be the fix. In 4.10, Intel P-State is forced on for everyone when Turbo Boost 3.0 scheduling support is enabled.

I'll release another kernel soon that disables this feature (4.10-5 / 4.10.0-15.2).
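For anyone who wants the workaround in the meantime: it's just a kernel command-line parameter. On a Debian-style setup with GRUB, the change would look roughly like this (the paths and the existing "quiet" option assume the stock Debian layout):

    # /etc/default/grub -- append intel_pstate=disable to the existing options
    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_pstate=disable"

    # regenerate the bootloader config, then reboot
    sudo update-grub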
klaxian
Status: Contributor
Joined: 18 Mar 2011
Posts: 70
Location: New York, USA
Unfortunately, the new 4.10.0-15.2 kernel does not resolve the problem for me. I can confirm that it is using cpufreq, but there is still high load at idle, cores are not parking the way they should, temps and power usage are higher than normal, etc. See my previous post for a more detailed list of issues and my system specs. These problems do not exist on Liquorix kernel 4.9. Any other ideas?
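P.S. For reference, the active scaling driver and governor can be read straight from sysfs:

    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver     # e.g. acpi-cpufreq once intel_pstate is disabled
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor   # e.g. ondemand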
damentz
Status: Assistant
Joined: 09 Sep 2008
Posts: 1135
klaxian,

We may simply need to wait for MuQSS to be ported to 4.11. I know that Con is currently working on it; he left a comment saying it's a monster resync: ck-hack.blogspot.com/2017/02/linux-410-ck1-muqss-version-0152-for.html?showComment=1494290706656#c2504321701481009295.

The only thing I can think of is that, as part of tweaking the Kconfig configuration for MuQSS, the type of a function in the sched.h header was changed from signed int to unsigned int: github.com/zen-kernel/zen-kernel/commit/44a9f2b789e31829ada96b4f8534618c95282d74. Looking at the code, that's only the CPU number and shouldn't have any impact on the load of the system, but I'll test reverting it locally to see if it changes anything.
damentz
Status: Assistant
Joined: 09 Sep 2008
Posts: 1135
At the risk of going in circles, here's a link to a comment on the -ck blog where this was discussed more recently: ck-hack.blogspot.com/2017/05/linux-411-ck1-muqss-version-0155-for.html?showComment=1494616611577#c1554722017191395336
klaxian
Status: Contributor
Joined: 18 Mar 2011
Posts: 70
Location: New York, USA
Thanks for checking into it! I read the thread you linked and I agree with Con that there is probably some sort of accounting bug that is reporting excess CPU usage. This extra usage is probably what pushes the CPU into higher power states. Hopefully this will be fixed in a future version. I'll stick with kernel 4.9 for now since this bug is increasing my idle power consumption.

P.S. I also reviewed the commit you linked where a variable was changed from int to unsigned int. I agree that was a long shot :)
klaxian
Status: Contributor
Joined: 18 Mar 2011
Posts: 70
Location: New York, USA
This bug still exists in kernel 4.11.0-2.6. It's a bigger problem than simply reporting CPU usage differently. My CPU is mostly kept in its highest power state all the time (with cpufreq). It uses more power and runs hotter. With kernel 4.9, all cores are correctly throttled down at idle.
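For reference, the per-core frequencies are easy to watch at idle, e.g.:

    # current frequency of each core, in kHz
    cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq

    # or the same information in MHz
    grep "cpu MHz" /proc/cpuinfo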
damentz
Status: Assistant
Joined: 09 Sep 2008
Posts: 1135
Could you try the current kernel, 4.11-7 / 4.11.0-3.1? The load averages should be fixed, but I don't know if the actual CPU usage per task is correct.

One recommended solution is to set the interactive flag on MuQSS to 0 (/proc/sys/kernel/interactive). By doing that, MuQSS is more likely to keep tasks on the same CPU core, which makes it easier for the cpufreq governor to determine what the correct scaling frequency is.

But you also trade some latency for throughput, so it's entirely your choice.
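For example, a minimal way to flip it, either for the current boot or persistently (the sysctl.d file name below is just an arbitrary example):

    # one-off, takes effect immediately
    echo 0 | sudo tee /proc/sys/kernel/interactive

    # persistent across reboots, e.g. in /etc/sysctl.d/90-muqss.conf
    kernel.interactive = 0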
klaxian
Status: Contributor
Joined: 18 Mar 2011
Posts: 70
Location: New York, USA
Thanks! I can confirm that kernel 4.11-7 / 4.11.0-3.1 does fix the load reporting issue.

As for my frequency scaling situation, I found part of the problem. Since kernel 4.10, processes report more CPU usage than they did before. I'm not sure which method is more accurate, but it's significantly different. Years ago, I found that cpufreq was not being aggressive enough when deciding to step up the core frequencies on this system. Therefore, I manually reduced the tunable "ondemand/up_threshold" to 50 (in /etc/rc.local), which produced the desired result. Since the new kernels are already reporting more CPU usage per process, this lower threshold was multiplying the effect and many cores were always in their highest power state.

I reverted to the default up_threshold value of 95, and scaling seems a bit better now. It's still more aggressive than before 4.10, but the power usage and temps are much closer to normal.
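For reference, the tunable lives in sysfs while the ondemand governor is active; checking it and restoring the default looks like this:

    # current threshold (percent of CPU usage before the governor steps the frequency up)
    cat /sys/devices/system/cpu/cpufreq/ondemand/up_threshold

    # restore the default
    echo 95 | sudo tee /sys/devices/system/cpu/cpufreq/ondemand/up_threshold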

It's still a little odd to see more CPU usage than I expect, but I think I just have to get used to it. There is also room for more tuning of cpufreq, but this is workable for now. Thanks for your help!