
The 3.16 kernel has been released

Posted Aug 6, 2014 15:35 UTC (Wed) by kloczek (guest, #6391)
In reply to: The 3.16 kernel has been released by niner
Parent article: The 3.16 kernel has been released

> Meanwhile, if you need up to 64 Terabytes of memory in a single image system, you go to SGI and run an unmodified Redhat Enterprise Linux or SUSE Linux Enterprise Server on their machines: https://www.sgi.com/products/servers/uv/

Did you know that this hardware is mostly used as a partitioned computer, with applications that mainly use the MPI API?
So this one big computer acts more like a bunch of smaller computers connected over a very fast interconnect.
That is "a bit" different from running *one* application with thousands of threads across that many CPUs (without rewriting it to use MPI).

Using more and more CPU sockets and memory banks adds huge latency compared to, for example, a two- to eight-socket T5 host, where each socket has 16 cores and each core runs 8 threads, so in the maximum configuration you have 8 x 16 x 8 = 1024 CPUs. Still, only a limited number of applications and workloads are able to use all these CPUs and all this memory, so partitioning is a natural choice. However, in both cases we are talking about a whole service environment where the overhead is significant. Running Google or Facebook services across most of their ecosystem is not that kind of case.

In the case of SPARC T5, virtualization is done at the firmware level and is supported by the hardware layer as well, so the virtual interconnect latency overhead will probably be hard to compare with the one in the SGI machines. Or, if the interconnect latency is lower, the cost per interconnect is probably higher (power consumption as well).
I don't know enough about the SGI UV, but in SGI's case exchanging data over virtual interconnects may be no big deal, as long as we are talking about many instances of the same application attached to different sets of CPUs and memory banks, effectively exchanging data over the MPI API.

A T5-8 takes 8U; please check how many rack units an SGI UV takes.
Try to compare prices and total machine power consumption, and work out the compute power per core in relation to the power consumed per core.

In both cases we are talking about hardware that always runs with a hypervisor.
However, with Solaris on a T5, if you really need to build a nest for a single application and you don't want to spend money on rewriting it to use MPI, and we are talking about processing data in parallel with a big overhead for exchanging data between threads, Solaris will be able to support this in a real single-image system.
It is yet another aspect of running a single application at big scale in a single box. If such an application additionally has huge needs at the file system layer, probably the only answer at the moment is Solaris.
Observability and instrumentation of the system or application at such a scale on Linux? Forget about it...

Sometimes it is really cheaper to spend 0.5 million bucks (per box) to buy a few such boxes instead of spending the same pile of money every year just on the salaries of the few people needed to keep a huge bunch of computers or hundreds of virtualized systems up and running.



The 3.16 kernel has been released

Posted Aug 6, 2014 16:27 UTC (Wed) by niner (subscriber, #26151) [Link] (41 responses)

You claimed that Linux will not be ready for systems of this scale. To cite you:
"So try to ask where is the Linux on this picture? How Linux is trying to deal with above?
Seriously? IMO now one even is trying to think about this because "everyone is so busy running with empty barrels"."

I showed you that Linux is in use - today - on systems even larger than the ones you cited.

So since you lost completely and utterly on that argument, you just change the discussion to whose hardware is more efficient? And you do this with nothing but hand-waving?

I think I'll just leave it at this perfect example of how you're just trolling and not at all interested in any real discussion.

Even if Solaris might have some advantages anywhere, people like you keep me from giving it any thought at all.

The 3.16 kernel has been released

Posted Aug 6, 2014 17:56 UTC (Wed) by kloczek (guest, #6391) [Link] (40 responses)

> You claimed that Linux will not be ready for systems of this scale

Yes, because the "death by a thousand cuts" syndrome will be like a few kilos of lead hanging between the legs.

> So since you lost completely and utterly on that argument, you just change the discussion to whose hardware is more efficient? And you do this with nothing but hand-waving?

OK. Let's try to narrow the whole discussion to the scale of a single host with a few tens of GiB of RAM (let's say up to 32 GiB), two CPU sockets, a bunch of disks (from two up to 20-30 local disks), up to two 10Gb NICs and so on...
So let's talk about a typical "pizza box" or blade.

I'm currently working on a few hosts, until now used under Linux, which will be migrated to Solaris. Each host has a pair of 200GB SSDs in RAID1. I need more fast local storage, and for the time being we cannot buy new hardware. We are talking about really cheap and not even particularly fresh hardware. In this case, upgrading the SSDs to bigger ones (supported by the hardware vendor) would cost more than a Solaris license for a two-socket host. Buying new hardware would cost even more.

I'm going to reinstall these small computers with Solaris because I found that the data used by the applications compresses on ZFS with a 4KB block size at a compression ratio between 3 and 10 (the application with the bigger ratio will probably end up using a block size of up to 1MB, so effectively the compression ratio will be even bigger). In all the Linux cases the CPU is never more than 40% saturated, and all this so-far-unused CPU power will now be spent on compression/decompression.
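For reference, the setup I have in mind is just a couple of dataset properties; a minimal sketch, where the pool "data" and the dataset names are made up for illustration:

# one small-record and one large-record dataset, both with compression enabled
zfs create -o recordsize=4k -o compression=on data/app
zfs create -o recordsize=1M -o compression=on data/bulk
# after loading the application data, check what was actually achieved
zfs get -r compressratio data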
If we decide to use some OpenSolaris fork, the cost of the transition will only be the cost of reinstallation plus license costs. We are not talking about Oracle hardware, but the hardware is on the official Solaris HCL.
With Linux I would be forced to push harder for a hardware upgrade.
The license costs, being the lower of the two, have been accepted by management.
Effectively, in the end I'll be working on the same hardware but with 3 to 10 times more local storage (600GB to 3TB of SSD local storage).
Please remember as well that the cost of bigger hardware does not grow linearly with its size.

I'm repeating *right now* the same thing my friend did at a scale a few orders of magnitude bigger (http://milek.blogspot.co.uk/2012/10/running-openafs-on-so...). I'm trying to save a few thousand quid; he saved a few million :)

Really, sometimes Solaris, even with its license costs, can be cheaper than *free to use* Linux.

Just try to check how many hosts in your DC have disk space problems, and how many of those hosts have low CPU utilisation. Try to imagine what you would be able to do on such boxes, with a Java app/MySQL/PostgreSQL/SMTP/FooBar server, if you could "transmute" CPU power into the pure gold of more disk space.

PS. My biggest compression ratio on ZFS was a few years ago: about 26x, with dedup, on a 2U Dell with 6x300GB disks. The host was used to store the configurations of tens of thousands of network devices, which pushed their own config data to the box over TFTP every few hours or more often. Automatic snapshots were taken every 10 minutes and kept for one month (or longer). Every 10 minutes each new snapshot was pushed via "zfs send" over ssh to a second box, to keep a full standby copy of all the data.
Without Solaris it would probably have been necessary to spend even 10 times more cash on hardware alone. Full redundancy was implemented in a few lines of scripts -> no cost for clustering or similar. Backup cost -> the cost of the second host (snapshots on the hosts + a secondary copy of all data on the standby box).
Someone may say that the above could be done with a bunch of scripts compressing every new cfg file using gzip. The problem was that constantly traversing the whole directory structure was almost killing this box at the IO layer when it was running Linux. Transparent FS compression can solve many problems, sometimes saving many bucks/quids/candies.
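The replication side really is just a few lines, run from cron every 10 minutes; roughly this kind of thing (a sketch: the dataset "tank/configs" and the host "standby" are made-up names):

#!/bin/sh
DS=tank/configs
NOW=$(date +%Y%m%d-%H%M)
# the newest existing snapshot becomes the incremental base
PREV=$(zfs list -H -t snapshot -o name -s creation -r $DS | tail -1)
zfs snapshot $DS@$NOW
if [ -n "$PREV" ]; then
        zfs send -i "$PREV" $DS@$NOW | ssh standby zfs receive -F $DS
else
        zfs send $DS@$NOW | ssh standby zfs receive -F $DS
fi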

The 3.16 kernel has been released

Posted Aug 6, 2014 18:18 UTC (Wed) by intgr (subscriber, #39733) [Link]

So you're hoping that long walls of text, irrelevant to the original point, will make up for the lack of arguments?
Oh well, that's nearly as good as an admission of defeat.

Also, $proponent of $operating_system finds that it has lower TCO than Linux in some specific configuration. News at 11.

The 3.16 kernel has been released

Posted Aug 6, 2014 18:19 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (12 responses)

JFYI, the world's biggest SPARC-based supercomputer ( http://www.top500.org/system/177232 - currently ranked 4th on the Top500 list ) uses Linux, not Solaris.

In fact, no supercomputer on the Top500 list uses Solaris. This alone speaks volumes.

The 3.16 kernel has been released

Posted Aug 6, 2014 19:21 UTC (Wed) by kloczek (guest, #6391) [Link] (11 responses)

> JFYI, the world's biggest SPARC-based supercomputer ( http://www.top500.org/system/177232 - currently ranked 4th on the Top500 list ) uses Linux, not Solaris.

No... it is not a list of the 500 biggest supercomputers.
It is the top 500 HPC installations. And we are talking about the top 500 HPC installations doing calculations where raw CPU power is more important than the power of the memory subsystems or the interconnects.
If you look at the details of the equation used to calculate the index for each installation, you can find that it would be *exactly* the same if all the computers were connected over RS232 serial lines.
Many of these installations are computing myriads of quite small tasks. It is only the sheer number of these small tasks that sometimes makes it sensible to put everything in a straight line of rack cabinets.
I'm not saying that most of these installations are doing such things. I'm saying that looking only at the final index you can say very little about where the RealPower(tm) is.

The 3.16 kernel has been released

Posted Aug 6, 2014 19:43 UTC (Wed) by raven667 (subscriber, #5198) [Link]

Ok, that's a silly argument, the Top500 Supercomputer site is no longer the top 500 supercomputer authority (of the last 20 years) because ... their conclusions don't match your preconceived notions so you'd like to define them out of existence?

The 3.16 kernel has been released

Posted Aug 6, 2014 21:20 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (9 responses)

Whut? Lots of Top500 computers run CPU-bound tasks. That's why completely tickless mode was added to Linux (a CPU can be _completely_ assigned to a thread, without ANY kernelspace interrupts).
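A sketch of the usual setup, assuming CPUs 1-7 are to be dedicated to compute threads and the kernel was built with CONFIG_NO_HZ_FULL=y (the CPU list and the job name are only examples):

# kernel command line additions
nohz_full=1-7 rcu_nocbs=1-7 isolcpus=1-7
# then pin each compute thread to one of the reserved CPUs
taskset -c 3 ./compute_job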

The 3.16 kernel has been released

Posted Aug 7, 2014 0:14 UTC (Thu) by kloczek (guest, #6391) [Link] (8 responses)

If you are thinking about a very CPU-intensive simulation, run in a loop to observe how a limited amount of data evolves over time, then that may be a completely different HPC workload from a simple loop calculation over a huge stream of data.
The first workload will be really CPU-intensive. The second one may be very memory-intensive.

> a CPU can be _completely_ assigned to a thread, without ANY kernelspace interrupts

Yep, that is true, and it is true not only on Linux.
However, if such a thread starts exchanging/sharing data with other threads, the workload enters an area where the bottleneck will not be the CPU but the interconnect between cores/CPUs.
Have a look at https://www.kernel.org/pub/linux/kernel/people/paulmck/pe... chapter 5.1, "Why isn't concurrent counting trivial?"

If you expect your computations not to be interconnect-intensive, you can build a supercomputer relatively cheaply. The problem is that in many cases you must deal with memory- or interconnect-intensive workloads. If your computations are in the interconnect-intensive area, you will definitely have many problems with locking and synchronization between threads, and here the OS may help. To diagnose such problems you need good instrumentation integrated with a profiler. Tools like DTrace can do many things here. BTW, on Linux there is still no cpc provider (CPU Performance Counter): https://wikis.oracle.com/display/DTrace/cpc+Provider

Interconnect-intensive workloads are not only an HPC problem.

The 3.16 kernel has been released

Posted Aug 7, 2014 0:31 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (7 responses)

>Yep, that is true, and it is true not only on Linux.
Solaris and Windows use periodic ticks for scheduling. Linux can completely eliminate them.

> However, if such a thread starts exchanging/sharing data with other threads, the workload enters an area where the bottleneck will not be the CPU but the interconnect between cores/CPUs.
Yes, and so? Linux supports various interconnects just fine.

>on Linux there is still no cpc provider (CPU Performance Counter): https://wikis.oracle.com/display/DTrace/cpc+Provider
Oh really? I guess I was in delirium when I read this: https://perf.wiki.kernel.org/index.php/Tutorial#Counting_...
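A minimal example of what that page describes, counting a few hardware events for one command (the event list here is only an illustration; available names vary by CPU, and the workload binary is made up):

perf stat -e cycles,instructions,cache-references,cache-misses ./my_workload
# or sample and attribute the misses to functions:
perf record -e cache-misses ./my_workload && perf report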

Please, at least familiarize yourself with the current state of Linux before speaking nonsense.

The 3.16 kernel has been released

Posted Aug 7, 2014 1:51 UTC (Thu) by kloczek (guest, #6391) [Link] (6 responses)

> Oh really? I guess I was in delirium when I read this: https://perf.wiki.kernel.org/index.php/Tutorial#Counting_...

Do you really think that reporting CPC register data is the same as what you can do in a few lines of D script, correlating CPC data with a few other things?
Please don't try to tell me that I can do the same using perl/awk/python, because it will be the same story as "Why is LTTng better than DTrace?" (just note that LTT/LTTng is dead, and next year DTrace will have its 10th birthday).

Please don't take this personally, but it seems you are yet another person who does not fully understand the technological impact of the approach implemented in DTrace.
In the gawk info documentation you can find the sentence: "Documentation is like sex: when it is good, it is very, very good; and when it is bad, it is better than nothing."
perf is good, and I've been using it quite often, especially over the last few months, but it is still only "better than nothing". Sorry...

The 3.16 kernel has been released

Posted Aug 7, 2014 2:20 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (5 responses)

> Do you really think that reporting CPC register data is the same as what you can do in a few lines of D script, correlating CPC data with a few other things?
SystemTap can do this just fine: https://github.com/fche/systemtap/blob/master/testsuite/s...

> Please don't take this personally, but it seems you are yet another person who does not fully understand the technological impact of the approach implemented in DTrace.
I've used DTrace. It's nice, but not ground-shaking. And even before perf existed on Linux, I used oprofile and other similar tools to find bottlenecks in my apps.

The 3.16 kernel has been released

Posted Aug 7, 2014 3:06 UTC (Thu) by kloczek (guest, #6391) [Link] (4 responses)

> SystemTap can do this just fine: https://github.com/fche/systemtap/blob/master/testsuite/s...

Did you watch the Big Bang Theory episode where Sheldon explains to Penny what "just fine" means? :)

https://www.youtube.com/watch?v=Yo-CWXQ8_1M
https://www.youtube.com/watch?v=_amwWlgS6LM

Try to imagine that my reaction to the phrase "just fine" is like Penny's reaction :P

The 3.16 kernel has been released

Posted Aug 7, 2014 3:16 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (3 responses)

No, I don't watch BBT and I don't really want to. So please explain with examples why DTrace is so much superior in this case.

The 3.16 kernel has been released

Posted Aug 7, 2014 4:15 UTC (Thu) by kloczek (guest, #6391) [Link] (2 responses)

> So please explain with examples why DTrace is so much superior in this case

Because you can do the basic processing of a huge volume of tracing data right at the hook, using D code, instead of doing it offline.
Shuffling big volumes of data from kernel space to user space causes a kind of observability quantum effect (the observed object's state is disturbed by the observation).
DTrace is not like perf, which is event driven.
perf provides analysis tools to navigate large multi-GB traces. DTrace does not have this, because it is designed to produce concise traces. Put simply, perf is more about offline than online analysis.
So far, neither SystemTap nor perf provides user-space providers.
DTrace on Linux is now able to use USDT providers.
Example: https://blogs.oracle.com/wim/entry/mysql_5_6_20_4
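To give a flavour of what "processing at the hook" means, here is the classic D one-liner that aggregates entirely inside the kernel and only passes the summarized table to user space when the script is stopped:

# per-program read() counts, aggregated in the kernel; only the summary crosses to user space
dtrace -n 'syscall::read:entry { @reads[execname] = count(); }'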

The 3.16 kernel has been released

Posted Aug 7, 2014 4:18 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (1 responses)

Uhm, no. SystemTap compiles probes into kernel modules, so there's no userspace-kernelspace overhead. And perf subsystem also supports filters.
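For comparison, the same kind of in-kernel aggregation written as a SystemTap one-liner, compiled and loaded as a kernel module (a sketch; the probe point and output format are just an example):

# per-process read() counts, aggregated in kernel space, printed when the session ends
stap -e 'global reads
probe syscall.read { reads[execname()] <<< 1 }
probe end { foreach (n in reads) printf("%s %d\n", n, @count(reads[n])) }'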

The 3.16 kernel has been released

Posted Aug 9, 2014 12:13 UTC (Sat) by kloczek (guest, #6391) [Link]

> SystemTap compiles probes into kernel modules, so there's no userspace-kernelspace overhead.

How many times have you crashed a system using SystemTap?
I have had such situations hundreds of times.
This is why the DTrace approach, with a VM doing the whole job, is better.

> And perf subsystem also supports filters.

You don't understand where the problem is.
If the instrumentation generates an event, that event must be queued. Effectively you will have a few context switches here. Taking data from the queue adds more context switches, and taking an event from the queue only to discard it will always be a waste of CPU/time.

Today the PCI bus cannot handle more than about 300k ops/s, but newer PCI protocols may change this to millions/s. Try to imagine how big the overhead may be after that, compared to the DTrace way of, for example, tracing IOs.

The 3.16 kernel has been released

Posted Aug 6, 2014 19:18 UTC (Wed) by raven667 (subscriber, #5198) [Link] (25 responses)

You keep using this "death by a thousand cuts" metaphor to describe Linux but it seems that Linux generally wins at performance, scheduling, IO, networking, drivers, power management, etc. and is the dominant, preferred platform for every market segment except for the general purpose desktop (where Solaris has no presence either) but it seems that Solaris is the platform where you have "death by a thousand cuts". I will grant that ZFS and DTrace can be awesome but there is a lot more to an OS than the filesystem and debug framework such as performance, scheduling, IO, networking, drivers, power management, etc. where Solaris just doesn't have the same amount of manpower that Linux enjoys to make each of these subsystems the best in class simultaneously. Linux is driving nearly every segment of computing simultaneously because of the distributed nature of development, every interested development group is able to make the parts which are most important to them the best in its own way, which is multiplicative across the whole system. Work done for Android improves performance of S390 mainframes for example.

The 3.16 kernel has been released

Posted Aug 6, 2014 20:31 UTC (Wed) by kloczek (guest, #6391) [Link] (15 responses)

> where Solaris has no presence either

You probably don't know about this, but in the last year a few of the biggest companies forbade their employees to use Linux on desktops. It was done *very* quietly.

I'm not worried that Solaris is not as good on the desktop as OS X or Windows. Really, I don't care about this. Desktops are being taken over by tablets; some number of "normal" desktops will still be used.
This is like evolution. Some species are no longer dominant, but as long as their niche still exists they remain present, even after many millions of years.

> I will grant that ZFS and DTrace can be awesome but there is a lot more to an OS than the filesystem and debug framework such as performance, scheduling, IO, networking, drivers, power management, etc.

OK, I see the glove thrown down... show me real-case scenarios. I'm not trying to say that there are no such scenarios; I simply don't know of many such cases.

If you know about any performance problem on Solaris where Linux does better, open an SR. It will be treated as a *serious* bug and you will get an acceptable time for fixing the issue.
Try raising a case in the RH bug tracker: "guys, I need something like ZFS. Can you do this for me under a standard support contract?".

BTW, power management. On the systems which I'm now reinstalling with Solaris, after the first reboot I got a warning that the kernel was unable to parse the ACPI P-state objects, so the kernel was not able to adjust power consumption depending on load. We are talking about HP hardware with factory-default BIOS settings, so I'm assuming that probably 99% of HP hardware running under Linux is consuming more power than it could. The same probably holds for other hardware.
How can Linux do better PM if there is no proper reporting, as a warning at boot stage, that PM cannot be used?

Exact error line from logs on Solaris:

Jul 24 19:08:08 solaris unix: [ID 928200 kern.info] NOTICE: SpeedStep support is being disabled due to errors parsing ACPI P-state objects exported by BIOS.

After this I rebooted into Linux again, before changing the BIOS settings. No warnings at all. Nothing strange or scary to suggest that something was blocking PM under Linux.
After the above, I raised a case for our ops to check the BIOS settings on every Linux host at its next reboot.

BTW: have you seen PM in Solaris 11? Have a look at http://docs.oracle.com/cd/E23824_01/html/821-1451/gjwsz.html and please show me the same level of clarity about PM status under Linux.

Are you sure that, if a hardware component has power management, it will be possible to change its PM settings using the same tools?
Under Solaris you can, for example, manipulate the PM of some RAID cards.

And for the record: just have a look at the list of changes in Solaris 11.2: http://docs.oracle.com/cd/E36784_01/html/E52463/index.html
I heard that last year at Oracle more developers were working on some individual kernel subsystem projects than ever worked on the whole kernel in the Sun days. Looking at the progress in the last few years, I think it may be true.

> Work done for Android improves performance of S390 mainframes for example

Do you mean to say that someone is still using the original S390?
Please... stop kidding :)

The 3.16 kernel has been released

Posted Aug 6, 2014 22:36 UTC (Wed) by mjg59 (subscriber, #23239) [Link] (10 responses)

Linux is able to handle P states on those HP systems because it implements the PCC spec that HP wrote (https://acpica.org/sites/acpica/files/Processor-Clocking-... )

The 3.16 kernel has been released

Posted Aug 7, 2014 1:00 UTC (Thu) by kloczek (guest, #6391) [Link] (9 responses)

> Linux is able to handle P states on those HP systems because it implements the PCC spec that HP wrote

Of course it is able to, and that is exactly what I wrote.
The problem is that on Solaris I found out within a few seconds, just after the first login, by executing "dmesg | grep -i err" as the first command, that the factory-default BIOS settings were not optimal / were wrong.
On Linux you will only see many different multi-line ACPI reports. No errors or warnings.
It is very easy to overlook this on Linux. On Solaris it looks almost impossible to make a similar mistake.
OK, it is a small detail, but it is a very good example of how the development culture which Linux lacks creates many of these "thousand cuts".

It is a matter of ergonomics, and of thinking at the development stage, not to report Everything(tm) but to report first only the crucial information, with some exact severity level.

Typical Linux dmesg output just after a reboot is incredibly long. Reporting everything with a high verbosity level, as Linux does, makes things like broken hardware PM harder to find if you don't know what you are looking for. The initial kernel log should consist only of single lines like:
found A
found B
..

with additional lines per module or hardware component only in case of issues/errors/etc.
Let's have a look at Linux:

$ lsmod | wc -l
58
$ lspci | wc -l
38

but ..

$ wc -l /var/log/dmesg
770 /var/log/dmesg

And now dmesg on the same hardware under Solaris where all HW components are fully supported as well (just after reboot):

$ dmesg | wc -l
192

Now, think about a test script fired automatically just after an OS (re)install, which should catch as many problems/issues as possible. On Solaris, in 99% of cases "dmesg | grep -i err" or "dmesg | grep -i warn" is enough.
On Linux it *is* really way harder.
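Roughly the kind of post-install check I mean, as a sketch (the whitelist of harmless matches here is made up and would have to be tuned per distribution, which is exactly the problem):

#!/bin/sh
dmesg | grep -iE 'error|warn|fail' | grep -viE 'no error|recoverable' > /tmp/boot-issues
if [ -s /tmp/boot-issues ]; then
        echo "possible boot problems found:"
        cat /tmp/boot-issues
        exit 1
fi
exit 0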

The devil really is sometimes in the small details.
On Solaris the verbosity level is right. On Linux every module can emit reports of even a few pages, only because *there is no standardisation* here (again: the "running with empty barrels" syndrome) and no thought that kernel messages might sometimes be useful if they were formed following some exact convention.

The 3.16 kernel has been released

Posted Aug 7, 2014 1:35 UTC (Thu) by raven667 (subscriber, #5198) [Link] (7 responses)

You are right that the kernel messages could be much better organized, they were added organically without any particular scheme that I can fathom, some are indented or multi-line, many are purely informational, some are critical and many are only readable if you cross reference them with the implementation as they do not provide enough context or meaning in the message to have any hope.

*shrug* It's probably too late now to really organize them, too much effort, possibility of breaking deployed parsing scripts for little gain.

The 3.16 kernel has been released

Posted Aug 7, 2014 2:12 UTC (Thu) by kloczek (guest, #6391) [Link] (6 responses)

> *shrug* It's probably too late now to really organize them, too much effort, possibility of breaking deployed parsing scripts for little gain.

This is not even close to the truth. Really... :)

Again, please have a look at Solaris reality/practice/culture.
If something needs to be reimplemented, the EOL of the old feature is sometimes flagged a long time in advance.

What if Linus announced that at the end of 2015 a patch would be applied changing all kernel initialization messages "en masse"?
Something like this could be done by a junior developer, as an introduction to real kernel-space development. As an exercise, such a developer might even prepare a good implementation of a test script. Voilà, isn't it?

If people are informed long enough in advance that something will change, it becomes possible to decide whether to stick to some exact stable kernel line, or to rewrite the auto-test scripts and follow the latest kernel changes. Isn't it?

Sometimes problems are not strictly technical but are more about good-enough coordination and planning.

The 3.16 kernel has been released

Posted Aug 7, 2014 4:13 UTC (Thu) by dlang (guest, #313) [Link] (5 responses)

If you think that organizing the kernel messages is not more work than it's worth, you are welcome to start submitting patches to change them.

But you will probably find that it's a lot more work than you expected, just like everyone who made the claim before and hasn't followed through enough to get the changes in.

The 3.16 kernel has been released

Posted Aug 7, 2014 4:30 UTC (Thu) by kloczek (guest, #6391) [Link] (4 responses)

> If you think that organizing the kernel messages is not more work than it's worth, you are welcome to start submitting patches to change them.

Do you really think that, as an employee, someone pays me to be a full-time junior kernel developer?
All the major contributors to the kernel code are full-time kernel developers.

Serious development can be done for free only for a short period of time.
Code development is about money... big money.

Please don't expect that I'll be contacting all kernel developers to agree on some few-line changes.
I don't need to wait for consistent kernel messages. I can use, for example, Solaris (a few other OSes do the same).

The 3.16 kernel has been released

Posted Aug 7, 2014 4:43 UTC (Thu) by dlang (guest, #313) [Link] (2 responses)

Then this is the market at work. If nobody considers this an important enough issue to pay someone to work on, what makes you think they are wrong?

The 3.16 kernel has been released

Posted Aug 7, 2014 4:53 UTC (Thu) by kloczek (guest, #6391) [Link] (1 responses)

The market is not only about money. Trust is quite important as well.
You are willing to pay for insurance over a long time, trusting that if something goes wrong you will get compensation.
A support fee quite often works like insurance.

The 3.16 kernel has been released

Posted Aug 7, 2014 6:06 UTC (Thu) by dlang (guest, #313) [Link]

that doesn't contradict anything that I said.

you pay your insurance company for the trust.

companies do pay Linux developers to search for problems and fix them.

nobody considers the kernel messages a bad enough problem to spend money on, even while agreeing that the current situation isn't ideal

The 3.16 kernel has been released

Posted Aug 9, 2014 12:30 UTC (Sat) by nix (subscriber, #2304) [Link]

Ah! So this is really important, and really easy, but so unimportant that nobody will do it unless someone pays you?

Right. That's a consistent argument, that is.

(And you don't need to 'contact all kernel developers', you just need to make the changes, post the patch to l-k, and wait for the flames. For something this bikesheddy, I can guarantee attention and flames.)

The 3.16 kernel has been released

Posted Aug 7, 2014 5:51 UTC (Thu) by mjg59 (subscriber, #23239) [Link]

I have no idea what you mean here. If you set the firmware to use firmware-mediated P state control then Linux will use that. There's no need to warn anybody or generate errors - the kernel interacts with the firmware in exactly the way that the firmware expects.

The 3.16 kernel has been released

Posted Aug 8, 2014 4:59 UTC (Fri) by fn77 (guest, #74068) [Link] (3 responses)

>If you know about any performance problem on Solaris where Linux does better, open an SR. It will be treated as a *serious* bug and you will get an acceptable time for fixing the issue.

This made my day, sorry. Speaking as someone who worked as an external consultant for Sun Microsystems for ~8 years, but wearing their hat when facing customers.

Their support was really good, and I mean it (remember explorer and such? kernel dumps?). It was really great until... exactly one year before the failed IBM deal. For those who do not know, that was before the Oracle deal, when the best of their people left.

>BTW, power management. On the systems which I'm now reinstalling with Solaris, after the first reboot I got a warning that the kernel was unable to parse the ACPI P-state objects,
>so the kernel was not able to adjust power consumption depending on load.

Solaris and power management? You mean the 6, yes, six connections to power supplies that a SF10/15/20/25K needed? :-)

Talking about logs, as I learnt from my friends at Sun: read the damn logs. It's our job. Our job is complicated; that's why we get paid well.
BTW, remember Sun Cluster's logs? Remember the 1000 names for the same thing? Solstice DiskSuite, Solaris disk suite (argh... yes, I have to deal with that sometimes even now in my job). Talk about ugly ;-)

> We are talking about HP hardware with factory-default BIOS settings, so I'm assuming that probably 99% of HP hardware running under Linux is consuming more power than it could. The same probably holds for other hardware.
> How can Linux do better PM if there is no proper reporting, as a warning at boot stage, that PM cannot be used?

Solaris on x86. Let's avoid this rare beast for now.

Solaris and power efficiency. Cool, it reminds me of a time in a data center with malfunctioning air conditioning.
I had to shut down ~4 full M9000s plus the t2k and other stuff, all done by entering the data center like a diver and getting out without getting burned.
BTW, what is the equivalent of powertop called on Solaris? ;-)

To be fair, I see that you have difficulties expressing your thoughts in English, and for me it is the same. So, to be clear, I have nothing against you; I just want to have a nice exchange of opinions on a matter that interests me, and I guess you too.

Frederk

The 3.16 kernel has been released

Posted Aug 12, 2014 1:00 UTC (Tue) by kloczek (guest, #6391) [Link] (2 responses)

> Solaris and power management? You mean the 6, yes, six connections to power supplies that a SF10/15/20/25K needed?

Sorry, but what are you talking about?
The SF10/15/20/25K have long been past EOL and EOS. IIRC none of this hardware is supported by Solaris 11.
The rewritten PM is part of Solaris 11.

> Solaris on x86. Let's avoid this rare beast for now.

Why? IMO it is a good example which shows that at the moment there is no gap here between Solaris and Linux.
Solaris now has base PM support implemented in a way which makes it very easy to extend across any possible type of hardware component, which is not the case with Linux.

> I had to shut down ~4 full M9000s plus the t2k and other stuff, all done by entering the data center like a diver and getting out without getting burned

Again, you are talking about quite old hardware. Sun started selling the M9000 in April 2007. Try to compare this hardware with something equally old if you want to show something about PM.

The 3.16 kernel has been released

Posted Aug 12, 2014 1:07 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link] (1 responses)

I have a nice Core2 server chugging along quite nicely, with varied payloads. And it's been running since 2006, I think. Having PM is not an advantage now, it's a prerequisite.

The 3.16 kernel has been released

Posted Aug 12, 2014 10:51 UTC (Tue) by nix (subscriber, #2304) [Link]

I'm wondering how on earth anyone could implement power management 'in way which makes very easy to extend it across any possible type of hardware components'. This seems an impossible, lunatic claim to me: they simply vary too much. Did Solaris have support for frequency-stepping CPUs, or boostable CPUs, or PCI power management, or asymmetric contraptions like big.LITTLE before they were thought of? Of course not, and none of those things would have been especially trivial to implement (of necessity: just modelling their behaviour is hard).

This is the second time in a few days that kloczek has suggested that the Solaris guys had the gift of perfect foresight. I'm coming to the conclusion that kloczek speaks with great decisiveness on numerous subjects about which he(?) has very limited actual knowledge, bending all facts to the Truth that his preferred OS is the Greatest Ever. Clearly kloczek is either in management, or is a teenager. :P

The 3.16 kernel has been released

Posted Aug 6, 2014 21:02 UTC (Wed) by kloczek (guest, #6391) [Link] (8 responses)

> You keep using this "death by a thousand cuts" metaphor to describe Linux but it seems that Linux generally wins at performance, scheduling, IO, networking, drivers, power management, etc

One more time, about this part only.
*In general* you are right. An elephant is a very big animal, but if you try to cut its skin a thousand times, believe it or not, even an elephant can die.

The 3.16 kernel has been released

Posted Aug 7, 2014 0:38 UTC (Thu) by raven667 (subscriber, #5198) [Link] (7 responses)

What does this even mean? I don't understand what you are referring to. It seems that you are conceding that Linux is more performant and usable than Solaris in every way, except that ZFS and DTrace are neat, but that Linux is terrible and will lead to the downfall/death of anyone who uses it.

The 3.16 kernel has been released

Posted Aug 7, 2014 1:31 UTC (Thu) by kloczek (guest, #6391) [Link] (6 responses)

I'm referring to the fact that on Linux the constant ignoring of a huge number of small issues is CAUSING the "running with empty barrels" syndrome, and because of this there is no time to develop more complicated functionality.
In many cases this also causes the "death by a thousand cuts" effect.

This is like real life. If someone breaks a leg and the rehabilitation goes well, they may even fully recover. Now try to spend a huge part of your life walking with a few small stones inside your shoe which you are not going to remove "because you are so busy".

In the case of btrfs, someone should really kick this fs out of the kernel tree.
Why? Yet another metaphor:
A few days ago NASA announced that they started testing the EmDrive, and no one today is thinking about using a steam engine to make Solar system exploration possible. In the same way, no one should be wasting time working on a new Linux FS if it is not going to use a free list and a few other new bits.

The 3.16 kernel has been released

Posted Aug 7, 2014 2:04 UTC (Thu) by raven667 (subscriber, #5198) [Link] (5 responses)

> I'm referring to the fact that on Linux the constant ignoring of a huge number of small issues is CAUSING the "running with empty barrels" syndrome, and because of this there is no time to develop more complicated functionality.
In many cases this also causes the "death by a thousand cuts" effect.

Hmm, what you are describing doesn't sound like the Linux development I read about on LWN at all. I'm not seeing a lot of makework or wasted motion in what is being applied to the mainline kernel, or a lack of new complex functionality because people need to spend all their time on bugfixing. What I am seeing is a massive amount of parallel development: a lot of people running in all different directions, but each with a purpose and accomplishing some goal, like a million ants lifting and moving a city bus.

The 3.16 kernel has been released

Posted Aug 7, 2014 2:18 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link]

>like a million ants lifting and moving a city bus.
Admittedly, sometimes in multiple conflicting directions at once.

The 3.16 kernel has been released

Posted Aug 7, 2014 2:50 UTC (Thu) by kloczek (guest, #6391) [Link] (3 responses)

> a lot of people running in all different directions but each with a purpose and accomplishing some goal, like a million ants lifting and moving a city bus

Perfectly put :)
The problem is that on Linux as a platform it is hard to find anything even close to DTrace, ZFS, FMA, zones, or the way the whole network layer was rewritten in Solaris 10.
All these ants are not moving a big vehicle; they are more trying to borrow/collect/preserve some of the flying dust of features/ideas originally developed on other OSes. There is nothing bad in such behavior; sometimes an army of ants is exactly what you need.
Solaris needs some of its own "ants" as well, and it seems awareness of this fact is slowly growing again, this time with Solaris owned by Oracle.
It is more about keeping a good balance.
In the last few years I have been really frustrated by messy Linux development. Working in larger and larger scale environments has made me choose WhatIsWorking(tm) over what I like, and as a consequence I'm changing my mind... to start liking WhatIsWorking(tm) :o)

About parallel development: it is not about wasting time on parallel development but more about developing the more important things and fixing existing bugs or features (more than a decade after the first kernel patch, nfsstat still cannot handle "nfsstat -z", which can be very frustrating sometimes).
Again: btrfs is the perfect example here.

The 3.16 kernel has been released

Posted Aug 7, 2014 2:59 UTC (Thu) by neilbrown (subscriber, #359) [Link] (2 responses)

> more than a decade after the first kernel patch, nfsstat still cannot handle "nfsstat -z"

Linux nfsstat deliberately doesn't support -z. It doesn't need to.

Instead of running "nfsstat -z" you run "nfsstat > myfile".
Then, to see the incremental information, use "nfsstat --since myfile".

You could wrap this in a script which simulates "-z" if you like.
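A rough sketch of such a wrapper, keeping the baseline in a state file (the path is arbitrary):

#!/bin/sh
# "-z" records a new baseline; otherwise show the statistics accumulated since it
STATE=/var/tmp/nfsstat.baseline
case "$1" in
    -z) nfsstat > "$STATE" ;;
    *)  if [ -f "$STATE" ]; then nfsstat --since "$STATE"; else nfsstat; fi ;;
esac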

The 3.16 kernel has been released

Posted Aug 7, 2014 4:47 UTC (Thu) by kloczek (guest, #6391) [Link] (1 responses)

> You could wrap this in a script which simulates "-z" if you like

It is really funny, because the few-line kernel-space change needed to handle -z would probably be shorter than such a script.
FreeBSD nfsstat can do -z, Solaris can, AIX can, and Linux cannot... total zonk =8-o

You know, sometimes it is all about trust.
Developers trust that the clients will be able to pay for support.
How can I trust (as a client) that Linux can do something bigger if something so trivial cannot be done?

The 3.16 kernel has been released

Posted Aug 9, 2014 12:33 UTC (Sat) by nix (subscriber, #2304) [Link]

Something so trivial can be done. Neil just showed you how you can do it. If something can be done in userspace with a trivial redirection, it arguably should not be done in kernel space, even if it is only a 'few lines change'. I don't see how not supporting a useless feature destroys 'trust', unless you for some bizarre reason expect Linux to be exactly like Solaris and AIX and consider every change to be a sign of inevitable decay and ruin.

The 3.16 kernel has been released

Posted Aug 6, 2014 16:30 UTC (Wed) by kloczek (guest, #6391) [Link]

Small correction:
"However in both cases we are talking about whole service environment where overhead is significant"

Should be:
"However in both cases we are talking about whole service environment where overhead on exchanging data or interconnects is significant."

