This is not about removing code or functionality from the kernel.
It is only about removing dead code which no one ever uses, and such cleanup cycles should happen more frequently.
71 publicly visible posts • joined 23 Sep 2014
DTrace under the GPL
Ahhh, it would be so good to have SMF instead of crappy systemd under Linux ..
Fireworks, FMA, ILB, ipfilter, projects, trusted boot with signed ELF binaries, zones, kernel zones and many other things ..
I want IPS instead of crappy rpm or dpkg. I want BEs instead of crappy OSTree, I want .. this list is really long :-/
But wait a moment: all of this is already available for free in OpenIndiana!!! So why do we want to have this under Linux, if the core developers, ignoring the wishes of normal users, don't want it?
The DTrace source code has been available for more than a decade. In the meantime, on Linux, LTT, LTTng, SystemTap and a few less-known wannabe-DTrace projects have failed. Now there are even more such projects, and it looks like every CS student is trying to write MyOwnDtrace(tm).
Why does anyone need to care about Linux offering technologies like DTrace or ZFS if the Linux developer community doesn't care?
FreeBSD and OpenIndiana are still growing. It took a few years for a critical mass of developers to fully understand the OpenSolaris code, conserve it, and start introducing new features.
Sun, and then Oracle, did not understand this delay, and now Oracle is losing the momentum needed to win the active support of independent developers from outside Oracle.
It is still not too late for Oracle ..
Oracle, instead of trying to help crappy Linux, should focus on building an Open Source community around its own products. They started publishing their own code in git repos, but those repos have pull-request submission disabled =:-o
Most System Engineers who know the real value of some of these technologies are still between hammer and anvil: the Linux hammer of core developers who don't want to understand things they did not write themselves, and the Oracle anvil, which still does not understand the long-term potential of an Open Source community.
At the moment, between those two, I still don't see a potential winner. I see only customers and normal users as the losers ..
In the meantime, the latest publicly available ISO and USB images on http://www.oracle.com/technetwork/server-storage/solaris11/downloads/ are more than 2 years old, with hundreds of well-known CVEs, which will probably repel or scare off any new potential Oracle Solaris customer who tries to approach Oracle Solaris at a distance shorter than a stick.
BRAVO Oracle, bravo .. magnifique
Hopefully the migration of Oracle Solaris users who need commercial support to Nexenta or OmniOS will soon kick off at a much higher speed than it has up to now.
PS. Even knowing that deduplication based on the taken-over GreenBytes technology will at last be integrated in the upcoming 11.4 (it took Oracle 8+ years =8-O .. to turn this into a product which could be integrated into the mainline), I'm not convinced that Oracle Solaris as a product has any positive long-term future.
Long live Solaris, but for me Oracle is already more or less a dead or dying company ..
Just a few facts:
1) Even on Linux, the MongoDB code has some portability issues, and in Debian and Fedora it is possible to find patches necessary to build it on non-x86 hardware. At the moment the Fedora MongoDB package is excluded from building on PPC, SPARC and s390. In other words, MongoDB portability is really not only a Solaris issue (again: even on Linux).
2) MongoDB is maintained as a package in OpenIndiana. There are a few minor issues, but despite those small bits everything is OK on OI.
3) MongoDB uses a completely custom build framework; it does not use GNU autotools, CMake or Meson. If developers choose such a custom framework, that says a lot about the MongoDB developers.
Conclusion: I would really hold my tongue about saying anything about Solaris per se in the context of MongoDB until the issues known on Linux are sorted out first.
It is really funny to sometimes see comments about how things might be with Solaris, written by people who have no idea how this OS functions on the market :)
Example: the kernel DDI (Device Driver Interface) has not changed in the last 10 years and is still at v3, which means that if you have a 10-year-old binary driver you can load it into the kernel of Solaris 11.3 at the latest SRU. Please try to do such a thing with Linux :D
Try reading the Solaris binary compatibility guarantee, where you are GUARANTEED compatibility at the binary level for a veeery looong time.
In other words, your example phone call with support is only imaginary.
> Linux is getting the tech from Solaris: ZFS, DTrace, SMF (systemd), Containers (dockers), etc etc etc
ZFS on Linux is based on OpenZFS, which is years behind what Solaris ZFS now provides.
btrfs in the latest kernel, 4.9-rc2, OOPSes on my laptop within the first few seconds after mounting the root fs, detecting recursive locking:
[ 19.847017] =============================================
[ 19.848461] [ INFO: possible recursive locking detected ]
[ 19.849904] 4.9.0-0.rc2.git2.1.fc26.x86_64 #1 Not tainted
[ 19.851360] ---------------------------------------------
[ 19.852823] systemd-journal/491 is trying to acquire lock:
[ 19.854283] (&ei->log_mutex){+.+...}, at: [<ffffffffc0594372>] btrfs_log_inode+0x162/0x1190 [btrfs]
[ 19.858627]
but task is already holding lock:
[ 19.861547] (&ei->log_mutex){+.+...}, at: [<ffffffffc0594372>] btrfs_log_inode+0x162/0x1190 [btrfs]
[ 19.866079]
other info that might help us debug this:
[ 19.869067] Possible unsafe locking scenario:
[ 19.871622]        CPU0
[ 19.873098]        ----
[ 19.874558]   lock(&ei->log_mutex);
[ 19.877432]   lock(&ei->log_mutex);
[ 19.880319]
*** DEADLOCK ***
[ 19.884367] May be due to missing lock nesting notation
[ 19.887001] 3 locks held by systemd-journal/491:
[ 19.888316] #0: (&sb->s_type->i_mutex_key#12){+.+.+.}, at: [<ffffffffc0562da3>] btrfs_sync_file+0x163/0x4c0 [btrfs]
[ 19.893624] #1: (sb_internal){.+.+.+}, at: [<ffffffffc05499f6>] start_transaction+0x2f6/0x530 [btrfs]
[ 19.897660] #2: (&ei->log_mutex){+.+...}, at: [<ffffffffc0594372>] btrfs_log_inode+0x162/0x1190 [btrfs]
[ 19.901819]
stack backtrace:
[ 19.904602] CPU: 2 PID: 491 Comm: systemd-journal Not tainted 4.9.0-0.rc2.git2.1.fc26.x86_64 #1
[ 19.906037] Hardware name: Sony Corporation VPCSB2M9E/VAIO, BIOS R2087H4 06/15/2012
[ 19.907493] ffffaf42c165b820 ffffffffb746cb43 ffffffffb8be9350 ffff9ff58ada0000
[ 19.908930] ffffaf42c165b8e8 ffffffffb7111cbe 0000000000000002 ffffffff00000003
[ 19.910363] 00000000ca125543 ffffffffb84bf600 a2c5096b94a6fc27 ffff9ff58ada0ca8
[ 19.911768] Call Trace:
[ 19.913071] [<ffffffffb746cb43>] dump_stack+0x86/0xc3
[ 19.914393] [<ffffffffb7111cbe>] __lock_acquire+0x78e/0x1290
[ 19.915713] [<ffffffffb70ece67>] ? sched_clock_cpu+0xa7/0xc0
[ 19.917019] [<ffffffffb7907a5e>] ? mutex_unlock+0xe/0x10
[ 19.918318] [<ffffffffb7112c26>] lock_acquire+0xf6/0x1f0
[ 19.919662] [<ffffffffc0594372>] ? btrfs_log_inode+0x162/0x1190 [btrfs]
[ 19.920966] [<ffffffffb7906de6>] mutex_lock_nested+0x86/0x3f0
[ 19.922289] [<ffffffffc0594372>] ? btrfs_log_inode+0x162/0x1190 [btrfs]
[ 19.923613] [<ffffffffc05aaa85>] ? __btrfs_release_delayed_node+0x75/0x1c0 [btrfs]
[ 19.924936] [<ffffffffc0594372>] ? btrfs_log_inode+0x162/0x1190 [btrfs]
[ 19.926268] [<ffffffffc05ac919>] ? btrfs_commit_inode_delayed_inode+0xe9/0x130 [btrfs]
[ 19.927620] [<ffffffffc0594372>] btrfs_log_inode+0x162/0x1190 [btrfs]
[ 19.928951] [<ffffffffb70e0f8a>] ? __might_sleep+0x4a/0x80
[ 19.930290] [<ffffffffc0594f28>] btrfs_log_inode+0xd18/0x1190 [btrfs]
[ 19.931607] [<ffffffffb7037de9>] ? sched_clock+0x9/0x10
[ 19.932940] [<ffffffffc0595670>] btrfs_log_inode_parent+0x240/0x940 [btrfs]
[ 19.934264] [<ffffffffb72c6279>] ? dget_parent+0x99/0x2a0
[ 19.935628] [<ffffffffc0596d52>] btrfs_log_dentry_safe+0x62/0x80 [btrfs]
[ 19.937010] [<ffffffffc0562f52>] btrfs_sync_file+0x312/0x4c0 [btrfs]
[ 19.938385] [<ffffffffb72e70eb>] vfs_fsync_range+0x4b/0xb0
[ 19.939768] [<ffffffffb72e71ad>] do_fsync+0x3d/0x70
[ 19.941092] [<ffffffffb72e7470>] SyS_fsync+0x10/0x20
[ 19.942251] [<ffffffffb7003eec>] do_syscall_64+0x6c/0x1f0
[ 19.943665] [<ffffffffb790b949>] entry_SYSCALL64_slow_path+0x25/0x25
The problem started about two months ago (https://bugzilla.redhat.com/show_bug.cgi?id=1366869) and it seems no one in the Linux community is interested in sorting it out.
Nevertheless, btrfs is not even close to ZFS, because it does not use free lists.
DTrace and OBP?
Comparing those two is kind of a joke.
- Injected OBP code does not have basic sanity checks, so it is quite easy to hang the whole system.
- There are no clearly defined providers, so forget about building a library of OBP scripts which will remain usable for the next few years.
- The namespace of trace points is messy. Every subsystem has its own convention: sometimes tracepoint names end in begin/end, sometimes in enter/exit, and sometimes begin/end is in the middle. It looks like no one controls this or pushes back on random new naming conventions. Some people have started forming hierarchies in provider names with slashes :-0
# perf list | awk -F: '{print $1}' | sort | uniq | wc -l
160
So this is the number of, kind of, perf "providers".
And everything on top of:
# perf list | wc -l
1743
only a bit less than 2k tracepoints, while on a typical Solaris box 80-90k are now available.
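For reference, the provider-counting pipeline used above can be reproduced on a tiny sample (the tracepoint names below are illustrative stand-ins, since real `perf list` output is machine-specific):

```shell
# Count distinct "providers" (the part before the first colon) in a
# tracepoint list; these sample lines stand in for real `perf list` output.
sample='sched:sched_switch
sched:sched_wakeup
block:block_rq_issue
ext4:ext4_da_write_begin'

printf '%s\n' "$sample" | awk -F: '{print $1}' | sort -u | wc -l
```

On this sample the pipeline prints 3; the `sort | uniq` in the original transcript can be shortened to `sort -u`.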
- SMF vs systemd?
On Solaris, adding SMF did not cause any changes in init. On Linux, systemd theoretically does everything, from init duties to crond, power management, logging, user login sessions and a few other things.
Not even Fedora rawhide's systemd services make it as easy to create an instance of a service as SMF does on Solaris.
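As a sketch of what that looks like on the SMF side (the service name `site/myapp` and its start command are made-up examples, and the commands are printed rather than executed so the snippet is safe to run anywhere):

```shell
# Print the SMF commands that would clone an existing service into a second
# instance on Solaris; site/myapp and the exec path are hypothetical.
cat <<'EOF'
svccfg -s site/myapp add instance2
svccfg -s site/myapp:instance2 setprop start/exec = astring: "/opt/myapp/bin/run --shard 2"
svcadm refresh site/myapp:instance2
svcadm enable site/myapp:instance2
EOF
```

On a Solaris box you would run those four lines directly; the new instance inherits the service's method and dependency graph.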
Linux's problem is constantly the same and is called NIH (Not Invented Here): the initial impulse to implement something new which already works well on another OS ends with most people attracted to implementing bells and whistles instead of the base functionality.
More than 10 years after the Solaris 10 release, Linux is still deep in the woods.
Containers and docker?
Solaris has rock-solid non-global zones and integrated kernel zones. Linux has Xen and KVM, but neither of them is even close to kernel zones from the point of view of, for example, network-layer overhead.
Solaris has started to provide good isolation of processes on top of SMF. Linux's SELinux is still useless here, because it provides only global AVLs without the possibility to cage a process on top of a service definition.
Docker is now integrated in Solaris as well.
In other words .. Linux is still little better than a toy, one which can blow up in your face at almost any time thanks to some New-Brilliant-Idea-Of-Doing-Old-Things-A-New-Way.
It is kind of a problem with such cards, or generally with most big SSDs.
The bigger and more expensive the SSD, the more likely someone will try to use the device to transfer more data. All SSDs, even with quite low power consumption, generate heat. If the card or the whole device does not have sufficient heatsinks, sooner or later everything written to or read from such a device will be full of errors.
M.2 SSDs are almost naked, and they can only be used up to a certain rate of read/write IOPS.
So using big M.2 SSDs is like using the most powerful transport, but only over a distance of a few metres.
Is it really so hard to use source code formatting tools, or to use an existing one?
What would be the problem with passing all kernel code through such filters and making one final commit to git before each release?
$ man indent | grep -- -comment
-bbb, --blank-lines-before-block-comments
-cn, --comment-indentationn
-cdn, --declaration-comment-columnn
-cdb, --comment-delimiters-on-blank-lines
-dn, --line-comments-indentationn
-fc1, --format-first-column-comments
-fca, --format-all-comments
Set maximum line length for non-comment lines to n.
-lcn, --comment-line-lengthn
-ncdb, --no-comment-delimiters-on-blank-lines
-nfc1, --dont-format-first-column-comments
-nfca, --dont-format-comments
-nsc, --dont-star-comments
-sc, --start-left-side-of-comments
--blank-lines-before-block-comments -bbb
--comment-delimiters-on-blank-lines -cdb
--comment-indentation -cn
--declaration-comment-column -cdn
--dont-format-comments -nfca
--dont-format-first-column-comments -nfc1
--dont-star-comments -nsc
--format-all-comments -fca
--format-first-column-comments -fc1
--line-comments-indentation -dn
--no-blank-lines-before-block-comments -nbbb
--no-comment-delimiters-on-blank-lines -ncdb
--start-left-side-of-comments -sc
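A sketch of such a one-shot formatting pass (printed rather than executed; `-kr -i8` is only an approximation of kernel style, and the kernel nowadays also ships its own tooling such as scripts/checkpatch.pl):

```shell
# Print the commands for a one-shot reformatting pass over a C tree with GNU
# indent before a release commit; the option set is an assumption, not an
# official kernel style definition.
cat <<'EOF'
find . -name '*.c' -o -name '*.h' | xargs indent -kr -i8 -nfc1 -nsc
git commit -am "One-off whitespace and comment cleanup via indent"
EOF
```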
> However, it is worth stating OpenZFS has not been frozen since the split with Oracle Solaris ZFS
IMO it is not worth stating. Just look at the price.
Oracle supports a far bigger customer base, and by this alone the demand for fixes and new features is at least an order of magnitude bigger.
For example, the OpenZFS ARC still needs to be carefully capped, while on Oracle ZFS the ARC can easily grow and shrink.
> Canonical claims it has taken legal advice and that it is allowed to ship OpenZFS with its Linux.
It is the same as commercial support for FreeBSD with ZFS.
More important is that the OpenZFS code is a few years behind what is now available in Solaris 11.3, and will be even further behind the upcoming Solaris 12.
This means that whatever is based on OpenZFS becomes more and more like a toy.
The improvements in Solaris 11.3 GA alone are so dramatic (from the point of view of scalability and performance) that sooner or later anyone trying to use ZFS on Linux will hit the wall.
And what is the difference when using ZFS on the same hardware (even non-Oracle HW)?
Almost nothing, if Solaris is not actually cheaper!!!
RedHat support on the same hardware is more expensive than Solaris support.
If someone really needs to use ZFS in prod, I really don't understand why they would try to use Linux/Ubuntu.
The technological gap between Solaris and any Linux flavour is so big that anyone doing a really full comparison IMO cannot conclude that it makes sense to use Linux!!!
DTrace, ZFS, FMA, network-layer virtualization, zones (non-global and kernel zones) and many more things, like packaging (how many years will it take any Linux distro to develop the concept of BEs?) and crucial SUPPORT quality .. whether all these things are somehow possible to have on Linux is IMO beyond discussion.
Next to Solaris, Linux is still only a toy .. and nothing more than a toy.
For me it will be very interesting to observe how Linux developers try to grind through this problem.
It seems this kind of issue needs to be solved by a major re-architecture, something which is the complete opposite of how Linux makes progress: dealing with issues by trying to change state from A to B through a series of small modifications.
It seems Linux has at last run into something big enough that it cannot be solved the traditional Linux way(tm).
> I'm sorry, but you probably miss some basic CPU architecture concepts.
This concept is not fixed; it has been evolving ever since the first Intel 4004 CPU.
>You do not need IB to do memory scans. In fact you don't do any I/O to access RAM.
OK. Let's say you have just rebooted a system running an in-memory database, and this database uses 2TB of RAM (the max amount of RAM per CPU socket in the case of Sonoma/M7).
How long will it take to warm up this DB over FC/SAS/SATA/10Gb NIC links?
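A rough answer, assuming the full theoretical line rate of a single 10Gb NIC (real-world throughput with protocol overhead is lower, so this is a best case):

```shell
# Back-of-the-envelope: time to stream 2 TB into RAM over a 10 Gbit/s link.
awk 'BEGIN {
    mb   = 2 * 1024 * 1024     # 2 TB expressed in MB
    rate = 10 * 1000 / 8       # 10 Gbit/s -> 1250 MB/s
    secs = mb / rate
    printf "%.0f seconds (~%.0f minutes)\n", secs, secs / 60
}'
```

So even a best-case warm-up over a single 10Gb link takes on the order of half an hour, which is exactly why low-latency, high-bandwidth IB links next to the memory matter here.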
And another thing: remember that using IB is not only about bandwidth but also about the lower latency of IOs.
> You should compare systems carefully instead of believing what Oracle marketing says. Compare socket to socket performance or $ to $ performance. 5Gb -- did you mean 5 gigabits or gigabytes? You can scan more than 5 gigabytes /s on modern notebook.
I really don't understand why you are assuming your laptop CPU does what high-end CPUs like Sonoma/M7 do.
> In US $ the hardware turnover is on a gentle decline. There might still be decent returns, but there's no sign of some recovery in SPARC platforms
You are missing one very important fact: in recent years, progress in hardware has overtaken progress in software. Because of this, many workloads can now be handled by investing in hardware. Oracle IMO has a decent enough balance between investing in hardware and in software.
I'll mention again just ZFS as one very basic technology. Providing ZFS compression at a big enough scale can bring huge savings, with the somewhat contradictory result that Linux, which is theoretically free, can under certain conditions be more expensive than Solaris, simply because using it requires buying much more powerful and expensive hardware.
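An illustrative calculation (the 100 TB dataset size and the 3x compression ratio are assumptions; real ratios are entirely workload-dependent):

```shell
# Sketch: raw capacity you must buy once ZFS compression is factored in.
awk 'BEGIN {
    logical = 100    # TB of logical data (assumed)
    ratio   = 3      # assumed compressratio; workload dependent
    printf "raw capacity needed: %.1f TB instead of %d TB\n", logical / ratio, logical
}'
```

At scale, that difference in raw capacity is where the hardware-bill savings come from.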
Beyond this boundary begins the area where Oracle has something to offer its customers that other companies do not.
People see a decline of the high-end market when they see mid-range or low-end solutions entering areas previously handled by companies offering enterprise or high-end solutions.
This is probably what happened with EDA. At some point it was something new, built by highly specialised engineers as a high-end solution, and so those solutions were built on top of high-end technologies. Then EDA solutions matured, and competition lowered the cost of building that automation. What makes such solutions high-end now? Not much anymore ..
Moving up the lower boundary of the enterprise market is a constant process .. "nihil novi sub Sole". What is less obvious is that the upper boundary of this area is constantly moving up as well. Why is it less obvious? One very simple fact: the number of people working in this market was, and still is, a fraction of all IT employees.
> I don't think evidence about Oracles revenue growth necessarily contradicts the notion that the high end market
The size of this market is determined only by the number of companies with big enough needs.
I don't see any evidence that the number of such companies is declining.
The growing revenue of companies like Oracle is definitely not evidence here.
Do you see any other data suggesting a declining trend?
> I think the author is arguing that that high end market(that continues to shrink
https://finance.yahoo.com/q/is?s=ORCL+Income+Statement&annual
Oracle's income before tax in 2013 was 12,834,000. In 2014 it was 13,704,000. I don't see where the author sees this shrinking?!?
It was the same in previous years.
BTW, the existence of bigger and bigger 3rd-party *aaS providers, to be honest, even opens this market for Oracle, because if those companies offer DBs as services, high-end database installations may lower the cost of running a huge number of customer databases on a high-end HW platform like Sonoma instead of running them on x86.
Try to think about just one aspect of running one consolidated DB engine on such HW: making backups. On Linux there is no technique as effective as the one available on Solaris: flush all unwritten data from memory to storage -> block access to storage for a fraction of a second to take a snapshot -> unlock storage -> create an off-site backup using zfs send.
With massive-scale databases, binary backup is the only solution.
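A sketch of that sequence in ZFS terms (the pool, dataset and host names are invented, and the commands are printed rather than executed; the database-side quiesce step is elided because it depends on the DB engine):

```shell
# Print the ZFS side of the backup sequence described above; tank/oradata,
# vault/oradata and backuphost are hypothetical names.
cat <<'EOF'
zfs snapshot -r tank/oradata@nightly
zfs send -R tank/oradata@nightly | ssh backuphost zfs receive -F vault/oradata
EOF
```

The snapshot itself is near-instant, which is what keeps the "blocked" window down to a fraction of a second; the send/receive then runs against the frozen snapshot without touching the live data.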
ZFS has been on the market for more than 10 years (its 10-year anniversary was a few months ago). After 10 years, Linux with btrfs is faaar behind ZFS, and OpenZFS in the longer term is less and less of an alternative (the distance between ZFS and OpenZFS is only growing).
The M7 doing SQL memory scans can reach 120 GB/sec, whereas an x86 runs queries at ~5Gb/sec (in both cases the max theoretical speed) .. and this is why an IB controller is built into the M7/Sonoma CPUs.
Which ARM CPU can do with its own memory subsystem what x86 does?
Please try to compare how many x86 machines running in parallel would be necessary to reach the same average speed. Then compare the price of the x86 and Sonoma HW. On top of this you must add the power consumption of both solutions. At the end, multiply the power costs by two to get the real powering and cooling costs. Try to compare DC footprint costs as well .. and maintenance costs ..
Please remember also that we are talking about a raw memory scan. The M7 DAX (the in-CPU database accelerator subsystem, also present in Sonoma) can speed up those 120 GB/sec by almost the factor of the compression ratio with which the database data can be compressed. With columnar compression, Oracle DB data can be compressed at ratios of 5 to 10 or more.
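The arithmetic behind that comparison, taking the figures above at face value (120 GB/sec per M7, ~5 per x86 box in the same units, and a 5x columnar compression ratio -- claimed figures, not measurements):

```shell
# Rough arithmetic for the scan-rate comparison above; all inputs are the
# figures claimed in the text, not benchmark results.
awk 'BEGIN {
    m7 = 120; x86 = 5; ratio = 5
    printf "x86 boxes needed to match one raw scan engine: %d\n", m7 / x86
    printf "effective scan rate with %dx compression: %d GB/sec\n", ratio, m7 * ratio
}'
```

That is two dozen x86 boxes per socket before power, cooling and DC footprint are even counted, which is the point of the comparison.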
Trust me, in the London area alone you can find more than a few dozen Oracle customers with needs big enough to want such SQL scan speeds.
Sonoma will probably not be able to reach the speed of a full M7, but even so it will attract enough customers to make income higher than costs.
PS. I'm really surprised how shallow the author's knowledge of high-end systems is.
> solaris on x86 was pretty much very dead, by design, decisions, policy and strategy of Oracle itself
You made my day :)
In a few weeks Solaris 11.3 will be released. I would be really glad to see any Linux distribution with a comparable number of new features (most of them are not available on any Linux).
On the horizon is Solaris 12.
"The HGST drives' performance is 130,000 random read IOPS and 1.1GB/sec sequential read bandwidth"
According to https://en.wikipedia.org/wiki/PCI_Express
PCI express speed is:
* v1.x (2.5 GT/s):
250 MB/s (×1)
4 GB/s (×16)
* v2.x (5 GT/s):
500 MB/s (×1)
8 GB/s (×16)
* v3.0 (8 GT/s):
985 MB/s (×1)
15.75 GB/s (×16)
* v4.0 (16 GT/s):
1969 MB/s (×1)
31.51 GB/s (×16)
However according the same wiki page PCI-e v4.0 "Final specifications are expected to be released in 2017."
I really have no idea how they are going to provide more than 1GB/s data transfer speed over PCIe v3. These disks will have a SAS interface, so the main bottleneck will not even be PCIe but the SAS layer.
Can someone explain this?
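One possible back-of-the-envelope answer, assuming these are SAS-3 (12 Gbit/s) drives with 8b/10b encoding on the link:

```shell
# Usable payload bandwidth of a single SAS-3 lane: 12 Gbit/s line rate,
# with 8b/10b encoding leaving 80% of the bits as payload.
awk 'BEGIN {
    line    = 12.0            # SAS-3 line rate in Gbit/s
    payload = line * 0.8 / 8  # strip encoding overhead, convert bits to bytes
    printf "usable per SAS-3 lane: %.1f GB/s\n", payload
}'
```

So 1.1 GB/s of sequential reads just about fits within a single 12Gb SAS lane, and the PCIe 3.0 link behind a typical multi-lane HBA has plenty of headroom; the tight spot is indeed the SAS layer, as suspected above.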
> When performance and scale matter, then Linux appears to the operating system of choice, wouldn't you agree? Perhaps the Solaris 11.3 beta will make a dent in the TOP500 list?
There are a few reasons why Linux is used on HPC systems, in order of importance:
- Linux is the only system supported on delivery by the hardware vendors' MPI-related software
- it is known to the people who support it in a given environment (usually those people are skilled enough not to rely on external OS support)
- there are far more people on the market able to customise it for APUs/GPUs (most of the biggest HPC installations run on customised hardware)
As you can see, performance does not appear on the above list.
> They need to realise that Linux is being developed at a rate that no other operating system can keep up with
It is always funny to observe such comments, usually written by people who know only that something called Solaris exists.
Can you show me any Linux distribution which had more to offer in its last major release than, for example, the just-announced Solaris 11.3 beta?
One example will be enough ..
> I doubt if the Sparc code would have been purged with such glee
The Linux (U)SPARC port was always only a toy.
Nothing serious was ever done with Linux on this platform.
Oracle made Solaris development much more productive. Right now, more developers are working on some individual Solaris kernel projects than worked on the whole kernel during the best Sun times.
Look at a raw fact like the list of new features released in Sol 11.3.
Look at https://java.net/projects/solaris-userland/lists/commits/archive. The development and adoption of all the OSS bits in Solaris is now steady and well organised (which was not the case during the Sun era).
Really, I wish to see any Linux distribution release a new major version with as many new features as the latest Solaris. And remember that we are quite close to the first release of Solaris 12, which will bring even more radical changes.