Recently posted comments
Black Duck acquires Ohloh
Posted Oct 5, 2010 20:01 UTC (Tue) by deepfire (guest, #26138)
Parent article: Black Duck acquires Ohloh
I find it disturbing that the networking information is becoming a trade asset, doubly so when these guys enter our world.
Think about it: Google pays Twitter for direct access to its information, so as to avoid the much more difficult route of web crawling.
Now, how are we supposed to like the idea that Black Duck will someday sell direct access to the FOSS networking information to some groups, like, say, headhunters?
Solid-state storage devices and the block layer
Posted Oct 5, 2010 19:42 UTC (Tue) by dlang (guest, #313)
In reply to: Solid-state storage devices and the block layer by jzbiciak
Parent article: Solid-state storage devices and the block layer
What is Florian's strategy?
Posted Oct 5, 2010 19:32 UTC (Tue) by Lefty (guest, #51528)
In reply to: What is Florian's strategy? by daniel
Parent article: Microsoft sues Motorola, citing Android patent infringement (ars technica)
Mm hm.
Solid-state storage devices and the block layer
Posted Oct 5, 2010 19:27 UTC (Tue) by jzbiciak (guest, #5246)
In reply to: Solid-state storage devices and the block layer by dlang
Parent article: Solid-state storage devices and the block layer
It certainly is random access. I can generally send a command for address X followed by a command for address Y to the same chip, where the response time is not a function of the distance between X and Y, except when they overlap. Instead, the performance is most strongly determined by what commands I sent[*]. Reads are much faster than writes, and both are much, much faster than sector erase.
The opposite is generally true of disks. There, the cost of an operation is more strongly determined by whether it triggered a seek (and how far the seek went) than if the operation was a read or a write. Both reads and writes require getting the head to a particular position on the platter, ignoring any cache that might be built into the drive. Also, under normal operation, spinning-rust drives don't really have an analog to "sector erase." (Yes, there's the old "low-level format" commands, but those aren't generally used during normal filesystem operation.)
[*] OK, so that's not 100% true, but it's essentially true in the current context. NAND flash has a notion of "sequential page read" versus "random page read". If you're truly reading random bytes à la DRAM without a cache, you'll see noticeably slower performance if the two reads are in different pages. But if you're doing block transfers, such as 512-byte sector reads, you're reading the whole page, and hopping between any two sectors always costs about the same. Here, read a data sheet! For this particular flash, a random sector read is 10us, a sector write is 250us, and a page erase is 2ms. The whole page-open/page-close architecture makes it look much more like modern SDRAM than like disk.
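The asymmetry above can be captured in a toy cost model. The flash figures are the data-sheet numbers quoted above; the disk figures are made up purely for illustration, not taken from any real drive:

```python
# Toy latency model: flash cost depends on the operation type,
# while disk cost is dominated by how far the head has to seek.

def flash_cost_us(op, addr_x, addr_y):
    """NAND flash: latency is set by the command, not by |X - Y|."""
    return {"read": 10, "write": 250, "erase": 2000}[op]

def disk_cost_us(op, addr_x, addr_y, us_per_track=5, settle_us=1000):
    """Spinning disk: latency is dominated by seek distance."""
    distance = abs(addr_x - addr_y)
    return settle_us + distance * us_per_track if distance else 100

# Far-apart flash sectors cost the same as adjacent ones...
assert flash_cost_us("read", 0, 1) == flash_cost_us("read", 0, 10**6)
# ...while on disk a long seek dwarfs a short one.
assert disk_cost_us("read", 0, 10**6) > 100 * disk_cost_us("read", 0, 1)
```

The point of the sketch is only the shape of the two functions: one keyed on the command, the other on the address distance.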
Patent lawyer agrees with my belief that Red Hat will have paid
Posted Oct 5, 2010 19:23 UTC (Tue) by FlorianMueller (guest, #32048)
Parent article: Red Hat settles patent case with Acacia - shares few details (InternetNews.com)
It's hard to see how that patent holder would have let Red Hat off the hook without paying. I discussed this on Twitter with a Texas-based IP lawyer who saw my tweet voicing the supposition that Red Hat paid royalties, plus probably something on top so that the patent holder keeps quiet about the fact that Red Hat paid (since Red Hat wouldn't want to be seen as having been Novell-ized, in a way).
The lawyer also doubted that Red Hat got off the hook without paying: "@FOSSpatents doubtful. I remember looking at that #patent. Went through a reexam or 2." I then double-checked, and he confirmed that in his recollection the patent survived one or two invalidation attempts.
So he concluded: "Doubt we'll find out how much, but I'm sure they paid." He then thought that Acacia might at some point disclose to the SEC the payment it received.
honestly, does anyone care??
Posted Oct 5, 2010 19:22 UTC (Tue) by Trelane (subscriber, #56877)
In reply to: honestly, does anyone care?? by clump
Parent article: The OpenOffice fork is officially here (Computerworld)
honestly, does anyone care??
Posted Oct 5, 2010 19:15 UTC (Tue) by clump (subscriber, #27801)
In reply to: honestly, does anyone care?? by Trelane
Parent article: The OpenOffice fork is officially here (Computerworld)
What is Florian's strategy?
Posted Oct 5, 2010 19:14 UTC (Tue) by FlorianMueller (guest, #32048)
In reply to: What is Florian's strategy? by daniel
Parent article: Microsoft sues Motorola, citing Android patent infringement (ars technica)
What is Florian's strategy?
Posted Oct 5, 2010 19:13 UTC (Tue) by daniel (guest, #3181)
In reply to: What is Florian's strategy? by Lefty
Parent article: Microsoft sues Motorola, citing Android patent infringement (ars technica)
Really? Golly. Perhaps, since you're all about advocating free software and all, you could get them to liberate all those neat kernel performance patches they've been keeping to themselves all these years. Isn't it all about sharing?
Actually, those mythical patches are gawdawful on the whole, and we don't want them. They're more about abusing the kernel in various ways to do specific things, in a way that could best be described as one-off hacks. Mostly left in the dust by sensible upstream patches created by developers not on a short leash.
Yes, I care
Posted Oct 5, 2010 19:10 UTC (Tue) by Ed_L. (guest, #24287)
In reply to: Yes, I care by dwheeler
Parent article: The OpenOffice fork is officially here (Computerworld)
Yeah Dave. You and my mother :)
Seriously, I've been using LaTeX for so long now (25+ years) that I can't think of writing a document any other way. Even a letter (other than email...). Mom, on the other hand, requires an Office-like WYSIWYG, which I have to maintain. So yes, I too have a keen interest in the continued success of OpenOffice.org and LibreOffice, either separately or in combination, and I hope they can keep their relationship cordial.
bogus random entropy sources
Posted Oct 5, 2010 19:10 UTC (Tue) by jzbiciak (guest, #5246)
In reply to: bogus random entropy sources by ejr
Parent article: Solid-state storage devices and the block layer
VIA's approach on the C3 doesn't sound too unwieldy. This white paper analyzing the generator's output makes for an informative read. The punch line is that it looks like a pretty reasonable source of entropy as long as you do appropriate post-processing. The random numbers it generates aren't caveat-free, but they're a heckuva lot better than disk seeks and keypresses.
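For a concrete idea of what "appropriate post-processing" can mean, here is the classic von Neumann corrector. This is just one simple, well-known debiasing step offered as an illustration; it is not necessarily what the white paper or VIA's hardware actually prescribes:

```python
def von_neumann_debias(bits):
    """Von Neumann corrector: consume raw bits in pairs, emit 1 for a
    (1,0) pair, 0 for a (0,1) pair, and discard (0,0) and (1,1).
    Removes bias from independent-but-biased bits, at the cost of
    throwing away at least half the raw stream."""
    out = []
    it = iter(bits)
    for a, b in zip(it, it):
        if a != b:
            out.append(a)
    return out

# A biased (mostly-ones) but independent stream still yields
# unbiased output bits, just fewer of them.
raw = [1, 1, 1, 0, 1, 0, 1, 1, 0, 1]
print(von_neumann_debias(raw))  # -> [1, 1, 0]
```

Real whitening stages (the white paper discusses the statistics in detail) are typically cryptographic hashes rather than this simple corrector, but the principle of trading raw throughput for uniformity is the same.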
What is Florian's strategy?
Posted Oct 5, 2010 19:07 UTC (Tue) by daniel (guest, #3181)
In reply to: What is Florian's strategy? by FlorianMueller
Parent article: Microsoft sues Motorola, citing Android patent infringement (ars technica)
bogus random entropy sources
Posted Oct 5, 2010 19:01 UTC (Tue) by patrick_g (subscriber, #44470)
In reply to: bogus random entropy sources by jzbiciak
Parent article: Solid-state storage devices and the block layer
>>> I don't understand why more processors don't include a proper hardware random number generator. It's a classic case of something that is significantly easier to do in hardware, I'd think.
I think Intel is working on this.
See this link: http://www.technologyreview.com/computing/25670/
Solid-state storage devices and the block layer
Posted Oct 5, 2010 18:48 UTC (Tue) by jmm82 (guest, #59425)
Parent article: Solid-state storage devices and the block layer
bogus random entropy sources
Posted Oct 5, 2010 18:46 UTC (Tue) by jzbiciak (guest, #5246)
In reply to: bogus random entropy sources by mpr22
Parent article: Solid-state storage devices and the block layer
If anything, it would make it harder for them to export the chips outside of the United States without getting special approval from the Feds. Cryptographic hardware is a munition under ITAR.
I remember there was some concern a while back when we put our AES implementation in ROM on some devices, because it calculated AES "too quickly" for some people's taste. We ended up making that part of the ROM protected (i.e., not user accessible) so that it was only used for boot authentication.
The OpenOffice fork is officially here (Computerworld)
Posted Oct 5, 2010 18:43 UTC (Tue) by JoeBuck (subscriber, #2330)
In reply to: The OpenOffice fork is officially here (Computerworld) by jd
Parent article: The OpenOffice fork is officially here (Computerworld)
Your history has a few problems: PGCC was not EGCS with patches for improving optimization on Pentium, rather, it preceded EGCS, and one of the founding purposes of EGCS was to merge what was best about the PGCC changes, HJ Lu's Linux-specific hacks, and the Cygnus "devo tree" to produce a really good, portable compiler. There weren't any "tiffs"; even the PGCC developers at the time were quite open about the fact that it was a bad hack that disregarded front end/back end separation to produce a non-portable Intel-only GCC and that changes would be needed. Intel went off and did their own proprietary compiler after that, and for some reason its code didn't work that well on AMD ;-).
There were some Fortran problems, starting with the departure of Craig Burley, who did g77, and continuing with two competing GNU Fortran projects, one inside EGCS and one outside. Others are in a better position than me to assess the current state of GCC Fortran.
The intent of EGCS was not just to fork, but to take over and become *the* GCC, because the old way was considered badly broken. Craig wasn't comfortable with the complex negotiations required to achieve this and felt it was dishonest.
honestly, does anyone care??
Posted Oct 5, 2010 18:41 UTC (Tue) by Trelane (subscriber, #56877)
In reply to: honestly, does anyone care?? by dlang
Parent article: The OpenOffice fork is officially here (Computerworld)
What's the big deal?
Posted Oct 5, 2010 18:39 UTC (Tue) by khim (subscriber, #9252)
In reply to: It's really funny... by southey
Parent article: Seigo: on the impending future of ui greatnesses
Working offline is not cloud computing and just defeats the purpose of cloud computing.
How come? What distinguishes cloud computing from traditional client-server computing is that your client is not dumb: it has the ability to run stuff locally.
To work offline you had better hope that your clients are synced 100% of the time with both app version and data.
There is no "app version" in cloud computing. Everything is just data supplied by the server. Sure, some of that data is actual programs, but that just means you need an upgrade path. You may require a sync from time to time: if you allow the user to go weeks and months without syncing up, you make the developer's life harder; if you force the user to sync up every hour, you make the user's life miserable. But it's up to the developer of the cloud program to decide on the right trade-off.
It is a problem when your clients can not run the same app (desktop vs netbook vs smartphone) or you need to change your client (office vs home computers).
How is that different from the "normal" web? This is something you must handle for a normal web application, too.
The OpenOffice fork is officially here (Computerworld)
Posted Oct 5, 2010 18:38 UTC (Tue) by Trelane (subscriber, #56877)
In reply to: The OpenOffice fork is officially here (Computerworld) by josh
Parent article: The OpenOffice fork is officially here (Computerworld)
Does anyone have a list of major OpenOffice features actually developed by Sun?
Developed or re-developed?
Solid-state storage devices and the block layer
Posted Oct 5, 2010 18:36 UTC (Tue) by dlang (guest, #313)
In reply to: Solid-state storage devices and the block layer by jzbiciak
Parent article: Solid-state storage devices and the block layer
The requirement to do bulk deletes makes it far more like spinning disks than RAM.
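A minimal sketch of that constraint, with a made-up block size: pages can only be programmed when erased, and the smallest erase unit is a whole block, so rewriting one sector drags a much larger physical operation along with it.

```python
# Illustrative model of NAND flash's erase-before-write constraint.
# The block size is made up; real erase blocks are typically 64-256 pages.
PAGES_PER_BLOCK = 64

class EraseBlock:
    def __init__(self):
        self.erased = [True] * PAGES_PER_BLOCK

    def write(self, page):
        if not self.erased[page]:
            raise ValueError("page must be erased before rewrite")
        self.erased[page] = False

    def erase(self):
        # The smallest erase unit is the whole block: rewriting any one
        # page means first relocating or invalidating all its neighbors.
        self.erased = [True] * PAGES_PER_BLOCK

blk = EraseBlock()
blk.write(3)
try:
    blk.write(3)          # an in-place rewrite fails...
except ValueError:
    blk.erase()           # ...until the entire block is erased
blk.write(3)
```

This is the property that makes flash behave more like a disk than like RAM for small updates, even though its read latencies are address-independent.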