
e1000e Bug Squashed — Linux Kernel Patch Released

Soulskill posted more than 5 years ago | from the good-news-everyone dept.


ruphus13 writes "As mentioned earlier, there was a kernel bug in the alpha/beta version of the Linux kernel (up to 2.6.27 rc7), which was corrupting (and rendering useless) the EEPROM/NVM of adapters. Thankfully, a patch is now out that prevents writing to the EEPROM once the driver is loaded, and this follows a patch released by Intel earlier in the week. From the article: 'The Intel team is currently working on narrowing down the details of how and why these chipsets were affected. They also plan on releasing patches shortly to restore the EEPROM on any adapters that have been affected, via saved images using ethtool -e or from identical systems.' This is good news as we move towards a production release!"
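The interim fix described in the summary (refuse all EEPROM/NVM writes once the driver is up) can be sketched roughly as below. This is an illustrative model only; the struct and function names are invented and are not the actual e1000e driver API.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Rough sketch of the interim patch described above: once the driver
 * has bound to the adapter, every write to the EEPROM/NVM is refused.
 * nvm_dev and nvm_write are hypothetical names, not real kernel code. */
struct nvm_dev {
    uint8_t eeprom[256];
    bool driver_loaded;          /* set at probe time; locks the NVM */
};

static int nvm_write(struct nvm_dev *dev, size_t off, uint8_t val)
{
    if (dev->driver_loaded)
        return -1;               /* write blocked by the interim patch */
    if (off >= sizeof(dev->eeprom))
        return -1;
    dev->eeprom[off] = val;
    return 0;
}
```

The point of the workaround is visible in the guard clause: whatever was issuing the stray writes can no longer reach the EEPROM after load, buying time to find the root cause.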

111 comments

Frosty Piss? (-1, Troll)

Luke727 (547923) | more than 5 years ago | (#25253171)

It tastes pretty good.

The Juiuce Walks Again !! (-1, Troll)

Anonymous Coward | more than 5 years ago | (#25253357)

OJ walks out of the hands of the law AGAIN and this time with an even bigger grin !! His comment was, "If I can kill two white people and get off, I knew this was going to be a cake walk". Court observers agreed.

News? (3, Insightful)

quarrel (194077) | more than 5 years ago | (#25253193)

I know this is News For Nerds and all that, but isn't this a tad specific?

An alpha/beta of the most recent linux kernel patch had a bug fixed, and it hits the front page?

Don't get me wrong, I'm glad they found it, but this is kinda the point of debug cycles.. If we start reporting every bug squashed in all the major open source projects out there this is going to go downhill fast.. (of course, it's possible some may think that the idle. is only a step above..)

--Q

Re:News? (1)

WK2 (1072560) | more than 5 years ago | (#25253245)

(of course, it's possible some may think that the idle. is only a step above..)

Or a step below...

Re:News? (5, Insightful)

Atriqus (826899) | more than 5 years ago | (#25253293)

It's newsworthy because it was a bug that actually bricked hardware.

Re:News? (0)

PReDiToR (687141) | more than 5 years ago | (#25253529)

What, really "bricked" or just needing a reflash?

I mean, who in their right mind would call a PC without an operating system bricked? Is a system bricked just because you have to put in a floppy to install an MBR and a command environment (a la the 3 DOS install disks of yesteryear)?
Compare that to running an operating system like DOS on an old Athlon that didn't have a big enough heatsink/fan, with no ACPI or 'hlt' instruction support, and the processor overheating to the point of literally burning itself up.

Sorry for being pedantic, but you never know when someone reading this post might want to intelligently ask for help installing Linux on their router or upgrade their WindowsMobile PDA.

Re:News? (5, Insightful)

sumdumass (711423) | more than 5 years ago | (#25253653)

Try erasing the BIOS on the mainboard and your comparison will be more accurate.

This bug actually flashed the firmware of the network controller and hosed access to it in some unexplained way. That is noteworthy because of its rarity. If it were simply hosing something readily diagnosable and more common, like a boot sector, it would be different. It isn't often that software is associated with hardware damage, either purposeful or accidental.

BTW, I know there are recovery methods for a hosed BIOS. That isn't the point. Simply installing an operating system shouldn't hose it, nor should it hose hardware. Imagine all the people who just thought their card was broken and went for a refund under warranty, or the bad name Intel or Linux got for the "faulty shipment of devices." The card would work in Windows; load Linux in a dual-boot setup and it would stop working in both Windows and Linux, with no errors and no indication that the card was even visible to the mainboard.

Re:News? (2)

sjames (1099) | more than 5 years ago | (#25257085)

It was even more fun. Once the card was hosed, not only would it not work, but it required a bit of hacking to get it recognized enough to attempt a re-flash (assuming you had an image of the correct contents to flash in).

The exact cause was mysterious as well since it didn't happen to everyone, nor was it predictable if or when it would happen.

Re:News? (1)

SanityInAnarchy (655584) | more than 5 years ago | (#25254405)

I mean, who in their right mind would call a PC without an operating system bricked?

Entirely too many people. Of course, "right mind" is subjective...

Re:News? (1)

sjames (1099) | more than 5 years ago | (#25257105)

What, really "bricked" or just needing a reflash?

Bricked but theoretically recoverable with some further work.

The cards should be fixable by reflashing, but when you can't enumerate the card on the bus, that's a bit of a challenge.

Re:News? (-1)

Anonymous Coward | more than 5 years ago | (#25253615)

So they release code that not only doesn't work, but actually is destructive to hardware. Even for alpha, that's stupid. Something I've come to expect from Linux and its "I've got to be the neatest" mentality.

Re:News? (1)

BrokenHalo (565198) | more than 5 years ago | (#25253677)

Even for alpha, that's stupid. Something I've come to expect from Linux and its "I've got to be the neatest" mentality.

Even for Anonymous Coward, that was a stupid thing to say. A bug existing in alpha or beta versions does not constitute shoddy software overall. That is, after all, what alpha and beta releases are for. I don't need to catalogue the bugs in Windows that are never even acknowledged, let alone fixed, but production releases of Linux are generally as solid as anyone could wish, and bug reports are open for everyone to see, and do get acted upon.

Re:News? (-1, Redundant)

wwahammy (765566) | more than 5 years ago | (#25254135)

Can you catalogue any bugs in Windows that cause a piece of hardware to malfunction semi-permanently? This is a pretty serious issue even for an alpha. I hope Linus and crew reevaluate testing procedures to make sure this doesn't happen again. I can't say I'm hopeful about that though.

Re:News? (1)

Ornedan (1093745) | more than 5 years ago | (#25254649)

The only versions of Windows you ever get to see are the final releases. However, I'd bet they occasionally break some hardware on that multi-thousand machine internal testing farm of theirs.

And what part of "alpha release" implies "not tested" to you?

Re:News? (0)

Anonymous Coward | more than 5 years ago | (#25253729)

Another thing you should expect is people telling you to shut the fuck up.

Re:News? (0)

Anonymous Coward | more than 5 years ago | (#25253861)

The "pre-release" code is released so people can experiment with it, patch it, and hopefully fix it. If you get burned because you thought that an alpha release was the bleeding edge and gave you a l33t system, then you get exactly what you deserve.

Re:News? (0, Flamebait)

Anonymous Coward | more than 5 years ago | (#25254281)

Well - let's not forget how this bug got into the kernel in the first place. Was this a driver developed by the kernel team?

NO! - This was a driver supplied by Intel and written to Intel's own specifications. To be precise - it is not the kernel developers or Linux developers who are at fault, but Intel, which messed things up BIG here. If you can't trust the specifications or software written by the manufacturer himself, then you are in trouble.

Thinking about it.. We all know how close Intel and Microsoft are. Now - if you can hurt Linux by releasing specifications and software that brick hardware, would that not be a nice coincidence? I mean - linking Linux with hardware going defective would not hurt Microsoft, hmmm?

Oh well.....

Re:News? (-1, Troll)

Anonymous Coward | more than 5 years ago | (#25256509)

Don't know what pre-teen modded you insightful, but if you think Intel or any other company goes around destroying hardware for bad publicity, liability costs, or a secret conspiracy with the Bilderbergers, then, well, I suppose you also believe "accidents" only happen to babies and old people.

Re:News? (1)

RiotingPacifist (1228016) | more than 5 years ago | (#25255335)

No, that is what is to be expected from an alpha; anything else means you're just taking unnecessary risks. Alpha means the code has been developed and tested internally - NOT with your programs, NOT with your hardware. If you run Linus's or Morton's machine you will probably not come across this kind of bug, but anybody else is running essentially untested code. While BSD claim they review their code, the fact that this bug wasn't caused by somebody commenting out #do not break drivers foo means that a code review by anybody who didn't design or work on the chips was probably going to miss it.
If it had made it to beta then I'd be worried that not enough testing was being done, but it was caught in alpha software that is still about a year away from end users (probably 2-3 if you're running a server?) and has yet to be audited by distros, so I don't really expect this thing to upset anybody but the idiot who installs Linux and then goes "hmm, I'm a Windows uber leet pro, I can run the alpha software no probs."

kernel: somebody's svn -> morton branch -> alpha (stopped here) -> beta -> rc -> release
distro: proposed -> unstable -> release
It was stopped at stage 3 of 9, but you have to remember that each stage is tested by a lot more people than the previous one.

Re:News? (5, Informative)

SL Baur (19540) | more than 5 years ago | (#25253317)

An alpha/beta of the most recent linux kernel patch had a bug fixed, and it hits the front page?

They have not fixed the bug that caused the e1000e ethernet cards to get bricked. This is at least a two part bug. The EEPROM should not have been writable and Something Is Happening to cause bad writes to happen. What that "Something" is, no one knows yet, though it appears they are getting close.

Linus is an absolute, total anal retentive with regards to fixing bugs by understanding and fixing the root cause[1], not just papering over it. This papers over it for the moment, because the bug hasn't been isolated yet, but it allows more people to participate because the side effects were really nasty - this was a true bricking of the ethernet card.

This stage isn't newsworthy for Slashdot.[2] It must be a slow news day.

[1] This is a Good Thing.

[2] Nor will the real bug fix when it comes. A bug is found, a bug is fixed. Life, goes on.

Re:News? (5, Interesting)

Spy der Mann (805235) | more than 5 years ago | (#25253455)

I know this is News For Nerds and all that, but isn't this a tad specific?

That's what sections are for. See the little Tux Icon over there? We all care about Linux. Besides, it's a VERY IMPORTANT BUG. A showstopper, so to speak. And keep in mind that a lot of people in here are kernel freaks. They want to test-drive the latest versions of the kernel. And one of the reasons why people keep coming here (and not to digg) is precisely for this kind of news.

Thanks, ruphus13.

Re:News? (1)

ruphus13 (890164) | more than 5 years ago | (#25253751)

Thanks Spy - I, for one, was looking forward to testing this out, and, luckily hadn't gotten down to getting the latest bits when I read about the bug. Now I can proceed to find the next ones!

Re:News? (1)

Mhtsos (586325) | more than 5 years ago | (#25255079)

What I found newsworthy is that I can expect the latest windows worm / trojan / virus to brick a whole bunch of network cards (at work, don't throw stones) as it's now more clearly documented that it can be done. I think it was mentioned in a previous article that the real bug is that bricking through software is possible at all.

Re:News? (0, Troll)

Whiteox (919863) | more than 5 years ago | (#25253809)

It's been a slow news week. Maybe the economic crunch is having an effect on geek news...

Re:News? (-1)

ClickWir (166927) | more than 5 years ago | (#25260999)

If it weren't for the front page on Slashdot and/or Digg, I don't know how long I would have wondered why my NIC stopped working after upgrading to Intrepid beta. Sure I might have checked out the bug reports, but since it's on a testing system I wasn't that worried. Now I am, I think I just bricked that machine.

This doesn't really help me now... but it's nice to know, and hopefully when the fixes for the broken EEPROMs come out, those will be on the front page as well.

Lol! Open sores. (-1, Flamebait)

Anonymous Coward | more than 5 years ago | (#25253217)

Sounds like Linux hasn't changed any in the past ten years. Just buy a copy of Windows and get on with it.

Get on with it? Vista hides behind Mohave Project (1, Troll)

SlashdotTroll (581611) | more than 5 years ago | (#25253269)

Old woman, what with the spanking Windows with quicker bug fixes or the oral abatement in the Mac vs PC ads?

Windows tries its best to hide behind all kinds of Unix technology, and all it comes down to is who to strawman the blame of their poor implementation of Unix to how Microsoft finally is forced to write its own obfuscated code to replace its NT from VMS to what it has become today. My MS-DOS 5 Apache 1 server with the GUI Spectra [chello.at] is far more responsive for desktop publishing while actively serving webpages that people should be aware of the FUD coming out of modern Microsoft. They are pumping out more Operating Systems averting from scientific design towards the graces privies of a bastard legislature that there is no productive computing willingly fruiting from Microsoft to prove the subsistence and superior design to their prior titles. Microsoft yesterday is superior to Microsoft today. Hello, Micros~1 != Micros~2, and freeDOS just keeps getting better that a XVesa with multiple QEMU of freeDOS is becoming the better implementation of multitasking than XP or Vista.

Re:Get on with it? Vista hides behind Mohave Proje (1)

setagllib (753300) | more than 5 years ago | (#25253999)

Argh. Markov Chain text garbage got modded Insightful.

Re:Get on with it? Vista hides behind Mohave Proje (1)

Hal_Porter (817932) | more than 5 years ago | (#25254133)

Does that mean the spambot passed the Turing test or the moderator failed it?

Re:Get on with it? Vista hides behind Mohave Proje (1)

Ant P. (974313) | more than 5 years ago | (#25257767)

Any idiot can receive mod points, but it takes a genius to figure out how to navigate this terrible new UI.

Re:Lol! Open sores. (1)

BPPG (1181851) | more than 5 years ago | (#25253897)

Just buy a copy of Windows and get on with it.

You may have missed the part where it said that this is a development release. Also, installing a development release of the next Windows might brick your system AND get you sued. ;-)

Hardware of Software Problem? (5, Interesting)

Anonymous Coward | more than 5 years ago | (#25253241)

Linus isn't very happy with Intel here:
http://lkml.org/lkml/2008/9/29/368

On Mon, 29 Sep 2008, Arjan van de Ven wrote:
>
> we have a patch to save/restore now, in final testing stages
> (obviously we want to be really careful with this)

Btw, the _real_ bug is clearly in the hardware design that allows you to
brick those things without apparently even having a lock bit.

I'm hoping Intel doesn't treat this as just a software bug. Some hw
designer should be thinking hard about which orifice they put their head
up in.

It used to be that you could fry some monitors by feeding them
out-of-range signals. The _monitors_ got fixed.

                Linus

Re:Hardware of Software Problem? (5, Insightful)

techno-vampire (666512) | more than 5 years ago | (#25253659)

He's got good reason. It should be impossible for the system to write to the EEPROM without special measures being taken, possibly a jumper that has to be removed to allow it. And, if possible, the card won't work right (in some way that doesn't prevent boot) until the jumper's put back to normal. That way, if you really have to re-flash it, you can, but it's not going to happen by accident.

I remember having a motherboard with a jumper that had to be specially set to update the BIOS. The smart way was to power down, open the case and pull the jumper so that you could flash the EEPROM. Then, of course, once that was done, reverse the procedure for safety. I always regarded anybody who left the jumper off for the rare convenience as fools who deserved anything that might happen.

Re:Hardware of Software Problem? (0)

Anonymous Coward | more than 5 years ago | (#25253901)

Really? Have you actually ever done a BIOS upgrade on an anywhere near modern PC? They don't need jumpers being moved. It's not normally a problem at all.

Besides, when you actually WANT to do a firmware update, it's yet another thing. What was a five-minute task (possibly not even requiring a reboot) now becomes: schedule a shutdown, open the case, flip the jumper, boot, try to install the update, realise your network card isn't working now so you have to fetch the update with a USB memory key or something, install the update, shut down, restore the jumper (which you haven't lost, I hope), and boot up again.

Re:Hardware of Software Problem? (1)

techno-vampire (666512) | more than 5 years ago | (#25254003)

Well, I haven't needed to do a BIOS upgrade in this millennium, I think, and I only had one motherboard that needed a jumper change. As far as your comedy of errors goes, anybody who didn't plan ahead and make sure the update was already on the hard disk before starting deserves all the problems you described. And, of course, flashing the EEPROM on a NIC should be a rare event. Nice strawman, though.

Re:Hardware of Software Problem? (1)

SanityInAnarchy (655584) | more than 5 years ago | (#25254499)

Well, I haven't needed to do a BIOS upgrade in this millennium, I think

Good for you...

I know my keyboard has had its firmware upgraded at least once. I haven't had to do a BIOS update for awhile...

I do remember a series of incremental improvements to the whole process:

The very first time I flashed a BIOS, it was relatively easy -- just run the BIOS update program (in Windows), which formats a floppy for me, which I then boot off of. After booting the floppy, I still have to dump the BIOS, then load the new one -- from a DOS commandline.

I streamlined the process a bit when CDs got cheap -- I burned a FreeDOS CD, and left the actual updates on the hard drive, so I could keep using the same CD.

And some BIOSes supported flashing themselves, from inside the BIOS. Unfortunately, these tools were mostly limited to floppies -- I had to go pick up a floppy drive and plug it into the computer.

More recently -- it turned out I didn't actually need an update, but it was such an easy process that I didn't mind. Pretty much just paste a couple of commands into a terminal on Linux (versus printing them, writing, memorizing, or copying from another computer), then on my next reboot, new BIOS.

Easiest, of course, was on OS X -- firmware updates of any kind are included with Apple's "Software Update", meaning you might not even notice.

flashing the EEPROM on a NIC should be a rare event.

So should any kind of BIOS update.

But then, so should any software bug. You could say that firmware should be held to a higher standard, and I'd agree, but if and when there's a firmware bug, I'd much rather have it patched quickly and conveniently than have to schedule a few hours (or an afternoon), and dig up an old floppy drive and/or a copy of FreeDOS.

Re:Hardware of Software Problem? (1)

mpe (36238) | more than 5 years ago | (#25256741)

And, of course, flashing the EEPROM on a NIC should be a rare event. Nice strawman, though.

Doing any kind of firmware upgrade should be a rare event. At minimum it should involve first shutting down the driver accessing that piece of hardware. If the peripheral is designed sensibly an "upgrade firmware" command would require some kind of "handshake" and only be accepted as the first command after a reset.

Re:Hardware of Software Problem? (1)

SanityInAnarchy (655584) | more than 5 years ago | (#25254455)

possibly not even a reboot

Unlikely. I would imagine that this flavor involves writing the new firmware to some dedicated chunk of memory, where it will be pulled either by the OS or the BIOS itself on next reboot.

Re:Hardware of Software Problem? (2, Informative)

mczak (575986) | more than 5 years ago | (#25255169)

Jumpers are not really used a lot these days. They cost extra and are clumsy to handle (you need to open the case). You are right that it would be really good if some precautions were taken against accidental writes (for instance, requiring a special command sequence that is hard to trigger accidentally), but often those EEPROM chips just have a simple serial interface, and reading and writing work almost exactly the same. A couple of years ago you could easily overwrite the EEPROM of Hauppauge TV cards (though there wasn't much information in there, just the exact model IIRC, which was needed to set things up fully correctly), a bug very similar to this one.

Re:Hardware of Software Problem? (2)

Xugumad (39311) | more than 5 years ago | (#25255869)

Given the cost of EEPROM space, I think the better answer is to double the size. One half is readable, one writable, at any point in time. To update, you write, turn off, flip the jumper across to the other side (or, heck, just use a physical switch) and you're done. Bricking isn't absolutely impossible (you could write a damaged image to one half which wipes the other when it boots), but essentially infeasible.
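The dual-bank scheme proposed above can be modeled in a few lines. This is a hedged sketch of the idea, not any real controller's design: the physical jumper is reduced to a plain field, and the invariant is that software may only ever write the bank the jumper has NOT selected.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative model of the double-size EEPROM idea: two firmware
 * banks, a jumper selects which one is live, and the live bank is
 * read-only to software. All names here are hypothetical. */
struct dual_nvm {
    uint8_t bank[2][64];
    int active;                  /* chosen by the jumper, not by software */
};

static int dual_nvm_write(struct dual_nvm *d, int bank, size_t off, uint8_t v)
{
    if (bank < 0 || bank > 1)
        return -1;
    if (bank == d->active || off >= sizeof(d->bank[0]))
        return -1;               /* the live image cannot be touched */
    d->bank[bank][off] = v;
    return 0;
}
```

Under this model a buggy or malicious flash can at worst corrupt the standby image; the running one survives until a human flips the jumper, which matches the "essentially infeasible to brick" claim above.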

Re:Hardware of Software Problem? (2, Informative)

Agripa (139780) | more than 5 years ago | (#25256261)

It is not uncommon to require a set of magic numbers to be written before writing to protected memory. The magic numbers and/or access pattern is designed so that no simple or likely hardware failure will allow unprotected access. Small discrete or integrated EEPROMs often have this functionality built in.
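The magic-number protection described above can be sketched as a tiny state machine: the device honors a write only if it was immediately preceded by the exact unlock sequence, so a stray bus transaction cannot accidentally land in the array. The 0xAA/0x55 constants and all names here are illustrative, not taken from any real EEPROM datasheet.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch of a magic-sequence write protect. A write is
 * accepted only after the two magic command bytes arrive in order,
 * and each unlock permits exactly one write. */
enum unlock_state { LOCKED, GOT_MAGIC1, UNLOCKED };

struct eeprom {
    uint8_t mem[128];
    enum unlock_state state;
};

/* Feed one command byte; returns 1 once the device is write-enabled. */
static int eeprom_cmd(struct eeprom *e, uint8_t byte)
{
    switch (e->state) {
    case LOCKED:
        e->state = (byte == 0xAA) ? GOT_MAGIC1 : LOCKED;
        break;
    case GOT_MAGIC1:
        e->state = (byte == 0x55) ? UNLOCKED : LOCKED;  /* wrong byte resets */
        break;
    case UNLOCKED:
        break;
    }
    return e->state == UNLOCKED;
}

static int eeprom_write(struct eeprom *e, size_t off, uint8_t val)
{
    if (e->state != UNLOCKED || off >= sizeof(e->mem))
        return -1;
    e->mem[off] = val;
    e->state = LOCKED;           /* one write per unlock sequence */
    return 0;
}
```

The design property being claimed in the comment above is exactly the one the state machine enforces: no single wrong byte, and no simple stuck-bit failure, walks the device into the writable state.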

Re:Hardware of Software Problem? (0)

Anonymous Coward | more than 5 years ago | (#25253695)

It used to be that you could fry some monitors by feeding them
out-of-range signals. The _monitors_ got fixed.

                                Linus

I remember that. Wasn't it the IBM PCjr or something, where you could ask the display to refresh at 0 Hz and that would fry it? Can't remember whether actual CRT implosion was urban legend or not, but the monitor death was real.

More recently than that (1)

Nimey (114278) | more than 5 years ago | (#25253755)

Supposedly with pre-multisync monitors (say, your average early-'90s monitor, like my old Tandy VGM-340) if you weren't careful about what X modelines you used you could fry your monitor.

Re:More recently than that (0)

Anonymous Coward | more than 5 years ago | (#25253877)

Yes, Linus mentions this in the quote above. He also points out that the final fix was in the monitor itself, not the kernel.

Re:More recently than that (1)

iampiti (1059688) | more than 5 years ago | (#25254869)

I don't know from which year his monitor was. But around 2000 a friend of mine fried his CRT trying to install linux. So Linus is right

Re:More recently than that (1)

retchdog (1319261) | more than 5 years ago | (#25257671)

I remember that. Doing it by hand, or with the open-source tool (I think it was called Xconfigurator), was scary and full of warnings like "This may damage your hardware." So scary that there was commercial non-free software which did nothing but configure X "more safely." I remember one friend of mine was excited when it came out, because there was finally warez for Linux. :P

I just held my breath and used xconfigurator.

Re:Hardware of Software Problem? (0)

Anonymous Coward | more than 5 years ago | (#25253895)

Well, I accidentally fried my Compaq "Portable"'s video card circa 1983 or so. It had a built-in 9" green-screen monitor and graphics card. I was experimenting with the DOS debug command and writing assembler. I had to take it to the long-defunct ComputerLand store for a new video card.

I'm sure that other hardware suffered the same problems.

Re:Hardware of Software Problem? (0)

Anonymous Coward | more than 5 years ago | (#25256079)

With some really old 5.25 floppy drives you could tell it to seek some weird sectors and get the drive heads to jam. Didn't ruin the hardware but you had to open the drive casing to free them.

So, we put the workaround in _hardware_? (5, Insightful)

SanityInAnarchy (655584) | more than 5 years ago | (#25254441)

Linus has a very good analogy here -- in fact, I love the fact that on the rare occasions I have to set modelines myself, I can pretty much put whatever I want, knowing that if it doesn't work, I can just ctrl+alt+backspace and try again.

But the conclusion does bother me: We're basically saying that all software is buggy, or that we're incapable of preventing this kind of thing from happening (in software). This is true of most modern OS designs -- monolithic kernels do make it possible for pretty much any driver to accidentally ruin any other driver's day.

The proposed workaround, then, is to prevent that memory from being written -- and to prevent this in hardware, for no other reason than to avoid having to write it into every kernel that might potentially allow buggy code to run in Ring 0.

I don't like either solution. Hardware shouldn't be brickable from software, or at least, not so easily. But software shouldn't need hardware to coddle it, either -- why is the SSD in this laptop emulating a hard disk?

Re:So, we put the workaround in _hardware_? (4, Insightful)

PRMan (959735) | more than 5 years ago | (#25255195)

Yes, because as long as the hardware can be bricked by software, it remains an exploit that can be used by malicious software writers.

Speaking of the fried monitors, back in the day a college I worked at got a virus that fried 2 monitors before I got smart and put a Hercules monochrome card in it and cleaned it up.

So, yes, while it can (and should) be worked around in Linux, it should also be fixed in hardware, if possible.

Re:So, we put the workaround in _hardware_? (1)

SanityInAnarchy (655584) | more than 5 years ago | (#25261919)

as long as the hardware can be bricked by software, it remains an exploit that can be used by malicious software writers.

Except, where do you draw the line?

Software control of fans means a virus could spin them all down, and run some complex calculations (PI) to spin the CPU up.

Software control of hard drives means you can spin them up and down all day, and wear them out an order of magnitude faster.

Software control of a printer means you can print page after page of black ink, using up an ink cartridge.

Software control of a Roomba means you can deliberately crash it into walls, or possibly down the stairs.

I have enough coddling in my software. ("Are you sure you want to run this program from the Internet? It might be a virus!") While it might be safer, I really don't want to be in a situation where my hardware is telling me I can't do something, because I might screw it up.

ATA is an abstraction (1, Informative)

tepples (727027) | more than 5 years ago | (#25255479)

why is the SSD in this laptop emulating a hard disk?

It's not. ATA's wire protocol uses a hardware abstraction over block storage devices, as does USB Mass Storage Class. The hard disk is emulating an ideal block device, and the SSD is also emulating an ideal block device.

Re:ATA is an abstraction (1)

mpe (36238) | more than 5 years ago | (#25256771)

ATA's wire protocol uses a hardware abstraction over block storage devices, as does USB Mass Storage Class. The hard disk is emulating an ideal block device, and the SSD is also emulating an ideal block device.

This has been the case for a long time. Even with parallel IDE the drive geometry reported by the controller was typically a complete fiction. Another common feature is the ability for the drive controller to transparently remap failed blocks. Which means that by the time the host actually starts seeing failures the disk is likely to be in a very bad state.

Re:So, we put the workaround in _hardware_? (1)

klapaucjusz (1167407) | more than 5 years ago | (#25257259)


But the conclusion does bother me: We're basically saying that all software is buggy,

No. What we're saying is that we build layered systems, and that every layer is expected to protect its integrity from the higher layers.

The hardware protects itself from software (no brain-damaged hardware interfaces), the kernel protects itself from userspace (privileged vs. unprivileged mode), system userspace protects itself from user userspace (root vs. non-root), and userspace protects itself from interpreted network code (sandboxing).

Re:Hardware of Software Problem? (1)

Baki (72515) | more than 5 years ago | (#25255249)

At least for consumer hardware we have come to expect that it cannot be damaged by buggy software, but in general it is not true that hardware should always protect itself against bad software. Just consider much of embedded software, e.g. the flight software for aeroplanes. Wrong software will result in "hardware damage", the same for most robots etc.

I am quite sure that even a microprocessor driven washing machine nowadays could damage itself if the (embedded) software were buggy.

Re:Hardware of Software Problem? (1)

mpe (36238) | more than 5 years ago | (#25256815)

At least for consumer hardware we have come to expect that it cannot be damaged by buggy software, but in general it is not true that hardware should always protect itself against bad software. Just consider much of embedded software, e.g. the flight software for aeroplanes.

Hence you'd never upgrade the firmware on all the redundant computers on an airliner at the same time - typically with a minimum interval (both by the calendar and in flying hours) required between such upgrades.

Re:Hardware of Software Problem? (1)

pslam (97660) | more than 5 years ago | (#25255453)

The strange thing is that I've written drivers for many EEPROMs, and they all have a few hoops you have to jump through to enable writing. It's not something you can just accidentally do.

Usually it's something like 'Read address 0xaaaa then 0xdddd then write some magic byte then the address then write 128 bytes'.

Perhaps Intel thought they didn't need all that magic?

The Real Bug (-1, Redundant)

Anonymous Coward | more than 5 years ago | (#25253267)

"The _real_ bug is clearly in the hardware design that allows you to brick those things without apparently even having a lock bit. I'm hoping Intel doesn't treat this as just a software bug. Some hw designer should be thinking hard about which orifice they put their head up in."

e1000 been broken a while (3, Insightful)

AaronW (33736) | more than 5 years ago | (#25253315)

About a year ago we built up some new machines to run Linux and found that multiple e1000 cards would cause the Ethernet connectivity to drop and become useless. We ended up replacing them with much cheaper Realtek cards and all the problems disappeared. I haven't trusted Intel since. It's as if there were some buggy interrupt interaction with the on-board Intel Ethernet in the 915 chipset.

Re:e1000 been broken a while (1)

Anonymous Coward | more than 5 years ago | (#25253481)

I've never had a problem with their cards. They're about the only NIC that I've never needed to mess with to get Linux to see. NICs built into the motherboard NB/SB are usually the biggest problem. The PCI-X cards work in PCI slots, and in the tests I've done they're usually able to push 30-40% more data through the network than other NICs.

Re:e1000 been broken a while (4, Informative)

sumdumass (711423) | more than 5 years ago | (#25253727)

3Com used to be that way too. I'm not exactly sure what it was, but the 3c905s rocked and would move data quite a bit faster than any other card at the time. I know they had full-blown data processors on the cards, but I assume the others did too. I used to go to computer shows just to pick them up for $10-$20 used, because they had the same effect on data performance as you'd see in rendering going from an S3 Trident video adapter to a GeForce card. I became seriously convinced at a LAN party: I had an AMD Athlon 800 system running Windows 98SE with 256 MB of memory, and we had to pull a 100 MB file from a file server to get a game's updates in sync so we could play. I started pulling the file last because I was helping others find it, and I was on the tail end of the third tier of uplinked switches, yet I had the file installed while others were still transferring it. The funny part is that people with their brand-new Windows XP 1.4 and 1.8 GHz-plus systems were still slower, and the only thing I can attribute it to is the NIC.

Intel caught up with 3Com in this respect, and despite my old fascination with 3Com, I'm actually an Intel fan in this one respect now.

Re:e1000 been broken a while (1)

kesuki (321456) | more than 5 years ago | (#25254097)

Processors and subsystems have gotten a lot faster since then.

I know, cheap Ethernet interfaces are slower than the fastest cards out there, but your experience, from many years back when an 800 MHz CPU was fast, is a bit dated. A 100 MB file shouldn't take long to pull from a file server even with a cheap NIC, unless there is a performance issue with the server in question; 100 megabytes shouldn't take more than a few seconds to transfer across a LAN.

In theory a 100 Mbit LAN should take 8 seconds to transfer a 100 megabyte file, and a 1000 Mbit LAN only 0.8 seconds. Obviously file I/O limitations apply: if your hard drive can only do 80 Mbit, and only at the start of the drive, it's going to take longer.

Re:e1000 been broken a while (-1)

Anonymous Coward | more than 5 years ago | (#25254797)

In theory a 100 Mbit LAN should take 8 seconds to transfer a 100 megabyte file, and a 1000 Mbit LAN only 0.8 seconds. Obviously file I/O limitations apply: if your hard drive can only do 80 Mbit, and only at the start of the drive, it's going to take longer.

You're forgetting TCP overhead of about 30%, so times would be more like 12 seconds / 2 seconds. Additionally, with consumer kit you're unlikely to do better than 40-50% of the maximum transfer rate due to bottlenecks.
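Those back-of-the-envelope numbers are easy to sanity-check; a quick sketch, where the 70% efficiency figure is just the ~30% overhead estimate above, not a measured value:

```python
def transfer_seconds(file_megabytes, link_megabits_per_s, efficiency=1.0):
    """Idealized transfer time for a file over a link.
    efficiency < 1.0 models protocol overhead and other losses."""
    return file_megabytes * 8 / (link_megabits_per_s * efficiency)

print(transfer_seconds(100, 100))                 # ideal 100 Mbit LAN: 8.0 s
print(transfer_seconds(100, 1000))                # ideal gigabit LAN: 0.8 s
print(round(transfer_seconds(100, 100, 0.7), 1))  # with ~30% overhead: 11.4 s
```

At 40-50% efficiency the same 100 MB file stretches to 16-20 seconds on a 100 Mbit LAN, which matches what most people actually see with consumer gear.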

Yes things are faster (0)

Anonymous Coward | more than 5 years ago | (#25256131)

I used to be pleased years ago when rsync over ssh could saturate a 100 Mb/s LAN (e.g. around 12 MB/s of actual disk throughput).

Yesterday I rsynced about 25 GB of files (Linux distro tree) onto my new laptop over gigabit LAN and sustained 35 MB/s the entire way. This is application speed, which means NICs and CPUs kept up with the I/O and encryption loads.

This performance is about 75% of my ideal disk speed on my laptop, measured with just one large sequential access dumped to /dev/null.

Re:e1000 been broken a while (1)

sumdumass (711423) | more than 5 years ago | (#25256313)

Of course, the 800 MHz system was when I first noticed there was a difference, back in 2000/2001, and things have come a long way since.

But to reach maximum speeds, you have to make sure you have newer equipment capable of hitting them and that the lines are in near-perfect order. You also have TCP overhead, which inflates the transmission size of the 100 MB file, and other factors to consider, like multiple users accessing the same interfaces and the disk-access time needed not only to break the file down for transmission but to reassemble it on the other computer.

I guess what I'm saying is that there is a difference among the network cards out there, and the Intel Pro adapters have what makes the difference. Don't concentrate too much on when I discovered this; a lot has come along in the last 7-8 years.

Saw something like that once (1)

Gazzonyx (982402) | more than 5 years ago | (#25254651)

I had the same thing pop up on a supermicro (ICH-7, IIRC... dual Xeon 5xxx's) at work. Recompiling the modules and reinstalling them seemed to fix the problem. Like most hardware problems, it seems to be just the wrong combination of drivers, hardware, software and luck.

I think a yum update is what triggered it, but I'm not sure; it just popped up out of nowhere and acted in such a way that I couldn't ever corner the thing. Recompiling the modules was one of those things that I did while I was thinking about the problem and trying to isolate stupid variables. I really didn't expect it to fix the problem.

I also remember that one of the network cables was found to be flaky some time later - it could all be coincidence.

At any rate, I've found Realtek chips to be... less than desirable, yet durable enough to take a good beating. Their Linux support isn't bad, either. You could do worse, in regards to bang for your buck, than a Realtek based card, IMHO.

Re:e1000 been broken a while (1)

LordNimon (85072) | more than 5 years ago | (#25255491)

It's funny you say that. A few years ago, I asked on a mailing list for the most Linux-friendly gigabit ethernet card, and almost everyone said e1000. I've been happy with mine ever since. My distro was a bit too old for the card, but I was able to download the drivers from intel.com and install them without any problems.

Re:e1000 been broken a while (1)

Fweeky (41046) | more than 5 years ago | (#25256101)

Quite a few problems like that seem to be MSI-X related, did you try disabling them?

Root cause still unknown? (4, Interesting)

AcidPenguin9873 (911493) | more than 5 years ago | (#25253331)

Yes, they released a patch so that the NVM can't be overwritten after the e1000e driver is loaded. But from what I can tell, they still don't know what is/was responsible for the overwriting.

FWIW, I'm almost positive that modern CPUs have debug traps for this exact sort of thing...you can trap arbitrary I/O writes via SMM or something...obviously I'm not in the debug loop, but I don't see why this has been so hard to figure out...

Re:Root cause still unknown? (1)

moteyalpha (1228680) | more than 5 years ago | (#25253839)

It makes me wonder if they have the tools available to do their job. When I did this type of work we had analyzers and ICE machines which makes it easy if you know how to use them. Are the kernel designers getting enough support to buy the needed hardware? Sometimes these things go beyond the software and can happen because of a physical condition that is untrappable in SMM, like a DMA over the top of refresh cycle fault.

Re:Root cause still unknown? (1)

vally_manea (911530) | more than 5 years ago | (#25254227)

Actually I think the guys working on this are Intel engineers so probably they have everything they need.

Re:Root cause still unknown? (1)

Anpheus (908711) | more than 5 years ago | (#25256247)

The problem is that rather than do it the easy way with that alphabet soup of acronyms listed up there, they broke out their handy electron microscope to examine it.*

* Yes, I'm jealous.

Re:Root cause still unknown? (1)

SL Baur (19540) | more than 5 years ago | (#25253881)

obviously I'm not in the debug loop, but I don't see why this has been so hard to figure out...

Because it bricked the card. No way to have it fixed other than to get a replacement, as there was no way to reload the firmware.

People were scared to test.

Re:Root cause still unknown? (2, Interesting)

Almahtar (991773) | more than 5 years ago | (#25254055)

Which makes me hope all attempts to write to the EEPROM are being logged in the new driver, with stacktraces.

Otherwise what's the point of testing them? Sure they won't brick your card, but you can't get very useful feedback.

Re:Root cause still unknown? (1)

jhol13 (1087781) | more than 5 years ago | (#25253891)

I think the more interesting question is "how can something overwrite it?"

By that I mean "aren't there any tests around?", not that Linux should (magically) become a microkernel (not that I would mind).

Re:Root cause still unknown? (1)

Ornedan (1093745) | more than 5 years ago | (#25254683)

From what I've read, the bug causing the overwrite is somewhere other than the network card's driver. Something is overwriting random memory, and it happens to hit the memory region mapped for writing the card's firmware.

Re:Root cause still unknown? (1)

jimicus (737525) | more than 5 years ago | (#25256299)

I think the more interesting question is "how can something overwrite it?"

Very easy, if the card is designed to have field-updateable firmware. You just need to send it the right (or in this case wrong) command.

Ideally the manufacturer would make it so that you have to go through all sorts of hoops before you've done anything permanent, but this isn't the first time [theregister.co.uk] something like this has happened.

Re:Root cause still unknown? (1)

jhol13 (1087781) | more than 5 years ago | (#25261563)

You missed my next sentence.

What I am complaining is the lack of proper testing in Linux. If there were proper tests for the module which does the overwriting, the problem would have never occured at all.

Re:Root cause still unknown? (1)

jimicus (737525) | more than 5 years ago | (#25262843)

What I am complaining about is the lack of proper testing in Linux. If there were proper tests for the module that does the overwriting, the problem would never have occurred at all.

Are you trolling or do you honestly not understand the implications of it being an alpha release?

In other words: "This release is for testing purposes; by all means report a bug if it breaks, but don't be too surprised if the breakage is catastrophic. If you use this on something important, you are nuts and should seek help." Traditional, closed-source development produces alpha releases too, and they may or may not break things. For software living entirely in userland you probably won't cause hardware problems, but at the kernel layer it's entirely possible, simply because so many things are designed to have field-upgradeable firmware (which is usually what gets damaged).

Working at a company that does embedded software development, I can tell you now that these things do happen from time to time. If they didn't, there would be no such thing as JTAG programmers.

The only difference here is that because the Linux kernel's development process is open to the world, these things are known by the whole world.

Re:Root cause still unknown? (4, Interesting)

SuperQ (431) | more than 5 years ago | (#25254475)

So the thing is, there is more than just a simple "eeprom write interface" on these chips.

Most of the time the EEPROM attached to the NIC is a cheap, small serial part, usually just a few kilobits, maybe 32 or 64 Kb. It contains mostly things like a bit of bootstrapping, a few "permanent" settings like the MAC address, and the PXE ROM.

And that's where the problems come in. This serial interface is usually an afterthought, and if there is noise on that bus, bits can flip. Or if something bad happens in the NIC code, you could accidentally write when you meant to read.

Usually this is recoverable, but I haven't looked into this specific corruption situation. I've had to deal with this kind of thing before. It's not fun.

Flashing NIC eeproms isn't something a normal end-user does all the time. 99% of the time it's written at the factory, stuffed on the board, and forgotten about.
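One concrete consequence: as I understand the driver source, the e1000/e1000e family validates this little EEPROM with a simple additive checksum, where the 16-bit words 0x00-0x3F must sum to 0xBABA modulo 2^16; that's how a corrupted image gets detected at load time. A rough sketch, with entirely made-up image contents:

```python
# The 0xBABA target comes from the e1000 driver; the image contents
# below are fabricated purely for illustration.
CHECKSUM_TARGET = 0xBABA
CHECKSUM_WORDS = 0x40  # words 0x00-0x3F are covered, incl. the checksum word

def eeprom_valid(words):
    """True if the covered words sum to the expected constant (mod 2**16)."""
    return sum(words[:CHECKSUM_WORDS]) & 0xFFFF == CHECKSUM_TARGET

def fix_checksum(words):
    """Recompute the final covered word so the image validates."""
    partial = sum(words[:CHECKSUM_WORDS - 1]) & 0xFFFF
    words[CHECKSUM_WORDS - 1] = (CHECKSUM_TARGET - partial) & 0xFFFF
    return words

image = fix_checksum([0x1234] * CHECKSUM_WORDS)  # made-up contents
print(eeprom_valid(image))                       # True
```

The recovery route mentioned in the summary (re-flashing a saved `ethtool -e` dump) fits this scheme: restore the image, and if anything like the MAC address differs between boards, recompute the checksum word.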

From what I can tell, it's a cocktail deal (1)

Gazzonyx (982402) | more than 5 years ago | (#25254659)

From what I can tell, the bug is only being seen on bleeding edge combinations of software in bleeding edge distros. They're thinking it's a combination of the driver and a new release of X (one allows for the conditions, the other glitches after that), but there's very little 'tried-and-true' stuff in a bleeding edge distro.

So in a nutshell... (2, Funny)

GlobalColding (1239712) | more than 5 years ago | (#25253349)

From RTFA, the cause of the problem has not been identified yet; however, the problem is prevented from presenting itself going forward by blocking writes/erases to the non-volatile memory. Since the problem was caught at the alpha/beta stage, the stable releases were unaffected. BTW, my boss tried to RTFA over my shoulder and shot cheese out of his ears (he is the non-techie type). It's threads like these that absolutely cement /.'s place as the world's dominant UBER NERD site.

Re:So in a nutshell... (0, Flamebait)

vistahator (1330955) | more than 5 years ago | (#25253553)

You know what? Fuck you. In the next few months a few major Linux distros (Ubuntu, openSUSE) will be releasing their next versions with the 2.6.27 kernel, and it would be a disaster if this bug were not fixed by then. Go troll ytmnd if this is all beneath you.

Re:So in a nutshell... (1)

BPPG (1181851) | more than 5 years ago | (#25253933)

It's threads like these that absolutely cement /.'s place as the world's dominant UBER NERD site.

ummm... good?

Re:So in a nutshell... (1)

jimicus (737525) | more than 5 years ago | (#25256311)

My boss tried to RTFA over my shoulder and shot cheese out of his ears

Can he do that on demand?

Solid state drives (0, Redundant)

ilovesymbian (1341639) | more than 5 years ago | (#25253351)

Does this affect solid state drives too? I just bought an Asus Eee PC 901.

Re:Solid state drives (1)

BPPG (1181851) | more than 5 years ago | (#25253957)

You have nothing to worry about, this article is referring to a development release of Linux, you won't see it in a normal distro...

**braces himself for the imminent whoosh

Re:Solid state drives (1)

SanityInAnarchy (655584) | more than 5 years ago | (#25254585)

If there's a whoosh, I don't get it either, other than that it has to be...

I don't think Intel makes solid state drives. Nor does Intel make the EEE PC. Nor does any EEE PC ship with an experimental kernel. Nor does an ethernet card have anything to do with a hard drive.

Some quick Googling shows that the 901 may have gigabit, maybe not -- and if it did, and if they were this particular Intel card, you might be affected. Which would still have nothing to do with the SSD.

But after checking the manuals I could find, it doesn't look like it supports gigabit at all.

I had a similar problem with Broadcom WiFi card (0, Troll)

postmortem (906676) | more than 5 years ago | (#25254677)

...after using it in Linux, it was not recognizable by any PC.