LinHES Forums
http://forums.linhes.org/

Building an updated box...
http://forums.linhes.org/viewtopic.php?f=23&t=22053
Page 2 of 4

Author:  tjc [ Thu Mar 24, 2011 7:44 pm ]
Post subject: 

Similar thoughts had occurred to me along with visions of using a grinder to shave the top 1/4" or so off of that ridiculously tall heat sink until it would play nice with others.

For the moment it's all by itself in there while I tweak the fan configuration to make things quieter without sacrificing too much cooling.

Author:  tjc [ Sat Mar 26, 2011 10:26 pm ]
Post subject: 

Ah ****. I really hate hardware compatibility problems...

It turns out that LinHES R6.0x doesn't play nice with the Realtek RTL8111/8168 NIC on this motherboard. It keeps trying to use the r8169 driver, and the utility to rebuild the initramfs mentioned in previous posts on this topic doesn't exist, which leaves me looking at building a custom kernel.

R7.00.02 does better with the networking but the audio support doesn't work and there are some other packages with broken dependencies.

I won't even go into the BS with the WD Caviar Green drives, which have a head unloading bug/misfeature in the firmware, and multiple unsuccessful attempts to get a working FreeDOS image with the necessary firmware update utility included.

I'm already 12 hours into this project and the system is still barely limping along...

Author:  snaproll [ Sun Mar 27, 2011 8:40 am ]
Post subject: 

WD 'Green' drives have been off my 'buy' list for some time, & it looks like they're gonna stay that way ......

Hitachi and Seagate ..... no problems ....

Author:  christ [ Sun Mar 27, 2011 8:47 am ]
Post subject: 

tjc wrote:
I won't even go into the BS with the WD Caviar Green drives, which have a head unloading bug/misfeature in the firmware, and multiple unsuccessful attempts to get a working FreeDOS image with the necessary firmware update utility included.

This is curious. I have about half a dozen of the things (2.0TB version) on my main CentOS-based system in various RAID configs. I have no issues at the moment, though I had seen some comments on Newegg saying that they don't play well in RAID.

I am curious what this head unloading issue is. Maybe when you get past your current crisis you can elaborate.

Author:  tjc [ Sun Mar 27, 2011 9:19 am ]
Post subject: 

As part of their power saving regime the green drives, and especially the advanced format ones, park their heads aggressively, after just a few seconds of idle time. The problem with that is that the drives are only designed to handle a certain number of head load/unload cycles, and normal Linux usage patterns (like writing system logs) can chew through the rated lifetime very quickly.

These WD FAQs contain more details:
http://wdc.custhelp.com/app/answers/detail/a_id/3263
http://wdc.custhelp.com/app/answers/detail/a_id/5357
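
For anyone wanting to see how fast a drive is chewing through its rated lifetime, the counter in question is SMART attribute 193 (Load_Cycle_Count). A minimal sketch, assuming smartmontools is installed and with /dev/sda as a placeholder device:

Code:
```shell
# Read the head load/unload cycle counter (SMART attribute 193).
# Requires smartmontools and root; /dev/sda is a placeholder.
command -v smartctl >/dev/null 2>&1 && smartctl -A /dev/sda | grep -i load_cycle_count || true

# The raw count is the last field of that line; parsing a captured
# sample line as an illustration:
sample="193 Load_Cycle_Count 0x0032 193 193 000 Old_age Always - 21047"
echo "$sample" | awk '{print $NF}'
```

Checking it twice a few hours apart gives a cycles-per-hour rate, which makes it obvious whether the drive will blow through its rated load/unload count early.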

Author:  snaproll [ Sun Mar 27, 2011 9:53 am ]
Post subject: 

I read those as saying "We think our process is the shiznet- It's all your fault fix your software- We don't want your Linux business" .... :x

I, for one, will be happy to accommodate them ....

Author:  christ [ Sun Mar 27, 2011 10:32 am ]
Post subject: 

tjc, thanks for that info. I have the WD20EADS drives. I think I may be ok because I use all of these drives for recordings or storage. My main drives are not green.

On my main recording drive set I have racked up about 100K load/unload cycles on SMART attribute 193 in about 15 months. Not too bad I suppose, but it may be worth disabling the parking function as they suggest, using hdparm.
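
For anyone trying that, a sketch of the hdparm approach (the device name is a placeholder, it needs root, and APM support varies by drive and firmware):

Code:
```shell
# Soften or disable the APM-driven head parking on a WD Green drive.
# /dev/sdb is a placeholder; run as root. Values of 128-254 keep APM
# enabled but discourage aggressive parking; 255 turns APM off
# entirely where the firmware supports it.
hdparm -B 254 /dev/sdb

# Note: -B settings generally do not survive a power cycle, so they
# usually go in a boot script. WD's own DOS utility (wdidle3) changes
# the idle timer in the firmware itself instead.
```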

Author:  tjc [ Sun Mar 27, 2011 11:42 am ]
Post subject: 

I can't believe that I forgot to check the NIC specs on this stupid motherboard. I paid close attention to a number of other issues, but assumed that by this point GigE support was plain vanilla. Even with their drivers the stupid RTL8111/8168B doesn't support 9k jumbo frames. This whole project has been more than a little surreal. It was supposed to be a simple hardware refresh that would take maybe 8 hours of my time, not counting time when the machines were making backups and copying data from one disk to another. Instead...

Can anyone recommend a "no brainer" PCIe-1x GigE NIC that works well with R6?

This Intel one looks reasonable and seems to have decent Linux support. http://www.newegg.com/Product/Product.aspx?Item=33-106-033

Author:  christ [ Sun Mar 27, 2011 11:57 am ]
Post subject: 

I would be surprised if an Intel card didn't work on Linux and a comment in the Newegg page from one person claims it works fine on Linux.

I use Marvell (88E8056) and nVidia (MCP79) chipsets on my systems without issue. They should support jumbo frames, but I have not used this capability yet as I still have devices that run 100Mb.

On the Arch wiki they also have a hardware compatibility list:
https://wiki.archlinux.org/index.php/HC ... ers_(Wired)

It is extremely light in information though.

Author:  tjc [ Sun Mar 27, 2011 12:20 pm ]
Post subject: 

christ wrote:
a comment in the Newegg page from one person claims it works fine on Linux.

Actually, if you search for Linux within the Feedback on Newegg, there are 25 comments that mention it. I'm in the process of reading through them now. ;-) There's even one that says "works good in Linux, doesn't work in XP" :-D It also looks like the 2.6.28 kernel includes the appropriate driver (e1000e).

Author:  slowtolearn [ Sun Mar 27, 2011 1:42 pm ]
Post subject: 

tjc wrote:
This Intel one looks reasonable and seems to have decent Linux support. http://www.newegg.com/Product/Product.aspx?Item=33-106-033

I used the "older brother" - EXPI9300PT - of that card in my KM 5.5 and Slackware systems to rsync between the two - Never had a problem with it.

Of course the chipsets are different (82574L vs. the 82572GI in the PT), but I have used Intel NICs in many *nix machines and can't recall having any trouble with them.

Author:  tjc [ Sun Mar 27, 2011 2:18 pm ]
Post subject: 

I _just_ got a custom kernel package built with the r8169 driver stripped out. That, plus dropping in the r8168 driver (r8168-8.022.00) from Realtek and some mkinitcpio fiddling, got me on the network with a 6.04 kernel. We'll see how stable the combo is, as there are lots of stories about this NIC going stupid over time.
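
For anyone attempting the same swap, the general shape of it looks like this, assuming an Arch-derived layout like LinHES R6 and that r8169 is built as a module rather than into the kernel (the paths and preset name are educated guesses, adjust to taste):

Code:
```shell
# Keep the in-kernel r8169 driver from grabbing the NIC so the
# out-of-tree Realtek r8168 module can bind to it instead.
echo "blacklist r8169" > /etc/modprobe.d/r8169-blacklist.conf

# After building/installing the r8168 module, regenerate the module
# dependency map and the initramfs so early boot honors the blacklist.
depmod -a
mkinitcpio -p kernel26   # preset name varies by kernel package

# Verify after reboot:
lspci -k | grep -A2 Ethernet   # look for "Kernel driver in use: r8168"
```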

I still went ahead and ordered the Intel card for insurance, because I don't want to go through this again.

As soon as I get a shower, go grocery shopping, tidy up my notes, ... the directions will get posted for the next unlucky stiff to get bit by this wretched POS.

Author:  Martian [ Sun Mar 27, 2011 4:47 pm ]
Post subject: 

I went through a similar experience with Realtek NICs in my last server build. While I was able to get the Realtek NIC to work with their driver module, it never worked perfectly. I finally bit the bullet and bought a $30 Intel NIC that worked perfectly "out of the box".

As a general rule I now avoid Realtek chips with Linux.

Martian

Author:  tjc [ Wed Mar 30, 2011 7:08 am ]
Post subject: 

BTW - The Intel card arrived last night and worked like a champ with 6.04. Just plugged it in, rebooted, and no muss, no fuss, everything just worked.

Author:  tjc [ Sat Apr 09, 2011 6:47 pm ]
Post subject: 

A couple further notes on this motherboard based on ongoing experience with it.

- The board is not supported by xmbmon, so thermal monitoring of the CPU and such requires installing lm_sensors and modifying the rrd_mbtemp.pl script to get its data from that.
Code:
[root@black3 rrd]# diff rrd_mbtemp.pl.old rrd_mbtemp.pl
20,22c20,23
< &ProcessHDD("mbtemp", "T 1", "Motherboard");
< &ProcessHDD("cputemp", "T 2", "CPU");
< &ProcessHDD("ambtemp", "T 3", "Case ambient");
---
> &ProcessHDD("mbtemp", "temp1:", "Motherboard");
> &ProcessHDD("cputemp", "temp2:", "CPU");
> # &ProcessHDD("ambtemp", "T 3", "Case ambient");
> &ProcessHDD("gputemp", "Temperature", "GPU");
32c33,39
<         my $temp=`/usr/bin/mbmon -c 1 -$_[1]`;
---
>         # my $temp=`/usr/bin/mbmon -c 1 -$_[1]`;
>         my $temp;
>         if ($_[0] =~ "gputemp") {
>                 $temp=`/usr/bin/nvidia-smi -g 0 | grep $_[1] | cut -d' ' -f2`;
>         } else {
>                 $temp=`/usr/bin/sensors | grep $_[1] | cut -c'15-16'`;
>         }

- The board reports three temperatures via lm_sensors: temp1 is the motherboard (possibly southbridge) temperature, temp2 is the CPU, and temp3 is the northbridge, with what looks like a 30C offset. The last is very annoying: I ended up tearing the machine apart, removing that heat sink, cleaning it, checking the seating, and reinstalling it with good thermal compound, only to have the temps stay the same. Sanity checking the temperature afterwards proved that there was NFW that the reported 64-68C was correct; it was actually more like 34-38C or even lower.

- The GPU temperatures from the Geforce 210 board can be read using the nvidia-smi utility which makes for a nice little enhancement to the RRD graphs. At some point when my robotic overlords aren't so demanding I may find time to merge the thermal monitoring for the various chips into one tidy graph and do some other enhancements.
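
In case it's useful to anyone scripting this: more recent nvidia-smi releases also expose a machine-readable query mode, which is less fragile than grepping the human-readable output (the flag names assume a reasonably current driver):

Code:
```shell
# Read the GPU core temperature. The grep/cut approach in the diff
# above parses human-readable output; newer nvidia-smi builds can
# emit just the number, which drops straight into an RRD update:
nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader
```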

All times are UTC - 6 hours
Powered by phpBB® Forum Software © phpBB Group
http://www.phpbb.com/