mrb's blog

From 32 to 2 ports: Ideal SATA/SAS Controllers for ZFS & Linux MD RAID

Keywords: raid sas sata storage zfs

[Last updated on 2017-03-20]

I need a lot of reliable and cheap storage space (media collection, backups). Hardware RAID tends to be expensive and clunky. I see quite a few advantages in ZFS (on Solaris/FreeBSD) and in Linux MD RAID:

  • Performance. In many cases they are as fast as hardware RAID, and sometimes faster, because the OS is aware of the RAID layout and can optimize I/O patterns for it. Even the most compute-intensive RAID5 or RAID6 parity calculations take negligible CPU time on a modern processor. For a concrete example, Linux 2.6.32 on a Phenom II X4 945 3.0GHz computes RAID6 parity at close to 8 GB/s on a single core (check dmesg: "raid6: using algorithm sse2x4 (7976 MB/s)"). So sustaining a throughput of 500 MB/s on a Linux MD raid6 array costs only about 1.6% of total CPU time in parity computation (500/7976 ≈ 6.3% of one core). Regarding the optimized I/O patterns, here is an interesting anecdote: one of the steps YouTube took in its early days to scale its infrastructure was to switch from hardware RAID to software RAID on their database server, which increased I/O throughput by 20-30%. Watch Seattle Conference on Scalability: YouTube Scalability @ 34'50".
  • Scalability. ZFS and Linux MD RAID allow building arrays across multiple disk controllers, or multiple SAN devices, alleviating throughput bottlenecks that can arise on PCIe links or GbE links. Hardware RAID, by contrast, is restricted to a single controller, with no room for expansion.
  • Reliability. No hardware RAID = one less hardware component that can fail.
  • Ease of recoverability. The data can be recovered by putting the disks in any server. There is no reliance on a particular model of RAID controller.
  • Flexibility. It is possible to create arrays on any disk on any type of controller in the system, or to move disks from one controller to another.
  • Ease of administration. There is only one software interface to learn: zpool(1M) or mdadm(8). No need to install proprietary vendor tools, or to reboot into BIOSes to manage arrays.
  • Cost. Obviously cheaper since there is no hardware RAID controller to buy.
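The parity-overhead claim in the Performance bullet is easy to verify with back-of-the-envelope arithmetic; here is a quick sketch in Python using only the numbers quoted above:

```python
# Back-of-the-envelope check of the RAID6 parity overhead quoted above.
parity_rate_mb_s = 7976       # single-core RAID6 parity rate from dmesg
array_throughput_mb_s = 500   # target array throughput
cores = 4                     # the Phenom II X4 945 is a quad-core

# Fraction of one core spent computing parity at 500 MB/s:
core_fraction = array_throughput_mb_s / parity_rate_mb_s   # ~6.3%

# Fraction of total CPU time on the quad-core:
cpu_fraction = core_fraction / cores                       # ~1.6%

print(f"{core_fraction:.1%} of one core, {cpu_fraction:.1%} of total CPU time")
```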

Consequently, many ZFS and Linux MD RAID users, such as me, look for non-RAID controllers that are simply reliable, fast, and cheap, with no bells and whistles. Most motherboards have up to 4 or 6 onboard ports (be sure to always enable AHCI mode in the BIOS, as it is the best-designed hardware interface a chip can present to the OS, and enables maximum performance), but beyond 4 or 6 disks there are surprisingly few choices of controllers. Over the years, I have spent quite some time on controller manufacturers' websites and on the LKML, linux-ide, and ZFS mailing lists, and have established a list of SATA/SAS controllers that are ideal for ZFS or Linux MD RAID. I also included links to online retailers because some of these controllers are not that easy to find online.

The list contains SAS controllers because they are just as good an option as SATA controllers: many of them are as inexpensive as SATA controllers (even though they target the enterprise market), they are fully compatible with SATA 3Gbps and 6Gbps disks, and they support all the usual features: hotplug, queueing, etc. A SAS controller typically presents SFF-8087 connectors, also known as internal mini SAS or iPASS connectors. Up to 4 SATA drives can be connected to such a connector with an SFF-8087 to 4xSATA forward breakout cable (as opposed to reverse breakout). This type of cable usually sells for $15-30. Here are a few links if you have trouble finding them.

There are really only 5 significant manufacturers of discrete non-RAID SATA/SAS controller chips on the market: LSI, Marvell, JMicron, Silicon Image, and ASMedia. Controller cards from Adaptec, Areca, HighPoint, Intel, Supermicro, Tyan, etc, most often use chips from one of these 5 manufacturers.

Here is my list of non-RAID SATA/SAS controllers, from 32-port down to 2-port controllers, with the kernel driver used to support them under Linux and Solaris. There is also limited information on FreeBSD support. I focused on native PCIe controllers, with a single PCI-X exception (the very popular 88SX6081). The MB/s/port number in square brackets indicates the maximum practical throughput that can be expected from each SATA port, assuming concurrent I/O on all ports, given the bottleneck of the host link or bus (PCIe or PCI-X). I assumed that PCIe controllers achieve only 60-70% of the maximum theoretical PCIe throughput, and that PCI-X controllers achieve only 80% of the maximum theoretical PCI-X throughput. These assumptions concur with what I have seen in real-world benchmarks, assuming a Max_Payload_Size setting of either 128 or 256 bytes for PCIe (a common default value) and a more or less default PCI latency timer setting for PCI-X. As of May 2010, modern disks can easily reach 120-130MB/s of sequential throughput at the beginning of the platter, so avoid controllers offering less than 150MB/s/port if you want to eliminate any possibility of a bottleneck.

32 ports

  • [SAS] 4 x switched Marvell 88SE9485, 6Gbps, PCIe (gen2) x16 [150-175MB/s/port]
    [Update 2011-09-29: Availability: $850 $850. This is a HighPoint HBA combining 4 x 8-port Marvell 88SE9485 with PCIe switching technology: RocketRAID 2782.]
    • Linux/Solaris/FreeBSD support: see Marvell 88SE9485 or 88SE9480 below

24 ports

  • [SAS] 3 x switched Marvell 88SE9485, 6Gbps, PCIe (gen2) x16 [200-233MB/s/port]
    [Update 2011-09-29: Availability: $540 $620. This is a HighPoint HBA combining 3 x 8-port Marvell 88SE9485 with PCIe switching technology: RocketRAID 2760A.]
    • Linux/Solaris/FreeBSD support: see Marvell 88SE9485 or 88SE9480 below

16 ports

  • [SAS] LSI SAS2116, 6Gbps, PCIe (gen2) x8 [150-175MB/s/port]
    Availability: $400 $510. LSI HBA based on this chip: LSISAS9200-16e, LSISAS9201-16i. [Update 2010-10-27: only the model with external ports used to be available but now the one with internal ports is available and less expensive.]
  • [SAS] 2 x switched Marvell 88SE9485, 6Gbps, PCIe (gen2) x16 [300-350MB/s/port]
    [Update 2011-09-29: Availability: $450 $480. This is a HighPoint HBA combining 2 x 8-port Marvell 88SE9485 with PCIe switching technology: RocketRAID 2740 and 2744.]
    • Linux/Solaris/FreeBSD support: see Marvell 88SE9485 or 88SE9480 below

8 ports

  • [SATA] 4 x switched ASMedia ASM1062, 6Gbps, PCIe (gen2) x4 [150-175MB/s/port]
    Availability: $80 $95 (799 NOK) $120 (999 NOK). ST Lab HBA based on this chip: A-590.
    • Linux support: ahci (3.6+)
  • [SAS] Marvell 88SE9485 or 88SE9480, 6Gbps, PCIe (gen2) x8 [300-350MB/s/port]
    Availability: $280. [Update 2011-07-01: Supermicro HBA based on this chip: AOC-SAS2LP-MV8]. Areca HBA based on the 9480: ARC-1320. HighPoint HBA based on the 9485: RocketRAID 272x. Lots of bandwidth available to each port. However it is currently not supported by Solaris. I would recommend the LSI SAS2008 instead, which is cheaper, better supported, and provides just as much bandwidth.
    • Linux support: mvsas (94xx: 2.6.31+, ARC-1320: 2.6.32+)
    • Solaris support: not supported (see 88SE6480)
    • Mac OS X support: [Update 2014-06-26: the only 88SE9485 or 88SE9480 HBAs supported by Mountain Lion (10.8) and up seem to be HighPoint HBAs]
  • [SAS] LSI SAS2008, 6Gbps, PCIe (gen2) x8 [300-350MB/s/port]
    Availability: $130 $140 $180 $220 $220 $230 $240 $290. [Update 2010-12-21: Intel HBA based on this chip: RS2WC080]. Supermicro HBAs based on this chip: AOC-USAS2-L8i AOC-USAS2-L8e (these are 2 "UIO" cards with the electronic components mounted on the other side of the PCB which may not be mechanically compatible with all chassis). LSI HBAs based on this chip: LSISAS9200-8e LSISAS9210-8i LSISAS9211-8i LSISAS9212-4i4e. Lots of bandwidth per port. Good Linux and Solaris support.
  • [SAS] LSI SAS1068E, 3Gbps, PCIe (gen1) x8 [150-175MB/s/port]
    Availability: $110 $120 $150 $150. Intel HBAs based on this chip: SASUC8I. Supermicro HBAs based on this chip: AOC-USAS-L8i AOC-USASLP-L8i (these are 2 "UIO" cards - see warning above.) LSI HBAs based on this chip: LSISAS3081E-R LSISAS3801E. Can provide 150-175MB/s/port of concurrent I/O, which is good enough for HDDs (but not SSDs). Good Linux and Solaris support. This chip is popular because it has very good Solaris support and was chosen by Sun for their second generation Sun Fire X4540 Server "Thumper". However, beware, this chip does not support drives larger than 2TB.
    • Linux support: mptsas
    • Solaris support: mpt
    • FreeBSD support: mpt (supported at least since 7.3)
  • [SATA] Marvell 88SX6081, 3Gbps, PCI-X 64-bit 133MHz [107MB/s/port]
    Availability: $100. Supermicro HBA based on this chip: AOC-SAT2-MV8. Based on PCI-X, an aging technology being replaced by PCIe. The approximately 107MB/s/port of concurrent I/O it supports is a bottleneck with modern HDDs. However, this chip is especially popular because it has very good Solaris support and was chosen by Sun for their first-generation Sun Fire X4500 Server "Thumper".
    • Linux support: sata_mv (no suspend support)
    • Solaris support: marvell88sx
    • FreeBSD support: ata (supported at least since 7.0, if the hptrr driver is commented out)
  • [SAS] Marvell 88SE6485 or 88SE6480, 3Gbps, PCIe (gen1) x4 [75-88MB/s/port]
    Availability: $100. Supermicro HBAs based on this chip: AOC-SASLP-MV8. The PCIe x4 link is a bottleneck for 8 drives, restricting the concurrent I/O to 75-88MB/s/port. A better and slightly more expensive alternative is the LSI SAS1068E.

4 ports

  • [SAS] LSI SAS2004, 6Gbps, PCIe (gen2) x4 [300-350MB/s/port]
    Availability: $160. LSI HBA based on this chip: LSISAS9211-4i. Quite expensive; I would recommend buying a (cheaper!) 8-port controller.
  • [SAS] LSI SAS1064E, 3Gbps, PCIe (gen1) x8 [300-350MB/s/port]
    Availability: $120 $130. Intel HBA based on this chip: SASWT4I. [Update 2010-10-27: LSI HBA based on this chip: LSISAS3041E-R.] It is quite expensive. [Update 2014-12-04: And it does not support drives larger than 2TB.] For these reasons, I recommend instead buying a cheaper 8-port controller.
    • Linux support: mptsas
    • Solaris support: mpt
    • FreeBSD support: mpt (supported at least since 7.3)
  • [SAS] Marvell 88SE6445 or 88SE6440, 3Gbps, PCIe (gen1) x4 [150-175MB/s/port]
    Availability: $80. Areca HBA based on the 6440: ARC-1300. Adaptec HBA based on the 6440: ASC-1045/1405. Provides good bandwidth at a decent price.
    • Linux support: mvsas (6445: 2.6.25 or 2.6.31 ?, 6440: 2.6.25+, ARC-1300: 2.6.32+)
    • Solaris support: not supported (see 88SE6480)
  • [SATA] Marvell 88SX7042, 3Gbps, PCIe (gen1) x4 [150-175MB/s/port]
    Availability: $70. Adaptec HBA based on this chip: AAR-1430SA. Rosewill HBA based on this chip: RC-218. This is the only 4-port SATA controller supported by Linux providing acceptable throughput to each port. [2010-05-30 update: I bought one for $50 from Newegg in October 2009. Listed at $70 when I wrote this blog. Currently out of stock and listed at $90. Its popularity is spreading...]
  • [SAS] Marvell 88SE6340, 3Gbps, PCIe (gen1) x1 [38-44MB/s/port]
    Hard to find. Only found references to this chip on Marvell's website. Performance is low anyway (38-44MB/s/port).
    • Linux support: mvsas
    • Solaris support: not supported (see 88SE6480)
  • [SATA] Marvell 88SE6145 or 88SE6141, 3Gbps, PCIe (gen1) x1 [38-44MB/s/port]
    Hard to find. Chip seems to be mostly found on motherboards for onboard SATA. Performance is low anyway (38-44MB/s/port).
    • Linux support: ahci
    • Solaris support: ahci
    • FreeBSD support: ahci

2 ports

  • [SATA] ASMedia ASM1060 or ASM1061, 6Gbps, PCIe (gen2) x1 [150-175MB/s/port]
    Availability: $17.
    • Linux support: ahci (3.6+)
  • [SATA] Marvell 88SE9128 or 88SE9125 or 88SE9120, 6Gbps, PCIe (gen2) x1 [150-175MB/s/port]
    Availability: $25 $35. HighPoint HBA based on this chip: Rocket 620. LyCOM HBA based on this chip: PE-115. Koutech HBA based on this chip: PESA230. This is the only 2-port chip on the market with no bottleneck caused by the PCIe link at Max_Payload_Size=128. Pretty surprising that it is being sold for such a low price.
    • Linux support: ahci
    • Solaris support: not supported [Update 2010-09-21: Despite being AHCI-compliant, this series of chips seems unsupported by Solaris according to reader comments, see below.]
    • FreeBSD support: ahci
  • [SATA] Marvell 88SE6121, 3Gbps, PCIe (gen1) x1 [75-88MB/s/port]
    Hard to find. Chip seems to be mostly found on motherboards for onboard SATA.
    • Linux support: ahci
    • Solaris support: ahci
    • FreeBSD support: ahci
  • [SATA] JMicron JMB362 or JMB363 or JMB366, 3Gbps, PCIe (gen1) x1 [75-88MB/s/port]
    Availability: $22.
    • Linux support: ahci
    • Solaris support: ahci
    • FreeBSD support: ahci
  • [SATA] SiI3132, 3Gbps, PCIe (gen1) x1 [75-88MB/s/port]
    Availability: $20. Warning: the overall bottleneck of the PCIe link is 150-175MB/s, i.e. 75-88MB/s/port, but the chip also has a per-port bottleneck of 110-120MB/s. So a single SATA device on a single port cannot use the full 150-175MB/s by itself; it will be capped at 110-120MB/s.

Finding cards based on these controller chips can be surprisingly difficult (I have had to zoom in on product images on newegg.com to read the inscription on the chip before buying), which is why I included links to online retailers.

For reference, the maximum practical throughputs per port I assumed have been computed with these formulas:

  • For PCIe gen2: 300-350MB/s (60-70% of 500MB/s) * pcie-link-width / number-of-ports
  • For PCIe gen1: 150-175MB/s (60-70% of 250MB/s) * pcie-link-width / number-of-ports
  • For PCI-X 64-bit 133MHz: 853MB/s (80% of 1066MB/s) / number-of-ports
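The formulas above can be sketched in a few lines of Python, reproducing the bracketed [MB/s/port] figures from the list. The 60-70% (PCIe) and 80% (PCI-X) efficiency factors are the assumptions stated in the text, not measured values:

```python
# Per-port throughput formulas from the list above.
def pcie_per_port(gen, link_width, ports):
    """Practical per-port throughput range (MB/s) over a PCIe host link."""
    per_lane = {1: 250, 2: 500}[gen]   # theoretical MB/s per PCIe lane
    return (0.60 * per_lane * link_width / ports,
            0.70 * per_lane * link_width / ports)

def pcix_per_port(ports, bus_mb_s=1066, efficiency=0.80):
    """Practical per-port throughput (MB/s) on a shared PCI-X bus."""
    return bus_mb_s * efficiency / ports

# LSI SAS2008: 8 ports on a PCIe gen2 x8 link -> 300-350 MB/s/port
print(pcie_per_port(gen=2, link_width=8, ports=8))
# Marvell 88SX6081: 8 ports on PCI-X 64-bit 133MHz -> ~107 MB/s/port
print(round(pcix_per_port(ports=8)))  # -> 107
```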

To anyone building ZFS or Linux MD RAID storage servers, I recommend first making use of all onboard AHCI ports on the motherboard, then putting any extra disks on a discrete controller, preferably one supporting the AHCI hardware interface, which over the years has become the most common standard interface for storage controllers.

Comments

go wrote: very interesting article, thank you for sharing it.

another update:

warning: the AOC-SAT2-MV8 SATA controller (8 ports, PCI-X 64-bit) only supports hard drives up to 1 TB
28 Jun 2010 18:50 UTC

vlad wrote: The Addonics HBA AD2SA6GPX1 (and its eSATA variant ADSA6GPX1-2E) is actually based on the Marvell 88SE9123 chipset. The OpenSolaris ahci driver does not support that chipset yet. (The ahci driver is attached but the disks connected to the adapter are not visible to the system.) 09 Jul 2010 20:26 UTC

mrb wrote: Thanks vlad, I have updated the blog entry. 11 Jul 2010 05:36 UTC

Mark wrote: Drifting slightly, but on i7 systems, using discrete controller(s) connected to the MCH/IOH PCI Express ports might help avoid the 2GB/s DMI interface becoming a bottleneck. This is only likely to be an issue for those building raid / zfs based around SSDs and producing throughput greater than 1GB/s. 17 Aug 2010 17:45 UTC

GSnyder wrote: First-rate writeup, Marc - thanks for working on this. I wish I'd seen it a week ago before I started slogging through all of these issues on my own. :-)

Unfortunately, OpenSolaris (and I assume, Solaris) doesn't yet (as of August, 2010) support cards based on the Marvell 88SE912x series, even though they are billed as being AHCI compliant. Several users have confirmed that the OS sees the card but not the attached drives.

Currently I'm using a card based on a Silicon Image 3124, but the performance is notably inferior to the built-in SATA ports on the Intel ICH7 motherboard. There is also an OpenSolaris driver issue that produces frequent hesitations in the data flow, not to mention the fact that OpenSolaris doesn't support this controller's NCQ.

I think I'm going to give the SuperMicro AOC-USAS-L8i a try.
23 Aug 2010 09:20 UTC

mrb wrote: Thanks GSnyder. Post updated.

Back to the question of software vs. hardware RAID, Intel acknowledges too that software RAID can be faster:
http://www.theregister.co.uk/2010/09/15/sas_in_patsburg/
22 Sep 2010 03:32 UTC

audi wrote: Great article. Very useful information on what controller cards are available. I found this just after I had done my own bit of researching. LSI now has a 16 port internal card for around $400.

LSI SAS 9201-16i

This card will work nicely in a 20+ hard drive case. I plan on getting this card to create a huge backup server using ZFS and OpenSolaris.
18 Oct 2010 21:05 UTC

John wrote: Wondering why these are not included, since they are officially supported by Solaris 10.
LSI SAS 3801 EL-S PCI-E Low Profile SAS
LSI SAS 3801 XL-S PCI-X Low Profile SAS
26 Oct 2010 09:23 UTC

mrb wrote: audi: great find!

john: I missed the PCIe one (and purposefully ignored the PCI-X model). Also I just found yet another one: LSISAS3081E-R (with internal ports).

I have updated my post. Thanks guys.
27 Oct 2010 10:58 UTC

Instigater wrote: I could get 126MB/s read and 115MB/s write on sil3132 and gigabyte i-ram. Tested with intel iometer on raw storage device. 02 Nov 2010 19:02 UTC

mrb wrote: Instigater: thanks. I updated my "110MB/s" number to "110-120MB/s" as per your observations. 02 Nov 2010 19:50 UTC

Luzipher wrote: Thanks for a great post ! This saved me a few hundred Euros.
Here is a post about how to fit the Supermicro UIO cards nicely into standard PCIe slots:
"(http://blog.agdunn.net/?p=391)":http://blog.agdunn.net/?p=391
Note that the gen2 version (AOC-USAS2-L8i) has some LED indicators on its bracket, but they look easily removable (in fact the LEDs sit on the board itself; it's just a transparent plastic piece that routes the light to the bracket).
05 Nov 2010 23:21 UTC

Brian wrote: I'm trying to understand how many drives I can connect to an ESATA port on a motherboard that says it is using the JMicron JMB362 chip. I vaguely understand that port multiplier technology is involved, but JMicron's site says this chip "supports" port multiplier technology and I can't tell if that's the same thing as saying it includes/implements such functionality or whether the enclosure I buy needs to have some sort of additional controller. Their site says that this chip "Supports 2-port 3.0Gbps SATA II interface and Supports two independent SATA II channels (separate logic and FIFO)" and this motherboard has two ESATA ports, so I think each port is driven by this chip. I just want to know if I can connect 4 drives to one port, two drives to each port, or only one drive to each port. The hard drive enclosures that require your mobo to support port multiplier technology are a lot cheaper, so I'd prefer to get one of those if it would work. Any ideas? 02 Dec 2010 00:07 UTC

mrb wrote: This wording means the motherboard is not using a port multiplier, but that it does support an (optional) port multiplier downstream (eg. in an enclosure to allow sharing this sata link between multiple drives). However you do not _need_ to use a port multiplier.

The chip JMB362 supports 2 sata ports. One of them is your eSATA port, the other one is either internal or unused (the pins of the chip are not wired).
02 Dec 2010 05:30 UTC

domiel wrote: Just wondering if you (or any of your readers) know anything about the intel RS2WC080. I'm guessing it's equivalent to the LSI SAS2008 but it may be a RAID only device with no JBOD mode (I've been bitten by that before - LSI 84016E)

Does anyone know about this card?
20 Dec 2010 04:43 UTC

mrb wrote: Good find domiel. The RS2WC080 is definitely based on the LSI SAS2008 per the specs. It looks like it supports JBOD -- that's my gut feeling based on Intel presenting it as an "entry-level RAID" controller. 22 Dec 2010 03:34 UTC

jloe wrote: Why is it that the cards using the mini-sas (iPASS) connectors
only give you one activity LED per iPASS port?

A 16 drive controller card only has 4 activity LEDs!

Thanks,

Joe
22 Dec 2010 09:28 UTC

mrb wrote: jloe: I am frustrated by this as well... 23 Dec 2010 05:21 UTC

swmike wrote: AOC-SASLP-MV8 doesn't work until (reportedly) 2.6.37, before that it's highly unstable and won't even survive an md resync. SMART only makes it worse. 15 Jan 2011 14:32 UTC

289cui wrote: the SIL3132 has poor performance, only 27MB/s with both ports used (the 3rd HDD of the raidz1 tank is on an ICH6R which runs great with 3 other HDDs for another tank; 80 MB/s over LAN which is my limiting factor) running EON/opensolaris b130. luckily I have no bad blocks so no bus reset problems.
under windows xp the SIL3132 does 110MB/s with only 1 HDD connected.
17 Jan 2011 00:22 UTC

mrb wrote: 289cui: something is definitely wrong with your setup. A Sil3132 in one of my OpenSolaris 2009.06 boxes achieves 100MB/s with 1 drive, and 145MB/s with 2 drives, confirming the perf numbers in my post. 17 Jan 2011 02:48 UTC

domiel wrote: Has anyone tried using either the SAS2008 or LSI1068E based controllers with the new 3TB WD drives? I emailed LSI but they said that they haven't tested either of them as yet.

As an aside if anyone is using the new 7K3000's what do you think of them as there is scant info out there as to how they perform.
18 Jan 2011 03:42 UTC

domiel wrote: Sorry, a correction - that should be 3TB _Hitachi_ drives... which is also the model I referred to as 7K3000 18 Jan 2011 04:47 UTC

Kristleifur Daðason wrote: Thank you! SUPER useful summary of everything.

I can attest that the old PCI-X 88SX6081 Marvell SATA chip in the AOC-SAT2-MV8 works really well under Linux. LSI PCIe cards have worked extremely well for me as well.

(Haven't had any luck with controllers from Promise so far, not under Linux at least. Do they use their own chips?)
23 Jan 2011 20:09 UTC

mrb wrote: Kristleifur: Good question... In the early days of SATA, circa 2003, Promise used to design many SATA controller chips and cards. But nowadays they seem to have scaled back this activity. I find very little information about PCIe Promise SATA/SAS cards. It is unclear whether they are well supported by Linux/Solaris/BSD and which chips the cards use. Their website is grossly lacking in information. 24 Jan 2011 03:56 UTC

289cui wrote: @mrb
strange, and your opensol 2009.06 is even older than EON 0.600 snv_130 (2010-04). sometimes I got 35-37 MB/s with some flukes over 40MB/s.
the hdds are 3x F2 Samsung 1.5 TB, maybe a bad model?
I just don't know what to do, because the other 3 hdds on the intel southbridge are rockin.
28 Feb 2011 13:12 UTC

svrocket wrote: mrb -

enjoyed this blog post mucho for building my ZFS NAS/SAN beast. For those that like to live dangerously, dealextreme sells dodgy second-tier QA castoff LSI 3041s and 3081s for pretty cheap. But if it's DOA, good luck shipping it back to China!

http://www.dealextreme.com/p/lsi-sas3041e-r-4-port-sas-sata-host-bus-adapter-51317
09 Mar 2011 07:36 UTC

289cui wrote: dmesg is showing me a notice concerning IRQ sharing with my SIL3132; normally this shouldn't be a problem, but it seems in my case it is.

so my idea was buying a JMB363, which uses ahci as driver in opensolaris, and hoping the ahci driver has no problem with irq-sharing, I tested opensolaris on my workstation with a jmb363 chip onboard, it worked.

so I bought an add-on PCIe x1 card based on the JMB363 chip; unfortunately the card is recognized as a "raid-controller" (class code 10400, not 10601 which is SATA AHCI 1.0) so the ahci driver is not attached.
so I went to the add-on card's BIOS, the hdds are configured as "non-raid", and found no option to select legacy IDE, AHCI or RAID like in the BIOS of the workstation mainboard with the onboard 363.
flashed a newer option-rom/bios for the add-on card, but still no option and no AHCI.

windows xp and 7 also recognize the jmb363 based add-on-card as raid-class-controller.

but linux (partition magic 5.10) sees two devices:
1 IDE (nothing connected)
1 AHCI (my 2 hdds and ahci-driver attached)

which would be nice for me if I wanna use linux (but I don't) - but bad for people who actually use the card as raid (native by the chip and not via softraid under linux) if there is no switch to avoid this.

so how to achieve this under opensolaris ?
linux seems reading/writing pci(e)-register in "early stages" in linux/drivers/pci/quirks.c (http://lxr.linux.no/#linux v2.6.38/drivers/pci/quirks.c#L1478)
20 Mar 2011 18:15 UTC

svrocket wrote: hey folks I retract my DX link - the $72/$95 LSI cards are counterfeit and have been pulled.

Some unlucky buyers posted a thread about it here: http://hardforum.com/showthread.php?t=1581324
with lots of hi-res pictures.
23 Mar 2011 18:12 UTC

Antonio wrote: Hi, by any chance have you tested Dell's SAS6/iR. Seems to have an LSI1068E chipset. I wonder if it works with FreeNAS (FreeBSD 7 or 8 based).

http://accessories.us.dell.com/sna/productdetail.aspx?c=us&l=en&s=bsd&cs=04&sku=341-9957&~ck=baynoteSearch&baynote_bnrank=2&baynote_irrank=0
29 Mar 2011 18:02 UTC

mrb wrote: No, I have not tested Dell's SAS6/iR. 30 Mar 2011 06:33 UTC

Mar Zah wrote: I am trying to create several multi terabyte arrays and have run into some problems.
I have been running several tests using an assortment of Sans Digital esata JBOD boxes and several RAID cards and have come up with measurements that some people might find useful
I am using 4 Seagate Barracuda Green drives
The tests measure the resync speed of an mdadm RAID10 array with no other processes running on an ASUS P5Q motherboard. I find this test gives me a useful idea of the real-world performance of the array under load
I used the latest bios and drivers for each card
The following numbers are with the drives in a Sans Digital TR4M with its port multiplier. I also used a TR5M and found the numbers identical
Highpoint RocketRAID 2312 resync 85MBps
Cheap SIL3132 card resync 56MBps
Highpoint RocketRAID 622 resync 123MBps
Highpoint RocketRAID 2722 resync 93MBps

The following numbers are with the drives each connected to their own sata cables (no port multiplier)
Using a pair of cheap SIL3132 controller resync 97MBps peaking at 130MBps
Using the P5Q onboard ICH10 controller 270MBps
Using a Supermicro AOC-USAS2-L8i controller 270MBps
Obviously the ICH10 and AOC-USAS2-L8i are limited by the throughput of the drives which I tested at ~130MBps

My question is why is the performance so poor through the port multiplier.

I have been exchanging emails with Highpoint about the very poor results from the RocketRAID 2722. Why would you pay $300 for a card that is outperformed by the free one Sans Digital supplies? Their support department seems clueless and I have to assume the card is basically crap where esata is concerned
14 Apr 2011 20:06 UTC

Michael wrote: Hi, great blog! I have posted links to this blog on several big forums. :o)

A question: I am looking to connect a fast SSD (500MB/sec) as cheap as possible, so I need a SATA 6Gbps card with two slots. Do you know of any such card, supported by OpenSolaris?
24 Apr 2011 21:35 UTC

mrb wrote: Michael: no I don't know any 2-port card matching your criteria. Your less expensive 6Gbps option for Solaris is the (8-port) LSI SAS2008. 25 Apr 2011 06:06 UTC

Christophe B wrote: Hello, great article. I am rebuilding raids on my box after changing some disks and aligned partitions and have a few questions:

Motherboard is ASUSTeK P7P55D DELUXE (http://www.asus.com/Motherboards/Intel_Socket_1156/P7P55D_Deluxe/#specifications) with 3 SATA controllers: Intel® P55 Express Chipset with 6 ports, JMicron JMB363 with 1 port and JMicron JMB322 with 2 ports.

My setup is 2x1Tb drive as Raid1 for the system (in fact 3 raid1 with different partitions) and 4x2Tb as Raid5 for data.

How should I plug my disks ? All 6 on the integrated Intel chipset ? Or should I use the jmicron to plug the raid1 ? Or spread the raid5 disks across all the controllers ?
11 May 2011 13:52 UTC

mrb wrote: Put the 6 disks on the P55 chipset's sata ports. They are all 3Gbit/s & AHCI, and the chipset is connected to the CPU via a 16Gbit/s link (2GB/s), offering enough bandwidth to all 6 disks. 12 May 2011 18:01 UTC

rm wrote: New SATA3 chip on the horizon, Asmedia ASM1061:
http://cgi.ebay.com/New-PCI-E-SATA-3-0-SATA-III-Card-6Gb-s-eSata-ASM1061-/150600961598
tried it yet?
16 May 2011 07:28 UTC

Christophe B wrote: mrb: Thank you for your answer, I've done that and my box is speedlight fast with partition alignment now !

I've got a last question: If I want to add two more disks, should I plug them on the two JMicron JMB362 ports or use a spare Promise SATA300 TX4 card on PCI bus ?
I read that the PCI bus limits the speed to 266 MB/s, so I suppose that it is more interesting to use the JMB362 plugged on the motherboard?
16 May 2011 09:55 UTC

mrb wrote: Classic PCI is 32-bit 33MHz, so 133MB/s theoretical peak. Whereas the JMB362 is on a PCIe x1 2.5Gbit/s link, so 250MB/s theoretical peak usable (taking into account 8b/10b encoding).

Obviously connecting them to the JMB362 is better. Not to mention that the SATA300 TX4 is an ancient card.
17 May 2011 07:11 UTC

Llew wrote: Fantastic post - Helped me to no end!

88SE9125 works with FreeBSD but none of its derivatives (tried PCBSD 8.2, Freenas 8, GhostBSD 2).
07 Jun 2011 01:42 UTC

Mario wrote: Thanks a lot, this was very helpful. 13 Jun 2011 11:28 UTC

Federico wrote: Looks like the 2-port SYBA card with the JM363 has been replaced with http://www.newegg.com/Product/Product.aspx?Item=N82E16816124022 15 Jun 2011 16:07 UTC

Joel wrote: Thanks for the excellent writeup! Helped me locate a 2 port controller card for a new ZFS build.
http://www.newegg.com/Product/Product.aspx?Item=N82E16816132008
17 Jun 2011 07:26 UTC

ldoodle wrote: Hi,

Does anyone know if the Marvell 9480 chip is now supported in Solaris (11 Express). I have seen the Supermicro AOC-SAS2LP-MV8 (http://www.supermicro.com/products/accessories/addon/AOC-SAS2LP-MV8.cfm) which seems absolutely ideal for ZFS seeing as it has no RAID whatsoever.
23 Jun 2011 10:10 UTC

ldoodle wrote: Any comments on my previous post. This article is over a year old now so data provided could be well out of date. 01 Jul 2011 13:52 UTC

mrb wrote: The AOC-SAS2LP-MV8 is a new card based on the (same old) 88SE9480 controller that was already in my list, shown as not supported by Solaris. (I added the card, thanks.)

A quick online search tells me that Solaris still doesn't seem to support it.
01 Jul 2011 18:31 UTC

ldoodle wrote: Thanks mrb. I thought that would be the case, just wasn't sure if the 9480 controller was added when the article was written or if it was updated more recently. 05 Jul 2011 10:42 UTC

Easy Company wrote: Thanks for keeping this list up to date!

I refer to it from time to time, to assess my upgrade options
11 Jul 2011 04:24 UTC

RU_Taurus wrote: Support for various Marvell chipsets in FreeBSD 8.2:
http://www.freebsd.org/cgi/man.cgi?query=mvs&apropos=0&sektion=0&manpath=FreeBSD+8.2-RELEASE&format=html
27 Jul 2011 15:58 UTC

Sam wrote: Could anyone add a comment on how SAS expanders affect things -

do they introduce a large performance penalty like port multipliers?
22 Aug 2011 18:08 UTC

Ed Swindelles wrote: I'd also like to see some real world numbers about SAS expanders.

Regarding Dell...their H200 uses the LSI 2008 chipset.
23 Aug 2011 01:54 UTC

Andre Reis wrote: If you're in Europe, StarTech.com sells the (2 port) PEXSATA22I for €29,89 VAT, chipset is JMicron JMB363.

Amazon.co.uk also sells these for ~£21.
03 Sep 2011 17:39 UTC

Ben wrote: Found some experts!

So, I have:
- Ubuntu 64bit
- P67 MB
- mdadm SSD RAID1 array on P67 ports
- mdadm 10 disk RAID6 array on P67 and SIL3132 PCI-E port multiplier.

The system is prone to 5 second hangups. It seems to have something to do with the SIL3132 PCI-E card as it's fine without this card. There may be conflict with my no-name PCI TV tuner devices.

So, my question: can anyone recommend a port multiplier that will work with the Intel P67 SATA ports? I want to use mdadm, so I want to be able to address each disk individually. I bought an Addonics AD5SAPM (SIL chip) but when plugged into the P67 ports I can only see the first device. I tried the AD5SAPM with a spare JMB363 adapter and again could only see the first device.

Any wisdom much appreciated.

thanks
ben
04 Sep 2011 06:27 UTC

mrb wrote: The Sil3132 is not a port multiplier. It is a SATA HBA. On the other hand, the AD5SAPM is a port multiplier.

I would in general recommend avoiding port multipliers. They are supposed to follow specs, and should work everywhere, but in practice there are many bugs, in SATA controllers, and in OS drivers. Some combinations do work, but you are going to spend a lot of time researching that. Your situation seems much easier to solve by finding a SATA HBA that doesn't cause these "hangups". Make sure AHCI is enabled for the P67 SATA ports. Try upgrading your kernel/Ubuntu version if you are running a really old one like 8.04. Try removing the tuner card. Try replacing the Sil3132 with a Marvell 88SE9128.
05 Sep 2011 00:58 UTC
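On Linux, the first diagnostic steps in cases like Ben's boil down to a few commands (a sketch — driver and device names here are generic examples, not taken from his machine):

```shell
# Confirm the chipset ports are in AHCI mode and enumerate the controllers
dmesg | grep -i ahci            # should mention AHCI and list the ports
lspci -nn | grep -i sata        # shows every SATA controller and its PCI ID
# Look for link resets/timeouts that correlate with the hangups
dmesg | grep -iE 'ata[0-9]+: (reset|timeout|exception)'
```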

Andre Reis wrote: Solaris users beware: I haven't been able to use the JMB363 with Solaris. The cards I purchased from StarTech.com aren't set to AHCI, and Solaris sees them as unrecognized RAID controllers.

I'm trying to flash a newer Option-ROM from JMicron, but so far no success.
08 Sep 2011 12:01 UTC

Andre Reis wrote: If you have a PCI card based on Sil3114, it just needs a ROM flash for Solaris to recognize it, see http://kate-ward.blogspot.com/2009/05/i-fought-silicon-image-3114-with.html 19 Sep 2011 17:30 UTC

Leo wrote: Nice! 23 Sep 2011 01:48 UTC

Martin Gustafsson wrote: Hi,
This may be the right place to ask for some help regarding controllers for Linux and BSD :)
I got a SuperMicro MBD-X7SPA-H-D525-O that I use for my home NAS.
It's time to add some more disks and I'm looking for a controller I can use with my motherboard. I think I need a PCIe x4 controller even though the motherboard has an x16 slot?

Any suggestion on a good 8-port (or more) controller for the MBD-X7SPA-H motherboard that I can use with Debian 6.x or FreeNAS 7.x or 8.x?

Thanks Martin
28 Sep 2011 09:12 UTC

mrb wrote: Martin: yes, one can plug an x4 card in an x16 slot. I do this myself in some machines. This is called up-plugging by the PCIe standard. Per my list, I would recommend either an LSI SAS2008 or SAS1068E controller. Note that your motherboard's mechanical x16 slot is actually only wired as an x4 link, so it will be the I/O throughput bottleneck (2 GB/s in each direction, which is probably far above your needs, so that's fine). 29 Sep 2011 02:04 UTC

Jim West wrote: I have an HP ProLiant DL585 G2 server, and I have an external SATA III RAID 5 array. I have tried 2 different PCIe SATA III controllers, and am only able to get SATA II 3 Gb/s, since the PCIe slots are version 1.0. Would it be possible to use the PCI-X 64-bit 100 MHz slot with a PCI-X SATA adapter to obtain 6 Gb/s? I have not been able to find a PCI-X 64-bit SATA III controller. I would really like to connect my SATA III RAID 5 array to my DL585 G2 at 6 Gb/s, but I'm not sure how. I would really appreciate any advice that you might have on this problem.
Thanks,
Jim West
29 Sep 2011 02:32 UTC

Michael wrote: News:
The IBM M1015 card uses the LSI SAS2008 chipset. The M1015 is cheap and works with Solaris and FreeBSD. Metaluna explains:

Option 1: No Flash
Works fine with OpenIndiana, Solaris
Works fine with JBOD drives
Requires installation of 9240 driver from LSI website
May have had a problem with some ESXi installations, but this seems to have been fixed with ESXi now

Option 2: Flash to IT mode

Works fine with OpenIndiana, Solaris, ESXi
No extra driver installation required. Drivers are built in to OI, Solaris.
Possibly better likelihood of working with more OS kernels out-of-the-box
Flash procedure is here: http://lime-technology.com/forum/index.php?topic=12767.msg121131#msg121131
29 Sep 2011 08:59 UTC

Michael wrote: Regarding SAS expanders and Solaris and the Norco 4224: these two work and are 6Gb/s. Both use the new-generation 6Gb/s LSI expander chip, so both work with the LSI SAS2008 chipset.

Intel RES2V240

Chenbro AC UEK-23601
29 Sep 2011 09:01 UTC

Martin Gustafsson wrote: Thanks for your feedback mrb. I have just placed an order for a new controller for my home NAS. With your help and your posts above I ordered an LSI MegaRAID SAS3081E-R. Here in Sweden the price of a controller using the SAS1068E is the same; I just compared the Intel SASUC8I and the LSI SAS3081E-R.
Once again, thank you.
29 Sep 2011 10:47 UTC

Royce Williams wrote: FreeBSD 8.2-STABLE supports SAS2008 in JBOD mode with the mps(4) driver, as of February 2011. The commit notes are here:

http://lists.freebsd.org/pipermail/freebsd-stable/2011-February/061524.html
http://lists.freebsd.org/pipermail/freebsd-stable/2011-February/061593.html

If you are running 7.4, 8.1 or 8.2-RELEASE, and you want to run the updated mpt driver as a kernel module, the eRacks guys have a nice howto here:

http://blog.eracks.com/2011/08/lsi-logic-6gbps-mps-driver-for-freebsd-8-2-release/

Ken also noted that LSI would be releasing their own FreeBSD driver, and the rest of the thread after that commit note has some caveats re: disk selection and other items to be aware of.

I can say that mps(4) as a kernel module is working fine for me so far - running 8.1-RELEASE amd64 with a Dell PERC H200, flashed to LSI IT firmware v9.00.00, using ZFS on a pair of 2TB WD Caviar Black drives.
29 Sep 2011 15:26 UTC

rektide wrote: highpoint has some new offerings in the rocketraid 27xx line.

the low end has the rocketraid 272x, which is a single-chip marvell 88SE9485 for ~$180.

higher up models are a multi-chip marvell 88SE9485 + PLX bridge solution with a PCIe x16 interface, and 16 & 24 ports. the dual-chip offerings are low profile & ~$400 (2740), while the quad 88SE9485 solutions (2760a) are ~$550. if you want a whole lot of SATA controllers on a single card with a pci bridge, wow, hells yeah, here's your iops density. tweaktown has a solid review, which covers the "throw more controllers at it" solution used here:
http://www.tweaktown.com/reviews/3553/highpoint_rocketraid_2760_switch_archite/index.html

major awesome list, thanks for your maintenance. happy hunting throughput warriors.
29 Sep 2011 20:11 UTC

mrb wrote: Wow these new RocketRAID 27xx are sick. I have always wanted cards to use PCIe switching technology to increase the number of chips & ports... HighPoint is finally the first to do it.

List updated. Title updated to "32 to 2 ports" :) Thanks rektide, and Royce. To Jim: don't bother with the aging PCI-X tech. You should try a PCIe 2.0 card. It will fall back to 1.0 on your server but that doesn't matter, as the 3 vs 6 Gbps SATA/SAS link speed is completely independent of the PCIe speed.
30 Sep 2011 06:26 UTC

Jason wrote: Some comments on Newegg suggest that the Intel rebrand of the LSI SAS1068E can't handle drives larger than 2TB. I have not verified this claim personally as I do not have a card yet, but this alone is enough to push me to an LSI SAS2008-based card like the LSI 9211-8i. 06 Oct 2011 00:05 UTC

Rob wrote: Beware: the HighPoint 27xx has no FreeBSD support beyond 8.2, and their support@ confirmed there are no plans for 9.x. 26 Nov 2011 13:17 UTC

Tanel wrote: I want to warn about the HighPoint Rocket 620 (Marvell 88SE9128) and the sometimes-integrated Marvell 88SE9172 on FreeBSD.

There seems to be an issue somewhere in the layers of ZFS, the AHCI driver or hardware. More about it can be read from:
http://lists.freebsd.org/pipermail/freebsd-stable/2011-September/063852.html
http://forums.freebsd.org/showthread.php?t=23813

It _could_ of course be related to that specific model of the motherboard, so it would be of great help if someone could share his/her experiences of HighPoint Rocket 620 (Marvell 88SE9128) or Marvell 88SE9172 on FreeBSD.
12 Dec 2011 10:25 UTC

JimmyZ wrote: Great article, I've been looking for a summary like this for years, I still want to know if they support 3TB though.

to go: you said the AOC-SAT2-MV8 supports disks up to 1TB; well, I have a card using the same Marvell 88SX6081 chip but it can handle 2TB drives, so I guess this is more like a firmware issue.
04 Jan 2012 15:26 UTC

Raman wrote: I have an LSI 3801E (1068E chip) and it in fact does not support 3TB drives! I've flashed it with the latest "IT" firmware (01.33.00.00) and BIOS (06.36.00.00) and still no luck. The drive reports its size as 2.2 TB...

Are there any options? Perhaps alternate firmware?..
04 Jan 2012 20:06 UTC

Stefano Del Corno wrote: First of all, great page. Lots of useful details that will help me in the future.. :D

I'd love to share some of my findings with you...

1) OpenIndiana (OpenSolaris) doesn't handle port multipliers very well. In my experience neither SIL 3132 nor JMicron 363 based HBAs work well with SIL port multipliers. I've been told that the AHCI driver may handle PMs, but I never had the opportunity to test it.

2) Linux (I did some tests with recent kernels) on the other hand handles PMs very well using both the SIL and the JMicron 363 HBAs. If you care about performance, avoid PMs, indeed.

3) There is a source of cheap SAS3041 HBAs in Germany. Look for LSI00168 on eBay and enjoy a decent HBA for €50. I've already had my pair of boards... The bid will last 'till the end of the month, so if you are in Europe, hurry up! :D
07 Jan 2012 20:17 UTC

Ward Vandewege wrote: Quick note - the mvsas driver is unstable/unusable on Debian Squeeze (2.6.32), at least with this card: Supermicro AOC-SASLP-MV8. Avoid. Apparently there are critical bugs in the mvsas driver that are fixed in 2.6.36 or thereabouts.

14 Feb 2012 22:22 UTC

Hiren wrote: Great page -

Thanks for all the information. It is a great summary and interesting to see that there are only a handful of "real" chip vendors (the board makers are abundant).

hiren
22 Feb 2012 22:33 UTC

Fizzer wrote: Quote: Antonio
Hi, by any chance have you tested Dell’s SAS6/iR. Seems to have an LSI1068E chipset. I wonder if it works with FreeNAS

Seems to work fine; put it in my desktop PC alongside my PCIe graphics card and W7 installed drivers straight off..

Will report back on it in my Ubuntu server, then will see if Solaris Express recognises it.

Thanks for posting this Marc, I was looking at PCI SATA cards when I came across this. Never thought of it.

Got my Dell SAS6/iR PCIe off fleaBay for £31 + £7 P+P from USA.
Model: http://search.dell.co.uk/1/2/1191-dell-sas-6-ir-internal-controller-card-raid-no-cables-kit.html

Bargain compared to 8-port PCI-e SATA cards.

Also, first boot in my GigaButt board was a no-no..
Apparently these cards' SMBus conflicts with Intel motherboards' memory detection.

Found this site, overclocking forum site that mod them.. LOL Loads of stuff on PERC cards. Hope ya don't mind the link.

http://www.overclock.net/t/359025/perc-5-i-raid-card-tips-and-benchmarks

Thanks a bunch.. Mate!

Will report more when I have my ZFS RAIDz pools up, complete n00bie to ZFS, so it may be some time... :D
01 Mar 2012 20:19 UTC

rektide wrote: ya'll ready to let the PCIe v3.0 game start switching in?!

can't wait till the hba and flash controller become the same chip. and there are four of em. way to go marvell with the scale out answers!
16 Mar 2012 02:27 UTC

Steve wrote: Thank you for this article. I found it while searching google and now have some ideas for what to buy if/when I decide to build a NAS.

I'm curious though if you happen to have any recommendations for internal drive cages?

Most of my searches bring up external drive cages, or 5.25" drive bay to 3.5" converters.

Ideally, I'm actually looking for something to sandwich in as many 3.5" drives as possible, into as small an area as possible.
22 Mar 2012 22:32 UTC

Gary wrote: Greetings,

An excellent article. Thank you very much. It doesn't quite answer my particular question.

Do you, or anyone, know of a 4-port SAS/SATA PCIe v2, at least x4, that will support SATA port multipliers based on the SiI3726?

Syba seems to produce a bunch of these 4-port SATA 'RAID' cards that can be configured as plain old SATA controllers, and that can support such port multipliers. However, they are all 1-lane cards. That works out to 1.25Gb/s per port, nominal, and then spread that over 5 drives... ugh. It would be better to stick them on a USB-SATA widget. I figure an x4 card would get me right around 100MB/s per drive, which is in the neighborhood of being tolerable.

All that said, I actually wouldn't even mind spending the 'big' $$$ for a high-end PCIe x8 RAID controller, but I haven't found any that will support SATA port multipliers.

Any assistance would be greatly appreciated.
Thanks,
--Gary
30 Mar 2012 07:18 UTC

mrb wrote: Steve: I don't know the market of internal drive bays that well, but I would love to buy this external one for one of my next builds: http://www.plinkusa.net/websata350.htm

5 x 3.5" HDDs in 3 x 5.25" bays, and minimalist design that does not use trays --just slide the drives in!

Gary: in the early years of SATA, port multipliers (PMP) were not very well supported, but these days most modern SATA controllers support them. For PMP support on Linux, see the PMP column in: https://ata.wiki.kernel.org/articles/s/a/t/SATA_hardware_features_8af2.html
30 Mar 2012 10:28 UTC

Gary wrote: mrb: Thanks for responding. On that and your advice in the article, I elected the HighPoint 2720SGL, since I already had 8087s from my Adaptec card. In any case, as you mention, this card is based on the Marvell 88SE9485 and is recognized by the later Linux kernels without issue. The mvsas driver loads, sees it, and all seems right and true in the world.

I flashed the BIOS to the latest on HighPoint's site for the card, v1.4. Note that I tried this on the stock firmware, v1.0, with no change. The issue is that the card/module seems like it's trying to work, but can't deal with the port multiplier. Inquiry failures and all manner of unhappiness. Interestingly enough, the drive in port 0 of the PMP is seen in the early config menu of the HBA, but it can't even do that much in Linux.

I contacted linux@highpoint-tech.com, which was apparently forwarded to generalsales as that's who vended me this:

"The RocketRAID 2720SGL is a 6Gb/s SAS RAID controller - it does not support port multiplier devices (PM is limited to certain types of SATA controllers). Our RR64x series, for example, support PM capable enclosures and chassis."

That flies in the face of the data from Google queries, from here, and from Newegg's page, where I purchased the card:
http://www.newegg.com/Product/Product.aspx?Item=N82E16816115100

On the "details" tab, note:
"BIOS Boot to RAID with point-to-point connectivity or through port multiplier."

There's a non RAID BIOS floating around that I found browsing the FreeNAS forums. It's not clear if this will make a difference as I won't be able to do anything with it until later tonight.

Any thoughts?
Thanks,
--Gary
10 Apr 2012 19:23 UTC

mrb wrote: Gary: no idea. I have never tried port multipliers in combination with SAS cards. 21 Apr 2012 08:51 UTC

Trygve Sørdal wrote: Port multipliers are SATA only.
Port expanders are SAS, with the possibility of the SATA Tunneling Protocol (STP).
HP has a 36-port one that is said to be quite good; it gives 32-disk capacity.
http://hardforum.com/showthread.php?t=1484614

The INTEL-SASUC8I (LSI 1068E) works with the INTEL AXX6DRV3GEXP (an expander with a 6-bay hot-swap cage). This is quite affordable, but installation in a normal case is a bit fiddly.
I have personally tested it with 6 x 1TB SATA drives of various manufacture on Ubuntu server.

Be aware that LSI 1068E supports a maximum of 2 TB per disk according to LSI.
http://kb.lsi.com/KnowledgebaseArticle16399.aspx
26 Apr 2012 11:08 UTC

joerg wrote: I would like to report my experience with migrating an existing software RAID 5 (7 disks in total) to a new controller. This possibility was for me one of the main points for choosing a software RAID.

The array was created on Ubuntu 10.04 using mdadm. The controller is a Promise TX8660. I managed to get driver sources for the 2.6 kernel series from Promise. Now I wanted to migrate to Ubuntu 12.04 with a 3.2 kernel, and I have lost support for this controller.

I thought, that it would be easy to just use another controller, so I bought an Adaptec 6805E. The controller is well supported and integrated without problems when installing Ubuntu 12.04. I installed without the disks containing the old array attached to the new controller.

When attaching the array to the 6805E controller, the disks are recognized and the controller BIOS even reports a legacy RAID on the disks. However, the controller seems to write metadata onto the disks. The disks are available in Ubuntu, but the RAID could not be discovered.

Back on the old system and the Promise controller, I found that the partition table was corrupt. I could repair this from the backup tables using gdisk and finally got the array back. The disks were originally partitioned with parted, using GPT partition labels and an unformatted primary partition spanning the entire disk (all are 2TB Samsung).

So the problem seems to be that the adapters these days like to write metadata messing up the info on the disk.

So I now think going away from the Adaptec and the LSI SAS 9211-8i looks promising to me. However I contacted LSI and according to them it also writes metadata. I suspect the same problem might happen.

What are your experiences when migrating a software RAID? Is it me, or is this a common problem? I thought a controller should leave freshly attached disks alone and just present them to the OS until explicitly told to do something.

Would the Supermicro AOC-USAS2-L8e be the right choice for me, because it is HBA-only? However, I would like to avoid the UIO problems. I would also like a controller that supports more than 2TB per disk and 6Gb/s SATA.

best regards, Joerg
18 May 2012 01:47 UTC

mrb wrote: My experience with migrating software RAID is just as easy as: (1) move the disks to another machine, (2) boot, and the mdadm array or zfs pool is redetected automatically.

joerg: your experience is the reason the Adaptec 6805E is not in my post. It is not an "ideal" controller as it seems it cannot act as a true JBOD controller. I suggest you replace it with one of the controllers I recommend in my post. They will not forcefully write metadata and corrupt your existing data.
18 May 2012 07:32 UTC
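In practice the redetection amounts to one command (a sketch; /dev/md0 is a placeholder for whatever your array is named):

```shell
# Scan all disks for md superblocks and reassemble the arrays they describe
mdadm --assemble --scan
cat /proc/mdstat                # verify the array came back clean
mdadm --detail /dev/md0         # double-check member disks and array state
```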

joerg wrote: Hi mrb,

Thanks for the answer. And thanks for this blog in the first place. So much useful information here. I wish I had found it before ordering the controller. I had seen the LSI before, and its capabilities and driver support looked quite similar to the Adaptec's. So it was a 50/50 decision.

Just to complete my observations: it seems that the Adaptec needs to be explicitly configured to present the disks as JBOD. It seems that JBOD is like yet another RAID mode, where the controller manages the physical disks and presents them "under its own name" to the OS. I did not create such a JBOD volume explicitly; all I did was use the controller's default handling. In the OS, I do not see the original disks' vendor name (SAMSUNG) and models (HD203WI and HD204UI); instead I see them presented as "Adaptec ..." disks. All this is very strange to me, because it might mean that you cannot attach any kind of disk to this controller if you are interested in keeping its data rather than doing a fresh install.

So this is the official warning for all users of software RAID to stay away from Adaptec controllers (it is most likely not only the 6805E that behaves this way). I would even expect that a fresh software RAID on these controllers might cause problems if you want to move it to another vendor's controllers, because of the Adaptec metadata.

I already looked more closely at the alternatives:
LSI SAS 9211-8i
Supermicro AOC-USAS2-L8i or L8e
Highpoint Rocket 2720SGL
ARECA ARC-1320-8i

All seem to be SATA3 at 6Gb/s, capable of disks > 2TB, and well supported by Linux. Do they all also support presenting the disks' SMART info to the OS?

Regarding the LSI I contacted their Support asking if it writes metadata to the disks. They were very fast with their reply. I got the following answer:

"The 9211 does write metadata to each of the disks, so if you want to restore files from a software RAID to this, you will need to first create your RAID volume on the 9211, image the existing software RAID volume, and then restore it to the 9211’s volume. Please note, however, that the 9211 cannot do RAID 5. It can only do RAID 0 and RAID 1. If you need RAID 5 support you should look at the 9260-8i."

This leaves me somewhat suspicious, but it could still mean that the controller only writes metadata when using its RAID functions, which is fine and which I would have expected. I have now asked LSI explicitly whether it also writes metadata when the controller's RAID functions are not used, or when it is flashed with the "initiator target" (IT) firmware that does not contain the RAID functions. The answer is pending and I will report what they say.

mrb, if I understand you correctly, the controller leaves the disks alone if I leave its RAID functions alone. Have you, or anybody here, ever moved a software RAID from another vendor's controller to the LSI? Or was it only migrating from one LSI model to another?

I also wonder if the problem might be related to my partitioning. I use GPT labels and primary partitions spanning the entire disk (created with parted). These partitions are used by mdadm directly, without putting a filesystem on the single disks. Is this a strange way or the usual way of preparing the disks? It sounded logical to me, because I need a filesystem on the md volume and not at the disk level. However, a new controller not seeing a filesystem might think it is an empty disk. But this is only guessing about other causes of problems.

I hope it is ok asking all these details here and writing long posts. You can imagine that I want to keep the array as safe as possible now, and also avoid another useless buy.

best regards, Joerg
18 May 2012 09:33 UTC

joerg wrote: I got some answers from Adaptec and from LSI support. Adaptec confirmed that the 6805E is a Hardware RAID controller and cannot work as a simple HBA. Thus it will always write metadata and manage the attached disks.

LSI support said something similar for the SAS 9211-8i. However, they confirmed that I need to put the IT (initiator target) firmware onto the SAS 9211-8i. With the IT firmware it acts as a simple HBA and will not write metadata. With the original firmware it is a hardware RAID that will write metadata.
The procedure to change the firmware is described in
http://kb.lsi.com/KnowledgebaseArticle16266.aspx?Keywords=firmware

I guess I have found a way now how to get my array working on the new machine.

best regards, Joerg
18 May 2012 21:39 UTC

joerg wrote: To finalize my experience report: I received the LSI SAS 9211-8i. I exchanged the firmware for the IT variant and everything works smoothly now. The HBA boots quite fast and needs just a couple of seconds to recognize all disks. The array assembles without problems. The disks' SMART info is available. Performance of the array is good.

Thanks again for this blog. It was very helpful in finally finding the right solution.

best regards, Joerg
26 May 2012 11:07 UTC

mrb wrote: joerg, glad you solved your problem. I have myself used the 9211-8i with the IT firmware, with software RAID on top of it, and everything worked flawlessly. 30 May 2012 07:52 UTC
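For anyone repeating joerg's procedure, the crossflash roughly looks like this (a sketch only — firmware file names vary by release, the erase step is destructive, and you should follow LSI's official instructions rather than this comment):

```shell
# From the DOS/EFI flash utility shipped in the LSI firmware package
sas2flsh -listall                      # identify the controller first
sas2flsh -o -e 6                       # erase the existing (IR) flash -- destructive!
sas2flsh -f 2118it.bin -b mptsas2.rom  # write the IT firmware and boot ROM
```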

ccool wrote: Hi,

Excellent Post by the way.

I was wondering if anyone has had a positive experience with an eSATA port multiplier and Solaris Express 11? Right now I have a SIL3132 controller, but because I am unable to make it work, I am looking for another controller that would allow me to use a PMP under Solaris Express 11. If anyone has any info on the subject, I would greatly appreciate it.

Also, I have tried the Marvell 88SE9128, but as written here it doesn't work on Solaris. I have also tried my motherboard's eSATA port, but I guess it doesn't support PMP. I thought my SIL3132 would, but even though it seems to detect the first drive in the logs, it is unable to use it. I am still running tests, but right now I am a little disappointed. Also, everything works fine on Win7, so the hardware is ok.

Best Regards
30 May 2012 16:16 UTC

josh4trunks wrote: Super helpful, thanks! Tossing out my PCIe card to grab a PCIe 2.0 card 07 Jun 2012 22:29 UTC

Peter Bell wrote: Thank you - a very interesting write-up.

I'm currently using the SuperMicro AOC-USAS2-L8i. You comment on the "UIO" layout of this card. I consider this layout to have one great advantage - in a standard tower case, it puts the components (including heatsink) on the upper surface of the card. I have trouble convincing myself that placing the heatsink on the lower surface of the card is ideal.

Before reading this page, I ordered a Rocket 620 to enable me to finally populate all the bays in my 5in3 drive cages - it seems that I have made a good choice there.
22 Jun 2012 17:06 UTC

compudaze wrote: If I put an LSI SAS2008 card into a PCIe 2.0 x4 slot, will I still get 300-350MB/s per port if I only use one of the SAS ports, i.e. 4 SATA devices? 24 Jun 2012 18:16 UTC

mrb wrote: compudaze: yes 24 Jun 2012 22:28 UTC
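The arithmetic behind that "yes", using the rough figures from this post:

```shell
# PCIe 2.0 moves ~500 MB/s per lane (after 8b/10b encoding), per direction
echo $((4 * 500))   # x4 slot budget: 2000 MB/s
# 4 SATA devices at the upper end of the 300-350 MB/s per-port figure
echo $((4 * 350))   # worst-case demand: 1400 MB/s -- comfortably under budget
```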


context wrote: Not sure if anyone still reads these. I have tried the ASM1061 card; the net says it works with FreeBSD 9, though I can say that on NexentaStor (OpenSolaris) it does NOT see drives connected to it during install. 10 Aug 2012 06:43 UTC

José wrote: Hi,
Here is my experience.
I'm using a Gigabyte motherboard with the Z68 chipset:
- 6 SATA ports (2 of them SATA 3): fine with OpenIndiana and Solaris 11
I bought an ASUS U3S6 card with a Marvell 9120 controller and 2 ports; the ports are detected as SCSI by cfgadm but the disks cannot start.

So looking for another card, I decided to buy a JMB366 card.

I tried all the day making it work, impossible ....

Although this model seems to be AHCI compliant, and is announced here as the best controller, the card is not detected by OpenIndiana.
I have 4 PCI-e slots, and tried each one.
I'm very disappointed, because I spent €80 on these 2 cards and they don't work at all :-(
21 Aug 2012 15:03 UTC

Greg Varga wrote: Have you (or anyone) looked at the LSI SAS2308 based options? Particularly the LSI SAS 9207-4i4e card?

Any experience with running them in FreeBSD?
28 Aug 2012 01:51 UTC

crabhero wrote: It helped me a lot when I was ready to buy an Adaptec 1430SA for OpenIndiana.
You say it won't work under Solaris, so it also can't be supported under OpenIndiana, which is based on OpenSolaris.

With this LIST, we can do ZFS in ESXi without breaks.
30 Aug 2012 15:29 UTC

Kuba Słodzik wrote: @Jose - I managed to get the StarTech JMB366-based card working with Solaris 11. I got updated firmware from http://www.station-drivers.com/page/jmicron.htm but rather than install the new firmware, I used the flash update program to erase the card's BIOS. After doing that, you won't be able to boot from a drive connected to that card, but Solaris 11 will recognize it as an ahci device. It seems to work very well. 31 Aug 2012 18:18 UTC

José wrote: @Kuba,
I'm going to try,
Finally I bought 2 new cards:
- one LSI 3041-E with 4 ports (LSI SAS1064E chip), bought on eBay as mentioned by Stefano Del Corno (search LSI00168). It works like a charm.
Best is using the IT firmware if you don't use RAID technology, but the IR firmware works too.
Disks are bootable.
- one with a Sil3124 chip (4 ports): same problem as the 2 previous cards, disks are not recognized by cfgadm.
I tried the RAID and non-RAID firmware found on the Silicon Image website.

In my opinion, LSI is definitely the best choice, and the card is cheap: €40.
17 Sep 2012 13:19 UTC

CJ Davies wrote: I've just bought all the parts for a FreeNAS build, including a HighPoint Rocket 620 (non-RAID) controller, after reading this post. I discovered whilst I was testing/experimenting that I can't use smartctl to set the TLER to 7 seconds for any disk connected to the 620, but I can to any disk connected to the motherboard's controller.

This isn't necessarily an issue as I have 3x disks that need TLER setting on each power cycle & 3x that don't, so I can connect 2x of the latter to the 620 & the rest to the motherboard.

But can you (or anybody reading these comments) confirm whether the 620 blocking SCT commands affects behaviour when a read error actually occurs? Eg if one of the disks connected to the 620 has a read error & after 7 seconds sends the appropriate signal back to the controller, will the 620 block this error or will it pass it on up to the motherboard & allow ZFS to handle it sensibly?
14 Nov 2012 15:59 UTC
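For context, the setting CJ is talking about is issued through SMART SCT commands, e.g. (a sketch; /dev/sdX is a placeholder, and this only works when the controller passes SCT commands through):

```shell
# Set error recovery control (TLER) to 7.0 s for reads and writes
# (the value is in units of 100 ms, hence 70)
smartctl -l scterc,70,70 /dev/sdX
smartctl -l scterc /dev/sdX     # read back the setting to verify
```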

Shawn wrote: I just wanted to share my experiences with a couple of these cards. I was using Ubuntu Server then switched to Debian Squeeze. I use mdadm for RAID.

I have two AOC-SAT2-MV8 cards in a Supermicro motherboard which I have been using for several years. After a bug was fixed in the driver back in 2008 or so they've been fantastic cards. I've used WD RE2/3, Hitachi 5K3000, WD Green/AV-GP/Red drives (500GB/2TB/3TB) with no issues.

I recently purchased an Areca 1320 to connect an external enclosure with 8 disks. I had a mix of drives in it so only 4 of them were in the same array and the other 4 may have had a couple in the same array, can't remember for sure. I had no issues with this setup. Just this past weekend I rearranged all my drives and now I have 5 RE3 and 3 RE2 500GB drives in a RAID6 array on that card. I've had 3 drives drop out over the past couple days. The first drive dropped during the initial sync, the second while I was filling up the array with data (this turned out to be a bad drive and was replaced), and the third during another resync. This appears to be the same issue many people have reported with the Supermicro SAS cards with the mvsas driver. I'm running the current stable kernel in Debian Squeeze so I guess I'll have to look into patching or updating the kernel.
18 Nov 2012 23:07 UTC

Ceri wrote: Hi, I'm trying to get decent performance out of md in Linux, and can't seem to get it.
The numbers you quote seem way higher than what I see, so I'm wondering if you know something I don't. I'm hoping you can point me in the right direction.
I'm using an LSI SAS-2 Fusion controller to an Oracle/Sun J4410 disk array (24 * 2TB), running over two ports (dm) on Oracle Enterprise Linux (RHEL5) and a 2.6.39 kernel. I'm trying md with raid5, raid6, raid10, and can't get my sequential writes over 100MB/s, yet the reads are 1.5GB/s, so the pipeline is there. I have 128 processors and 128GB of RAM, so CPU and memory are not an issue.
12 Dec 2012 16:18 UTC

Ceri wrote: Additional info: it's the LSI SAS2008 card, which you say is 300-350 MB/s per port, so I'm not sure why vdbench is reporting 1500MB/s on reads over two ports.
Still, the write performance at 80MB/s seems awfully slow...
12 Dec 2012 16:26 UTC

mrb wrote: Ceri: what does "running over two ports (dm)" mean? What is dm? 14 Dec 2012 01:53 UTC

mrb wrote: I cannot find much information on how the J4410 cage is architected. 14 Dec 2012 01:54 UTC

Pro Backup wrote: The more expensive Marvell 9485 chip offers an advantage over LSI SAS2008. The mv94xx Linux driver version 4.0.0.1528N is able to detect PUIS (=POIS/ATA6 power up in standby mode) drives. That is not possible with mpt2sas drivers up to version 15.00.00.00. 03 Jan 2013 22:56 UTC

Kevin M wrote: Good article! There are a couple more advantages of s/w RAID worth noting: 1. Data durability: ZFS does extensive integrity checking, whereas other RAID solutions (h/w or s/w) will often arbitrarily pick between redundant blocks if they don't match, making the array susceptible to bit rot. 2. Rebuild time: ZFS only rebuilds in-use blocks, whereas h/w RAID will rebuild all blocks, which can increase the risk of another failure during the rebuild. 01 Feb 2013 03:32 UTC
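Kevin M's first point is the reason periodic scrubs are worth scheduling on ZFS: a scrub re-reads every in-use block, verifies its checksum, and repairs mismatches from redundancy. A minimal sketch, assuming a pool named tank (a placeholder):

```shell
# Walk all in-use blocks of pool "tank" (placeholder name), verifying
# checksums and repairing bad copies from mirror/raidz redundancy.
if command -v zpool >/dev/null 2>&1; then
    zpool scrub tank
    zpool status -v tank   # shows scrub progress and any damaged files
else
    echo "zpool not available on this host"
fi
```

Many setups run this weekly or monthly from cron.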

Dan Langille wrote: I have an LSI 9211-8i. I haven't needed to flash it with the IT firmware to get passthrough on disks.

Am I missing something?
08 Feb 2013 19:58 UTC

SM wrote: I don't see much here about the 88SE92xx drivers. The Highpoint Rocket 640L or Syba (or whatever other brand they go by) at this link: http://www.newegg.ca/Product/Product.aspx?Item=N82E16816124062. It's probably a slight bottleneck, but even so, for 4 ports, PCIe 2.0 x2 should provide decent bandwidth for four spindle disks. Thoughts? 14 Feb 2013 00:33 UTC

Fábio Rabelo wrote: I just installed an Intel RS2WC080 in an OmniOS system with napp-it.
I followed the Intel driver installation (exactly the way Intel describes in the readme), using "case1".
After the installation, the system stops the boot process at "hostname: storage" and never finishes it; I have already been waiting 20 minutes!!
Before the installation, boot finished in just 2 minutes...
No errors or warnings during install.
What did I do wrong??
11 Apr 2013 19:31 UTC

anybody wrote: Thanks for your information.
I'm considering building a ZFS-based NAS, and your article is very helpful.
Please keep updating this.
28 Apr 2013 12:48 UTC

xmikus01 wrote: According to this commit on GitHub, Marvell 88SE9128 should be supported under Illumos: https://github.com/joyent/illumos-joyent/commit/257c04ecb24858f6d68020a41589306f554ea434 27 Jun 2013 22:34 UTC

Amit Katyal wrote: Researching info to build a ZFS based NAS. Excellent article, thank you very much!!! 04 Jul 2013 19:55 UTC

Patrik Dufresne wrote: Very helpful article. 26 Aug 2013 23:46 UTC

kyij wrote: A little searching found the Rocket 750 - it has 10 mini-SAS ports to support 40 drives, each of which can be 2TB+. 27 Aug 2013 17:15 UTC

Artem Ryabov wrote: Marvell 88SE9128 corrupts data on 2TB+ HDDs by writing a secondary GPT in the middle of the HDD on reboot. You can check your drives with this command, which looks for the GPT signature on the HDD. It is best to check after a reboot.

# For each disk: if it is larger than 2^32 sectors, look for a stray "EFI PART"
# signature at LBA (last_sector mod 2^32), where a 32-bit-truncated write lands.
for sdn in /dev/sd? ; do
    echo -n "$sdn - "
    sn=$(( $(blockdev --getsize "$sdn") - 1 ))   # last sector, in 512-byte units
    [ "$sn" -lt $(( 1 << 32 )) ] && echo small && continue
    sn=$(( sn % (1 << 32) ))
    st=$(dd if="$sdn" bs=512 count=1 skip="$sn" 2>/dev/null | head -c 8)
    [ 'EFI PART' = "$st" ] && echo bad && continue
    echo ok
done
02 Sep 2013 20:39 UTC

mrb wrote: Artem: wow, what a bad bug! 02 Sep 2013 21:29 UTC

Artem Ryabov wrote: mrb: yes. It's very hard to find, because the data is corrupted before the OS starts - by the UEFI BIOS, as I understand it. The OS doesn't have any indication that the data was changed. 02 Sep 2013 21:44 UTC
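To illustrate the failure mode Artem describes: if firmware truncates the backup-GPT LBA to 32 bits, the header lands deep inside the data area instead of at the end of the disk. A sketch with an assumed (typical, not measured) 3TB sector count:

```python
SECTOR = 512
sectors = 5_860_533_168      # a common 3 TB drive size in 512-byte sectors (assumed)
last_lba = sectors - 1       # correct location of the backup GPT header
wrapped = last_lba % 2**32   # where a 32-bit-truncated LBA write lands instead
print(wrapped)               # sector 1565565871
print(wrapped * SECTOR)      # 801569725952 bytes, i.e. ~800 GB into the disk
```

That stray 512-byte header overwrites whatever filesystem or array data happened to live at that offset, which is why the corruption is so hard to trace back to the controller.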

William wrote: I used 2 LSI SAS1068E cards with Debian 7 and ZoL 0.6.1 and 2TB HDDs - no issues at all. I should note that the chip *does* seem to support 3TB drives with the latest SuperMicro firmware in HBA mode.

I also have a Debian 6 ZoL 0.6.2 (multi-mirror) system with 8 SSDs on a LSI SAS2008, speed is not the best (600MB/s, same SSDs on a SAS2008 on a SuperMicro Mainboard do ~1000MB/s in RAID10) but still very good - again stable without issues.

Both cards are very hard to source in Europe; I grabbed my USAS cards via geizhals.at (German-language hardware price search engine, just put AOC-USAS-L8i in it for the 1068E) and via www.server-and-more.com (not sure if they sell to EU end users though).
08 Oct 2013 16:34 UTC

gr1n wrote: It looks like VMware has dropped support for Marvell 88SE9128 in ESXi 5.5. But it worked in 5.1... 15 Oct 2013 16:14 UTC

sarnold wrote: Awesome blog all around and this article in particular. I'm hoping you can help educate me a bit further on something that I'm having trouble finding information on using my usual Wikipedia and googling.

Can SATA drives do multipath? Is it a standard feature of 6Gb SATA? Most drives? Only a handful of drives? Do multipath capable drives have different physical connectors?

Thanks!
29 Nov 2013 07:54 UTC

Eric wrote: I have a fairly basic question about the "4 x 8-port Marvell 88SE9485," which supports 32 drives. I'm wondering how you get 32 drives connected to this controller on those four mini-SAS ports, since I can mostly only find "SAS to 4-port SATA reverse breakout cables." I'm assuming it would have to use "SAS to 8-port SATA reverse breakout cables" to get 32 SATA drives connected to this controller. 15 Jan 2014 15:42 UTC

Eric wrote: No wait, I think I see what you mean now. It's 4 SATA drives per mini-SAS port and there are 8 mini-SAS ports. Is that what you mean? Because I just looked at your link to the product and there are only 6x SFF-8087 ports, so does that mean it's actually only 24 SATA ports, and not 32 SATA ports, since the 2x external ports are SFF-8088 ports? Thanks 15 Jan 2014 15:47 UTC

mrb wrote: Eric: yes, there are 2 SFF-808x ports per 88SE9485. You can connect 4 drives per SFF-808x port (regardless of whether it is 8087 or 8088). 17 Jan 2014 05:37 UTC
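The arithmetic behind the 32-drive figure discussed in this exchange, spelled out (card layout as described above):

```python
chips = 4            # 4 x Marvell 88SE9485 on the card
ports_per_chip = 2   # each chip exposes 2 SFF-808x mini-SAS ports (8087 or 8088)
drives_per_port = 4  # 4 SATA drives per mini-SAS port via a breakout cable
print(chips * ports_per_chip * drives_per_port)  # -> 32
```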

Eric wrote: Awesome. Thanks! 17 Jan 2014 15:55 UTC

Jay wrote: Great article. Really helps. Was wondering if you can answer this.
I'm running with MDADM on 6 SSD drives on a SuperMicro server board. I need more drives.

If I install something like LSI SAS2008 can I just begin plugging extra new drives onto that card and rebuild the array ? Will it play well with the onboard raid drives or do they need to ALL be on the card or on the board as opposed to some on each ?

This is a mission critical environment hosting websites.

Your thoughts ?

Jay
29 Apr 2014 02:23 UTC

Colin wrote: I’m having a bit of trouble with the RocketRAID 2760A. I can see the drives that I plug in with lsblk but I can’t seem to interact with the devices properly. I’m not sure where I’m going wrong yet. 03 Jul 2014 17:22 UTC

Colin wrote: If you're having trouble with the HighPoint RocketRAID 2760A and you're using drives greater than 2TB, the issue is likely that your card has RocketRAID 276x Controller BIOS v1.1. I was using 4TB drives. Version 1.2 and later seem to resolve this issue. 09 Jul 2014 14:40 UTC

javier wrote: Is there anything similarly priced nowadays with 24 ports, like the RocketRAID 2760A?

Thank you.

PS. The 9201-16i is more or less the same price, but it has only 16 ports...
15 Jul 2014 17:00 UTC

DMG wrote: Have you looked at the Adaptec Series 7 cards at all? The price point seems really good for 16 ports. 13 Aug 2014 16:46 UTC

kraig wrote: The AOC-SAS2LP-MV8 drivers only work with 2.6 Linux kernels. The mvsas driver in Ubuntu and CentOS 7 will not recognize this card without applying a patch and recompiling, which is a PITA. BSD doesn't recognize it and neither does ESXi.

For anyone considering getting that card, get the 9211-8i instead. Supported natively in Linux, BSD and ESXi.

Rule of thumb: say no to Marvell
03 Feb 2015 18:47 UTC

tony morrison wrote: Where is the best place to purchase the Adaptec Series 7 cards? Do you know where I can get the best price? 24 Jun 2015 23:07 UTC

fabien wrote: Are any more card references available that are proven HBA/JBOD? 28 Jul 2015 16:39 UTC

Bruno Skiba wrote: Any good recommendations on desktop enclosures for an 8-drive array? I want to build a ZFS array using ZFSguru or possibly FreeNAS/NAS4Free. I was thinking a mid-tower with a couple of 4- or 5-drive bays. I want this to be more of an appliance: always on, low-power, and ideally it would spin drives down if there was nothing accessing it for a time.
I've built out plenty of desktops, some workstations and servers, so I'm comfortable with the hardware end of the build.
Just curious what you all would recommend.
Regards,
Bruno
15 Aug 2015 07:12 UTC

giorgio vercelli wrote: Hi,
it is also good to rely on a controller's pass-through capability to bypass the built-in RAID features of so many cards; this enlarges the spectrum of controllers available for software RAID. I'm actually using a cheap old Areca PCI-X SATA RAID card with pass-through disks, and I find this effective.
giorgio
15 Aug 2015 07:40 UTC

Detlev wrote: Not sure if anybody is still reading this; however, while trying to finally troubleshoot my SATA card (Marvell 9128), I came across your list here.
(Low performance under Unix, erratic behaviour and drive drops - while the card tested fine under Windows.)
Nobody on the openSUSE forum seems to have any ideas - at least I got no response - and Unix Stack Exchange seems to be equally silent.
https://forums.opensuse.org/showthread.php/511116-Sata-card-erratic-behaviour-amp-failure-Marvell-88SE9128-(9123-)-Chipset
23 Nov 2015 11:54 UTC

tom wrote: Actually really interesting breakdown, thank you for your work on this. Regards, Tom 25 Nov 2015 16:57 UTC

David wrote: Many thanks for a great share of info. Appreciated. 03 Feb 2016 13:00 UTC

Waxhead wrote: A very helpful page. Thanks - hope you keep it updated! :) 14 May 2016 16:37 UTC

timofonic wrote: Half-offtopic...

I need to recover data from some HDDs using my laptop. I was totally ignorant about SAS (even though I had an HP workstation, I never played with it, and now it has broken after nearly 10 years of service... but I'm not sure if it's broken or if it's a GPU/PCIe connector issue).

I tried to find a mini-PCIe (or ExpressCard) SAS adapter and only found the Sonnet one.

Do you know if there are more mini-PCIe/M.2/ExpressCard SAS adapters out there? Maybe there's not much demand, but there are tons of small controllers out there and maybe some of them could be used in mini-PCIe/M.2.

The mini-PCIe slot is used by the really annoying Intel wifi (not so strong at wifi, and you can't do tricks to improve it; the firmware isn't well understood) + Bluetooth.

There are two M.2 buses. I wonder if the extra lanes can be put to use for extra devices in the future (USB for example). I'm also interested in adding an external PCIe enclosure to add I/O massively, but that's crazy stuff...

This is my laptop...
https://www.msi.com/Laptop/GE62-2QD-Apache-Pro.html#hero-overview

I'm thinking about how to put a bunch of expansion on this barely expandable laptop from MSI (I was a laptop n00b; now I regret it).
15 Sep 2016 19:10 UTC

xraynorm wrote: Thank-you this is very useful. 28 Dec 2016 19:35 UTC

davidyuen wrote: Hi, I would like to add info on two SATA cards, as below.
I have Syba 88SE9125 & Syba ASMedia 1061 cards; they work in Solaris 11.3 on X470 & C612 machines.
88SE9125 card - 4x SATA 3.0 ports, PCIe x1, <=3TB disks;
ASMedia 1061 - 2x SATA/2x eSATA 3.0 ports, PCIe x1, <=3TB disks;
thanks
25 Jun 2019 09:47 UTC

almkglor wrote: I am currently using an HBA based on the JMB585 chipset. This is PCIe 3.0 x2, with 5x SATA3. On Linux it seems to use the "ahci" driver. Given your heuristic of 60%-70% of the PCIe link, PCIe 3.0 supposedly reaches up to 8Gb/s per lane, so 1GB/s per lane, times 2 lanes = 2GB/s, divided by 5 ports would be 400MB/s, then the 60% would be 240MB/s per port. 15 Apr 2021 16:35 UTC
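A back-of-envelope check of the arithmetic above (the PCIe 3.0 per-lane rate and the 60% heuristic are assumptions; note the results come out in MB/s, not GB/s):

```python
PCIE3_LANE_MBPS = 985   # PCIe 3.0: 8 GT/s with 128b/130b encoding, ~985 MB/s/lane
lanes, ports, efficiency = 2, 5, 0.6   # JMB585: x2 host link, 5 SATA ports

link = PCIE3_LANE_MBPS * lanes   # raw host link bandwidth: ~1970 MB/s
per_port = link / ports          # if all 5 ports stream simultaneously: 394 MB/s
usable = per_port * efficiency   # with the 60% heuristic: ~236 MB/s per port
print(round(per_port), round(usable))
```

Since a single SATA3 port tops out around 550-600 MB/s anyway, the x2 link only becomes the bottleneck when 4-5 drives stream at once.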

ChrisC wrote: Hi, thanks for this guide.

I see a glut of ASM1064 cards on eBay, seemingly all from Chinese sellers. These, if legitimate, are native 4-port cards, no port multipliers, with PCIe gen 3 x1 bandwidth using the standard AHCI protocol/drivers.

My thinking is to use my native AMD ports for my storage drives and boot off the ASM1064 integrated on the board - poor bandwidth, but at least I know it's a legit chip.
I also add a PCIe SATA card in the second gen 3 PCIe slot on the board, and this is where it gets difficult; your list helps though.

My ZFS pool will start with only 2 drives and later expand to 4, and hopefully that will be enough; my problems only really come if I expand it past 4 drives.
21 Jul 2021 04:57 UTC