• 0 Posts
  • 22 Comments
Joined 2 years ago
Cake day: June 16th, 2023



  • Lower storage density chips would still be tiny, geometry wise.

    A wafer of chips will have defects, and the larger the chip, the bigger the portion of the wafer spoiled per defect. Big chips are way more expensive than small chips.

    No matter what the capacity of the chips, they are still going to be tiny and placed onto circuit boards. The circuit boards can be bigger, but area density is what matters rather than volumetric density. 3.5" is somewhat useful for platters due to width and depth, but particularly due to height for multiple platters, which isn’t interesting for a single SSD assembly; 3.5 inch would most likely waste all that height. Yes, you could stack multiple boards in an assembly, but it would be better to have those boards as separately packaged assemblies anyway (better performance and thermals with no cost increase).

    So one can point out that a 3.5 inch footprint is a decently big board, and maybe get height-efficient by specifying a new 3.5 inch form factor that’s, say, 6mm thick. Well, you are mostly there with the E3.L form factor, but no one even wants those (designed around 2U form factor expectations). E1.L basically ties that 3.5 inch in board geometry, but no one seems to want those either. E1.S seems to just be what everyone will be getting.
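The defect-per-die economics above can be sketched with a standard Poisson yield model. The defect density here is a hypothetical illustrative figure, not a real process number:

```python
import math

# Poisson yield model: the fraction of defect-free dies falls off
# exponentially with die area. Defect density is a made-up
# illustrative number, not any real fab's figure.
DEFECTS_PER_CM2 = 0.2

def yield_fraction(die_area_cm2: float) -> float:
    """Expected fraction of good dies under the Poisson model."""
    return math.exp(-DEFECTS_PER_CM2 * die_area_cm2)

small, big = 1.0, 8.0  # hypothetical die sizes in cm^2
print(f"1 cm^2 die yield: {yield_fraction(small):.1%}")  # ~81.9%
print(f"8 cm^2 die yield: {yield_fraction(big):.1%}")    # ~20.2%
# A big die loses far more wafer area per defect, so the cost per
# good die grows much faster than linearly with die size.
```

Under these assumed numbers, an 8x larger die yields roughly a quarter as many good parts per wafer area, which is why chips stay small regardless of how big the drive enclosure is.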




  • There’s a cost associated with making that determination and managing the storage tiering. When NVMe is only about 3x more expensive per amount of data compared to HDD at scale, and at the cheapest end “enough” storage for an OS volume can be either a good enough HDD or a good enough SSD at the same price, then it just makes sense for the OS volume to be SSD.

    In terms of “but 3x is a pretty big gap”, that’s true and it does drive storage subsystems, but as the saying has long been: disks are cheap, storage is expensive. So managing a mixed HDD/SSD setup is generally more expensive than the disk cost difference anyway.

    BTW, NVMe vs. non-NVMe isn’t the thing, it’s NAND vs. platter. You could have NVMe-interfaced platters and they would be about the same as SAS-interfaced or even SATA-interfaced platters. NVMe carried a price premium for a while mainly because of marketing rather than technical costs; nowadays NVMe isn’t too expensive. One could argue that the number of PCIe lanes from the system seems expensive, but PCIe switches aren’t really more expensive than SAS controllers, and CPUs have so many innate PCIe lanes now.
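The “disks are cheap, storage is expensive” tradeoff can be put into a back-of-envelope break-even. All dollar figures here are hypothetical placeholders, not real market prices:

```python
# Illustrative sketch: at what capacity does the ~3x SSD media premium
# stop being cheaper than the cost of running a second storage tier?
# Every dollar figure below is an assumption for illustration.
HDD_PER_TB = 15.0                # assumed $/TB for platter at scale
SSD_PER_TB = 3 * HDD_PER_TB      # the ~3x gap from the comment
TIERING_OVERHEAD = 200.0         # assumed per-system cost (admin,
                                 # tooling, complexity) of a mixed
                                 # HDD+SSD tiered setup

def mixed_cost(tb: float) -> float:
    """HDD media plus the overhead of managing a second tier."""
    return tb * HDD_PER_TB + TIERING_OVERHEAD

def ssd_only_cost(tb: float) -> float:
    """Just buy SSD and skip tiering entirely."""
    return tb * SSD_PER_TB

# Below this capacity, paying 3x for media beats paying for tiering.
break_even_tb = TIERING_OVERHEAD / (SSD_PER_TB - HDD_PER_TB)
print(f"all-SSD wins below ~{break_even_tb:.1f} TB under these assumptions")
```

With these made-up numbers the crossover sits around 6.7 TB, i.e. small OS volumes land comfortably on the all-SSD side, which is the comment’s point.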




  • The lowest density chips are still going to be way smaller than even an E1.S board. The only place you might be able to be cheaper is that you’d maybe need fewer SSD controllers, but a 3.5" would have to be, at best, a stack of SSD boards, probably 3, plugged into some interposer board. Allowing for the interposer, maybe you could come up with 120 square centimeter boards, and E1.L drives are about 120 square centimeters anyway. So if you are obsessed with the most NAND chips per unit volume, the E1.L form factor is already, in theory, as capable as a hypothetical 3.5" SSD. If you don’t like the overly long E1.L, then in theory E3.L would be more reasonably short with 85% of the board surface area. Of course, all that said, I’ve almost never seen anyone go for anything except E1.S, which is more like M.2 sized.

    So 3.5" would be more expensive, slower (unless you did a new design), and thermally challenged.


  • Hate to break it to you, but the 3.5" form factor would absolutely not be cheaper than an equivalent bunch of E1.S or M.2 drives. The price is not inflated by the form factor; it’s driven primarily by the cost of the NAND chips, and you’d just need more of them to take advantage of the bigger area. To take advantage of the thickness of the form factor, it would need to be a multi-board solution. There would also be a thermal problem, since 3.5" bays are not designed for the thermal load of that much SSD.

    Add to that that 3.5" are currently maybe 24gb SAS connectors at best, which means that such a hypothetical product would be severely crippled by the interconnect. Throughput wise, talking about over 30 fold slower in theory than an equivalent volume of E1.S drives. Which is bad enough, but SAS has a single relatively shallow queue while an NVME target has thousands of deep queues befitting NAND randam access behavior. So a product has to redesign to vaguely handle that sort of product, and if you do that, you might as well do EDSFF. No one would buy something more expensive than the equivalent capacity in E1.S drives that performs only as well as the SAS connector allows,

    EDSFF defined 4 general form factors: E1.S, which is roughly M.2 sized; E1.L, which is over a foot long and offers the absolute most data per unit volume; and E3.S and E3.L, which want to be more 2.5"-like. As far as I’ve seen, the market only really wants E1.S despite the bigger form factors, so I think the market has shown that 3.5" wouldn’t have takers.
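The “over 30 fold” figure follows from simple link math. Link rates are nominal/raw and the drive count matching a 3.5" volume is an assumption for illustration:

```python
# Back-of-envelope interconnect comparison. Rates are nominal raw
# link speeds; the E1.S count is an assumed volume-equivalent.
sas4_gbps = 24                 # one 24G SAS link
pcie5_x4_gbps = 4 * 32         # ~128 Gb/s raw per E1.S on PCIe Gen5 x4
n_equiv_drives = 6             # assumed E1.S drives filling the
                               # same volume as one 3.5" device

aggregate_e1s_gbps = n_equiv_drives * pcie5_x4_gbps  # 768 Gb/s
ratio = aggregate_e1s_gbps / sas4_gbps
print(f"E1.S aggregate vs single SAS link: {ratio:.0f}x")
```

Under those assumptions the single SAS connector leaves roughly a 32x throughput gap, consistent with the “over 30 fold” claim, before even considering the queue-depth mismatch.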


  • Not enough of a market

    The industry answer is: if you want that much volume of storage, get like 6 EDSFF or M.2 drives.

    3.5 inch is a useful format for platters, but not particularly needed to hold NAND chips. Meanwhile, instead of having to gate all those chips behind a singular connector, you can have 6 connectors to drive performance. Again, that’s less important for a platter-based strategy, which is unlikely to saturate even a single 12 Gb/s link in most realistic access patterns, but SSDs can keep up with 128 Gb/s of utterly random IO.

    Tiny drives mean more flexibility. That storage product can go into NAS, servers, desktops, the thinnest laptops, and embedded applications, maybe with tweaked packaging and cooling solutions. A product designed to host that many SSD boards behind a single connector is not going to be trivial to modify for any other use case, will bottleneck performance by having a single interface, and is pretty much guaranteed to cost more to manufacture than selling the components as 6 drives.


  • It depends on where you are.

    Where I live, that number might be more like 1%. At my parents’ place, it’s more like 95%, based on the number of Trump signs that have stayed in yards continuously since 2016.

    There was a party a little ways into rural territory, and a lot of us went. The hostess was terrified when we started talking badly about Trump, because the window was open and the neighbors were hard Trump people with guns.




  • So I have a Ford, so it’s not quite the same, but broadly the hands-free and lane-centering features will only do pretty boring stuff. So on a freeway, even with hands-free, I keep my hands on the wheel (don’t quite trust it, and my hands need to be somewhere anyway), but my arms are not tense as often.

    At riskier intersections, it’s pretty much like driving without it, but my hands are steering roughly along the direction of the road instead of having to correct against the current direction of the car.

    In my first few cars, the cruise control was rarely useful. Then I had adaptive cruise control, which made things nicer. Now my arms are not in tension as much, which is nicer yet.

    I don’t know about actually trusting the system when dealing with cross streets and signal lights, but the most tedious parts of driving are far less constantly taxing.



  • I’ve got mixed feelings on the CHIPS act.

    It was basically born out of a panic over a short-term shortage. Many industry observers accurately stated that the shortages would subside long before any of the CHIPS spending could even possibly make a difference, and that the tech companies would then point to this as a reason not to spend the money they were given.

    That largely came to pass, with the potential exception of GPUs in the wake of the LLM craze.

    Of course, if you wanted to give the economy any hope for viable domestic electronics while also massively screwing over imports, this would have been your shot. So it seems strategically at odds with the whole “make domestic manufacturing happen” rhetoric.