Hacker News

Just with tiny amounts of data.





Compared to today. We thought we used large amounts of data at the time.

"We thought we used large amounts of data at the time."

Really? Did it take at least an entire rack to store?


We didn't measure data size that way. At some point in the future someone will find this dialog and think that we don't have large amounts of data now, because we aren't using entire solar systems for storage.

Why couldn't you use a rack as a unit of storage at the time? Were 19" server racks not in common use yet? The storage capacity of a rack will grow over time.

My storage hierarchy goes: 1) one storage drive; 2) one server maxed out with the biggest storage drives available; 3) one rack filled with servers from 2; 4) one data center filled with racks from 3.
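The hierarchy above can be put into rough numbers. This is a back-of-envelope sketch with illustrative assumptions (32 TiB drives, 60 drives per server, 10 servers per rack, 100 racks per data center), not figures from the comment:

```python
TIB = 2**40  # bytes in a tebibyte
PIB = 2**50  # bytes in a pebibyte

# Illustrative assumptions for each tier of the hierarchy
drive = 32 * TIB          # 1) one big storage drive
server = 60 * drive       # 2) one server maxed out with drives
rack = 10 * server        # 3) one rack of those servers
datacenter = 100 * rack   # 4) one data center of those racks

for name, cap in [("drive", drive), ("server", server),
                  ("rack", rack), ("datacenter", datacenter)]:
    print(f"{name}: {cap / PIB:.2f} PiB")
```

Each tier is a couple of orders of magnitude bigger than the one below it, which is why "which tier does your dataset fit in" is the question that matters.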


How big is a rack in VW beetles though?

It's a terrible measurement because it's an irrelevant detail about how the data is stored, one that nobody actually knows if your data is in a proprietary cloud, except the people who work there on that team.

So while someone could say they used a 10 TiB dataset, or 10T parameters, how many "racks" of AWS S3 that amounts to is not known outside of Amazon.


A 42U 19" rack is an industry standard. If you actually work on the physical infrastructure of data centers, it is most CERTAINLY NOT an irrelevant detail.

And whether your data can fit on a single server, single rack, or many racks will drastically affect how you design the infrastructure.


A standard so standard you had to give two of its dimensions so as not to confuse it with something else? Like a 48U-tall data center rack, or a 23"-wide telco rack?

Okay, so it is relatively standard these days, but the problem is that you can change how many "U" or racks you need for the same amount of storage based on how you want to arrange it for a given use case, which will affect access patterns and how it's wired up.

A single server could be a compute box hosting no disks (at which point your dataset at rest won't even fit), or a 4U chassis holding 60 SATA drives vertically, at which point you could get 60 × 32 TiB, about 1.9 pebibytes, for your data in 2024, but it would be a bit slow and have no redundancy. You could fit ten of those in a single rack for roughly 19 pebibytes with no ToR switch, and just run twenty 1-gig Ethernet cables out (two per server), but what would be the point of that, other than a vendor trying to sell you something?
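The arithmetic above checks out as a quick sketch (drive size and chassis counts are the comment's own figures, taken here as assumptions):

```python
drive_tib = 32          # biggest SATA drives mentioned, in TiB
drives_per_server = 60  # 4U top-loading chassis
servers_per_rack = 10   # ten 4U servers in one rack, no ToR switch

per_server_tib = drive_tib * drives_per_server       # 1920 TiB
per_server_pib = per_server_tib / 1024               # ~1.9 PiB
per_rack_pib = per_server_pib * servers_per_rack     # ~18.75 PiB

print(f"per server: {per_server_pib:.2f} PiB, per rack: {per_rack_pib:.2f} PiB")
```

So "about 1.9 pebibytes per server" and "roughly 19 per rack" are consistent, and the exact rack figure is 18.75 PiB.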

Anyway, say you're told the dataset is 1 petabyte in 2024: is it on a single server or spread across many, possibly duplicated across multiple racks as well? You want to actually read the data at some point, and properly tuning the storage array(s) to keep workers fed without bottlenecking on reads may involve changes to the system layout if you don't have a datacenter fabric with that kind of capacity. Which puts us back at sharding the data across multiple places, at which point, even though the data does fit on a single server, it's spread out across a bunch for performance reasons.
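The read-bottleneck point is easy to see with a back-of-envelope calculation. This sketch assumes the two-1-GbE-per-server wiring from the earlier example and compares reading 1 PiB from one server versus ten shards in parallel:

```python
PIB = 2**50  # bytes in a pebibyte

def read_time_hours(total_bytes: int, gbit_per_s: float) -> float:
    """Hours to stream total_bytes at a given aggregate link speed."""
    bytes_per_s = gbit_per_s * 1e9 / 8
    return total_bytes / bytes_per_s / 3600

# One server on 2x1GbE vs. the same data sharded across 10 such servers
single = read_time_hours(PIB, 2)    # ~1251 hours, i.e. ~52 days
sharded = read_time_hours(PIB, 20)  # ~125 hours, i.e. ~5 days

print(f"single server: {single:.0f} h, sharded across 10: {sharded:.0f} h")
```

Even though 1 PiB fits on one server, the network math alone pushes you toward sharding, exactly as the comment says.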

Trying to derive server layout from dataset size is like asking about the number of lines of code in a project. A repo with 1 million LoC is different from one with 1,000, sure, but what can you really get from that?




