And, to give some context to what’s going on here:
Over the coming year we’re hoping to expand Goodreads into other countries and make it more accessible to users that speak languages other than English. While we have a large and incredible army of librarians, that army won’t necessarily scale to meet the needs of users all over the world as we begin to fully expand our library to encompass book editions for more and more countries.
Furthermore, even if we could inspire the help of volunteers around the world as quickly as we’ll need them, it wouldn’t be the best use of your/their time to ask you/them to manually enter data for every book that exists in the world.
Right now our goal is to use the information we’ve gathered from this first data import, along with the insights that GR Librarians have shared with us to calibrate our system for clean, non-intrusive imports that will scale as our library grows rapidly over the coming year.
I want to reiterate that we know that you guys do an incredible amount of really really amazing work. Our mission here isn’t to undo or undermine what you’re doing – it’s hopefully to develop a system that will support what you do and make your job easier (even if that’s not how it seems while we feel our way through these initial, somewhat tangled steps). [screenshot]
Short version: Sarah drops a bomb in the first paragraph, spends three paragraphs reassuring the masses, and runs off hoping her fate is not Kara’s:
Michael echoes my thoughts exactly:
This will not scale indeed. Considering the various issues with data imports, multiple language support, multiple character set support, stray editions that need to be combined, duplicated authors, multiple ISBNs without formalized storage thereof, and a data model in need of some revision – to just pump more data into this system will leave it beyond repair, and, I agree, no amount of volunteers will be able to save it. Unless you tackle the architectural/software issues first.
Good Luck. [screenshot]
Goodreads. All day, every day.