highlynt's comments | Hacker News

I recently joined Google and it definitely didn't work that way. You aren't usually interviewing for a specific team, as you would be at a smaller company. But once you get past the interview, your recruiter works with you to find teams that you will be interested in. In my case I talked to three hiring managers who were all working on the exact type of project I had expressed interest in. I've seen it work this way a bunch of times when hiring for my team since joining.

I'm sure it doesn't always work this way but from what I've seen it's certainly the intention.


> But once you get past the interview your recruiter works with you to find teams that you will be interested in

That just confirms his/her point.

If you don't know what you'll be doing until after you've gotten through the recruitment process, it's blind placement.


There's a significant difference between this and what Facebook does, where you don't know what team you will be on until after you join Facebook and complete their bootcamp.


So Facebook is even worse. Good for them!


I wish that such a process had existed in 2011 when I went to work at Google; my experience there might have been less miserable and consequently less brief.


This has come up on HN before... One of the bidders has apparently run a load test on Google Cloud with some impressive numbers: https://cloudplatform.googleblog.com/2016/03/financial-servi...


Yep, that's FIS, running atop our now Generally Available release of Cloud Bigtable (https://cloud.google.com/bigtable/). With the HBase compatibility, several folks have swapped out Cassandra for Bigtable (like Spotify, mentioned in our GA announcement https://cloudplatform.googleblog.com/2016/08/Google-Cloud-Bi...).

Disclosure: I work on Google Cloud, so I want you to use Bigtable ;).


~20 GB/s read/write across a thousand-plus cores seems slow, especially for embarrassingly parallel data such as this (split on security). That works out to only about 20 MB/s per core. Am I missing something?


They're not doing sequential scans of files on disk; they're doing random reads and writes in a database, where each write is replicated and durable, in parallel, across the entire key space of market transactions. The task was to reconcile market transactions end-to-end by matching orders with their parent/child orders (e.g., as orders get merged/split or routed from broker/dealers to others or to exchanges to be executed), thus building millions (billions?) of graphs across the entire dataset. You can see more details in the video of the presentation at the bottom of this blog post (https://cloudplatform.googleblog.com/2016/03/financial-servi...), but I presume you're much more familiar with the intricacies of the stock market than I am. :)

Here's the performance you can expect to see per Cloud Bigtable server node in your cluster, whether for random reads/writes or for sequential scans: https://cloud.google.com/bigtable/docs/performance

Here's a benchmark comparing Cloud Bigtable to HBase and Cassandra that may be of interest (a different workload than the one presented in the FIS blog post, but it shows the relative price/performance): https://cloudplatform.googleblog.com/2015/05/introducing-Goo...

Disclosure: I am the product manager for Google Cloud Bigtable. Let me know if you have any other questions; I'm happy to discuss further.


Has anyone tried Cloud Bigtable? The performance numbers are compelling, but I'm not always sure where it fits in with the rest of the GCP storage options.


Bigtable is best thought of as an "event database": high read and write throughput, a single index (the row key), accessible through the HBase API. Cassandra and HBase are similar technologies that were inspired by the original Bigtable paper.
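
To make the HBase API point concrete, here's a minimal sketch of writing and reading one event row using the Cloud Bigtable HBase-compatible client for Java. BigtableConfiguration comes from the bigtable-hbase client library; the project/instance IDs, table name, column family, and row key below are made up for illustration and assume the table already exists:

    import com.google.cloud.bigtable.hbase.BigtableConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class BigtableEventExample {
      public static void main(String[] args) throws Exception {
        // Only the connection setup is Bigtable-specific; the rest is plain HBase client code.
        try (Connection connection = BigtableConfiguration.connect("my-project", "my-instance");
             Table table = connection.getTable(TableName.valueOf("events"))) {

          // Write one event row. The row key is the single index, so key design controls locality.
          byte[] rowKey = Bytes.toBytes("order#2016-08-18T12:00:00Z#42");
          Put put = new Put(rowKey);
          put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("payload"),
              Bytes.toBytes("{\"qty\":100,\"sym\":\"GOOG\"}"));
          table.put(put);

          // Random read of the same row back by key.
          Result result = table.get(new Get(rowKey));
          System.out.println(Bytes.toString(
              result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("payload"))));
        }
      }
    }

Since everything after the connect() call is the standard HBase API, code written against HBase can usually be pointed at Bigtable with little more than the connection change, which is what the compatibility mentioned above buys you.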

One big benefit of Bigtable is its scalability. To scale up, you just turn the 'scale' knob (i.e., add nodes to the cluster). By contrast, Cassandra and HBase are headaches to scale (Apple has acquired Cassandra companies to aid in operation and scale).

Here are a couple of guys from SunGard who scaled to about 3,000,000 writes per second with a couple weekends' worth of effort (something only a few shops beyond the likes of Facebook, Netflix, and Apple can achieve): https://cloud.google.com/bigtable/pdf/SunGardCATCaseStudy.pd...


Hey. I'm one of the "guys from SunGard", although I'm no longer there. The longer version is this: https://cloud.google.com/bigtable/pdf/ConsolidatedAuditTrail... . A lot of it is related to the use case, but yeah, Bigtable handled pretty much whatever we wanted to throw at it. No other cloud provider can offer this sort of scale and performance right now without a ton of manual management or significant compromises, something that seems to have yet to sink in (although few companies need the scale we went up to).

It did take a lot more work than "a couple weekends" though :).


Similar story here: developer at an NYC HFT shop making $180k base + about the same in bonus and equity grants.

The work is mostly very interesting: low-latency Java programming, distributed systems, etc., on a small, highly capable team. 40-50 hours per week.


If you're unable to expand on your role, that's fine, but since this is a huge interest of mine, I'll take a shot and ask anyway.

What sort of problems do you attempt to solve? How much risk are you allowed to take? How much return is expected? Is it true that most quants are ungodly intelligent and frequently have PhDs in the sciences?

