I don't remember the specifics; it was around the 1.0 release of Meteor, so... eight years ago? But the core concept was a sports betting dashboard, live-updating the odds. Updates were coming in thick and fast: hundreds of betting positions would change every couple of seconds as bookies tried to boost their margins during games in progress.
In testing it was beautiful. With simulated updates on a local machine, instant updates. Instant. Everyone's happy. Deployed to the server and connected to the data firehose? Feedback was still okay, with just the client's employees and us browsing every now and then. Slightly slower, but hey, it's on a remote server now; that's got to be the issue.
Went live, the client ran the advertising campaign, and users flocked. Thing is, they all flocked at the same time, when the games were on. And updates were coming in fastest while the games were on. Those two things multiplied together to firmly peg the server CPUs at 100%. The client was also not thrilled about throwing more and more boxes at it to try to stop the bleeding. Resource consumption was going up geometrically with user count, something I hadn't seen before with any technology stack.
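A deliberately simplified cost model of what was happening (the numbers are made up, but the shape is the point): every incoming update has to be diffed against and pushed to every subscribed client, so server work is roughly the product of the two rates, and both peaked at exactly the same moment.

```typescript
// Toy model of live-query fanout cost. Each odds update must be diffed
// and pushed to every connected subscriber, so server work grows with
// updates/sec * concurrent clients. All numbers below are illustrative.
function fanoutWorkPerSecond(updatesPerSec: number, clients: number): number {
  return updatesPerSec * clients; // one diff-and-send per client per update
}

// Local testing: a trickle of simulated updates, a handful of browsers.
console.log(fanoutWorkPerSecond(10, 5)); // 50 ops/sec -- feels instant

// Game night: hundreds of position changes/sec and thousands of viewers,
// with both peaks landing at the same time.
console.log(fanoutWorkPerSecond(300, 5000)); // 1,500,000 ops/sec -- CPUs pegged
```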
All in all, it taught me there is no such thing as a free lunch. You pay somewhere: worse developer experience, higher resource requirements, more development cost and time. No such thing as a silver bullet.
Also, keeping data in sync without transactions in a Mongo cluster provided endless educational entertainment. We needed to process incoming payment confirmations from the bank and update the "credits" balance of users. Entirely too often one of those two writes would go through while the other failed, especially under load, leaving balances out of sync. I hear it's gotten better, but since then I've refused to treat Mongo as anything but a non-authoritative cache.
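For context, MongoDB didn't get multi-document transactions until 4.0, years after this project, so the payment write and the balance write could only succeed or fail independently. Today you could wrap the pair in a session; a minimal sketch with the Node driver, assuming hypothetical database, collection, and field names:

```typescript
import { MongoClient } from "mongodb";

interface UserDoc { _id: string; credits: number; }

// The two writes that used to race: record the bank's confirmation,
// then credit the user. Inside withTransaction (MongoDB 4.0+ on a
// replica set), either both commit or neither does.
async function applyPayment(client: MongoClient, userId: string, amount: number, bankRef: string) {
  const db = client.db("betting"); // hypothetical database name
  const session = client.startSession();
  try {
    await session.withTransaction(async () => {
      await db.collection("payments").insertOne(
        { userId, amount, bankRef, confirmedAt: new Date() },
        { session }
      );
      await db.collection<UserDoc>("users").updateOne(
        { _id: userId },
        { $inc: { credits: amount } },
        { session }
      );
    });
  } finally {
    await session.endSession();
  }
}
```

Back then the realistic workarounds were hand-rolled two-phase commit or keeping the money path in a database that already had transactions.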