colin.jack : mongodb (65)

Bye Bye Mongo, Hello Postgres | Hacker News
Excellent write-up of a migration from MongoDB to Postgres.
guardian  mongodb  PostgreSQL  Migrations 
6 weeks ago by colin.jack
AWS Launches New Document-Oriented Database Compatible with MongoDB
"Amazon positioned DocumentDB as a drop-in replacement that's "designed to be compatible with your existing MongoDB applications and tools." AWS claims that DocumentDB offers the scalability, availability, and performance needed for production-grade MongoDB workloads."
AWS  Mongodb  DocumentDB  nosql 
february 2019 by colin.jack
Perform Two Phase Commits — MongoDB Manual 3.0
Kinda like the outbox idea, but the message is what's saved and it's applied asynchronously. A lot of work for the developer, considering this could be needed for many use cases in distributed systems, especially when data is denormalized across multiple documents.
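A minimal sketch of that shape in Python with pymongo, assuming invented collection and field names (pending_transfers, accounts) rather than the manual's exact example: the message is written first as its own document, and a worker applies it later.

```python
# Rough sketch of the "save the message first, apply it asynchronously" idea
# in pymongo. Collection and field names are illustrative, not the manual's.
from datetime import datetime, timezone

from pymongo import MongoClient, ReturnDocument

client = MongoClient()
db = client.app


def request_transfer(source_id, dest_id, amount):
    # The only synchronous write: record what should happen as a document.
    db.pending_transfers.insert_one({
        "source": source_id,
        "dest": dest_id,
        "amount": amount,
        "state": "initial",
        "created_at": datetime.now(timezone.utc),
    })


def apply_pending_transfers():
    # Background worker: claim one pending message and apply it.
    msg = db.pending_transfers.find_one_and_update(
        {"state": "initial"},
        {"$set": {"state": "applying"}},
        return_document=ReturnDocument.AFTER,
    )
    if msg is None:
        return
    # Each update below is atomic only on its own document, so the worker has
    # to tolerate retries; that is the developer burden noted above.
    db.accounts.update_one({"_id": msg["source"]},
                           {"$inc": {"balance": -msg["amount"]}})
    db.accounts.update_one({"_id": msg["dest"]},
                           {"$inc": {"balance": msg["amount"]}})
    db.pending_transfers.update_one({"_id": msg["_id"]},
                                    {"$set": {"state": "done"}})
```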
Messaging  Outbox  EventualConsistency  Mongodb  Architecture  NoSql 
december 2015 by colin.jack
Transactional event-based NOSQL storage | Svend
"It is also important to fully validate the business action before outputting the resulting event to DB. The propagation job runs asynchronously and will have no simple way of warning the initiator of the update that some input parameters is incorrect or inconsistent."
DomainModel  ddd  Events  Messaging  mongodb  nosql  eventsourcing 
november 2015 by colin.jack
Saving Aggregate and Domain Events together in MongoDB - Google Groups
I was caught off guard by just how limiting a "single document transaction" actually is in practice when I tried to use MongoDB. Considering that it's also based on mutating state.....ouch.

Instead of trying to bend Mongo, I would try:
- using an event store [like Get Event Store]
- using a document db that supports multi doc transactions [like RavenDB]
- using a queue for domain events [NServiceBus or EasyNetQ]

Depending on which you choose, there are various strategies "to support the required consistency".
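As an illustration of the third option, here is a rough Python sketch using RabbitMQ via pika as a stand-in for EasyNetQ/NServiceBus; the queue name and document shape are invented.

```python
# Sketch of "a queue for domain events": save the aggregate, then publish its
# events to RabbitMQ. Queue name and document shape are illustrative.
import json

import pika
from pymongo import MongoClient

db = MongoClient().app

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="domain_events", durable=True)


def save_and_publish(order, events):
    # Single-document write: atomic on its own.
    db.orders.replace_one({"_id": order["_id"]}, order, upsert=True)
    # Separate publish: if the process dies between these two calls the
    # events are lost, which is exactly where the "required consistency"
    # strategies mentioned above have to come in.
    for event in events:
        channel.basic_publish(exchange="",
                              routing_key="domain_events",
                              body=json.dumps(event))
```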
Mongodb  DDD  messaging 
november 2015 by colin.jack
The Ideal Domain-Driven Design Aggregate Store?
When thinking of a JSON-based store, no doubt your mind is immediately drawn to MongoDB. That’s just how MongoDB works. While true, MongoDB still falls short of filling the needs of DDD Aggregates in one very important way. In our park bench discussion I noted how MongoDB was close to what I wanted, but that you could not use MongoDB to both update an Aggregate’s state to one collection in the store and append one or more new Domain Events to a different collection in the same operation. In short, MongoDB doesn’t support ACID transactions. This is a big problem when you want to use Domain Events along with your Aggregates, but you don’t want to use Event Sourcing. That is, your Domain Events are an adjunct to your Aggregate state, not its left fold. Hopefully I don’t have to explain the problems that would occur if we successfully saved an Aggregate’s state to MongoDB, but failed to append a new Domain Event to the same storage. That would simply make the state of the application completely wrong, and no doubt would lead to inconsistencies in dependent parts of our own Domain Model and/or those in one or more other Bounded Contexts.
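For contrast, a hedged sketch of the alternative the article argues for, using PostgreSQL through psycopg2 (table and column names are invented): the aggregate state and its new Domain Events are written in a single transaction, so neither can be persisted without the other.

```python
# Sketch: aggregate state and domain events written atomically in PostgreSQL.
# Table and column names are illustrative.
import json

import psycopg2


def save_aggregate_with_events(conn, aggregate_id, state, events):
    with conn:                      # commit on success, rollback on any error
        with conn.cursor() as cur:
            cur.execute(
                """
                INSERT INTO aggregates (id, state)
                VALUES (%s, %s)
                ON CONFLICT (id) DO UPDATE SET state = EXCLUDED.state
                """,
                (aggregate_id, json.dumps(state)),
            )
            for event in events:
                cur.execute(
                    "INSERT INTO domain_events (aggregate_id, body) "
                    "VALUES (%s, %s)",
                    (aggregate_id, json.dumps(event)),
                )


conn = psycopg2.connect("dbname=app")
save_aggregate_with_events(conn, "order-42", {"status": "shipped"},
                           [{"type": "OrderShipped", "order_id": "order-42"}])
```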
MongoDB  DDD  Aggregates  PostgreSQL 
july 2015 by colin.jack
mongodb - What does single server durability mean? - Stack Overflow
Full single server durability is part of the ACID paradigm, more specifically durability ( http://en.wikipedia.org/wiki/ACID#Durability ).

It requires that if a server goes down, in a single server environment, it will not lose its data or become corrupt.

MongoDB only partly abides by the ACID rule of durability due to the nature of delayed writes to both the journal and disk. Even though this window is small (like 60 ms) there is still a small possibility of losing a couple of operations in a single server environment unless you were to use journal acknowledged writes in your application.

In the event of a failure with journal acknowledged writes on you would be able to replay the journal to ensure that the only operations to lose would be the ones that were incapable of reaching the server before it failed.

In the event of a failure without journal acknowledged writes there would be the possibility of losing all operations within a 60 ms window; you must decide whether that matters to you. On a single server environment your site is probably too small to care, tbh.
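For reference, the two write modes the answer contrasts look roughly like this in Python with pymongo; the collection names are made up, and WriteConcern(j=True) is pymongo's way of asking for journal-acknowledged writes.

```python
# Default acknowledged writes vs. journal-acknowledged writes in pymongo.
# Collection names are illustrative.
from pymongo import MongoClient, WriteConcern

db = MongoClient().app

# Acknowledged by the server, but it may sit in the journal buffer for a
# short window before being flushed to disk.
db.events.insert_one({"type": "page_view"})

# Does not return until the write is committed to the on-disk journal, so a
# crash immediately afterwards cannot lose it.
journaled = db.get_collection("events",
                              write_concern=WriteConcern(w=1, j=True))
journaled.insert_one({"type": "payment_captured"})
```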
Mongodb 
march 2013 by colin.jack
mongodb-user | Google Groups
I thought about this a bit more. In systems I've worked on the requirement I describe pops up regularly, and although the solution you describe would work, it would add to the complexity of the application code, so I was thinking about whether MongoDB could support it directly.

In particular I was wondering if you'd consider supporting a special append-only "notification" collection. It would be the only collection you could enlist in transactions involving other documents. I realise these sorts of transactions are specifically avoided in MongoDB, but I thought it might be practical here because you could guarantee the transaction is local to that machine and there would be no issues with sharding and so on (as sharding the notifications makes no sense).

I guess really this is something like an application-focused oplog. Each MongoDB server could then have a single process reading the notifications and working out what to do with them.

Maybe this is totally impractical, but thought I'd suggest it anyway.
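Purely to illustrate the suggestion (nothing like this ships with MongoDB), a single reader process over an append-only capped collection might look like this in pymongo; the collection name and document handling are invented.

```python
# Sketch of one process draining an append-only "notifications" collection,
# roughly the application-focused oplog described above. All names invented.
import time

import pymongo
from pymongo import MongoClient

db = MongoClient().app

# Capped collections support tailable cursors, the closest built-in fit for
# an append-only notification stream.
if "notifications" not in db.list_collection_names():
    db.create_collection("notifications", capped=True, size=8 * 1024 * 1024)
    # A tailable cursor on an empty capped collection dies immediately,
    # so seed it with a marker document.
    db.notifications.insert_one({"type": "stream-start"})


def handle(notification):
    # Application-specific dispatch would go here.
    print(notification)


cursor = db.notifications.find(cursor_type=pymongo.CursorType.TAILABLE_AWAIT)
while cursor.alive:
    for notification in cursor:
        handle(notification)
    time.sleep(1)                   # nothing new yet; keep tailing
```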
mongodb  Transactions 
february 2013 by colin.jack
Mongodb - Colin Jack - com.googlegroups.mongodb-user - MarkMail
Let's say a request comes in and we want to update a single document then send off an event message. Sending the event message could be needed for several reasons:

1. Kicking off some back-end processing based on the update.
2. Letting other systems know about the change.
3. Updating other documents affected by the change.

One way to do this when working with a relational DB (depending on the DB and other technologies used) would be to use enterprise messaging and have the update to the table in a transaction that also included getting the message onto the local queue. This means you couldn't end up updating the DB and not saving the message, or vice versa.

My question is: how do you handle this when using MongoDB?

In case an example helps: updating an address. A user uses our app to request an address update and specifies it's correcting an earlier mistake. In handling the request we want to update the appropriate document and also publish an AddressCorrection message so other systems know of the change. Not ever notifying those systems would be …
mongodb  Transactions 
february 2013 by colin.jack
wearefractal/smog · GitHub
"smog (License: MIT, npm: smog) from Fractal is a web-based MongoDB interface. It displays collections, and allows them to be sorted and edited. It also supports administration features, like shutting down servers, CPU/bandwidth usage graphs, and replica set management.

It’s built with Connect, and there’s an experimental GTK+ desktop interface made with the pane module by the same authors."
mongodb 
september 2012 by colin.jack
mongo-lite.js
"mongo-lite (GitHub: alexeypetrushin / mongo-lite, License: MIT, npm: mongo-lite) by Alexey Petrushin aims to simplify MongoDB by removing the need for most callbacks, adding reasonable defaults like safe updates, and offering optional compact IDs."
Node.js  Mongodb 
september 2012 by colin.jack
High Scalability - Ask HighScalability: Facing scaling issues with news feeds on Redis. Any advice?
Excellent.

"If you're storing all of your newsfeed data in Redis, you're doing in wrong!

Redis is great at storing the structure of information, but it's expensive to run because the cost/gb is very high. Storing the individual news feed items in Redis isn't necessary; you only need to know which newsfeed items correspond to which newsfeed. Fortunately, RDBMSs are generally VERY inexpensive in terms of cost/gb, though they are very expensive for aggregate lookups (what Redis does very well inexpensively).

I'd recommend storing the newsfeed data in a traditional MySQL/PostgreSQL db (pk'ed on the ID of the newsfeed item) and just pointing the individual newsfeed lists in redis at those IDs. You can then use Memcached (generally pretty cheap) to cache requests to the RDBMS.

For now (since newsfeeds for you are capped at 300 items and all items will eventually expire as they're pushed off the end), I'd say that you can do this: start storing IDs instead of the posts, prefixed with some kind of "ID prefix flag". If you fetch a newsfeed item, check to see whether it begins with the prefix flag. If it does, it's an ID and you should grab it from the DB. If not, you already have the newsfeed item."
"In my extensive experience in Redis, Mongo, large data and stretching a small budget I would recommend the following.

1. Store the IDs as keys in redis as most have suggested.
2. Use MongoDB to store the backend data using safe mode to ensure the data is written to disk. This allows you to scale horizontally using MongoDB's easy sharding. Stay away from RDBMSs when using big data unless you have the need for transactions and have an endless pit of money to throw at scaling. An edge graph like Neo4J would also work well but I would recommend using mongo as you can use it for other things should you need to."
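A rough Python sketch of the "ID prefix flag" migration described in the first answer, using redis-py; the key names, prefix, and backing-store lookup are all illustrative.

```python
# "Store IDs with a prefix flag, fall back to old full items" in redis-py.
# Key names, prefix, and the backing-store lookup are illustrative.
import json

import redis

r = redis.Redis()
ID_PREFIX = "id:"          # marks entries that are references, not payloads
FEED_CAP = 300             # feeds are capped at 300 items


def push_item(user_id, item_id):
    key = f"feed:{user_id}"
    r.lpush(key, ID_PREFIX + str(item_id))
    r.ltrim(key, 0, FEED_CAP - 1)   # old entries fall off the end


def load_feed(user_id, fetch_from_db):
    items = []
    for raw in r.lrange(f"feed:{user_id}", 0, FEED_CAP - 1):
        entry = raw.decode()
        if entry.startswith(ID_PREFIX):
            # New-style entry: only the ID lives in Redis, body in the RDBMS.
            items.append(fetch_from_db(entry[len(ID_PREFIX):]))
        else:
            # Old-style entry from before the migration: full item in Redis.
            items.append(json.loads(entry))
    return items
```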
Feeds  Scalability  HighScalability  Redis  NoSql  Application  mongodb 
september 2012 by colin.jack
Importing data from MongoHQ, and sending to MongoLab (as Heroku plugin) « Fabiano PS
MongoLab and MongoHQ have different ways of counting and charging for data, in ways that can make a huge difference. For instance, for the same 6,582 documents under 1 collection, I may get 2.18 MB in one, while in the other I get 6.91 MB! (see comments)
mongodb  heroku 
september 2012 by colin.jack
From MongoDB to Riak at Shareaholic • myNoSQL
Why not MongoDB?

- working set needs to fit in memory
- global write lock blocks all queries despite not having transactions/joins
- standbys not “hot”
Riak  Mongodb  nosql 
september 2012 by colin.jack
A Year with MongoDB | Hacker News
So that's not actually "safe". If you issue an insert in the default "fire and forget" mode and that insert causes an error (say a duplicate key violation), no exception will be thrown.
Even with journaling on, your code does not get an exception.
Journaling is a method for doing "fast recovery" and flushing to disk on a regular basis. "Write Safety" is a method for controlling how / where the data has been written. So these are really two different things.
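A small pymongo illustration of the difference; the collection and index are mine. The unacknowledged write never reports the duplicate-key violation, while the default acknowledged write raises it.

```python
# Unacknowledged ("fire and forget") vs. acknowledged writes in pymongo.
# Collection and index are illustrative.
from pymongo import MongoClient, WriteConcern
from pymongo.errors import DuplicateKeyError

db = MongoClient().app
db.users.create_index("email", unique=True)
db.users.insert_one({"email": "a@example.com"})

# w=0: the driver does not wait for the server, so the duplicate-key
# violation below is never reported back to the application.
fire_and_forget = db.get_collection("users", write_concern=WriteConcern(w=0))
fire_and_forget.insert_one({"email": "a@example.com"})   # error swallowed

# Default acknowledged write: the same violation raises an exception.
try:
    db.users.insert_one({"email": "a@example.com"})
except DuplicateKeyError:
    print("duplicate rejected")
```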
mongodb 
may 2012 by colin.jack
A Year with MongoDB - Engineering at Kiip
The list of issues that made them move from MongoDB makes for worrying reading.

See also http://news.ycombinator.com/item?id=3837772.
mongodb  riak 
may 2012 by colin.jack
Real world MongoDB benchmarks with benchRun « Server Density Blog
There is a tool built into MongoDB that allows you to benchmark specific queries in a custom way, so you can hit it with realistic queries – it’s called benchRun and is documented as the MongoDB JS Benchmarking Harness.
mongodb 
may 2012 by colin.jack
MongoDB Work Queues: Techniques to Easily Store and Process Complex Jobs • myNoSQL
Using MongoDB as a queueing system is in many regards as good and as wrong as using a relational database for this type of functionality. They completely lack the semantics and features required by both queues and pubsub. Redis (and obviously the dedicated MOMs) supports natively both queues and pubsub semantics.
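For context, the technique such MongoDB work queues usually rely on, and the one the quoted caveats apply to, is claiming a job with an atomic find-and-modify. A sketch in pymongo with invented collection and field names:

```python
# Sketch of a MongoDB-backed work queue: jobs are claimed atomically with
# find_one_and_update. Collection and field names are illustrative.
from datetime import datetime, timezone

from pymongo import MongoClient, ReturnDocument

db = MongoClient().app


def enqueue(payload):
    db.jobs.insert_one({"payload": payload, "state": "queued",
                        "enqueued_at": datetime.now(timezone.utc)})


def claim_job(worker_id):
    # Atomic on a single document, so two workers cannot grab the same job.
    return db.jobs.find_one_and_update(
        {"state": "queued"},
        {"$set": {"state": "running", "worker": worker_id,
                  "started_at": datetime.now(timezone.utc)}},
        sort=[("enqueued_at", 1)],          # oldest first
        return_document=ReturnDocument.AFTER,
    )
```

Note there is no blocking pop and no fan-out here; workers have to poll, which is the queue/pub-sub semantics the quote says are missing.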
mongodb  messaging  redis  queuing 
april 2012 by colin.jack
MongoDB 2.0 Should Have Been 1.0 | Luigi Montanez
MongoDB is on its way to becoming the default datastore for web apps. At version 2.0, it is finally a stable product free of unexpected surprises. That is, a proper 1.0 release. With this stability, developers should seriously consider working with it. Its developer experience is unmatched in the world of datastores, though Redis comes in at a close second.
mongodb 
march 2012 by colin.jack
locking - Is it true that MongoDB has one global read/write lock? - Stack Overflow
yes. it's true: http://www.mongodb.org/display/DOCS/How+does+concurrency+work

but they are working on it, if you look at the change log of the 2.0, they started to deal with it: http://blog.mongodb.org/post/10126837729/mongodb-2-0-released

The read/write lock is currently global, but collection-level locking is coming soon
mongodb 
march 2012 by colin.jack
MongoDB and Riak, In Context (and an apology) - sean cribbs :: digital renaissance man
The honeymoon phase of NoSQL is over. Will 10gen make the hard decisions it needs to make MongoDB easier to scale out and give it greater durability, while maintaining its reputation for snappy performance? I believe they will. Will Basho improve Riak’s developer-friendliness and raw performance, while maintaining its reputation for simplicity and reliability in operations? I have no doubt.
nosql  riak  mongodb 
march 2012 by colin.jack
Surviving a Production Launch with Node.js and MongoDB - sean hess
"While we tried to make MongoDB non-relational, it almost never worked. TV-Guide data contains several many-to-many relationships, and you simply can't store them as nested objects. For example, a user chooses a Lineup, which lists their available Channels. Each Channel has a list of Events, which map an Episode to a Channel and start time. In theory, we could store the Events underneath the Channel, but that would mean we would have to pull out ALL the Events per Channel to get ANY of them. It ended up being easiest to store Events as separate documents.
What MongoDB really does well, though, is make every JOIN explicit. This encouraged us to denormalize data onto the most specific documents to avoid a second query. For example, by storing the name of the show on each Event, we can avoid having to hit the database a second time to get information about the show. It encourages you to make each document you DO have usable without a second trip to the db. "
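A small illustration of the layout being described; the field names are guesses at the TV-guide domain, not the author's actual schema.

```python
# Events as separate documents, with the show title denormalised onto each
# one so a schedule query never needs a second lookup. Names illustrative.
from pymongo import MongoClient

db = MongoClient().tvguide

db.events.insert_one({
    "channel_id": "bbc-one",
    "episode_id": "ep-1234",
    "show_title": "Doctor Who",      # copied from the Episode document
    "starts_at": "2011-12-25T18:00:00Z",
})

# One query answers "what's on this channel?" without touching episodes.
for event in db.events.find({"channel_id": "bbc-one"}).sort("starts_at", 1):
    print(event["starts_at"], event["show_title"])
```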
Mongodb  Node.js  DocumentDB 
december 2011 by colin.jack
Should I Use MongoDB or CouchDB? • myNoSQL
"Riyad Kalla also added a section about Redis and how he finds it useful in most cases in a caching capacity or queue capacity."
Mongodb  couchdb  Redis 
october 2011 by colin.jack
MongoDB: 10 Things You Might Not Know About It :: myNoSQL
MongoDB doesn’t support multi-master replication. They think this is good because it keeps things logically simple. Also, the system is far simpler as it doesn’t have to worry about write-collisions across multiple masters.
MongoDb 
july 2011 by colin.jack
Hacker News | MongoDB as a better default data store
Interesting argument against the document database approach - "Refactoring relationships is difficult. When you start building an app you don't know how the model is going to turn out. With a relational model that's fine, you can add/remove relationships easily. With documents your models are organised into hierarchies that they have to be aware of. The code that handles this hierarchy is incredibly brittle. If you move a model from one document to another, or just to its own document, you're going to have lots of code to fix. Maybe it's a case of needing some new refactoring patterns, but rest assured, they'll be far more complicated than refactoring patterns for relational stores."
mongodb 
june 2011 by colin.jack
