image_url: string (lengths 113–131)
tags: sequence
discussion: list
title: string (lengths 8–254)
created_at: string (length 24)
fancy_title: string (lengths 8–396)
views: int64 (73–422k)
https://www.mongodb.com/…f65e14161735.png
[ "react-native", "android" ]
[ { "code": "const realmContext = createRealmContext({\n schema: [\n playlistSchema\n ],\n});\n\nconst { RealmProvider } = realmContext;\nconst RealmAppProvider = (props) => {\n ...\n return (\n <AppProvider id={appId} baseUrl={baseUrl}>\n <UserProvider fallback={RealmAuth}>\n <RealmProvider\n sync={{\n flexible: true,\n initialSubscriptions: {\n rerunOnOpen: true,\n update(subs, realm) {\n subs.add(realm.objects(playlistSchema))\n },\n },\n onError: errorCallback,\n }}\n fallback={LoadingIndicator}\n >\n {props.children}\n </RealmProvider >\n </UserProvider>\n </AppProvider>\n )\n}\nconst App = () =>{\n...\n return (\n <RealmAppProvider>\n <View style={appStyles.flexContainer}>\n <StatusBar\n backgroundColor={appTheme.colors.background}\n barStyle={Appearance.getColorScheme() == 'dark' ? 'light-content' : 'dark-content'}\n />\n <PaperProvider theme={appTheme}>\n <AppProvider>\n <SafeAreaProvider>\n <NavigationContainer ref={navigationRef}>\n <Navigation navigationRef={navigationRef} />\n </NavigationContainer>\n </SafeAreaProvider>\n </AppProvider>\n </PaperProvider>\n <Toast config={toastConfig} />\n </View >\n </RealmAppProvider>\n );\n}\n", "text": "I’m working on the device sync functionality with mongoDB to react-native app (realm). I’ve added the initialSubscription for the sync mongoDB data into realm database. App is stop after the sync data.RealmAppProvider.jsApp.jsAttached the console log image when app stops.image949×810 37.4 KB", "username": "Dhaval_Baldha" }, { "code": "", "text": "Hello, MongoDB communityI’ve been stuck on the above issue for a long time. Could you check my query and advise me on this section? It would be very useful to me.Thanks.@mongo_dba @mongo_learn @Tyler_Kaye @Mansoor_Omar", "username": "Dhaval_Baldha" }, { "code": "", "text": "Hi, can you please elaborate on what you mean by the app is stopping working? The logs don’t show anything out of the ordinary, and I am not quite sure of the details of the behaviour you are describing. Also, if you can post a link to your application in realm.mongodb.com, I can see if anything looks off in the backend logs.Best,\nTyler", "username": "Tyler_Kaye" }, { "code": "<UserProvider><RealmProvider>sync={{\n flexible: true,\n initialSubscriptions: {\n update(subs, realm) {\n subs.add(realm.objects(playlistSchema))\n },\n },\n onError: errorCallback,\n}}\n", "text": "please elaborate on what you mean by the app is stopping workingHello @Tyler_Kaye ,I apologise for the delay in responding.I’ve configured <UserProvider> and <RealmProvider> with required configs. We are currently using Realm.Credentials.apiKey based authentication, and the app is working fine for the authentication stage.After successful authentication, the RealmProvider configuration will be executed; for sync, I’ve integrated the following configuration.With the config application is able to fetch the data from mongo and write into device realmDB, After the complete process app will close automatically from foreground. When i’m restart the app it’s working fine.\nSync starts only if I use initialSubscriptions, If we’re using ‘rerunOnOpen: true’ then app is close on every launch because it fetches all data from the beginning.Our realm app URL is https://realm.mongodb.com/groups/650a789571fbe30a95085905/apps", "username": "Dhaval_Baldha" } ]
React native android/iOS app stop with data sync process from mongoDB
2023-10-04T10:16:05.847Z
React native android/iOS app stop with data sync process from mongoDB
380
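The thread above never pins down why the app exits after the initial sync, but a useful first step is to surface the sync error details through the error callback. Below is a minimal sketch of the provider wiring being discussed, assuming the `@realm/react` package; the app ID, schema, and subscription name are illustrative placeholders, not the poster's actual code.

```js
// Hedged sketch of the RealmProvider sync config discussed above.
// Assumes @realm/react; the app ID and `playlistSchema` are hypothetical.
import React from 'react';
import { AppProvider, UserProvider, RealmProvider } from '@realm/react';

// Illustrative schema only — the real one lives in the poster's project.
const playlistSchema = {
  name: 'Playlist',
  primaryKey: '_id',
  properties: { _id: 'objectId', title: 'string' },
};

// Log sync errors instead of letting them pass silently; if the process
// dies right after the initial download, the reason often shows up here.
const onSyncError = (_session, error) => {
  console.error('Realm sync error:', error.name, error.message);
};

export const RealmAppProvider = ({ children }) => (
  <AppProvider id="application-0-abcde" /* hypothetical app id */>
    <UserProvider fallback={<></>}>
      <RealmProvider
        schema={[playlistSchema]}
        sync={{
          flexible: true,
          initialSubscriptions: {
            // Without rerunOnOpen the update block only runs when the
            // subscription set is first created, so the full re-download
            // described above is avoided on later launches.
            update: (subs, realm) => {
              subs.add(realm.objects('Playlist'), { name: 'allPlaylists' });
            },
          },
          onError: onSyncError,
        }}
        fallback={<></>}
      >
        {children}
      </RealmProvider>
    </UserProvider>
  </AppProvider>
);
```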
null
[ "security", "app-services-data-access" ]
[ { "code": "mongodb-realm-<your app id>", "text": "We have multiple teams developing App Services apps; I’m concerned how to properly ensure data privileges for them. Every app has access to all linked clusters with readWriteAnyDatabase. This comes from the internal database user generated by App Services mongodb-realm-<your app id>.As this user can’t be modified, how can I ensure specific privileges for each team/app?The most fundamental concern is to be able to configure readWrite to specific databases in the linked cluster. The obvious risk (by mistake or malicious) is data integrity issues. Having each team/app configuring their own rules for data access is a no-go.So far, my only solution is to create a Project for each team/app with individual clusters and replicating shared data among them. With globally distributed clusters and large data sets, I’m not a big fan of this approach.", "username": "Mikael_Gurenius" }, { "code": "demoapp-xijqbdemoapp-xijqbmongodb-realm-demoapp-xijqbmongodb-realm-demoapp-xijqbmongodb-realm-demoapp-xijqb", "text": "How about this for a suggestion?Given\nit is possible to modify privileges for Realm database users;When\na new App Services app demoapp-xijqb is created;Then\na new database demoapp-xijqb is automatically created;\nuser mongodb-realm-demoapp-xijqb has readWrite to it;\nuser mongodb-realm-demoapp-xijqb has no other access;\nuser mongodb-realm-demoapp-xijqb can be given custom privileges;", "username": "Mikael_Gurenius" }, { "code": "const DemoColl = context.services.get('mongodb-atlas').db(context.app.clientAppId).collection('Demo')\n%(%%environment.values.database)demoapp/data_sources/mongodb-atlas/%(%%environment.values.database)/Demo/rules.json\n\n{\n \"collection\": \"Demo\",\n \"database\": \"%(%%environment.values.database)\",\n \"roles\": [\n {\n \"name\": \"default\",\n \"apply_when\": {},\n \"insert\": true,\n \"delete\": true,\n \"search\": true,\n \"additional_fields\": {}\n }\n ]\n}\ndemoapp-xijqbdemoapp-xijqb-developmentdemoapp-xijqb-qademoapp-xijqb-production", "text": "This would give us quite a few benefits.A new app always gets a dedicated database; very easy to deploy the same app multiple times in the same project without interfering with someone else.We can write code like this (we already do this but you get the idea):Even better would be to have the default database name as a context/placeholder value. Using %(%%environment.values.database) is OK but a little error-prone in the deployment phase.Another thought is to combine database name with environment so there is 4 databases in the cluster:", "username": "Mikael_Gurenius" }, { "code": "", "text": "Hi Mikael_Guenius,This is a feature in our long-term roadmap but it is part of an overarching initiative that allows finer-grained authorization for Atlas. It is a project currently in development but will likely not land until mid-to-late next year. Once that lands, we can support more granularity as users desire.For now, users would need to create an Atlas cluster per-app per-tenant. Or potentially, severless could be an option once we release changestreams support (where device sync could integrate), which is targeted for early next year. Hope that helps!", "username": "Kaylee_Won" }, { "code": "", "text": "@Kaylee_Won, thank you! Looking forward to it. 
Knowing this, I’m more confident that we eventually can consolidate our clusters safely.PS: Another idea to consider is letting us create shared environments on top of our dedicated ones, much like you do on the free tier.", "username": "Mikael_Gurenius" } ]
Security concerns of App Services: Team development can't be isolated?
2023-10-20T12:15:06.190Z
Security concerns of App Services: Team development can't be isolated?
337
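Until the finer-grained Atlas authorization mentioned in the thread lands, one way to keep each App Services app inside its own database is the convention sketched above: derive the database name from the app's client ID inside every function. Here is a minimal hedged sketch of such a function, assuming the linked data source is named `mongodb-atlas`; the collection and field names are illustrative.

```js
// Hedged sketch of an App Services function that scopes all reads/writes
// to a database named after the app itself (per-app isolation by convention).
// Assumes the linked data source is called "mongodb-atlas"; the collection
// name "Demo" is illustrative.
exports = async function upsertDemo(payload) {
  const dbName = context.app.clientAppId;          // e.g. "demoapp-xijqb"
  const demo = context.services
    .get('mongodb-atlas')
    .db(dbName)
    .collection('Demo');

  // Every function in the app goes through the same pattern, so nothing
  // accidentally touches another team's database.
  return demo.updateOne(
    { _id: payload._id },
    { $set: payload },
    { upsert: true }
  );
};
```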
null
[]
[ { "code": "", "text": "Hi,I want to update my MongoDB community edition\nI have installed MongoDB 6 in my Linux system using Yum how ic an update this to MongoDB 7 (or any latest version)?I have tried using Yum update but it does not upgrade to Mognodb 7", "username": "Shakir_Idrisi" }, { "code": "sudo yum update mongodb-org", "text": "Hi,A way to do this would be to make sure to create the repo file in the documentation below at /etc/yum.repos.d:\nfollowing the 7.0 configuration - make sure to create copies of your current config files as yum will clear these in the update - once the repo file is created yum will use the latest version when running sudo yum update mongodb-orghope this helps,\nGareth", "username": "Gareth_Furnell" }, { "code": "", "text": "Hi,It means when any new version realsed we need to update repo file to get latest version using yum update?Is there any way which will automatically update repo also without updating manually?", "username": "Shakir_Idrisi" }, { "code": "", "text": "Hi, Unfortunately not -\nWhen doing a yum update, it will link to teh latest version set in the repo file.\nMany checks need to be done before doing an upgrade as a MongoDB Database Administrator - for example making sure the FCV is set after the upgrade, testing and compatability with any drivers that are connected to MongoDB beforehand, therefore pre-checks are essential and the repo files are always pre-written and provided by MongoDB for your specific upgrade/update.Hope this helps.\nGareth", "username": "Gareth_Furnell" } ]
Upgrade mongodb Community Edition from 6 to 7
2023-10-26T11:27:40.036Z
Upgrade mongodb Community Edition from 6 to 7
227
null
[ "compass", "server" ]
[ { "code": "brew services listmongodb-community error 15872 newaccount ~/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plistconnect ECONNREFUSED 127.0.0.1:27017mongod{\"t\":{\"$date\":\"2023-11-03T16:34:27.048+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"thread1\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":21},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":21},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":21},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-11-03T16:34:27.052+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"thread1\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-11-03T16:34:27.094+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"thread1\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2023-11-03T16:34:27.105+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2023-11-03T16:34:27.106+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2023-11-03T16:34:27.106+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"thread1\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2023-11-03T16:34:27.106+01:00\"},\"s\":\"I\", \"c\":\"TENANT_M\", \"id\":7091600, \"ctx\":\"thread1\",\"msg\":\"Starting TenantMigrationAccessBlockerRegistry\"}\n{\"t\":{\"$date\":\"2023-11-03T16:34:27.107+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":16716,\"port\":27017,\"dbPath\":\"/data/db\",\"architecture\":\"64-bit\",\"host\":\"news-MBP\"}}\n{\"t\":{\"$date\":\"2023-11-03T16:34:27.107+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"7.0.2\",\"gitVersion\":\"02b3c655e1302209ef046da6ba3ef6749dd0b62a\",\"modules\":[],\"allocator\":\"system\",\"environment\":{\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2023-11-03T16:34:27.107+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Mac OS X\",\"version\":\"23.0.0\"}}}\n{\"t\":{\"$date\":\"2023-11-03T16:34:27.107+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{}}}\n{\"t\":{\"$date\":\"2023-11-03T16:34:27.110+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":5693100, \"ctx\":\"initandlisten\",\"msg\":\"Asio socket.set_option failed with std::system_error\",\"attr\":{\"note\":\"acceptor TCP fast open\",\"option\":{\"level\":6,\"name\":261,\"data\":\"00 04 00 00\"},\"error\":{\"what\":\"set_option: Invalid argument\",\"message\":\"Invalid argument\",\"category\":\"asio.system\",\"value\":22}}}\n{\"t\":{\"$date\":\"2023-11-03T16:34:27.114+01:00\"},\"s\":\"E\", \"c\":\"CONTROL\", \"id\":20557, \"ctx\":\"initandlisten\",\"msg\":\"DBException in initAndListen, 
terminating\",\"attr\":{\"error\":\"NonExistentPath: Data directory /data/db not found. Create the missing directory or specify another path using (1) the --dbpath command line option, or (2) by adding the 'storage.dbPath' option in the configuration file.\"}}\n{\"t\":{\"$date\":\"2023-11-03T16:34:27.114+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"initandlisten\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":15000}}\n{\"t\":{\"$date\":\"2023-11-03T16:34:27.115+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4794602, \"ctx\":\"initandlisten\",\"msg\":\"Attempting to enter quiesce mode\"}\n{\"t\":{\"$date\":\"2023-11-03T16:34:27.115+01:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":6371601, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FLE Crud thread pool\"}\n{\"t\":{\"$date\":\"2023-11-03T16:34:27.115+01:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MirrorMaestro\"}\n{\"t\":{\"$date\":\"2023-11-03T16:34:27.115+01:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the WaitForMajorityService\"}\n{\"t\":{\"$date\":\"2023-11-03T16:34:27.115+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":20562, \"ctx\":\"initandlisten\",\"msg\":\"Shutdown: going to close listening sockets\"}\n{\"t\":{\"$date\":\"2023-11-03T16:34:27.115+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the global connection pool\"}\n{\"t\":{\"$date\":\"2023-11-03T16:34:27.115+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784906, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FlowControlTicketholder\"}\n{\"t\":{\"$date\":\"2023-11-03T16:34:27.115+01:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20520, \"ctx\":\"initandlisten\",\"msg\":\"Stopping further Flow Control ticket acquisitions.\"}\n{\"t\":{\"$date\":\"2023-11-03T16:34:27.115+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n{\"t\":{\"$date\":\"2023-11-03T16:34:27.115+01:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MigrationUtilExecutor\"}\n{\"t\":{\"$date\":\"2023-11-03T16:34:27.115+01:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"MigrationUtil-TaskExecutor\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2023-11-03T16:34:27.115+01:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784923, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ServiceEntryPoint\"}\n{\"t\":{\"$date\":\"2023-11-03T16:34:27.116+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784928, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the TTL monitor\"}\n{\"t\":{\"$date\":\"2023-11-03T16:34:27.116+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":6278511, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the Change Stream Expired Pre-images Remover\"}\n{\"t\":{\"$date\":\"2023-11-03T16:34:27.116+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784929, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2023-11-03T16:34:27.116+01:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"initandlisten\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2023-11-03T16:34:27.116+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"initandlisten\",\"msg\":\"Now 
exiting\"}\n{\"t\":{\"$date\":\"2023-11-03T16:34:27.116+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":100}}\n", "text": "I can’t use my Compass tool to connect to my MongoDB.I have even uninstalled and reinstalled Mongo-community@7 but it’s still the same errorWhen I run brew services list\nmongodb-community error 15872 newaccount ~/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plistWhen I try to connect from Compass, I get a connect ECONNREFUSED 127.0.0.1:27017 errorI’m using MacOS Sonoma 14. This is frustrating, not sure what to do anymoreWhen I run mongod I get this error", "username": "Emmanuel_Raymond" }, { "code": "mongod/data/dbmongodmongodmongod", "text": "mongod is looking for /data/db to store data. Either create that and make sure the user you run mongod with has permissions to access it or create a folder for data somewhere else and pass it as an argument to mongod (https://www.mongodb.com/docs/manual/reference/program/mongod/#std-option-mongod.--dbpath).Then mongod will start fine and Compass will connect to it.", "username": "Massimiliano_Marcon" }, { "code": "", "text": "Also check this posting", "username": "Jack_Woehr" }, { "code": "", "text": "Where do I create this data/db file. I still haven’t found the solution. Thanks", "username": "Emmanuel_Raymond" }, { "code": "{\"t\":{\"$date\":\"2023-11-03T18:20:27.720+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"thread1\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":21},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":21},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":21},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-11-03T18:20:27.723+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"thread1\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-11-03T18:20:27.724+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"thread1\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2023-11-03T18:20:27.725+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2023-11-03T18:20:27.725+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2023-11-03T18:20:27.725+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"thread1\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2023-11-03T18:20:27.725+01:00\"},\"s\":\"I\", \"c\":\"TENANT_M\", \"id\":7091600, \"ctx\":\"thread1\",\"msg\":\"Starting TenantMigrationAccessBlockerRegistry\"}\n{\"t\":{\"$date\":\"2023-11-03T18:20:27.725+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":24680,\"port\":27017,\"dbPath\":\"/data/db\",\"architecture\":\"64-bit\",\"host\":\"news-MBP\"}}\n{\"t\":{\"$date\":\"2023-11-03T18:20:27.725+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build 
Info\",\"attr\":{\"buildInfo\":{\"version\":\"7.0.2\",\"gitVersion\":\"02b3c655e1302209ef046da6ba3ef6749dd0b62a\",\"modules\":[],\"allocator\":\"system\",\"environment\":{\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2023-11-03T18:20:27.725+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Mac OS X\",\"version\":\"23.0.0\"}}}\n{\"t\":{\"$date\":\"2023-11-03T18:20:27.725+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{}}}\n{\"t\":{\"$date\":\"2023-11-03T18:20:27.726+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":5693100, \"ctx\":\"initandlisten\",\"msg\":\"Asio socket.set_option failed with std::system_error\",\"attr\":{\"note\":\"acceptor TCP fast open\",\"option\":{\"level\":6,\"name\":261,\"data\":\"00 04 00 00\"},\"error\":{\"what\":\"set_option: Invalid argument\",\"message\":\"Invalid argument\",\"category\":\"asio.system\",\"value\":22}}}\n{\"t\":{\"$date\":\"2023-11-03T18:20:27.727+01:00\"},\"s\":\"E\", \"c\":\"CONTROL\", \"id\":20568, \"ctx\":\"initandlisten\",\"msg\":\"Error setting up listener\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"setup bind :: caused by :: Address already in use\"}}}\n{\"t\":{\"$date\":\"2023-11-03T18:20:27.727+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"initandlisten\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":15000}}\n{\"t\":{\"$date\":\"2023-11-03T18:20:27.730+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4794602, \"ctx\":\"initandlisten\",\"msg\":\"Attempting to enter quiesce mode\"}\n{\"t\":{\"$date\":\"2023-11-03T18:20:27.730+01:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":6371601, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FLE Crud thread pool\"}\n{\"t\":{\"$date\":\"2023-11-03T18:20:27.730+01:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MirrorMaestro\"}\n{\"t\":{\"$date\":\"2023-11-03T18:20:27.730+01:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the WaitForMajorityService\"}\n{\"t\":{\"$date\":\"2023-11-03T18:20:27.730+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the global connection pool\"}\n{\"t\":{\"$date\":\"2023-11-03T18:20:27.730+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n{\"t\":{\"$date\":\"2023-11-03T18:20:27.730+01:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MigrationUtilExecutor\"}\n{\"t\":{\"$date\":\"2023-11-03T18:20:27.731+01:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"MigrationUtil-TaskExecutor\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2023-11-03T18:20:27.732+01:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784923, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ServiceEntryPoint\"}\n{\"t\":{\"$date\":\"2023-11-03T18:20:27.732+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784928, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the TTL monitor\"}\n{\"t\":{\"$date\":\"2023-11-03T18:20:27.732+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":6278511, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the Change Stream 
Expired Pre-images Remover\"}\n{\"t\":{\"$date\":\"2023-11-03T18:20:27.732+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784929, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2023-11-03T18:20:27.732+01:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"initandlisten\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2023-11-03T18:20:27.732+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"initandlisten\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2023-11-03T18:20:27.732+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":48}}\n", "text": "I created /data/db under /Users/{user}/data/dbwhen I ran mongod again, I got this", "username": "Emmanuel_Raymond" }, { "code": "Address already in use", "text": "You probably already have another mongod running. Address already in use means the port is already being used.", "username": "Massimiliano_Marcon" } ]
I can't connect to my MongoDB using Compass
2023-11-03T14:48:44.140Z
I can't connect to my MongoDB using Compass
206
https://www.mongodb.com/…175628fd443a.png
[]
[ { "code": "", "text": "image785×538 23.1 KB\nin the above images , im trying to export the collection , but it seems like email field is not showing in the list, so how do i get this work done? Even i tried adding feild explicity , but it does’nt allow me to checkbox it.\nKindly help me", "username": "Abhishek_Chaurasia1" }, { "code": "", "text": "Hi @Abhishek_Chaurasia1 and welcome to MongoDB community forums!!In order to understand further, could you help me with some information regarding the issue that you are observing.Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "before\n{ “_id”: { “$oid”: “64146260dbe8d66a58b55777” }, “ride_id”: “m_bbdc5fc7_89f6_4b56_98e1_becc9b9888c9”, “username”: “XCZJCH5”, “__v”: 0, “category”: “SEDAN”, “category_code”: 43, “city_code”: 9, “createdAt”: { “$date”: { “$numberLong”: “1679057504897” } }, “date”: “2023-03-17”, “destination”: { “title”: “TOWER-A, BRIGADE METROPOLIS, Brigade Metropolis, Garudachar Palya, Mahadevapura, Bengaluru, Karnataka 560048, India”, “lat”: 12.9931492, “long”: 77.70354979999999 }, “distance”: “41.1”, “locality_id”: “692”, “message”: null, “source”: { “title”: “Kempegowda International Airport Bengaluru (BLR), KIAL Rd, Devanahalli, Bengaluru, Karnataka 560300, India”, “lat”: 13.1995486, “long”: 77.70017910000001 }, “sub_trip_type”: “3600”, “time”: “10:45 PM”, “trip_type”: “local”, “updatedAt”: { “$date”: { “$numberLong”: “1679057504897” } }}after adding email field\n{ “_id”: { “$oid”: “65432bacdbe8d66a5848ffa0” }, “ride_id”: “m_d7c5a7ad_b7c3_44fa_a845_0128cce6f80a”, “username”: “HYZIGSL”, “__v”: 0, “category”: “”, “category_code”: null, “city_code”: 24, “createdAt”: { “$date”: { “$numberLong”: “1698900908741” } }, “date”: “2023-11-10”, “destination”: { “title”: “Srinivasa Murthy Ave, Baktavatsalm Nagar, Adyar, Chennai, Tamil Nadu 600020, India”, “lat”: 13.0054857, “long”: 80.2562105 }, “distance”: “13.6”, “email”: “jaysh21@gmail.com”, “locality_id”: null, “message”: null, “source”: { “title”: “Chennai International Airport (MAA), Airport Rd, Meenambakkam, Chennai, Tamil Nadu 600027, India”, “lat”: 12.9805163, “long”: 80.1598454 }, “sub_trip_type”: “pick_airport”, “time”: “12:00 am”, “trip_type”: “airport”, “updatedAt”: { “$date”: { “$numberLong”: “1698900909107” } }}currently am using mongoDB compass version 1.31.2", "username": "Abhishek_Chaurasia1" }, { "code": "{\n \"ride_id\": \"m_d7c5a7ad_b7c3_44fa_a845_0128cce6f80a\",\n \"username\": \"HYZIGSL\",\n \"__v\": 0,\n \"category\": \"\",\n \"category_code\": null,\n \"city_code\": 24,\n \"createdAt\": {\n \"$date\": {\n \"$numberLong\": \"1698900908741\"\n }\n },\n \"date\": \"2023-11-10\",\n \"destination\": {\n \"title\": \"Srinivasa Murthy Ave, Baktavatsalm Nagar, Adyar, Chennai, Tamil Nadu 600020, India\",\n \"lat\": 13.0054857,\n \"long\": 80.2562105\n },\n \"distance\": \"13.6\",\n \"email\": \"jaysh21@gmail.com\",\n \"locality_id\": null,\n \"message\": null,\n \"source\": {\n \"title\": \"Chennai International Airport (MAA), Airport Rd, Meenambakkam, Chennai, Tamil Nadu 600027, India\",\n \"lat\": 12.9805163,\n \"long\": 80.1598454\n },\n \"sub_trip_type\": \"pick_airport\",\n \"time\": \"12:00 am\",\n \"trip_type\": \"airport\",\n \"updatedAt\": {\n \"$date\": {\n \"$numberLong\": \"1698900909107\"\n }\n }\n}\n", "text": "Hi @Abhishek_Chaurasia1\nThank you for sharing the information.The version you have mentioned appears to be outdated. 
The latest stable version of MongoDB Compass is 1.40.4Could you please upgrade and let us know if the issue still persists.In saying so, I tried to replicate by inserting the below JSON:and I was successfully able to see the email field in my exported file.Regards\nAasawari", "username": "Aasawari" } ]
Email field is not showing while exporting the collection
2023-10-31T06:06:56.021Z
Email field is not showing while exporting the collection
209
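Besides upgrading Compass, it can help to confirm in mongosh how widely the field is actually populated before exporting, since older documents in the thread predate the `email` field. A hedged mongosh sketch follows; the collection name `rides` is an assumption.

```js
// Hedged mongosh sketch: check how widely the `email` field is populated
// before exporting. The collection name "rides" is an assumption.
const total = db.rides.countDocuments({});
const withEmail = db.rides.countDocuments({ email: { $exists: true } });
print(`email present in ${withEmail} of ${total} documents`);

// Documents that predate the field and would export with an empty column:
db.rides.find({ email: { $exists: false } }, { ride_id: 1, createdAt: 1 }).limit(5);
```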
null
[ "java", "spring-data-odm" ]
[ { "code": "{\n \"_id\": \"531515\",\n \"_class\": \"coodel\",\n \"noteID\": 8\n \"clientID\": 3,\n \"userID\": 8,\n \"userTypeID\": 4,\n \"userRoleName\": \"Patient\",\n \"userFirstName\": \"\",\n \"userMiddleName\": \"C\",\n \"userLastName\": \"\",\n \"creatorID\": 46,\n \"creatorFirstName\": \"\",\n \"creatorMiddleName\": \"\",\n \"creatorLastName\": \"\",\n \"creatorTypeID\": 6,\n \"creatorRoleName\": \"NURSE\",\n \"createdByUserName\": \"@.com\",\n \"createdDate\": {\n \"$numberLong\": \"1698417636801\"\n },\n \"createdDateTime\": {\n \"$date\": \"2023-10-27T14:40:36.801Z\"\n },\n \"modifierID\": 46,\n \"modifierFirstName\": \"\",\n \"modifierMiddleName\": \"\",\n \"modifierLastName\": \"Roers\",\n \"modifierTypeID\": 6,\n \"modifierRoleName\": \"NURSE\",\n \"modifiedByUserName\": \"mro.com\",\n \"modifiedDate\": {\n \"$numberLong\": \"1698417636801\"\n },\n \"modifiedDateTime\": {\n \"$date\": \"2023-10-27T14:40:36.801Z\"\n },\n \"calendarEventID\": 0,\n \"active\": true,\n \"parentNoteID\": 0,\n \"rootNoteID\": 531515,\n \"important\": false,\n \"noteType\": \"CLINICALNOTE\",\n \"clinicalNote\": {\n \"templateName\": \"physician\",\n \"templateID\": 1,\n \"questions\": [\n {\n \"type\": \"radiogroup\",\n \"questionID\": 1,\n \"title\": \"Why dall code red?\",\n \"isRequired\": true,\n \"answer\": \"Other\",\n \"optionalText\": \"\",\n \"choices\": [\n {\n \"value\": \"item1\",\n \"text\": \"Alert\",\n \"inputType\": \"radio\",\n \"choiceOrder\": 1,\n \"optionalText\": false\n },\n {\n \"value\": \"item2\",\n \"text\": \"Pt's request\",\n \"inputType\": \"radio\",\n \"choiceOrder\": 2,\n \"optionalText\": false\n },\n {\n \"value\": \"item3\",\n \"text\": \"Other\",\n \"inputType\": \"radio\",\n \"choiceOrder\": 3,\n \"optionalText\": true\n }\n ]\n },\n {\n \"type\": \"radiogroup\",\n \"questionID\": 2,\n \"title\": \"Who acced?\",\n \"isRequired\": true,\n \"answer\": \"Pt\",\n \"optionalText\": \"\",\n \"choices\": [\n {\n \"value\": \"item1\",\n \"text\": \"Pt\",\n \"inputType\": \"radio\",\n \"choiceOrder\": 1,\n \"optionalText\": false\n },\n {\n \"value\": \"item2\",\n \"text\": \"Aide\",\n \"inputType\": \"radio\",\n \"choiceOrder\": 2,\n \"optionalText\": false\n },\n {\n \"value\": \"item3\",\n \"text\": \"Other\",\n \"inputType\": \"radio\",\n \"choiceOrder\": 3,\n \"optionalText\": true\n }\n ]\n },\n {\n \"type\": \"radiogroup\",\n \"questionID\": 5,\n \"title\": \"Any couired?\",\n \"isRequired\": true,\n \"answer\": \"None required\",\n \"optionalText\": \"\",\n \"choices\": [\n {\n \"value\": \"item1\",\n \"text\": \"None required\",\n \"inputType\": \"radio\",\n \"choiceOrder\": 1,\n \"optionalText\": false\n },\n {\n \"value\": \"item2\",\n \"text\": \"Yes\",\n \"inputType\": \"radio\",\n \"choiceOrder\": 2,\n \"optionalText\": true\n }\n ]\n }\n ]\n },\n \"hasChild\": false,\n \"sendNoteStatus\": 0\n}\n{\n \"_id\": \"\",\n \"noteID\": ,\n \"clientID\": ,\n \"userID\": 9,\n \"userTypeID\": 4,\n \"userRoleName\": \"Patient\",\n \"userFirstName\": \"\",\n \"userMiddleName\": \"\",\n \"userLastName\": \"Smith\",\n \"creatorID\": 46,\n \"creatorFirstName\": \"Mike\",\n \"creatorMiddleName\": \"\",\n \"creatorLastName\": \"\",\n \"creatorTypeID\": 6,\n \"creatorRoleName\": \"NURSE\",\n \"createdByUserName\": \"m\",\n \"createdDate\": {\n \"$numberLong\": \"1699235118743\"\n },\n \"createdDateTime\": {\n \"$date\": \"2023-11-06T01:45:18.743Z\"\n },\n \"modifierID\": 46,\n \"modifierFirstName\": \"\",\n \"modifierMiddleName\": \"\",\n \"modifierLastName\": \"\",\n 
\"modifierTypeID\": 6,\n \"modifierRoleName\": \"NURSE\",\n \"modifiedByUserName\": \"om\",\n \"modifiedDate\": {\n \"$numberLong\": \"1699235118743\"\n },\n \"modifiedDateTime\": {\n \"$date\": \"2023-11-06T01:45:18.743Z\"\n },\n \"calendarEventID\": 0,\n \"active\": true,\n \"parentNoteID\": 0,\n \"rootNoteID\": 8,\n \"important\": false,\n \"noteType\": \"CN\",\n \"clinicalNote\": {\n \"templateName\": \"-in\",\n \"templateID\": zz,\n \"migratedToJS\": false,\n \"asurveyResponse\": {\n \"responseObject\": {\n \"question1\": {\n \"value\": \"item2\",\n \"itemComment\": \"\"\n },\n \"question4\": [\n {\n \"value\": \"item6\",\n \"itemComment\": \"\"\n }\n ],\n \"question7\": [\n {\n \"value\": \"item3\",\n \"itemComment\": \"\"\n }\n ]\n }\n },\n \"key\": false\n },\n \"hasChild\": false,\n \"sendNoteStatus\": 0,\n \"_class\": \"com.mModel\"\n}\n", "text": "Hello,We are inserting a mongoDB document via POST API. When the server side is returning sucess to the API, we are then doing a GET API to search for all documents present in the month.post notes succes at 1699236425746\ngetall notes called at 1699236425747The GET notes is being called in the next millisecond after the POST API has returned into the success handler.The issue is that we are get all documents in the present month but unable to get the document just inserted. If we call a GET API again or add a 400 ms delay to the GET API, it is able to also bring in this document that is just inserted.Our POST API is not running asynchronously i.e it is a blocking thread until the insert into mongoDB database is done. We use Java MongoTemplate API class of spring-data-mongoDb library. We are not adding any @Transaction to the Java annotation. We are merely letting mongoDBTemplate call save on its API method and letting it take care of anything(and everything).This was not happening before. We added a new subdocument to ClinicalNote object called asurveyResponse. Sharing before and after document structureOur document structure Before when there was no issue in GET API after receiving POST success\nBEFOREAFTER", "username": "Rickson_Menezes1" }, { "code": "", "text": "are you using replication?\nwhat’s your read concern and read preference?", "username": "Kobe_W" } ]
Inserted document not available immediately for query
2023-11-06T02:34:54.916Z
Inserted document not available immediately for query
110
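The follow-up question about replication, read concern and read preference matters because a secondary read (or a weak read concern) can lag a just-acknowledged write. The sketch below is a hedged Node.js driver illustration of read-your-own-writes using one causally consistent session and majority concerns; the database, collection and field names are illustrative, and since the original app is Spring/Java this only shows the shape of the fix, not the poster's code.

```js
// Hedged Node.js sketch: make the follow-up read observe the preceding insert.
// Names ("app", "notes") are illustrative; the thread's app is Spring/Java.
const { MongoClient } = require('mongodb');

async function insertThenRead(uri, note) {
  const client = new MongoClient(uri);
  await client.connect();
  const session = client.startSession(); // causally consistent by default
  try {
    const notes = client.db('app').collection('notes');

    // Acknowledge the write on a majority of the replica set...
    await notes.insertOne(note, { session, writeConcern: { w: 'majority' } });

    // ...and read at majority in the same session, so the read is guaranteed
    // to see the write even if it is routed to a lagging member.
    return await notes
      .find({ userID: note.userID }, { session, readConcern: { level: 'majority' } })
      .toArray();
  } finally {
    await session.endSession();
    await client.close();
  }
}
```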
null
[]
[ { "code": "", "text": "I have an application that can restart frequently due to various reasons. Does that impact the MongoDB cluster if the application fails to close the database connection upon exiting each time?", "username": "kris_sgd" }, { "code": "", "text": "If your app is shut down cleanly, no problem. Otherwise you have to rely on connection idle timeout.", "username": "Kobe_W" } ]
Does not closing DB connection lead to cluster instability?
2023-11-05T03:46:27.477Z
Does not closing DB connection lead to cluster instability?
113
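Even though an unclean exit is eventually cleaned up by the server-side idle timeout, closing the client on shutdown signals avoids leaving stale connections around during frequent restarts. A minimal hedged Node.js sketch, assuming a single shared MongoClient:

```js
// Hedged Node.js sketch: close the shared client when the process is asked
// to stop, so frequent restarts do not leave half-open connections behind.
const { MongoClient } = require('mongodb');

const client = new MongoClient(process.env.MONGODB_URI);

async function shutdown(signal) {
  console.log(`${signal} received, closing MongoDB connections...`);
  try {
    await client.close();
  } finally {
    process.exit(0);
  }
}

process.on('SIGINT', () => shutdown('SIGINT'));
process.on('SIGTERM', () => shutdown('SIGTERM'));
```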
https://www.mongodb.com/…b_2_1024x375.png
[ "ops-manager" ]
[ { "code": "", "text": "I have configured 6.0.0 Ops manager and mongodb 6.0.0 Database in kubernetes in GKE everything is working well but i am not able to see data its saying “An error occurred while querying your MongoDB deployment. Please try again in a few minutes.”Can anybody help\nnew1086×398 20.8 KB", "username": "Piyush_Shailly1" }, { "code": "", "text": "Hi @Piyush_Shailly1 and welcome to the community!\nCan you try to:If the problem persist contact the support.Regards", "username": "Fabio_Ramohitaj" }, { "code": "", "text": "@Fabio_Ramohitaj This didn’t worked I am running on On GKE Kubernetes cluster. And i have enterprise edition of 6.0.0. Myopsmanager pod and backup pod is restarted do i need to restart operator pod also ?? and how to connect with support", "username": "Piyush_Shailly1" }, { "code": "", "text": "recieving this error 2023-11-05T18:08:05.352+0000 [JettyHttpPool-339] gid:6541e604c7a0374ab55d96f7 INFO com.xgen.svc.mms.res.filter.CheckRequiredUserRoleFilter [CheckRequiredUserRoleFilter.java:filter:46] - Failed authorization attempt for username=xyz, requiredRoles=[GROUP_DATA_ACCESS_ANY], ipAddress=10.10.100.1", "username": "Piyush_Shailly1" }, { "code": "", "text": "Check the Ops Manager user has the correct roles to use data explorer.", "username": "chris" } ]
Need Urgent Help
2023-11-05T09:21:17.398Z
Need Urgent Help
139
null
[ "aggregation" ]
[ { "code": " {\n count: 6,\n totalValue: new Decimal128(\"3.82\"),\n courier: 'Name Surname',\n zipcode: '12345'\n },\n {\n count: 1137,\n totalValue: new Decimal128(\"1196.109999999999999822364316059975\"),\n courier: 'Name Surname',\n zipcode: '12350'\n },\n{\n \"$group\": {\n \"_id\": {\n \"zipcode\": \"$address.zipcode\",\n \"courierId\": \"$bag.courier.userId\",\n \"name\": \"$bag.courier.name\"\n },\n \"count\": {\n \"$sum\": 1\n },\n \"totalValue\": {\n \"$sum\": \"$priceData.price\"\n }\n }\n }, \n$priceData.price", "text": "Hello,I was aggregating some data using the group stage and I have noticed that some result sets have obtained very long decimal point sequences despite being stored as Decimal128 values:The group stage:$priceData.price is stored as Decimal128 on the database. Could you tell me what could be the cause of the issue and how to resolve it, please? Thank you", "username": "Vladimir" }, { "code": "$sum", "text": "Hi @VladimirIt may pay to check your dataset as something other than Decimal128 may have been inserted leading to a loss of precision in the $sumSee this playground with mixed types and the resulting aggregation:Mongo playground: a simple sandbox to test and share MongoDB queries onlineVs this one filtering only the decimal128 values:Mongo playground: a simple sandbox to test and share MongoDB queries online", "username": "chris" }, { "code": "{ $priceData.price: { $not : { $type : 19 } } }$round: [\"$price\", 2]$project", "text": "That’s a very nice catch, thank you. Running the following query:{ $priceData.price: { $not : { $type : 19 } } }Revealed some old documents that have the price value stored as strings instead of Decimal128.What do you think would be the right course of action? 2 ideas come to mind:Repair the incorrect fields by running an update query and converting them to the correct type (string → Decimal128). However, I am not sure what the correct and safe update query could be?As an immediate solution, I have used the $round: [\"$price\", 2] operator in the $project stage to reduce the decimal points. What are your thoughts on that?Again, thank you very much", "username": "Vladimir" }, { "code": "$toDecimal$round$merge$convert$cond$round: [\"$price\", 2]$project", "text": "Revealed some old documents that have the price value stored as strings instead of Decimal128.Due to type bracketing they won’t be included in the sum, it would be another number type. Another number type would create this result or a decimal created by passing a long as an argument(this was accpted in earlier versions).What do you think would be the right course of action?If there is no use case for the data being of the unexpected decimal then I would suggest updating it to the expected decimal with correct precision.However, I am not sure what the correct and safe update query could be?Could be as simple as using $toDecimal, $round and $merge or it could be more complex if there are strings or null present in which case $convert and some $cond might have to e used.As an immediate solution, I have used the $round: [\"$price\", 2] operator in the $project stage to reduce the decimal points. What are your thoughts on that? ", "username": "chris" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Decimal128 addition during the group stage results in a lot of decimal numbers
2023-10-31T17:32:37.307Z
Decimal128 addition during the group stage results in a lot of decimal numbers
182
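To follow the advice above end to end, the stray string prices can be located and converted in place with an update pipeline, so future $group stages sum only Decimal128 values. A hedged mongosh sketch follows; the collection name `orders` is an assumption, and the conversion deliberately rounds to two decimal places.

```js
// Hedged mongosh sketch: find and repair price values that were stored as
// strings instead of Decimal128. Collection name "orders" is an assumption.

// 1. Inspect which BSON types are actually present in the field.
db.orders.aggregate([
  { $group: { _id: { $type: "$priceData.price" }, count: { $sum: 1 } } }
]);

// 2. Convert the string values in place with an update pipeline (4.2+),
//    rounding to two decimal places to match the expected precision.
db.orders.updateMany(
  { "priceData.price": { $type: "string" } },
  [
    {
      $set: {
        "priceData.price": { $round: [{ $toDecimal: "$priceData.price" }, 2] }
      }
    }
  ]
);
```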
null
[ "swift", "transactions" ]
[ { "code": "let dog = Dog()\ndog.name = \"Rex\"\ndog.age = 10\n\nwriteRealm { realm in\n realm.add(dog)\n}\nwriteRealm@RealmWriteActor func writeRealm<T>(_ block: @Sendable (Realm) throws -> T) throws -> T {\n try autoreleasepool {\n let realm = try Realm(configuration: ...)\n defer {\n realm.invalidate()\n }\n return try realm.write {\n try block(realm)\n }\n }\n}\nDogSendableThreadSafeThreadSafeReferencedog@unchecked SendablewriteRealm", "text": "The app I’m working on runs Realm writes serially using a global actor. To minimize contention, we try to keep the actual write transactions as focused as possible. One technique we’ve been using is to create unmanaged Realm objects off of the write actor and then capture references to those objects in a closure that gets executed on the actor:writeRealm looks like this:We’re running into trouble with combining this approach with Strict Concurrency Checking since Dog is not Sendable. We’ve explored a few options, but so far haven’t found anything satisfactory:What other techniques would you recommend trying, and what ideas do you have for how Realm could better support Strict Concurrency Checking in the future?", "username": "hershberger" }, { "code": "", "text": "I kind of see what you’re asking but it’s not clear what the specific issue is. e.g. you want to minimize contention so you’re focusing the writes. Is that because you’re writing a large amount of data at once? Or is that because data is being accessed while the write is occurring? Is there a UI issue? Or something else?The code in the question doesn’t provide an example of why you’re running into trouble - e.g. what is the trouble?Can you explain the use case a bit so we can evaluate it a bit further and maybe come up with some suggestions?", "username": "Jay" }, { "code": "Object@SendableObjectwriteRealm@SendabledogDogSendablelet dog = writeRealm { realm in\n return Dog()\n}\nDogwriteRealm/// implemented by Dog\nprotocol UnmanagedCopyable {\n func makeUnmanagedCopy() -> Self\n}\n\nfinal class UnmanagedRealmObject<T>: @unchecked Sendable where T: UnmanagedCopyable {\n init(object: T) {\n // make an unmanaged copy at init so that we'll have our own private\n // copy that no other code can modify\n unmanagedObject = object.makeUnmanagedCopy()\n }\n\n // disallow direct access to our copy\n private let unmanagedObject: T\n\n // vend copies rather than the original so that we can guarantee that `unmanagedObject`\n // remains unmanaged\n var object: T {\n unmanagedObject.makeUnmanagedCopy()\n }\n}\nlet dog = Dog()\ndog.name = \"Rex\"\ndog.age = 10\n\nlet unmanaged = UnmanagedRealmObject(object: dog)\n\nwriteRealm { realm in\n realm.add(unmanaged.object)\n}\nlet dog = writeRealm { realm in\n return UnmanagedRealmObject(object: realm.objects(Dog.self).first!)\n}.object\nDogwriteRealm { realm in\n let dog = Dog()\n dog.name = \"Rex\"\n dog.age = 10\n realm.add(dog)\n}\nDogDogRealmWriteActor", "text": "Hi @Jay, thanks for the reply.The problem we’re running into is that when we enable Strict Concurrency Checking, we get compile-time warnings when we capture instances of Realm Object subclasses in @Sendable closures or return instances of Realm Object subclasses from actor-isolated methods to non-isolated concurrency contexts.In the example I shared, the closure passed to writeRealm is @Sendable, so when it captures dog, the compiler emits a warning since Dog is not Sendable.Similarly, if we were to write this:we would also get a warning since an instance of Dog is returned from actor-isolated 
writeRealm to a non-isolated context.We explored avoiding these errors using a wrapper like this:This resulted in code like this:andWhile this helps to avoid the warnings, it introduces overhead at runtime (due to the need to make extra copies of the realm objects) and undesirable boilerplate to the code.The naïve solution to the first scenario would be to instantiate Dog inside of the closure:This might be fine for simple cases, but in some situations, we may need to do expensive computation to obtain the values we need to initialize Dog and/or create many Dogs in a single transaction. In either case, we’d prefer not to block the RealmWriteActor while doing that work so that other writes will not be held up.Furthermore, this approach doesn’t solve the second scenario where we want to return an unmanaged Realm object from an actor-isolated method to a non-isolated context.Since the Swift team has said that Strict Concurrency Checking will be on by default in Swift 6, we are thinking ahead about what changes we’ll need to make to our code, including figuring out how to best use Realm in conjunction with these new compile-time checks.", "username": "hershberger" }, { "code": "", "text": "Objects are not meant to be sendable, as objects are not thread/task safe. The way to cross the actor boundary with objects is to first grab a ThreadSafeReference to them and then pass them to your actor.However, your point about unmanaged objects is accurate. We’ll need to discuss as a team how to better solve this use case, as unmanaged objects should be considered sendable.", "username": "Jason_Flax" }, { "code": "Object", "text": "I am probably misunderstanding the use case here, so feel free to fill in the blanks…return instances of Realm Object subclasses from actor-isolated methods to non-isolated concurrency contexts.Realm objects are thread confined - once an object is on a thread it “can’t be passed” to other threads and are only valid on the thread they were created. (ignoring sharing objects across threads for the moment)It almost seems like the Realm objects created within an actor (these are thread isolated objects) are being passed out of the actor to be used elsewhere. Wouldn’t that process go against Realm’s live object functionality?So using that object within the actor that created it is cool - but passing it in or out would not be.But - there’s the passing the unmanaged objects - those are not thread confined and really, you can do anything you want with them - they may as well be Swift objects at that point. They are totally independent objects - they just have duplicate property values.we want to return an unmanaged Realm object from an actor-isolated method to a non-isolated contextSo… why can’t you do that?My other thought is that unmanaged objects share nothing in common with the object it came from, other than duplicate properties. They have their own discreet memory space so it’s not clear how changing one of them would affect any other object.However, that also means if you’re implementing primary keys - you’ve got 8 unmanaged objects running around with the same primary key… and that’s trouble as they are really 8 independent objects.So. now that I said that… I understand the error but perhaps going against or around the core way Realm works is part of the issue?This is a super interesting question but I am still not clear on what the actual problem is - e.g. 
If you try to pass an object to another object that doesn’t conform to a protocol that accepts it, the compiler will kick it back. That sounds kinda like what’s going on here. (maybe not)?", "username": "Jay" }, { "code": "", "text": "@Jason_Flax @Jay thanks again for the replies!But - there’s the passing the unmanaged objects - those are not thread confined and really, you can do anything you want with them - they may as well be Swift objects at that point. They are totally independent objects - they just have duplicate property values.Exactly! We have situations where we need a long-lived copy of a value from Realm, so to avoid pinning, we make an unmanaged copy (instead of using ThreadSafe/ThreadSafeReference).So… why can’t you do that?Returning unmanaged copies of Realm objects from an actor-isolated context to a non-isolated context isn’t a problem at runtime. It works just fine! The problem is that at compile-time, the fact that it’s unmanaged isn’t represented by the type system, so Swift can’t tell that it’s actually safe for the unmanaged copy to cross that boundary. Therefore it emits a warning.", "username": "hershberger" }, { "code": "", "text": "as unmanaged objects should be considered sendable@Jason_Flax is there a specific reason that unmanaged objects are not sendable? Or it just hasn’t been implemented yet?@hershberger I see. So it appears your app is heavily dependent on unmanaged objects. Is there a reason you’re using Realm objects for that task? It seems like using Swift objects in that context would be ideal and then when you need the data persisted, cast to a Realm object to back the Swift objects.I was thinking it would be a akin to having a JSON managed datastore, and that data is then populated into Swift objects to be passed, massaged etc and then coded back to JSON for storage. Just thinking outside the box here.", "username": "Jay" }, { "code": "", "text": "It seems like using Swift objects in that context would be ideal and then when you need the data persisted, cast to a Realm object to back the Swift objects.This would definitely work. The only downside I see is that we’d need to maintain a duplicate class/struct definition.", "username": "hershberger" }, { "code": "", "text": "Unmanaged objects are not Sendable because unmanaged and managed objects are the same type, for the sake of user convenience.Duplicate class/struct definitions is a decent temporary solution for now.", "username": "Jason_Flax" }, { "code": "", "text": "We had the same issue with the annoying warnings because the compiler sees the unmanaged objects as not Sendable… However, this now became a way bigger problem as starting from iOS 17 the app crashes when we pass an unmanaged object accross actors…“Precondition failed: Incorrect actor executor assumption; Expected ‘UnownedSerialExecutor(executor: (Opaque Value))’ executor.”", "username": "David_Kessler" } ]
Strict Concurrency Checking
2023-02-06T16:12:43.211Z
Strict Concurrency Checking
1,169
null
[ "replication" ]
[ { "code": "", "text": "I have M80 atlas cluster(128GB RAM and 32vCPUs) with 3 replicasets, most of the time my system use around 10% of RAM and 10% of CPU, during high load CPU spikes more than 100% leading to autoscaling of cluster, I’m trying to cut the cost, i noticed some indexes are missing and i added them, i noticed only primary is getting utilised so if i start reading from secondaries, can I reduce the CPU usage or 32 vCPUs are actually shared across 3 replicas and i wont see much diffrence?", "username": "MAHENDRA_HEGDE" }, { "code": "", "text": "Hi @MAHENDRA_HEGDE,\nTo take away any doubt, you can connect to a single host and use the command:Based on the result of the output, you can make decisions.Regards", "username": "Fabio_Ramohitaj" } ]
Atlas replicaset resource usage
2023-11-05T15:00:22.980Z
Atlas replicaset resource usage
107
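On an Atlas replica set each member runs on its own host, so the 32 vCPUs are per node rather than shared; routing read-heavy or analytical traffic to secondaries can therefore take real load off the primary. Below is a hedged Node.js sketch of that routing via the connection string; the hostname, credentials and query are placeholders, and slightly stale reads must be acceptable for this to be safe.

```js
// Hedged sketch: direct reads to secondary members so the primary's CPU is
// reserved for writes. Hostname and credentials are placeholders.
// Note: secondary reads can be slightly stale — only use where that is OK.
const { MongoClient } = require('mongodb');

const uri =
  'mongodb+srv://user:pass@cluster0.example.mongodb.net/?retryWrites=true' +
  '&readPreference=secondaryPreferred&maxStalenessSeconds=120';

async function countRecentOrders() {
  const client = new MongoClient(uri);
  await client.connect();
  try {
    // This query is served by a secondary when one is available.
    return await client.db('shop').collection('orders')
      .countDocuments({ createdAt: { $gte: new Date(Date.now() - 3600_000) } });
  } finally {
    await client.close();
  }
}
```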
null
[ "java" ]
[ { "code": "com.allanbank.mongodb.error.ReplyException: Unsupported OP_QUERY command: insert. The client driver may require an upgrade. For more details see https://dochub.mongodb.org/core/legacy-opcode-removal\n\tat com.allanbank.mongodb.util.FutureUtils.unwrap(FutureUtils.java:57)\n\tat com.allanbank.mongodb.client.SynchronousMongoCollectionImpl.insert(SynchronousMongoCollectionImpl.java:687)\n\tat com.allanbank.mongodb.client.SynchronousMongoCollectionImpl.insert(SynchronousMongoCollectionImpl.java:722)\n\tat site.ycsb.db.AsyncMongoDbClient.insert(AsyncMongoDbClient.java:272)\n\tat site.ycsb.DBWrapper.insert(DBWrapper.java:221)\n\tat site.ycsb.workloads.CoreWorkload.doInsert(CoreWorkload.java:601)\n\tat site.ycsb.ClientThread.run(ClientThread.java:135)\n\tat java.lang.Thread.run(Thread.java:750)\n", "text": "Hi,Has anyone used ycsb with MongoDB 6.x? The latest ycsb failed to make connection to the database with the following error. It seems like the mongodb java driver bundled in ycsb is not updated to the version 6? Has anyone run into it and how do you resolve this? I’m thinking I can just install the latest mongodb java driver but have no experience in java development and I’m looking for some guidance/suggestions.Thanks", "username": "Henry_Wong1" }, { "code": "", "text": "Which YCSB repository/branch are you using? The one that MongoDB employees generally use is GitHub - mongodb-labs/YCSB: This Repository is NOT a supported MongoDB product.", "username": "Jeffrey_Yemin" }, { "code": "OP_QUERY", "text": "OP_QUERY is a legacy opcode and was deprecated with MongoDB version 5.0 and removed at 5.1 and onward.unfortunately, the latest commits of “brianfrankcooper/YCSB” and “mongodb-labs/YCSB” are dated long before 5.0 was out, and no activity since then.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Hello,Is there any update on the subject? is YCSB used with mongo 6+?", "username": "Roey_Maor" }, { "code": "mongodb/src/main/java/site/ycsb/db/AsyncMongoDbClient.java\n", "text": "@Roey_Maor YCSB is itself not active for many years now. and unfortunately, it uses even much older allanbank/mongodb-async-driver: The MongoDB Asynchronous Java Driver. (github.com)However, if you know Java then you can edit this file in the sourceupdate it to use the new MongoDB Java Reactive Streams Driver, the official MongoDB driver for asynchronous Java applications. Follow the documentation to update other parts such as maven files.", "username": "Yilmaz_Durmaz" } ]
YCSB and MongoDB 6
2022-08-08T19:37:45.581Z
YCSB and MongoDB 6
2,312
https://www.mongodb.com/…8_2_1023x333.png
[]
[ { "code": "", "text": "I am facing this error when using MongoDB Enterprise Operator in Openshift. Real-time, metrics, Profile tab are working fine.\nAnyone met this issue?\n\nimage1858×605 34.9 KB\n", "username": "Vinh_Nguyen_Duc" }, { "code": "", "text": "did you got any solution for this", "username": "Piyush_Shailly1" } ]
Data Explorer An error occurred while querying your MongoDB deployment. Please try again in a few minutes
2023-06-02T04:22:41.315Z
Data Explorer An error occurred while querying your MongoDB deployment. Please try again in a few minutes
835
https://www.mongodb.com/…_2_1024x576.jpeg
[ "sharding", "bengaluru-mug" ]
[ { "code": "", "text": "Bangalore MUG1920×1080 119 KB2023-11-25T04:00:00Z→2023-11-25T07:30:00Z Hey Everyone! We invite you to our next MongoDB User Group Meetup in Bengaluru. Explore MongoDB Replica Sets, Search Capabilities, and Design Best Practices. Event Highlights: Transitioning from Replica Sets to Sharded Clusters: The How and Why \nThis presentation explores the transition from Replica Sets to Sharded Clusters and the significance of horizontal scaling. We begin with an introduction to Replica Sets and their benefits. Moving forward, we discuss the concept of sharding, its advantages, and its role. We then explore the reason for the shard, components of Sharded Clusters, sharding strategies and best practices. Data Modeling in MongoDB \nMongoDB’s flexible BSON format necessitates distinct data modeling strategies. Decisions between embedding and referencing data impact performance and scalability. This talk explores MongoDB data modeling best practices, emphasizing alignment with application needs. Proper modeling ensures efficient data architectures and robust applications. Crafting an Effective Framework for Implementing and Expanding Search Capabilities in a Multi-Tenant Environment \nWe will delve into our approach to designing the search module for a multi-tenant system characterized by flexible schemas and user-defined search criteria. With the growing complexity of search requirements, conventional search index methods face significant challenges. In this presentation, we will explore innovative solutions to address this issue. Why Attend?\n Gain insights MongoDB Horizontal Scaling Solutions, MongoDB Search and MongoDB Data Modeling.\n Learn from industry experts and practitioners.\n Network with like-minded professionals and expand your connections.\n Stay up-to-date with the latest advancements in MongoDB and AWS technologies.\n Acquire practical knowledge and skills applicable to your projects. Registration and Details: Secure your spot fortoday! Limited seats are available. Don’t miss out on this exclusive opportunity! For any inquiries, please reply on this thread A heartfelt thank you to the entire team at Toyota Connected for their unwavering support and collaboration on this event. Your dedication and commitment have been instrumental in making this a success. We deeply appreciate your partnership. #Gratitude #ToyotaConnectedSupportWhere?\nEvent Type: In-Person\nLocation: Toyota Connected, UB City, Level 4, No. 24, Concorde Block Tower Vittal Mallya Rd, Bengaluru, Karnataka 560001Speakers\nIMG-24831920×2560 164 KBAayushi Mangal Agrawal\nSenior Database Engineer @ Visible AlphaPrakhar Verma\nChief Architect @ Capillary TechnologiesDSC_03001920×2880 357 KBVivekanandan Sakthivelu\nLead Engineer, Software Development @ Toyota Connected IndiaCollaboration with Toyota Connected Indiatoyota595×646 14.6 KBEvent Participation Guidelines", "username": "Manosh_Malai" }, { "code": "", "text": "How to register to this event", "username": "Narayan_VJ" } ]
MUG Bengaluru: Explore MongoDB Replica Sets, Search Capabilities and Design Best Practices
2023-11-02T12:17:10.666Z
MUG Bengaluru: Explore MongoDB Replica Sets, Search Capabilities and Design Best Practices
629
null
[ "database-tools", "backup" ]
[ { "code": "mongodump --host 127.0.0.1 --port 27017 --authenticationDatabase=\"admin\" --username admin --password \"\" --db=admin --collection system.users --archive=/data/dump/users_and_roles.gz\n\nRestore Command:\n\n", "text": "We encountered an error while attempting to restore a backup created with mongodump. The error message suggests an issue with the authentication version. The error message is as follows:“Failed: the users and roles collections in the dump have an incompatible auth version with the target server: cannot restore users of auth version 1 to a server of auth version 5.”This issue is occurring when attempting to restore users and roles from a dump created with an authentication version 1 to a server with authentication version 5. This incompatibility is preventing the restoration process. The commands used for backup and restore are as follows:Backup Command:mongorestore --host 127.0.0.1 --port 27017 --authenticationDatabase=“admin” --username admin --password “” --db=admin --collection system.users --archive=/data/dump/users_and_roles.gzPlease assist me to fix the issue. Thanks in advance !", "username": "Nino_I" }, { "code": "mongorestoremongodump", "text": "Is the server being restored to the same version as the source ?When using mongorestore to load data files created by mongodump, the MongoDB versions of your source and destination deployments must be either:https://www.mongodb.com/docs/database-tools/mongorestore/#restore-to-matching-server-version", "username": "chris" } ]
MongoDB Restore failed for system.users collection
2023-11-01T16:14:50.949Z
MongoDB Restore failed for system.users collection
138
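The "auth version 1 … auth version 5" mismatch above refers to the authSchema document stored in admin.system.version on each deployment. A minimal mongosh sketch for comparing the source and destination before moving system.users is shown below; the authSchemaUpgrade step only applies to genuinely old source deployments and should be checked against the documentation for your server version first.

```javascript
// Run in mongosh against BOTH the source and the destination deployment to see
// which auth schema version mongorestore is comparing.
const authSchema = db.getSiblingDB("admin").system.version.findOne({ _id: "authSchema" });
print("authSchema currentVersion:", authSchema ? authSchema.currentVersion : "not present");

// If the source really is on an old auth schema, it can be upgraded on the source
// primary before dumping users and roles (verify against your version's docs first):
// db.getSiblingDB("admin").runCommand({ authSchemaUpgrade: 1 });
```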
null
[ "mongodb-shell" ]
[ { "code": "", "text": "The Unit : [](MongoDB Courses and Trainings | MongoDB University) does not work well.\nIt’s impossible to do the quizzes", "username": "Honorat_DJANWE" }, { "code": "", "text": "well, it’s not impossible because many of us have done the quizzes …If you describe the problem you are having, perhaps someone can help you.", "username": "Jack_Woehr" } ]
Unit : The MongoDB Shell
2023-11-04T02:09:25.104Z
Unit : The MongoDB Shell
118
null
[ "aggregation", "data-modeling", "python", "mongodb-shell" ]
[ { "code": "", "text": "Greetings Mongo Community,I have just completed MongoDB Python Developer Course Now I want to purchase and give Associate Developer Exam Python. But before that I want to practice MongoDB and to do so I found One Code Example Application for Dog Care Providers (DCP) but the problem with this code example is that it doesn’t have the access to the database itself so without database how I can practice than.Can someone suggest me similar sort of practice resources so that I can pass its certification exam and then use mongodb in my projects.Thanks", "username": "Ankit_Gupta11" }, { "code": "", "text": "Try the free tier of Atlas.", "username": "Jack_Woehr" } ]
Good Practice resource for MongoDB
2023-11-03T11:56:45.683Z
Good Practice resource for MongoDB
146
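For anyone following the "try the free tier" advice, a minimal Node.js sketch for practicing against an Atlas M0 cluster is shown below; the connection string is a placeholder, and sample_mflix assumes the Atlas sample dataset has been loaded from the UI.

```javascript
// Minimal practice script against an Atlas free-tier (M0) cluster.
// Replace the placeholder URI with your own cluster's connection string.
const { MongoClient } = require("mongodb");

async function main() {
  const client = new MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net");
  try {
    await client.connect();
    // sample_mflix is one of the Atlas sample datasets loadable from the UI.
    const movies = client.db("sample_mflix").collection("movies");
    console.log(await movies.findOne({ year: 1999 }, { projection: { title: 1 } }));
  } finally {
    await client.close();
  }
}

main().catch(console.error);
```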
https://www.mongodb.com/…_2_1024x512.jpeg
[ "java", "atlas-cluster", "spring-data-odm" ]
[ { "code": "", "text": "I have a single node GCP cluster with my spring-boot app running. I recently created a new DB [M10] cloud atlas and trying to connect it via my app. However, I keep getting the following error - screenshot attached\n\nScreenshot 2022-06-11 at 14.27.531920×961 289 KB\nHowever, I am able to connect to my old DBs [M20 and M10] via the same GCP cluster and same spring-boot app.I am unable to figure out what could be the reason? Any idea?Thanks in advance\nPranav", "username": "Pranav_Jariwala" }, { "code": "", "text": "Is there a final “Caused by” that’s cut off from that screen shot? That could help diagnose the issue. It’s possible that you’re running into https://jira.mongodb.org/browse/JAVA-4018, in which case updating the Java driver to 4.5.1 or later will fix it.Is this a self-managed cluster, or are using Atlas, or some other service? I am pretty sure that Atlas always registers a TXT record in DNS, but other services may not.", "username": "Jeffrey_Yemin" }, { "code": "", "text": "final caused by :\n\nScreenshot 2022-06-11 at 22.39.332870×1014 414 KB\nI am using Atlas and spring-boot-starter-data-mongodb 4.6.0the other 2 DBs are also from Atlas and I am able to connect to them", "username": "Pranav_Jariwala" }, { "code": "", "text": "from the bug you referenced it looks like it may have re-appeared in 4.6.0.", "username": "Pranav_Jariwala" }, { "code": "", "text": "4.6.0sorry i am using spring-boot-starter-data-mongodb 4.7.0 which internally uses 4.6.0 mongo java driver", "username": "Pranav_Jariwala" }, { "code": "", "text": "You can see from the stack trace that it’s using mongodb-driver-core-4.4.0.jar.", "username": "Jeffrey_Yemin" }, { "code": "", "text": "Thanks for spotting that. But after upgrading I am still getting DNS issues\nScreenshot 2022-06-12 at 16.26.112672×864 316 KB\n", "username": "Pranav_Jariwala" }, { "code": "", "text": "It is not the same address as before.", "username": "steevej" }, { "code": "", "text": "yeah… i deleted the previous DB and created new one", "username": "Pranav_Jariwala" }, { "code": "", "text": "any idea what might be wrong? I am using the same old username/pass combination that I have for the other 2 DB [no special character] . didn’t find anything relevant online", "username": "Pranav_Jariwala" }, { "code": "", "text": "There’s something wrong with the new Atlas DB instances. There seems to be an issue only while connecting with the new DB. The old DB connection are working perfectly fine", "username": "Pranav_Jariwala" }, { "code": "~$ dig SRV _mongodb._tcp.hk-prod.h8dkbpg.mongodb.net\n\n; <<>> DiG 9.10.6 <<>> SRV _mongodb._tcp.hk-prod.h8dkbpg.mongodb.net\n;; global options: +cmd\n;; Got answer:\n;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19870\n;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1\n\n;; OPT PSEUDOSECTION:\n; EDNS: version: 0, flags:; udp: 512\n;; QUESTION SECTION:\n;_mongodb._tcp.hk-prod.h8dkbpg.mongodb.net. IN SRV\n\n;; ANSWER SECTION:\n_mongodb._tcp.hk-prod.h8dkbpg.mongodb.net. 60 IN SRV 0 0 27017 ac-gokumri-shard-00-00.h8dkbpg.mongodb.net.\n_mongodb._tcp.hk-prod.h8dkbpg.mongodb.net. 60 IN SRV 0 0 27017 ac-gokumri-shard-00-01.h8dkbpg.mongodb.net.\n_mongodb._tcp.hk-prod.h8dkbpg.mongodb.net. 
60 IN SRV 0 0 27017 ac-gokumri-shard-00-02.h8dkbpg.mongodb.net.\n\n;; Query time: 167 msec\n;; SERVER: 127.0.0.1#53(127.0.0.1)\n;; WHEN: Sun Jun 12 12:08:08 EDT 2022\n;; MSG SIZE rcvd: 256\n\n~$ dig TXT _mongodb._tcp.hk-prod.h8dkbpg.mongodb.net\n\n; <<>> DiG 9.10.6 <<>> TXT _mongodb._tcp.hk-prod.h8dkbpg.mongodb.net\n;; global options: +cmd\n;; Got answer:\n;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 7702\n;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1\n\n;; OPT PSEUDOSECTION:\n; EDNS: version: 0, flags:; udp: 512\n;; QUESTION SECTION:\n;_mongodb._tcp.hk-prod.h8dkbpg.mongodb.net. IN TXT\n\n;; ANSWER SECTION:\n_mongodb._tcp.hk-prod.h8dkbpg.mongodb.net. 60 IN TXT \"authSource=admin&replicaSet=atlas-tbk5qg-shard-0\"\n\n;; Query time: 35 msec\n;; SERVER: 127.0.0.1#53(127.0.0.1)\n;; WHEN: Sun Jun 12 12:08:57 EDT 2022\n;; MSG SIZE rcvd: 131\n\n", "text": "Notice that now the TXT record is no longer failing (that happens first) but SRV lookup is still failing. Using dig, I’m able to resolve both TXT and SRV records for this host:Can you try the same from the same server on which you’re running the Java application?", "username": "Jeffrey_Yemin" }, { "code": "", "text": "looks like dig worked.\n\nScreenshot 2022-06-12 at 18.45.311752×1494 139 KB\nthen maybe some issue with com.mongodb.internal.dns.DefaultDnsResolver ?", "username": "Pranav_Jariwala" }, { "code": "", "text": "I think it is something in your environment. I am able to connect successfully to your cluster from both the mongo shell and a Java application.", "username": "Jeffrey_Yemin" }, { "code": "", "text": "If the issue was with the environment wouldn’t there be an issue connecting to other Atlas DB? [as mentioned previously, I am able to connect to the other 2 DB via the same GCP server, same application]", "username": "Pranav_Jariwala" }, { "code": "", "text": "Not necessarily, but I don’t have a hypothesis that explains what both of us are seeing. I think at this point you should open a support ticket (https://www.mongodb.com/docs/manual/support/) to try to get to the root cause.Regards,\nJeff", "username": "Jeffrey_Yemin" }, { "code": "org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'appConfig' defined in URL [jar:file:/opt/app/server/app.jar!/BOOT-INF/classes!/com/devnotes/server/app/AppConfig.class]: Bean instantiation via constructor failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [com.devnotes.server.app.AppConfig]: Constructor threw exception; nested exception is com.mongodb.MongoConfigurationException: Unable to look up TXT record for host dev.7f***.mongodb.net at org.springframework.beans.factory.support.ConstructorResolver.instantiate\n", "text": "Having the same issue as Pranav. GCP is failing to connect, works on my local environment. 
GCP works when connecting to older instances of Atlas.", "username": "Robert_Timm" }, { "code": "", "text": "Nevermind, just started happening in my local environment as well.", "username": "Robert_Timm" }, { "code": "cluster0.dnqegeu.mongodb.netcluster0.xfzokj5.mongodb.netmongodb+srv://mongodb-driver-core-4.8.2.jar", "text": "I’m facing the same issue, but did not find any solution on the forum here which works.\nI can connect to a DB on cluster0.dnqegeu.mongodb.net or cluster0.xfzokj5.mongodb.net with the MongoDB Compass application,\nBUT my java spring application fails to connect to a URI with mongodb+srv://\nWhile the mongodb driver should use the SRV entries of the DNS, it looks for a TXT entry.\nThe library in use is mongodb-driver-core-4.8.2.jar\nAny suggestion please?", "username": "Dominik_Knoll" }, { "code": ";QUESTION\ncluster0.dnqegeu.mongodb.net. IN ANY\n;ANSWER\ncluster0.dnqegeu.mongodb.net. 60 IN TXT \"authSource=admin&replicaSet=atlas-c64af7-shard-0\"\ncluster0.dnqegeu.mongodb.net. 60 IN SRV 0 0 27017 ac-eheduyb-shard-00-00.dnqegeu.mongodb.net.\ncluster0.dnqegeu.mongodb.net. 60 IN SRV 0 0 27017 ac-eheduyb-shard-00-01.dnqegeu.mongodb.net.\ncluster0.dnqegeu.mongodb.net. 60 IN SRV 0 0 27017 ac-eheduyb-shard-00-02.dnqegeu.mongodb.net.\n;QUESTION\ncluster0.xfzokj5.mongodb.net. IN ANY\n;ANSWER\ncluster0.xfzokj5.mongodb.net. 60 IN TXT \"authSource=admin&replicaSet=atlas-kymzsm-shard-0\"\ncluster0.xfzokj5.mongodb.net. 60 IN SRV 0 0 27017 ac-1b0f5hn-shard-00-00.xfzokj5.mongodb.net.\ncluster0.xfzokj5.mongodb.net. 60 IN SRV 0 0 27017 ac-1b0f5hn-shard-00-01.xfzokj5.mongodb.net.\ncluster0.xfzokj5.mongodb.net. 60 IN SRV 0 0 27017 ac-1b0f5hn-shard-00-02.xfzokj5.mongodb.net.\n", "text": "The scheme mongodb+srv involves 2 types of DNS records. A TXT record which supplied connection string parameter and SRV records which provides a list of hosts to connect to. Looking for a TXT is thus normal behaviour.What is the error message?As for the 2 clusters you share I get:andSo both looks correct.", "username": "steevej" } ]
Unable to look up TXT record for host ****.****.mongodb.net from GCP ap
2022-06-11T12:40:21.913Z
Unable to look up TXT record for host ****.****.mongodb.net from GCP ap
14,092
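The dig checks above can also be reproduced from inside the affected host with Node's built-in dns module, which helps separate an environment-level resolver problem from a driver problem; the hostname below is the cluster from this thread and should be replaced with your own seed-list host.

```javascript
// Reproduce the SRV/TXT lookups that the mongodb+srv scheme performs.
const dns = require("dns").promises;

const host = "hk-prod.h8dkbpg.mongodb.net"; // replace with your own cluster host

(async () => {
  // The driver first resolves the SRV records listing the replica set members...
  console.log("SRV:", await dns.resolveSrv(`_mongodb._tcp.${host}`));
  // ...then a TXT record on the host itself carrying default connection options.
  console.log("TXT:", await dns.resolveTxt(host));
})().catch((err) => console.error("DNS lookup failed:", err));
```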
null
[ "node-js", "cxx" ]
[ { "code": "", "text": "There is a easy to use, very lightweight, fully featured ES2020 interpreter called QuickJs.I easily brought this library into my threaded web application (written in C++ using MongoCXX) and it gives my users the ability to write and execute their custom business logic in Javascript.I see that node-js is available but requires you to install Node which I would like to avoid due to it’s large size and more unnecessary stuff to deal with.I looked at the Node.js driver source and it is written in Typescript which I don’t understand although I have am familiar with Javascript.Is there source code for Node.js driver in plain Javascript available?My hope would be to use the Node.js driver mostly as is except for the parts that communicate with the server which I would have to adapt to the MongoDB C driver or to MongoCXX. Does this sound reasonable?Also, I’m sure that there are many others who do not use Node and would prefer not having to install it if they really are not going to use it.On a side note, I believe that QuickJs would be a perfect JS library to use with the Mongo shell instead of Spidermonkey. Spidermonkey has JIT but for the shell, I think that it is totally unnecessary, plus the size and complexity. I do believe that QuickJs would be way easier to use.Online QuickJs demo: http://numcalc.com/\nOfficial website: QuickJS Javascript Engine\nGithub: GitHub - bellard/quickjs: Public repository of the QuickJS Javascript Engine. Pull requests are not accepted. Use the mailing list to submit patches.Hopefully I’ll get some answers before deciding what to do next. I really would like to hear opinions on this, e.g. difficulty level, etc.", "username": "Billy_Zee" }, { "code": "const testFab = function() {\n\tconst number = 200;\n\tlet n1 = 0, n2 = 1, nextTerm;\n\n\tconsole.log('Fibonacci Series:');\n\n\tfor (let i = 1; i <= number; i++) {\n\t\tconsole.log(n1);\n\t\tnextTerm = n1 + n2;\n\t\tn1 = n2;\n\t\tn2 = nextTerm;\n\t}\n}\n\nconst testConcat = function(n) {\n\tlet str = '';\n\t\n\tif (n != 0)\n\t\tstr = str + '-' + n;\n\telse\n\t\tstr = n;\n\n\treturn str;\n}\n\nlet str = '';\nfor (n = 0; n < 100; ++n) {\n\tstr += testConcat(n);\n}\n\n\n// start time\nvar start = new Date().getTime();\n\n// run tests\nconsole.log('Concatenation result:\\n', str);\ntestFab();\n\n// end time\nvar end = new Date().getTime();\nvar time = end - start;\nconsole.log('Execution time: ' + time + ' ms');\n\n", "text": "Here is an example that you can run yourself online (QuickJs).Copy the following code below to a text file and upload to http://numcalc.com\nThe code below runs in about 175ms which demonstrates that JIT is not necessary for running reasonable business logic that shell or mongodb users would run (in my humble opinion).", "username": "Billy_Zee" }, { "code": "", "text": "I’m familiar with QuickJS actually, and have looked into a similar scenario.The main issue is no, there is not a JS version of the driver available, at least not in MongoDBs Public GitHub Repo anyway.You would have to install Node.JS, and adapt things accordingly. I personally moved away from QuickJS myself because of issues discovered in its compatibility with other libraries and frameworks such as React.JS/React Native, Dart.JS, and a multitude of issues working with TailwindCSS and FlexBox.I also had a multitude of issues with CloudFlare freaking out over it, as it kept flagging it as XSS was trying to take place. 
TBH you’d have much better success going to the Node Driver and running it through Atlas for server side, and running your clients via the Node.JS Realm SDK.EDITThe reasoning behind the TypeScript FYI, was to build it out to be as universal to other JavaScript libraries etc as possible, as majority of JavaScript users (Like myself for instance) may know multiple libraries and frameworks from Next.JS, Node.JS, React.JS, Angular.JS etc. Or just one or two of them, but almost certainly everyone will be familiar with TypeScript.", "username": "Brock" }, { "code": "", "text": "Thanks Brock for your response, I’ve looked at a few online TS to JS converters and they seem to produce not very readable JS. Is this the nature of TS to JS conversions or is there a way to convert TS into easily readable JS, e.g. something that a person would write? Thanks again!", "username": "Billy_Zee" }, { "code": "", "text": "Unfortunately there isn’t, TS is just TS and a lot of translation service middleware is normally internally made from entity to entity, and will cause extreme performance issues.", "username": "Brock" } ]
MongoDB Node.js Driver to use QuickJS interpreter to avoid having to install Node?
2023-03-22T05:12:15.808Z
MongoDB Node.js Driver to use QuickJS interpreter to avoid having to install Node?
1,115
https://www.mongodb.com/…2_2_1024x192.png
[ "data-modeling" ]
[ { "code": "", "text": "Hi, I’m working on implementing social functionality in an app where users can follow each other. I’m trying to come up with a good approach for storing these follower/following relationships between users.I was initially going to use two collections… Follower and Following where each document would contain a user id, and an array of users the person is following, or who are following them.My concern with this approach is that if follower count gets very large, we will hit the 16mb BSON limit. An alternative would be having one document per follower relationship, which introduces other performance issues.Is a bucket pattern the best solution for this… or is there a better approach? I could have buckets of 100 followers and then also use those buckets for paginating lists of followers to display.However my concern is that when deleting a follower… all buckets for a given user would have to be searched and then when the follower is removed, an old bucket might have a gap. This gap could then be filled by a new follower in the future, but this would mean it would be more difficult to display the followers in order of new to old without sorting on date followed, which I imagine could become a performance issue.I’m relatively new to MongoDB and learning. This great article by @Justin mentions that this bucket gap issue can be solved by expressive update instructions introduced in MongoDB 4.2, but I’m having trouble understanding how this would resolve the issue:A look at how to speed up the Bucket PatternAny suggestions for using MongoDB when dealing with potentially large and ever growing social data like this?", "username": "UC_Audio" }, { "code": "", "text": "Following up to see if anyone has any suggestions on this.To summarize the questions above, is there a recommended approach or pattern for handling large amounts of follower/following relationships between users in a social media context using MongoDB?", "username": "UC_Audio" }, { "code": "", "text": "I found the answer here referencing the socialite project:Social Data Reference Architecture - This Repository is NOT a supported MongoDB product - GitHub - mongodb-labs/socialite: Social Data Reference Architecture - This Repository is NOT a supported Mo...For anyone else who comes across this, it looks like the suggested approach is to store each follow relationships as one individual document per relationship. So, two collections as noted above, but no bucket pattern, just individual follower or following documents.", "username": "UC_Audio" }, { "code": "", "text": "If you ever considered setting the maximum number of followers and subscriptions for each user?", "username": "Alex_Deer" }, { "code": "", "text": "It can be a really good feature, and today many people are using social media accounts only to gain followers, so if you plan to make a social media application, it can be a good feature if you get a decent explanation. This feature may define your social media project from other platforms like TikTok or Instagram, where all content creators try getting followers with all possible tools, including promotional services like tikdroid.com.", "username": "Alex_Deer" }, { "code": "", "text": "This post was flagged by the community and is temporarily hidden.", "username": "Hamad_Ch" } ]
Data modeling for social media followers/following... bucket pattern?
2020-05-01T19:36:46.972Z
Data modeling for social media followers/following… bucket pattern?
9,213
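A minimal mongosh sketch of the socialite-style "one document per relationship" approach referenced above; the collection and field names (follows, followerId, followeeId) and the placeholder values userA, userB, and lastSeenCreatedAt are illustrative, not prescriptive.

```javascript
// One follow relationship per document, with indexes serving both directions.
db.follows.createIndex({ followeeId: 1, createdAt: -1 });                   // "who follows X", newest first
db.follows.createIndex({ followerId: 1, createdAt: -1 });                   // "who does X follow"
db.follows.createIndex({ followerId: 1, followeeId: 1 }, { unique: true }); // no duplicate follows

// Follow and unfollow are single-document writes; no buckets to search or compact.
db.follows.insertOne({ followerId: userA, followeeId: userB, createdAt: new Date() });
db.follows.deleteOne({ followerId: userA, followeeId: userB });

// Page through X's followers, newest first, using range-based pagination.
db.follows.find({ followeeId: userB, createdAt: { $lt: lastSeenCreatedAt } })
  .sort({ createdAt: -1 })
  .limit(100);
```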
null
[ "react-native" ]
[ { "code": "", "text": "I have been trying to implement Realm into my react-native expo application. I am currently using expo sdk 49.0.13. When installing the realm package and creating a new EAS build, it gets stuck on the splash-screen after seeing it has been bundled 100%.On the internet I have seen people mention possible conflicts with react-native-firebase/app and react-native-reanimated, but I haven’t been able to solve the problem, as creating a new build every time makes it a bit more complicated.Does anybody have any suggestions of how to solve this, or a strategy to find the root of the problem?Thanks in advance \nStef", "username": "Stef_N_A" }, { "code": "", "text": "Do you have remote debugging active? This can sometimes cause issues.", "username": "Andrew_Meyer" }, { "code": "", "text": "I do not have the remote bugging active. I tried different versions of Realm such as 11.10.2, but that also doesn’t work. When I repeatedly reload the app, it sometimes gets past the splashscreen and works. Running npx expo start --no-dev --minify doesn’t change anything.", "username": "Stef_N_A" }, { "code": "", "text": "Found the issue. I overlook the “developmentClient” in eas.json.", "username": "Stef_N_A" }, { "code": "", "text": "Yeah we have the exact same issue, it seems like Realm doesn’t work with Expo 49 and development builds", "username": "Alex_R_N_A" }, { "code": "expo-dev-client", "text": "@Alex_R_N_A Expo base SDK doesn’t include Realm, and thus won’t work with Expo Go, which only contains a limited subset of all the React Native third party packages. You must use the expo-dev-client to add Realm to Expo.Realm works fine with Expo 49. @realm/expo-template - npm", "username": "Andrew_Meyer" }, { "code": "", "text": "@Andrew_Meyer For me the template is not working smoothly when I build with EAS. I did build the template with EAS and when the bundling is stuck at 100% I still have to press reload a lot of times to get lucky and enter the app. This is happening when I am running the development build on my physical iPhone.", "username": "Stef_N_A" }, { "code": "Bundling: 100%...Loading from metro...NodeJS v20.9.0 (LTS)NPM v10.1.0nvm use --ltsnpx create-expo-app@latest MyAwesomeRealmApp --template @realm/expo-templatenpm startnpm start -- --clearBundling: 100%...rLoading from metro...rr", "text": "My app (a fresh app created using the template mentioned by @Andrew_Meyer @realm/expo-template - npm) also gets stuck while bundling or loading from metro with a blank screen.It either displays Bundling: 100%... or a brief Loading from metro... with a white screen. This is reproducible by using the Realm expo template as follows:Steps to reproduce:Note\nThis only appears to affect iOS dev client builds (physical iPhones and simulators). Android builds don’t have this issue for me.", "username": "Kishan_Jadav" } ]
Expo 49.0.13 stuck on splashscreen after installing realm and creating EAS build
2023-10-18T22:30:19.607Z
Expo 49.0.13 stuck on splashscreen after installing realm and creating EAS build
460
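Since the fix was the overlooked developmentClient flag in eas.json, a sketch of what that build profile typically looks like is shown below; the profile names and the rest of the file vary per project, so treat this as an illustration rather than a drop-in config.

```json
{
  "build": {
    "development": {
      "developmentClient": true,
      "distribution": "internal"
    },
    "production": {}
  }
}
```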
null
[ "node-js" ]
[ { "code": "passport.use(new LocalStrategy({ passReqToCallback : true},\n\tasync function(username, password, done) {\n\t\tawait User.getUserByUsername({ username: username }, function(err, user){\n\t\t\tif(err) throw err;\n\t\t\tif(!user){\n\t\t\t\tconsole.log('Unknow User');\n\t\t\t\treturn done(null, false, { message: 'Incorrect username or password.' });\n\t\t\t}\n\t\t\tUser.comparePassword(password, user.password, function(err, isMatch){\n\t\t\t if(err) throw err;\n\t\t\t if(isMatch){\n\t\t\t \treturn done(null, user);\n\t\t\t }else{\n\t\t\t \tconsole.log('Invalid Password');\n\t\t\t \treturn done(null, false, {message: 'Invalid Password'});\n\t\t\t } \t\n\t\t\t});\n\t\t});\n\t}\n));\n> module.exports.comparePassword = function(candidatePassword, hash, callback){\n\tbcrypt.compareSync(comparePassword, hash, function(err, isMatch){\n\t\tif(err) return callback(err);\n\t\tcallback(null, isMatch);\n\t});\n}\n\nmodule.exports.getUserById = function(id, callback){\n\tUser.findById(id, callback);\n}\nmodule.exports.getUserByUsername = function(username, callback){\n\tvar query = {username: username};\n\tUser.findOne(query,callback);\n}\n", "text": "I’m new in express nodejs and using MongoDB 7. So i get several error when using the below code: (which is route/users.js)and next model/user.js have the below code:Please if possible give the total solution above code.ThxTypeError: Model.findOne() have error no longer call back accepted", "username": "solaiman_ibna_Khair" }, { "code": "", "text": "I assume youre using mongoose, google mogo mongoose callback.", "username": "John_Sewell" }, { "code": "var passport = require('passport');\nvar localStrategy = require('passport-local').Strategy;\n", "text": "", "username": "solaiman_ibna_Khair" }, { "code": "passport.use(new LocalStrategy(\n\tasync function (username, password, cb) {\t\t\n\t\tawait User.findOne({username: username}).then(user=>{\t\t\t\n\t\t\t// the fetch user is not found\n\t\t\tif(!user){\t\t\t\t\n\t\t\t\treturn cb(null, false, { message: 'Incorrect username or password.' });\n\t\t\t}\t\t\t\n\t\t\t// use bcrypt to compare the password ans validate\n\t\t\tUser.comparePassword(password, user.password, (err, isMatch)=>{\n\t\t\t\tif(err) throw err;\n\t\t\t\tif(isMatch){\t\t\t\t\t\n\t\t\t\t\treturn cb(null, user);\n\t\t\t\t}else{\n\t\t\t\t\treturn cb(null, false, {message:'Password ectered is incorrect'});\n\t\t\t\t}\n\t\t\t});\t\t\n\t\t}).catch((err)=>{\n\t\t \tconsole.log(err);\n\t\t});\n\t}\n));\t\t\nmodule.exports.comparePassword = function (candidatePassword, hash, callback) {\n bcrypt.compare(candidatePassword, hash, (err, isMatch) => {\n if (err) return callback(err); \n callback(null, isMatch);\n });\n};\n", "text": "OK, I solved it myself. The answer is below:And comarePassword : model/user.js", "username": "solaiman_ibna_Khair" } ]
I am using MongoDB 7, so Model.findOne() gives the error that a callback is no longer accepted
2023-10-25T16:40:21.124Z
I am using MongoDB 7, so Model.findOne() gives the error that a callback is no longer accepted
205
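The same fix can be written a bit more flatly with async/await instead of .then()/.catch(); this sketch assumes the User model and the passport/bcrypt requires from the thread, and calls bcrypt.compare directly rather than the callback-based comparePassword helper.

```javascript
passport.use(new LocalStrategy(async function (username, password, cb) {
  try {
    // Mongoose 7 removed callbacks: findOne() now returns a promise.
    const user = await User.findOne({ username });
    if (!user) {
      return cb(null, false, { message: 'Incorrect username or password.' });
    }
    const isMatch = await bcrypt.compare(password, user.password); // promise form of bcrypt
    return isMatch
      ? cb(null, user)
      : cb(null, false, { message: 'Incorrect username or password.' });
  } catch (err) {
    return cb(err);
  }
}));
```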
null
[ "replication", "database-tools", "backup", "ops-manager" ]
[ { "code": "", "text": "Hello!I think about a new deployment and I don’t have much experience with it. It’s on-premise deployment on either physical or virtual machines. I’m going to use Ops Manager or schedule mongodump scripts (it’s still an open question) to protect MongoDB data.My question is:\nShould I also protect the replica set nodes at the OS level with another solution? For example, to protect Mongo config files\n(f.g. rsync for files backup, hypervisor snapshots, other backup solution etc.)\nNormally, we protect all machines at the OS level in our environment but not sure it’s really necessary in case of Mongo as we’re planning using scripts for new replica set node installation.How do you usually do it?Many thanks in advance!", "username": "Petr_Makarov" }, { "code": "", "text": "if your data size is small, mongodump should be fine. (i guess ops manager can also do backup). If your data size is big, using a system level snapshot can be faster (e.g. EBS snaphot) as they do low level disk backup instead of dealing with queries.", "username": "Kobe_W" }, { "code": "", "text": "Thanks for the reply!The idea is that I have Ops Manager for backup and I must use it. Ops Manager backs up only database data but I think about an additional OS/server-level backup of the replica set nodes.Does it makes sense to have 2 backups? DB backup by Ops Manager and OS/server backup by something else?Thanks!", "username": "Petr_Makarov" }, { "code": "", "text": "i don’t see any obvious benefit for doing that. for my projects, i only use ebs snapshot.", "username": "Kobe_W" } ]
OS level backup of replica set nodes
2023-11-02T20:55:40.829Z
OS level backup of replica set nodes
164
null
[ "cluster-to-cluster-sync" ]
[ { "code": "{\"progress\":{\"state\":\"RUNNING\",\"canCommit\":true,\"canWrite\":false,\"info\":\"change event application\",\"lagTimeSeconds\":17042,\"collectionCopy\":{\"estimatedTotalBytes\":1718274427703,\"estimatedCopiedBytes\":1718366009460},\"directionMapping\":{\"Source\":\"cluster0: 127.0.0.1:27017\",\"Destination\":\"cluster1: 127.0.0.1:27100\"},\"mongosyncID\":\"coordinator\",\"coordinatorID\":\"coordinator\"}}\n", "text": "Hello!I’m using mongosync 1.7.0 to sync data between two datacenters (about 500GB) with MongoDB 6.0.8. It syncs collections without any issues moves to “change event application” stage. But the problem that “lagTimeSeconds” continue growing (now it’s 16493) and stops when it’s 4-8 hours behind.\nThen if want to commit I have to wait the lag time. I checked this and after “commit” command sent “lagTimeSeconds” goes down slowly second by second. By docs I see that during “commit” state I should prevent writes to source cluster. It means that I should consider 4-8 hours downtime\nI was thinking that after initial sync of collections, the “lagTimeSeconds” will be going down. But it’s not, it goes up and then stops increasing some moment.Same time if I decide to move data using copying of mongodb files from disk it will take about 4 hours of downtime.\nI suppose something is wrong here.I restarted sync many times and always after moving to “change event application” state it keeps “lagTimeSeconds” big enough to make it not possible to move to another server without downtime.Please let me know if you have any idea why it happens. Network speed and CPU are not bottlenecks, as I restarted sync many times and “collection copy” stage goes fast enough and after that it still does some inserts on the new server, but it looks like it keeps the lag and doesn’t try to catch up.", "username": "Dmitry_Eroshenko1" }, { "code": "{\"progress\":{\"state\":\"RUNNING\",\"canCommit\":true,\"canWrite\":false,\"info\":\"change event application\",\"lagTimeSeconds\":23292,\"collectionCopy\":{\"estimatedTotalBytes\":1718274427703,\"estimatedCopiedBytes\":1718366009460},\"directionMapping\":{\"Source\":\"cluster0: 127.0.0.1:27017\",\"Destination\":\"cluster1: 127.0.0.1:27100\"},\"mongosyncID\":\"coordinator\",\"coordinatorID\":\"coordinator\"}}", "text": "This is state that I see right now after it runs for couple more hours {\"progress\":{\"state\":\"RUNNING\",\"canCommit\":true,\"canWrite\":false,\"info\":\"change event application\",\"lagTimeSeconds\":23292,\"collectionCopy\":{\"estimatedTotalBytes\":1718274427703,\"estimatedCopiedBytes\":1718366009460},\"directionMapping\":{\"Source\":\"cluster0: 127.0.0.1:27017\",\"Destination\":\"cluster1: 127.0.0.1:27100\"},\"mongosyncID\":\"coordinator\",\"coordinatorID\":\"coordinator\"}}Btw, desstination server is not on same machine. I just run ssh tunnel to another server on port 27100", "username": "Dmitry_Eroshenko1" }, { "code": "", "text": "Looks like I found why I have that lag. “date” command shows different time on two servers because of time zone. And difference is 4 hours. I’ll try to set same timezone now and will update this ticket if it works. FYI I run mongosync binary on source cluster with EDT tz, but on destionation one I have UTC.", "username": "Dmitry_Eroshenko1" }, { "code": "", "text": "Ok, I changed time zone, restarted server and restarted sync. 
Now it looks promising because lag doesn’t grow up during sync, stays around 50 seconds", "username": "Dmitry_Eroshenko1" }, { "code": "{\"progress\":{\"state\":\"RUNNING\",\"canCommit\":false,\"canWrite\":false,\"info\":\"collection copy\",\"lagTimeSeconds\":75,\"collectionCopy\":{\"estimatedTotalBytes\":1721506636723,\"estimatedCopiedBytes\":395547521848},\"directionMapping\":{\"Source\":\"cluster0: 127.0.0.1:27017\",\"Destination\":\"cluster1: 127.0.0.1:27100\"},\"mongosyncID\":\"coordinator\",\"coordinatorID\":\"coordinator\"}}", "text": "{\"progress\":{\"state\":\"RUNNING\",\"canCommit\":false,\"canWrite\":false,\"info\":\"collection copy\",\"lagTimeSeconds\":75,\"collectionCopy\":{\"estimatedTotalBytes\":1721506636723,\"estimatedCopiedBytes\":395547521848},\"directionMapping\":{\"Source\":\"cluster0: 127.0.0.1:27017\",\"Destination\":\"cluster1: 127.0.0.1:27100\"},\"mongosyncID\":\"coordinator\",\"coordinatorID\":\"coordinator\"}}75 seconds. So, looks like mongosync uses local time instead of unix timestamp for dates to apply oplog, not sure why it’s like that.", "username": "Dmitry_Eroshenko1" }, { "code": "", "text": "Looks like it didn’t help. Collection copy finished, now on “change event application”. And the lag grows up again{“progress\":{“state”:“RUNNING”,“canCommit”:true,“canWrite”:false,“info”:“change event application”,“lagTimeSeconds”:23079,“collectionCopy”:{“estimatedTotalBytes”:1721506636723,“estimatedCopiedBytes”:1722156937585},“directionMapping”:{“Source”:“cluster0: 127.0.0.1:27017”,“Destination”:“cluster1: 127.0.0.1:27100”},“mongosyncID”:“coordinator”,“coordinatorID”:“coordinator”}}", "username": "Dmitry_Eroshenko1" }, { "code": "", "text": "After three weeks of trying I decided to use replication to move to another datacenter instead of mongosync.", "username": "Dmitry_Eroshenko1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongosync keeps "lagTimeSeconds" big
2023-10-31T06:24:23.237Z
Mongosync keeps "lagTimeSeconds" big
200
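The progress JSON quoted in this thread comes from mongosync's HTTP API. Below is a sketch of polling it from Node.js (18+, for the global fetch) and committing only once the reported lag is small; the default port 27182, the 60-second threshold, and the polling interval are assumptions to adjust for your deployment, and writes to the source still need to be stopped before committing.

```javascript
const BASE = "http://localhost:27182/api/v1"; // mongosync's default API port (assumption)

async function commitWhenCaughtUp(maxLagSeconds = 60) {
  for (;;) {
    const { progress } = await (await fetch(`${BASE}/progress`)).json();
    console.log("state:", progress.state, "lagTimeSeconds:", progress.lagTimeSeconds);

    if (progress.canCommit && progress.lagTimeSeconds <= maxLagSeconds) {
      // Source writes must already be stopped at this point.
      const res = await fetch(`${BASE}/commit`, { method: "POST", body: "{}" });
      console.log(await res.json());
      return;
    }
    await new Promise((resolve) => setTimeout(resolve, 30_000)); // poll every 30s
  }
}

commitWhenCaughtUp().catch(console.error);
```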
null
[ "react-native" ]
[ { "code": "<UserProvider fallback={<FallBackNavigator />}>\n <RealmProvider\n sync={{\n flexible: true,\n initialSubscriptions: {\n update: (subs, realm) => {\n subs.add(realm.objects('courses'),{\n name: 'allCoursesSubscription',\n })\n },\n rerunOnOpen: true,\n },\n }}\n fallback={() => <ActivityIndicator size={'large'} style={{ position: 'absolute', zIndex: 999, top: DEVICE_HEIGHT * 0.5, left: DEVICE_WIDTH * 0.46 }} />}\n >\n <Navigator\n initialRouteName=\"Splash\"\n screenOptions={{\n headerShown: false,\n }}\n >\n <Screen name=\"Splash\" component={Splash} />\n <Screen name=\"Onboarding\" component={Onboarding} />\n <Screen name=\"DrawerNavigator\" component={DrawerNavigator} />\n <Screen name=\"BottomNavigator\" component={BottomNavigator} />\n </Navigator>\n </RealmProvider>\n </UserProvider>\n", "text": "Hello, Please how can I use Flexible sync in realm with React Native when the device is offline? When I am online, everything works well, but when I go offline, the RealmProvider gets stuck at the fallback without being able to pass through. Here is my code below.I have checked other topics on this with no way to get unblocked, I have also read the docs all through.\nI only have a problem with it when I am offline. Please help", "username": "Oben_Tabiayuk" }, { "code": "", "text": "I finally found out how to solve the Issue I was having. I was forgetting the user field in the sync property of the RealmProvider. So I added it and everything was now okay", "username": "Oben_Tabiayuk" }, { "code": "<RealmProvider\n sync={{\n flexible: true,\n onError: (_, error) => {\n console.log(error);\n },\n }}\n fallback={LoadingIndicator}>\n <App />\n</RealmProvider>\n\n", "text": "Hey @Oben_Tabiayuk can you expand how you solved it. @Andrew_Meyer can you advise.\nSame with me, when I am online, everything works well, but when I go offline and reopen the app the RealmProvider gets stuck at the fallback without being able to sync data.I want to tell Realm to only connect to the remote server if the device is online. If the device is offline, Realm should immediately return the local subscription.", "username": "Siso_Ngqolosi" }, { "code": "<RealmProvider\n sync={{\n flexible: true,\n existingRealmBehavior: {\n openBehavior: \"openImmediately\",\n },\n newRealmFileBehavior: {\n openBehavior: \"downloadBeforeOpen\",\n },\n onError: (_, error) => {\n console.log(error);\n },\n }}\n fallback={LoadingIndicator}>\n <App />\n</RealmProvider>\n", "text": "@Siso_Ngqolosi We are planning to update this in v13, but the default behaviors for opening Realm are unfortunately not very offline first friendly. However this can be solved with the following settings:Let us know if this helps!", "username": "Andrew_Meyer" }, { "code": "\"downloadBeforeOpen\"Realm.OpenRealmBehaviorTypeconst realmAccessBehavior: Realm.OpenRealmBehaviorConfiguration = {\n type: 'downloadBeforeOpen',\n timeOutBehavior: 'openLocalRealm',\n timeOut: 1000,\n };\n\n return (\n ...\n <RealmProvider\n sync={{\n flexible: true,\n newRealmFileBehavior: realmAccessBehavior,\n existingRealmFileBehavior: realmAccessBehavior,\n onError: (_, error) => {\n console.log(error);\n },\n }}\n fallback={LoadingIndicator}>\n...\n", "text": "Thanks @Andrew_Meyer for that.Also found this and it seems to be working . 
Just have to fix the error message is telling me that the type \"downloadBeforeOpen\" is not assignable to the type Realm.OpenRealmBehaviorType .Unrelated: Enjoyed your talk realm talk here: https://youtu.be/biIPsXoHu7Q?si=WysWpd7PkZbbr7e0", "username": "Siso_Ngqolosi" }, { "code": "const existingRealmFileBehaviorConfig: OpenRealmBehaviorConfiguration = {\n type: OpenRealmBehaviorType.OpenImmediately,\n };\n\n const newRealmFileBehaviorConfig: OpenRealmBehaviorConfiguration = {\n type: OpenRealmBehaviorType.OpenImmediately,\n };\n\n <RealmProvider\n sync={{\n user: app.currentUser!,\n flexible: true,\n newRealmFileBehavior: newRealmFileBehaviorConfig,\n existingRealmFileBehavior: existingRealmFileBehaviorConfig,\n", "text": "@Siso_Ngqolosi use this to solve your error", "username": "Oben_Tabiayuk" }, { "code": "const realmAccessBehavior = {\n type: 'downloadBeforeOpen',\n timeOutBehavior: 'openLocalRealm',\n timeOut: 1000,\n };\n\n return (\n ...\n <RealmProvider\n sync={{\n flexible: true,\n newRealmFileBehavior: realmAccessBehavior,\n existingRealmFileBehavior: realmAccessBehavior,\n onError: (_, error) => {\n console.log(error);\n },\n }}\n fallback={LoadingIndicator}>\n...\n", "text": "Hey @Andrew_Meyer, an idea why this might be happening?This problem occurs when modifications are made to the local realm database while the application is running in an offline mode (no internet connection). When the application is subsequently brought back online, the changes made to the local database are overwritten by the data stored in MongoDB.To replicate this issue, follow these steps:", "username": "Siso_Ngqolosi" }, { "code": "", "text": "@Siso_Ngqolosi Sorry for the late reply, this seems to have fallen through my filter.This should be working. Can you check the logs in Atlas if there was an error? Perhaps the permissions on this schema are too restrictive. If on the server side, the user didn’t have the rights to modify this object, then they would be reverted on sync.", "username": "Andrew_Meyer" }, { "code": "", "text": "Is this due to it being guest?Perform a guest check-inBecause the anon auth page has this very big warning:So “once user logs out” seems to be maybe happening in the transition to being a guest user offline then having a connection established?", "username": "Andy_Dent" } ]
Flexible sync when device is offline
2023-03-22T13:20:06.703Z
Flexible sync when device is offline
1,221
null
[ "queries", "dot-net" ]
[ { "code": "public class Note : EmbeddedObject\n{\n [MapTo(\"_id\")]\n public ObjectId Id { get; set; } = ObjectId.GenerateNewId();\n\n [MapTo(\"title\")]\n public String Title { get; set; }\n};\n\npublic class SharedWith : RealmObject\n{\n \n [PrimaryKey]\n [MapTo(\"_id\")]\n public ObjectId Id { get; set; } = ObjectId.GenerateNewId();\n\n [MapTo(\"ownerId\")]\n public ObjectId OwnerId { get; set; }\n\n [MapTo(\"recipientId\")]\n public ObjectId RecipientId { get; set; } \n \n [MapTo(\"notes\")]\n public IList<Note> Notes { get; }\n};\n _realm.All<SharedWith>().Where(sw => sw.RecipientId == p.Id)\nIEnumerable<Appointment>CollectionViewvar myShared = _realm.All<SharedWith>().Where(sw => sw.RecipientId == p.Id);\nvar instNotes = new List<Note>();\nforeach (var sw in myShared) {\n\tinstNotes.AddRange(sw.Notes);\n}\nNotes = instNotes;\n", "text": "Given two Realm classesHow do I get a list from Realm realm of all the Note objects embedded within a set of SharedWith objects?I can get the SharedWith objects with a query such asI want to have a flattened IEnumerable<Appointment> I can bind to a Xamarin CollectionViewI’m using a rather horrible instantion of all objects.", "username": "Andy_Dent" }, { "code": "AllEmbedded// Get a collection for all Note objects\nvar allNotes = (IQueryable<Note>)typeof(Realm).GetMethod(\"AllEmbedded\", BindingFlags.Instance | BindingFlags.NonPublic)!\n .MakeGenericMethod(typeof(Note))\n .Invoke(realm, null)!;\n\n// Filter using the backlinks property for notes where their SharedWith.recipientId is equal to p.Id\nvar filteredNotes = allNotes.Filter(\"@links.SharedWith.Notes.recipientId == $0\", p.Id);\n", "text": "Hey Andy, so this is not really something well supported today. I guess a very hacky solution would be to use the internal AllEmbedded method and then filter that collection. It should look something like:", "username": "nirinchev" }, { "code": "", "text": "Thanks Nikola, I’ll have a go with the hacky solution but may not get to it for a fair while as lots of other stuff to deliver.The code I showed above actually works surprisingly snappily, maybe because it’s doing purely local iterations. It’s part of my SyncOddly demo and is good enough for now that I can focus on other work. I suspect it may not scale though.", "username": "Andy_Dent" } ]
Efficient way to get EmbeddedObjects matching query?
2023-10-24T23:44:27.820Z
Efficient way to get EmbeddedObjects matching query?
249
null
[ "react-native" ]
[ { "code": "app.logIncurrentUserawait app.logIn(<Valid Credentials>)appuseApp", "text": "I have a pretty simple goal: re-validate a logged-in user’s email and password credentials before setting a new password.The only way I’ve found to do this is to call app.logIn with the currentUser email and the password as entered by the user. If the credentials are valid, the app should proceed with a password reset, otherwise it would error. I already have a working “forgot password” flow, but I wanted a different flow for a user that’s already logged in involving re-validating the credentials.The problem is that the app freezes when it gets to await app.logIn(<Valid Credentials>). It throws as expected if the credentials are invalid. I see a successful authorization in my Atlas log but the Promise never resolves.Is there any way to achieve the desired effect of “re-validating” a logged-in user’s credentials?Tech stuff:", "username": "Nick_Martin" }, { "code": "", "text": "Bumping for visibility. Can anyone help?", "username": "Nick_Martin" }, { "code": "", "text": "Hi Nick - would you be able to share a minimal repro case with us?", "username": "mpobrien" }, { "code": "const UpdatePasswordScreen = () => {\n const [currentPassword, setCurrentPassword] = useState('');\n const [password, setPassword] = useState('');\n const [confirmPassword, setConfirmPassword] = useState('');\n\n const user = useUser();\n const app = useApp();\n\n const validateCurrentPassword = useCallback(async () => {\n try {\n const credentials = Realm.Credentials.emailPassword({ email: user?.profile.email, password: currentPassword });\n await app.logIn(credentials); // app freezes here\n return true;\n } catch (error) {\n return false;\n }\n }, [app, user, currentPassword]);\n\n const handleSubmit = async () => {\n const isValid = await validateCurrentPassword();\n if (isValid) {\n const email = user?.profile.email;\n await app.emailPasswordAuth.callResetPasswordFunction({ email, password }, 'resetPassword', user?.id);\n }\n };\n// ...render form\n}\n10.24.012.0.0", "text": "@mpobrien Thanks for the reply and sorry for the delay in my responseBelow is the relevant parts for the “update password” screen. This is accessible when the user has already logged in.This actually worked as expected before I upgraded my npm packages, including Realm (I was prompted to upgrade in order to migrate from a partitioned sync to a flexible sync). I went from 10.24.0 to 12.0.0Also of note, I’m not using the UserProvider fallback pattern, as this caused other problems. I’m not sure if it’s relevant, but since the issue seems to happen when trying to sign in with the same user that’s already signed in, I thought I’d mention it.I have only checked this behavior on Android. It happens every time I try to validate the current user’s password as shown", "username": "Nick_Martin" }, { "code": "", "text": "Any suggestions or guidance?", "username": "Nick_Martin" }, { "code": "import Realm from 'realm';\nRealm.setLogLevel('trace');\n", "text": "@Nick_Martin I can’t think of why this would change, but perhaps there was a regression. Can you active logs at the top of your app?We can then see if there are any oddities that occur and see how we can proceed further.", "username": "Andrew_Meyer" }, { "code": "", "text": "Unfortunately, every time I try to run the app with the log level set they way you said, it crashes the app before I can do anything. 
I can’t tell if this is a problem with Realm or my dev environment.", "username": "Nick_Martin" }, { "code": " LOG [debug] App: log_in_with_credentials: app_id: homeschool-activity-tracker-kfbds\n LOG [debug] App: version info: platform: Android version: 31 - sdk: JS - sdk version: 12.0.0 - core version: 13.17.2\n LOG [debug] App: request location: https://realm.mongodb.com/api/client/v2.0/app/homeschool-activity-tracker-kfbds/location\n LOG [debug] App: update_hostname: https://realm.mongodb.com | wss://ws.realm.mongodb.com\n LOG [detail] DB: 32129 Thread 129365459352816: Open file: /data/data/com.homeschoolactivitytracker/files/mongodb-realm/homeschool-activity-tracker-kfbds/server-utility/metadata/sync_metadata.realm\n LOG [debug] DB: 32129 Thread 129365459352816: Query find all: 'identity == \"6514e9b467771b4897d14c08\" and provider_type == \"local-userpass\"', limit = -1\n LOG [debug] DB: 32129 Thread 129365459352816: Query found: 1, Duration: 13 us\n LOG [debug] DB: 32129 Thread 129365459352816: Initiate commit version: 729\n LOG [debug] DB: 32129 Thread 129365459352816: Commit of size 2320 done in 3123 us\n\n", "text": "I set the log level to “debug” and here’s the whole log output after I press the submit button (that triggers the validation)", "username": "Nick_Martin" }, { "code": "Initiate commit versionCommit of size 2320 done", "text": "After logging this out a few times, I’m seeing that Initiate commit version changes every time but the Commit of size 2320 done is consistently where it stops", "username": "Nick_Martin" } ]
App freezes when logging in with the same user
2023-10-07T03:40:21.738Z
App freezes when logging in with the same user
550
null
[]
[ { "code": "", "text": "For thoses enrolled in student developer proogram , I am supposed to join DBADiscover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.\nbut apprenlty this path has been archieved yet available and new path has been availableDiscover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.I have completed 13 modules of the old path , should I restart it with the new one", "username": "Aman_Ahmed" }, { "code": "", "text": "Hi Aman, welcome to the forum!Yes, our DBA learning path has been updated and released. The learning path is now called Self-Managed Database Admin. If you completed some of the courses in the old DBA learning path, simply click the “Register Now” button on the Self-Managed Database Admin learning path landing page linked above and the same courses you’ve already completed will count toward the completion of the new learning path.Note that it’s only the learning path that’s changed. The DBA certification exam is the same.", "username": "Aiyana_McConnell" } ]
DBA path for Student Developer program
2023-11-03T01:10:41.731Z
DBA path for Student Developer program
125
null
[ "realm-web" ]
[ { "code": "Warning: bundle initial exceeded maximum budget. Budget 500.00 kB was not met by 441.48 kB with a total of 941.48 kB.\n./node_modules/realm-web/dist/bundle.es.js:1263:8-27 - Warning: Module not found: Error: Can't resolve 'electron' in '/Users/Jesse/Projects/GCP/BasinsList-Platform/web/node_modules/realm-web/dist'\n", "text": "Hello,I just created a new Angular app using the CLI, added Angular Universal and PWS support. Added Realm-Web and wired it up to my Realm App / MongoDB Atlas Cluster. It seems to work and I can make queries and return data.I noticed when I build my Angular app for Prod I get a bundle size warning. Keep in mind this app is very bare bones and I only added a few files to get connected to my Realm backend.This is HUGE considering my app only has a couple services and a module added.Something else I noticed when i build…Given the above error I must ask does the Realm Web sdk support tree shaking? Why is my web app trying to include electron? How much other bloat is getting added to my app bundles?Thanks.", "username": "Jesse_Beckton" }, { "code": "", "text": "A couple observations since first post. The Realm-Web SDK obviously does not run on Node which is where SSR takes place. I removed Angular Universal and the electron module not found error is gone. My guess is that when the run time is Node then electron is bundled?Using a bundle analyzer I don’t see electron included, which is desirable but my bundle sizes are still too large.", "username": "Jesse_Beckton" }, { "code": "", "text": "I know this posting was a while ago, but I seem to have a similar issue with“realm-web”: “^2.0.0”I’m trying to migrate from Stitch to Realm on a web app using Angular written in TS.when I include it (i.e import * as Realm from ‘realm-web’; ) in an angular service, I can’t build the app any more. I get the following error.ERROR in …/node_modules/realm-web/dist/bundle.d.ts:307:48 - error TS2304: Cannot find name ‘Omit’.Any help would be appreciated.Thanks,", "username": "Michael_Bass" } ]
Angular and Realm Web SDK issues
2022-01-15T18:38:43.195Z
Angular and Realm Web SDK issues
3,087
null
[]
[ { "code": "", "text": "We are introducing CSFLE in our applications. We are currently on version 4.0 and plan to move to version 4.2 (standalone/replica-set) for taking the advantage of CSFLE feature.\nThis question is on data migration. The data is currently not encrypted. How do we migrate the data - with CSFLE enabled - so that the data is migrated live and is encrypted based on the json encryption schemas provided?\nIs there a tool available? How do we enable CSFLE on the destination mongodb instance and configure the json schemas?\nIs there any live migration service available? Need urgent guidance on this.Thanks,\nAnu", "username": "Anu_Madan" }, { "code": "", "text": "We are having the same issue. Could you find a solution @Anu_Madan ?", "username": "Carlos_Villanueva" }, { "code": "{myField: {$not: {$type: \"binData\"}}}\n", "text": "@Carlos_Villanueva I have just looked into this and these steps seem to work for retroactively CSFLEing data that was not previously CSFLE’d, using explicit encryption:which should return the non-CSFLE fields that need CSFLEing\n4. Update the field(s) using the clientEncryption approach in the linked page\n5. The fields are now CSFLE’d", "username": "paltaie" }, { "code": "", "text": "I know this is a long time ago, but this resource is helpful: https://www.mongodb.com/docs/manual/tutorial/configure-encryption/#std-label-encrypt-existing-dataI think you’d have to use the Mongo Cloud console to shut down your secondary replica, delete the database data, and re-start it with encryption and replication settings set properly so that it syncs with the primary but with encryption.", "username": "Angus_Ryer" }, { "code": "", "text": "I believe the link you supplied is for how to enable encryption at rest, but the original question was about client-side field level encryption (CSFLE)", "username": "paltaie" } ]
Data Migration with CSFLE enabled
2020-10-13T06:15:31.749Z
Data Migration with CSFLE enabled
1,674
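The retroactive-encryption steps listed above can be sketched in Node.js with explicit encryption; the namespace, the field name (ssn), the key id, and the KMS configuration are placeholders, the ClientEncryption setup is abbreviated, and in older drivers ClientEncryption is imported from the mongodb-client-encryption package rather than mongodb.

```javascript
const { MongoClient, ClientEncryption } = require("mongodb");

async function encryptExistingField(uri, keyVaultNamespace, kmsProviders, keyId) {
  const client = new MongoClient(uri);
  await client.connect();
  const encryption = new ClientEncryption(client, { keyVaultNamespace, kmsProviders });
  const coll = client.db("mydb").collection("mycoll");

  // Only documents whose field is not already ciphertext (binData).
  const cursor = coll.find({ ssn: { $exists: true, $not: { $type: "binData" } } });

  for await (const doc of cursor) {
    const ciphertext = await encryption.encrypt(doc.ssn, {
      keyId,
      algorithm: "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic",
    });
    await coll.updateOne({ _id: doc._id }, { $set: { ssn: ciphertext } });
  }

  await client.close();
}
```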
null
[ "python", "atlas-search" ]
[ { "code": "collection.create_search_indexsearch_index_model = operations.SearchIndexModel(\n definition={\n \"mappings\": {\n \"dynamic\": True,\n \"fields\": {\n \"values\": {\n \"type\": \"knnVector\",\n \"dimensions\": 768,\n \"similarity\": \"cosine\",\n }\n },\n },\n },\n name=\"embeddings2\",\n)\n\ncollection.create_search_index(model=search_index_model)\nOperationFailure: command not found", "text": "I’m doing some testing with the Mongo Atlas free tier. The free tier appears to only support version 6. The pymongo docs suggest that Atlas 7.0+ is required to call collection.create_search_index. However, since I am able to create the search index on the web console using the JSON editor, I just wanted to confirm if this is still the case.I’ve tried the following:But it fails with OperationFailure: command not found.", "username": "Matthew_Twomey" }, { "code": "mongosh", "text": "Hi @Matthew_Twomey - Welcome to the community.However, since I am able to create the search index on the web console using the JSON editor, I just wanted to confirm if this is still the case.As per the 7.0 release notes (specific to Atlas Search index management):Starting in MongoDB 7.0, you can manage Atlas Search indexes with mongosh methods and database commands. Atlas Search index commands are only available for deployments hosted on MongoDB Atlas, and require an Atlas cluster tier of at least M10I don’t know the exact reason but my best guess is that the create search index database command was more recently introduced (in 7.0) and M0 tier clusters are currently on 6.0.11.You can continue creating the search index from the UI for the free tier at this stage.Hope this answers your question / clears things up.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "That does help, thank you for confirming.", "username": "Matthew_Twomey" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Possible to create search index on free tier w/pymongo?
2023-11-02T15:48:20.005Z
Possible to create search index on free tier w/pymongo?
142
null
[ "node-js" ]
[ { "code": "", "text": "The instructions for connecting to MongoDB via X.509 are out of date. They use the deprecated options sslKey and sslCert. They have been replaced by tlsCertificateKeyFile.Took me a day to figure that out.", "username": "Zac_Twidale" }, { "code": "", "text": "Hey @Zac_Twidale,Can you share the page you found that was out of date? I believe https://www.mongodb.com/docs/drivers/node/current/fundamentals/authentication/mechanisms/#x.509 is the current page.", "username": "alexbevi" }, { "code": "", "text": "Screenshot 2023-11-03 0616181920×976 105 KBYou are correct. Inside the documentation it is correct. Now that I know where to look and what I am looking for. The screenshot comes from the more user friendly dashboard that supposedly makes the basics easier.", "username": "Zac_Twidale" }, { "code": "", "text": "Thanks for pointing that out @Zac_Twidale. We’ll get the docs updated so as to avoid this confusion.", "username": "alexbevi" }, { "code": "sslKeysslCertconst { MongoClient } = require('mongodb');\n\nconst credentials = '<path_to_certificate>'\n\nconst client = new MongoClient('mongodb+srv://xxx.yyy.mongodb.net/?authSource=%24external&authMechanism=MONGODB-X509&retryWrites=true&w=majority', {\n tlsCertificateKeyFile: credentials\n});\n\nasync function run() {\n try {\n await client.connect();\n const database = client.db(\"testDB\");\n const collection = database.collection(\"testCol\");\n const docCount = await collection.countDocuments({});\n console.log(docCount);\n // perform actions using client\n } finally {\n // Ensures that the client will close when you finish/error\n await client.close();\n }\n}\nrun().catch(console.dir);\n", "text": "@Zac_Twidale just to follow up we’re working to get the example in the connect modal adjusted. The sslKey and sslCert options were deprecated in v5 of the driver then removed in v6 (NODE-5376).If anyone else comes across this post in the meantime, the correct sample code would be:", "username": "alexbevi" } ]
NodeJS driver instructions are deprecated for connection via X509
2023-11-02T08:18:51.338Z
NodeJS driver instructions are deprecated for connection via X509
138
null
[ "aggregation", "indexes" ]
[ { "code": "[\n {\n $match:\n /**\n * query: The query in MQL.\n */\n {\n $and: [\n {\n /* GLOBAL criteria */\n $and: [\n {\n location: {\n $geoWithin: {\n $geometry: {\n type: \"Polygon\",\n coordinates: [\n [\n [-85, 27],\n [-85, 30],\n [-89, 30],\n [-89, 27],\n [-85, 27],\n ],\n ],\n crs: {\n type: \"name\",\n properties: {\n name: \"urn:x-mongodb:crs:strictwinding:EPSG:4326\",\n },\n },\n },\n },\n },\n },\n {\n deleted: false,\n },\n {\n deleted: {\n $ne: null,\n },\n },\n ],\n },\n {\n $or: [\n {\n /* layer 1 */\n $and: [\n {\n location: {\n $geoWithin: {\n $geometry: {\n type: \"Polygon\",\n coordinates: [\n [\n [-86, 27],\n [-86, 28],\n [-87, 28],\n [-87, 27],\n [-86, 27],\n ],\n ],\n crs: {\n type: \"name\",\n properties: {\n name: \"urn:x-mongodb:crs:strictwinding:EPSG:4326\",\n },\n },\n },\n },\n },\n },\n {\n carType: \"TAXI\",\n },\n {\n onACall: true,\n },\n {\n onACall: {\n $ne: null,\n },\n },\n ],\n },\n ],\n },\n ],\n },\n },\n]\n", "text": "I have a system that needs to perform many different kinds of searches, including pagination, so we’re using aggregation queries to get counts and limit results. The same processes are used for single queries and multi-layer queries.EG:\nI have simple-query A which may or may not have a geolocation limitation\nI have simple-query B which may or may not have a geolocation limitation\nI have a multi-layer query that can contain query A, query B, or both. It also has it’s own geolocation limitation, and a couple of other default criteria.\nWe use a 2dsphere index for the geolocation limitation because it dramatically improves performance, however, when my multi-layer query contains two geolocation limits (one within the other) and no other non-geolocation criteria, it throws an error: “Intersection requires two index intervals”\nThe same query works perfectly without the index.aggregation query:", "username": "Serena_Cassell" }, { "code": "db.collection.getIndexes()", "text": "Hey @Serena_Cassell,Welcome to the MongoDB Community!I have a multi-layer query that can contain query A, query B, or both. It also has it’s own geolocation limitation and a couple of other default criteria.May I ask what you mean by a multi-layer query? 
Could you please share an example query?I have a system that needs to perform many different kinds of searches, including pagination, so we’re using aggregation queries to get counts and limit resultsHowever to better understand the requirements, could you please share some additional information that will help the community assist you better?Regards\nKushagra", "username": "Kushagra_Kesav" }, { "code": "[\n { v: 2, key: { _id: 1 }, name: '_id_' },\n {\n v: 2,\n key: { location: '2dsphere' },\n name: 'location_2dsphere',\n '2dsphereIndexVersion': 3\n }\n]\nvehicleId,_class,location.type,location.coordinates[0],location.coordinates[1],carType,onACall,deleted,vehicleName\n429c19eb-9e45-4755-bdc0-7dfa0149f48d,ca.app.vehicle.VehicleWrapper,Point,-86.75,27.75,TAXI,TRUE,FALSE,TAXI1\nc716ccff-14f8-4443-89e1-9e318310796c,ca.app.vehicle.VehicleWrapper,Point,-87.78,28.23,STARSHIP,,TRUE,STARSHIP1\n0f7d95b1-b92b-4ec3-a897-ec0b0fc9aded,ca.app.vehicle.VehicleWrapper,Point,-87.75,28.25,STARSHIP,,FALSE,STARSHIP2\n47c9b950-9c38-4261-bfd7-8e92db389ecd,ca.app.vehicle.VehicleWrapper,Point,-87.25,28.25,PERSONAL,,FALSE,PERSONAL123\nd256ad4e-7f02-4320-b6f9-ca4461b80e3e,ca.app.vehicle.VehicleWrapper,Point,-85.25,27.25,TAXI,TRUE,FALSE,TAXI2319\n36400547-b79c-4a99-823c-289139e092bd,ca.app.vehicle.VehicleWrapper,Point,-88.5,27.5,STARSHIP,,FALSE,STARSHIP3\n3542f1c6-2b10-42e9-a873-71200bfc0e26,ca.app.vehicle.VehicleWrapper,Point,-86.25,29.25,PERSONAL,,FALSE,PERSONAL24\n2ef81027-7b9e-47d6-b264-de8e67b7fcf1,ca.app.vehicle.VehicleWrapper,Point,-86.25,27.25,TAXI,FALSE,FALSE,TAXI5\n79247aac-54fc-4d24-8a17-1494d2c2a535,ca.app.vehicle.VehicleWrapper,Point,-89.25,28.5,STARSHIP,,FALSE,STARSHIP7\n[\n {\n $match:\n {\n $and: [\n {\n /* GLOBAL criteria */\n $and: [\n {\n location: {\n $geoWithin: {\n $geometry: {\n type: \"Polygon\",\n coordinates: [\n [\n [-85, 27],\n [-85, 30],\n [-89, 30],\n [-89, 27],\n [-85, 27],\n ],\n ],\n crs: {\n type: \"name\",\n properties: {\n name: \"urn:x-mongodb:crs:strictwinding:EPSG:4326\",\n },\n },\n },\n },\n },\n },\n {\n deleted: false,\n },\n {\n deleted: {\n $ne: null,\n },\n },\n ],\n },\n {\n $or: [\n {\n /* layer 1 */\n $and: [\n {\n location: {\n $geoWithin: {\n $geometry: {\n type: \"Polygon\",\n coordinates: [\n [\n [-86, 27],\n [-86, 28],\n [-87, 28],\n [-87, 27],\n [-86, 27],\n ],\n ],\n crs: {\n type: \"name\",\n properties: {\n name: \"urn:x-mongodb:crs:strictwinding:EPSG:4326\",\n },\n },\n },\n },\n },\n },\n {\n carType: \"TAXI\",\n },\n {\n onACall: true,\n },\n {\n onACall: {\n $ne: null,\n },\n },\n ],\n },\n {\n /* layer 2 */\n $and: [\n {\n carType: \"STARSHIP\",\n },\n ],\n },\n ],\n },\n ],\n },\n },\n]\n", "text": "@Kushagra_Kesav thank you!\nI wish I could attach a ZIP file, as I have all the information including document examples, query examples, and steps to reproduce the issue.The version of MongoDB is 7.0.1This is the results of getIndexes():This is the document data in CSV format for the example I’ve created:The aggregation query in the post works without the 2dsphere index and fails with the 2dsphere index.An example of another query (that DOES work) that would be generated by the same system/process that creates the broken query is this:", "username": "Serena_Cassell" }, { "code": "$and: [\n { /* global criteria, including a geospatial area */ },\n { $or: [\n { /* sub query 1 */ },\n /* ... repeat for as many queries as there are to combine ... */\n { /* sub query x */ },\n ]}\n]\n", "text": "May I ask what you mean by a multi-layer query? 
Could you please share an example query?A multi-layer query in this case is a “query of queries”. Multiple queries are built from pre-defined user criteria and assembled into a larger query, which is executed and returns a specific page of data.Basically, in the $match aggregation, we have this structure:The global criteria ALWAYS includes a geospatial area, and the subqueries MAY or MAY NOT contain geospatial areas.\nI’ve noticed that as long as one subquery does not contain a geospatial area, this pattern works, but if all of the subqueries have a geospatial area, then the query fails due to the 2dsphere index throwing an “Intersection requires two index intervals” error.", "username": "Serena_Cassell" } ]
2dsphere index throws error
2023-11-02T14:17:34.704Z
2dsphere index throws error
139
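Note on the thread above: the failure hinges on how the planner combines the $geoWithin predicates through the nested $or once the 2dsphere index is available. A minimal sketch for reproducing and inspecting this in mongosh; the collection name "vehicles" is an assumption, and the pipeline placeholder stands for the $match stage quoted in the posts:

```javascript
// Recreate the index the poster describes (collection name assumed).
db.vehicles.createIndex({ location: "2dsphere" });

const pipeline = [ /* paste the $match stage with the nested $or of $geoWithin clauses */ ];

// Ask the planner what it would do, and whether the
// "Intersection requires two index intervals" error reproduces:
db.vehicles.explain("queryPlanner").aggregate(pipeline);

// To check whether bypassing the geo index sidesteps the error,
// force a different index for comparison:
db.vehicles.aggregate(pipeline, { hint: { _id: 1 } });
```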
https://www.mongodb.com/…8_2_1024x393.png
[]
[ { "code": "atlas auth logout", "text": "Hi!In the lesson 2 lab “Creating and Deploying an Atlas Cluster” I was able to successfully authenticate with the verification code from the terminal.However, after the “Successfully logged in as {Your Email Address}” message the terminal displays the following error messages:Screenshot from 2023-10-31 12-33-011146×440 86.6 KBThere is no prompt to select a project after authentication.I’ve used atlas auth logout to log out and log in again but the issue persists.", "username": "Ben_Polgardy" }, { "code": "", "text": "Hi Ben,This is probably an error with the environment instance. I suggest to leave the lab and restart it.If that doesn’t work, please reach out to the MongoDB University team by email: learn@mongodb.com", "username": "Davenson_Lombard" } ]
Lesson 2 Lab: Creating and Deploying an Atlas Cluster fatal error
2023-10-31T11:35:53.608Z
Lesson 2 Lab: Creating and Deploying an Atlas Cluster fatal error
165
null
[ "node-js", "replication" ]
[ { "code": "MongoServerError: not primary and secondaryOk=false\nconst { MongoClient } = require('mongodb');\n\n// The MongoDB URI for the secondary member in the replica set\nconst uri = 'mongodb://ak:nomercy@localhost:27018/?authSource=admin&directConnection=true';\nconst client = new MongoClient(uri);\n\nasync function connect() {\n try {\n await client.connect();\n console.log('Connected to MongoDB');\n\n const db = client.db('ak');\n const { filters } = await db.command({ planCacheListFilters: 'test' });\n filters.forEach(function (plan) {\n console.log(plan);\n });\n } catch (err) {\n console.error('Error connecting to MongoDB:', err);\n } finally {\n await client.close();\n }\n}\n\nconnect();\nMongo version: 4.4.24\nNode JS version: v20.5.1\nMongo driver version: 6.0.0\n", "text": "I have set up a MongoDB replica set with three nodes in my local environment. The replica set consists of one primary node and two secondary nodes. I’m using Node.js to connect to one of the secondary members in the replica set.However, when I attempt to run the planCacheListFilters command on the secondary member, I encounter the following error:This error suggests that the command is unable to run on a secondary node because it requires a primary node, and the secondaryOk option is set to false.I would like to understand how to properly configure my MongoDB connection or the planCacheListFilters command to execute successfully on a secondary node. Additionally, if there are any specific considerations or permissions required for running this command on secondary members, I would appreciate guidance on that.", "username": "Ajithkumar_N" }, { "code": "", "text": "Do pass all 3 ReplicaSet Member uri in deployment stringconst uri = ‘mongodb://ak:nomercy@localhost:27017,localhost:27018,localhost:27019/authSource=admin&directConnection=true’;It will switch between primary and secondary nodes", "username": "Bhavya_Bhatt" }, { "code": "readPreference=secondary", "text": "Set the readPreference=secondary on the connection string.", "username": "chris" }, { "code": "/home/nomercy/project/NodeJS/node_modules/mongodb/lib/connection_string.js:358\n throw new error_1.MongoParseError('directConnection option requires exactly one host');\n ^\n\nMongoParseError: directConnection option requires exactly one host\n at parseOptions (/home/nomercy/project/NodeJS/node_modules/mongodb/lib/connection_string.js:358:15)\n at new MongoClient (/home/nomercy/project/NodeJS/node_modules/mongodb/lib/mongo_client.js:51:63)\n at Object.<anonymous> (/home/nomercy/project/NodeJS/test/test.js:5:16)\n at Module._compile (node:internal/modules/cjs/loader:1198:14)\n at Object.Module._extensions..js (node:internal/modules/cjs/loader:1252:10)\n at Module.load (node:internal/modules/cjs/loader:1076:32)\n at Function.Module._load (node:internal/modules/cjs/loader:911:12)\n at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)\n at node:internal/main/run_main_module:22:47 {\n [Symbol(errorLabels)]: Set(0) {}\n}\n", "text": "I followed the solution you provided, but I encountered the following error:It seems I’m encountering a “MongoParseError” with the message “directConnection option requires exactly one host.” Could you please provide additional guidance on resolving this error?", "username": "Ajithkumar_N" }, { "code": "readPreference=secondaryconst { MongoClient } = require('mongodb');\n\n// The MongoDB URI for the secondary member in the replica set\nconst uri = 
'mongodb://ak:nomercy@localhost:27018/?authSource=admin&directConnection=true&readPreference=secondary';\nconst client = new MongoClient(uri);\n\nasync function connect() {\n try {\n await client.connect();\n console.log('Connected to MongoDB');\n\n const db = client.db('ak');\n const { filters } = await db.command({ planCacheListFilters: 'test' });\n filters.forEach(function (plan) {\n console.log(plan);\n });\n } catch (err) {\n console.error('Error connecting to MongoDB:', err);\n } finally {\n await client.close();\n }\n}\n\nconnect();\nConnected to MongoDB\nError connecting to MongoDB: MongoServerError: not primary and secondaryOk=false\n at Connection.onMessage (/home/nomercy/project/NodeJS/node_modules/mongodb/lib/cmap/connection.js:202:26)\n at MessageStream.<anonymous> (/home/nomercy/project/NodeJS/node_modules/mongodb/lib/cmap/connection.js:61:60)\n at MessageStream.emit (node:events:513:28)\n at processIncomingData (/home/nomercy/project/NodeJS/node_modules/mongodb/lib/cmap/message_stream.js:124:16)\n at MessageStream._write (/home/nomercy/project/NodeJS/node_modules/mongodb/lib/cmap/message_stream.js:33:9)\n at writeOrBuffer (node:internal/streams/writable:391:12)\n at _write (node:internal/streams/writable:332:10)\n at MessageStream.Writable.write (node:internal/streams/writable:336:10)\n at Socket.ondata (node:internal/streams/readable:754:22)\n at Socket.emit (node:events:513:28) {\n topologyVersion: { processId: new ObjectId(\"6503c85cdf6acceaed8673ac\"), counter: 4 },\n operationTime: new Timestamp({ t: 1695004210, i: 5 }),\n ok: 0,\n code: 13435,\n codeName: 'NotPrimaryNoSecondaryOk',\n '$clusterTime': {\n clusterTime: new Timestamp({ t: 1695004210, i: 5 }),\n signature: {\n hash: Binary.createFromBase64(\"3nLFi7iebteJc+qLBXqQWP+hiCQ=\", 0),\n keyId: new Long(\"7253318057697738757\")\n }\n },\n [Symbol(errorLabels)]: Set(0) {}\n}\n", "text": "Even when I use the secondary read preference (readPreference=secondary), I still encounter the same issue:Despite setting the secondary read preference, I still face the “not primary and secondaryOk=false” error. I’m seeking further assistance to resolve this issue.", "username": "Ajithkumar_N" }, { "code": "", "text": "Could this be an error bounced up from the secondary as it’s not enable for direct reads? How is the secondary configured? 
Note the error says the error comes from the server and not the driver etc.I’ve not done this before but there are some google / SO hits that suggest others had this issue and it was a secondary config issue.", "username": "John_Sewell" }, { "code": "\n \"members\" : [\n {\n \"_id\" : 0,\n \"name\" : \"192.168.168.188:27017\",\n \"health\" : 1,\n \"state\" : 1,\n \"stateStr\" : \"PRIMARY\",\n \"uptime\" : 858392,\n \"optime\" : {\n \"ts\" : Timestamp(1695040891, 1),\n \"t\" : NumberLong(32)\n },\n \"optimeDate\" : ISODate(\"2023-09-18T12:41:31Z\"),\n \"lastAppliedWallTime\" : ISODate(\"2023-09-18T12:41:31.644Z\"),\n \"lastDurableWallTime\" : ISODate(\"2023-09-18T12:41:31.644Z\"),\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"\",\n \"electionTime\" : Timestamp(1694746727, 1),\n \"electionDate\" : ISODate(\"2023-09-15T02:58:47Z\"),\n \"configVersion\" : 59842,\n \"configTerm\" : 32,\n \"self\" : true,\n \"lastHeartbeatMessage\" : \"\"\n },\n {\n \"_id\" : 1,\n \"name\" : \"192.168.168.188:27018\",\n \"health\" : 1,\n \"state\" : 2,\n \"stateStr\" : \"SECONDARY\",\n \"uptime\" : 294169,\n \"optime\" : {\n \"ts\" : Timestamp(1695040891, 1),\n \"t\" : NumberLong(32)\n },\n \"optimeDurable\" : {\n \"ts\" : Timestamp(1695040891, 1),\n \"t\" : NumberLong(32)\n },\n \"optimeDate\" : ISODate(\"2023-09-18T12:41:31Z\"),\n \"optimeDurableDate\" : ISODate(\"2023-09-18T12:41:31Z\"),\n \"lastAppliedWallTime\" : ISODate(\"2023-09-18T12:41:31.644Z\"),\n \"lastDurableWallTime\" : ISODate(\"2023-09-18T12:41:31.644Z\"),\n \"lastHeartbeat\" : ISODate(\"2023-09-18T12:41:33.623Z\"),\n \"lastHeartbeatRecv\" : ISODate(\"2023-09-18T12:41:33.617Z\"),\n \"pingMs\" : NumberLong(1),\n \"lastHeartbeatMessage\" : \"\",\n \"syncSourceHost\" : \"192.168.168.188:27017\",\n \"syncSourceId\" : 0,\n \"infoMessage\" : \"\",\n \"configVersion\" : 59842,\n \"configTerm\" : 32\n }\n ],\n \"members\" : [\n {\n \"_id\" : 0,\n \"host\" : \"192.168.168.188:27017\",\n \"arbiterOnly\" : false,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 2,\n \"tags\" : {\n \n },\n \"slaveDelay\" : NumberLong(0),\n \"votes\" : 1\n },\n {\n \"_id\" : 1,\n \"host\" : \"192.168.168.188:27018\",\n \"arbiterOnly\" : false,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 1,\n \"tags\" : {\n \n },\n \"slaveDelay\" : NumberLong(0),\n \"votes\" : 1\n }\n ],\n", "text": "The MongoDB server and Node.js code are indeed on the same machine, and I’ve set up a two-node replica set for testing purposes. Here’s the configuration details:Primary Node (Node 1): This node is running on port 27017 and is the primary in the replica set.Secondary Node (Node 2): This node is running on port 27018 and serves as one of the secondary nodes in the replica set.In this setup, I am trying to connect to the secondary node (Node 2) using the Node.js code. Despite setting the secondary read preference, I encountered the “not primary and secondaryOk=false” error. 
Below i have attached the rs.conf and rs.status output.", "username": "Ajithkumar_N" }, { "code": "", "text": "Hi, Any update on this?", "username": "Ajithkumar_N" }, { "code": "const { filters } = await db.command({ planCacheListFilters: 'test' });\n", "text": "I am encountering a problem when executing the following command:Interestingly, when I run any ‘find’ or ‘aggregate’ query with ‘readPreference,’ it works as expected.\"is there any limitation on the driver level?", "username": "Ajithkumar_N" }, { "code": "const { filters } = await db.command({ planCacheListFilters: 'test', readPreference: 'secondary' });\nconst { filters } = await db.command({ planCacheListFilters: 'test' }, { readPreference: 'secondary' });\n", "text": "I wanted to share an important observation related to a previous issue I mentioned. When using the following command:I noticed that the MongoDB driver did not throw any errors, such as syntax errors, even though the command did not behave as expected. This behaviour could potentially be misleading, as it may appear that the command is valid.After further investigation, I found that to ensure the desired behaviour, it’s necessary to specify the ‘readPreference’ both in the connection string and in the command itself, as shown below:", "username": "Ajithkumar_N" }, { "code": "readPreference=secondary", "text": "It’s worked for me, after adding the flag readPreference=secondary in the database connection.", "username": "Vijay_Vavdiya" } ]
MongoServerError: not primary and secondaryOk=false
2023-09-17T06:12:47.205Z
MongoServerError: not primary and secondaryOk=false
2,159
null
[ "aggregation" ]
[ { "code": " {\n \"$match\": {\n \"_id\": new ObjectID(\"651aff11fa62127802083116\")\n }\n }, \n {\n \"$project\": {\n \"colName\": \"col\",\n \"constantId\": \"$meta.constantId\"\n }\n }, \n {\n \"$lookup\": {\n \"from\": \"$colName\", // doesn't work\n \"from\": \"col\", // work\n \"localField\": \"constantId\",\n \"foreignField\": \"facilityConstantId\",\n \"as\": \"array\"\n }\n }\n", "text": "Hi, I try to use in aggreagation lookup stage “from”: “$colName”, but it doesn’t work\nAny suggestion will be help …", "username": "Nurlan_Kazdayev" }, { "code": "$project\"col\"$lookup", "text": "Hi @Nurlan_Kazdayev,Whats the use case details / requirements of passing that from the $project value in as opposed to just using \"col\" for the single $lookup stage you provided?Look forward to hearing from you.Regards,\nJason", "username": "Jason_Tran" } ]
Lookup stage "from": "$colName", but it doesn't work
2023-10-31T04:26:21.731Z
Lookup stage &ldquo;from&rdquo;: &ldquo;$colName&rdquo;, but it doesn&rsquo;t work
169
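For readers landing on the thread above: $lookup's "from" takes a literal collection name, not an expression or field path, which is why "$colName" is never resolved. A hedged sketch of the usual two-step workaround, resolving the name in application code and then building the pipeline with the literal string; the "source" collection name and the connected `db` handle are assumptions:

```javascript
const { ObjectId } = require("mongodb"); // assumes a connected `db` handle exists elsewhere

// Step 1: read whatever determines which collection to join against.
const doc = await db.collection("source").findOne(
  { _id: new ObjectId("651aff11fa62127802083116") },
  { projection: { "meta.constantId": 1 } }
);

// Step 2: build the pipeline with the name as a plain string.
const targetCollection = "col"; // resolved or derived in application code
const result = await db.collection("source").aggregate([
  { $match: { _id: doc._id } },
  { $project: { constantId: "$meta.constantId" } },
  {
    $lookup: {
      from: targetCollection, // must be a literal string, not "$colName"
      localField: "constantId",
      foreignField: "facilityConstantId",
      as: "array",
    },
  },
]).toArray();
```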
null
[ "queries", "indexes" ]
[ { "code": "{\n \"type\": \"command\",\n \"ns\": \"profiles.profiles\",\n \"command\": {\n \"find\": \"profiles\",\n \"filter\": {\n \"wallets.network\": {\n \"$exists\": true,\n \"$eq\": \"ETHEREUM\"\n },\n \"wallets.address\": {\n \"$exists\": true,\n \"$eq\": \"0x0000000000000000000000000000000000000000\"\n }\n },\n \"limit\": 1,\n \"singleBatch\": true,\n \"readConcern\": {\n \"level\": \"snapshot\"\n },\n \"lsid\": {\n \"id\": {\n \"$binary\": {\n \"base64\": \"G3/CI549RUmynPncgQoy9Q==\",\n \"subType\": \"04\"\n }\n }\n },\n \"txnNumber\": 47622,\n \"startTransaction\": true,\n \"autocommit\": false,\n \"$clusterTime\": {\n \"clusterTime\": {\n \"$timestamp\": {\n \"t\": 1698949046,\n \"i\": 3\n }\n },\n \"signature\": {\n \"hash\": {\n \"$binary\": {\n \"base64\": \"BnzSwRPGSbjI5v8SFjck/2kQ4AA=\",\n \"subType\": \"00\"\n }\n },\n \"keyId\": 7237555922858410000\n }\n },\n \"$db\": \"profiles\",\n \"$readPreference\": {\n \"mode\": \"primary\"\n }\n },\n \"planSummary\": \"IXSCAN { wallets.network: 1, wallets.address: 1 }\",\n \"planningTimeMicros\": 117,\n \"keysExamined\": 16402,\n \"docsExamined\": 14633,\n \"fromPlanCache\": true,\n \"nBatches\": 1,\n \"cursorExhausted\": true,\n \"numYields\": 0,\n \"nreturned\": 0,\n \"queryHash\": \"0D3949B0\",\n \"planCacheKey\": \"2E2ECD8F\",\n \"queryFramework\": \"sbe\",\n \"reslen\": 231,\n \"locks\": {},\n \"readConcern\": {\n \"level\": \"snapshot\",\n \"provenance\": \"clientSupplied\"\n },\n \"storage\": {},\n \"cpuNanos\": 52353796,\n \"remote\": \"3.134.126.8:2830\",\n \"protocol\": \"op_msg\",\n \"durationMillis\": 105,\n \"v\": \"7.0.2\"\n}\n", "text": "I’m trying to understand more on the slow query data log. In some of the slow queries, I saw some queries has both high “keys examined” & “docs examples” & “used indexes = true”. Does it suggest an efficient index-based query?Here is an example of the query we have:", "username": "Yu_Zhang" }, { "code": "", "text": "The difference between keys examined and doc examined is very small, so i think the used index is ok. Otherwise if the index is not good, then doc examined should be much higher.", "username": "Kobe_W" } ]
Is it an efficient query when both keys examined & docs examined is high?
2023-11-02T18:46:19.444Z
Is it an efficient query when both keys examined &amp; docs examined is high?
117
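One way to judge whether 16k keys and 14k documents examined for zero returned documents is expected is to run the same filter through explain. The namespace and filter below are copied from the slow-query entry; mongosh is assumed:

```javascript
db.getSiblingDB("profiles").getCollection("profiles").find({
  "wallets.network": { $exists: true, $eq: "ETHEREUM" },
  "wallets.address": { $exists: true, $eq: "0x0000000000000000000000000000000000000000" }
}).limit(1).explain("executionStats");

// Compare executionStats.totalKeysExamined / totalDocsExamined with nReturned.
// With a compound multikey index such as { "wallets.network": 1, "wallets.address": 1 },
// the planner often cannot apply tight bounds to both array-derived fields at once
// unless the predicates target the same array element, which is a common reason for
// examined counts far above nReturned.
```

If the index bounds in that output confirm the loose second field, rewriting the filter with $elemMatch on the wallets array may be worth testing, assuming wallets is indeed an array of subdocuments.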
null
[ "replication", "sharding" ]
[ { "code": "", "text": "I have seen this issue posted before, but my setup is Sharded Replicasets. I think I have found why this is the case but just to put it here in case anyone has some input.Database has about 4.5 billion documents all around 4K in size.\nI have TTL set to 8 days.\nCluster is 16 shards.\nEach shard is a PSA replicaset.\nAll Primary and Secondary have their own seperate Nodes, P/S split across 2 DC.\nAll Arbitors are located on a single node in third DC.\nEach node is a VM with 8 CPU, 64GB MEM, 1TB DISKBulk Write I am pushing in 20K/s documents.However the TTL deletion is lucky to be getting a few hundred per sec. I am out to about 20 days retention cause deletion just is so slow, It’s like they are not even moving.\nI can see in the logs a lazy handful of REPL CRUD “d” on the $oid which I assume is the Replica getting individual delete events for TTL.\nThe IOWait is up to about 20% due to deletion and is affecting the inserts.\nI believe the issue might be the operation and replication of the TTL deletes are one by one which is way beond the capability of the DB for the incoming rate.I have looked at Partitioning but it is not really existent even though it comes up in serches, Partitioning seems to be equated to Sharding, But this really is a different strategy.I am thinking pperhaps I create a manual partitioning strategy into multiple collection names representing 4 hour periods of time, Then I can calculate the collections to insert/query in middleware based on the document date/time and have the old collections dropped at expire. This seems real clunky, Surely a proper Partitioning solution would be great to add as native.Sharding + Partitioning would have so many benefits.What’s annoying is there doesn’t appear to be a fast method for dumping data from a collection and keeping the collection and index definitions in tact, Like Truncate. Even deleteMany performs poorly.Capped collections are not suitable as they require unique IDs and I am unable to guarantee this due to multiple inserters and the frequency of data.Regards\nJohn", "username": "John_Torr" }, { "code": "mongodmongodserverStatusttl.deletedDocuments.wtdbPathcompact", "text": "Hi @John_Torr,Welcome to the MongoDB community forums There is a background single-thread in mongod that reads the values in the index and removes expired documents from the collection. The background task that removes expired documents runs every 60 seconds. And as a result, MongoDB starts deleting documents 0 to 60 seconds after the index completes.Because the duration of the removal operation depends on the workload of your mongod instance, expired data may exist for some time beyond the 60 second period between runs of the background task.You can check the documents which are deleted in the serverStatus command’s ttl.deletedDocuments fields.Whereas in replica set members, the TTL background thread only deletes documents when a member is in the state primary. The TTL background thread is idle when a member is in secondary state. These deletions are then replicated to any secondary replica set members. That is, secondaries depend on TTL deletes being replicated from the primary instead of having their own TTL process. This is to keep consistency between replica set members. 
To read more about it, please refer to the documentation link.I believe the issue might be the operation and replication of the TTL deletes are one by one which is way beyond the capability of the DB for the incoming rate.It’s likely that the reason for “beyond the capability of the DB” is related more to hardware/network performance rather than a constraint within MongoDB itself.Bulk Write I am pushing in 20K/s documents.Here it is important to note that the TTL index needs to keep up with the rate of incoming data. I think the rate at which new items are inserted must be the same or slower than the rate at which items are being deleted otherwise it will never be able to catch up. It’s also worth noting that insert operations take priority over TTL indexes.To resolve this issue, you could consider upgrading your hardware to one that is capable of handling 20K/s inserts and also 20K/s deletes simultaneously (although this is just in general terms and there are likely more nuanced considerations besides these numbers). Alternatively, if you don’t necessarily need the TTL index, you could opt for a batch operation where you identify and delete batches of data after a specificed period of time. However, please note that the rate of data being deleted must exceed the rate of new data being inserted, otherwise it could place constraints on your hardware.I am out to about 20 days of retention cause deletion just is so slow, It’s like they are not even moving.Correspondingly with any delete operation in MongoDB, blocks of bytes inside existing .wt files in the dbPath are marked as free, but the file itself is not automatically trimmed. So, after a new TTL index deletes a large number of documents, consider resyncing replica set member nodes, or trying compact.I hope it helps!Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Thanks for the response Kushagra,I have been assessing the DB for a week to make sure my observations are justified. I just want to clear up a few things from the response to define the issue a little more.The duration of the deltes are not in the area of minutes here, They are days to weeks where the TTL data should have been cleared out.I am not saying the TTL deletes are not working, They are indeed working and being replicated to the secondaries. However the rate of deletion is in the order of 10’s per second.I had stated the 20K/s inserts to give a general idea of the DB performance, This might have been misleading, It can handle 56K/s inserts without too much strain, but generally is only doing around 16K/s.I have removed the TTL index for the past week and a half and note the IOWait is way down to around 3-4% per node.Also in terms of space, yes I realize the disk space is not being relased, The issue was with the data in the collection not being deleted fast enough.I still believe the TTL implementation is at issue here, It is probably not suitable for big data.I consider the DB to be fairly well provisioned @ 16 shards, 256 CPU cores, 1TB RAM, Enterprise SSD SAN storage over 10GB interfaces.", "username": "John_Torr" }, { "code": "", "text": "Update:\nI dumped my DB and re-initiated it without the TTL Index. It ran for 11 days before filling 16TB. Totaling around 30 billion documets at 4KB each, Given the DB gets compressed to disk. This equates to around 200MB per second ingress documents to the DB.In this time the IOWait remained around 3-4% and there were no issues with performance. 
Actually the performance remained great the whole time, even up until I have to dump it again for filling up.As compared to previously when the TTL index kicked in and the IOWait then remained between 15-20% and insert performance was impacted.Question: Does TTL delete records one by one or as a batch operation?Next steps: I plan to partition the DB data over multiple collections with the date as a component of the name. Then I will drop the older collections and create new one each day. The scripts for insertion will be modified to use the correct collection and the query will utilize the $unionWith aggregation pipeline.Suggestion: Provide native partitioning across collections based on a date value from the document.", "username": "John_Torr" }, { "code": "", "text": "Hi @John_Torr,I still believe the TTL implementation is at issue here, It is probably unsuitable for big data.Based on my understanding, it is relative and depends on certain factors like how big the data is quantitatively and whether the data being processed is aligned with the hardware specs or not.Although, as the TTL is single-threaded and large batches of deletes in a single period, may overwhelm the cluster’s IOPS capacity on primary or secondary nodes as the changes are also being replicated simultaneously.Question: Does TTL delete records one by one or as a batch operation?From MongoDB 6.1 the TTL deletion happens in batches. Please refer to the SERVER-63040 for more details. There are also some tunable setParameters for batched deletes which may make your TTL deletes efficient but it all depends on your data size.By default MongoDB batches every 10 documents, 2MB or 5 ms. If you have larger documents you may possibly benefit from raising the data size threshold, but that would also be more impactful on the storage engine’s cache.Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "I am using MongoDB 6.0.4, I use it containerized so will need wait until the mongo dockerhub catches up.I am using 120 seperate processes sharing 30 mongos, each doing bulk inserts @ 1000 documents at a time. Each document is arounf 4KB, So I would say I am well over the batch thresholds.My take away here seems to be that the TTL index is not intended for big data. Hence I believe the partitioning will be the way to go. Therfore it makes sense to have this functionality provided as a native mechanism. Otherwise we can’t really classify MongoDB as suitable for big data if it can’t deal with big clean-up.Thanks for the info.", "username": "John_Torr" }, { "code": "", "text": "I have exactly the same issue here I’m using amazon documentDB 5.0 and with a big amount of data (object that represents audit) I have 1,5 millions documents per day and I need to keep one year of data.The query of search is quick but my main issue is the clean up. After one week the delete of one day of data take 160 seconds .I’m not sure if use TTL in 2 or 3TB of data will be efficient , do you think it can ?If no what’s the best solution ?", "username": "Dimitri_Scole" }, { "code": "", "text": "Hi,As I found out that yes indeed the TTL delete does individual delete operations prior to 6.1, So essentially the TTL can’t keep up with inserts on a very heavy load and ends up running you out of disk. 
From 6.1 the TTL deletes were changed to run in batches.I opted to partition my data into hourly collections where I use a function to determine the range of collections the query will need to fetch from and utilize an aggregation pipeline to union the collections.This way I get to drop the old collections at practically zero cost. I also don’t need any indexes for datetimes and only use the Shard Key as the primary key for the data I need to find.", "username": "John_Torr" } ]
TTL Deletes are extremely slow
2023-03-04T03:41:26.604Z
TTL Deletes are extremely slow
2,171
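A minimal sketch of the time-bucketed layout described at the end of the thread; the collection prefix, field names, and bucket size are assumptions. The point is that retention becomes a collection drop instead of millions of per-document TTL deletes:

```javascript
// Derive a bucket (collection) name from the event time, e.g. events_2023110412.
function bucketName(date) {
  return "events_" + date.toISOString().slice(0, 13).replace(/[-T]/g, "");
}

// Writes go to the bucket that matches the document's timestamp:
db.getCollection(bucketName(new Date())).insertOne({ key: "AAA", payload: "data", ts: new Date() });

// Reads union the buckets that cover the requested window:
const from = new Date(Date.now() - 2 * 3600 * 1000);
const next = new Date(from.getTime() + 3600 * 1000);
db.getCollection(bucketName(from)).aggregate([
  { $match: { key: "AAA" } },
  { $unionWith: { coll: bucketName(next), pipeline: [{ $match: { key: "AAA" } }] } },
  // ...one $unionWith per additional bucket in the window
]);

// Retention: dropping an expired bucket is a metadata-only operation.
const expired = new Date(Date.now() - 8 * 24 * 3600 * 1000);
db.getCollection(bucketName(expired)).drop();
```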
null
[ "containers" ]
[ { "code": "version: \"3.8\"\n\nservices:\n app:\n image: lscr.io/linuxserver/unifi-network-application:latest\n container_name: unifi_app\n hostname: unifi_app\n restart: always\n volumes:\n - /etc/localtime:/etc/localtime:ro\n - /docker/unifi_stack/app/conf:/config:rw\n networks:\n macvlan_lan:\n ipv4_address: 10.1.1.1\n backend:\n environment:\n - PUID=1000\n - PGID=1000\n - MONGO_USER=unifi\n - MONGO_PASS=foobar\n - MONGO_HOST=unifi_db\n - MONGO_PORT=27017\n - MONGO_DBNAME=unifi\n\n db:\n image: mongo:3.6\n container_name: unifi_db\n hostname: unifi_db\n restart: always\n volumes:\n - /etc/localtime:/etc/localtime:ro\n - /docker/unifi_stack/db/data:/data/db\n - /docker/unifi_stack/db/conf/init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro\n networks:\n backend:\n\nnetworks:\n backend:\n2023-10-20T16:29:36.757+0200 I NETWORK [initandlisten] waiting for connections on port 27017\n2023-10-20T16:29:46.865+0200 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends\n2023-10-20T16:29:46.866+0200 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...\n2023-10-20T16:29:46.866+0200 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-27017.sock\n2023-10-20T16:29:46.866+0200 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture\n2023-10-20T16:29:46.871+0200 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down\n2023-10-20T16:29:47.018+0200 I STORAGE [signalProcessingThread] shutdown: removing fs lock...\n2023-10-20T16:29:47.018+0200 I CONTROL [signalProcessingThread] now exiting\n2023-10-20T16:29:47.019+0200 I CONTROL [signalProcessingThread] shutting down with code:0\n", "text": "Hi, I created a new docker compose file for my unifi controller after linuxserver.io changed their docker image to not include mongodb.But the mongodb always crashes after a while.I have tested mongodb version 3.6 to 4.0, all with the same result.\nVersion >4.0 seems to need armv8.2.Host:\nPlatform: Raspberry Pi 4\nOS: Ubuntu 23.10 aarch64 (armv8)\nKernel: 6.5.0-1005-raspi\nRAM: 4GB\nDocker: 24.0.6, build ed223bcdocker-compose.yml:init-mongo.js:db.getSiblingDB(“unifi”).createUser({user: “unifi”, pwd: “foobar”, roles: [{role: “dbOwner”, db: “unifi”}, {role: “dbOwner”, db: “unifi_stat”}]});Log:", "username": "f3e2f9c2ca4e5beea43d800ff690f5e" }, { "code": "", "text": "Something is killing mongod. I would look at docker first, its logs and journalctl.mongod is just doing what it is told, to shut down.", "username": "chris" }, { "code": "", "text": "what is signal 15 in linux?", "username": "Kobe_W" }, { "code": "", "text": "As an alternative to running MongoDB 3.x/4.x, if you want to run against a modern version of MongoDB on the Pi, see here.", "username": "Matt_Kneiser" }, { "code": "", "text": "The error occurs at irregular intervals.\nIn the beginning, when I initialized the Unifi controller, the error occurred more frequently, but now it takes days to occur.\nIt takes me a while to notice it, and then unfortunately I can’t find anything in the logs.I don’t understand why such a signal comes. 
The load is not high and all other containers do not have such problems.@Matt_Kneiser\nThe Unifi controller only supports MongoDB up to 4.4.\nIt has been reported that newer versions will work, but it is not officially supported.", "username": "f3e2f9c2ca4e5beea43d800ff690f5e" }, { "code": "", "text": "Considering that MongoDB does not officially support any version on the Raspberry Pi platform at the moment, you are operating in uncharted territory. You’re pushing your luck no matter what you try ", "username": "Matt_Kneiser" }, { "code": "", "text": "But MongoDB officially supports Docker and arm64?Or is that not the official DockerHub repo of MongoDB?\nhttps://hub.docker.com/r/mongodb/mongodb-community-serverIt feels like the old days when Doom only ran with certain hardware components.I know that MongoDB >4.0 requires additional command extensions that the arm64 processor of the raspberryPi 4 does not support.\nThis also tells me the MongoDB 4.4 container as soon as I try to start it.\nIs there an official list so you can at least select your hardware for command extensions.The only information I have found is here:", "username": "f3e2f9c2ca4e5beea43d800ff690f5e" }, { "code": "", "text": "I wasn’t fully correct because MongoDB 4.4 isn’t end of life (EOL) yet. It will fall into EOL this upcoming February. MongoDB 5.0+ places microarchitecture requirements that exceed what’s on the Pi 4.", "username": "Matt_Kneiser" } ]
mongoDB Docker crashing/terminating after a while (got signal 15 (Terminated))
2023-10-20T15:03:38.702Z
mongoDB Docker crashing/terminating after a while (got signal 15 (Terminated))
408
null
[ "aggregation", "queries", "indexes" ]
[ { "code": "{a:1,b:1}{a:1, b:2}{a:[1,2,3],b:\"hi\"}{a:[1,2,3], b:[\"a\",\"b\",\"c\"]}{a:1, a_sort:1, b:[1,2,3], b_sort:\"[1,2,3]\"}{a_sort:1, b_sort:1}", "text": "I have an issue with indexing and sorting of fields with unknown type in my use case. This means they may be an Array in one document, but the same field in another document could be something else.Now, you may know Mongo doesn’t allow indexing or sorting on multiple Array fields it gives the following error if you try : “cannot sort with keys that are parallel arrays”As an exemple to explain this post I’ll use the following index : {a:1,b:1}\nKeep in mind field a and field b can be of any type in this exemple (Object, Array, number, string, Date etc…) and the type can be different between documents.In this exemple if I try to insert the following document it works :\n{a:1, b:2}\n{a:[1,2,3],b:\"hi\"}But if I try to insert a document like this :\n{a:[1,2,3], b:[\"a\",\"b\",\"c\"]}I get an error.I don’t know what to do to fix this issue, and this restriction makes no sense.\nhttps://jira.mongodb.org/browse/SERVER-826I absolutely need in my use case to be able to sort on arbitrary fields, and I’ve yet to come up with a workaround that makes sense and won’t impact performance too much.The workaround I came up with :Create a copy of all fields for sorting purposes where Arrays are converted to strings or another format (Obviously this is very bad because it duplicates all data)\n1.exemple : {a:1, a_sort:1, b:[1,2,3], b_sort:\"[1,2,3]\"} and the index : {a_sort:1, b_sort:1}Replace the $sort aggregation stage with a $function aggregation performing a custom sort, this is also very bad because we lose all benefits from indexes and the optimization on the $limit stage (limit stage optimization can be somewhat emulated in the function by returning immediatly false in the filter, but it won’t be as efficient as the mongo one)Are there other workarounds I haven’t though of ? Will the Mongo Team ever give us an option to bypass this limitation since from what I can tell it’s only meant to stupid proof against very inefficient sorts ? But in my case my workarounds would certainly result in worse performances than what I could get from a regular sort.Any help is appreciated", "username": "Jean-Samuel_Girard" }, { "code": "", "text": "This post was flagged by the community and is temporarily hidden.", "username": "Ibraheem_ALI" }, { "code": "", "text": "I’m still looking for better workarounds, I’m not satified with the workarounds I came up with, I feel like the trade-offs are too important.I did came up with another potential solution :To never store an array and always store an object with keys “entry_0”, “entry_1”, “entry_2” and convert into an array in application code, also a very bad workaround because we lose access to $push, $pull, $in and other operators that work on arrays and it adds work to the application to convert to and from arrays.I hope I can find a better workaround…", "username": "Jean-Samuel_Girard" }, { "code": "", "text": "I came up with a better version of this workaround :\nNever store arrays, but store an object {array:[ ]} essentially forcing mongo to treat it as an object instead of an array. This is less tedious than the previous option, because we can still do $push, $pull etc… on the inner array. 
This also means a lot of changes to the application code in order to extract the array from the object, but it is much better than the previous option.I really shouldn’t need to do this kind of thing ", "username": "Jean-Samuel_Girard" } ]
Sorting on multiple fields that may or may not be Arrays (“cannot sort with keys that are parallel arrays”)
2023-10-31T16:30:37.774Z
Sorting on multiple fields that may or may not be Arrays (“cannot sort with keys that are parallel arrays”)
186
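A sketch of the wrapper-object workaround from the last post, with assumed field and collection names. Because the indexed paths are no longer arrays, the parallel-arrays restriction does not apply, at the cost of losing multikey (per-element) sort semantics:

```javascript
// Instead of { a: [1, 2, 3], b: ["x", "y"] } (rejected by the { a: 1, b: 1 } index),
// wrap each array so the indexed value is a document:
db.items.createIndex({ a: 1, b: 1 });
db.items.insertOne({ a: { array: [1, 2, 3] }, b: { array: ["x", "y"] } }); // no error

// Array operators still work against the inner path:
db.items.updateOne({ "a.array": 1 }, { $push: { "a.array": 4 } });
db.items.find({ "a.array": { $in: [2] } });

// Caveat: sort({ a: 1 }) now compares the whole wrapped document rather than
// individual elements, so the ordering differs from a true multikey sort.
```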
null
[ "app-services-user-auth" ]
[ { "code": "", "text": "Hello,Right now in my nextjs app I am using email/password auth from Atlas.\nOn registration I collect extra user data such as name and birthdate.\nNow that I disabled automatic email verification, the user needs to be confirmed and the onUserCreation Atlas function doesn’t execute after submitting the register form and the custom data gets lost.\nI have tried saving the data in local storage but if the user doesn’t immediately login and clear the browser, or logs in for the first time on a different device, the custom user data is also lost.\nAre there any suggestions on how to make sure I can confirm the user in a reliable way, while at the same time having the custom user data?Thanks", "username": "Stef_N_A" }, { "code": "", "text": "Hi Stef!\nWhen I did my teste, I have noticed that the onUserCreation realm function works whe user log in the first time. So I personnally save the custom data in the app intern memory until the user log in for the first time. Then I send the custom data in the user’s collection.", "username": "Damien" } ]
Atlas email/password with confirmation and custom data
2023-11-02T05:25:01.178Z
Atlas email/password with confirmation and custom data
132
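A hedged sketch of the approach in the reply, keeping the registration extras in app state and persisting them only after the first successful login, using the Realm Web SDK. The app ID, the service name "mongodb-atlas", and the database/collection names are assumptions, and depending on SDK version registerUser may take positional (email, password) arguments rather than an object:

```javascript
import * as Realm from "realm-web";

const app = new Realm.App({ id: "<your-app-id>" });
let pendingProfile = null; // held in app state between registration and first login

async function register(email, password, extra) {
  pendingProfile = extra; // e.g. { name, birthdate } collected on the register form
  await app.emailPasswordAuth.registerUser({ email, password });
  // the user now confirms their email out of band
}

async function logIn(email, password) {
  const user = await app.logIn(Realm.Credentials.emailPassword(email, password));
  if (pendingProfile) {
    await user
      .mongoClient("mongodb-atlas")
      .db("myapp")
      .collection("user_custom_data")
      .insertOne({ userId: user.id, ...pendingProfile });
    await user.refreshCustomData();
    pendingProfile = null;
  }
  return user;
}
```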
https://www.mongodb.com/…6_2_1024x429.png
[ "android" ]
[ { "code": "", "text": "I am trying to update a record that has a relationship with another collection but getting an ErrorCompensatingWrite Error during the sync.For example I have an object ‘Item’ and an object ‘Box’. Box has a realmlist of type ‘Item’ and this relationship was configured in the schema and works great for reads. However when I attempt to add an ‘Item’ to the box I get the above error. I have write permissions on the ‘Box’ but only read permissions on the ‘Item’. Do I need write permissions on both? I have attached the error from the logsrealmError1507×632 21.8 KBMore info. The item is being added to the ‘Box’ and can be read. But this error is appearing in the logs. Is it possible I am trying to add/overwrite the ‘Item’ during this add operation?", "username": "Joseph_Devlin" }, { "code": "BoxItem", "text": "Hi @Joseph_Devlin,The server sees this as two separate inserts to two separate tables. A user would need write permissions on both tables in order to create the Box and Item entries as you described.", "username": "Kiro_Morkos" }, { "code": "", "text": "Thanks for the quick response. So the Item I am trying to add to the box already exists in the Item collection. I am just trying to add that to the Box via the relationship. Is this still counted as two writes?", "username": "Joseph_Devlin" }, { "code": "BoxBoxBoxItem", "text": "Oh I see, sorry I misinterpreted the scenario. In that case it is a write to the Box table because you are modifying a property of the Box object (adding an element to a list). So you would need write permission on Box, but not on Item.", "username": "Kiro_Morkos" }, { "code": "", "text": "Thanks, my permissions are correct then. Maybe it is the schema? So to provide more detail this is a better example of my situation.Box (realm object has a RealmList)\nBoxedItem (embedded object inside Box)\nItem (realm object relationship with BoxedItem)I have the following 3 collections. I am adding an item to a box. To do this I create an instance of BoxedItem, add Item to the boxed BoxedItem via its relationship, then I add the boxedItem to the list of BoxedItems in the box. Do you think this is the reason for the ErrorCompensatingWrite issue?", "username": "Joseph_Devlin" }, { "code": "ItemBoxSync -> WriteErrorCompensatingWriterealm.write", "text": "It appears that you are actually updating both Item and Box according to your app logs. If you look at the Sync -> Write entry directly before the ErrorCompensatingWrite entry you’ll see that both objects are being updated.Can you share a code snippet/pseudocode of what you’re doing in the realm.write block?", "username": "Kiro_Morkos" } ]
ErrorCompensatingWrite Error when inserting related item into object with realmList
2023-11-02T17:40:33.946Z
ErrorCompensatingWrite Error when inserting related item into object with realmList
128
null
[ "aggregation", "queries" ]
[ { "code": "subaggagg[{\n \"_id\": \"AAA\",\n \"count\": 2,\n \"location\":\n [\n {\n \"country\": \"US\",\n \"region\": \"ia\",\n \"city\": \"cedar rapids\",\n \"count\": 1\n },\n {\n \"country\": \"AU\",\n \"region\": \"vic\",\n \"city\": \"melbourne\",\n \"count\": 1\n }\n ]\n}]\nsubaggcount[{\n \"_id\": \"AAA\",\n \"count\": 2,\n \"location\":\n [\n {\n \"country\": \"US\",\n \"region\": \"ca\",\n \"city\": \"los angeles\",\n \"count\": 1\n },\n {\n \"country\": \"AU\",\n \"region\": \"vic\",\n \"city\": \"melbourne\",\n \"count\": 1\n }\n ]\n}]\n[{\n \"_id\": \"AAA\",\n \"count\": 4,\n \"location\":\n [\n {\n \"country\": \"US\",\n \"region\": \"ia\",\n \"city\": \"cedar rapids\",\n \"count\": 1\n },\n {\n \"country\": \"US\",\n \"region\": \"ca\",\n \"city\": \"los angeles\",\n \"count\": 1\n },\n {\n \"country\": \"AU\",\n \"region\": \"vic\",\n \"city\": \"melbourne\",\n \"count\": 2\n }\n ]\n}]\n$merge[\n{\n \"$merge\": {\n \"into\": \"agg\",\n \"on\": \"_id\",\n \"whenMatched\": [\n {\n \"$addFields\": {\n \"count\": {\n \"$add\": [\n \"$count\",\n \"$$new.count\"\n ]\n }\n }\n }\n ],\n \"whenNotMatched\": \"insert\"\n }\n }]\nlocation", "text": "Hello Everyone,\nI’m faced with an issue in my aggregation pipeline and I can’t quite find a solution, so I hopped someone could come up with an idea.I’m aggregating data from a collection sub into a collection agg.\nExample of content of the agg collection:Currently I have an aggregation pipeline getting data from the sub collection that output data with the same format. Now I want to merge the data from my pipeline with the agg collection so that all the count are updated to be a SUM of the previous and new values.For instance, my pipeline give me the following data:And i want the final data to look like this:I came up with the following $merge at the end of my pipeline that sums the top count:However I have no idea on how to do the same thing for the data inside the location array.Would someone have an idea on how to do that? 
I could store all my data “unwinded” and add another collection to store the aggregation of the unwind, but I don’t feel that is the adequate solution.\nThanks", "username": "yc_rkc" }, { "code": "accagglocationaccagg$mergeconst docBefore = {\n _id: { $oid: \"6540f0237b938738dabe9558\" },\n acc: [\n { country: \"US\", region: \"ia\", city: \"cedar rapids\", count: 1 },\n { country: \"AU\", region: \"vic\", city: \"melbourne\", count: 1 },\n ],\n agg: [\n { country: \"US\", region: \"ca\", city: \"los angeles\", count: 1 },\n { country: \"AU\", region: \"vic\", city: \"melbourne\", count: 1 },\n ],\n};\n\nconst docAfter = {\n _id: { $oid: \"6540f0237b938738dabe9558\" },\n acc: [\n { country: \"US\", region: \"ia\", city: \"cedar rapids\", count: 1 },\n { country: \"AU\", region: \"vic\", city: \"melbourne\", count: 1 },\n ],\n agg: [\n { country: \"US\", region: \"ca\", city: \"los angeles\", count: 1 },\n { country: \"AU\", region: \"vic\", city: \"melbourne\", count: 1 },\n ],\n location: [\n { country: \"US\", region: \"ia\", city: \"cedar rapids\", count: 1 },\n { country: \"US\", region: \"ca\", city: \"los angeles\", count: 1 },\n { country: \"AU\", region: \"vic\", city: \"melbourne\", count: 2 },\n ],\n};\n\n// Code related to the pipeline\nconst accExists = { $isArray: [\"$acc\"] };\nconst aggExists = { $isArray: [\"$agg\"] };\n\nconst accItemIsInInitialValue = {\n $eq: [\n {\n $size: {\n $filter: {\n input: \"$$value\",\n as: \"valueItem\",\n cond: {\n $and: [\n { $eq: [\"$$this.country\", \"$$valueItem.country\"] },\n { $eq: [\"$$this.region\", \"$$valueItem.region\"] },\n { $eq: [\"$$this.city\", \"$$valueItem.city\"] },\n ],\n },\n },\n },\n },\n 1,\n ],\n};\n\nconst sumNewItemCountToInitialValue = {\n $map: {\n input: \"$$value\",\n as: \"valueItem\",\n in: {\n $cond: {\n if: { $eq: [\"$$this.country\", \"$$valueItem.country\"] },\n then: {\n $mergeObjects: [\n \"$$valueItem\",\n { count: { $add: [\"$$valueItem.count\", \"$$this.count\"] } },\n ],\n },\n else: \"$$valueItem\",\n },\n },\n },\n};\n\nconst includeNewItemInInitialValue = { $concatArrays: [[\"$$this\"], \"$$value\"] };\n\nconst stage = {\n $addFields: {\n location: {\n $switch: {\n branches: [\n { case: { $and: [accExists, { $not: aggExists }] }, then: \"$acc\" },\n { case: { $and: [{ $not: accExists }, aggExists] }, then: \"$agg\" },\n ],\n default: {\n $reduce: {\n input: \"$acc\",\n initialValue: \"$agg\",\n in: {\n $cond: {\n if: accItemIsInInitialValue,\n then: sumNewItemCountToInitialValue,\n else: includeNewItemInInitialValue,\n },\n },\n },\n },\n },\n },\n },\n};\n", "text": "Hi @yc_rkc, welcome to the community!I believe I solved your issue. The solution is quite extensive and a little complicated to understand, but it works. To make it easier to test and implement, I created one document with the fields acc and agg and on each one, I put the data referent of each collection, the aggregation stage will create a new field location with the combination of acc and agg. I’m quite in a hurry these days if you can´t migrate the implementation to a $merge stage let me know and I’ll try it, too. 
Here is the code:", "username": "Artur_57972" }, { "code": "{\n \"$merge\": {\n \"into\": \"old\",\n \"on\": \"_id\",\n \"whenMatched\": [\n {\n \"$addFields\": {\n \"count\": {\n \"$add\": [\n \"$count\",\n \"$$new.count\"\n ]\n },\n \"location\": {\n \"$switch\": {\n \"branches\": [\n {\"case\": {\"$and\": [newExists, {\"$not\": oldExists}]}, \"then\": \"$$new.location\"},\n {\"case\": {\"$and\": [{\"$not\": newExists}, oldExists]}, \"then\": \"$location\"},\n ],\n \"default\": {\n \"$reduce\": {\n \"input\": \"$$new.location\",\n \"initialValue\": \"$location\",\n \"in\": {\n \"$cond\": {\n \"if\": newItemIsInInitialValue,\n \"then\": sumNewItemCountToInitialValue,\n \"else\": includeNewItemInInitialValue,\n },\n },\n },\n },\n },\n },\n }\n }\n ],\n \"whenNotMatched\": \"insert\"\n }\n }\n const newExists = {\"$isArray\": [\"$$new.location\"]}\n\n const oldExists = {\"$isArray\": [\"$location\"]}\n[{\n \"$lookup\": {\n \"from\": \"old\",\n \"localField\": \"_id\",\n \"foreignField\": \"_id\",\n \"as\": \"existing\"\n }\n },\n {\n \"$unwind\": \"$location\"\n },\n {\n \"$unwind\": {\n \"path\": \"$existing\",\n \"preserveNullAndEmptyArrays\": True\n }\n },\n {\n \"$unwind\": {\n \"path\": \"$existing.location\",\n \"preserveNullAndEmptyArrays\": True\n }\n },\n {\n \"$match\": {\n \"$or\": [\n {\n \"$expr\": {\n \"$and\": [\n {\n \"$eq\": [\n \"$existing.location.city\",\n \"$location.city\"\n ]\n },\n {\n \"$eq\": [\n \"$existing.location.region\",\n \"$location.region\"\n ]\n },\n {\n \"$eq\": [\n \"$existing.location.country\",\n \"$location.country\"\n ]\n }\n ]\n }\n },\n {\n \"existing\": {\n \"$exists\": False\n }\n }\n ]\n }\n },\n {\n \"$group\": {\n \"_id\": {\n \"mainId\": \"$_id\",\n \"country\": \"$location.country\",\n \"region\": \"$location.region\",\n \"city\": \"$location.city\"\n },\n \"count\": {\n \"$sum\": \"$location.count\"\n },\n \"existingCount\": {\n \"$sum\": {\n \"$ifNull\": [\n \"$existing.location.count\",\n 0\n ]\n }\n }\n }\n },\n {\n \"$group\": {\n \"_id\": \"$_id.mainId\",\n \"count\": {\n \"$sum\": {\n \"$add\": [\n \"$count\",\n \"$existingCount\"\n ]\n }\n },\n \"location\": {\n \"$push\": {\n \"country\": \"$_id.country\",\n \"region\": \"$_id.region\",\n \"city\": \"$_id.city\",\n \"count\": {\n \"$sum\": {\n \"$add\": [\n \"$count\",\n \"$existingCount\"\n ]\n }\n }\n }\n }\n }\n },\n {\n \"$merge\": {\n \"into\": \"old\",\n \"on\": \"_id\",\n \"whenMatched\": \"merge\",\n \"whenNotMatched\": \"insert\"\n }\n }]\n", "text": "Hello @Artur_57972, Thanks for your reply, it seems to be working well in production settings!For those wondering the merge stage looks like the following. I just renamed acc to new and agg to old to have a better understanding of who is who.With the following changes to 2 constant:While waiting for the reply I also managed to find another way to do the same things in a less elegant and more bruteforce way with the following stages:I did have the time to compare performance between the 2, but I suspect that your way should be faster, or at least less memory intensive.Thanks a lot again for your reply !", "username": "yc_rkc" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Merging data while doing a sum of elements of an array of dict
2023-10-30T15:50:15.059Z
Merging data while doing a sum of elements of an array of dict
199
null
[ "aggregation", "golang" ]
[ { "code": "", "text": "I have a query that runs much faster when specifying the particular index with Hint. Can we specify hint in golang production code? or is there any way in which we can improve the performance of query.", "username": "Harsh_Kothari1" }, { "code": "SetHintFindfilter := bson.M{\"my field\": \"my value\"}\n\ncoll.Find(\n\tctx,\n\tfilter,\n\toptions.Find().SetHint(\"index hint\"))\n", "text": "@Harsh_Kothari1 welcome and thanks for the question! You can specify an index hint on any operation using the SetHint setter on the corresponding operation options type.For example, to set an index hint for a Find operation:", "username": "Matt_Dale" }, { "code": "", "text": "When i run the same query with hint in Golang which i have already used in Studio 3T, then i see a huge increase in query execution time. I am unable to debug the reason behind it.", "username": "Harsh_Kothari1" } ]
Can we use Hint in production code?
2023-11-02T10:28:09.145Z
Can we use Hint in production code?
116
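To narrow down why the hinted query is slower from Go than from Studio 3T, it can help to compare server-side plans independently of the driver. A sketch in mongosh, with the collection, filter, and index names standing in for the real ones:

```javascript
const filter = { "my field": "my value" };

db.mycoll.find(filter).explain("executionStats");                      // planner's own choice
db.mycoll.find(filter).hint("index hint").explain("executionStats");   // plan forced by the hint

// Things worth checking if the two clients behave differently:
// - both clients target the same node (readPreference / connection string),
// - the hint names the intended index exactly as reported by getIndexes();
//   hinting a different existing index than the one Studio 3T used would
//   still run, just with far more keys examined.
```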
null
[ "java", "spring-data-odm", "time-series" ]
[ { "code": "TimeSeriesOptions tso = new TimeSeriesOptions(\"my_time_field\")\n .granularity(TimeSeriesGranularity.HOURS)\n .metaField(\"my_meta_field\");\n\nCreateCollectionOptions cco = new CreateCollectionOptions().timeSeriesOptions(tso).expireAfter(30, DAYS);\n\nmongoTemplate.getMongoDatabaseFactory().getMongoDatabase().createCollection(\"time_series_collection_name\", cco);\n", "text": "I am using MongoDb version 7.0.2 and I am creating a new time series collections using Java MongoDb API with Spring.This is the code that creates the collections:The collections are created successfully but here in the documentation it is stated that\nMongoDB 6.3 and later automatically creates a compound index on the time and metadata fields for new time series collections.\nIt can be found on this page: https://www.mongodb.com/docs/v7.0/core/timeseries-collections/\nThe problem is that my time series collections don’t have any index associated with them when they are created.\nIs this a bug or I am missing something?", "username": "Vladimir_Stoyanov" }, { "code": "db.createCollection(\"my_test_collection\", {timeseries:{timeField:\"timestamp\",metaField:\"metadata\"}});\n[\n {\n v: 2,\n key: { metadata: 1, timestamp: 1 },\n name: 'metadata_1_timestamp_1'\n }\n]\n", "text": "Update: I am answering my own question.\nI have executed the following create collection command using mongosh on two different MongoDB editions and the result is different:MongoDb Community edition v 7.0.2 - The time series collection is created but there is no default index created associated with this collection\nMongoDb Atlas edition v 7.0.2 - The time series collection is created and a default index associated with this collection is createdThis is the default index that is created.As a conclusion I can say that there is difference in the behaviour of different MongoDb editions but maybe it is good to state that in the documentation that I linked in my question.", "username": "Vladimir_Stoyanov" }, { "code": "", "text": "Hey @Vladimir_Stoyanov,Welcome to the MongoDB Community!I tested it with MongoDB Community version 7.0.2, and it worked as expected for me. I’ve attached a screenshot for your reference, showing that it automatically created the compound index for the TS collection in the local deployment too.mongosh2286×1152 310 KBCould you please share the workflow you are following to create a TS collection?Best regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Hi @Kushagra_Kesav,\nThank you for your response.Here are exactly the steps I follow.\nMongoDbTimeSeriesProblem1746×868 91.4 KBI have installed my MongoDb using brew command on Mac OSThanks,\nVladimir", "username": "Vladimir_Stoyanov" } ]
No default compound index created on time series collection
2023-11-01T16:59:47.831Z
No default compound index created on time series collection
141
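If a deployment does not create the meta/time compound index automatically, it can be added by hand on the time series collection. A sketch in mongosh using the field and collection names from the Java snippet above; the index name mirrors the convention shown for Atlas:

```javascript
db.time_series_collection_name.createIndex(
  { my_meta_field: 1, my_time_field: 1 },
  { name: "my_meta_field_1_my_time_field_1" }
);

db.time_series_collection_name.getIndexes(); // verify the compound index is now listed
```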
null
[ "replication", "mongodb-shell" ]
[ { "code": "", "text": "Hi All,I am unable to connect to Mongo DB using .js file. Below is the string mentioned in .js file.db = connect( ‘mongodb://user:pswd@db-1,db-a2,db-3/AlteryxGallery_Lucene?authSource=AlteryxGallery_Lucene&replicaSet=AlteryxProd&readPreference=primary --tlsCertificateKeyFile E:/bin/mongosh-1.10.6-win32-x64/bin/db-1.pem --tlsAllowInvalidCertificates’ );I am getting below error while executing it as below in command line.Loading file: Gallery.js\nMongoInvalidArgumentError: Invalid read preference mode “primary --tlsCertificateKeyFile E:/bin/mongosh-1.10.6-win32-x64/bin/db-alteryx1.pem --tlsAllowInvalidCertificates”Someone please help us.", "username": "J_Damodar" }, { "code": "--tlsCertificateKeyFile E:/bin/mongosh-1.10.6-win32-x64/bin/db-alteryx1.pem --tlsAllowInvalidCertificates", "text": "You have mixed command line options --tlsCertificateKeyFile E:/bin/mongosh-1.10.6-win32-x64/bin/db-alteryx1.pem --tlsAllowInvalidCertificates with connection string uri format.You will need to use the matching connection-string-optionsPlease use appropriate formatting to make posts more readable.", "username": "chris" }, { "code": "", "text": "Thanks Chris. I have used it as below with single and tried with double quotes of file location E:\\bin\\mongosh-1.10.6-win32-x64\\bin\\db-alteryx1.pem but throwing a different error. Please let me for any inputs.db = connect( ‘mongodb://user:pswd@db-1,db-2,db-3/AlteryxGallery?authSource=AlteryxGallery&replicaSet=AlteryxProd&readPreference=primary&tls=true&tlsCertficateKeyFile=“E:\\bin\\mongosh-1.10.6-win32-x64\\bin\\db-alteryx1.pem”&tlsAllowInvalidCertificates=true’ );Error :MongoParseError: option tlscertficatekeyfile is not supported", "username": "J_Damodar" } ]
Unable to connect to Mongo DB using .js file
2023-10-31T09:34:04.830Z
Unable to connect to Mongo DB using .js file
171
null
[ "queries" ]
[ { "code": "", "text": "Is it possible to create dynamic collection using realm-sync and mongodb through Device Sync?", "username": "Vishal_Soni2" }, { "code": "", "text": "Do you mind elaborating on what you mean by a dynamic collection? I assume you are referencing having no schema, but not entirely sure.Best,\nTyler", "username": "Tyler_Kaye" }, { "code": "", "text": "Hi Tyler,I’ve a database which contains multiple collection(we call as a group). So is it possible to create a dynamic collection from a device sync function or any other way? Like I explain you as below screenshot", "username": "Vishal_Soni2" }, { "code": "", "text": "Hi @Tyler_Kaye,Moreover, I’ve the requirement to create Group chat mobile app, with around 1M+ concurrent users. So I need to create dynamic collection(group) using device sync functions or any other way. So would it be possible or not?Thanks,\nVishal", "username": "Vishal_Soni2" }, { "code": "", "text": "We do not support that architecture unfortunately, the reason being that each realm table name needs to map to a MongoDB namespace (db/collection). If you really wanted to, you could have that architecture but each of your realm table’s would need to be named something like todo_group1 and that would not be fun for you. I would strongly encourage you to not structure your data like this in general though. If you read a bit more about MongoDB’s suggested approached to multi-tenant architecture, it is an anti-pattern to try to split everything into its own collections (instead they should be in the same collection with an index on the field that “partitions” them).Moreover, I’ve the requirement to create Group chat mobile app, with around 1M+ concurrent users. So I need to create dynamic collection(group) using device sync functions or any other way. So would it be possible or not?As for this, I am not entirely sure I understand what you are suggesting / asking here. Do you mind elaborating?Best,\nTyler", "username": "Tyler_Kaye" } ]
How to create a collection dynamically using Device Sync (realm-sync)
2023-11-01T10:35:51.894Z
How to create a collection dynamically using Device Sync (realm-sync)
160
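A rough sketch of the single-collection layout suggested in the last reply, with all names invented for illustration: every group's documents live in one collection and are partitioned logically by a groupId field backed by an index.

```js
// Assumed names: "messages" collection, groupId/createdAt fields.
db.messages.createIndex({ groupId: 1, createdAt: -1 });

// Reading one group's recent messages stays index-backed:
const groupId = someGroupObjectId; // supplied by the application
db.messages.find({ groupId: groupId }).sort({ createdAt: -1 }).limit(50);
```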
null
[ "react-native" ]
[ { "code": "", "text": "Hello,\nI’m new to the forum so I don’t know if the right place to ask.\nI’m trying to develop my first app using the realm sdk implementing it in an expo app. The thing is it always gibes me the erro: ERROR Error: Realm context not found. Did you call useRealm() within a ?When I use a single realm at the root of my project I can send data to my cluster, but if I move it into a more nested screen that it gives me this error. Keep in mind that I always have this in the screen\nimport { useQuery, useRealm } from “@realm/react”;and then inside the definition of the screen I use the realm so for example:const hatching = () => {\nconst [name, setName] = useState(“”);\nconst [commonName, setCommonName] = useState(“”);\nconst [species, setSpecies] = useState(“”);const realm = useRealm();\n…My question is:\ncould it be because I’m using the new expo routing system (v2) or is it because I use the RealmProvider that gives me access to only one realm by default. That’s tricky to me, I haven’t fully understand if passing multiple schemas like this actually expose multiple realms:const indexPage = () => {\nreturn (\n\n\n<RealmProvider\nschema={[Profile, Animal]}\nsync={{ flexible: true }}\n>\n\n\n\n\n);\n};\n…Thanks a lot if someone could provide me with additional info, I think the Realm Docs are a bit tricky but maybe the problem is with compatibility with expo.Best regards", "username": "Luca_Scalenghe" }, { "code": "", "text": "This should help Realm Database, Expo SDK 49 and Expo Router Getting Started - DEV Community", "username": "Usman_Hassan" } ]
Realm in react native using Expo Router
2023-10-06T18:15:37.116Z
Realm in react native using Expo Router
323
null
[]
[ { "code": "", "text": "Hi, I am trying to download MongoDB community edition. But when I click on the MongoDB community server it is not showing me any download option. Where I can found that option to simply download MongoDB community server? I should mention that I am new to this community.", "username": "Kashaf_Jamil" }, { "code": "", "text": "Hey @Kashaf_Jamil,Welcome to the MongoDB Community!Please refer to this link and there you can click on Select package where you can choose the OS and the MongoDB version you want to download.image1464×1476 240 KBPlease let the community know if you face any further issues.Best regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Oh Okay, I found it. Thank you vey much for helping me out. ", "username": "Kashaf_Jamil" } ]
Unable to find download button at community server
2023-11-02T04:53:12.548Z
Unable to find download button at community server
116
null
[ "storage" ]
[ { "code": "", "text": "Hello\nI’m luca and I’m studyng Mongo onprem documentation for build the right configuration for my customer.\nI would like to know if there is any size (TB) limit for each shard.\nFor instance can I have a single shard with 1TB or 10TB ? or after a certain size I have to add some shards?\nor there is a formula among storage ram cpu to maintain the right performance ( aka mongodb best practices)I can’t find this type of information in the documentationthank you\nLuca", "username": "bassa_priorita" }, { "code": "", "text": "Not sure if there’s any such limit. But even if there is one, i doubt if you will ever reach that limit. I mean you will probably already see a performance degradation/too high infra cost before you have so much data.", "username": "Kobe_W" }, { "code": "", "text": "Thank you Kobe\nwhat do you mean with “so much data” ? 1TB less than 1TB , more than 1TB or in region of 10TB.\nI don’t want to reach degradation to know the limit, I have to right size the infra now, exactly to avoid the degradation tomorrow", "username": "bassa_priorita" } ]
Database size in a single shard
2023-11-01T13:03:49.776Z
Database size in a single shard
138
null
[ "aggregation" ]
[ { "code": "result = client['sample_mflix']['movies'].aggregate([\n { '$vectorSearch': {\n 'queryVector': embed,\n 'path': 'plot_embedding',\n 'numCandidates': 100,\n 'limit': 5,\n 'index': 'sampleindex'\n }\n }\n])\n", "text": "I am struggling to come up with a way to filter for a given document, rather than all. I found an especially helpful example for $vectorSearch using a movies database on the Mongo blog (Building Generative AI Applications Using MongoDB: Harnessing the Power of Atlas Vector Search and Open Source Models | MongoDB), but I am looking for an example in which a list of product recommendations based on a customers past purchases. My thought is the flow will be:When I want to generate a list of similar products based on past purchases is where I am stuck. I am assuming I can use a $eq on customer email, or whatever. Let’s say a customer has three purchases. What I can’t wrap my brain around is whether I should be:The example for the vector search has:If I plug in the $eq operator, will this meet what I need? I am testing concurrently with this post, so I amy find out on my own, but I wanted to float this as I am most likely making it too hard ", "username": "Steve_Howard" }, { "code": "", "text": "I figured this one out using this:This brings me back to the design question. Is there a way to pass an embedded document of purchases to vector search, or if I have multiple customer orders, will a filter on the customer ID, email, etc. be the filter? That should return a list of products with multiple embeddings for products and multiple orders for the customer, correct?Is that efficient, or is there something I am not considering?", "username": "Steve_Howard" }, { "code": "", "text": "I think I have a gap in design. I have a skus collection for which I have created vector embeddings, as well as the same attributes and vector embeddings for a customer orders collection. I am picturing looping through the customer ID’s and doing a vector search for each vector embedding for that customer against the skus collection and its embedding. That seems inefficient.Is there a pattern for what I am looking to do?", "username": "Steve_Howard" }, { "code": "embeddedDocuments", "text": "Hi @Steve_Howard and welcome to MongoDB community forums!!Is there a way to pass an embedded document of purchases to vector search,As mentioned in the documentations for embeddedDocuments, you can’t define a field inside the embeddedDocuments type as the knnVector type.The recommendation would be to rethink on the database design to use the knn search algorithms. It would be beneficial if you could share your current design model so that we could help you further.Regards\nAasawari", "username": "Aasawari" } ]
How to filter vector searches by another attribute?
2023-10-19T13:51:11.870Z
How to filter vector searches by another attribute?
290
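One possible shape for the "filter by another attribute" idea from this thread, assuming the customer identifier has been declared as a filter-type field in the vector search index definition; the collection, field and variable names are placeholders.

```js
// `embed` is the query embedding computed in application code.
db.skus.aggregate([
  {
    $vectorSearch: {
      index: "sampleindex",
      path: "plot_embedding",
      queryVector: embed,
      numCandidates: 100,
      limit: 5,
      // Assumes "customerId" is indexed as a "filter" field in the vector index.
      filter: { customerId: { $eq: "customer@example.com" } }
    }
  }
]);
```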
null
[ "compass", "php" ]
[ { "code": "", "text": "I suddenly get this error:\nFatal error: Uncaught Error: Call to undefined function MongoDB\\is_document() in /vendor/mongodb/mongodb/src/Operation/Find.php:166thrown in /Users/kka02/Documents/GitHub/zwemindex/data/_php/vendor/mongodb/mongodb/src/Operation/Find.php on line 166This happens when calling the find/findOne fuction.It used to work. I changed to 1.16 but after reverting to 1.15 it still is broken.\nI have no clue what could cause the error.I tested my query again in compass, and that works as well.", "username": "Kasper_N_A" }, { "code": "MongoDB\\is_documentvendor/autoload.phpvendor/autoload_static.php$filessrc/functions.phpfindis_document", "text": "MongoDB\\is_document was introduced in version 1.16 of the library. The error you posted for 1.16 indicates that the functions.php file was not correctly autoloaded. Do you use composer to install the library, and if so, can you please confirm that the file is correctly included? You can do so by following the autoloader initialisation trail from vendor/autoload.php. In my case, the autoloader class in vendor/autoload_static.php contains a $files property that correctly includes the src/functions.php file.The other error after downgrading to 1.15 however makes me think there are some additional issues going on. After downgrading, the find operation should no longer use is_document, so something fishy is going on. If the error appears only in a web SAPI (e.g. php-fpm), make sure to restart the FPM process to ensure no autoloaded code remains. One way to confirm this would be to try this in the CLI - the error should not appear there as it always starts from a clean slate without code loaded and typically without an I-cache.", "username": "Andreas_Braun" }, { "code": "", "text": "@Andreas_Braun Thanks. I noticed that I downgraded in the wrong directory. I can confirm that downgrading to 1.15 solved the issue.After that I upgraded to 1.16 again, and I didn’t got the error. So probably something went wrong with my first upgrade.Thanks for your confirmation it was related to 1.15/1.16.", "username": "Kasper_N_A" } ]
PHP Fatal error: Uncaught Error: Call to undefined function MongoDB\is_document() in ../vendor/mongodb/mongodb/src/Operation/Find.php:166
2023-11-01T09:31:34.093Z
PHP Fatal error: Uncaught Error: Call to undefined function MongoDB\is_document() in ../vendor/mongodb/mongodb/src/Operation/Find.php:166
128
null
[ "atlas-cluster" ]
[ { "code": "/Users/tung/project/sk/mongo-db-embedded/node_modules/mongodb/lib/cmap/connection_pool.js:520\n const error = this.closed ? new errors_1.PoolClosedError(this) : new errors_1.PoolClearedError(this);\n ^\n\nPoolClearedError [MongoPoolClearedError]: Connection pool for ac-727rc86-shard-00-02.fgbtzus.mongodb.net:27017 was cleared because another operation failed with: \"connection <monitor> to 18.167.26.166:27017 closed\"\n at ConnectionPool.processWaitQueue (/Users/tung/project/sk/mongo-db-embedded/node_modules/mongodb/lib/cmap/connection_pool.js:520:82)\n at ConnectionPool.clear (/Users/tung/project/sk/mongo-db-embedded/node_modules/mongodb/lib/cmap/connection_pool.js:251:14)\n at updateServers (/Users/tung/project/sk/mongo-db-embedded/node_modules/mongodb/lib/sdam/topology.js:461:29)\n at Topology.serverUpdateHandler (/Users/tung/project/sk/mongo-db-embedded/node_modules/mongodb/lib/sdam/topology.js:332:9)\n at Server.<anonymous> (/Users/tung/project/sk/mongo-db-embedded/node_modules/mongodb/lib/sdam/topology.js:444:77)\n at Server.emit (node:events:513:28)\n at markServerUnknown (/Users/tung/project/sk/mongo-db-embedded/node_modules/mongodb/lib/sdam/server.js:286:12)\n at Monitor.<anonymous> (/Users/tung/project/sk/mongo-db-embedded/node_modules/mongodb/lib/sdam/server.js:58:46)\n at Monitor.emit (node:events:513:28)\n at failureHandler (/Users/tung/project/sk/mongo-db-embedded/node_modules/mongodb/lib/sdam/monitor.js:153:17) {\n address: 'ac-727rc86-shard-00-02.fgbtzus.mongodb.net:27017',\n [Symbol(errorLabels)]: Set(1) { 'RetryableWriteError' }\n}\n", "text": "When I am doing the insert to Mongodb in the while loop, the error suddenly prompt out:I have no idea where the error come out, and why?", "username": "WONG_TUNG_TUNG" }, { "code": "PoolClearedError [MongoPoolClearedError]: \nConnection pool for ac-727rc86-shard-00-02.fgbtzus.mongodb.net:27017 \nwas cleared because another operation failed with: \n\"connection <monitor> to 18.167.26.166:27017 closed\"\n[Symbol(errorLabels)]: Set(1) { 'RetryableWriteError' }\nPoolClearedError", "text": "Hey @WONG_TUNG_TUNG,Welcome to the MongoDB Community!The PoolClearedError can occur due to two common scenarios:Essentially, this error happens when the driver believes the server associated with a connection pool is no longer available. The pool gets cleared and closed so that operations waiting for a connection can retry on a different server.To address this:Let me know if you have any other questions!Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "awaitcollection.insertOne(obj)", "text": "For me this happened when I forgot to put await before the collection.insertOne(obj) command. Any reason as to why this was happening?", "username": "Aman_Verma2" } ]
Const error = this.closed ? new errors_1.PoolClosedError(this) : new errors_1.PoolClearedError(this);
2023-07-21T09:53:42.163Z
Const error = this.closed ? new errors_1.PoolClosedError(this) : new errors_1.PoolClearedError(this);
834
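As a sketch of the tuning mentioned in the reply above, these Node.js driver options control pool size and idle/selection timeouts; the values shown are examples, not recommendations.

```js
const { MongoClient } = require("mongodb");

const client = new MongoClient(process.env.MONGO_URI, {
  maxPoolSize: 50,                // cap on concurrent connections per server
  maxIdleTimeMS: 60000,           // close connections idle for more than 60s
  serverSelectionTimeoutMS: 30000 // how long to wait for a usable server
});
```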
null
[]
[ { "code": "", "text": "When searching with autocomplete, for a query string of “stanf”, it would highlight word starting with “stanf”, which is expected. However, it would also highlight words like “publications” or “public”.We did not set up any Synonym Mappings.Any idea why those words are also highlighted?", "username": "williamwjs" }, { "code": "", "text": "Hey @williamwjs,Can you provide the output including the highlights section for this scenario you’ve described? Please also provide the index definition + query used.Look forward to your response!Regards,\nJason", "username": "Jason_Tran" }, { "code": "type\"text\"\"hit\"", "text": "However, it would also highlight words like “publications” or “public”.I’ll probably see this in the output you provide but is the type value for the highlight for “publications” and “public” \"text\" or \"hit\"?Regards,\nJason", "username": "Jason_Tran" }, { "code": "\"hit\"", "text": "It is \"hit\"(I am working on getting a test data for you to repro)", "username": "williamwjs" }, { "code": "autocompleteautocompletehighlighthit", "text": "Thanks for confirming William. Interested to see the test data in the repro / output you’re getting so will wait for those. I did see some hits when the words were right next to the autocomplete search term but I believe this may be expected as per the autocomplete / highlight example documentation:Atlas Search matches a highlight hit more coarsely to your query terms when a highlighted path is referenced only in the autocomplete operators of the highlighted query.", "username": "Jason_Tran" } ]
Unexpected highlights
2023-11-01T22:39:29.920Z
Unexpected highlights
124
null
[]
[ { "code": "", "text": "The following sentence: We have a requirement to synchronize some table data in MongoDB to another MongoDB database in real time. There are two requirements: one is real-time, and the other is to only synchronize some fields of some tables in the original database, and the field names are different from those in the destination database, requiring some personalized configuration. Do you know if MongoDB has a similar tool, or do you know where to find a similar tool that can be implemented?", "username": "xue_jupo" }, { "code": "", "text": "Not sure if this can work for your case. If not, you can try change stream capture", "username": "Kobe_W" } ]
How can I real-time synchronize heterogeneous data?
2023-11-01T06:10:35.434Z
How can I real-time synchronize heterogeneous data?
128
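A minimal sketch of the change stream approach suggested above, with invented collection and field names: tail the source collection and upsert a reshaped subset of fields into the destination database, renaming fields along the way.

```js
// `sourceDb` and `targetDb` are Db handles obtained from two MongoClient connections.
const stream = sourceDb.collection("orders").watch([], { fullDocument: "updateLookup" });

stream.on("change", async (event) => {
  if (["insert", "update", "replace"].includes(event.operationType)) {
    const doc = event.fullDocument;
    await targetDb.collection("orders_copy").updateOne(
      { _id: doc._id },
      { $set: { customerName: doc.custName, total: doc.amount } }, // field renaming happens here
      { upsert: true }
    );
  } else if (event.operationType === "delete") {
    await targetDb.collection("orders_copy").deleteOne({ _id: event.documentKey._id });
  }
});
```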
null
[ "aggregation", "queries" ]
[ { "code": "[\n {\n _id: \"ABC\",\n category: \"Other\"\n },\n {\n _id: \"DEF\",\n category: \"Cat\"\n },\n {\n _id: \"GHI\",\n category: \"Cat\"\n },\n {\n _id: \"XYZ\",\n category: \"Zebra\"\n },\n {\n _id: \"123\",\n category: \"Dog\"\n },\n {\n _id: \"456\",\n category: \"Other\"\n }\n]\ndb.collection.aggregate([\n {\n $sort: {\n category: 1\n }\n }\n])\n[\n {\n \"_id\": \"DEF\",\n \"category\": \"Cat\"\n },\n {\n \"_id\": \"GHI\",\n \"category\": \"Cat\"\n },\n {\n \"_id\": \"123\",\n \"category\": \"Dog\"\n },\n {\n \"_id\": \"ABC\",\n \"category\": \"Other\"\n },\n {\n \"_id\": \"456\",\n \"category\": \"Other\"\n },\n {\n \"_id\": \"XYZ\",\n \"category\": \"Zebra\"\n }\n]\n", "text": "I have the following database of items that have a category:I want to be able to sort the items by category, but with items of the Other category always appearing after everything else.Below is my current query:The query has the following result:What changes need to be made to the query to ensure Other always shows up last?", "username": "Curtis_L" }, { "code": "db.collection.aggregate([\n {\n $addFields: {\n sortValue: {\n $cond: { \n if: { $eq: [\"$category\", \"Other\"] }, \n then: 2, \n else: 1 \n }\n }\n }\n },\n {\n $sort: {\n sortValue: 1,\n category: 1\n }\n },\n {\n $project: {\n sortValue: 0\n }\n }\n])\n[\n {\n \"_id\": \"DEF\",\n \"category\": \"Cat\"\n },\n {\n \"_id\": \"GHI\",\n \"category\": \"Cat\"\n },\n {\n \"_id\": \"123\",\n \"category\": \"Dog\"\n },\n {\n \"_id\": \"XYZ\",\n \"category\": \"Zebra\"\n },\n {\n \"_id\": \"ABC\",\n \"category\": \"Other\"\n },\n {\n \"_id\": \"456\",\n \"category\": \"Other\"\n }\n]\nsortValuesortValue$condsortValuecategorysortValue", "text": "Hello @Curtis_L ,To achieve the desired sorting where items of the “Other” category always appear after everything else, you can modify your aggregation query by using a combination of $cond and $eq operators to assign a sort value based on the category. Here’s the modified query:Here’s the Result:Explanation of the query:With this modified query, the result will have items sorted by category, with items of the “Other” category always appearing after everything else.Note: This query is an example based on the data provided by you. Kindly, test and modify the query based on your use case and requirements.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "If I may add.Since the sort is based on a computed value, then no index will be used so a memory sort will be done. If the result set is small it is not an issue. If the result set is large you may try the following alternatives:1 - Simply do 2 queries, one with a $match that uses $ne:“Other” and then a second one with $eq:“Other”. For some data set doing 2 database access might be faster than a memory sort.2 - Permanently update the collection with the sortValue, then have an index with sortValue:1,category:1.", "username": "steevej" } ]
Order by category with items of category other appearing last
2023-11-01T12:10:22.481Z
Order by category with items of category other appearing last
129
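A sketch of the second alternative steevej describes: persist the computed sort key once, then index it so the sort no longer has to happen in memory (collection name as in the thread).

```js
// One-off migration of the sort key.
db.collection.updateMany({ category: "Other" }, { $set: { sortValue: 2 } });
db.collection.updateMany({ category: { $ne: "Other" } }, { $set: { sortValue: 1 } });

// Index that matches the sort pattern.
db.collection.createIndex({ sortValue: 1, category: 1 });

// The query can then sort on { sortValue: 1, category: 1 } without the $addFields stage.
```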
null
[ "app-services-user-auth" ]
[ { "code": "", "text": "I have an Atlas App that I am using app services for Google with. Recently, google suddenly disabled the ability for my app to authenticate. Saying I needed to verify the mongodb.com domain. I went back and forth with them over email but they say this:You also need to verify that you own all the authorized domain listed in your request: * [mongodb.com]mongodb.comGo to the IAM page to add a role owner to your project. Roles give project members the permissions they need to verify domain ownership.Please Note: Third party domain not owned by you, or domains that are hosted by a third party site, or redirects to third party sites are not permitted.So what’s the deal here? Seems like they are saying they don’t allow the Atlas App authentication setup… has anyone run into this issue before?", "username": "Lukas_deConantseszn1" }, { "code": "", "text": "Replying to hopefully bump this… There still doesn’t seem to be any kind of resolution here.", "username": "Lukas_deConantseszn1" } ]
Google Oauth Consent Screen 3rd Party Issues
2023-09-30T15:15:53.501Z
Google Oauth Consent Screen 3rd Party Issues
328
null
[]
[ { "code": "atlas clusters search indexes create --clusterName ${MONGO_CLUSTER_NAME} --file ${FILE}{\n \"name\": \"default\",\n \"collectionName\": \"locations\",\n \"database\": \"core\",\n \"storedSource\": {\n \"include\": [\n \"_id\",\n \"type\",\n \"i18n.fallback.name\",\n \"i18n.en.name\",\n \"i18n.nl.name\",\n \"i18n.de.name\",\n \"i18n.fr.name\",\n \"i18n.es.name\",\n \"i18n.it.name\",\n \"geonamesId\",\n \"accommodationCount\"\n ]\n },\n \"analyzers\": [\n ...\n", "text": "I apply an atlas search index with the following command:atlas clusters search indexes create --clusterName ${MONGO_CLUSTER_NAME} --file ${FILE}my index file contains the following section for storedSource definition:The storedSource definition is completely omitted when the index is created. If I use the exact same code to create an index in the UI it does work…Is this a bug or do I need to defnie it a separate way?", "username": "Ruud_van_Buul" }, { "code": "", "text": "Hi @Ruud_van_Buul , storedSource is not currently supported by the Atlas CLI. This is a known gap that we are looking into prioritizing, you can vote for the feature here and stay up to date on its progress.In the meantime, would creating search indexes programatically through mongosh or drivers (use the dropdown in the top right to change languages) work for you?", "username": "amyjian" }, { "code": "", "text": "Thanks Amy. I’ll have to look into that then", "username": "Ruud_van_Buul" } ]
Atlas clusters search indexes create omits storedSource from definition
2023-11-01T15:53:59.682Z
Atlas clusters search indexes create omits storedSource from definition
112
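For the mongosh route Amy mentions, something along these lines may work, assuming a mongosh and cluster version recent enough to support search index management commands; the definition is abbreviated from the one in the question.

```js
// Abbreviated storedSource definition; extend it with the analyzers from the original file.
db.getSiblingDB("core").locations.createSearchIndex("default", {
  mappings: { dynamic: true },
  storedSource: { include: ["_id", "type", "geonamesId", "accommodationCount"] }
});
```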
null
[ "java" ]
[ { "code": "", "text": "Im trying to Migrate the Mongodb from 3.4 to mongodb 6, Need to know about the Compatible mongo-java-driver for java 8 applicationMongodb dependencies used in application pointing to mongodb 3.4\n\norg.mongodb\nmongo-java-driver\n3.2.2\n\n\norg.springframework.data\nspring-data-mongodb\n1.8.4.RELEASE\nwhile pointing to Mongodb 6 facing issues due to old driver, need info on latest Latest drivers", "username": "Vethaprasadh_27531" }, { "code": "", "text": "This is the link that you are looking for:", "username": "tapiocaPENGUIN" } ]
I'm trying to migrate MongoDB from 3.4 to MongoDB 6, need to know about the compatible mongo-java-driver for a Java 8 application
2023-11-01T19:59:43.203Z
I'm trying to migrate MongoDB from 3.4 to MongoDB 6, need to know about the compatible mongo-java-driver for a Java 8 application
124
null
[ "sharding" ]
[ { "code": "", "text": "We have a few large collections and were devising strategies to partition those into smaller collections for better CRUD performance. Especially because we generally access data for a particular logical group (customer, etc.) together.We are considering chunking data with a logical partitioning key to get the performance benefit without client code needing to be aware of underlying chunking. Would chunking give desired benefits even if we have a single server (one shard to start with, can add more as data further increased)? Benefits due to physically partitioned data, better locality of access, relatively uniform chunk sizing, smaller indexes per chunk?", "username": "Parikshit_Samant" }, { "code": "", "text": "Hi @Parikshit_Samant, welcome to the community!I believe the performance of your database could be worse using a shared cluster with only one shard when compared with a standard replica set cluster. Sharding adds more load to your replica set because it will need to check the range keys/chunks of data and split them when required. Another issue is that a sharded cluster needs other entities to work, config servers, and Mongos, increasing the cost of your deployment. Considering that you have only one shard, no performance gain would be provided, and the cost would increase.Relating to partitioning your data, you can do something similar to it with your indexes. Using a Partial Index to index only documents that match a filter.", "username": "Artur_57972" } ]
Can chunked collections help even with single shard
2023-11-01T12:42:52.727Z
Can chunked collections help even with single shard
107
null
[ "queries", "performance" ]
[ { "code": "collection.find({\n \"$or\": [\n {\n \"groupId\": \"ObjectId(\\\"552d1d24dc1c586b09d2d051\\\")\",\n \"isVisible\": false,\n \"isVisibleForPagination\": true,\n \"type\": {\n \"$nin\": [\n 5,\n 10\n ]\n },\n \"score\": {\n \"$gt\": 0\n },\n \"_id\": {\n \"$gt\": \"ObjectId(\\\"65368d780000000000000000\\\")\"\n }\n },\n {\n \"groupId\": \"ObjectId(\\\"5af4501a0a592c0014cbceae\\\")\",\n \"isVisible\": false,\n \"isVisibleForPagination\": true,\n \"type\": {\n \"$nin\": [\n 5,\n 10\n ]\n },\n \"score\": {\n \"$gt\": 0\n },\n \"_id\": {\n \"$gt\": \"ObjectId(\\\"652c52d00000000000000000\\\")\"\n }\n },\n {\n \"groupId\": \"ObjectId(\\\"5f5b5d305c2ecc001dd170f1\\\")\",\n \"isVisible\": false,\n \"isVisibleForPagination\": true,\n \"type\": {\n \"$nin\": [\n 5,\n 10\n ]\n },\n \"score\": {\n \"$gt\": 0\n },\n \"_id\": {\n \"$gt\": \"ObjectId(\\\"652ea4780000000000000000\\\")\"\n }\n },\n {\n \"groupId\": \"ObjectId(\\\"6017c6612e2fc21280381692\\\")\",\n \"isVisible\": false,\n \"isVisibleForPagination\": true,\n \"type\": {\n \"$nin\": [\n 5,\n 10\n ]\n },\n \"score\": {\n \"$gt\": 0\n },\n \"_id\": {\n \"$gt\": \"ObjectId(\\\"652ea4780000000000000000\\\")\"\n }\n },\n {\n \"groupId\": \"ObjectId(\\\"603e4e3f825ea5002321b20e\\\")\",\n \"isVisible\": false,\n \"isVisibleForPagination\": true,\n \"type\": {\n \"$nin\": [\n 5,\n 10\n ]\n },\n \"score\": {\n \"$gt\": 0\n },\n \"_id\": {\n \"$gt\": \"ObjectId(\\\"652ea4780000000000000000\\\")\"\n }\n }\n ]\n}).sort({ score: -1, _id: -1 }).limit(20)\n", "text": "I need some help on boosting performance of my query, which is used for a pagination purposes.\nI’m curious if I could create an appropriate index for the query (probably with avoiding sort_merge somehow), but I wasn’t able to find an index to not face with merge sorting since each subquery in the $or condition has its own greater than condition for an _id field and I can’t get rid of it since it’s a big part of bussiness logic.I created the following index for the query:\ngroupId: 1, isVisible: 1, isVisibleForPagination: 1, score: -1, _id: 1, type: -1performance with the index:\n“nReturned” : 20.0,\n“executionTimeMillis” : 235.0,\n“totalKeysExamined” : 85098.0,\n“totalDocsExamined” : 20.0Additional info, isVisible and isVisibleForPagination truthy in most cases and a real query contains 50 subqueries in $or condition and all fields are the same within or conditions except of groupId and _idstages:\n“sortPattern” : {\n“score” : -1.0,\n“_id” : -1.0\n},\nand 50 scans of index mentioned below.Here is the query, but with 5 or conditions instead of 50. So, can anyone help finding a great index for the usecase or I should have done a migration to mark if the doc is greater than the id to create an additional field (the _id condition is a static for the whole group)?:", "username": "danilastrizenok" }, { "code": "{ isVisible:false, isVisibleForPagination:true, score:{$gt:0} }\n", "text": "The first thing would be to factor to common conditions so they are not repeated. It would make the whole thing easier to understand.The $nin is not very selective so it is usually slower than a $in of the complementary values.You could replace the normal compound index with a partial index that could haveas the partialFilterExpression.", "username": "steevej" } ]
Poor performance on query with sort by two fields and a huge $or condition
2023-11-01T08:32:37.384Z
Poor performance on query with sort by two fields and a huge $or condition
126
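A sketch of the partial index steevej suggests (the key order is an assumption): only documents matching the conditions shared by every $or branch are kept in the index.

```js
db.collection.createIndex(
  { groupId: 1, score: -1, _id: -1 },
  {
    partialFilterExpression: {
      isVisible: false,
      isVisibleForPagination: true,
      score: { $gt: 0 }
    }
  }
);
```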
null
[ "app-services-cli" ]
[ { "code": "failed to download Realm CLI: Error: Only Linux 64 bits supportedrpm -q system-releaseuname -mnode -vnode -p \"process.arch\"npm -vsudo npm install -g mongodb-realm-clinpm ERR! code 1\nnpm ERR! path /usr/lib/node_modules/mongodb-realm-cli\nnpm ERR! command failed\nnpm ERR! command sh -c node install.js\nnpm ERR! failed to download Realm CLI: Error: Only Linux 64 bits supported.\nnpm ERR! at getDownloadURL (/usr/lib/node_modules/mongodb-realm-cli/install.js:24:13)\nnpm ERR! at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n\nnpm ERR! A complete log of this run can be found in: /root/.npm/_logs/2023-11-01T16_32_51_933Z-debug-0.log\n100 timing command:install Completed in 1600ms\n101 verbose stack Error: command failed\n101 verbose stack at ChildProcess.<anonymous> (/usr/lib/node_modules/npm/node_modules/@npmcli/promise-spawn/lib/index.js:53:27)\n101 verbose stack at ChildProcess.emit (node:events:514:28)\n101 verbose stack at maybeClose (node:internal/child_process:1105:16)\n101 verbose stack at ChildProcess._handle.onexit (node:internal/child_process:305:5)\n102 verbose pkgid mongodb-realm-cli@2.6.2\n103 verbose cwd /home/ec2-user\n104 verbose Linux 6.1.55-75.123.amzn2023.aarch64\n105 verbose node v20.9.0\n106 verbose npm v10.1.0\n107 error code 1\n108 error path /usr/lib/node_modules/mongodb-realm-cli\n109 error command failed\n110 error command sh -c node install.js\n111 error failed to download Realm CLI: Error: Only Linux 64 bits supported.\n111 error at getDownloadURL (/usr/lib/node_modules/mongodb-realm-cli/install.js:24:13)\n111 error at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n112 verbose exit 1\n113 timing npm Completed in 1871ms\n", "text": "According to the product matrix at https://www.mongodb.com/docs/manual/administration/production-notes/#platform-support-matrix Amazon Linux 2023 supports both Community and Enterprise editions of MongoDB 7.0 on arm64. However, I’m unable to install realm-cli on Amazon Linux 2023, with the error failed to download Realm CLI: Error: Only Linux 64 bits supported. My architecture is aarch64 (arm64).OS Version: Amazon Linux 2023.2.20231026 (rpm -q system-release)\nOS Architecture: aarch64 (uname -m)\nNode Version: v20.9.0 (node -v)\nNode Architecture: arm64 (node -p \"process.arch\")\nNPM Version: 10.1.0 (npm -v)Installation Command: sudo npm install -g mongodb-realm-cliOutput:Log snippet:", "username": "Ben_Giddins" }, { "code": "git clone git@github.com:10gen/realm-cli.git\ngo build -v -o realm-cli main.go\n", "text": "Ok, was able to build it from source for anyone else running into this.", "username": "Ben_Giddins" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Cannot install realm-cli on Amazon Linux 2023 (aarch64) - not supported?
2023-11-01T16:36:16.148Z
Cannot install realm-cli on Amazon Linux 2023 (aarch64) - not supported?
134
null
[ "aggregation", "queries" ]
[ { "code": "[{\"_id\":\"653d383456aa701344da1330\",\"bookName\":\"Test Book\",\"ownerId\":\"AijoAaT6VAQYJfnxwi7Krxl65C73\"},\n{\"_id\":\"653bddf92ff8543b0c418220\",\"bookName\":\"Test Programming\",\"ownerId\":\"AijoAaT6VAQYkQExwi7Krxl65C73\"},\n{\"_id\":\"6534069f1241c256efb3f91a\",\"bookName\":\"Test\",\"ownerId\":\"AijoAaT6VAQYkQExwi7Krxl65C73\"}]\n\"ownerId\":\"AijoAaT6VAQYkQExwi7Krxl65C73\"\n\"ownerName\":\"Jeff\"\n", "text": "Hi there.\nI have a search function that currently returns an array of records from a collection that match a criteria.Here is a simplified result:After getting this result, I now need to loop through each record in the returned result array, use the ownerId to look up another collection named “owners”, find the name associated with that “ownerId”, and then replace “ownerid” with the result of this lookup. As in, for each record in the result array, instead of a field like:I need:Pretty new to this stuff, so would appreciate any and all advice. Have read into $lookup but not entirely sure.", "username": "archit" }, { "code": "", "text": "We need more details.How do you obtain the array of documents with the field ownerId?You mentioned of function but it is not clear what driver you are using. It is not clear if it is an aggregation or a simple query. You could share the code to see how it could be modified to do the $lookup.", "username": "steevej" } ]
Looking up field in another Collection and replacing it with lookup value
2023-10-31T15:44:58.084Z
Looking up field in another Collection and replacing it with lookup value
127
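A hypothetical aggregation for the lookup being asked about, assuming the owners collection stores documents shaped like { _id: "AijoAaT6...", ownerName: "Jeff" }; adjust the foreignField and projection to the real schema.

```js
db.books.aggregate([
  { $match: { /* existing search criteria */ } },
  { $lookup: { from: "owners", localField: "ownerId", foreignField: "_id", as: "owner" } },
  { $set: { ownerName: { $first: "$owner.ownerName" } } }, // $first requires MongoDB 4.4+
  { $unset: ["owner", "ownerId"] }
]);
```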
null
[ "node-js", "mongoose-odm" ]
[ { "code": "", "text": "Hey guys. I am using mongodb with my nodejs application, and I am keeping receiving this connection closed error:“connection x to IP:27017 closed”Every once in a while I am getting this error. I dont know if this is something happening on my code-setup (using nestjs with mongoose), or if is something configurable at atlas side.Someone already had this error?Edit: I already whitelisted the ips. I can connected but my app keep getting disconnected.", "username": "JulioW" }, { "code": "", "text": "Hello @JulioW ,If you have already whitelisted your current IP address and still getting the issue then I would recommend you to check a few things such as:If you’ve checked all of the above and the issue still persists, it might be helpful to provide more information about your specific setup, how frequently you’re experiencing disconnections and code so that a more detailed analysis can be conducted. Additionally, check MongoDB’s logs for any related error messages that might provide more insight into the issue.Regards,\nTarun", "username": "Tarun_Gaur" } ]
Connection keep being closed
2023-10-25T23:27:40.476Z
Connection keep being closed
185
null
[ "node-js" ]
[ { "code": "", "text": "var MongoClient = require(‘mongodb’).MongoClient;\n// Connect to the db\nMongoClient.connect(“mongodb://127.0.0.1:27017/newdb”,function(err, db) {\nif(!err) {\nconsole.log(“You are connected!”);\n};\ndb.close();\n});\ni am not getting any output on the terminal", "username": "Jeny_Jijo" }, { "code": "", "text": "Hey @Jeny_Jijo,i am not getting any output on the terminalPlease refer to the NodeJs Quick Start documentation for the code snippet and reference, and make sure you have MongoDB, Node, and npm correctly installed on your system.Please feel free to reach out in case of further questions.Best regards,\nKushagra", "username": "Kushagra_Kesav" } ]
Mongodb connection string to nodejs
2023-10-31T13:24:11.878Z
Mongodb connection string to nodejs
122
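For reference, a minimal async/await version of the same connection test with the current Node.js driver looks roughly like this (local connection string as in the question).

```js
const { MongoClient } = require("mongodb");

async function main() {
  const client = new MongoClient("mongodb://127.0.0.1:27017/newdb");
  try {
    await client.connect();
    console.log("You are connected!");
  } finally {
    await client.close();
  }
}

main().catch(console.error);
```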
null
[ "react-native" ]
[ { "code": "refreshCustomData()", "text": "I’m building an app in React Native, in which I am now implementing a service where users can authenticate themselves to sync data, and share this with a list of friends they can manage.When adding or removing a friend, my app needs to basically invalidate and reload its custom user data. The reason for this is, that we are modifying the access of a user to other custom data in the backend, and that the frontend needs to become aware of this change.Realm’s refreshCustomData() function does detect the new friend in our user’s object, but we cannot read it’s data yet since the updated permissions aren’t loaded into our front-end yet. I need some way to invalidate, refresh, re-authenticate or just reload the entire user into my realm context, not just the custom data. This does not seem to be documented.How I achieve this?", "username": "Joris_Dijkstra1" }, { "code": "", "text": "To update your view after a custom user data change, you will need to pause and resume your sync session.\nThe team is planning an improvement that will eliminate the need for this workaround, but until that is ready this is the best way to have your application react to changes in custom user data and permissions", "username": "Sudarshan_Muralidhar" }, { "code": "", "text": "Hi @Sudarshan_Muralidhar, we found that pause and resume did not work, however we’ve been able to come up with another workaround by closing and then re-opening our realm.", "username": "Joris_Dijkstra1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Realm in React Native: Reload a user
2023-11-01T10:17:59.848Z
MongoDB Realm in React Native: Reload a user
129
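A rough sketch of the close/re-open workaround mentioned in the last reply, using plain Realm JS SDK calls; in a @realm/react app the RealmProvider owns the realm instance, so the exact mechanics (for example remounting the provider) will differ.

```js
// `user`, `realm` and `config` come from the application's existing Realm setup.
await user.refreshCustomData();            // pull the updated custom user data
realm.close();                             // drop the current sync session
const reopened = await Realm.open(config); // re-opening picks up the new permissions
```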
null
[ "atlas-search", "atlas" ]
[ { "code": "", "text": "I implemented atlas search in my project, but found out that lucene.standard doesn’t do any icuFolding. Can I add icuFolding to lucene.standard with a custom analyzer?If I have to recreate it completely, can I find the settings for lucene.standard somewhere?", "username": "Ruud_van_Buul" }, { "code": "lucene.standardstandardlowercaseicuFolding", "text": "Hi @Ruud_van_Buul - Good question. The lucene.standard analyzer underneath is the standard tokenizer and a lowercase filter. So yes, you could create a custom analyzer that used those two plus the icuFolding filter.", "username": "Erik_Hatcher" }, { "code": "", "text": "Thanls Eric! That did the trick", "username": "Ruud_van_Buul" } ]
Is there a way to add ICU folding to lucene.standard?
2023-10-20T09:37:42.801Z
Is there a way to add ICU folding to lucene.standard?
218
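A sketch of an index definition following Erik's suggestion: a custom analyzer that combines the standard tokenizer and lowercase filter (what lucene.standard uses) with icuFolding; the analyzer name is a placeholder.

```js
// Sketch of an Atlas Search index definition using the custom analyzer.
const indexDefinition = {
  mappings: { dynamic: true },
  analyzer: "standardWithFolding",
  analyzers: [
    {
      name: "standardWithFolding",
      tokenizer: { type: "standard" },
      tokenFilters: [
        { type: "lowercase" },
        { type: "icuFolding" }
      ]
    }
  ]
};
```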
null
[ "compass", "atlas-cluster", "kotlin" ]
[ { "code": "", "text": "Hellow,PROBLEM: I can´t connect Mongo Compass with mongoDB Atlas (community edition). I use kotlin driver. I use IntelliJ editor.\nThe connection provided by MongoDB Atlas is:\nmongodb+srv://:@cluster0.p8clp7f.mongodb.net/?retryWrites=true&w=majorityI completed the user with my data in Mongo Atlas and added a password. The mongoDB works fine.I tried to upload here the compiler result to show the data, i couldn´t because I new .I download mongo compass, update to 1.40.4 edition.\nwhen open Compass from my desktop, it shows:URI: mongodb://localhost:27017/I read the documentation, but honesty I couldn´t solve by myself.this is primary connection in mongodb atlas in cluster0 project:\nac-7fa6plv-shard-00-02.p8clp7f.mongodb.net:27017ac-7fa6plv-shard-00-02.p8clp7f.mongodb.net:27017which data i have to include in the URI to connect Compass with my Mongodb cluster0?\nThanks a lot.", "username": "Germansdev_N_A" }, { "code": "", "text": "which data i have to include in the URI to connect Compass with my Mongodb cluster0?If you’re trying to connect with the cluster0 hosted in Atlas, then it would probably be:The connection provided by MongoDB Atlas is:\nmongodb+srv://:@cluster0.p8clp7f.mongodb.net/?retryWrites=true&w=majorityIf you’re having trouble using that connection string and Compass then please provide any error messages.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Dear Jason,\nyour suggest works pretty fine.\nI had forgotten include\"+srv\" at the beginning.\nThanks a lot.\nGermansdev", "username": "Germansdev_N_A" }, { "code": "", "text": "", "username": "Kushagra_Kesav" } ]
MongoDB Atlas community edition (Kotlin) connection with MongoDB Compass 1.40.4 edition
2023-10-31T14:00:17.474Z
MongoDB Atlas community edition (Kotlin) connection with MongoDB Compass 1.40.4 edition
140
null
[ "kubernetes-operator" ]
[ { "code": "", "text": "Hello there,\nI recently discovered a very nice project of mongodb: mongodb-kubernetes-operator which allows to run mongodb cluster on own kubernetes cluster.\nFor most of my workload I’m using spot-instances which are pricing with 60-90% of discount, but they are preemptible and can be terminated at any point of time. Since web services are stateless this is not a problem for us, but just a requirement to have at least 2 nodes and 2 pods distributed among them (which is anyway a requirement for any production workload).\nNow I’m wondering is it makes sense to run MongoDB also on spot instances: just from the theory it should work, when the node (primary or secondary) is terminated it shouldn’t cause any downtime as the load will be transferred.\nBut I’m not sure how much data might be corrupted (of course the ssd disks are permanent for these nodes).Are there any insights or case studies available, or does anyone experience similar setup, would be interesting to learn.", "username": "Sergey_Zelenov" }, { "code": "", "text": "I honestly don’t know enough about spot instances to know how feasible that would be! Very interesting question though…!!If you lose a node and a pod, what happens to the storage? Will the pod come up again somewhere else with the storage (theoretically) intact? (I’m not sure if that’s what you meant by the SSDs being permanent)I’m definitely not aware of anyone else running on spot instances!", "username": "Dan_Mckean" }, { "code": "", "text": "Hi Dan,\nI also like the idea and I think it worths to give a shot The storage is provided by the cloud provider (GCP in my case), and they are independent of kubernetes, instead they are mounted to nodes and can be used by its’ pods.\nSo yes, disks are permanent in a sense that they survive node/pod terminations.", "username": "Sergey_Zelenov" }, { "code": "", "text": "Hmm… should work then!I can’t advise on the disk corruption… but I would imagine that this scenario is no different to hardware or VMs, where you might lose one of the replica set members and then bring it back up.If you try it, let us know how you get on!", "username": "Dan_Mckean" }, { "code": "", "text": "@Dan_Mckean switched production traffic today morning, fingers crossed \nwe have around 250-300 rps, and around 50GB of data at the moment\nI set 3 spot instances c2-standard-4 (4vCPU, 16GB RAM) for the cluster", "username": "Sergey_Zelenov" }, { "code": "", "text": "Nice!! Good luck, let us know how it goes and we can mention that it works () and give you an honourable mention for being the first to try it! ", "username": "Dan_Mckean" } ]
Run Cluster on Spot instances?
2023-10-30T12:45:09.759Z
Run Cluster on Spot instances?
173
null
[ "python", "atlas-cluster" ]
[ { "code": "", "text": "pymongo.errors.ServerSelectionTimeoutError: SSL handshake failed: my-clus1-shard-00-00-pri.xyz.mongodb.net:27017: [SSL: WRONG_VERSION_NUMBER]Flask==2.0.2\nFlask-PyMongo==2.2.0\nrequests==2.20.1\npymongo[tls,srv]==4.0.1Any idea why the wrong ssl is from the packages or something else", "username": "Howard_Hill" }, { "code": "", "text": "What operating system, Python version and MongoDB version, and are you trying to connect through a proxy?", "username": "Bernie_Hackett" }, { "code": "", "text": "Docker\nFROM python:3.6.8-alpine3.9WORKDIR /appCOPY requirements.txt .RUN pip install -r requirements.txtRUN pip install gunicornCOPY flaskr/bar.py .EXPOSE 5000CMD [“python”, “bar.py”]", "username": "Howard_Hill" }, { "code": "", "text": "What version of MongoDB and where is MongoDB running?", "username": "Bernie_Hackett" }, { "code": "", "text": "I found that the ssl issue is a red herring , i was trying to connect to a db in a different region and the error presented as such.Its very strange and dishearting none the less everything is working now", "username": "Howard_Hill" }, { "code": "", "text": "how did you fixed it?, i have the same error", "username": "Maayan_Cohen1" }, { "code": "", "text": "It may be due to the OpenSSL updates. If you are on ubuntu:22.04, upgrade to ubuntu:23.x. However what OS are you on right now and the mongodb version now?", "username": "Sibidharan_Nandakumar" }, { "code": "", "text": "Hi, check your network access configuration in Atlas in case you are connecting to it.In my case, I was connecting to an instance in Atlas. As long as I have added the new IP, I was using from my network, I could connect", "username": "Stephen_Vincent_Strange" }, { "code": "", "text": "YES!!! I encountered the same error and after 2 hours of troubleshooting this is what solved my issue. Just add your IP to the cluster", "username": "Vedant_Chaskar" } ]
pymongo.errors.ServerSelectionTimeoutError: SSL handshake failed
2022-03-02T13:23:42.061Z
pymongo.errors.ServerSelectionTimeoutError: SSL handshake failed
5,989
null
[ "node-js" ]
[ { "code": "const storage = new GridFsStorage({\n url: process.env.MONGO_CONN_URL,\n file: (req, file) => {\n return new Promise( async (resolve, reject) => {\n\n const filename = file.originalname;\n const shortname = await hashName();\n\n console.log(req.body);\n\n const expTime = new Date(req.body.expiryTime);\n const noOfDown = parseInt(req.body.noOfDownload);\n const isPublic = req.body.isPublic==='true';\n const passwordValue = req.body.password;\n\n var uploadedBy = 'anonymous';\n if(req.cookies!==undefined && req.cookies.pass!==undefined){\n try{\n const token = req.cookies.pass;\n const payload=jwt.verify(token,process.env.JWT_PRIVATE_KEY);\n uploadedBy = payload.userId;\n }catch(err){\n console.log('catch block executed');\n \n }\n }\n\n const fileInfo = {\n filename: filename,\n metadata:{\n shortname: shortname,\n expiryTime: expTime,\n noOfDownload: noOfDown,\n isPublic: isPublic,\n password: passwordValue,\n uploadedBy: uploadedBy\n },\n bucketName: \"newBucket\"\n };\n resolve(fileInfo);\n });\n }\n }); \n\n const upload = multer({\n storage\n });\n\napp.post('/upload',upload.single(\"fileUpload\"),async (req,res)=>{\n\n const file = req.file;\n \n console.log(\"File has been uploaded. File id: \"+file.metadata.shortname);\n res.send(file.metadata.shortname)\n\n});\n", "text": "How to handle MongoServerError: you are over your space quota, using 514 MB of 512 MB\nin nodejs server while uploading a file using multer-gridfs-storage library to avoid server crashHere is my code:Github link: https://github.com/Dy-123/File_Sharing/blob/master/backend/index.jsI tried storage.on streamError and dbError but it is not working", "username": "Divyanshu_N_A1" }, { "code": "try {\n // ... your file upload logic\n} catch (error) {\n if (error instanceof MongoServerError && error.code === 8000) {\n return res.status(500).json({ error: 'MongoDB space quota exceeded' });\n } else {\n // Handle other errors\n console.error('An error occurred during file upload:', error);\n return res.status(500).json({ error: 'An error occurred during file upload' });\n }\n}\n", "text": "Hey @Divyanshu_N_A1,Welcome to the MongoDB Community forums!How to handle MongoServerError: you are over your space quota, using 514 MB of 512 MB\nin nodejs serverWhen your storage exceeds the allocated limit, it triggers an error with an Atlas error code 8000. To handle this error, you can capture the error code and integrate the following Node.js code snippet into your applicationIf you require additional storage capacity, you can expand it by upgrading your Atlas Tier. Additionally, considering that you are uploading images, I recommend storing these images in an AWS S3 bucket or similar object storage solution and storing references to them within your documents for better scalability and cost-efficiency.However, in case of further assistance, please share your use case and details related to it, which will help the community to assist you better.Best regards,\nKushagra", "username": "Kushagra_Kesav" } ]
How to handle "MongoServerError" which is causing server crash
2023-10-30T04:41:15.446Z
How to handle &ldquo;MongoServerError&rdquo; which is causing server crash
161
null
[ "node-js", "react-js" ]
[ { "code": "", "text": "Hi everyone,\nI am working in mern stack web app i have a collection of name “Orders” I want to send notification to admin when ever any document in this collectionupdated/added or deleted how i can do this please", "username": "ali_aboubakr" }, { "code": "", "text": "try change stream from mongodb", "username": "Kobe_W" }, { "code": "", "text": "How i can do this can you provide me link for video tutorial or documentation for this", "username": "ali_aboubakr" }, { "code": "", "text": "Hi @ali_aboubakr and welcome to MongoDB community forums!!As @Kobe_W correctly mentioned, you can make use of the change streams in MongoDB to view the documents when undergo any operation.Please visit the documentation for Change Steams and let us know if you encounter any issues.Warm Regards\nAasawari", "username": "Aasawari" } ]
How to send notification to user when document updated
2023-10-29T07:38:57.152Z
How to send notification to user when document updated
203
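A minimal Node.js sketch of the change stream suggestion, where notifyAdmin() is a placeholder for whatever email/push/WebSocket mechanism the app uses.

```js
const changeStream = db.collection("Orders").watch([], { fullDocument: "updateLookup" });

changeStream.on("change", (event) => {
  // event.operationType is "insert", "update", "replace" or "delete"
  notifyAdmin({
    type: event.operationType,
    orderId: event.documentKey._id,
    order: event.fullDocument // populated for inserts and, with updateLookup, for updates
  });
});
```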
null
[]
[ { "code": "schoolschoolelementary schoolhigh schoolhigh schoolschoolelementary schoolequals", "text": "Looking at the example on https://www.mongodb.com/docs/atlas/atlas-search/embedded-document/ we search for for items tagged school. This search would include items tagged with school, elementary school and high school. If I wanted to search just for items tagged with high school using this query it would return the items tagged with school, elementary school as well as we’re doing a text query. equals seems like an logical operator but it doesn’t work with strings. So the question is how can we do search for an exact match on an embedded document?", "username": "Daniel_Reuterwall" }, { "code": "{\n name: \"default\",\n mappings: {\n dynamic: true,\n fields: {\n \"items\": {\n type: \"embeddedDocuments\",\n dynamic: true,\n fields: {\n \"tags\": {\n type: \"string\",\n analyzer: \"lucene.keyword\"\n }\n }\n }\n }\n }\n", "text": "Hi @Daniel_Reuterwall , you can achieve an exact text match with embedded documents using the keyword analyzer just like you would with a regular string field. In the example, the search index would be modified to look like this:You can learn more about ways to perform exact matching in Atlas Search here.", "username": "amyjian" }, { "code": "", "text": "Thanks Amy, just what I needed.\nAlso, great tip with the linked article. Very informative!Have a great day!", "username": "Daniel_Reuterwall" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Exact match on embedded documents in atlas search
2023-10-27T08:17:01.648Z
Exact match on embedded documents in atlas search
225
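With the keyword-analyzed field from Amy's index, an exact-tag query could look roughly like this (the collection name is a placeholder); because lucene.keyword indexes the whole string as one token, "high school" no longer matches plain "school".

```js
db.schools.aggregate([
  {
    $search: {
      index: "default",
      embeddedDocument: {
        path: "items",
        operator: {
          text: { path: "items.tags", query: "high school" }
        }
      }
    }
  }
]);
```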
null
[ "java", "spring-data-odm" ]
[ { "code": "\"message\": \"Timed out after 30000 ms while waiting to connect. Client view of cluster state is {type=UNKNOWN, servers=[{address=localhost:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused}}]; nested exception is com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting to connect. Client view of cluster state is {type=UNKNOWN, servers=[{address=localhost:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused}}]\",\n", "text": "HI i am using Spring boot i started facing above issue.Back my application was runing fineError Message:", "username": "Bhavya_Bhatt" }, { "code": "java.net.ConnectException: Connection refused", "text": "java.net.ConnectException: Connection refusedThis should be the root cause. Where is your mongodb process running?", "username": "Kobe_W" }, { "code": "", "text": "Mongodb process running at altas AWSjava.net.ConnectException: Connection refusedHow can i solve the error", "username": "Bhavya_Bhatt" } ]
com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by java.net.ConnectException: Connection refused
2023-10-31T09:17:54.473Z
com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by java.net.ConnectException: Connection refused
167
null
[ "atlas-cli" ]
[ { "code": "atlas deployments list\nError: exit status 125:\n", "text": "Hi allFollowing the notes from:introducing-local-development-experience-atlas-search-vector-search-atlas-cliThis is on ARM based MBP\nNote: I a logged in.G", "username": "georgelza" }, { "code": "", "text": "Hi George,Thanks for sharing it. Have you had a chance to try troubleshooting steps?Additionally, does this error occur also on other commands, e.g. “atlas deployments setup”?\nIf so, could you run the affected command with “–debug” and share the output?Thanks,\nJakub", "username": "Jakub_Lazinski" }, { "code": "", "text": "atlas deployments setup seems to be working…Screenshot 2023-10-27 at 08.11.491738×582 48.1 KBatlas deployments setup", "username": "georgelza" }, { "code": "", "text": "Thanks for checking it.\nWhat was the output of troubleshooting steps I’ve shared in my previous message?", "username": "Jakub_Lazinski" }, { "code": "", "text": "sorry, havent gotten to the steps, been a bit side swiped with work, will try and get to it today.G", "username": "georgelza" }, { "code": "", "text": "quickly had a look and notice you use podman for atlas deployments, I don’t have podman installed, only use docker on my MAC.but the curious thing, all i’m doing is a atlas list, which at min should respond with my atlas environment in the cloud.G", "username": "georgelza" } ]
Mongo Atlas CLI
2023-10-26T08:55:36.772Z
Mongo Atlas CLI
256
null
[ "node-js", "swift", "php", "cxx" ]
[ { "code": "", "text": "Hello There,can anyone help me to check where to check package version in mongodbI want to check below driver version\nfor CVE-2021-32050\nMongoDB C Driver 1.0.0 prior to 1.17.7, MongoDB PHP Driver 1.0.0 prior to 1.9.2, MongoDB Swift Driver 1.0.0 prior to 1.1.1, MongoDB Node.js Driver 3.6 prior to 3.6.10, MongoDB Node.js Driver 4.0 prior to 4.17.0 and MongoDB Node.js Driver 5.0 prior to 5.8.0. This issue also affects users of the MongoDB C++ Driver dependent on the C driver 1.0.0 prior to 1.17.7 (C++ driver prior to 3.7.0).also help in CVE-2023-43651Regards,\nSAM", "username": "sameer_khamkar" }, { "code": "package.jsonpackage-lock.jsonmongodb", "text": "Hey @sameer_khamkar,also help in CVE-2023-43651 CVE-2023-43651 appears to be a CVE for jumpserver.I want to check below driver version\nfor CVE-2021-32050As for CVE-2021-32050, it would depend on what language you’re specifically interested in. For example, if you have a Node.js application you can check the package.json or package-lock.json to see what version of the mongodb package is installed.", "username": "alexbevi" }, { "code": "", "text": "The mongod server logs and $currentOpCan be used to get an idea of what driver version has previously and is currently connected to your cluster if you don’t have access to the code or wish to supplement that information.", "username": "chris" }, { "code": "", "text": "Thank you @alexbevi for your guidance", "username": "sameer_khamkar" }, { "code": "", "text": "Thank you @chris for your guidance", "username": "sameer_khamkar" } ]
Guidance for CVE-2023-43651 & CVE-2021-32050
2023-10-31T06:08:40.114Z
Guidance for CVE-2023-43651 &amp; CVE-2021-32050
181
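A sketch of the $currentOp idea from chris's reply: connected clients report driver name and version in their metadata, which can be inspected with an aggregation on the admin database (requires appropriate privileges; field availability varies by server version).

```js
db.getSiblingDB("admin").aggregate([
  { $currentOp: { allUsers: true, idleConnections: true } },
  { $project: { client: 1, appName: 1, clientMetadata: 1 } }
]);
```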
null
[ "queries", "atlas-search" ]
[ { "code": " $search: {\n \"index\": \"fulltext\",\n \"text\": {\n \"query\": \"His my foot hurts orthopedic\",\n \"path\": [\"locationName\", \"practicingSpecialty\"],\n //\"synonyms\": \"synonyms1\"\n }\n }\n $search: {\n \"index\": \"fulltext\",\n \"text\": {\n \"query\": \"His my foot hurts orthopedic\",\n \"path\": [\"locationName\", \"practicingSpecialty\"],\n \"synonyms\": \"synonyms1\"\n }\n }\n {\n \"mappingType\": \"equivalent\",\n \"synonyms\": [\"knee\", \"Orthopedic\", \"Orthopedics\", \"Orthopedist\"]\n }\n{\n \"mappings\": {\n \"dynamic\": true\n },\n \"synonyms\": [\n {\n \"name\": \"synonyms1\",\n \"analyzer\": \"lucene.standard\",\n \"source\": {\n \"collection\": \"searchSynonyms\"\n }\n }\n ]\n}\n{\n \"gender\": \"Female\",\n \"practicingSpecialty\": \"Orthopedics\",\n \"practiceType\": \"Medical\",\n}\n", "text": "I am having trouble with synonymsWhen I apply the following query, I get search results.However, when I apply the following query, I get no search results at all::I have one synonym in my collection:My index definition looks like the following:When I apply a query the gets an “exact” match in a field then I get search results, but if there is not an exact match I get no results. I want to get search results where I have a fields that looks like this:The ‘orthopedic’ term in the query above should match on Orthopedics in the practicingSpecialty field. Also, if I used the term ‘knee’, my synonym should apply ‘orthopedics’ and also get a match.Wondering what I might be doing wrong?", "username": "Shon_Schetnan" }, { "code": "texttexttext[\n {\n $search: {\n index: \"test\",\n compound: {\n should: [\n {\n text: {\n query:\n \"His my foot hurts orthopedic\",\n path: \"practicingSpecialty\",\n synonyms: \"mySynonyms\",\n },\n },\n {\n text: {\n query:\n \"His my foot hurts orthopedics\",\n path: \"practicingSpecialty\",\n },\n },\n ],\n },\n },\n },\n]\northopedics[\n {\n $search: {\n index: \"test\",\n compound: {\n should: [\n {\n text: {\n query: \"orthopedic\",\n path: \"practicingSpecialty\",\n synonyms: \"mySynonyms\",\n },\n },\n {\n text: {\n query:\n \"His my foot hurts orthopedic\",\n path: \"practicingSpecialty\",\n },\n },\n ],\n },\n },\n },\n]\n", "text": "Hi @Shon_Schetnan and welcome to MongoDB community forums!!Thats a very nice catch!!.I’ve raised this with the team internally but in the mean time you can share your requirements and use case on our MongoDB Feedback Engine.Meanwhile, as mentioned in the documentation for Atlas Search, the text queries work differently when used with synonym and without synonyms.\nThe text queries that use synonyms look for a conjunction (AND) of query tokens. text queries that don’t use synonyms search for a disjunction (OR) of query tokens. To run text queries that use synonyms and search for a disjunction (OR) of query tokens also, use the compound operator.To resolve your issue, you can try the query below and let us know if the expectations are met.In the above query, inside the should operator, the first part of the query which uses synonyms, treats each word of the query with AND whereas in the second case, it finds the exact match word orthopedics and returns the results.If I put the query as:the results are obtained from the first part of the should operator.Let us know if the above query helps.Warm Regards\nAasawari", "username": "Aasawari" } ]
Using synonyms stops fulltext search from working
2023-10-27T14:14:53.504Z
Using synonyms stops fulltext search from working
211
null
[ "aggregation", "time-series" ]
[ { "code": "$out: {\n db: \"mydatabase\",\n coll: \"weathernew\",\n timeseries: {\n timeField: \"ts\",\n metaField: \"metaData\"\n }\n }\n", "text": "Hello,I’m a newbie with MongoDB.\nI read in the limitation of timeseries “You cannot use the $out or aggregation pipeline stages to add data from another collection to a time series collection.”But in the documentation “Migrate Data into a Time Series Collection”, the example uses $out.When I write like in the example (with the version MongoDB 7.0.1):I have the error: $out supports only db and coll.When I remove the field “timeseries”, I read the error “error during aggregation :: caused by :: Taking mydatabase.weathernew lock for timeseries is not allowed”.My question:\nCan I use $out like the documentation “Migrate Data into a Time Series Collection”?\nIf yes, how can I resolve the error about the “lock for timeseries”?Thanks.", "username": "Karima_Rafes" }, { "code": "$outmongodumpmongorestore", "text": "Hi @Karima_Rafes and welcome to MongoDB community forums!!Using $out to Time Series collection has been updated in MongoDB version 7.0.3.The documentation “Migrate Data into a Time Series Collection” now recommends using $out to a “non-Time Series” collection to correctly transform data for a Time Series collection.\nPrior to version 7.0.3, you will have to migrate the data using mongodump and mongorestore to the time series collection. The detailed steps are mentioned in the documentation for MongoDB version 6.0.I understand the documentation does not clarify the version and this has been raised internally.However, to answer your questions:\n1.Can I use $out like the documentation “Migrate Data into a Time Series Collection”?If you are on version 7.0.3 and above, the answer would be yes. But prior to version 7.0.3, you will have to use to the dump/restore method to migrate.Let us know if you have any further questions.Warm Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Migrate Data into a Time Series Collection throws an error "taking collection lock for timeseries is not allowed"
2023-10-27T10:52:31.536Z
Migrate Data into a Time Series Collection throws an error &ldquo;taking collection lock for timeseries is not allowed&rdquo;
208
null
[ "replication", "connecting", "atlas-cluster" ]
[ { "code": "1.ExecuteAsync(IReadBinding binding, CancellationToken cancellationToken)\\\\\\\\n at MongoDB.Driver.OperationExecutor.ExecuteReadOperationAsync[TResult](IReadBinding binding, IReadOperation1.ExecuteReadOperationAsync[TResult](IClientSessionHandle session, IReadOperation1.UsingImplicitSessionAsync[TResult](Func", "text": "I have a multi region mongo atlas cluster of 3 nodes us-east-1(Primary), us-east-2, us-west-2.\n2 APIs in AWS in us-east-1 and us-west-2\nSetup two Peering Connection for eac VPC in my AWS infra. All necessary steps done.Problem - I can connect to Mongo from API/Lambda running in us-east-1 but cannot connect from us-west-2\"detail\":\"{\\\"ClassName\\\":\\\"System.TimeoutException\\\",\\\"Message\\\":\\\"A timeout occurred after 30000ms selecting a server using CompositeServerSelector{ Selectors = ReadPreferenceServerSelector{ ReadPreference = { Mode : Primary } }, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 }, OperationsCountServerSelector }. Client view of cluster state is { ClusterId : \\\\\\\"1\\\\\\\", ConnectionMode : \\\\\\\"ReplicaSet\\\\\\\", Type : \\\\\\\"ReplicaSet\\\\\\\", State : \\\\\\\"Connected\\\\\\\", Servers : [{ ServerId: \\\\\\\"{ ClusterId : 1, EndPoint : \\\\\\\"Unspecified/prod-usa-cluster-shard-00-00.aaaaaa.mongodb.net:27017\\\\\\\" }\\\\\\\", EndPoint: \\\\\\\"Unspecified/prod-usa-cluster-shard-00-00.aaaaaa.mongodb.net:27017\\\\\\\", ReasonChanged: \\\\\\\"Heartbeat\\\\\\\", State: \\\\\\\"Disconnected\\\\\\\", ServerVersion: , TopologyVersion: , Type: \\\\\\\"Unknown\\\\\\\", HeartbeatException: \\\\\\\"MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server.\\\\n —> System.TimeoutException: Timed out connecting to 52.206.88.60:27017. 
Timeout was 00:00:30.\\\\n at MongoDB.Driver.Core.Connections.TcpStreamFactory.Connect(Socket socket, EndPoint endPoint, CancellationToken cancellationToken)\\\\n at MongoDB.Driver.Core.Connections.TcpStreamFactory.CreateStream(EndPoint endPoint, CancellationToken cancellationToken)\\\\n at MongoDB.Driver.Core.Connections.SslStreamFactory.CreateStream(EndPoint endPoint, CancellationToken cancellationToken)\\\\n at MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelper(CancellationToken cancellationToken)\\\\n — End of inner exception stack trace —\\\\n at MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelper(CancellationToken cancellationToken)\\\\n at MongoDB.Driver.Core.Connections.BinaryConnection.Open(CancellationToken cancellationToken)\\\\n at MongoDB.Driver.Core.Servers.ServerMonitor.InitializeConnection(CancellationToken cancellationToken)\\\\n at MongoDB.Driver.Core.Servers.ServerMonitor.Heartbeat(CancellationToken cancellationToken)\\\\\\\", LastHeartbeatTimestamp: \\\\\\\"2023-10-20T08:38:38.6956720Z\\\\\\\", , LastUpdateTimestamp: \\\\\\\"2023-10-20T08:38:38.6956723Z\\\\\\\" }, { ServerId: \\\\\\\"{ ClusterId : 1, EndPoint : \\\\\\\"Unspecified/prod-usa-cluster-shard-00-01.aaaaaa.mongodb.net:27017\\\\\\\" }\\\\\\\", EndPoint: \\\\\\\"Unspecified/prod-usa-cluster-shard-00-01.aaaaaa.mongodb.net:27017\\\\\\\", ReasonChanged: \\\\\\\"Heartbeat\\\\\\\", State: \\\\\\\"Connected\\\\\\\", ServerVersion: 6.0.0, TopologyVersion: { \\\\\\\"processId\\\\\\\" : ObjectId(\\\\\\\"653191f7a07a297924746906\\\\\\\"), \\\\\\\"counter\\\\\\\" : NumberLong(8) }, Type: \\\\\\\"ReplicaSetSecondary\\\\\\\", Tags: \\\\\\\"{ nodeType : ELECTABLE, provider : AWS, region : US_WEST_2, workloadType : OPERATIONAL }\\\\\\\", WireVersionRange: \\\\\\\"[0, 17]\\\\\\\", LastHeartbeatTimestamp: \\\\\\\"2023-10-20T08:38:39.0250305Z\\\\\\\", LastUpdateTimestamp: \\\\\\\"2023-10-20T08:38:39.0250309Z\\\\\\\" }, { ServerId: \\\\\\\"{ ClusterId : 1, EndPoint : \\\\\\\"Unspecified/prod-usa-cluster-shard-00-02.aaaaaa.mongodb.net:27017\\\\\\\" }\\\\\\\", EndPoint: \\\\\\\"Unspecified/prod-usa-cluster-shard-00-02.aaaaaa.mongodb.net:27017\\\\\\\", ReasonChanged: \\\\\\\"Heartbeat\\\\\\\", State: \\\\\\\"Disconnected\\\\\\\", ServerVersion: , TopologyVersion: , Type: \\\\\\\"Unknown\\\\\\\", HeartbeatException: \\\\\\\"MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server.\\\\n —> System.TimeoutException: Timed out connecting to 13.56.111.96:27017. 
Timeout was 00:00:30.\\\\n at MongoDB.Driver.Core.Connections.TcpStreamFactory.Connect(Socket socket, EndPoint endPoint, CancellationToken cancellationToken)\\\\n at MongoDB.Driver.Core.Connections.TcpStreamFactory.CreateStream(EndPoint endPoint, CancellationToken cancellationToken)\\\\n at MongoDB.Driver.Core.Connections.SslStreamFactory.CreateStream(EndPoint endPoint, CancellationToken cancellationToken)\\\\n at MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelper(CancellationToken cancellationToken)\\\\n — End of inner exception stack trace —\\\\n at MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelper(CancellationToken cancellationToken)\\\\n at MongoDB.Driver.Core.Connections.BinaryConnection.Open(CancellationToken cancellationToken)\\\\n at MongoDB.Driver.Core.Servers.ServerMonitor.InitializeConnection(CancellationToken cancellationToken)\\\\n at MongoDB.Driver.Core.Servers.ServerMonitor.Heartbeat(CancellationToken cancellationToken)\\\\\\\", LastHeartbeatTimestamp: \\\\\\\"2023-10-20T08:38:38.6758014Z\\\\\\\", LastUpdateTimestamp: \\\\\\\"2023-10-20T08:38:38.6758018Z\\\\\\\" }] }.\\\",\\\"Data\\\":null,\\\"InnerException\\\":null,\\\"HelpURL\\\":null,\\\"StackTraceString\\\":\\\" at MongoDB.Driver.Core.Clusters.Cluster.ThrowTimeoutException(IServerSelector selector, ClusterDescription description)\\\\n at MongoDB.Driver.Core.Clusters.Cluster.WaitForDescriptionChangedHelper.HandleCompletedTask(Task completedTask)\\\\n at MongoDB.Driver.Core.Clusters.Cluster.WaitForDescriptionChangedAsync(IServerSelector selector, ClusterDescription description, Task descriptionChangedTask, TimeSpan timeout, CancellationToken cancellationToken)\\\\n at MongoDB.Driver.Core.Clusters.Cluster.SelectServerAsync(IServerSelector selector, CancellationToken cancellationToken)\\\\n at MongoDB.Driver.Core.Clusters.IClusterExtensions.SelectServerAndPinIfNeededAsync(ICluster cluster, ICoreSessionHandle session, IServerSelector selector, CancellationToken cancellationToken)\\\\n at MongoDB.Driver.Core.Bindings.ReadPreferenceBinding.GetReadChannelSourceAsync(CancellationToken cancellationToken)\\\\n at MongoDB.Driver.Core.Operations.RetryableReadContext.InitializeAsync(CancellationToken cancellationToken)\\\\n at MongoDB.Driver.Core.Operations.RetryableReadContext.CreateAsync(IReadBinding binding, Boolean retryRequested, CancellationToken cancellationToken)\\\\n at MongoDB.Driver.Core.Operations.FindOperation1.ExecuteAsync(IReadBinding binding, CancellationToken cancellationToken)\\\\\\\\n at MongoDB.Driver.OperationExecutor.ExecuteReadOperationAsync[TResult](IReadBinding binding, IReadOperation1 operation, CancellationToken cancellationToken)\\\\n at MongoDB.Driver.MongoCollectionImpl1.ExecuteReadOperationAsync[TResult](IClientSessionHandle session, IReadOperation1 operation, ReadPreference readPreference, CancellationToken cancellationToken)\\\\n at MongoDB.Driver.MongoCollectionImpl1.UsingImplicitSessionAsync[TResult](Func2 funcAsync, CancellationToken cancellationToken)\\\\n at MongoDB.Driver.IAsyncCursorSourceExtensions.ToListAsync[TDocument](IAsyncCursorSource`1 source, CancellationToken cancellationToken)\\\\n at", "username": "Saurabh_Singh8" }, { "code": "pingping", "text": "Hi @Saurabh_Singh8 - Welcome to the community.I have a multi region mongo atlas cluster of 3 nodes us-east-1(Primary), us-east-2, us-west-2.\n2 APIs in AWS in us-east-1 and us-west-2\nSetup two Peering Connection for eac VPC in my AWS infra. 
All necessary steps done.As per the Deployments in multiple regions section of the Network Peering documentation:Atlas deployments in multiple regions must have a peering connection for each Atlas region.\nFor example: If you have a VPC in Sydney and Atlas deployments in Sydney and Singapore, create two peering connections.To clarify, when you state you set up two peering connections for each VPC from your AWS infra, does that mean you set up a total of 4 VPC peering connections?Problem - I can connect to Mongo from API/Lambda running in us-east-1 but cannot connect from us-west-2Are you able to try performing a ping test from each of your application’s VPCs to each node in Atlas to see the results. Are you being returned the an IP that exists within the configured Atlas VPC CIDR for each ping test?Regards,\nJason", "username": "Jason_Tran" } ]
AWS Peering with Multi-region Cluster connection problem
2023-10-20T08:53:43.886Z
AWS Peering with Multi-region Cluster connection problem
226
null
[ "node-js", "mongoose-odm" ]
[ { "code": "import express from 'express';\nconst router = express.Router();\nimport {story_point_get, story_points_create_post,\nindex, story_points_delete_post} from '../controllers/story_point_controller.js'\n\nrouter.get(\"/\", index)\n\n.get(\n \"/story_points/create\",\n story_point_get\n)\n// POST request for creating Story Points.\n.post(\n \"/story_points/create\", \n story_points_create_post\n)\n\n// POST request to delete story points.\n.post(\n \"/story_points/:id/delete\", \n story_points_delete_post\n);\n\nexport {router as catalogRouter};\nexport const story_points_create_post = async (req, res) => {\n console.log('made it within the post route')\n console.log(req.body)\n let storyPoints = new StoryPoints({\n firstName: req.body.firstName,\n lastName: req.body.lastName,\n team: req.body.team,\n storyPoints: req.body.storyPoints,\n sprint: {\n name: req.body.sprint.name,\n startDate: req.body.sprint.startDate,\n endDate: req.body.sprint.endDate\n }\n })\n \n try {\n res.send(await storyPoints.save()) \n } catch(e) {\n console.log(e)\n }\n \n};\nres.send(await storyPoints.save()) res.send(storyPoints){\n \"firstName\": \"Joe\",\n \"lastName\": \"Smoe\",\n \"team\": \"Data Science\",\n \"storyPoints\": 3,\n \"sprint\": {\n \"name\": \"My first Sprint\",\n \"startDate\": \"2023-10-18T20:23:23.000Z\",\n \"endDate\": \"2023-10-18T20:24:09.000Z\"\n },\n \"_id\": \"6540057560a4fde45b3cc521\"\n}\n{\"_id\":{\"$oid\":\"6531798a38ae95fe7e56c3cc\"},\"firstName\":\"Joe\",\"lastName\":\"Smoe\",\"team\":\"Data Science\",\"storyPoints\":{\"$numberInt\":\"15\"},\"sprint\":{\"name\":\"My first Sprint\",\"startDate\":{\"$date\":{\"$numberLong\":\"1697660603000\"}},\"endDate\":{\"$date\":{\"$numberLong\":\"1697660649000\"}}},\"createdAt\":{\"$date\":{\"$numberLong\":\"1697741194091\"}},\"updatedAt\":{\"$date\":{\"$numberLong\":\"1697741194091\"}},\"__v\":{\"$numberInt\":\"0\"}}\n/ Require Mongoose\nimport mongoose from \"mongoose\";\n\n// Define a schema\nconst Schema = mongoose.Schema;\n/**\n * Story Point Report Schema\n * The timestamps used in this way create updatedAt and createdAt automatically\n */\nconst storyPointsSchema = new Schema({\n firstName: { \n required: true, \n type: String, \n maxLength: 100\n },\n lastName: {\n required: true, \n type: String, \n maxLength: 100\n },\n team: {\n required: true, \n type: String}\n ,\n storyPoints: {\n required: true,\n type: Number\n },\n sprint: {\n name: {\n required: true,\n type: String\n },\n startDate: {\n required: true,\n type: Date, \n },\n endDate: {\n required: true,\n type: Date,\n }\n }\n \n}, {timestamps: true });\n\nstoryPointsSchema.pre('save', function(next){\n this.spintStartDate = Date.now();\n this.sprintEndDate = Date.now() + 14;\n})\n\n/**\n * Calculate Full Name\n * Virtual for ticket assignee's full name\n */\nstoryPointsSchema.virtual('name').get(() => {\n let fullname = \"\"\n if (this.firstName && this.lastName) {\n fullname = `${this.firstName}, ${this.lastName}`\n }\n return fullname\n})\n\nfunction convertDateToReadableFormat(Date) {\n return DateTime.formJSDate(Date).toLocaleString(DateTime.DATE_MED);\n}\n\nstoryPointsSchema.virtual('prety_dates').get(() => {\n return {\n sprintStartDateF: convertDateToReadableFormat(this.spintStartDate),\n sprintEndDateF: convertDateToReadableFormat(this.spintEndDate),\n updatedAtF: convertDateToReadableFormat(this.updatedAt)\n }\n})\n\nexport const StoryPoints = mongoose.model('StoryPoints', storyPointsSchema)\nConnected to MongoDBimport mongoose from 'mongoose';\nimport * as 
dotenv from 'dotenv';\ndotenv.config()\n\n// Set `strictQuery: false` to globally opt into filtering by properties that aren't in the schema\n// Included because it removes preparatory warnings for Mongoose 7.\n// See: https://mongoosejs.com/docs/migrating_to_6.html#strictquery-is-removed-and-replaced-by-strict\nmongoose.set(\"strictQuery\", false);\n\nconst mongoDBConnect = async () => {\n try {\n await mongoose.connect(process.env.DATABASE_URI, {\n useNewUrlParser: true,\n useUnifiedTopology: true\n });\n console.log('Connected to MongoDB');\n return 'Connected to MongoDB'\n } catch (error) {\n console.log('Error connecting to MongoDB:', error);\n return JSON.stringify(`Error connecting to MongoDB: ${error}`)\n }\n };\n\n export const connectDB = mongoDBConnect;\nimport createError from 'http-errors';\nimport express from 'express';\nimport bodyParser from 'body-parser';\nimport * as path from 'path';\nimport { dirname } from 'path';\nimport { fileURLToPath } from 'url';\nimport cookieParser from 'cookie-parser';\nimport logger from 'morgan';\n\n/**\n * Database - MongoDB using Mongoose ORM\n */\nimport {connectDB} from './datatabase/db.js'\n// Connect to MongoDB\nconnectDB();\n\nimport {indexRouter} from './routes/index.js';\nimport {storyPointsRouter} from './routes/metrics.js';\nimport {catalogRouter} from './routes/catalog.js';\n\nconst __dirname = dirname(fileURLToPath(import.meta.url));\nexport const app = express();\n\n\n// view engine setup\napp.set('views', path.join(__dirname, 'views'));\napp.set('view engine', 'pug');\n\napp.use(logger('dev'));\napp.use(express.json());\napp.use(express.urlencoded({ extended: false }));\napp.use(cookieParser());\napp.use(express.static(path.join(__dirname, 'public')));\n\n\napp.use('/', indexRouter);\napp.use('/storyPoints', storyPointsRouter)\napp.use('/metrics', catalogRouter)\n\n// catch 404 and forward to error handler\napp.use(function(req, res, next) {\n next(createError(404));\n});\n\n// error handler\napp.use(function(err, req, res, next) {\n // set locals, only providing error in development\n res.locals.message = err.message;\n res.locals.error = req.app.get('env') === 'development' ? err : {};\n\n // render the error page\n res.status(err.status || 500);\n res.render('error');\n});\n", "text": "Hi Everyone,I hope you are well!At this point of my nodejs, express and mongoose app I have fairly simple routes, views, models and controllers. I have a story_points_create_post route that when called from Postman does call the code. I’ve been able to trace (using a debugger) that the model.save does not save and never returns a response to resolve the post.Hrere is the story_points_create_post route in project/routes:Here is the controller:Notice the try/catch block with res.send(await storyPoints.save()) this never resolves. 
I did comment this out and returned the storyPoints model res.send(storyPoints) and got the model looking correct with all required fields present, here was the output:The call to mode.save() eventually times out with a 500 error however it is not caught by the try/catch ( maybe do the async call?In atlas right now here are some example document:Here is the storyPoint model:Here is the atlas db connection which when the app is running reports Connected to MongoDB:And finally here is the app:Can you see any issues with the code or reasons why save() is failing?Your help is appreciated.", "username": "Steve_Browning" }, { "code": "res.send(await storyPoints.save())save()try {\n const savedStoryPoint = await storyPoints.save();\n res.send(savedStoryPoint);\n} catch (err) {\n console.error(err);\n res.status(500).send('Error saving story point'); \n}\n", "text": "Hey @Steve_Browning,Welcome to the MongoDB Community forums The call to mode.save() eventually times out with a 500 error.\nCan you see any issues with the code or reasons why save() is failing?I’ll suggest splitting the res.send(await storyPoints.save()) to catch any errors that may occur during the save() process. This will help in debugging the issue.In case of further issues, please share the error log message you encounter and the workflow you followed so that the community can assist you better.Best regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "const savedStoryPoint = await storyPoints.save();", "text": "Thank you Kushagra!I did ammend the code like you recommended and found the query never times out ( well at least for 30min before I stopped the query from Postman. It never passes the const savedStoryPoint = await storyPoints.save(); and does not throw an error.I’m not sure if you’ve seen this type of behaviour before?Regards,\nSteve", "username": "Steve_Browning" }, { "code": "", "text": "Hi Kushagra,I got the .save() to work.It turns out that one of the virtual fields I had defined where causing issues with save(). I am going to go one by one with the virtual fields to see which one is causing the issue.Thank you again for your support.", "username": "Steve_Browning" }, { "code": "const storyPointsSchema = new Schema({\n firstName: { \n required: true, \n type: String, \n maxLength: 100\n },\n lastName: {\n required: true, \n type: String, \n maxLength: 100\n },\n team: {\n required: true, \n type: String}\n ,\n storyPoints: {\n required: true,\n type: Number\n },\n sprint: {\n name: {\n required: true,\n type: String\n },\n startDate: {\n required: true,\n type: Date, \n },\n endDate: {\n required: true,\n type: Date,\n }\n }\n \n}, {timestamps: true });\n\nfunction convertDateToReadableFormat(Date) {\n return DateTime.formJSDate(Date).toLocaleString(DateTime.DATE_MED);\n}\n\nstoryPointsSchema.virtual('prety_dates').get(() => {\n return {\n sprintStartDateF: convertDateToReadableFormat(this.sprint.startDate),\n sprintEndDateF: convertDateToReadableFormat(this.sprint.endDate),\n updatedAtF: convertDateToReadableFormat(this.updatedAt)\n }\n})\n", "text": "It turns out referencing a sub document in the Story Points Schema within a virtual could not be find. As a reminder here is the model for StoryPoints:I get and error that sprint is undefined:TypeError: Cannot read properties of undefined (reading ‘sprint’)As you can see my Sprint properties of name, startDate and endDate are not assessable in the virtual field. 
Does this need populate?Regards,\nSteve", "username": "Steve_Browning" }, { "code": "// Require Mongoose\nimport mongoose from \"mongoose\";\n\n// Define a schema\nconst Schema = mongoose.Schema;\n/**\n * Story Point Report Schema\n * The timestamps used in this way create updatedAt and createdAt automatically\n */\nconst storyPointsSchema = new Schema({\n firstName: { \n required: true, \n type: String, \n maxLength: 100\n },\n lastName: {\n required: true, \n type: String, \n maxLength: 100\n },\n team: {\n required: true, \n type: String}\n ,\n storyPoints: {\n required: true,\n type: Number\n },\n sprint: {\n name: {\n required: true,\n type: String\n },\n startDate: {\n required: true,\n type: Date, \n },\n endDate: {\n required: true,\n type: Date,\n }\n }\n \n}, {timestamps: true });\n\n// storyPointsSchema.pre('save', function(next){\n// this.spintStartDate = Date.now();\n// this.sprintEndDate = Date.now() + 14;\n// })\n\n/**\n * Calculate Full Name\n * Virtual for ticket assignee's full name\n */\nstoryPointsSchema.virtual('fullname').get(function () {\n let fullname = \"\"\n if (this.firstName && this.lastName) {\n fullname = `${this.firstName}, ${this.lastName}`\n }\n return fullname\n})\n\n// Virtual for author's URL\nstoryPointsSchema.virtual(\"url\").get(function () {\n // We don't use an arrow function as we'll need the this object\n return `/story_points/${this._id}`;\n});\n\nfunction convertDateToReadableFormat(date) {\n let year = date.getFullYear();\n let month = date.getMonth()+1;\n let dt = date.getDate();\n\n if (dt < 10) {\n dt = '0' + dt;\n }\n if (month < 10) {\n month = '0' + month;\n }\n\n return year+'-' + month + '-'+dt;\n}\n\nstoryPointsSchema.virtual('prety_dates').get(function () {\n return {\n sprintStartDateF: convertDateToReadableFormat(this.sprint.startDate),\n sprintEndDateF: convertDateToReadableFormat(this.sprint.endDate),\n updatedAtF: convertDateToReadableFormat(this.updatedAt)\n }\n})\n\nexport const StoryPoints = mongoose.model('StoryPoints', storyPointsSchema)\n{\n firstName: 'Joe',\n lastName: 'Smoe',\n team: 'Data Science',\n storyPoints: 3,\n sprint: {\n name: 'My first Sprint',\n startDate: 2023-10-18T20:23:23.000Z,\n endDate: 2023-10-18T20:24:09.000Z\n },\n _id: new ObjectId(\"654161f226ab0ec909faf1cd\"),\n createdAt: 2023-10-31T20:22:10.974Z,\n updatedAt: 2023-10-31T20:22:10.974Z,\n __v: 0,\n fullname: 'Joe, Smoe',\n url: '/story_points/654161f226ab0ec909faf1cd',\n prety_dates: {\n sprintStartDateF: '2023-10-18',\n sprintEndDateF: '2023-10-18',\n updatedAtF: '2023-10-31'\n },\n id: '654161f226ab0ec909faf1cd'\n}\n{\n \"firstName\": \"Joe\",\n \"lastName\": \"Smoe\",\n \"team\": \"Data Science\",\n \"storyPoints\": 3,\n \"sprint\": {\n \"name\": \"My first Sprint\",\n \"startDate\": \"2023-10-18T20:23:23.000Z\",\n \"endDate\": \"2023-10-18T20:24:09.000Z\"\n },\n \"_id\": \"654161f226ab0ec909faf1cd\",\n \"createdAt\": \"2023-10-31T20:22:10.974Z\",\n \"updatedAt\": \"2023-10-31T20:22:10.974Z\",\n \"__v\": 0\n}\n", "text": "Well I resolved every issue with virtuals and it all works now.Here is the fixed model:And this is the output from console.log():And here is what is returned to Postman:I believe this thread can be closed.", "username": "Steve_Browning" }, { "code": "", "text": "", "username": "Kushagra_Kesav" } ]
Model.save() not saving to Atlas
2023-10-30T20:05:35.718Z
Model.save() not saving to Atlas
156
null
[ "aggregation", "node-js" ]
[ { "code": "{\n_id: ObjectId('123456'),\nstatus: 'in work',\ntitle: 'Document 5',\ndescription: 'Description of a document',\nstages: {\n pending: {\n deadline: \"30/10/2023\",\n title: 'Pending status'\n },\n in work: {\n deadline: \"04/11/2023\",\n title: 'In work status'\n },\n done: {\n deadline: \"10/11/2023\",\n title: 'Done status'\n },\n }\n}\nstatus", "text": "Hello everyone,I’m currently using MongoDB in my application to manage documents with a structure like the one below:Each document is in a specific stage, and this stage is reflected in the status field. I need to sort these documents based on their respective stages, specifically by the deadline associated with each stage.For instance, I would like to retrieve documents sorted by their deadlines within the “in work” stage. This query should also consider various other parameters, such as pageSize, that I want to include in the query.Is this achievable in MongoDB, and if so, how can I go about it?Thank you in advance for your assistance!", "username": "bojand" }, { "code": "stages: [\n { stage: \"pending\" ,\n deadline: \"30/10/2023\",\n title: 'Pending status'\n },\n { stage: \"in work\" ,\n deadline: \"04/11/2023\",\n title: 'In work status'\n },\n { stage: \"done\" ,\n deadline: \"10/11/2023\",\n title: 'Done status'\n },\n ]\n", "text": "Your title indicate dynamic field value but your use-case looks like dynamic field name.So my best recommendation at this time is to change your model to use The Attribute Pattern:Learn about the Attribute Schema Design pattern in MongoDB. This pattern is used to target similar fields in a document and reducing the number of indexes.With this stages: would become an array as in:In the link I provided, they explain why it is a better design.As for the deadline: field, I strongly suggest that you use the date data type rather the string. Date data type is faster than string and provides a richer API. What is particularly bad about your string dates, is that you use the format “DD/MM/YYYY” which is like worst to use when you want to sort by dates since you would need to extract YYYY, MM and DD before your sort or query by range. It means that could not use an index. The format “YYYY/MM/DD” would be better because for sorting or range query. But it would be slower than date data type because comparison is done character per character.", "username": "steevej" }, { "code": "", "text": "Thank you for the response @steevejThis approach that you explained and suggested is nice and much more performant, but it still doesn’t solve my problem.I need to figure out the stage based on the status of the entry. 
Can this approach solve the issue?Thank you!", "username": "bojand" }, { "code": "{ \"$set\" : {\n \"_tmp.deadline\" : { \"$switch\" : {\n \"branches\" : [\n {\n \"case\" : { \"$eq\" : [ \"$status\" , \"pending\" ] } ,\n \"then\" : { \"$stages.pending.deadline\" }\n } ,\n {\n \"case\" : { \"$eq\" : [ \"$status\" , \"in work\" ] } ,\n \"then\" : { \"$stages.in work.deadline\" }\n } ,\n {\n \"case\" : { \"$eq\" : [ \"$status\" , \"done\" ] } ,\n \"then\" : { \"$stages.done.deadline\" }\n } \n ]\n } }\n} }\n{ \"$set\" : {\n \"_tmp.sortable_deadline\" : { \"$dateFromString\" : {\n \"dateString\" : \"$_tmp.deadline\" ,\n \"format\" : \"%d/%m/%Y\"\n } }\n} }\n{ \"$sort\" : {\n \"_tmp.sortable_deadline\" : 1\n} }\n{ \"$project\" : { \"_tmp\" : 0 } }\n", "text": "To understand correctly what you want, we would need more sample documents and the expected results.If you stick with the original document structure, I think the first stage will be to extract the stage date that you want with a $set that uses $switch to get the date.Then another $set that convert the deadline into a value that can be sortedThe you would be able to sortA cleanup $project to remove all temporary fields", "username": "steevej" } ]
Sorting based on dynamic field value
2023-10-30T14:01:22.383Z
Sorting based on dynamic field value
147
null
[]
[ { "code": "", "text": "I would just like to inquire about the change in the learning path, I have completed the learning path for DBA associate however the learning path has change and new topics were added, would that affect the DBA exam if I am taking it in the upcoming week and would those new topics be included", "username": "Mogamat_Ghaalied" }, { "code": "", "text": "@Mogamat_Ghaalied Thank you for reaching out with your question. Both the old DBA and the new Self-Managed Database Administrator learning path were created to help enable our users to have the best chance at becoming MongoDB certified. The old DBA path is still sufficient to prepare you properly for the exam. However, there is no harm in reviewing the new topics in the Self-Managed Database Admin path.\nPlease Note: If you need more time to prepare you have the ability to reschedule your exam through your University account.If you have any further questions please reach out to certification@mongodb.comGood luck on your exam!", "username": "Heather_Davis" } ]
DBA exam path change?
2023-10-30T09:42:13.795Z
DBA exam path change?
165
https://www.mongodb.com/…1_2_1024x272.png
[]
[ { "code": "", "text": "Hi Team,I scheduled mongodb dba exam at 21-10-2023 12:30 pm but I’ve not received any link for exam in registered email and i checked in website also there is no information about that and once again i just go to register exam page then in my dashboard shows you missed the exam please find the below snapshot.image1738×462 39 KBExam Receipt #1419-0383please help me anyone, i sent mail to certification team also but not getting any response", "username": "Aravind_rajamani" }, { "code": "", "text": "Hi @Aravind_rajamani Thank you for reaching out. We received your email that was sent to certification@mongodb.com. We have since responded and resolved this issue. Please let us know if you need any further assistance. Thank you", "username": "Heather_Davis" } ]
Today is my mongodb dba exam but not getting any mail or anything
2023-10-21T06:02:08.767Z
Today is my mongodb dba exam but not getting any mail or anything
221
null
[]
[ { "code": "", "text": "my scheduled exam data was today 29/10/2023 and the time is 8:00 pm and when i loged in my account to start the exam its giving me a massage that i was missed the exam but i didnt and they want me to pay again so i want to make another appoitment so please help me fast as you can", "username": "mohammed_alfaqeeh" }, { "code": "", "text": "Hi @mohammed_alfaqeeh Thank you for reaching out. We received your email sent to certification@mongodb.com. We have responded and resolved this issue. Please let us know if you need any further assistance. Thank you", "username": "Heather_Davis" } ]
Scheduled exam date
2023-10-29T16:17:34.328Z
Scheduled exam date
159
null
[ "node-js" ]
[ { "code": "", "text": "Hi,\nI have intended to attempt the Associate Developer Certification exam and the practice questions isn’t enough for me. May I be directed to additional practice questions that would help me better prepare for the exam?", "username": "shoaib_hossain" }, { "code": "", "text": "@shoaib_hossain We apologize, but at this time we do not offer additional practice questions. Please reach out to certification@mongodb.com if you have further questions. Good luck on your exam!", "username": "Heather_Davis" } ]
Additional Practice questions apart from Associate Developer Node.js Practice Questions
2023-10-24T18:09:44.099Z
Additional Practice questions apart from Associate Developer Node.js Practice Questions
213
null
[]
[ { "code": "Internal Failure", "text": "Hello, I’m having an issue with the cloud formation resources in a CDK project. I’m calling the project resource directly using the CloudFormation cdk lib, (which is a 1-1 to the Cfn template) the status is stuck as CREATE_IN_PROGRESS until reaching timeout and writes to the error output as Internal Failure.I’m positive my credentials and permissions are correct because I’m using them as a CLI call to create them and it succeeds.Is there a known solution for this?", "username": "AJ_Ortiz" }, { "code": "", "text": "Hi AJ_Ortiz,Are you able to get any logging of the process? As as I understand it all CF resources, including ours, should work as level 1 CDK constructs and there are no known bugs with the project resource. I’ll see if we can look into this on our side too and try to reproduce.It would be great to learn more about your use case. If you’d be willing to share please let me know.Thank you!\nMelissa", "username": "Melissa_Plunkett" }, { "code": "Internal Failure", "text": "Yes, of course, I’d be happy to. How can I help?In regards to logging, the only output I would get regarding the project resource was the Internal Failure after timing out the loading bar during deploy.", "username": "AJ_Ortiz" }, { "code": "", "text": "Hi AJ_Ortiz,Thank you! I will message you.Also seeing if I can get some instructions to you on how to potentially log information to help us narrow this down.", "username": "Melissa_Plunkett" }, { "code": "", "text": "Hi,\nI have the exact same problem with cdk deployment of MongoDB Atlas resources (MONGODB::ATLAS::PROJECT). The deployments gets stuck in CREATE_IN_PROGRESS and fails after a while with “Internal Failure”. Would be great if you could share some insights, what am I missing?", "username": "David_Boschmann" }, { "code": "", "text": "Hi David_Boschmann, thanks for your comment. There are a few possibilities that may cause this error which I have highlighted below. If these don’t unblock you directly, feel free to create a new issue in our GitHub repo and can have someone help you further.Hope this helps!", "username": "Zuhair_Ahmed" }, { "code": "", "text": "Thanks a lot.\nThe problem were missing activation role permissions (for some reason AdminAccess seems to be not sufficient).\nI have another question about step 5, since I don’t want to open the mongodb to 0.0.0.0/1 for production. Can I somehow determine which IP/IP-range will be used during cdk deployment?", "username": "David_Boschmann" }, { "code": "", "text": "Hi David, keep in mind that AWS CDK gets synthesized into AWS CloudFormation on your behalf. So you will need to determine IP Address / CIDR Range that CloudFormation is using to deploy your MongoDB Atlas clusters and other resources.AWS has a large pool of IP addresses that are used by its services, and these IP addresses can change frequently as new resources are added and old resources are retired. AWS does not publish a list of specific IP addresses that are used by CloudFormation or other services.Instead, AWS provides a list of IP address ranges that are used by its services which varies by each AWS region. The list of IP address ranges is updated weekly and is available on the AWS IP Address Ranges page: AWS IP address ranges - AWS General Reference.Lastly, given you mentioned you are seeking to deploy your AWS CDK resources into production I would in addition contact AWS / your AWS rep to help provide additional guidance on this matter as well. 
Hope this helps.", "username": "Zuhair_Ahmed" }, { "code": " Resource: \"*\"\"Resource\": \"*\",\n\"Condition\": {\n \"StringEquals\": {\n \"aws:ResourceAccount\": [\n \"536727724300\"\n ]\n }\n }\n", "text": "Hi Zuhair, The roles mentioned in https://github.com/mongodb/mongodbatlas-cloudformation-resources/blob/master/cfn-resources/execute-role.template.yml seem to be more permissive. Is it possible to restrict the “Resource”? As the cloudformation will deploy to Mongo Atlas, can the “Resource” be something like the following instead of Resource: \"*\"", "username": "Darmadeja_Venkatakrishnan" }, { "code": "", "text": "Just curious if anyone else has determined the best avenue for this. My assumption is that I need to get the ip address that is being used by cloudformation to create the database and use that dynamically in the ipAddress whitelist each time. Yet because I am not finding anything on the web concerning my assumption as the solution nor anything within the awscdk-resources-mongodbatlas and AtlasBasicPrivateEndpoint, i fear I am wasting my time. From what I understand this ipAddress is only regarding the initial deploy. Yet no one would want to open 0.0.0.0/1 in production as the work around and I believe there should be better guidance from MongoDB on this matter.", "username": "Alan_Ruth" } ]
CloudFormation resources (Atlas Project) not working in CDK
2022-08-29T17:46:28.110Z
CloudFormation resources (Atlas Project) not working in CDK
2,211
null
[ "java", "atlas", "android" ]
[ { "code": "", "text": "Hi, I’m a novice/junior developer and I’m trying to develop a desktop app and android app. Both should connect to the same db that I’m hosting in MongoDB Atlas. It’s for a college project, essentially, and I’d be using the free tier. I have some experience with MongoDB Atlas, and it seems like the best deal considering I can’t pay.So I have a model of Java objects to match the elements of the collections and whatnot. I have experience doing this interacting with Atlas directly in Java desktop apps. I’ve added a lot of functionality to all the classes and heavily tested them.However it seems that the API for interacting with such elements in mobile devices (the Realm stuff) is completely different. They require things from your model classes like usage of special annotations, extending RealmObject and other stuff that would be irrelevant for the model in the desktop application.And as far as I know the Java SDK doesn’t work in Android, so using this “Realm” feature would be the way to go right?I’m just wondering if anyone could give me some rough guide, tip or resource to try to solve this conundrum that I’m presenting since this has been frustrating me for days now. I’d rather not have to maintain two versions of the same code.Any help will be hugely appreciated!", "username": "Marcos_Chamosa_Rodriguez" }, { "code": "", "text": "If you separate database for Mobile and Desktop the in same Altas project create two user one for App and another for desktop.Create user form above YouTube video.And can deploy app user in app code deployment string and desktop app user in desktop deployment string.", "username": "Bhavya_Bhatt" }, { "code": "", "text": "Hi, that’s interesting and may be useful but I don’t entirely understand how it addresses what I’m speaking of.My problem is figuring out how I can access data in Atlas using Realm, and how I can, if it is possible, maintain a single code base for my app model.For example, my app model has a class called Tag, which I use in my desktop app to map to stuff in a the tags collection. My understanding is that in Android with Realm, I’d have to have a different code base because it forces you to use elements of the API to be able to do crud operations normally, such as RealmObject and various annotations.", "username": "Marcos_Chamosa_Rodriguez" }, { "code": "", "text": "Hi,\nYou can create two database in single altas cluster by creating two users in altas cluster please watch video", "username": "Bhavya_Bhatt" }, { "code": "", "text": "I watched it, but I’m not seeing how doing this helps me. I’m sorry but could offer further clarification?", "username": "Marcos_Chamosa_Rodriguez" }, { "code": "", "text": "First create Two usersUsername same as database name\nUser should provide readWrite preimssionGo to collection on AltasPress on + icon and create database same name of username created", "username": "Bhavya_Bhatt" } ]
Connecting to db in MongoDB atlas through desktop app and mobile app (android, Java)
2023-10-29T23:13:49.634Z
Connecting to db in MongoDB atlas through desktop app and mobile app (android, Java)
182
null
[ "aggregation", "atlas-search" ]
[ { "code": "{\n client_id: \"123\",\n name: \"John Smith\",\n pets: [\n {\n pet_id: \"abc\",\n name: \"Fluffy\"\n },\n {\n pet_id: \"xyz\",\n name: \"Spot\"\n }\n ]\n}\n", "text": "Hey all!I have a situation where we have a collection of clients, and each client has an array of pets. We need to search based on the client name and the pet name, and we want to locate not only the best client match, but also which pet matched the best.Is there a way to score each individual embedded document? Or sort/filter by which embedded doc matches the best?An example schema is this, imagine a collection of many clients like this.Is there a way I can query on both the client name, and the pet name, resulting in the best matching client and pet pair? Im able to search for the doc, including embedded documents, but it just returns the whole document, with no indication as to which pet matches the best.Thanks in advance!", "username": "Jamis_Hartley" }, { "code": "{ name: Fluffy }{ client_id: \"123\" }", "text": "Hi Jamis Hartley!I’m almost 100% sure that it is not possible to get the score per embedded document. Atlas Search index points to the document as a whole and not a field or embedded document in it. The index knows that the pet with { name: Fluffy } is in the document of { client_id: \"123\" }, but where in the document, the index has no idea because it doesn’t store this type of information.", "username": "Artur_57972" }, { "code": "pet_name = \"Spot\" ;\npipeline = [\n /* your current pipeline stages */\n { \"$set\" : { \"pets\" : { \"$filter\" : {\n \"input\" : \"$pets\" ,\n \"cond\" : { \"$eq\" : [ \"$$this.name\" , pet_name ] }\n } } } }\n]\n", "text": "it just returns the whole document, with no indication as to which pet matches the bestTo do than you would need a $set stage that $filter the pets: array. Something like:", "username": "steevej" }, { "code": "", "text": "Ah thats a shame! Thanks for your reply.I was hoping that since it had some embedded documents scores in the scoreDetails metadata, there would be something.", "username": "Jamis_Hartley" }, { "code": "", "text": "Unfortunately this doesn’t quite work for my use case The user may be submitting a query that doesn’t exactly match the pet name. We are using Mongo’s search to enable some fuzzy finding. Which works great, but means its trickier to filter down to the actual pet.Is there a way to do some sort of fuzzy finding in the $set stage of the pipeline like you’ve described? Instead of the $eq condition.", "username": "Jamis_Hartley" }, { "code": "", "text": "Instead of the $eq condition.You may take a look at", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to attach score to each individual embedded document in Atlas search
2023-10-30T19:37:39.705Z
How to attach score to each individual embedded document in Atlas search
149
null
[ "queries" ]
[ { "code": "db.alphabet.find().limit(2).order({letter: 1})db.alphabet.find().order({letter: 1}) .limit(2)", "text": "Hi all,Will I get different results based on the order of FindOptions chained to my find() query?\nI.e. will db.alphabet.find().limit(2).order({letter: 1}) return something else than db.alphabet.find().order({letter: 1}) .limit(2) (assuming I created documents in a random, unordered fashion)?\nI’m currently watching a tutorial, and it looked like it might matter. Unfortunately, the docs I linked above don’t say (or I missed it)…", "username": "_Christian" }, { "code": "limitsortsortlimitlimit", "text": "Hi Christian,The order in which limit and sort are set on the query cursor doesn’t affect how the query is executed. MongoDB will always run sort first and then limit.Considering the situation where the limit is executed first, it’s not very useful because it’s impossible to know what you are limiting.", "username": "Artur_57972" }, { "code": "", "text": "the options are set on client once, and then sent to servers for processing in one go. So the ordering in your code never matters.", "username": "Kobe_W" }, { "code": "", "text": "Do you know where in the docs that’s explained, by any chance? ", "username": "_Christian" }, { "code": "", "text": "I took a look at cursor’s documentation for sort and limit and this information is not there. I have this knowledge from internal training.", "username": "Artur_57972" }, { "code": "", "text": "Take a look at", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Does the order of FindOptions matter?
2023-10-30T10:50:10.903Z
Does the order of FindOptions matter?
166
null
[ "containers", "cluster-to-cluster-sync" ]
[ { "code": "mongosynccluster0: 'mongodb://XXXX:YYYY@host.docker.internal:30001/?&directConnection=true'\ncluster1: 'mongodb://XXXX:YYYY@host.docker.internal:30005/?&directConnection=true'\nverbosity: 'TRACE'\nlogPath: \"/var/log/mongosync\"\nport: 27182\n2023-06-01 15:06:11 {\"level\":\"info\",\"serverID\":\"9ccd6da3\",\"time\":\"2023-06-01T13:06:11.8584176Z\",\"message\":\"Maximum memory limit for CEA set to 2045591552 bytes\"}\nprogress# curl localhost:27182/api/v1/progress -POST\n{\"progress\":{\"state\":\"IDLE\",\"canCommit\":false,\"canWrite\":false,\"info\":null,\"lagTimeSeconds\":null,\"collectionCopy\":null,\"directionMapping\":null,\"mongosyncID\":\"coordinator\",\"coordinatorID\":\"\"}}\nstart# curl localhost:27182/api/v1/start -POST \n405 method not allowed\nmongosync", "text": "Hi, I need mongosync to synchronise 2 self-maneged clusters with mongosync in a docker environment. For the developpement, the 2 clusters are on the same host.\nHere is the config file :And this is the output (nothing more) :When I send progress command, i have this :And when i try others command I have the following output (start for example):It looks like mongosync does not end the init process.Thanks for your help,\nMallory.", "username": "MalloryLP" }, { "code": "", "text": "Hi Mallory,Can you try the DEBUG verbosity level instead?", "username": "Alexander_Komyagin" }, { "code": "curl localhost:27182/api/v1/start -XPOST --data '{\"source\": \"cluster0\",\"destination\": \"cluster1\"}'", "text": "It must be -XPOST, not -POST. In your case it sends GET request which is not supported by mongosync. And you have to set source and destination:curl localhost:27182/api/v1/start -XPOST --data '{\"source\": \"cluster0\",\"destination\": \"cluster1\"}'Actually I’m upset that this question is still open on this forum, looks like it’s not active at all.", "username": "Dmitry_Eroshenko1" } ]
Unable to run mongosync with 2 self-managed clusters
2023-06-01T13:17:28.033Z
Unable to run mongosync with 2 self-managed clusters
870
null
[ "replication" ]
[ { "code": "", "text": "HI Team,We have setup a replicaset in linux(centos) 1 Primary and 2 secondary. How to create SRV connection string in linux step by step.", "username": "Kiran_Joshy" }, { "code": "", "text": "Hi @Kiran_JoshyA SRV connection string is backed by two types of DNS records; SRV records and TXT records.Once you have a Fully Qualified Domain Name you will need to create SRV records.\nOne is needed for each replicaset member (target).\nThe TXT record is optional and specifies the replicaSet name and/or authentication database.This page goes into it in more detail.\nhttps://www.mongodb.com/docs/manual/reference/connection-string/#srv-connection-format", "username": "chris" } ]
Step by step for creating SRV connection string in mongodb Linux server
2023-10-30T09:52:22.090Z
Step by step for creating SRV connection string in mongodb Linux server
149
null
[ "installation" ]
[ { "code": "W: GPG error: http://repo.mongodb.org/apt/debian stretch/mongodb-org/4.0 Release: The following signatures were invalid: EXPKEYSIG 68818C72E52529D4 MongoDB 4.0 Release Signing Key <packaging@mongodb.com>\n", "text": "Could this key’s expiry be please extended, and re-published ?Key in the question is published at https://www.mongodb.org/static/pgp/server-4.0.ascThanks!", "username": "Abbe67" }, { "code": "", "text": "is this issue solved? Because i’m still seeing the old keyW: GPG error: MongoDB Repositories stretch/mongodb-org/4.0 Release: The following signatures were invalid: EXPKEYSIG 68818C72E52529D4 MongoDB 4.0 Release Signing Key packaging@mongodb.com\nE: The repository ‘MongoDB Repositories stretch/mongodb-org/4.0 Release’ is not signed.\nN: Updating from such a repository can’t be done securely, and is therefore disabled by default.\nN: See apt-secure(8) manpage for repository creation and user configuration details.", "username": "Sander_Bohm" }, { "code": "", "text": "Same answer:", "username": "chris" } ]
MongoDB 4.0 Debian GPG key expired a few days ago
2023-04-27T12:14:36.964Z
MongoDB 4.0 Debian GPG key expired a few days ago
2,268
null
[]
[ { "code": "", "text": "Hello everyone, do you happen to have a tutorial that can assist me in reading mongodb atlas data via pentaho pdi, on the internet there are only examples of local connection", "username": "Diego_Martins" }, { "code": "", "text": "Please help. I am trying to connect to mongodb atlas vlusing pentaho over the internet. I have a org VPN and the IPs are whitelisted at both ends. Have set up a load balancer at atlas with 3 sub nodes behind it. Trying to connect to the load balancer but failing.From my understanding, pentaho needs an IP that is to be resolved to connect. Howver atlas doesn’t have a dns resolving ips\nWhen we tried connecting using visual studio code using the domain name directly, i was able to connect to atlas database", "username": "Satish_Kumar4" }, { "code": "", "text": "I would like to have it too, pls", "username": "Samuel_Alexandre_Oliveira" } ]
Connect mongodb to pentaho pdi
2021-02-25T19:43:59.071Z
Connect mongodb to pentaho pdi
3,009
https://www.mongodb.com/…37e93bd7fbd.jpeg
[]
[ { "code": "", "text": "Hello,\nI am doing this tutorialWith this repo:MongoDB DREAM: Developer Relation Engagement & Activity Metrics - GitHub - mongodb-developer/dream: MongoDB DREAM: Developer Relation Engagement & Activity MetricsMy repo is here:Contribute to coding-to-music/mongodb-scrape-YouTube-stats-pipeline development by creating an account on GitHub.When I make a commit in my repo, it says the application is deployed, yet the code in the repo is not in the dashboard. I am supposed to paste it in one-by-one or is it supposed to be populated and synced with my repo?Thanks", "username": "Tom_Connors" }, { "code": "//realm", "text": "Hi @Tom_Connors,Have you been through these steps first (first deployment section in the readme)?Contribute to coding-to-music/mongodb-scrape-YouTube-stats-pipeline development by creating an account on GitHub.Looks like there is an issue with your repo. You have a Realm app in / and one in /realm.I think the idea wasn’t to fork this repo and then reuse the same repo to sync the Realm app with your new Realm app. This repo was created just to share the template to get you started. Then you can optionally sync and deploy through Github with a blank repo I guess.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "For a YouTube Stats Tutorial involving creating functions, it’s up to you whether you want to use GitHub or manually create the code. Using GitHub can be helpful for version control and collaboration with others, as you can host your code there and easily share it. However, manually creating functions within your preferred programming environment is also perfectly fine, especially for a small-scale project or personal tutorial. It depends on your specific needs and preferences.", "username": "Sebastian_Miller" } ]
YouTube Stats Tutorial - creating functions - can use GitHub or must manually create?
2022-01-17T22:15:21.674Z
YouTube Stats Tutorial - creating functions - can use GitHub or must manually create?
2,037
null
[ "serverless" ]
[ { "code": "", "text": "I use google cloud run to connect atlas serverless instance, in the monitor I found that each read operation cost 20-50ms, in the oppsite, write operation is far more faster than read, which is weird I think.Is there anyway to optimize read performance in MongoDB Atlas serverless instances? I don’t want to use VPC to connect Atlas Serverless instances, it will create 50 static ips which cost too much for a small application.", "username": "Xu_Jingxin" }, { "code": "", "text": "Hi Xu_JingxinThank you for the question- have you created indexes for your query? I would start with this resource to see if there’s anything you can do to optimize query performance.Thank you", "username": "Anurag_Kadasne" }, { "code": "", "text": "Hi AnurageMy collection size the small (<100 documents) and there is nothing other than query by id, so I think it’s not the matter of index.", "username": "Xu_Jingxin" }, { "code": "", "text": "Hi Xu_JingxinThank you for the correspondence over Direct Message. Confirming that you now see a latency of 4ms 75% of the time. The issue earlier could possibly be because of a cold start issue on the Google Cloud Run side. Please let us know if you have other questions.Best Regards,\nAnurag Kadasne", "username": "Anurag_Kadasne" } ]
Has anyone tested the latency from google cloud run to mongo serverless through public internet connection?
2023-10-27T05:51:00.840Z
Has anyone tested the latency from google cloud run to mongo serverless through public internet connection?
223
null
[ "atlas-cluster" ]
[ { "code": "", "text": "I’ve got a free version of mongoDB for storing data for applications in which I didn’t change anything and suddenly everything stopped working.\nMy role under Database Access: atlasAdmin@admin, Authentication Method: SCRAM. I am the only admin.\nAnd I am logged into the application via:\nUsername\nAuthentication TypeI didn’t even know about the existence of: Certificate.\nI accessed a link similar to the following:\nmongodb+srv://fullstack:@cluster0.o1opl.mongodb.net/nameApp?retryWrites=true&w=majority", "username": "cyxapik_VID" }, { "code": "mongosh", "text": "@cyxapik_VID - Welcome to the community.Do you have any more information regarding your application attempting to connect? What driver and driver version?Are you able to connect to the same cluster using MongoDB Compass or mongosh?Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hi, Jason .Thanks, but the problem disappeared by itself in a couple of days after it appeared. Probably it was some technical work of mongoDB developers.\nMy applications were working properly for several months, I didn’t make any changes to them, and one day they stopped working, it’s not my fault.\nI use Node.js 14.17.0 for deployment.\nLocally Node.js v18.18.0.\nI am also using the mongoose offecial library.\n“mongoose”: “^6.10.4”,\n“mongoose-unique-validator”: “^3.1.0”.Thanks for the feedback and for contributing to the community .", "username": "cyxapik_VID" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoNetworkError: certificate has expired
2023-10-29T20:04:17.359Z
MongoNetworkError: certificate has expired
162
null
[]
[ { "code": "", "text": "Out of tricks now. Installation fails due to untrusted link/cert.:~$ curl -fsSL {pgp.link}/server-7.0.asc | \\sudo gpg -o /usr/share/keyrings/mongodb-server-7.0.gpg \\–dearmorcurl: (60) SSL certificate problem: unable to get local issuer certificatecurl failed to verify the legitimacy of the server and therefore could notestablish a secure connection to it. To learn more about this situation andhow to fix it, please visit the web page mentioned above.File ‘/usr/share/keyrings/mongodb-server-7.0.gpg’ exists. Overwrite? (y/N) Ygpg: no valid OpenPGP data found.echo “deb [ arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb-server-7.0.gpg ] {repo}/apt/ubuntu jammy/mongodb-org/7.0 multiverse” | sudo tee /etc/apt/sources.list.d/mongodb-org-7.0.list\ndeb [ arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb-server-7.0.gpg ] {repo}/apt/ubuntu jammy/mongodb-org/7.0 multiverse:~$ sudo apt-get updateCertificate verification failed: The certificate is NOT trusted. The certificate issuer is unknown. Could not handshake: Error in the certificate verification. [IP: ::ffff:146.112.61.104 443]\nReading package lists… Done\nW: Failed to fetch https://repo.mongodb.org/apt/ubuntu/dists/jammy/mongodb-org/7.0/InRelease Certificate verification failed: The certificate is NOT trusted. The certificate issuer is unknown. Could not handshake: Error in the certificate verification. [IP: ::ffff:146.112.61.104 443]\nW: Some index files failed to download. They have been ignored, or old ones used instead.openpgp f", "username": "Dev_Engine" }, { "code": "{pgp.link}/server-7.0.aschttps://pgp.mongodb.com/server-7.0.asc{pgp.link}pgp[.]link", "text": "SSL certificate problem: unable to get local issuer certificateHi @Dev_Engine, I see you’re using {pgp.link}/server-7.0.asc, the official instructions state to use https://pgp.mongodb.com/server-7.0.asc. Can you confirm you’re entering as the documentation suggests?If you’re pasting it in as such {pgp.link} into bash, it’s likely being evaluated to pgp[.]link, which appears to be a suspicious website that has an invalid SSL certificate, and the MongoDB PGP key information would NOT be there.", "username": "Jacob_Latonis" } ]
Mongo Install fails for community edition on Ubuntu
2023-10-30T19:45:39.690Z
Mongo Install fails for community edition on Ubuntu
148
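For completeness, here is the same install sequence from the thread with the key and repository URLs spelled out the way the official MongoDB 7.0 instructions for Ubuntu 22.04 (jammy) give them; the only assumption is that the machine really is on jammy, so adjust the release name otherwise.

# Import the MongoDB 7.0 public GPG key from the official host (not pgp[.]link)
curl -fsSL https://pgp.mongodb.com/server-7.0.asc | \
  sudo gpg -o /usr/share/keyrings/mongodb-server-7.0.gpg --dearmor

# Register the official repository for Ubuntu 22.04 (jammy)
echo "deb [ arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb-server-7.0.gpg ] https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/7.0 multiverse" | \
  sudo tee /etc/apt/sources.list.d/mongodb-org-7.0.list

# Refresh the package index and install the server packages
sudo apt-get update
sudo apt-get install -y mongodb-org

Separately, the apt output in the thread shows repo.mongodb.org being answered from 146.112.61.104 with an untrusted certificate, which looks like a DNS-filtering or security appliance intercepting the request rather than MongoDB's repository itself; if that is what is happening, it would need to be resolved on the network side (proxy or DNS policy), not by changing these commands.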
null
[ "queries", "kotlin" ]
[ { "code": "collection.find(\n Filters.eq(Movie::title.name, \"Shrek\")\n).firstOrNull()\neqeq(@Nullable final TItem value)\n\n// Example\ncollection.find<User>(Document().append(\"email\", email)).firstOrNull()\neq(final String fieldName, @Nullable final TItem value)\n\n// Example\ncollection.find<User>(eq(\"email\", email)).firstOrNull()\ncollection.find(\n Filters.eq(Movie::title.name, \"Shrek\")\n).firstOrNull()\n", "text": "As of this time of writing, this page: https://www.mongodb.com/docs/drivers/kotlin/coroutine/current/quick-reference/Shows a quick reference ofBut none of the rest of the documents mention this anywhere else, are they confusing it with how KMongo use to do it? In here https://www.mongodb.com/docs/drivers/kotlin/coroutine/current/migrate-kmongo/#construct-queriesWe see, MongoDB Kotlin Driver only supports two eq methodsUsing Documentsor using BuildersThe API does not seem to support this syntax at all when trying it in a real project", "username": "FranzVz" }, { "code": "Movie::title.Movie::title.nameString", "text": "Isn’t Movie::title. a Kotlin property reference, so Movie::title.name is of type String?", "username": "Jeffrey_Yemin" } ]
Kotlin Driver Quick Reference wrong?
2023-10-20T16:02:46.619Z
Kotlin Driver Quick Reference wrong?
229
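The reply above is the whole answer: Movie::title.name is an ordinary Kotlin String (the property's name, "title"), so the quick-reference snippet is not a third query syntax; it resolves to the same eq(fieldName, value) builder overload described in the migration guide. A rough, self-contained sketch, where the Movie class, database, collection, and connection string are illustrative rather than taken from the thread:

import com.mongodb.client.model.Filters
import com.mongodb.kotlin.client.coroutine.MongoClient
import kotlinx.coroutines.flow.firstOrNull
import kotlinx.coroutines.runBlocking

// Hypothetical document class used only for illustration.
data class Movie(val title: String, val year: Int? = null)

fun main() = runBlocking {
    // Placeholder connection string and namespace.
    val client = MongoClient.create("mongodb://localhost:27017")
    val collection = client.getDatabase("sample_mflix").getCollection<Movie>("movies")

    // Movie::title.name evaluates to the plain String "title", so this call...
    val byPropertyRef = collection.find(
        Filters.eq(Movie::title.name, "Shrek")
    ).firstOrNull()

    // ...is exactly the eq(fieldName, value) overload with the field name spelled out.
    val byFieldName = collection.find(Filters.eq("title", "Shrek")).firstOrNull()

    println(byPropertyRef == byFieldName)
    client.close()
}

The property-reference form simply avoids hard-coding the field name as a string literal: if the property is renamed, the query follows it.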
null
[ "compass", "atlas-search" ]
[ { "code": "[\n {\n $search: {\n compound: {\n must: [\n {\n compound: {\n must: [\n {\n text: {\n query: \"group\",\n path: \"documentGroupId\",\n },\n },\n {\n range: {\n path: \"uploadDate\",\n gt: {\n $date:\n \"2023-08-14T00:00:00Z\",\n },\n },\n },\n ],\n },\n },\n ],\n },\n index: \"default\",\n },\n },\n]\n", "text": "Hi,I ran this query successfully from the atlas search portal - uploadDate is a date type, found in all records:when trying to run it in compass on the same database, I get this error:mongot returned an error :: caused by :: “compound.must[0].compound.must[1].range.gt.type” is requiredwhat does this error mean and how do I resolve it?", "username": "GUY_SOFFER_ZAMRANY1" }, { "code": "$searchgtISODate()[\n {\n $search: {\n compound: {\n must: [\n {\n compound: {\n must: [\n {\n text: {\n query: \"group\",\n path: \"documentGroupId\",\n },\n },\n {\n range: {\n path: \"uploadDate\",\n gt: ISODate(\"2023-08-14T00:00:00Z\"),\n }\n },\n ]\n }\n }\n ]\n },\n index: \"default\"\n }\n }\n]\nmongoshmongosh", "text": "Hi @GUY_SOFFER_ZAMRANY1 - Welcome to the community.I did receive the same error when copying and pasting the same $search stage you provided.Can you try with the following search query to see if it works in Compass? I changed the gt value to use ISODate():I tried the above in my test environment on Compass version 1.40.2 which returned a document successfully both from the embedded mongosh and the Aggregations portion of the Compass UI when filtering for documents.If that doesn’t work, does it return the same or different error? Additionally, when you state the error is from compass - Is the error from the Aggregation stage in the UI or from the embedded mongosh within Compass? Lastly, please also provide the index definition and sample documents if the above doesn’t help.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Weird error from compass on a valid query
2023-10-26T07:58:07.161Z
Weird error from compass on a valid query
207
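The underlying difference in the thread is the date literal: the Atlas Search tester accepted the extended-JSON {$date: ...} form, while Compass (and the drivers) want an actual BSON date, which is what ISODate() produces. As a rough illustration, the equivalent query issued from the Node.js driver might look like the sketch below; the connection string, database, and collection names are placeholders, and the doubly nested compound from the thread is flattened for brevity.

// Hypothetical Node.js driver equivalent of the thread's query; names are placeholders.
const { MongoClient } = require('mongodb');

async function run() {
  const client = new MongoClient('mongodb+srv://<user>:<password>@<cluster-host>/');
  try {
    const coll = client.db('<dbName>').collection('<collName>');
    const results = await coll
      .aggregate([
        {
          $search: {
            index: 'default',
            compound: {
              must: [
                { text: { query: 'group', path: 'documentGroupId' } },
                // A JavaScript Date is serialized as a BSON date, which is what the
                // range operator expects; no { $date: ... } wrapper is needed here.
                { range: { path: 'uploadDate', gt: new Date('2023-08-14T00:00:00Z') } },
              ],
            },
          },
        },
        { $limit: 5 },
      ])
      .toArray();
    console.log(results.length, 'documents matched');
  } finally {
    await client.close();
  }
}

run().catch(console.error);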
null
[ "atlas" ]
[ { "code": "Error: invalid bare key character: @\n", "text": "There seems to be an issue with Atlas when trying to delete/log-out of my profile.\nI get an error, in both instances, which says:Could someone help me?-Thaks ", "username": "telometto" }, { "code": "Error: invalid bare key character: @\n", "text": "Hi @telometto - Welcome to the community.There seems to be an issue with Atlas when trying to delete/log-out of my profile.\nI get an error, in both instances, which says:I’ve not encountered this error before but i’d try:If the following doesn’t help, please contact the Atlas in-app chat support team in regards to your account deletion attempts. The following Delete an Atlas Account documentation may be useful in this scenario.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hi um where do you delete your account?", "username": "Justin_Thornhill" }, { "code": "", "text": "Hi @Justin_Thornhill,Hi um where do you delete your account?If you’re referring to an Atlas account, please review the following Delete an Atlas Account documentation.Regards,\nJason", "username": "Jason_Tran" } ]
Cannot delete (nor log-out) of Atlas profile
2023-01-20T21:07:27.278Z
Cannot delete (nor log-out) of Atlas profile
1,234