image_url | tags | discussion | title | created_at | fancy_title | views
---|---|---|---|---|---|---
null | [
"queries",
"node-js",
"mongoose-odm",
"connecting",
"atlas-cluster"
] | [
{
"code": "DATABASE_URI=mongodb+srv://<my user name>:<my password>@my DB name>.mgxnkhk.mongodb.net/swic?retryWrites=true&w=majority\n try {\n const conn = await mongoose.connect(process.env.DATABASE_URI, {\n useUnifiedTopology: true,\n useNewUrlParser: true,\n });\n\n console.log(`Database connected: ${conn.connection.host}`);\n } catch (error) {\n console.log(`Error: ${error.message}`);\n process.exit(1);\n }\n};\n[nodemon] starting `node app.js`\nApp running on port 5000\nError: querySrv ECONNREFUSED _mongodb._tcp.swic.mgxnkhk.mongodb.net\n[nodemon] app crashed - waiting for file changes before starting...\n",
"text": "I’m using Ubuntu in WSL2 in Windows 11.\nNode v18.18.2 with MongooseI am in China and use Astrill VPN.My connection string is:Config file:When I do ‘npm run server’, I see this:I’ve tried turning VPN on and off, switching VPN routes, all Windows firewalls are turned off, I am using the nearest cluster (Hong Kong), my DNS servers are set to Google.I am able to connect when setting Astrill to stealth mode - otherwise, I keep getting this error.While stealth mode is functional, I’d like to know if there is a better solution to this issue.Thanks for any pointers.",
"username": "Seb_Lex"
},
{
"code": "nslookup_mongodb._tcp.swic.mgxnkhk.mongodb.net",
"text": "Hi @Seb_Lex - Welcome to the community.I am able to connect when setting Astrill to stealth mode - otherwise, I keep getting this error.While stealth mode is functional, I’d like to know if there is a better solution to this issue.Whilst I do understand you’ve tried using Google DNS servers, it seems like (At least based off the information provided) a possible configuration issue with the VPN you are using since it works in “stealth mode”. You can maybe try doing a nslookup against _mongodb._tcp.swic.mgxnkhk.mongodb.net from stealth mode and outside of stealth mode to see what the output is for troubleshooting purposes although I can’t say what configuration(s) / changes would be required here to be able to connect using Astrill VPN.Is the use case requiring VPN usage and if so, have you considered testing with other VPN providers to see if the same errors are generated?From the above, it sounds like there isn’t anything particularly wrong with the Atlas network settings / cluster since you’re able to connect as well.Regards,\nJason",
"username": "Jason_Tran"
}
] | Error: querySrv ECONNREFUSED | 2023-10-30T08:54:04.920Z | Error: querySrv ECONNREFUSED | 161 |
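Jason's nslookup suggestion can also be reproduced from Node itself, which makes it easy to compare stealth mode against normal mode from the same environment the app runs in. A minimal sketch - the host name is taken from the error message in the thread, and this only exercises the SRV resolution step, not authentication or TLS:

```js
// Reproduces the DNS lookup the driver performs for a mongodb+srv URI,
// so it fails the same way (ECONNREFUSED/ETIMEOUT) when DNS is the problem.
const dns = require('dns').promises;

async function checkSrv(host) {
  try {
    const records = await dns.resolveSrv(`_mongodb._tcp.${host}`);
    console.log('SRV records:', records.map((r) => `${r.name}:${r.port}`));
  } catch (err) {
    console.error('SRV lookup failed:', err.code, err.message);
  }
}

checkSrv('swic.mgxnkhk.mongodb.net');
```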
null | [
"data-modeling",
"replication",
"sharding"
] | [
{
"code": "",
"text": "Hello, I had a lot of questions about the design of a certain type of system before and I thought about it for almost a year, but today is the time of implementation and I had to ask a question here for the first time.My design is multi-tenant, so as a rule, the data of all users, for example, the collection of products, should be in one place. Well, I have to give the user the possibility to, for example, return the data to yesterday’s time if he wants. My system has a lot of users and the volume of data is high, and I can’t do many deletion requests and I have to write, for example, delete a certain collection or document and replace it from the backup.I don’t see 2 more approaches.\nOne is that I create a unique group for each user for the group of products, which is an anti-design pattern, and I may have, for example, 20 million users, and on the other hand, the models are not added on new sharded nodes, but on all nodes. are applied.\nIt should be considered that these are only products and I have to do this, for example, for the collection of comments, etc., even though my system is a microservice and my databases are separate, but I think this approach is terrible.Another approach is to come and put all the data of a user in one document and increase the size of documents to more than 16 megabytes, which creates 100 more challenges.Well, do you have any other suggestions, should I look outside the box?Another case is that a set of mine has 10 indexes.\nWell, the number of my users is high, and on the other hand, the volume of documents and document production is also extremely high for the type of business.\nI have a lot of questions here\n1- Here, the database must first distinguish the user’s documents from the other user’s and then apply a filter, in my opinion, this method is problematic in high volumes.\n2- Isn’t it better to make an algorithm to find the desired document myself and not use indexes to reduce the terrible amount of indexes in RAM and increase the speed?\n3- What algorithm do you use to make the product price 20, size 40, color blue, etc. in Mango DB? I mean, what algorithm does MongoDB use for multiple profiles?\nShould I use another database?",
"username": "reza_salehi"
},
{
"code": "",
"text": "Hi Reza Salehi,I couldn’t fully understand the needs of your application, but from what I could get, the Building with Patterns series would help you with your data modeling.",
"username": "Artur_57972"
}
] | Requirements engineering in schema | 2023-10-30T11:16:13.125Z | Requirements engineering in schema | 164 |
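For question 1 in the thread above, the pattern the Building with Patterns series usually points toward is keeping all tenants in one collection and making the tenant key the leading field of every index, so queries scoped to one tenant never scan another tenant's index entries. A rough sketch, with the collection and field names assumed rather than taken from the post:

```js
// One collection for all tenants; the tenant key leads the compound index.
db.products.createIndex({ tenantId: 1, createdAt: -1 });

// Typical tenant-scoped query: the equality on tenantId uses the index prefix,
// so it stays selective even with millions of users' documents in the collection.
db.products.find({ tenantId: "tenant-123", price: { $lte: 20 } })
  .sort({ createdAt: -1 })
  .limit(50);
```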
null | [
"dot-net",
"production"
] | [
{
"code": "RenderRender(IBsonSerializer<TDocument> documentSerializer, IBsonSerializerRegistry serializerRegistry)Render(SearchDefinitionRenderContext<TDocument> renderContext)",
"text": "This is the general availability release for the 2.21.0 version of the driver.The main new features in 2.21.0 include:All Render methods in Atlas Search builders have a new signature:Render(IBsonSerializer<TDocument> documentSerializer, IBsonSerializerRegistry serializerRegistry)Changed toRender(SearchDefinitionRenderContext<TDocument> renderContext)The full list of issues resolved in this release is available at CSHARP JIRA project.Documentation on the .NET driver can be found here.",
"username": "Boris_Dogadov"
},
{
"code": "Search$embeddedDocument",
"text": "Hi,\nThis new it’s very good, but your documentation it’s not updated.\nHere You can see NOTEThe Search class does not currently support the $embeddedDocument operator.And can you please put an example as for the other types?Thanks for everything,",
"username": "C_Steeven"
},
{
"code": "embeddedDocument",
"text": "embeddedDocumentThank you @C_Steeven for bringing this to our attention, we’ll be updating the documentation accordingly.",
"username": "Boris_Dogadov"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | .NET Driver 2.21.0 Released | 2023-08-10T18:47:54.559Z | .NET Driver 2.21.0 Released | 715 |
null | [] | [
{
"code": "",
"text": "Due to political unrest internet access is heavily restricted in our country. I am trying to connect to our atlas cluster through protonVPN. But it is not working. I am obligated to use a VPN to have internet access to allow my expressJS rest api to be able to connect to our atlas cluster. I am accessing other websites. I already added my new ip address to the allowed IP addresses on my atlas cluster but when launching my expressjs app it says that I should add the (already added Ip) to the whitelisted adresses. I there any restriction on mongodb atlas concerning VPNs.",
"username": "Mohamed_Cisse2"
},
{
"code": "",
"text": "It’s called MongoDB Atlas IP Whitelisting Issue\nCheck this out this may helpTroubleshoot your connection error and find solutions to the most common MongoDB Atlas login issues that relate to IP whitelisting.",
"username": "Shubham_Jaiswal2"
},
{
"code": "",
"text": "Apparently, ProtonVPN blocks some ports, including 27017.See replies: Reddit - Dive into anything",
"username": "Toby_Flenderson"
}
] | Accessing database via proton VPN | 2023-06-03T03:37:09.989Z | Accessing database via proton VPN | 663 |
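Toby's point about ProtonVPN blocking port 27017 can be checked with a plain TCP connection attempt from the same machine, before involving the driver at all. A small sketch - the host below is a placeholder; substitute one of the shard hosts behind your cluster's SRV record:

```js
const net = require('net');

// Placeholder host - replace with a real shard host from your Atlas cluster.
const socket = net.connect(27017, 'cluster0-shard-00-00.example.mongodb.net');
socket.setTimeout(5000);

socket.on('connect', () => { console.log('port 27017 is reachable'); socket.end(); });
socket.on('timeout', () => { console.log('timed out - the port is likely blocked'); socket.destroy(); });
socket.on('error', (err) => console.log('connection failed:', err.message));
```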
null | [
"aggregation",
"golang",
"atlas-search"
] | [
{
"code": "func (m *MarketDB) Test() {\n\ttext := \"WROSship\"\n\n\tpipeLine := []bson.M{\n\t\t{\"$search\": bson.M{\n\t\t\t\"index\": NFT_INDEX,\n\t\t\t\"text\": bson.M{\n\t\t\t\t\"query\": text,\n\t\t\t\t\"path\": []string{\"searchKeyword\"},\n\t\t\t},\n\t\t}},\n\t}\n\n\tctx := context.Background()\n\tif cursor, err := m.nfts.Aggregate(ctx, pipeLine, options.Aggregate()); err != nil {\n\t\tpanic(err)\n\t} else {\n\t\tdefer cursor.Close(ctx)\n\n\t\tvar nfts []*types.Nft\n\n\t\tif err = cursor.All(ctx, &nfts); err != nil {\n\t\t\tpanic(err)\n\t\t} else {\n\t\t\tfmt.Println(len(nfts))\n\t\t}\n\t}\n}\n{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"searchKeyword\": {\n \"analyzer\": \"lucene.standard\",\n \"type\": \"string\"\n }\n }\n }\n}\nWM4CHA 1285 0xc234e58cb077ccf4fbe57d3ac27db0d1cdfe0473 Taoist\n",
"text": "hi i developing serverthis time we apply Atlas cluster and AtlasSearchso need test Searchmy code like thisnfts collection’s total Document is 13M\nand that query result count is 35000but it takes 30swhat is my fault??\nmy search index like thisand searchKeyword like thisthank to read",
"username": "_WM2"
},
{
"code": "",
"text": "Hi @_WM2 are you looking for an exact match in your queries?If so there are a few strategies to achieve this which could be worth testing.",
"username": "Elle_Shwer"
}
] | Why my search query is so slow...? | 2023-10-26T08:37:29.292Z | Why my search query is so slow…? | 212 |
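One of the exact-match strategies Elle refers to is indexing the field with the lucene.keyword analyzer, so the whole string is stored as a single token and the text operator only matches documents whose value equals the query string. A sketch based on the field names in the post - the index name is assumed to be the value behind the Go NFT_INDEX constant:

```js
// Search index definition sketch (keyword analyzer => whole-string matching):
// {
//   "mappings": {
//     "dynamic": false,
//     "fields": {
//       "searchKeyword": { "type": "string", "analyzer": "lucene.keyword" }
//     }
//   }
// }

// With that index, the same query shape only returns exact matches, and a
// $limit keeps the server from shipping all 35,000 hits back in one go.
db.nfts.aggregate([
  {
    $search: {
      index: "nft_index", // assumed value of the Go NFT_INDEX constant
      text: { query: "WROSship", path: "searchKeyword" }
    }
  },
  { $limit: 100 }
])
```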
null | [
"aggregation",
"queries",
"data-modeling",
"python",
"spark-connector"
] | [
{
"code": " Code Snippet:spark.conf.set(\"com.mongodb.spark.sql.connector.read.partitioner.Partitioner\", \"com.mongodb.spark.sql.connector.read.partitioner.PaginateIntoPartitionsPartitioner\")\n#Have used all the given partitioner from the official docs [link](https://www.mongodb.com/docs/spark-connector/master/configuration/read/#std-label-conf-paginateintopartitionspartitioner)\nspark.conf.set(\"spark.mongodb.read.partitioner.options.partition.field\", \"t_uid\")\nspark.conf.set(\"spark.mongodb.read.partitioner.options.partition.size\", \"128\")\nspark.conf.set(\"spark.mongodb.read.partitioner.options.samples.per.partition\", \"1\")\ndf = spark.read.format(\"mongodb\")\\\n .option(\"spark.mongodb.read.database\", \"db_name\")\\\n .option(\"spark.mongodb.read.collection\", \"collection_name\")\\\n .option(\"sql.inferSchema.mapTypes.enabled\", \"true\")\\\n .option(\"sql.inferSchema.mapTypes.minimum.key.size\", \"512\")\\\n .option(\"outputExtendedJson\", \"true\")\\\n .option(\"spark.mongodb.read.connection.uri\", \"mongodb+srv://***:***@***.mongodb.net/\").load()\ndisplay(df)\nConfiguration Details:Py4JJavaError: An error occurred while calling o610.load.\n: scala.MatchError: com.mongodb.spark.sql.connector.schema.InferSchema$1@74b6c909 (of class com.mongodb.spark.sql.connector.schema.InferSchema$1)\n\tat org.apache.spark.sql.catalyst.encoders.RowEncoder$.encoderForDataType(RowEncoder.scala:80)\n\tat org.apache.spark.sql.catalyst.encoders.RowEncoder$.encoderForDataType(RowEncoder.scala:116)\n\tat org.apache.spark.sql.catalyst.encoders.RowEncoder$.$anonfun$encoderForDataType$2(RowEncoder.scala:129)\n\tat scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)\n\tat scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)\n\tat scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)\n\tat scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)\n\tat scala.collection.TraversableLike.map(TraversableLike.scala:286)\n\tat scala.collection.TraversableLike.map$(TraversableLike.scala:279)\n\tat scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:198)\n\tat org.apache.spark.sql.catalyst.encoders.RowEncoder$.encoderForDataType(RowEncoder.scala:126)\n\tat org.apache.spark.sql.catalyst.encoders.RowEncoder$.$anonfun$encoderForDataType$2(RowEncoder.scala:129)\n\tat scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)\n\tat scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)\n\tat scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)\n\tat scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)\n\tat scala.collection.TraversableLike.map(TraversableLike.scala:286)\n\tat scala.collection.TraversableLike.map$(TraversableLike.scala:279)\n\tat scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:198)\n\tat org.apache.spark.sql.catalyst.encoders.RowEncoder$.encoderForDataType(RowEncoder.scala:126)\n\tat org.apache.spark.sql.catalyst.encoders.RowEncoder$.$anonfun$encoderForDataType$2(RowEncoder.scala:129)\n\tat scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)\n\tat scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)\n\tat scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)\n\tat scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)\n\tat scala.collection.TraversableLike.map(TraversableLike.scala:286)\n\tat 
scala.collection.TraversableLike.map$(TraversableLike.scala:279)\n\tat scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:198)\n\tat org.apache.spark.sql.catalyst.encoders.RowEncoder$.encoderForDataType(RowEncoder.scala:126)\n\tat org.apache.spark.sql.catalyst.encoders.RowEncoder$.encoderFor(RowEncoder.scala:75)\n\tat org.apache.spark.sql.catalyst.encoders.RowEncoder$.apply(RowEncoder.scala:63)\n\tat org.apache.spark.sql.catalyst.encoders.RowEncoder$.apply(RowEncoder.scala:67)\n\tat org.apache.spark.sql.Dataset$.$anonfun$ofRows$1(Dataset.scala:101)\n\tat org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1113)\n\tat org.apache.spark.sql.SparkSession.$anonfun$withActiveAndFrameProfiler$1(SparkSession.scala:1120)\n\tat com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)\n\tat org.apache.spark.sql.SparkSession.withActiveAndFrameProfiler(SparkSession.scala:1120)\n\tat org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:98)\n\tat org.apache.spark.sql.execution.datasources.v2.DataSourceV2Utils$.loadV2Source(DataSourceV2Utils.scala:146)\n\tat org.apache.spark.sql.DataFrameReader.$anonfun$load$1(DataFrameReader.scala:333)\n\tat scala.Option.flatMap(Option.scala:271)\n\tat org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:331)\n\tat org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:226)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat java.lang.reflect.Method.invoke(Method.java:498)\n\tat py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)\n\tat py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:397)\n\tat py4j.Gateway.invoke(Gateway.java:306)\n\tat py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)\n\tat py4j.commands.CallCommand.execute(CallCommand.java:79)\n\tat py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:195)\n\tat py4j.ClientServerConnection.run(ClientServerConnection.java:115)\n\tat java.lang.Thread.run(Thread.java:750)\n",
"text": "Hi EveryoneAm trying to read the MongoDB collection of having around 90M documents to a Spark dataframe in Databricks. When read the collection am getting this below error. Can someone help me here to resolve this error or give me some idea to resolve it. Code Snippet:Configuration Details:Mongo DB Spark Connector - org.mongodb.spark:mongo-spark-connector_2.12:10.1.1Bson - org.mongodb:bson:4.10.0Databricks Runtime Verison - 13.3 LTS (includes Apache Spark 3.4.1, Scala 2.12)Collection Size - 340 GBAvg Document Size - 10KBTotal No. of Docs - 90 Million",
"username": "Mani_Perumal"
},
{
"code": "",
"text": "Hi @Mani_Perumal ,The error seems to be related to the schema inference process. Before fixing that can you try using with Apache spark version 3.2.4? (I see you are using version Spark 3.4.1)If the problem still persists , you can try setting explicit schema instead of relying on automatic schema inference. If you’re not sure about the schema, you can inspect a few documents from your MongoDB collection to determine the schema.",
"username": "Prakul_Agarwal"
},
{
"code": "",
"text": "Hi @Prakul_AgarwalHave tried with the Spark version 3.2.1, but the issue still persist. Agreed, as you said can give the schema manually and read the documents, but in my case have hundreds of collections with the same volume and each documents in the collection contains 250+ fields. It’s very hard to manually feed the schema for all pipeline and it’s just one DB and have many other in my plate to do it.Can you please suggest some other workaround to sort this issue.",
"username": "Mani_Perumal"
},
{
"code": "",
"text": "Hi @Prakul_AgarwalHave given schema explicitly by reading some documents and still am getting the same schema inference error. Don’t know how to proceed further. Can you give me some light on this, how to resolve it.",
"username": "Mani_Perumal"
},
{
"code": "",
"text": "Possibly due to empty arrays that are nested in your collection structure somewhere with 10.1+ connector. (MongoDB would need to confirm).https://jira.mongodb.org/browse/SPARK-412",
"username": "mikev"
},
{
"code": "",
"text": "Thanks @mikev for reporting this as a bug with more detail in the Mongo Jira board.\nhttps://jira.mongodb.org/browse/SPARK-412@Prakul_Agarwal Can you please check this one.",
"username": "Mani_Perumal"
}
] | Getting scala.MatchError: com.mongodb.spark.sql.connector.schema.InferSchema when read a large collection of 90M documents | 2023-09-24T08:22:42.743Z | Getting scala.MatchError: com.mongodb.spark.sql.connector.schema.InferSchema when read a large collection of 90M documents | 680 |
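Both suggestions in the thread - Prakul's (derive an explicit schema by inspecting documents) and mikev's (SPARK-412, empty nested arrays breaking inference) - can be checked from mongosh before re-running the Spark job. A rough sketch; the collection name and array path below are placeholders, not taken from the post:

```js
// Pull a few random documents to see which fields and types an explicit schema must cover.
db.collection_name.aggregate([{ $sample: { size: 5 } }])

// Count documents holding an empty array at a suspect path; empty arrays leave the
// element type undetermined, which is the situation SPARK-412 describes.
db.collection_name.countDocuments({ "some_field.some_array": { $size: 0 } })
```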
[] | [
{
"code": "",
"text": "My cluster starts responding extremely slow, and I don’t even understand what is the root cause of that. I didn’t change anything on the application side, the traffic also normal, no spikes, no problems.\nI see in the monitoring that for some reasons there is significant drop in operations, and increase in latency. What can cause this problem, is there a way to debug it somehow? this is my setup:\nVERSION\n5.0.21REGION\nAWS / Frankfurt (eu-central-1)CLUSTER TIER\nM20 (General)Screenshot 2023-10-28 at 19.59.152538×1120 326 KB\nScreenshot 2023-10-28 at 19.59.072532×1152 350 KB\nScreenshot 2023-10-28 at 19.58.572556×1174 287 KB",
"username": "Sergey_Zelenov"
},
{
"code": "",
"text": "It turns out be problems with resources. After upgrading cluster everything is back to normal.",
"username": "Sergey_Zelenov"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Cluster is unresponsive for a long time | 2023-10-28T17:59:55.749Z | Cluster is unresponsive for a long time | 153 |
null | [
"crud"
] | [
{
"code": "{\n \"_id\": {\n \"$oid\": \"653c626b6ae0997703ddb119\"\n },\n \"count\": 39028,\n \"totalSize\": 10014720\n},\ndb.collection.updateMany({}, { $set: { \"avgValue\": \"$totalSize\" / \"$count\"} } )\n{\n \"_id\": {\n \"$oid\": \"653c626b6ae0997703ddb119\"\n },\n \"count\": 39028,\n \"totalSize\": 10014720,\n \"avgSize\": {\n \"$numberDouble\": \"NaN\"\n }\n},\n{\n \"_id\": {\n \"$oid\": \"653c626b6ae0997703ddb119\"\n },\n \"count\": 39028,\n \"totalSize\": 10014720,\n \"avgSize\": 256.60346418\n}\n",
"text": "HiCan some one tell me the correct way to do this:I got this collection:need to add a new fieldI’m getting this result:And I just need:Thanks",
"username": "Oscar_Cervantes"
},
{
"code": "db.collection.updateMany({}, [ { $set: { \"avgValue\": \"$totalSize\" / \"$count\"} } ] )\n",
"text": "When your expression in a $set refers to field of the document you want to update you need to use\nhttps://www.mongodb.com/docs/v5.3/tutorial/update-documents-with-aggregation-pipeline/.In your case, you would need to doSo simply enclose the $set object into a matching pair of square brackets.",
"username": "steevej"
}
] | Correct way to add a new field with a division between existing fields | 2023-10-28T02:03:47.138Z | Correct way to add a new field with a division between existing fields | 157 |
null | [
"aggregation",
"node-js"
] | [
{
"code": "[\n {\n _id: ObjectId(\"653c4c017800342a3bedb82e\"),\n type: 'buy',\n item: 555,\n amount: 1,\n priceEach: 100000\n },\n {\n _id: ObjectId(\"653c4c017800342a3bedb830\"),\n type: 'buy',\n item: 384,\n amount: 2,\n priceEach: 70000\n },\n {\n _id: ObjectId(\"653c4c017800342a3bedb831\"),\n type: 'buy',\n item: 384,\n amount: 1,\n priceEach: 50000\n },\n {\n _id: ObjectId(\"653c4c017800342a3bedb831\"),\n type: 'sell',\n item: 384,\n amount: -1,\n priceEach: 50000\n }\n]\ndb.collection.aggregate([\n {\n $group: {\n _id: \"$item\",\n totalBuyPrice: {\n $sum: { // Only if type === 'buy'\n $multiply: [\n \"$priceEach\",\n \"$amount\"\n ]\n }\n },\n totalBuyAmount: // $sum $amount if type === 'buy'\n totalAmount: {\n $sum: \"$amount\"\n }\n }\n },\n {\n $project: {\n _id: 1,\n avg: {\n $divide: [\n \"totalBuyPrice\",\n \"totalBuyAmount\"\n ]\n },\n amount: \"$totalAmount\"\n }\n }\n])\n",
"text": "I want to aggregate the average of a collection only for documents with type ‘buy’.\nSo the problem is how to get the if condition around the sum expression and same for totalBuyAmount. Check the comments in the code These are example documents:I need to totalBuyPrice only if type === ‘buy’",
"username": "olli962_N_A"
},
{
"code": "\"totalBuyAmount\" : { \"$sum\" : { \"$cond\" : {\n \"$if\" : { \"$eq\" : [ \"$type\" , \"buy\" ] } ,\n \"$then\" : \"$amount\" ,\n \"$else\" : 0\n} } }\n",
"text": "Looks like what you need is $cond.Example:",
"username": "steevej"
}
] | Conditional expressions | 2023-10-28T07:31:14.194Z | Conditional expressions | 169 |
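Putting steevej's $cond answer back into the original pipeline gives something like the sketch below. Note the $ prefixes on $totalBuyPrice and $totalBuyAmount inside $project, which were missing in the question's draft, and that $divide will error for items with no buy documents (totalBuyAmount of 0):

```js
db.collection.aggregate([
  {
    $group: {
      _id: "$item",
      totalBuyPrice: {
        $sum: {
          $cond: {
            if: { $eq: ["$type", "buy"] },
            then: { $multiply: ["$priceEach", "$amount"] },
            else: 0
          }
        }
      },
      totalBuyAmount: {
        $sum: { $cond: { if: { $eq: ["$type", "buy"] }, then: "$amount", else: 0 } }
      },
      totalAmount: { $sum: "$amount" }
    }
  },
  {
    $project: {
      _id: 1,
      avg: { $divide: ["$totalBuyPrice", "$totalBuyAmount"] },
      amount: "$totalAmount"
    }
  }
])
```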
[] | [
{
"code": "",
"text": "Good day please I noticed that when I ran npm start or npm start server, the result was “Server Running On Port 8080” later it brought out an Error. The image will explain more though\nIMG-20231027-WA0004960×540 37.2 KB\nAlso there was a new update in redux js if am not mistaken, that createStore was deprecated and I was advices to use configureStore, okay when I used configureStore on running the command npm start in my terminal it prompted error though causing I using configureStore especially, but after working on my code it didn’t prompt that error again though the file I inputted it didn’t have an errors in my code or what u will call “terminals” in vs code, so pls am I safe like that since it didn’t prompt any errors in my terminal after running the command npm start? Also I was told to use legacy_createStore instead from someone to replace createStore, I used it there was not error prompt in my terminal even without doing extensive work like what I did when inputting configureStore to replace createStore. Please which should I use is it configureStore(what vs code and online told me to use and the updated version of createStore according to reduxjs toolkit I installed) or legacy_createStore (that my friend(Suraj) said I should use an indian?\nPlease the screenshots will explain better\nIMG-20231014-WA0131(1)800×276 44.4 KB\nIMG-20231027-WA0005656×536 60.3 KB\nThe first image was when I was told createStore is deprecated, the second is when I inputted configureStore as prompted by vs code in the first popup highlight in the first image in vs code.",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "@Aravind_Asha_Ramachandran @Besart_H @Carlos_Vilas @dhanya_sankar @Emmanuel_Frimpong @Francesco_Maccari @Guy_Ulrich @Harshit @Ivan_Poyatos_Luque . Pls any help especially for the first image especially no 2 and 3 image are kind of sorted out thanks for ur time",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "@Andrew_M1 @Bansi_Dholakiya @Daniel_Gonzalez_Reina @Esha_Mahendra @Gokhan_Watches @hiroki_kawakami , @Ilya_Diallo @John_Sewell @Kushagra_Kesav , @Laura_Zhukas1\nPls any help, for the first image though that’s my only issue though but please if u can also provide feedback on the use of configureStore rather than createStore. Thanks",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "What is your connect string?\nAre you using localhost in it?\nIf yes try to replace with 127.0.0.1 and see if it works\nThe error indicates issue with IPV6\nPlease search our forum threads about default usage of Ipv6 vs Ipv4 address",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Okay yeah I am using a local host though, there’s no connection string unless u mean mongodb connection url link\nOkay will check the ipv6 something u talked about",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "Pls how will I check ur forum thread about the default usage of ipv6 and ipv4?",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "I searched for \"ipv6\"in search button under this poat it didn’t show anything",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "Sir pls what of my second and 3rd image? Or what I talked about concerning the 2nd and third image about using configureStore to replace createStore in the third image pls am I good to go?",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "Good day sir I followed ur instruction concerning changing or replacing with 127.0.0.1 and it brought out yhis below\n20231029_0152491920×864 159 KBAlso I retried that I use b4 and this was the outcome though I believe my editor could have contributed to this issue though , the image is below, thanks\n20231029_0148591920×864 130 KBf",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "Pls any help on this?\nIMG-20231029-WA00001080×66 19 KB",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "Should I use legacy_createStore instead since it’s part of the options I know to replace createStore?",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "@Ajey_K , @Una_Liang , @xiongyan_zhong @Yogesh_Here @Zokhid_Abdullaev @Vaishali_Panchal @William_Byrne_III @Samyak_Jain5 @Rising_Star @Quonn_Bernard , pls any help on this.Pls am battling on this thing since and the help according to my post I tried it out but it hasn’t helped at all",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "Pls what am encountering that I thouhht was solved happened again. The inage is below\n20231029_1437321920×864 184 KB",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "Check if the MongoDB server is running. You can do this by running the command sudo systemctl status mongod on Linux or brew services list on macOS.If the server is not running, start it using the command sudo systemctl start mongod on Linux or brew services start mongodb-community on macOS.If the server is running, check if it is listening on the correct port (27017). You can do this by running the command netstat -an | grep 27017 on Linux or lsof -i :27017 on macOS.If the server is not listening on the correct port, you can change the port in the MongoDB configuration file (mongod.conf) and restart the server.If the server is running and listening on the correct port, check if there are any firewall rules blocking the connection. You can temporarily disable the firewall to test if this is the issue.If none of the above steps work, try reinstalling MongoDB and make sure to follow the installation instructions carefully.bro, This type of question can be directly asked to ChatGPT.",
"username": "xiongyan_zhong"
},
{
"code": "",
"text": "Yeah thanks sir for the help. Though it’s windows am using though, you mentioned something I should check if it listening on the correct port and yes it is which is port 8080.\nI tried changing my mongodb connection string from “mongodb://localhost:27017/” to \"mongodb://localhost:0.0.0.0:27017, it worked though but wasn’t giving me what I want. So I tried something else.Yeaj I was finally able to resolve the issue though, someone was saying my mondo db service wasn’t started\nSo I tried starting it myself by doikg this:",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "I aslo have some questions to ask u, though based on my images below\n20231030_1033481920×864 110 KB\n20231030_1034161920×864 117 KB\n20231030_1034301920×864 174 KB\nI am more concerned where it said module not found is it that I should install the module using command prompt or terminal in the project itself?\nIt’s also saying res is undefined so am thinking of passing it as res.send =“” that is the output would be nothing",
"username": "Somtochukwu_Kelvin_Akuche"
}
] | Error: connect ECONNREFUSED ::1:27017, connect ECONNREFUSED 127.0.0.1:27017 | 2023-10-27T12:15:04.352Z | Error: connect ECONNREFUSED ::1:27017, connect ECONNREFUSED 127.0.0.1:27017 | 302 |
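Ramachandra's suggestion in code form - a minimal sketch of the Node driver connection using the IPv4 loopback address, which avoids the ::1 (IPv6) resolution shown in the error:

```js
const { MongoClient } = require('mongodb');

// 127.0.0.1 instead of localhost, so Node cannot resolve the host to ::1 (IPv6).
const client = new MongoClient('mongodb://127.0.0.1:27017');

client.connect()
  .then(() => console.log('Connected successfully'))
  .catch((err) => console.error(err))
  .finally(() => client.close());
```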
null | [
"spark-connector"
] | [
{
"code": "batch_df1 = spark.read.format (\"mongodb\").option(\"spark.mongodb.connection.uri\", \"mongodb+srv://xxx: xxx@xxx/admin?readPreference=secondaryPreferred&retryWrites=true&w=majority\")\\\n.option('spark.mongodb.database', 'db1') \\\n.option('spark.mongodb.collection' , 'collection1') \\\n.load()\nbatch_df2 = spark.read.format (\"mongodb\").option(\"spark.mongodb.connection.uri\", \"mongodb+srv://xxx: xxx@xxx/admin?readPreference=secondaryPreferred&retryWrites=true&w=majority\")\\\n.option('spark.mongodb.database', 'db1') \\\n.option('spark.mongodb.collection' , 'collection2') \\\n.load()\nError in callback <bound method UserNamespaceCommandHook.post_command_execute of <dbruntime. DatasetInfo.UserNamespaceCommandHook obiect at 0x7fb94374e250>> (for post_execute): ",
"text": "Hi, I am currently using mongodb v10.1.1 connector to read some tables from my mongo collection. (source)However I am able to read some collections and some collections are giving errors.Example (1)Example (2)Example (1) works fine, but example (2) gives an error:\nError in callback <bound method UserNamespaceCommandHook.post_command_execute of <dbruntime. DatasetInfo.UserNamespaceCommandHook obiect at 0x7fb94374e250>> (for post_execute): How can I troubleshoot this?",
"username": "Yh_G"
},
{
"code": "readPreference=secondaryPreferred",
"text": "Hi @Yh_G ,There isnt anything obvious wrong that I can see here. We can try a couple of things to debug",
"username": "Prakul_Agarwal"
},
{
"code": "",
"text": "Hi @Yh_GAm also facing the same issue on my end. Did you get any workaround for this issue, please share it here. It will be helpful for the entire community.@Prakul_Agarwal: Please share is it a known issue or any mitigation is out there.",
"username": "Mani_Perumal"
},
{
"code": "",
"text": "@Mani_Perumal This is not a known issue.\nAre you seeing the exact same issue - successful read on some collection while failing on others?\nCan you also please share the mongoDB server version, and the Spark version you are using.",
"username": "Prakul_Agarwal"
},
{
"code": "",
"text": "Hi @Prakul_AgarwalWe are using 7.0.2 version with Replica Set and Cluster Tier M400.\nFor spark version, have tried 3.21, 3.3.0, 3.3.2 and 3.4.1 with Scala version 2.12 and Python 3.10.1.\nMongo DB Connector: org.mongodb.spark:mongo-spark-connector_2.12:10.1.1",
"username": "Mani_Perumal"
},
{
"code": "",
"text": "@Yh_G @Mani_Perumal thanks for the additional info. Unable to repro the issue. Possible to provide the full stacktrace to help understand the issue further and if its coming from the Spark nodes or the connector?",
"username": "Prakul_Agarwal"
},
{
"code": "Spark Connector: 2.12:10.2.0\nDatabricks Runtime version: 11.3 LTS (includes Apache Spark 3.3.0, Scala 2.12)/ 13.3 LTS (includes Apache Spark 3.4.1, Scala 2.12)\nPy4JJavaError: An error occurred while calling o351.load.\n: scala.MatchError: com.mongodb.spark.sql.connector.schema.InferSchema$1@409386d7 (of class com.mongodb.spark.sql.connector.schema.InferSchema$1)\n\tat org.apache.spark.sql.catalyst.encoders.RowEncoder$.externalDataTypeFor(RowEncoder.scala:240)\n\tat org.apache.spark.sql.catalyst.encoders.RowEncoder$.externalDataTypeForInput(RowEncoder.scala:236)\n\tat org.apache.spark.sql.catalyst.expressions.objects.ValidateExternalType.<init>(objects.scala:1962)\n\tat org.apache.spark.sql.catalyst.encoders.RowEncoder$.$anonfun$serializerFor$2(RowEncoder.scala:157)\n\tat org.apache.spark.sql.catalyst.expressions.objects.MapObjects$.apply(objects.scala:859)\n\tat org.apache.spark.sql.catalyst.SerializerBuildHelper$.createSerializerForMapObjects(SerializerBuildHelper.scala:203)\n\tat org.apache.spark.sql.catalyst.encoders.RowEncoder$.serializerFor(RowEncoder.scala:156)\n\tat org.apache.spark.sql.catalyst.encoders.RowEncoder$.$anonfun$serializerFor$3(RowEncoder.scala:199)\n\tat scala.collection.TraversableLike.$anonfun$flatMap$1(TraversableLike.scala:293)\n\tat scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)\n\tat scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)\n\tat scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)\n\tat scala.collection.TraversableLike.flatMap(TraversableLike.scala:293)\n\tat scala.collection.TraversableLike.flatMap$(TraversableLike.scala:290)\n\tat scala.collection.mutable.ArrayOps$ofRef.flatMap(ArrayOps.scala:198)\n\tat org.apache.spark.sql.catalyst.encoders.RowEncoder$.serializerFor(RowEncoder.scala:192)\n\tat org.apache.spark.sql.catalyst.encoders.RowEncoder$.$anonfun$serializerFor$3(RowEncoder.scala:199)\n\tat scala.collection.TraversableLike.$anonfun$flatMap$1(TraversableLike.scala:293)\n\tat scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)\n\tat scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)\n\tat scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)\n\tat scala.collection.TraversableLike.flatMap(TraversableLike.scala:293)\n\tat scala.collection.TraversableLike.flatMap$(TraversableLike.scala:290)\n\tat scala.collection.mutable.ArrayOps$ofRef.flatMap(ArrayOps.scala:198)\n\tat org.apache.spark.sql.catalyst.encoders.RowEncoder$.serializerFor(RowEncoder.scala:192)\n\tat org.apache.spark.sql.catalyst.encoders.RowEncoder$.apply(RowEncoder.scala:73)\n\tat org.apache.spark.sql.catalyst.encoders.RowEncoder$.apply(RowEncoder.scala:81)\n\tat org.apache.spark.sql.Dataset$.$anonfun$ofRows$1(Dataset.scala:96)\n\tat org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:968)\n\tat org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:93)\n\tat org.apache.spark.sql.streaming.DataStreamReader.loadInternal(DataStreamReader.scala:215)\n\tat org.apache.spark.sql.streaming.DataStreamReader.load(DataStreamReader.scala:162)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat java.lang.reflect.Method.invoke(Method.java:498)\n\tat py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)\n\tat 
py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:380)\n\tat py4j.Gateway.invoke(Gateway.java:295)\n\tat py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)\n\tat py4j.commands.CallCommand.execute(CallCommand.java:79)\n\tat py4j.GatewayConnection.run(GatewayConnection.java:251)\n\tat java.lang.Thread.run(Thread.java:750)\nSpark Connector: 2.12:10.2.0\nDatabricks Runtime version: 12.2 LTS (includes Apache Spark 3.3.2, Scala 2.12)\nError in callback <bound method UserNamespaceCommandHook.post_command_execute of <dbruntime.DatasetInfo.UserNamespaceCommandHook object at 0x7fbe8c4f0040>> (for post_execute):\n\nStreamingQueryException: [STREAM_FAILED] Query [id = e4fe97e6-ead4-490c-b8e4-10b8e0b4d763, runId = 6c59eb1f-66e4-4492-b7d6-aa20bd08b2c2] terminated with exception: Column `location_geohashs` has a data type of struct<lst:array<>,str:string>, which is not supported by JSON.\n",
"text": "Hi @Prakul_AgarwalConfiguration 1:Full Stack Trace:Configuration 2:Full Stacktrace:Second configuration gives some clarity on where the error is, but not sure what exactly the error means here. Happy to provide more steps if you look for.",
"username": "Mani_Perumal"
}
] | MongoDB Spark Connector v10.1.1 - Failing to read from some mongo tables | 2023-04-28T03:22:45.175Z | MongoDB Spark Connector v10.1.1 - Failing to read from some mongo tables | 1,190 |
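The second stack trace points at a likely culprit directly: location_geohashs.lst is an empty array in at least some documents, so its element type cannot be inferred. A mongosh sketch to confirm that before changing the Spark job - the collection name is a placeholder, and the field path is taken from the error message:

```js
// How many documents carry an empty lst array inside location_geohashs?
db.collection_name.countDocuments({ "location_geohashs.lst": { $size: 0 } })

// Inspect one offending document to decide what an explicit schema (or a cleanup) should do.
db.collection_name.findOne({ "location_geohashs.lst": { $size: 0 } })
```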
null | [
"kafka-connector"
] | [
{
"code": "",
"text": "I am setting up a mongo replication between 2 mongo clusters using change.data.capture.handler=com.mongodb.kafka.connect.sink.cdc.mongodb.ChangeStreamHandler\neverything is working fine, collections are replicated. But, the only problem I have is the Mongo source connector creates the topic as prefix.db.collection1, prefix.db.collection2, etc … The mongo sink connector creates the MongoDB collections on the destination cluster with name prefix.db.collection1, prefix.db.collection2, is there any way I can configure the sink connector to keep the original collection names the same.\nFor now, to fix this, I use topic.override.prefix.db.collection1.collection=collection1 and the collection name collection1 is kept on the destination cluster, but the DB has several hundred of collections, I do not want to set this instruction several hundred of times to cover all possible topics. Is there anyway this can be achieved to keep the original collection name the same on the destination server.Thank you.",
"username": "SVC_RVBRAND"
},
{
"code": "",
"text": "Anyone has any idea of using some kind of regex grouping, back reference , like\ntopic.override.prefix.db.(.*).collection=$1or Some SMT to perform this.\nDoes the sink connector support dynamic namespace mapping to set the target collection name?Thank you.",
"username": "SVC_RVBRAND"
},
{
"code": "",
"text": "We introduced dynamic topic mapping in an earlier version MongoDB Connector for Apache Kafka 1.4 Available Now | MongoDB BlogIn your use case you might need to implement a custom TopicMapper interface, include it in a JAR file and place it in the classpath for the kafka connectors",
"username": "Robert_Walters"
},
{
"code": "",
"text": "can u give me some example? The link u mentioned is not active, right?",
"username": "Pham_The_Giau"
}
] | Howto override topics collection name as one instruction | 2021-12-09T04:21:57.160Z | Howto override topics collection name as one instruction | 3,658 |
null | [
"database-tools",
"backup"
] | [
{
"code": "",
"text": "We backed up mongodb data using mongodump. When we tried to restore the data using mongorestore, sometimes we get this error - : insertion error: not master. Our logic has two retries if there are errors in the first attempt. At the end, we found that this error shows for a particular collection’s restore in all three attempts. I read online posts, people pointed out this error is usually due to the write operation on the secondary. But if that is the cause, how come the restore operations on other collections were successful? I am not convinced this is the cause in our restore process. Can anyone shed lights on why mongorestore gives this error? Thanks.log messages -",
"username": "Yanni_Zhang"
},
{
"code": "connected to: mongodb://localhost:2001/\n2023-10-26T16:34:39.576+0530\tFailed: (NotWritablePrimary) not primary\nrs.status()rs.conf()mongodumpmongorestore",
"text": "Hi @Yanni_Zhang and welcome to MongoDB community forums!!As per my understanding,I read online posts, people pointed out this error is usually due to the write operation on the secondary.the write operation on Secondary node is not allowed and would give you the error as:In order to understand the issue further, could you help me with some information as mentioned below:Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Hi, Aasawari,The error was seen when we ran the mongorestore command.The deployment type you are on(e.g. no of secondary, arbiters etc)\nThere is one primary pod and 2 secondary pods, no aribter.The output for rs.status() or rs.conf()\nWe did not capture it.The MongoDB version you are on.\n4.0.25A code snippet that would help me reproduce the error.\nWe observed it in restore process. One collection’s restore out of all had this problem. It’s not always reproducible. We just use mongodump https://www.mongodb.com/docs/database-tools/mongodump/ for backup, and use mongoretore mongorestore — MongoDB Manual for restoring it to another mongo instance.Can you help me with the mongodump and mongorestore command that you are using.\nSee aboveFinally, could you confirm if there is any operation performed on the configuration of the replica set while performing the restore operation.No configuration on the replica set.",
"username": "Yanni_Zhang"
},
{
"code": "mongorestore",
"text": "Hi @Yanni_Zhang and thank you for writing back.The MongoDB version you are on.\n4.0.25The MongoDB version mentioned above quiet old and was released around June 2021.MongoDB 4.0 is no longer supported, and upgrading to a newer version can provide improved stability, bug fixes, and additional features. You can refer to the EOL Support Policies for more information on MongoDB versions and their support status.However, based on the error message, it appears that you are connected to a secondary node that cannot accept write operations. I would suggest verifying the current primary node and then trying the mongorestore operation again.Also please let us know if you are still seeing the issue consistently after the upgrade.Best Regards\nAasawari",
"username": "Aasawari"
}
] | Mongorestore gives not master error | 2023-10-25T05:38:11.378Z | Mongorestore gives not master error | 238 |
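Before re-running mongorestore, it is worth confirming from the shell which member is currently primary and pointing the restore at it (or at the full replica set connection string). On a 4.0-era deployment this can be done with the legacy isMaster helper; a small sketch:

```js
// Connected to any member of the replica set:
db.isMaster().primary        // host:port of the current primary (legacy 4.0 helper)

// Or list the primary via replica set status:
rs.status().members.filter((m) => m.stateStr === 'PRIMARY').map((m) => m.name)
```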
[
"delhi-mug",
"bengaluru-mug",
"chennai-mug",
"mumbai-mug",
"mug-virtual-india"
] | [
{
"code": "Developer Advocate @ MongoDBData Engineer @ Webstrings Global, Community CreatorMarketing @ Amazon India",
"text": "Copy of vMUG India MongoDB User Groups - Event Deck (2)960×540 82 KBJoin MongoDB India Virtual User Group on Saturday, October 28th for a day of learning, meeting other developers in the region, and taking back home some exciting MongoDB Swag!We’ll begin the event with introductions and engaging ice-breakers. Then, we’ll start with an overview session on MongoDB Atlas. Make sure you join on time to participate in the jumpstart session and get all the knowledge you need to attend the workshop later.Then we’ll explore the world of Generative AI and take a deep dive into that through a hands-on workshop on using Generative AI with MongoDB Atlas!There’ll be ample opportunities during the talks and workshop to win some cool and exclusive MongoDB Swag!To Register - Please click on the “Registration” link below if you plan to attend. You need to be signed in to access the link.Event Type: Online\nLink(s):\nRegistration LinkDeveloper Advocate @ MongoDB\n__Data Engineer @ Webstrings Global, Community Creator__Marketing @ Amazon India",
"username": "Satyam"
},
{
"code": "",
"text": "Hi guys,\nThis is a very interesting event and I want to attend. However, the event time is 1am in my time zoom. Would be at all possible to record this ans send me the link? thanks",
"username": "john_Doe3"
},
{
"code": "",
"text": "Excited for this amazing session!Let’s get connected over LinkedIn and I’m happy to networkinghttps://www.linkedin.com/in/pranam-bhat-11670689",
"username": "Pranam_Bhat"
},
{
"code": "",
"text": "Hey @john_Doe3,Yes, the session would be recorded. Additionally, you can look out for events in your region too: MongoDB User Groups and EventsRegards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "Hey folks! Once again a big thank you to everyone who joined.As promised, here’s the recording link:\nZoom Recording Link\nPasscode: f9%q$8GcHoping everybody had a great time!Regards,\nSatyam",
"username": "Satyam"
}
] | MongoDB India Virtual MUG: Deep diving into MongoDB Atlas and Generative AI | 2023-10-11T06:14:11.932Z | MongoDB India Virtual MUG: Deep diving into MongoDB Atlas and Generative AI | 1,411 |
null | [
"java"
] | [
{
"code": "Exception in thread \"main\" java.lang.NoClassDefFoundError: com/mongodb/internal/connection/StreamFactoryHelper\n at com.mongodb.client.internal.MongoClientImpl.getStreamFactory(MongoClientImpl.java:235)\n at com.mongodb.client.internal.MongoClientImpl.createCluster(MongoClientImpl.java:227)\n at com.mongodb.client.internal.MongoClientImpl.<init>(MongoClientImpl.java:73)\n at com.mongodb.client.MongoClients.create(MongoClients.java:108)\n at com.mongodb.client.MongoClients.create(MongoClients.java:93)\n at com.mongodb.client.MongoClients.create(MongoClients.java:78)\n at com.mongodb.client.MongoClients.create(MongoClients.java:61)\n at org.example.Main.main(Main.java:10)\nCaused by: java.lang.ClassNotFoundException: com.mongodb.internal.connection.StreamFactoryHelper\n at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:641)\n at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:188)\n at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)\n ... 8 more\npackage org.example;\n\nimport com.mongodb.client.MongoClient;\nimport com.mongodb.client.MongoClients;\nimport com.mongodb.client.MongoCollection;\nimport com.mongodb.client.MongoDatabase;\n\npublic class Main {\n public static void main(String[] args) {\n MongoClient client = MongoClients.create(\"mongodb+srv://kkartik:<password>@agilecluster.8wtmt68.mongodb.net/?retryWrites=true&w=majority\");\n\n MongoDatabase db = client.getDatabase(\"SampleDB\");\n\n MongoCollection col = db.getCollection(\"sampleCollection\");\n }\n}\n<dependencies>\n <dependency>\n <groupId>org.mongodb</groupId>\n <artifactId>mongodb-driver-sync</artifactId>\n <version>4.11.0</version>\n </dependency>\n </dependencies>\n",
"text": "I’m trying to connect to MongoDB atlas using Java and I’m getting the following error:Here’s my code:Here’s my pom.xml dependancy:Any help is appreciated!",
"username": "kushagra_kartik"
},
{
"code": "Exception in thread \"main\" java.lang.NoClassDefFoundError\njava.lang.NoClassDefFoundErrormongodb-driver-xxx.jar",
"text": "Hey @kushagra_kartik,Welcome to MongoDB Community forums!The java.lang.NoClassDefFoundError generally indicates that a class is missing during runtime (not during the build/compile process). To address this, please open the “Run Configurations” dialog for your project and ensure that the mongodb-driver-xxx.jar is listed in the Classpath tab.Also, if you’d like to reference code samples, please take a look at the Java - Quick Start Project.Best regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Where do I download the mongo-driver jar from?",
"username": "kushagra_kartik"
},
{
"code": "/*\n * Copyright 2008-present MongoDB, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\npackage com.mongodb.internal.connection;\n\nimport com.mongodb.MongoClientException;\nimport com.mongodb.MongoClientSettings;\n <dependency>\n <groupId>org.mongodb</groupId>\n <artifactId>mongodb-driver-core</artifactId>\n <version>4.11.0</version>\n </dependency>\n <dependency>\n <groupId>org.mongodb</groupId>\n <artifactId>mongodb-driver-sync</artifactId>\n <version>4.11.0</version>\n </dependency>\n",
"text": "I see that the class “StreamFactoryHelper” is in “driver-core”:Within your pom.xml file, add another dependency for “mongodb-driver-core” with a version matching the “mongodb-driver-sync” like this:",
"username": "Nitin_Katkam"
},
{
"code": "",
"text": "I am getting the same error despite adding the mongodb-driver-core dependancy to my pom.xml file.",
"username": "kushagra_kartik"
}
] | Unable to connect MonogDB atlas to Java | 2023-10-27T20:32:46.757Z | Unable to connect MonogDB atlas to Java | 211 |
null | [
"node-js",
"atlas-cluster",
"containers"
] | [
{
"code": "MongoServerSelectionError: connection timed out\n at Timeout._onTimeout (/opt/discord/node_modules/mongodb/lib/sdam/topology.js:292:38)\n at listOnTimeout (node:internal/timers:559:17)\n at processTimers (node:internal/timers:502:7) {\n reason: TopologyDescription {\n type: 'ReplicaSetNoPrimary',\n servers: Map(3) {\n '***.mongodb.net:27017' => [ServerDescription],\n '***.mongodb.net:27017' => [ServerDescription],\n '***.mongodb.net:27017' => [ServerDescription]\n },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n setName: 'atlas-gxcew9-shard-0',\n maxElectionId: new ObjectId(\"7fffffff0000000000000081\"),\n maxSetVersion: 7,\n commonWireVersion: 0,\n logicalSessionTimeoutMinutes: null\n },\n code: undefined,\n [Symbol(errorLabels)]: Set(0) {}\n}\nconst url = `mongodb+srv://${environment.mongodb.username}:${environment.mongodb.password}@${environment.mongodb.host}/app_logs?retryWrites=true&keepAlive=true`;\nthis.client = new MongoClient(url, {\n loggerLevel: LoggerLevel.ERROR,\n });\n\nawait this.client.connect();\nawait this.client.db('app_logs').command({ ping: 1 });\n",
"text": "Hello!my Mongo connection seems to time out everyday, after about 7-10 hours of my app running with the error:I connect doing:My app runs in a Docker container. I’ve read something about whitelisting the IPs but if I could have some instructions on how to accomplish this with Docker, that’d be very much appreciated.I’m running Node v16.6.1, Docker 3.8 and Mongo package v4.13.0Thank you",
"username": "Kane"
},
{
"code": "",
"text": "Same here, did you find out why this happens? it happens to me each 5-7 hours",
"username": "Alberto_Oteiza"
},
{
"code": "",
"text": "Hey @Alberto_Oteiza,Welcome to the MongoDB Community forums Same here, did you find out why this happens? it happens to me each 5-7 hoursI would recommend creating a new topic with all the deployment details and the specific issue you are facing. This will make it more likely for the community to interact and assist you better.I also recommend reading Getting Started with the MongoDB Community: README.1ST for some tips to help improve your community outcomes.Regards,\nKushagra Kesav",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
] | MongoServerSelectionError: connection timed out | 2022-12-23T05:47:51.220Z | MongoServerSelectionError: connection timed out | 1,946 |
[
"react-native"
] | [
{
"code": "const user = useUser()\n useEffect(() => {\n const setupChangeStream = async () => {\n try {\n const collection = user!\n .mongoClient('mongodb-atlas')\n .db('dev')\n .collection('Job')\n\n for await (const change of collection.watch()) {\n // The change event will always represent a newly inserted perennial\n const { documentKey, fullDocument } = change\n console.log(`new document: ${documentKey}`, fullDocument)\n }\n } catch (error) {\n console.error(error)\n }\n }\n\n setupChangeStream()\n }, [user])\n",
"text": "I’m use collection.watch() return error [AbortError: Aborted]use by Realm react-nativethis codeI have set accordingly Examples of every step finished.Thank you.",
"username": "Piyapan_Duwao"
},
{
"code": "",
"text": "I am the same issue as well. Any update for this issue?",
"username": "Chongju_Mai"
}
] | Collection.watch() error on React Native SDK | 2023-10-05T03:07:14.203Z | Collection.watch() error on React Native SDK | 290 |
null | [
"node-js",
"mongoose-odm",
"connecting",
"atlas-cluster"
] | [
{
"code": "Error: queryTxt ETIMEOUT cluster0.xxxxxxx.mongodb.net\n at QueryReqWrap.onresolve [as oncomplete] (node:internal/dns/promises:252:17) {\n errno: undefined,\n code: 'ETIMEOUT',\n syscall: 'queryTxt',\n hostname: 'cluster0.xxxxxxx.mongodb.net'\n}\n\nconst connect = async () => {\n try {\n await mongoose.connect(process.env.MONGODB_URI, {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n });\n console.log('Connected to mongoDB Atlas!');\n } catch (error) {\n console.log(error);\n }\n};\nconnect().then(() => {\n try {\n app.listen(process.env.PORT, () => {\n console.log(`Server running on port ${process.env.PORT}`);\n });\n } catch (error) {\n console.log(error);\n }\n});\nMONGODB_URI = mongodb+srv://yyyyyyyyy:xxxxxxxxxx@cluster0.gdrr9qe.mongodb.net/?retryWrites=true&w=majority\n\n",
"text": "when i connect to my atlas it giving this error and i have no idea what its about so i cant debug it, it just happened suddenly coz it was working okay before all thishere is how am connecting to the atlas:here is my URI:",
"username": "iSteven_Zion_N_A"
},
{
"code": "Error: queryTxt ETIMEOUT cluster0.xxxxxxx.mongodb.net\n at QueryReqWrap.onresolve [as oncomplete] (node:internal/dns/promises:252:17) {\n errno: undefined,\n code: 'ETIMEOUT',\n syscall: 'queryTxt',\n hostname: 'cluster0.xxxxxxx.mongodb.net'\n}\n8.8.8.88.8.4.4",
"text": "Hi @iSteven_Zion_N_A,Welcome to the MongoDB Community It seems like the DNS issue, try using Google’s DNS 8.8.8.8 and 8.8.4.4. Also, you can refer to one of responses here to learn more about it.In case you’re still encountering further issues whilst connecting, please provide any error messages received.Best regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | Unable to connnect to my MongoDB Atlas | 2023-10-29T08:25:47.357Z | Unable to connnect to my MongoDB Atlas | 179 |
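If changing the operating system's DNS servers is not convenient, the same idea can be tried inside the Node process before Mongoose connects: the driver's queryTxt/querySrv lookups go through Node's resolver, so this is one way to test whether the default resolver is the problem. A sketch, not a guaranteed fix:

```js
const dns = require('dns');

// Route this process's DNS lookups (including the driver's TXT/SRV queries
// for mongodb+srv URIs) through Google's resolvers.
dns.setServers(['8.8.8.8', '8.8.4.4']);

// ...then connect exactly as before:
// await mongoose.connect(process.env.MONGODB_URI, { useNewUrlParser: true, useUnifiedTopology: true });
```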
[] | [
{
"code": "",
"text": "I’ve had an index named ‘id’ for a long time. But it seems no matter how I set it up, have tried different ways multiple times, its not being used and my bill is through the roof. But the only way to reduce the bill is using the indexes but I can’t figure out how to switch the index being used.\nimage731×89 2.66 KB\nAbove is the attached indexes",
"username": "Jacob_N_A2"
},
{
"code": "",
"text": "\nThis is the data per document. 313 documents total",
"username": "Jacob_N_A2"
},
{
"code": "",
"text": "Hey @Jacob_N_A2,Welcome to the MongoDB Community!But it seems no matter how I set it up, have tried different ways multiple times, its not being usedCould you please share the query you are running against this collection? It will allow us to replicate the issue in our environment and help you better. Meanwhile, you can refer to Text Indexes - documentation to read more about it.In case if your data is hosted on MongoDB Atlas, you may want to explore MongoDB’s enhanced full-text search solution called Atlas Search. However, to better assist you, could you please share the specific use case you are trying to address?Looking forward to your response.Best regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | How do I use indexes? | 2023-10-29T00:23:30.783Z | How do I use indexes? | 140 |
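A quick way to see whether a query is actually using the 'id' index (or any index) is to run it with explain and look at the winning plan: IXSCAN means an index was used, COLLSCAN means the whole collection was read. A sketch with a placeholder collection name and query shape - substitute the real query that is driving the bill:

```js
// Shows which plan the server chose for this query shape.
db.items
  .find({ name: "example" })
  .explain("executionStats")
  .queryPlanner.winningPlan
```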
null | [
"aggregation",
"queries",
"node-js",
"mongoose-odm"
] | [
{
"code": "const vendorDetailSchema = new mongoose.Schema(\n {\n // som other fields\n locationDetails: {\n type: mongoose.Schema.Types.ObjectId,\n ref: 'LocationDetail',\n required: true,\n },\n});\nconst locationDetailSchema = new mongoose.Schema({\n locationType: {\n type: String,\n enum: ['USER', 'VENDOR', 'DELIVERY_ADDRESS'],\n required: true,\n },\n gpsInfo: {\n type: {\n type: String,\n enum: ['Point'],\n required: true,\n },\n coordinates: {\n type: [Number],\n required: true,\n },\n },\n locationName: String,\n locationAddress: String,\n locationGooglePlacesJSON: String, // This is a JSON String from Google Places API\n});\n\nlocationDetailSchema.index({ gpsInfo: '2dsphere' });\n const vendors = await LocationDetail.find({\n gpsInfo: {\n $nearSphere: {\n $geometry: {\n type: 'Point',\n coordinates: [userLocation[1], userLocation[0]],\n },\n $maxDistance: 500 * 1000, // kilometers multiplier\n },\n },\n })\n .limit(limit)\n .exec();\n const vendors = await VendorDetail.find({\n 'locationDetails.gpsInfo': {\n $nearSphere: {\n $geometry: {\n type: 'Point',\n coordinates: [userLocation[1], userLocation[0]],\n },\n $maxDistance: 500 * 1000, // kilometers multiplier\n },\n },\n })\n .populate({\n path: 'locationDetails',\n })\n .limit(limit)\n .exec();\n",
"text": "Hi, I have a schema, for a vendor and a schema for locationsThe vendor has the location model as a reference like so:The location model is defined like this, with a 2dsphere index included:When I try to get the nearest locations directly, I can retrieve them properly…However, if I try to populate the data into the VendorDetail model, it throws an error… Is this not possible? If not, how else would one go about sorting location data that is stored in another document?The error message that I get is:\nerror processing query: ns=test.vendordetailsTree: GEONEAR field=locationDetails.gpsInfo maxdist=500000 isNearSphere=0\\nSort: {}\\nProj: {}\\n planner returned error :: caused by :: unable to find index for $geoNear queryI am assuming this is related to me populating the data… It works fine when directly done on the LocationDetail… Any suggestions how to solve this issue? Thanks!",
"username": "Andreas_Galster"
},
{
"code": " $lookup: {\n from: 'locationdetails',\n localField: 'locationDetails',\n foreignField: '_id',\n as: 'locationDetails',\n },\n",
"text": "So… I have been doing some more digging, I think the only way to make this work would be through aggregation… But aggregation also cannot work because the first step in an aggregation pipeline with something like nearSphere would have to be nearSphere, meaning something liketo get the data from the location docs wouldn’t work either… So… Hmm is the only choice to actually embed the location directly in the vendorDocument?",
"username": "Andreas_Galster"
}
] | How to populate field before running a geospatial query on it? | 2023-10-29T12:25:32.291Z | How to populate field before running a geospatial query on it? | 136 |
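For anyone hitting the same wall as the thread above: one approach that is sometimes used instead of embedding is to run the geo stage on the collection that owns the 2dsphere index and then join the vendors from there, since $geoNear is only valid as the first stage of a pipeline. This is a sketch, not a confirmed solution from the thread; the collection names assume Mongoose's default pluralization and the coordinates are placeholders.

```javascript
// Sketch: geo query first on locationdetails, then join the vendordetails
// documents that reference those locations.
const lng = 0, lat = 0; // placeholder coordinates

db.locationdetails.aggregate([
  {
    $geoNear: {                          // must be the first stage
      near: { type: "Point", coordinates: [lng, lat] },
      distanceField: "distance",
      maxDistance: 500 * 1000,           // metres
      key: "gpsInfo",
      spherical: true,
    },
  },
  {
    $lookup: {
      from: "vendordetails",
      localField: "_id",
      foreignField: "locationDetails",   // the ObjectId reference on the vendor
      as: "vendor",
    },
  },
  { $unwind: "$vendor" },
  { $limit: 20 },
]);
```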
null | [
"node-js"
] | [
{
"code": "index.js\nconst mongoClient=require(\"mongodb\").MongoClient;\nconst connectionString=\"mongodb://localhost:27017\"\nmongoClient.connect(connectionString,(err,clientObject)=>{\n if(!err) {\n console.log(\"Connected successfully\");\n }\n else{\n console.log(err)\n }\n})\n error\nconst timeoutError = new error_1.MongoServerSelectionError(`Server selection timed out after ${serverSelectionTimeoutMS} ms`, this.description);\n ^\n\nMongoServerSelectionError: connect ECONNREFUSED ::1:27017\n at Timeout._onTimeout (C:\\Users\\vv jagadeesh\\Desktop\\node_practise\\MEN\\mongodb1\\node_modules\\mongodb\\lib\\sdam\\topology.js:278:38)\n at listOnTimeout (node:internal/timers:569:17)\n at process.processTimers (node:internal/timers:512:7) {\n reason: TopologyDescription {\n type: 'Unknown',\n servers: Map(1) {\n 'localhost:27017' => ServerDescription {\n address: 'localhost:27017',\n type: 'Unknown',\n hosts: [],\n passives: [],\n arbiters: [],\n tags: {},\n minWireVersion: 0,\n maxWireVersion: 0,\n roundTripTime: -1,\n lastUpdateTime: 97501159,\n lastWriteDate: 0,\n error: MongoNetworkError: connect ECONNREFUSED ::1:27017\n at connectionFailureError (C:\\Users\\vv jagadeesh\\Desktop\\node_practise\\MEN\\mongodb1\\node_modules\\mongodb\\lib\\cmap\\connect.js:370:20)\n at Socket.<anonymous> (C:\\Users\\vv jagadeesh\\Desktop\\node_practise\\MEN\\mongodb1\\node_modules\\mongodb\\lib\\cmap\\connect.js:293:22)\n at Object.onceWrapper (node:events:628:26)\n at Socket.emit (node:events:513:28)\n at emitErrorNT (node:internal/streams/destroy:151:8)\n at emitErrorCloseNT (node:internal/streams/destroy:116:3)\n at process.processTicksAndRejections (node:internal/process/task_queues:82:21) { \n cause: Error: connect ECONNREFUSED ::1:27017\n at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1494:16) {\n errno: -4078,\n code: 'ECONNREFUSED',\n syscall: 'connect',\n address: '::1',\n port: 27017\n },\n [Symbol(errorLabels)]: Set(1) { 'ResetPool' }\n },\n topologyVersion: null,\n setName: null,\n setVersion: null,\n electionId: null,\n logicalSessionTimeoutMinutes: null,\n primary: null,\n me: null,\n '$clusterTime': null\n }\n },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n setName: null,\n maxElectionId: null,\n maxSetVersion: null,\n commonWireVersion: 0,\n logicalSessionTimeoutMinutes: null\n },\n code: undefined,\n [Symbol(errorLabels)]: Set(0) {}\n}\n\n",
"text": "",
"username": "Venkata_Jagadeesh_Vardhineni"
},
{
"code": "",
"text": "Search the forum for ECONNREFUSED and IPv6.",
"username": "steevej"
},
{
"code": "",
"text": "Good day pls am also having the same issues. Please how will I search for the ipv6 something in the forum like u said, though I saw the owner of this post under tagged suggested post by mongodb, when I typed “ipv6” it showed nothing in the search bar, pls the image below will explain better\n20231029_1437321920×864 184 KB",
"username": "Somtochukwu_Kelvin_Akuche"
}
] | I am new to mongodb . I am using nodejs comunicate with mongodb server but it throws an error any one help me | 2023-05-25T10:01:40.911Z | I am new to mongodb . I am using nodejs comunicate with mongodb server but it throws an error any one help me | 888 |
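As the forum threads steevej points to generally conclude, the error above shows the driver trying the IPv6 loopback address (::1) while mongod usually listens only on 127.0.0.1; newer Node.js versions resolve localhost to ::1. A minimal sketch of the common workaround, written with the driver's promise API:

```javascript
// Point the client at the IPv4 loopback explicitly instead of "localhost".
const { MongoClient } = require("mongodb");

const client = new MongoClient("mongodb://127.0.0.1:27017");

async function main() {
  await client.connect();
  console.log("Connected successfully");
  await client.close();
}

main().catch(console.error);
```

The other option is to make mongod listen on the IPv6 loopback as well, but changing the connection string is usually the smaller fix.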
null | [
"node-js",
"atlas"
] | [
{
"code": "",
"text": "Ive hosted my nodejs server to google cloud but the server isnt running and getting this err: MongoAPIError: URI cannot contain options with no value at parseOptionsmy password doesnt have special characters and its running properly in dev environment. what could be the problem please.",
"username": "iSteven_Zion_N_A"
},
{
"code": "",
"text": "what could be the problem please.Lots of things could be wrong.Paste the URI you are using (but of course obscure your actual server name, user id, and password) and we can try to debug together.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "heres the URI am using and ive tested locally its correctly fetching and saving data, the network access is set to all IP addresses so i dont think its IP restrictionMONGODB_URI= ‘mongodb+srv://abcd:abcd@cluster0.xkjdutx.mongodb.net/?retryWrites=true&w=majority’",
"username": "iSteven_Zion_N_A"
},
{
"code": "",
"text": "URI looks pretty normal AFAIK.\nThe error suggests that maybe something in the URI is escaping your quoting and getting expanded incorrectly in Node. My suggestion is to stare at your code until your brain hurts. ",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "It didnt work out so i switched to a different service - google compute engine and it worked okay",
"username": "iSteven_Zion_N_A"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Google cloud mongoURI error | 2023-07-01T20:08:23.988Z | Google cloud mongoURI error | 756 |
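The thread above never pinned down the root cause, but this particular parse error usually means something extra ended up in the connection string the deployed process sees (wrapping quotes, whitespace, or a trailing &). A small, hedged sketch for checking exactly what arrives in the environment variable before handing it to the driver; the variable name MONGODB_URI is assumed.

```javascript
// Print the raw value so wrapping quotes or stray characters become visible,
// then strip them defensively before constructing the client.
const { MongoClient } = require("mongodb");

const raw = process.env.MONGODB_URI ?? "";
console.log(JSON.stringify(raw)); // e.g. "'mongodb+srv://...'" reveals stray quotes

const cleaned = raw.trim().replace(/^['"‘’]|['"‘’]$/g, "");
const client = new MongoClient(cleaned);
```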
null | [
"aggregation",
"queries"
] | [
{
"code": "db={\n \"assessmentNotice\": [\n {\n \"_id\": {\n \"$oid\": \"653d5269bddbd13af40c339a\"\n },\n \"referenceNumber\": \"220000007131\",\n \"status\": \"UNMAPPED\"\n }\n ],\n \"measurementNotice\": [\n {\n \"_id\": {\n \"$oid\": \"653d5268bddbd13af40c3396\"\n },\n \"data\": {\n \"referenceNumber\": \"220000003003\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"653d5269bddbd13af40c3399\"\n },\n \"data\": {\n \"referenceNumber\": \"220000007131\"\n }\n }\n ]\n}\ndb.assessmentNotice.aggregate([\n {\n $lookup: {\n from: \"measurementNotice\",\n let: {\n ref_number: \"$data.referenceNumber\"\n },\n pipeline: [\n {\n $match: {\n $expr: {\n $eq: [\n \"$$ref_number\",\n \"$referenceNumber\"\n ]\n }\n }\n }\n ],\n as: \"lookup\"\n }\n }\n])\n[\n {\n \"_id\": ObjectId(\"653d5269bddbd13af40c339a\"),\n \"lookup\": [\n {\n \"_id\": ObjectId(\"653d5268bddbd13af40c3396\"),\n \"data\": {\n \"referenceNumber\": \"220000003003\"\n }\n },\n {\n \"_id\": ObjectId(\"653d5269bddbd13af40c3399\"),\n \"data\": {\n \"referenceNumber\": \"220000007131\"\n }\n }\n ],\n \"referenceNumber\": \"220000007131\",\n \"status\": \"UNMAPPED\"\n }\n]\n",
"text": "Hi, I have 2 collections that I would like to bring together using a field in the respective collection.I created an example in Playground. Can one of you please help me. Thank you very much!PlaygroundDB:Query:Result:",
"username": "Marcus_Baitz"
},
{
"code": "let: {\n ref_number: \"$referenceNumber\"\n },\n$match: {\n $expr: {\n $eq: [\n \"$$ref_number\",\n \"$data.referenceNumber\"\n ]\n }\n }\n\"localField\" : \"referenceNumber \" , \n\"foreignField\" : \"data.referenceNumber\" ,\n",
"text": "It looks like you simply inverted the fields names.In your $let, you have access to the assessmentNotice fields, but you use $data.referenceNumber. The field data is in the measurementNotice documents. What would make sense with the sample documents would beandBut when your pipeline: $match is a simple equality like yours, it would be better to use",
"username": "steevej"
},
{
"code": "",
"text": "Hi Steeve Juneau,I assumed that I had access to the measurement fields in the $lookup because of the “from”.Thank you for your time.",
"username": "Marcus_Baitz"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Joining two Collections with $lookup and nested Object | 2023-10-28T19:34:28.085Z | Joining two Collections with $lookup and nested Object | 163 |
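Spelled out in full, the simpler equality-join form that steevej recommends above looks like this, using the same collections and fields as the Playground example in the thread:

```javascript
db.assessmentNotice.aggregate([
  {
    $lookup: {
      from: "measurementNotice",
      localField: "referenceNumber",        // field on assessmentNotice
      foreignField: "data.referenceNumber", // nested field on measurementNotice
      as: "lookup",
    },
  },
]);
```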
null | [
"auckland-mug"
] | [
{
"code": "",
"text": "Welcome to the Auckland MongoDB User Group!We are delighted to announce MUG number 5 on 14-November.The team at Figured has kindly offered to host us for the evening at their offices on Fanshawe Street. Complimentary food and drinks as well as some SWAG will be available, proudly sponsored by the team at MongoDB.Agenda\n5:00pm - Arrivals\n5:30pm - Talks Start\n5:35pm - Kevin Norris: Joyous’ journey with MongoDB\n5:55pm - Q&A/Discussion\n6:05pm - Jeremy Bruce: Atlas Streams Processing Lightning Talk\n6:15pm - Kahoot\n6:30pm - Networking and Food\n7:30pm - CloseEvent Type: In-Person\nLocation: Figured, 9 Fanshawe Street · AucklandSpeakers\nKevin Norris - Kevin is the Co-Head of Engineering at Joyous, an Auckland based company that provides a workforce feedback product which builds human relationships and aligns people around common challenges. Joyous is built on MongoDB, and Kevin will be talking about their architecture - their history with MongoDB and the main benefits they get from using it over other databases.Jeremy Bruce - Jeremy is the Data & Insights Principal at Figured, who have recently had the opportunity to participate in a Private Preview of MongoDB’s new Atlas Stream Processing product. Jeremy will be talking a little bit about what it is, how Figured is planning to use it and what the experience has been like so far.Thanks,\nThe Auckland MUG Team!",
"username": "Jake_McInteer"
},
{
"code": "",
"text": "Looking forward to this one! Can’t wait to learn some more about the stream processing ",
"username": "Adam_Holt"
}
] | Auckland MongoDB User Group #5 | 2023-10-24T00:59:45.189Z | Auckland MongoDB User Group #5 | 503 |
null | [
"aggregation"
] | [
{
"code": "{\n ...\n problems:\n [ { ... id: 1, ...}, { ... id: 7, ...}, { ... id: 122, ...} ],\n solutions:\n [ { ... assignedProblems: [1, 7] ... }, { ... assignedProblems: [122] ...} ],\n ... \n}\nassignedProblemsidproblems{\n $unwind: {\n path: \"$solutions\"\n },\n $lookup: {\n from: \"problems\",\n localField: \"solutions.assignedProblems\",\n foreignField: \"id\",\n as: \"problems\"\n }\n}\n\n$match: {\n $expr: {\n $setEquals: [\"$solutions.assignedProblems\", \"$problems.id\"]\n }\n }\n",
"text": "I have searched around this problem I am facing for a bit but I haven’t found a good solution without using the $lookup parameter.My Documents currently roughly look like this:I am now trying to match the assignedProblems array with the id in problems.My current aggregation looks like this:This aggregation works flawlessly but is painfully slow, even when using indexes.I have tried to replace the $lookup with following:But this does only work if the amount of problems is the same as the elements in the assignedProblems array. This is the same case when using the $eq operator.Is there any other way to match these id’s with the array?",
"username": "Adrian_Bit"
},
{
"code": "",
"text": "Could you try the $map function? I’m away from a computer this weekend so cannot play about with it.You say it’s slow, how many document are in the collection and average sizes of problems and solutions?",
"username": "John_Sewell"
},
{
"code": "$lookup $addFields: {\n problems: {\n $filter: {\n input: \"$problems\",\n cond: {\n $in: [\"$$this.id\", \"$solutions.assignedProblems\"]\n }\n }\n }\n }\nproblemsassignedProblems",
"text": "I have found a solution to what I was trying to do.I replaced the $lookup stage to this one:This replaces the problems array with a new one that only includes the problems objects that have an id element that is in the assignedProblems array.",
"username": "Adrian_Bit"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Matching array against value in nested documents | 2023-10-28T11:58:13.278Z | Matching array against value in nested documents | 133 |
null | [
"installation"
] | [
{
"code": "id mongodb\nuid=116(mongodb) gid=65534(nogroup) groups=65534(nogroup),122(mongodb)\n",
"text": "Hi ,I have 2 questions.\ni) I have installed MongoDB on Ubuntu 20.04 using apt\nMongoDB installI see the below permissions . Are they correct?ii) I installed another MongoDB manually on Ubuntu 20.04 and created MongoDB data folder as\n/mongodb/replsetname/data\nWhat should be the permissions on the above directory structure for MongoDB user\nchown -R mongodb:mongodb /mongodb -------> is this correct\nchmod -R 755 /mongodb -------> is this correctThanks\n{“Satya”}",
"username": "Satya_Oradba"
},
{
"code": "",
"text": "Hi Team,@Jack_Woehr @chris @Britt_Snyman\nCan someone please respond to the above questions.Thanks\n{“Satya”}",
"username": "Satya_Oradba"
},
{
"code": "",
"text": "What are the permissions on the data directory on the system where you installed automatically?",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Thanks @Jack_Woehr for the update.What are the permissions on the data directory on the system where you installed automatically?ls -lrt /var/lib/mongodb\ntotal 340\n-rw------- 1 mongodb mongodb 21 Jun 9 06:48 WiredTiger.lock\n-rw------- 1 mongodb mongodb 50 Jun 9 06:48 WiredTiger\n-rw------- 1 mongodb mongodb 114 Jun 9 06:48 storage.bson\ndrwx------ 2 mongodb mongodb 4096 Jun 9 06:52 testdb\ndrwx------ 2 mongodb mongodb 4096 Jun 9 06:55 local\ndrwx------ 2 mongodb mongodb 4096 Jun 9 06:55 config\ndrwx------ 2 mongodb mongodb 4096 Jun 9 06:55 admin\ndrwx------ 2 mongodb mongodb 4096 Oct 16 01:23 journal\ndrwx------ 2 mongodb mongodb 4096 Oct 16 03:43 diagnostic.data\n-rw------- 1 mongodb mongodb 1796 Oct 16 03:43 WiredTiger.turtle\n-rw------- 1 mongodb mongodb 32768 Oct 16 03:43 WiredTigerHS.wt\n-rw------- 1 mongodb mongodb 36864 Oct 16 03:43 sizeStorer.wt\n-rw------- 1 mongodb mongodb 36864 Oct 16 03:43 _mdb_catalog.wt\n-rw------- 1 mongodb mongodb 192512 Oct 16 03:43 WiredTiger.wt\n-rw------- 1 mongodb mongodb 0 Oct 16 03:43 mongod.lockls -lrt /var/log/mongodb\ntotal 97948\n-rw------- 1 mongodb mongodb 100292917 Oct 16 03:43 mongod.logIt is clear from the above output that just the last directory named “mongodb” in side /var/lib/mongodb and /var/log/mongodb are owned by the OS user “mongodb”My query is incase I create a /mongodb/replsetname/data\nWhat should be the permissions for MongoDB user\nchown -R mongodb:mongodb /mongodb -------> is this correct\nchmod -R 755 /mongodb -------> is this correctWhat I mean\ncan mongodb user own whole /mongodb, instead of having permissions on the last sub directory as seen in default installation\nIs there any harm, if mongodb OS user owns whole /mongodbThanks\n{“Satya”}",
"username": "Satya_Oradba"
},
{
"code": "chownchmod",
"text": "chown -R mongodb:mongodb /mongodb -------> is this correct\nchmod -R 755 /mongodb -------> is this correctchown’ing the whole data directory to mongodb should be correct.\nchmod’ing everything 755 is wrong.At least, that’s the way it is in my installation.PLEASE NOTE the following: https://www.mongodb.com/docs/manual/reference/configuration-options/#mongodb-setting-storage.dbPathWhen one asks questions of this sort, it’s usually an indication that one is trying to do something that one shouldn’t be doing ",
"username": "Jack_Woehr"
},
{
"code": "chownchmod",
"text": "chown’ing the whole data directory to mongodb should be correct.\nchmod’ing everything 755 is wrong.At least, that’s the way it is in my installation.PLEASE NOTE the following: https://www.mongodb.com/docs/manual/reference/configuration-options/#mongodb-setting-storage.dbPath Thanks @Jack_Woehr for the update.When one asks questions of this sort, it’s usually an indication that one is trying to do something that one shouldn’t be doing I respect the gray hairs Thanks\n{“Satya”}",
"username": "Satya_Oradba"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB installation Default and Manual on Ubuntu 20.04 | 2023-10-15T17:35:10.561Z | MongoDB installation Default and Manual on Ubuntu 20.04 | 326 |
null | [] | [
{
"code": "development.realmdefault.realmdevelopment.realmdefault.realmimport RealmSwift\n\nstruct MyView: View {\n // var realm = RealmService.shared.realm // I tried adding this, just to test, but it didn't work\n @ObservedResults(SomeObject.self, where: {$0.isValid}) var validObject\n // ...\n}\n",
"text": "This is a followup question to a previous post I wrote here: Can't use anything other than default.realm with SwiftUI - #2 by JayThe setup is that I have a development.realm that I use for development, and a default.realm that I use when I open the app not during development.Following @Jay response I managed to get things updated in the development.realm, which I wasn’t doing properly before.The problem now is that I have a bunch of views reading from the realm, and they all seem to be using default.realm.My code looks something like this:I guess I must be missing some initialization somewhere, but I have no clue where that could be. I’ve already gone through this other conversation, but it didn’t help my solve it: How do you change the default configuration for @ObservedResults()? - #12 by JayThanks!",
"username": "Daniel_Gonzalez_Reina"
},
{
"code": " @ObservedResults(SomeObject.self, where: ...default.realmvar config = Realm.Configuration.defaultConfiguration\nconfig.fileURL!.deleteLastPathComponent()\nconfig.fileURL!.appendPathComponent(\"jays_cool\")\nconfig.fileURL!.appendPathExtension(\"realm\")\nRealm.Configuration.defaultConfiguration = config\nlet realm = try! Realm()\nlet config = realm.configuration\nprint(config.fileURL)\n.../jays_cool.realm)ListView().environment(\\.realm, realm)\nLocalOnlyContentView().environment(\\.realmConfiguration, Realm.Configuration(some config))\n",
"text": "I am not sure there’s enough code in the question to troubleshoot the issue as we don’t know what was done with Realm before that line of code @ObservedResults(SomeObject.self, where: ...If you don’t specify a configuration for Realm, it uses the default configuration, which would include using the default.realm file. One suggestion in a SwiftUI environment is that if you want to specify a certain file, you do that globally by defining what the defaultConfig is, and then future calls to Realm know to use that file.For example, early in your apps code if you do thislater on, elsewhere in your app, doing thiswould output.../jays_cool.realm)that in turn would be that all proper wrappers, like @ObservedResults would use that configured realm as well.While my solution (linked in the question) to use a singleton/global RealmService is another option, in SwiftUI you can also explicitly define the Realm configuration. Then that Realm or config can be injected as a .environment value, or even inject the .realmConfiguration instead.So start here Open a Realm with a Configuration noting thisWhen you do not specify a configuration, these property wrappers use the defaultConfiguration. You can set the defaultConfiguration globally, and property wrappers across the app can use that configuration when they implicitly open a realm.Then, in SwiftUI you can inject a realm that you opened in another SwiftUI view into a view as an environment value. The property wrapper (@ObservedResults for example) uses this passed-in realm to populate the view:or inject the realm configuration",
"username": "Jay"
},
{
"code": "import Foundation\nimport RealmSwift\n\nclass RealmService {\n private init() {}\n\n static let shared = RealmService()\n\n lazy var realm: Realm = {\n var config = Realm.Configuration.defaultConfiguration\n\n #if DEBUG\n let developmentRealmLocation = config.fileURL!.deletingLastPathComponent().appendingPathComponent(\"development\").appendingPathExtension(\"realm\")\n\n config.fileURL = developmentRealmLocation\n #endif\n\n Realm.Configuration.defaultConfiguration = config\n let realm = try! Realm.init(configuration: config)\n return realm\n }()\n}\n\nRealm.Configuration.defaultConfiguration = configconfiglet realm = try! Realm.init()",
"text": "Thanks for the fast response again Jay. Setting the global configuration worked.Here’s the final singleton code in case it’s useful for someone else:I just had to add the line Realm.Configuration.defaultConfiguration = config once config is defined.I’m guessing I could just call let realm = try! Realm.init() in the next line given that I’ve already set the config globally, but it’s not hurting being explicit there as well.Thanks!",
"username": "Daniel_Gonzalez_Reina"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | @ObservedResults always connects to the default realm | 2023-10-27T16:29:30.954Z | @ObservedResults always connects to the default realm | 175 |
[
"crud"
] | [
{
"code": "POST https://us-east-1.aws.data.mongodb-api.com/app/data-yyumk/endpoint/data/v1/action/updateMany\n{\n\t\"collection\": \"products\",\n\t\"database\": \"floret-platform\",\n\t\"dataSource\": \"{{ _.API.DATA.SOURCE }}\",\n\t\"filter\": {\n\t\t\"organization._id\": { \"$oid\": \"636d28d827ae88d5420774b8\" }\n\t},\n\t\"update\": {\n\t\t\"$set\": { \"organization._id\": { \"$oid\": \"636d28d827ae88d5420774b8\" } }\n\t}\n}\n",
"text": "When using the data api it seems to be extremely slow when running an updateMany operation. An updateMany call that updates 589 records about 17 seconds when that same update through mongo shell or using the mongo sdk takes a couple milliseconds at most. At over 30k records the data api becomes useless as all update requests time out. I don’t seem to have this issue with any other operation. I would expect the data api would have similar performance to using the sdk or shell directly aside from receiving, parsing and responding to the HTTP request.Here is my REST request if it helps:Here is the log entry of the request from Atlas:\n\nScreenshot 2023-06-20 at 12.05.14 AM2552×1268 220 KB\n",
"username": "Christopher_Thaw"
},
{
"code": "",
"text": "Hoping mdb engineers can chime in on this, is this still in beta?",
"username": "Rishi_uttam"
}
] | The Data API seems to be extremely slow to update many documents | 2023-06-20T04:02:52.120Z | The Data API seems to be extremely slow to update many documents | 930 |
|
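For comparison, here is a sketch of the driver-side call the author of the thread above is measuring against, reconstructed from the REST request shown in that thread; the connection string and surrounding wiring are assumptions.

```javascript
const { MongoClient, ObjectId } = require("mongodb");

async function run() {
  const client = new MongoClient(process.env.MONGODB_URI);
  try {
    const products = client.db("floret-platform").collection("products");
    const orgId = new ObjectId("636d28d827ae88d5420774b8");

    const result = await products.updateMany(
      { "organization._id": orgId },
      { $set: { "organization._id": orgId } }
    );
    console.log(`modified ${result.modifiedCount} documents`);
  } finally {
    await client.close();
  }
}

run().catch(console.error);
```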
[
"data-api"
] | [
{
"code": "",
"text": "I am trying to call data-api from postman, it’s taking nearly 3 seconds to give response. Is it the normal case? Because querying through connection URL is a lot more faster than this. Will data Api give the same performance as we get from connection URL or it’s slower than the other method.Following are the details of my cluster :\n\nScreenshot 2022-05-05 1603001618×458 42.9 KB\n\nWhatsApp Image 2022-05-05 at 4.12.29 PM1280×339 20.6 KB\n",
"username": "VIKASH_KUMAR_SHUKLA"
},
{
"code": "",
"text": "There is a lot more going on with Data API compared to a driver connection.So a lot more is going on.Data API is very convenient because it is a layer of abstraction where you do not have to manage an application server. But it comes at a performance cost. You have to be aware of that.",
"username": "steevej"
},
{
"code": "",
"text": "Hi @VIKASH_KUMAR_SHUKLA welcome to the community!I would also like to point out that the Data API is still very much in Preview/Beta at this moment, so things might change for the better in the future.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hi Vikash - I’ll also jump in and add that we have some improvements planned that will make this much faster, including some optimizations internally and the ability to externally choose a better deployment model fit for your region and cloud provider.\nCurrently I can see you’ve deployed your cluster on GCP which may cause some latency since the API is default hosted on AWS which may lead to some cross-cloud latency.",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "Thank you Sumedha, I got your point. Now the problem is that we have deployed our Node Js (Back End) + React Js (Front End) on Google Cloud Platform. Also, our mongoDB cluster is deployed on GCP to reduce any latency. My questions are :",
"username": "VIKASH_KUMAR_SHUKLA"
},
{
"code": "",
"text": "Hi, is the data api still in beta.Was hoping to be able to use this in production, we too are seeing 4-5 seconds delay (calling from a cloudflare worker)We may shift some of our data storage to cloudflare d1 or kv stores… really looking for data to be stored at the edge.",
"username": "Rishi_uttam"
}
] | Data API is slow | 2022-05-05T11:06:00.266Z | Data API is slow | 5,110 |
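If you want to put a number on the latency discussed in the thread above, here is a quick sketch for timing a single Data API round trip from Node.js 18+ (global fetch). The endpoint URL, data source, database, collection and API key are all placeholders.

```javascript
async function timeDataApiCall() {
  const url =
    "https://data.mongodb-api.com/app/<app-id>/endpoint/data/v1/action/findOne";
  const started = Date.now();

  const res = await fetch(url, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "api-key": process.env.DATA_API_KEY,
    },
    body: JSON.stringify({
      dataSource: "Cluster0",
      database: "test",
      collection: "test",
      filter: {},
    }),
  });
  await res.json();
  console.log(`round trip: ${Date.now() - started} ms`);
}

timeDataApiCall().catch(console.error);
```

Running the same measurement from a host near the Data API's deployment region and from your own region makes the cross-cloud/cross-region component of the delay visible.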
|
null | [
"replication",
"golang"
] | [
{
"code": "{serverStatus: 1}",
"text": "We’re using mongo-driver to connect to a replicaset and we’re able to run the {serverStatus: 1} command against the primary node (hooray!). serverStatus reports different data though depending on if you are on a primary vs a secondary (e.g. opcountersRepl is only set when querying a secondary).\nWe believe we can use readpreference/readconcern to deterministically connect to a primary node. We believe switching the readconcern will then allow us to non-deterministically connect to one of our secondary nodes, but is there some way to explicitly connect to a given node in the replicaSet without establishing a new direct connection to each?\ne.g. is there somewhere in the mongo.Client or a driver cache somewhere which holds the nodes which have been connected to? Is there some method for targeting each node?",
"username": "Topher_Sterling"
},
{
"code": "",
"text": "Hi @Topher_Sterling,The MongoDB drivers are intended to route read requests to replica set members based on the configured Read Preference. Typically it “doesn’t matter” which node is targeted (as long as the read preference tags and preference are met) so direct control over which host is selected for a command isn’t exposed. Drivers continually monitor deployments so that if the topology changes (a node goes offline), operations can continue to be automatically routed appropriately to satisfy the read preference without users needing to be aware of the state of the cluster.As you’ve correctly identified, if there’s a need to consistently target a single host in a replica set via the Go driver, using a Direct Connection would be the way to accomplish this.",
"username": "alexbevi"
}
] | After connecting to a replicaset, is there a way to target each member with the same query? | 2023-10-27T18:44:15.024Z | After connecting to a replicaset, is there a way to target each member with the same query? | 144 |
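A sketch of the direct-connection approach alexbevi describes above. The thread is about the Go driver, but the same directConnection URI option exists across drivers; this illustration uses the Node.js driver and placeholder host names.

```javascript
const { MongoClient } = require("mongodb");

const members = ["host1:27017", "host2:27017", "host3:27017"]; // placeholders

async function serverStatusPerMember() {
  for (const host of members) {
    const client = new MongoClient(`mongodb://${host}/?directConnection=true`);
    await client.connect();
    const status = await client.db("admin").command({ serverStatus: 1 });
    // opcountersRepl is only populated when the targeted member is a secondary
    console.log(host, status.opcountersRepl ?? status.opcounters);
    await client.close();
  }
}

serverStatusPerMember().catch(console.error);
```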
null | [
"replication",
"compass",
"storage"
] | [
{
"code": " logAppend: true\n path: /var/log/mongodb/mongod.log\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 127.0.0.1, 172.20.10.4\n\n\n# how the process runs\nprocessManagement:\n timeZoneInfo: /usr/share/zoneinfo\n\n#security:\n\n#operationProfiling:\n\nreplication:\n replSetName: \"rs0\"\n#sharding:\n\n## Enterprise-Only Options:\n\n#auditLog:\n\n#snmp:\n# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# Where and how to store data.\nstorage:\n dbPath: /var/lib/mongodb\n journal:\n enabled: true\n# engine:\n# wiredTiger:\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 127.0.0.1, 172.20.10.10\n\n\n# how the process runs\nprocessManagement:\n timeZoneInfo: /usr/share/zoneinfo\n\n#security:\n\n#operationProfiling:\n\nreplication:\n replSetName: \"rs0\"\n\n#sharding:\n\n## Enterprise-Only Options:\n\n#auditLog:\n\n#snmp:\n# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# Where and how to store data.\nstorage:\n dbPath: /var/lib/mongodb\n journal:\n enabled: true\n# engine:\n# wiredTiger:\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 127.0.0.1, 172.20.10.11\n\n\n# how the process runs\nprocessManagement:\n timeZoneInfo: /usr/share/zoneinfo\n\n#security:\n\n#operationProfiling:\n\nreplication:\n replSetName: \"rs0\"\n#sharding:\n\n## Enterprise-Only Options:\n\n#auditLog:\n\n#snmp:\n> rs.initiate(\n... {\n... _id: \"rs0\",\n... members: [\n... { _id: 0, host: \"172.20.10.4\" },\n... { _id: 1, host: \"172.20.10.10\" },\n... { _id: 2, host: \"172.20.10.11\" }\n... ]\n... 
})\nrs0:PRIMARY> rs.status()\n{\n\t\"set\" : \"rs0\",\n\t\"date\" : ISODate(\"2023-10-27T16:39:16.746Z\"),\n\t\"myState\" : 1,\n\t\"term\" : NumberLong(1),\n\t\"syncSourceHost\" : \"\",\n\t\"syncSourceId\" : -1,\n\t\"heartbeatIntervalMillis\" : NumberLong(2000),\n\t\"majorityVoteCount\" : 2,\n\t\"writeMajorityCount\" : 2,\n\t\"votingMembersCount\" : 3,\n\t\"writableVotingMembersCount\" : 3,\n\t\"optimes\" : {\n\t\t\"lastCommittedOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1698424756, 1),\n\t\t\t\"t\" : NumberLong(1)\n\t\t},\n\t\t\"lastCommittedWallTime\" : ISODate(\"2023-10-27T16:39:16.726Z\"),\n\t\t\"readConcernMajorityOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1698424756, 1),\n\t\t\t\"t\" : NumberLong(1)\n\t\t},\n\t\t\"appliedOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1698424756, 1),\n\t\t\t\"t\" : NumberLong(1)\n\t\t},\n\t\t\"durableOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1698424746, 1),\n\t\t\t\"t\" : NumberLong(1)\n\t\t},\n\t\t\"lastAppliedWallTime\" : ISODate(\"2023-10-27T16:39:16.726Z\"),\n\t\t\"lastDurableWallTime\" : ISODate(\"2023-10-27T16:39:06.725Z\")\n\t},\n\t\"lastStableRecoveryTimestamp\" : Timestamp(1698424706, 1),\n\t\"electionCandidateMetrics\" : {\n\t\t\"lastElectionReason\" : \"electionTimeout\",\n\t\t\"lastElectionDate\" : ISODate(\"2023-10-27T16:07:36.459Z\"),\n\t\t\"electionTerm\" : NumberLong(1),\n\t\t\"lastCommittedOpTimeAtElection\" : {\n\t\t\t\"ts\" : Timestamp(1698422845, 1),\n\t\t\t\"t\" : NumberLong(-1)\n\t\t},\n\t\t\"lastSeenOpTimeAtElection\" : {\n\t\t\t\"ts\" : Timestamp(1698422845, 1),\n\t\t\t\"t\" : NumberLong(-1)\n\t\t},\n\t\t\"numVotesNeeded\" : 2,\n\t\t\"priorityAtElection\" : 1,\n\t\t\"electionTimeoutMillis\" : NumberLong(10000),\n\t\t\"numCatchUpOps\" : NumberLong(0),\n\t\t\"newTermStartDate\" : ISODate(\"2023-10-27T16:07:36.548Z\"),\n\t\t\"wMajorityWriteAvailabilityDate\" : ISODate(\"2023-10-27T16:07:37.685Z\")\n\t},\n\t\"members\" : [\n\t\t{\n\t\t\t\"_id\" : 0,\n\t\t\t\"name\" : \"172.20.10.4:27017\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 1,\n\t\t\t\"stateStr\" : \"PRIMARY\",\n\t\t\t\"uptime\" : 2348,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1698424756, 1),\n\t\t\t\t\"t\" : NumberLong(1)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2023-10-27T16:39:16Z\"),\n\t\t\t\"lastAppliedWallTime\" : ISODate(\"2023-10-27T16:39:16.726Z\"),\n\t\t\t\"lastDurableWallTime\" : ISODate(\"2023-10-27T16:39:06.725Z\"),\n\t\t\t\"syncSourceHost\" : \"\",\n\t\t\t\"syncSourceId\" : -1,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"electionTime\" : Timestamp(1698422856, 1),\n\t\t\t\"electionDate\" : ISODate(\"2023-10-27T16:07:36Z\"),\n\t\t\t\"configVersion\" : 1,\n\t\t\t\"configTerm\" : 1,\n\t\t\t\"self\" : true,\n\t\t\t\"lastHeartbeatMessage\" : \"\"\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 1,\n\t\t\t\"name\" : \"172.20.10.10:27017\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 2,\n\t\t\t\"stateStr\" : \"SECONDARY\",\n\t\t\t\"uptime\" : 1910,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1698424746, 1),\n\t\t\t\t\"t\" : NumberLong(1)\n\t\t\t},\n\t\t\t\"optimeDurable\" : {\n\t\t\t\t\"ts\" : Timestamp(1698424746, 1),\n\t\t\t\t\"t\" : NumberLong(1)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2023-10-27T16:39:06Z\"),\n\t\t\t\"optimeDurableDate\" : ISODate(\"2023-10-27T16:39:06Z\"),\n\t\t\t\"lastAppliedWallTime\" : ISODate(\"2023-10-27T16:39:16.726Z\"),\n\t\t\t\"lastDurableWallTime\" : ISODate(\"2023-10-27T16:39:16.726Z\"),\n\t\t\t\"lastHeartbeat\" : ISODate(\"2023-10-27T16:39:15.593Z\"),\n\t\t\t\"lastHeartbeatRecv\" : ISODate(\"2023-10-27T16:39:15.004Z\"),\n\t\t\t\"pingMs\" : 
NumberLong(1),\n\t\t\t\"lastHeartbeatMessage\" : \"\",\n\t\t\t\"syncSourceHost\" : \"172.20.10.4:27017\",\n\t\t\t\"syncSourceId\" : 0,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"configVersion\" : 1,\n\t\t\t\"configTerm\" : 1\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 2,\n\t\t\t\"name\" : \"172.20.10.11:27017\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 2,\n\t\t\t\"stateStr\" : \"SECONDARY\",\n\t\t\t\"uptime\" : 1910,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1698424746, 1),\n\t\t\t\t\"t\" : NumberLong(1)\n\t\t\t},\n\t\t\t\"optimeDurable\" : {\n\t\t\t\t\"ts\" : Timestamp(1698424746, 1),\n\t\t\t\t\"t\" : NumberLong(1)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2023-10-27T16:39:06Z\"),\n\t\t\t\"optimeDurableDate\" : ISODate(\"2023-10-27T16:39:06Z\"),\n\t\t\t\"lastAppliedWallTime\" : ISODate(\"2023-10-27T16:39:16.726Z\"),\n\t\t\t\"lastDurableWallTime\" : ISODate(\"2023-10-27T16:39:16.726Z\"),\n\t\t\t\"lastHeartbeat\" : ISODate(\"2023-10-27T16:39:15.593Z\"),\n\t\t\t\"lastHeartbeatRecv\" : ISODate(\"2023-10-27T16:39:15.058Z\"),\n\t\t\t\"pingMs\" : NumberLong(0),\n\t\t\t\"lastHeartbeatMessage\" : \"\",\n\t\t\t\"syncSourceHost\" : \"172.20.10.4:27017\",\n\t\t\t\"syncSourceId\" : 0,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"configVersion\" : 1,\n\t\t\t\"configTerm\" : 1\n\t\t}\n\t],\n\t\"ok\" : 1,\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1698424756, 1),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n\t\t\t\"keyId\" : NumberLong(0)\n\t\t}\n\t},\n\t\"operationTime\" : Timestamp(1698424756, 1)\n}\nmongodb://172.20.10.4,172.20.10.10,172.20.10.11/?replicaSet=rs0",
"text": "Hi, I am trying to connect from my localhost computer to my 3 VM replica set nodes using Compass but I am getting ‘connection timed out’.\nHere is my /etc/mongod.conf files:\n1.Here is how I created the replica set:Here is the result of command - rs.status():Here is the string that I am using when connecting with Compass:\nmongodb://172.20.10.4,172.20.10.10,172.20.10.11/?replicaSet=rs0",
"username": "Tudor_Nicula"
},
{
"code": "sudo ufw allow from localhostul-ip to any port 27017",
"text": "Guys I find the problem, check the ufw rules if you activated ufw for your VM’s, make sure you allow connections for your localhost on port 27017. sudo ufw allow from localhostul-ip to any port 27017",
"username": "Tudor_Nicula"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Cannot connect to MongoDB replica set cluster using Compass on localhost | 2023-10-27T16:43:25.231Z | Cannot connect to MongoDB replica set cluster using Compass on localhost | 154 |
null | [
"flutter"
] | [
{
"code": "",
"text": "Good morning/Good afternoon/Good evening everyoneFirstly, I apologize for the inconvenience.I have a question regarding the use of Realm and what would be the best way to work with it.We currently have a project that uses Realm to use data in real time and this same app uses the offline first strategy.We currently have an issue with the real-time update feature.\nAs we follow the clean arch structure, we use a mapper that transforms the Realm object into an application object and we trigger the information using the streams feature to propagate the records, but as I said previously, we lose the real-time update feature, After changing the project to use Realm objects directly, we no longer had this problem.Then I had some doubts regarding the use of Realm in the project, they would be as follows:1 - is it mandatory to work directly with Realm objects to be able to have the real-time synchronization feature?\nAs we use the concept of datasources, if we need to change the database it would be easier to change.\nFollowing the strategy I mentioned, could you not work directly with the Realm object directly in the project?2 - for every query that I need to have the real-time resource, is it necessary to register the same query again?\nIs there any other type of strategy to have this functionality?Thank you very much for everyone’s support.",
"username": "Fernando_da_Costa_Felicio"
},
{
"code": "",
"text": "1 - is it mandatory to work directly with Realm objects to be able to have the real-time synchronization feature?Why wouldn’t you want to do that?As we use the concept of datasourcesA Realm Results objects is a fine datasource. As the database changes, so do the results, and via notifications, the UI can be updated as well.could you not work directly with the Realm object directly in the projectNot sure I understand the question. What does “work directly” mean code-wise?for every query that I need to have the real-time resource, is it necessary to register the same query again?That depends. A results object reflects the state of the underlying data and running a query is what populates that results object. Suppose a query is run to show all restaurants with one mile of your location. As the app is running and you move about your city, restaurants meet that query will be added (within 1 mile) and restaurants will be removed that fall outside the query parameters (outside 1 mile). If the results are stored at a high level within your app, as an app-level var or within the main class app for instance, that Results object will not go out of scope so it would not need to be re-registered.",
"username": "Jay"
},
{
"code": "",
"text": "Good afternoon!Thank you very much for responding.Regarding the doubts that I left and that generated more doubts for you, I will leave them below:We recently had a meeting with our Solutions Architect and got into a debate about the options for data consumption tools in the application that is under development (it is currently already being integrated with Realm due to our backend services using the resources of MongoDB).One of the questions he asked us was the following:\nTo maintain information reactively to changes, how does Realm work?\nWe explained to him that, for it to work correctly, it would be necessary to work only with the objects provided by Realm, that is, an object specific to the app could not be used to have realtime functionality.\nIn the first attempt to use it, we used the concept of data source mappers, created an app’s own object and made the necessary transformations to use the returned data (Realm object mapper for application object) and propagate it through streams.\nWe did it this way, so that if the data source was changed (changing from Realm to an api, or Firebase for example), there would be a change only in the datasource, that is, it would not affect the other layers below.\nOn the day of the Realm presentation with Flutter (Youtube/LinkedIn), I asked a question related to this and based on the answer, I was asked to use the Realm object directly in the project.We currently had a problem regarding some business rules between the application and the web system (which work integrated), which involved the real-time update feature.\nWe were asked why we were working directly with Realm objects in the project, where we explained that to have the real-time synchronization feature, it would be necessary to use it in the way indicated in the documentation and indicated by the library’s development team.We explained to him that at the beginning of the project, we used the concept of mappers, but over time we saw that we lost the ability for data to be updated in real time.\nAs we changed the project to transport Realm objects, the feature worked correctly, that is, the data linked to the Realm object is transmitted to the interface (UI), domain layers (usecases and repositories), data (repository implementation) and datasource (Realm).About your question:Not sure I understand the question. What does “work directly” mean code-wise?Is it really mandatory to work only with Realm objects to have real-time synchronization?\nWhy do I ask this question?\nAs our architect asked about how to use Realm in the project and our explanation (to have the realtime feature it is necessary to work directly with the objects provided by the library), we were asked if this would not generate high coupling in the project?\nIf the data source were changed (api or another library for example), we would need to remove the Realm usage references from the project, and only then carry out the data consumption operations from this new data source.Again, thank you very much for responding and I’m sorry for leaving such big questions, as I’m new to using Realm and Flutter, I had these doubts.\nAs there was an indication to ask questions about Realm (through Github), I came to leave these questions.I thought it would be better to ask the creators about the correct way to work with the library, rather than deducing what can be done and not having an assertive answer.",
"username": "Fernando_da_Costa_Felicio"
},
{
"code": "",
"text": "Thanks for the additional information. I will match your big question with a big answer This meanders a little but really comes down to whether there is a need to decouple the data, a and if so, at what cost.(skip to TL;DR to avoid this wall of text)After coding for 45+ years now, I ‘get’ the concept and purpose of decoupling your project code from your datasource, and have done it many times.Back in the day, when we had different flavors of SQL, it made sense. However, in many cases today that concept is very difficult to put into practice unless you make everything as ‘generic’ as possible when then bypasses the features of the database you’ve selected.In many cases a database is not just a dumb datastore - databases are selected because of how it works with that data and the additional features that database provides - like Realms “live” objects.I am intimately familiar with Firebase as well as Realm so allow me to provide an example.Firebase and Realm are radically different. Right up front, Realm is an offline first database which provides syncing. Firebase on the other hand is a very much online-first database with some offline persistence (to handle brief interruptions in connection status)Firebase has no concept of ‘live’ objects - the data is static unless there’s an update from the server (or an event). Whereas Realm objects are live and dynamic, always reflecting the status of the underlying data regardless of where that change came from.So, what does that mean?If you’ve selected Realm as your database and craft an app around the capability of “live” (i.e. dynamic) objects, you’re going to be hard pressed to adapt that to Firebase as just doesn’t do that. It would require total re-think of the app logic as well as a re-write of pretty much everything dealing with that data.More importantly, when designing an app (with Realm), that design will leverage features like forward and inverse relationships, embedded objects, and the queries, objects and schema will reflect that functionality. Firebase doesn’t have relationships, schema (it’s schema-less!) or even objects! It’s a totally different NoSQL approach to working with data. Again, it would require a massive do-over.My messaging is this - if separation of code from the underlying database is a requirement, it can be done. Look at Apple’s CoreData - it can be backed by a number of underlying sources. But… look at CoreData, it’s a massive effort! I don’t think I would want to build entire persistance framework into my project ‘just in case’ we switch data sources - even at that, which datasources? There are a lot.For clarity. IMO Firebase is an awesome product, so is Realm. They serve different purposes and have radically different implementations.I have transitioned many projects from and to a variety of databases and in general, if the datasource is changed, no matter how much effort was put into decoupling it, it’s a massive re-write and an many cases starting over; re-thinking everything about that data, relationships etc. Even though Realm is backed by NoSQL, the Realm front end is a very different approach to NoSQL than say Firebase Firestore is.TL;DRMy suggestion is to think about the features you want and select a database based on that. Then, create a basic To-Do app - keep it simple, implement Adding and persisting ToDo’s, edit them and delete. Create a ‘relationship’ from one to do to another. Then, port it to a different database.That process is often very revealing with a minimal amount of effort and time. 
It will help you gauge the complexity of the layer needed to decouple your code from the datasource or if you want to leverage the features (Realm) offers.Is it really mandatory to work only with Realm objects to have real-time synchronization?Yes. That’s one of the features Realm offers. If you can do without that feature or implement it in some other fashion, that would make it a bit easier to move to another platform if necessary.",
"username": "Jay"
},
{
"code": "",
"text": "Good morning @Jay!Again, thank you very much for responding.I’m still very inexperienced, but with your explanation, I was able to understand very well the purpose and how Realm helps.I’m sorry for extending the conversation too long.\nHere in Brazil, teams use Firebase resources more (including me hahaha), and the way Realm works differs a lot from what I was used to.\nRealm’s performance is actually much better than other services that exist (Firebase, Supabase), and as there was this question from the leadership, I came here to seek help and be able to better explain how Realm supports us.I really apologize for bothering you and again, thank you very much for giving your time to help me and answer my questions.",
"username": "Fernando_da_Costa_Felicio"
},
{
"code": "",
"text": "My pleasure to help!One other thing to note that may help in your discussions about Realm is the memory footprint.Realm is incredibly memory friendly - huge datasets can be accessed and managed without really worrying about overloading a devices memory. Objects are lazily loaded and as long as you stick with with Realm functions and avoid high level Swift functions, the memory footprint is low impact.Many other databases don’t handle data in the same way; Firebase for example relies on pagination to handle large datasets in the device - while that’s not a bad thing, it just doesn’t work in the same fashion as Realm so that adds another layer of complexity in attempting to create a generic environment.Realm’s performance is actually much better than other services that exist (FirebaseI am very surprised that Firebase doesn’t outperform other databases. It’s a crazy-fast platform… however, Realm’s advantage is that is pulls data locally (it’s a local-first database!) and local data is always faster than the internet. Again, you’re selecting Realm because it’s performant and has features you want to use.",
"username": "Jay"
}
] | Questions about the correct way to use Realm and the Realtime functionality | 2023-10-24T20:38:09.999Z | Questions about the correct way to use Realm and the Realtime functionality | 268 |
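To make the "live Results" idea from Jay's answers above concrete, here is a small sketch using the Realm Node.js SDK purely for illustration; the Flutter SDK this thread is about exposes the same concept through change streams on query results. The Task model and its fields are assumptions.

```javascript
const Realm = require("realm");

const TaskSchema = {
  name: "Task",
  primaryKey: "_id",
  properties: { _id: "objectId", title: "string", done: "bool" },
};

async function main() {
  const realm = await Realm.open({ schema: [TaskSchema] });

  // A live view: the collection keeps reflecting the underlying data.
  const openTasks = realm.objects("Task").filtered("done == false");

  // Fired on any change, no matter where the write came from.
  openTasks.addListener((collection, changes) => {
    console.log(`open tasks: ${collection.length}`, changes);
  });
}

main().catch(console.error);
```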
null | [
"python"
] | [
{
"code": "\"filter\": {\n \"location_id\": { \"$eq\": location_id }\n}\n",
"text": "Greetings all.\nI have followed the guide on using $vectorSearch. But I cannot find any reference to ObjectId.\nHow to filter by an ObjectId?\nI’ve tried the following but it produces an error:where location_id is an ObjectId.\nThe error is the following: “pymongo.errors.OperationFailure: Operand type is not supported for $vectorSearch: objectId”.\nThank you!",
"username": "Mathieu_Morgensztern"
},
{
"code": "",
"text": "Hey,\nYou might want to take a look at SuperDuperDB GitHub - SuperDuperDB/superduperdb: 🔮 SuperDuperDB: Bring AI to your favourite database! Integrate, train and manage any AI models and APIs directly with your database and your data.\nIt has out of box support for vector search with atlas.Let us know if it helps in solving your problem, also\nLeave a star at GitHub - SuperDuperDB/superduperdb: 🔮 SuperDuperDB: Bring AI to your favourite database! Integrate, train and manage any AI models and APIs directly with your database and your data. Thanks",
"username": "thejumpman2323"
}
] | $vectorSearch with pre-filtering on ObjectId | 2023-10-27T09:42:04.592Z | $vectorSearch with pre-filtering on ObjectId | 161 |
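For completeness, since the reply above does not really address the error: at the time of this thread the $vectorSearch pre-filter did not accept ObjectId operands, and a workaround people commonly use is to store a string copy of the id and filter on that instead (the string field also has to be declared as a filter field in the vector index definition). This is a mongosh-style sketch with assumed collection, index and field names; the original question used PyMongo, where the same pipeline applies.

```javascript
// Assumes each document carries both location_id (ObjectId) and
// location_id_str (its string form), and that location_id_str is indexed
// as a "filter" field in the Atlas Vector Search index definition.
const queryEmbedding = [ /* query vector numbers go here */ ];

db.embeddings.aggregate([
  {
    $vectorSearch: {
      index: "vector_index",
      path: "embedding",
      queryVector: queryEmbedding,
      numCandidates: 100,
      limit: 10,
      filter: {
        location_id_str: { $eq: "653d5269bddbd13af40c339a" },
      },
    },
  },
]);
```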
null | [
"java",
"app-services-user-auth",
"android"
] | [
{
"code": "AUTH_ERROR(realm::app::ServiceError:47): illegal base64 data at input byte 342\n\n22:42:09\nAUTH\n\tat io.realm.internal.network.NetworkRequest.onError(NetworkRequest.java:68)\n\n22:42:09\nAUTH\n\tat io.realm.internal.objectstore.OsJavaNetworkTransport.nativeHandleResponse(Native Method)\n\n22:42:09\nAUTH\n\tat io.realm.internal.objectstore.OsJavaNetworkTransport.handleResponse(OsJavaNetworkTransport.java:98)\n\n22:42:09\nAUTH\n\tat io.realm.internal.network.OkHttpNetworkTransport$1.run(OkHttpNetworkTransport.java:102)\n\n22:42:09\nAUTH\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1137)\n\n22:42:09\nAUTH\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:637)\n\n22:42:09\nAUTH\n\tat java.lang.Thread.run(Thread.java:1012)\ncrypto/rsa: verification error\nerror exchanging access code with OAuth2 provider\noverride fun onCreate(savedInstanceState: Bundle?) {\n // Start: signin with google\n val gso = GoogleSignInOptions\n .Builder(GoogleSignInOptions.DEFAULT_SIGN_IN)\n //.requestIdToken(\"555172020768-kd9ggah2uum9dui095fb0es4r9icf0ug.apps.googleusercontent.com\")\n .requestServerAuthCode(\"555172020768-kd9ggah2uum9dui095fb0es4r9icf0ug.apps.googleusercontent.com\")\n //.requestIdToken(\"555172020768-5qa3i5u536svsahannl3k8q8at3l4dgq.apps.googleusercontent.com\")\n .requestEmail()\n .build()\n\n\n val googleSignInClient = GoogleSignIn.getClient(this, gso)\n val resultLauncher: ActivityResultLauncher<Intent> =\n registerForActivityResult(ActivityResultContracts.StartActivityForResult())\n { result ->\n val task: Task<GoogleSignInAccount> =\n GoogleSignIn.getSignedInAccountFromIntent(result.data)\n handleSignInResult(task)\n }\n // End: signin with google\n\n loginwithGoogleButton.setOnClickListener {\n loginWithGoogle(googleSignInClient,resultLauncher)\n }\n\n\n private fun loginWithGoogle(googleSignInClient:GoogleSignInClient, resultLauncher: ActivityResultLauncher<Intent>) {\n val signInIntent: Intent = googleSignInClient.signInIntent\n resultLauncher.launch(signInIntent)\n }\n\n\n private fun handleSignInResult(completedTask: Task<GoogleSignInAccount>) {\n showProgressDialog(\"Autentificare in curs!\")\n try {\n if (completedTask.isSuccessful) {\n val account : GoogleSignInAccount? = completedTask.getResult(ApiException::class.java)\n val email: String? 
= account?.email\n val firstName = account?.givenName\n\n account?.serverAuthCode?.let { account.email?.let { it1 ->\n account.givenName?.let { it2 ->\n astaSaFie(it,\n it1, it2\n )\n }\n } }\n\n\n } else {\n Log.e(\n \"AUTH\", \"Google Auth failed: \"\n + completedTask.exception.toString()\n )\n onLoginFailed(\"Google Auth failed:,${completedTask.exception.toString()}\")\n\n }\n } catch (e: ApiException) {\n Log.w(\"AUTH\", \"Failed to log in with Google OAuth: \" + e.message)\n onLoginFailed(\"Failed to log in with Google OAuth::,${e.message}\")\n }\n }\n\n\n private fun astaSaFie(token:String, email: String, firstName: String){\n\n // val googleCredentials = Credentials.google(token, GoogleAuthType.ID_TOKEN)\n val googleCredentials = Credentials.google(token, GoogleAuthType.AUTH_CODE)\n\n try {\n val urlDecoded = java.util.Base64.getUrlDecoder().decode(token)\n Log.d(\"JWT\", \"URL Decoded: $urlDecoded\")\n }catch (e:Exception){\n Log.d(\"JWT\",\"e:$e\")\n }\n\n try {\n val decoded = java.util.Base64.getDecoder().decode(token)\n Log.d(\"JWT\", \"Decoded: $decoded\")\n } catch (e: Exception) {\n Log.d(\"JWT\", \"e: $e\");\n } \n\n try {\n val androidDecoded = android.util.Base64.decode(token, android.util.Base64.DEFAULT)\n Log.d(\"JWT\", \"Android Decoded DEFAULT: $androidDecoded\")\n } catch (e: Exception) {\n Log.d(\"JWT\", \"e: $e\");\n } \n\n try {\n val urlAndroidDecoded = android.util.Base64.decode(token, android.util.Base64.URL_SAFE)\n Log.d(\"JWT\", \"Android Decoded URL_SAFE: $urlAndroidDecoded\")\n } catch (e: Exception) {\n Log.d(\"JWT\", \"e: $e\");\n } \n\n taskApp?.loginAsync(googleCredentials) { it ->\n if (it.isSuccess) {\n Log.v(\n \"AUTH\",\n \"Successfully logged in to MongoDB Realm using Google OAuth.\"\n )\n Helpers().setLoginSuccesfoul(this, true)\n Helpers().setAuthProvider(this,\"google\")\n\n //TODO: 1 are limite setate in cont? daca nu are inseamna ca este un user now daca da inseamana ca este un user vechi\n checkifAccountIsNeeded(email, firstName)\n\n } else {\n hideProgressDialog()\n onLoginFailed(\"Nu s-a putut crea contul,${it.error}\")\n Log.e(\n \"AUTH\",\n \"Failed to log in to MongoDB Realm: \", it.error\n )\n }\n }\n\n }\n",
"text": "Hello all,Our app’s google sign in is currently not working for a huge part of our user base. I have tried for days to get this thing working and i am fed up with fustration that i am unable to make this work.I have a question for users with more experience … have you been able to successfully implement Sign in with Google for ALL your users ?I have implemented login with both tokenID and authcode methods … but all trys end with unsuccessful errors.orAny help wil be much much appreciated because frankly after reading all forum threads and SO topics … i am frenkly running out of ideeas.Code:The above is my latest attempt using authcode insted of tokenid.Any help will be greatly appreciated.Best regards,\nAndrei",
"username": "Rasvan_Andrei_Dumitr"
    }
] | Google sign in is DOWN for Android app - App Services User Auth | 2022-12-22T13:41:54.715Z | Google sign in is DOWN for Android app - App Services User Auth | 1,887 |
[
"atlas-device-sync",
"android",
"kotlin",
"developer-hub"
] | [
{
"code": "",
"text": "I’ve written an article that shows you how to use Jetpack Compose, Realm Sync and a bunch of emojis to have fun with people at a conference!\nIt uses the latest in architecture techniques too.Dive into: Compose architecture A globally synced Emoji garden Reactivity like you've never seen before!",
"username": "Aniket_Kadam"
},
{
"code": "",
"text": "Hi. I tried to follow your article but it’s already outdated. Many things have changed since the latest Compose release in beta. Any chance you could give it an update?",
"username": "Lukasz_Ciastko"
},
{
"code": "",
"text": "@Lukasz_Ciastko: Thanks for letting us know, will update it and get back to you as soon as possible.",
"username": "Mohit_Sharma"
},
{
"code": "",
"text": "Thanks. I’ve been trying to add Realm to a Jetpack Compose project using the latest Android Studio, but I’m having this issue:Hi, I tried to follow these steps to add Realm to my new Android project:\n\nhtt…ps://docs.mongodb.com/realm/sdk/android/install/#installation\n\nInitially, I'm getting this error:\n\n```\nA problem occurred configuring root project 'My Compose Application'.\n> Could not resolve all artifacts for configuration ':classpath'.\n > Could not find io.realm:realm-gradle-plugin:10.3.1.\n Searched in the following locations:\n - https://dl.google.com/dl/android/maven2/io/realm/realm-gradle-plugin/10.3.1/realm-gradle-plugin-10.3.1.pom\n - https://repo.maven.apache.org/maven2/io/realm/realm-gradle-plugin/10.3.1/realm-gradle-plugin-10.3.1.pom\n Required by:\n project :\n\nPossible solution:\n - Declare repository providing the artifact, see the documentation at https://docs.gradle.org/current/userguide/declaring_repositories.html\n ```\n\nAfter adding jcenter(), the dependency gets downloaded.\n\nThen, I try to add:\n\n```\napply plugin: 'kotlin-kapt'\napply plugin: 'realm-android'\n```\n\nBut this only gives me the following error:\n\n```\nA problem occurred configuring project ':app'.\n> Build was configured to prefer settings repositories over project repositories but repository 'BintrayJCenter' was added by plugin 'realm-android'\n\n* Try:\nRun with --info or --debug option to get more log output. Run with --scan to get full insights.\n\n* Exception is:\norg.gradle.api.ProjectConfigurationException: A problem occurred configuring project ':app'.\n\tat org.gradle.configuration.project.LifecycleProjectEvaluator.wrapException(LifecycleProjectEvaluator.java:75)\n\tat org.gradle.configuration.project.LifecycleProjectEvaluator.addConfigurationFailure(LifecycleProjectEvaluator.java:68)\n\tat org.gradle.configuration.project.LifecycleProjectEvaluator.access$400(LifecycleProjectEvaluator.java:51)\n\tat org.gradle.configuration.project.LifecycleProjectEvaluator$NotifyAfterEvaluate.run(LifecycleProjectEvaluator.java:191)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:29)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:26)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner$3.execute(DefaultBuildOperationRunner.java:75)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner$3.execute(DefaultBuildOperationRunner.java:68)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:153)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:68)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner.run(DefaultBuildOperationRunner.java:56)\n\tat org.gradle.internal.operations.DefaultBuildOperationExecutor.lambda$run$1(DefaultBuildOperationExecutor.java:71)\n\tat org.gradle.internal.operations.UnmanagedBuildOperationWrapper.runWithUnmanagedSupport(UnmanagedBuildOperationWrapper.java:45)\n\tat org.gradle.internal.operations.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:71)\n\tat org.gradle.configuration.project.LifecycleProjectEvaluator$EvaluateProject.lambda$run$0(LifecycleProjectEvaluator.java:105)\n\tat org.gradle.api.internal.project.DefaultProjectStateRegistry$ProjectStateImpl.lambda$applyToMutableState$0(DefaultProjectStateRegistry.java:250)\n\tat 
org.gradle.api.internal.project.DefaultProjectStateRegistry$ProjectStateImpl.lambda$withProjectLock$3(DefaultProjectStateRegistry.java:310)\n\tat org.gradle.internal.work.DefaultWorkerLeaseService.withLocks(DefaultWorkerLeaseService.java:213)\n\tat org.gradle.api.internal.project.DefaultProjectStateRegistry$ProjectStateImpl.withProjectLock(DefaultProjectStateRegistry.java:310)\n\tat org.gradle.api.internal.project.DefaultProjectStateRegistry$ProjectStateImpl.fromMutableState(DefaultProjectStateRegistry.java:291)\n\tat org.gradle.api.internal.project.DefaultProjectStateRegistry$ProjectStateImpl.applyToMutableState(DefaultProjectStateRegistry.java:249)\n\tat org.gradle.configuration.project.LifecycleProjectEvaluator$EvaluateProject.run(LifecycleProjectEvaluator.java:91)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:29)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:26)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner$3.execute(DefaultBuildOperationRunner.java:75)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner$3.execute(DefaultBuildOperationRunner.java:68)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:153)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:68)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner.run(DefaultBuildOperationRunner.java:56)\n\tat org.gradle.internal.operations.DefaultBuildOperationExecutor.lambda$run$1(DefaultBuildOperationExecutor.java:71)\n\tat org.gradle.internal.operations.UnmanagedBuildOperationWrapper.runWithUnmanagedSupport(UnmanagedBuildOperationWrapper.java:45)\n\tat org.gradle.internal.operations.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:71)\n\tat org.gradle.configuration.project.LifecycleProjectEvaluator.evaluate(LifecycleProjectEvaluator.java:63)\n\tat org.gradle.api.internal.project.DefaultProject.evaluate(DefaultProject.java:721)\n\tat org.gradle.api.internal.project.DefaultProject.evaluate(DefaultProject.java:151)\n\tat org.gradle.execution.TaskPathProjectEvaluator.configure(TaskPathProjectEvaluator.java:41)\n\tat org.gradle.execution.TaskPathProjectEvaluator.configureHierarchy(TaskPathProjectEvaluator.java:69)\n\tat org.gradle.configuration.DefaultProjectsPreparer.prepareProjects(DefaultProjectsPreparer.java:46)\n\tat org.gradle.configuration.BuildTreePreparingProjectsPreparer.prepareProjects(BuildTreePreparingProjectsPreparer.java:56)\n\tat org.gradle.configuration.BuildOperationFiringProjectsPreparer$ConfigureBuild.run(BuildOperationFiringProjectsPreparer.java:52)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:29)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:26)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner$3.execute(DefaultBuildOperationRunner.java:75)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner$3.execute(DefaultBuildOperationRunner.java:68)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:153)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:68)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner.run(DefaultBuildOperationRunner.java:56)\n\tat 
org.gradle.internal.operations.DefaultBuildOperationExecutor.lambda$run$1(DefaultBuildOperationExecutor.java:71)\n\tat org.gradle.internal.operations.UnmanagedBuildOperationWrapper.runWithUnmanagedSupport(UnmanagedBuildOperationWrapper.java:45)\n\tat org.gradle.internal.operations.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:71)\n\tat org.gradle.configuration.BuildOperationFiringProjectsPreparer.prepareProjects(BuildOperationFiringProjectsPreparer.java:40)\n\tat org.gradle.initialization.DefaultGradleLauncher.prepareProjects(DefaultGradleLauncher.java:226)\n\tat org.gradle.initialization.DefaultGradleLauncher.doClassicBuildStages(DefaultGradleLauncher.java:163)\n\tat org.gradle.initialization.DefaultGradleLauncher.doBuildStages(DefaultGradleLauncher.java:148)\n\tat org.gradle.initialization.DefaultGradleLauncher.executeTasks(DefaultGradleLauncher.java:124)\n\tat org.gradle.internal.invocation.GradleBuildController$1.create(GradleBuildController.java:72)\n\tat org.gradle.internal.invocation.GradleBuildController$1.create(GradleBuildController.java:67)\n\tat org.gradle.internal.work.DefaultWorkerLeaseService.withLocks(DefaultWorkerLeaseService.java:213)\n\tat org.gradle.internal.invocation.GradleBuildController.doBuild(GradleBuildController.java:67)\n\tat org.gradle.internal.invocation.GradleBuildController.run(GradleBuildController.java:56)\n\tat org.gradle.tooling.internal.provider.runner.AbstractClientProvidedBuildActionRunner.runClientAction(AbstractClientProvidedBuildActionRunner.java:53)\n\tat org.gradle.tooling.internal.provider.runner.ClientProvidedPhasedActionRunner.run(ClientProvidedPhasedActionRunner.java:47)\n\tat org.gradle.launcher.exec.ChainingBuildActionRunner.run(ChainingBuildActionRunner.java:35)\n\tat org.gradle.launcher.exec.ChainingBuildActionRunner.run(ChainingBuildActionRunner.java:35)\n\tat org.gradle.launcher.exec.BuildOutcomeReportingBuildActionRunner.run(BuildOutcomeReportingBuildActionRunner.java:63)\n\tat org.gradle.tooling.internal.provider.ValidatingBuildActionRunner.run(ValidatingBuildActionRunner.java:32)\n\tat org.gradle.tooling.internal.provider.FileSystemWatchingBuildActionRunner.run(FileSystemWatchingBuildActionRunner.java:77)\n\tat org.gradle.launcher.exec.BuildCompletionNotifyingBuildActionRunner.run(BuildCompletionNotifyingBuildActionRunner.java:41)\n\tat org.gradle.launcher.exec.RunAsBuildOperationBuildActionRunner$3.call(RunAsBuildOperationBuildActionRunner.java:49)\n\tat org.gradle.launcher.exec.RunAsBuildOperationBuildActionRunner$3.call(RunAsBuildOperationBuildActionRunner.java:44)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:200)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:195)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner$3.execute(DefaultBuildOperationRunner.java:75)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner$3.execute(DefaultBuildOperationRunner.java:68)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:153)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:68)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:62)\n\tat org.gradle.internal.operations.DefaultBuildOperationExecutor.lambda$call$2(DefaultBuildOperationExecutor.java:76)\n\tat 
org.gradle.internal.operations.UnmanagedBuildOperationWrapper.callWithUnmanagedSupport(UnmanagedBuildOperationWrapper.java:54)\n\tat org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:76)\n\tat org.gradle.launcher.exec.RunAsBuildOperationBuildActionRunner.run(RunAsBuildOperationBuildActionRunner.java:44)\n\tat org.gradle.launcher.exec.InProcessBuildActionExecuter.lambda$execute$0(InProcessBuildActionExecuter.java:54)\n\tat org.gradle.composite.internal.DefaultRootBuildState.run(DefaultRootBuildState.java:86)\n\tat org.gradle.launcher.exec.InProcessBuildActionExecuter.execute(InProcessBuildActionExecuter.java:53)\n\tat org.gradle.launcher.exec.InProcessBuildActionExecuter.execute(InProcessBuildActionExecuter.java:29)\n\tat org.gradle.launcher.exec.BuildTreeScopeLifecycleBuildActionExecuter.lambda$execute$0(BuildTreeScopeLifecycleBuildActionExecuter.java:33)\n\tat org.gradle.internal.buildtree.BuildTreeState.run(BuildTreeState.java:49)\n\tat org.gradle.launcher.exec.BuildTreeScopeLifecycleBuildActionExecuter.execute(BuildTreeScopeLifecycleBuildActionExecuter.java:32)\n\tat org.gradle.launcher.exec.BuildTreeScopeLifecycleBuildActionExecuter.execute(BuildTreeScopeLifecycleBuildActionExecuter.java:27)\n\tat org.gradle.tooling.internal.provider.ContinuousBuildActionExecuter.execute(ContinuousBuildActionExecuter.java:104)\n\tat org.gradle.tooling.internal.provider.ContinuousBuildActionExecuter.execute(ContinuousBuildActionExecuter.java:55)\n\tat org.gradle.tooling.internal.provider.SubscribableBuildActionExecuter.execute(SubscribableBuildActionExecuter.java:64)\n\tat org.gradle.tooling.internal.provider.SubscribableBuildActionExecuter.execute(SubscribableBuildActionExecuter.java:37)\n\tat org.gradle.tooling.internal.provider.SessionScopeLifecycleBuildActionExecuter.lambda$execute$0(SessionScopeLifecycleBuildActionExecuter.java:54)\n\tat org.gradle.internal.session.BuildSessionState.run(BuildSessionState.java:67)\n\tat org.gradle.tooling.internal.provider.SessionScopeLifecycleBuildActionExecuter.execute(SessionScopeLifecycleBuildActionExecuter.java:50)\n\tat org.gradle.tooling.internal.provider.SessionScopeLifecycleBuildActionExecuter.execute(SessionScopeLifecycleBuildActionExecuter.java:36)\n\tat org.gradle.tooling.internal.provider.GradleThreadBuildActionExecuter.execute(GradleThreadBuildActionExecuter.java:36)\n\tat org.gradle.tooling.internal.provider.GradleThreadBuildActionExecuter.execute(GradleThreadBuildActionExecuter.java:25)\n\tat org.gradle.tooling.internal.provider.StartParamsValidatingActionExecuter.execute(StartParamsValidatingActionExecuter.java:59)\n\tat org.gradle.tooling.internal.provider.StartParamsValidatingActionExecuter.execute(StartParamsValidatingActionExecuter.java:31)\n\tat org.gradle.tooling.internal.provider.SessionFailureReportingActionExecuter.execute(SessionFailureReportingActionExecuter.java:55)\n\tat org.gradle.tooling.internal.provider.SessionFailureReportingActionExecuter.execute(SessionFailureReportingActionExecuter.java:41)\n\tat org.gradle.tooling.internal.provider.SetupLoggingActionExecuter.execute(SetupLoggingActionExecuter.java:47)\n\tat org.gradle.tooling.internal.provider.SetupLoggingActionExecuter.execute(SetupLoggingActionExecuter.java:31)\n\tat org.gradle.launcher.daemon.server.exec.ExecuteBuild.doBuild(ExecuteBuild.java:65)\n\tat org.gradle.launcher.daemon.server.exec.BuildCommandOnly.execute(BuildCommandOnly.java:37)\n\tat 
org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:104)\n\tat org.gradle.launcher.daemon.server.exec.WatchForDisconnection.execute(WatchForDisconnection.java:39)\n\tat org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:104)\n\tat org.gradle.launcher.daemon.server.exec.ResetDeprecationLogger.execute(ResetDeprecationLogger.java:29)\n\tat org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:104)\n\tat org.gradle.launcher.daemon.server.exec.RequestStopIfSingleUsedDaemon.execute(RequestStopIfSingleUsedDaemon.java:35)\n\tat org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:104)\n\tat org.gradle.launcher.daemon.server.exec.ForwardClientInput$2.create(ForwardClientInput.java:78)\n\tat org.gradle.launcher.daemon.server.exec.ForwardClientInput$2.create(ForwardClientInput.java:75)\n\tat org.gradle.util.Swapper.swap(Swapper.java:38)\n\tat org.gradle.launcher.daemon.server.exec.ForwardClientInput.execute(ForwardClientInput.java:75)\n\tat org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:104)\n\tat org.gradle.launcher.daemon.server.exec.LogAndCheckHealth.execute(LogAndCheckHealth.java:55)\n\tat org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:104)\n\tat org.gradle.launcher.daemon.server.exec.LogToClient.doBuild(LogToClient.java:63)\n\tat org.gradle.launcher.daemon.server.exec.BuildCommandOnly.execute(BuildCommandOnly.java:37)\n\tat org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:104)\n\tat org.gradle.launcher.daemon.server.exec.EstablishBuildEnvironment.doBuild(EstablishBuildEnvironment.java:84)\n\tat org.gradle.launcher.daemon.server.exec.BuildCommandOnly.execute(BuildCommandOnly.java:37)\n\tat org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:104)\n\tat org.gradle.launcher.daemon.server.exec.StartBuildOrRespondWithBusy$1.run(StartBuildOrRespondWithBusy.java:52)\n\tat org.gradle.launcher.daemon.server.DaemonStateCoordinator$1.run(DaemonStateCoordinator.java:297)\n\tat org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)\n\tat org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)\n\tat org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:56)\nCaused by: org.gradle.api.InvalidUserCodeException: Build was configured to prefer settings repositories over project repositories but repository 'BintrayJCenter' was added by plugin 'realm-android'\n\tat org.gradle.internal.management.DefaultDependencyResolutionManagement.repoMutationDisallowedOnProject(DefaultDependencyResolutionManagement.java:160)\n\tat org.gradle.internal.ImmutableActionSet$SetWithFewActions.execute(ImmutableActionSet.java:285)\n\tat org.gradle.api.internal.DefaultDomainObjectCollection.doAdd(DefaultDomainObjectCollection.java:264)\n\tat org.gradle.api.internal.DefaultNamedDomainObjectCollection.doAdd(DefaultNamedDomainObjectCollection.java:113)\n\tat org.gradle.api.internal.DefaultDomainObjectCollection.add(DefaultDomainObjectCollection.java:253)\n\tat org.gradle.api.internal.artifacts.DefaultArtifactRepositoryContainer.access$101(DefaultArtifactRepositoryContainer.java:35)\n\tat 
org.gradle.api.internal.artifacts.DefaultArtifactRepositoryContainer.lambda$new$0(DefaultArtifactRepositoryContainer.java:38)\n\tat org.gradle.api.internal.artifacts.DefaultArtifactRepositoryContainer.addWithUniqueName(DefaultArtifactRepositoryContainer.java:101)\n\tat org.gradle.api.internal.artifacts.DefaultArtifactRepositoryContainer.addRepository(DefaultArtifactRepositoryContainer.java:89)\n\tat org.gradle.api.internal.artifacts.DefaultArtifactRepositoryContainer.addRepository(DefaultArtifactRepositoryContainer.java:84)\n\tat org.gradle.api.internal.artifacts.dsl.DefaultRepositoryHandler.jcenter(DefaultRepositoryHandler.java:111)\n\tat org.gradle.api.artifacts.dsl.RepositoryHandler$jcenter.call(Unknown Source)\n\tat io.realm.gradle.Realm$_apply_closure1.doCall(Realm.groovy:89)\n\tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n\tat java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat org.gradle.configuration.internal.DefaultListenerBuildOperationDecorator$BuildOperationEmittingClosure$1.lambda$run$0(DefaultListenerBuildOperationDecorator.java:181)\n\tat org.gradle.configuration.internal.DefaultUserCodeApplicationContext$CurrentApplication.reapply(DefaultUserCodeApplicationContext.java:86)\n\tat org.gradle.configuration.internal.DefaultListenerBuildOperationDecorator$BuildOperationEmittingClosure$1.run(DefaultListenerBuildOperationDecorator.java:178)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:29)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:26)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner$3.execute(DefaultBuildOperationRunner.java:75)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner$3.execute(DefaultBuildOperationRunner.java:68)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:153)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:68)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner.run(DefaultBuildOperationRunner.java:56)\n\tat org.gradle.internal.operations.DefaultBuildOperationExecutor.lambda$run$1(DefaultBuildOperationExecutor.java:71)\n\tat org.gradle.internal.operations.UnmanagedBuildOperationWrapper.runWithUnmanagedSupport(UnmanagedBuildOperationWrapper.java:45)\n\tat org.gradle.internal.operations.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:71)\n\tat org.gradle.configuration.internal.DefaultListenerBuildOperationDecorator$BuildOperationEmittingClosure.doCall(DefaultListenerBuildOperationDecorator.java:175)\n\tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n\tat java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat org.gradle.listener.ClosureBackedMethodInvocationDispatch.dispatch(ClosureBackedMethodInvocationDispatch.java:41)\n\tat org.gradle.listener.ClosureBackedMethodInvocationDispatch.dispatch(ClosureBackedMethodInvocationDispatch.java:25)\n\tat org.gradle.internal.event.AbstractBroadcastDispatch.dispatch(AbstractBroadcastDispatch.java:43)\n\tat 
org.gradle.internal.event.BroadcastDispatch$SingletonDispatch.dispatch(BroadcastDispatch.java:245)\n\tat org.gradle.internal.event.BroadcastDispatch$SingletonDispatch.dispatch(BroadcastDispatch.java:157)\n\tat org.gradle.internal.event.AbstractBroadcastDispatch.dispatch(AbstractBroadcastDispatch.java:61)\n\tat org.gradle.internal.event.BroadcastDispatch$CompositeDispatch.dispatch(BroadcastDispatch.java:346)\n\tat org.gradle.internal.event.BroadcastDispatch$CompositeDispatch.dispatch(BroadcastDispatch.java:249)\n\tat org.gradle.internal.event.ListenerBroadcast.dispatch(ListenerBroadcast.java:141)\n\tat org.gradle.internal.event.ListenerBroadcast.dispatch(ListenerBroadcast.java:37)\n\tat org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)\n\tat com.sun.proxy.$Proxy50.afterEvaluate(Unknown Source)\n\tat org.gradle.configuration.project.LifecycleProjectEvaluator$NotifyAfterEvaluate$1.execute(LifecycleProjectEvaluator.java:183)\n\tat org.gradle.configuration.project.LifecycleProjectEvaluator$NotifyAfterEvaluate$1.execute(LifecycleProjectEvaluator.java:180)\n\tat org.gradle.api.internal.project.DefaultProject.stepEvaluationListener(DefaultProject.java:1465)\n\tat org.gradle.configuration.project.LifecycleProjectEvaluator$NotifyAfterEvaluate.run(LifecycleProjectEvaluator.java:189)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:29)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:26)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner$3.execute(DefaultBuildOperationRunner.java:75)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner$3.execute(DefaultBuildOperationRunner.java:68)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:153)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:68)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner.run(DefaultBuildOperationRunner.java:56)\n\tat org.gradle.internal.operations.DefaultBuildOperationExecutor.lambda$run$1(DefaultBuildOperationExecutor.java:71)\n\tat org.gradle.internal.operations.UnmanagedBuildOperationWrapper.runWithUnmanagedSupport(UnmanagedBuildOperationWrapper.java:45)\n\tat org.gradle.internal.operations.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:71)\n\tat org.gradle.configuration.project.LifecycleProjectEvaluator$EvaluateProject.lambda$run$0(LifecycleProjectEvaluator.java:105)\n\tat org.gradle.api.internal.project.DefaultProjectStateRegistry$ProjectStateImpl.lambda$applyToMutableState$0(DefaultProjectStateRegistry.java:250)\n\tat org.gradle.api.internal.project.DefaultProjectStateRegistry$ProjectStateImpl.lambda$withProjectLock$3(DefaultProjectStateRegistry.java:310)\n\tat org.gradle.internal.work.DefaultWorkerLeaseService.withLocks(DefaultWorkerLeaseService.java:213)\n\tat org.gradle.api.internal.project.DefaultProjectStateRegistry$ProjectStateImpl.withProjectLock(DefaultProjectStateRegistry.java:310)\n\tat org.gradle.api.internal.project.DefaultProjectStateRegistry$ProjectStateImpl.fromMutableState(DefaultProjectStateRegistry.java:291)\n\tat org.gradle.api.internal.project.DefaultProjectStateRegistry$ProjectStateImpl.applyToMutableState(DefaultProjectStateRegistry.java:249)\n\tat 
org.gradle.configuration.project.LifecycleProjectEvaluator$EvaluateProject.run(LifecycleProjectEvaluator.java:91)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:29)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:26)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner$3.execute(DefaultBuildOperationRunner.java:75)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner$3.execute(DefaultBuildOperationRunner.java:68)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:153)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:68)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner.run(DefaultBuildOperationRunner.java:56)\n\tat org.gradle.internal.operations.DefaultBuildOperationExecutor.lambda$run$1(DefaultBuildOperationExecutor.java:71)\n\tat org.gradle.internal.operations.UnmanagedBuildOperationWrapper.runWithUnmanagedSupport(UnmanagedBuildOperationWrapper.java:45)\n\tat org.gradle.internal.operations.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:71)\n\tat org.gradle.configuration.project.LifecycleProjectEvaluator.evaluate(LifecycleProjectEvaluator.java:63)\n\tat org.gradle.api.internal.project.DefaultProject.evaluate(DefaultProject.java:721)\n\tat org.gradle.api.internal.project.DefaultProject.evaluate(DefaultProject.java:151)\n\tat org.gradle.execution.TaskPathProjectEvaluator.configure(TaskPathProjectEvaluator.java:41)\n\tat org.gradle.execution.TaskPathProjectEvaluator.configureHierarchy(TaskPathProjectEvaluator.java:69)\n\tat org.gradle.configuration.DefaultProjectsPreparer.prepareProjects(DefaultProjectsPreparer.java:46)\n\tat org.gradle.configuration.BuildTreePreparingProjectsPreparer.prepareProjects(BuildTreePreparingProjectsPreparer.java:56)\n\tat org.gradle.configuration.BuildOperationFiringProjectsPreparer$ConfigureBuild.run(BuildOperationFiringProjectsPreparer.java:52)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:29)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:26)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner$3.execute(DefaultBuildOperationRunner.java:75)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner$3.execute(DefaultBuildOperationRunner.java:68)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:153)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:68)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner.run(DefaultBuildOperationRunner.java:56)\n\tat org.gradle.internal.operations.DefaultBuildOperationExecutor.lambda$run$1(DefaultBuildOperationExecutor.java:71)\n\tat org.gradle.internal.operations.UnmanagedBuildOperationWrapper.runWithUnmanagedSupport(UnmanagedBuildOperationWrapper.java:45)\n\tat org.gradle.internal.operations.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:71)\n\tat org.gradle.configuration.BuildOperationFiringProjectsPreparer.prepareProjects(BuildOperationFiringProjectsPreparer.java:40)\n\tat org.gradle.initialization.DefaultGradleLauncher.prepareProjects(DefaultGradleLauncher.java:226)\n\tat 
org.gradle.initialization.DefaultGradleLauncher.doClassicBuildStages(DefaultGradleLauncher.java:163)\n\tat org.gradle.initialization.DefaultGradleLauncher.doBuildStages(DefaultGradleLauncher.java:148)\n\tat org.gradle.initialization.DefaultGradleLauncher.executeTasks(DefaultGradleLauncher.java:124)\n\tat org.gradle.internal.invocation.GradleBuildController$1.create(GradleBuildController.java:72)\n\tat org.gradle.internal.invocation.GradleBuildController$1.create(GradleBuildController.java:67)\n\tat org.gradle.internal.work.DefaultWorkerLeaseService.withLocks(DefaultWorkerLeaseService.java:213)\n\tat org.gradle.internal.invocation.GradleBuildController.doBuild(GradleBuildController.java:67)\n\tat org.gradle.internal.invocation.GradleBuildController.run(GradleBuildController.java:56)\n\tat org.gradle.tooling.internal.provider.runner.AbstractClientProvidedBuildActionRunner.runClientAction(AbstractClientProvidedBuildActionRunner.java:53)\n\tat org.gradle.tooling.internal.provider.runner.ClientProvidedPhasedActionRunner.run(ClientProvidedPhasedActionRunner.java:47)\n\tat org.gradle.launcher.exec.ChainingBuildActionRunner.run(ChainingBuildActionRunner.java:35)\n\tat org.gradle.launcher.exec.ChainingBuildActionRunner.run(ChainingBuildActionRunner.java:35)\n\tat org.gradle.launcher.exec.BuildOutcomeReportingBuildActionRunner.run(BuildOutcomeReportingBuildActionRunner.java:63)\n\tat org.gradle.tooling.internal.provider.ValidatingBuildActionRunner.run(ValidatingBuildActionRunner.java:32)\n\tat org.gradle.tooling.internal.provider.FileSystemWatchingBuildActionRunner.run(FileSystemWatchingBuildActionRunner.java:77)\n\tat org.gradle.launcher.exec.BuildCompletionNotifyingBuildActionRunner.run(BuildCompletionNotifyingBuildActionRunner.java:41)\n\tat org.gradle.launcher.exec.RunAsBuildOperationBuildActionRunner$3.call(RunAsBuildOperationBuildActionRunner.java:49)\n\tat org.gradle.launcher.exec.RunAsBuildOperationBuildActionRunner$3.call(RunAsBuildOperationBuildActionRunner.java:44)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:200)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:195)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner$3.execute(DefaultBuildOperationRunner.java:75)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner$3.execute(DefaultBuildOperationRunner.java:68)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:153)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:68)\n\tat org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:62)\n\tat org.gradle.internal.operations.DefaultBuildOperationExecutor.lambda$call$2(DefaultBuildOperationExecutor.java:76)\n\tat org.gradle.internal.operations.UnmanagedBuildOperationWrapper.callWithUnmanagedSupport(UnmanagedBuildOperationWrapper.java:54)\n\tat org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:76)\n\tat org.gradle.launcher.exec.RunAsBuildOperationBuildActionRunner.run(RunAsBuildOperationBuildActionRunner.java:44)\n\tat org.gradle.launcher.exec.InProcessBuildActionExecuter.lambda$execute$0(InProcessBuildActionExecuter.java:54)\n\tat org.gradle.composite.internal.DefaultRootBuildState.run(DefaultRootBuildState.java:86)\n\tat 
org.gradle.launcher.exec.InProcessBuildActionExecuter.execute(InProcessBuildActionExecuter.java:53)\n\tat org.gradle.launcher.exec.InProcessBuildActionExecuter.execute(InProcessBuildActionExecuter.java:29)\n\tat org.gradle.launcher.exec.BuildTreeScopeLifecycleBuildActionExecuter.lambda$execute$0(BuildTreeScopeLifecycleBuildActionExecuter.java:33)\n\tat org.gradle.internal.buildtree.BuildTreeState.run(BuildTreeState.java:49)\n\tat org.gradle.launcher.exec.BuildTreeScopeLifecycleBuildActionExecuter.execute(BuildTreeScopeLifecycleBuildActionExecuter.java:32)\n\tat org.gradle.launcher.exec.BuildTreeScopeLifecycleBuildActionExecuter.execute(BuildTreeScopeLifecycleBuildActionExecuter.java:27)\n\tat org.gradle.tooling.internal.provider.ContinuousBuildActionExecuter.execute(ContinuousBuildActionExecuter.java:104)\n\tat org.gradle.tooling.internal.provider.ContinuousBuildActionExecuter.execute(ContinuousBuildActionExecuter.java:55)\n\tat org.gradle.tooling.internal.provider.SubscribableBuildActionExecuter.execute(SubscribableBuildActionExecuter.java:64)\n\tat org.gradle.tooling.internal.provider.SubscribableBuildActionExecuter.execute(SubscribableBuildActionExecuter.java:37)\n\tat org.gradle.tooling.internal.provider.SessionScopeLifecycleBuildActionExecuter.lambda$execute$0(SessionScopeLifecycleBuildActionExecuter.java:54)\n\tat org.gradle.internal.session.BuildSessionState.run(BuildSessionState.java:67)\n\tat org.gradle.tooling.internal.provider.SessionScopeLifecycleBuildActionExecuter.execute(SessionScopeLifecycleBuildActionExecuter.java:50)\n\tat org.gradle.tooling.internal.provider.SessionScopeLifecycleBuildActionExecuter.execute(SessionScopeLifecycleBuildActionExecuter.java:36)\n\tat org.gradle.tooling.internal.provider.GradleThreadBuildActionExecuter.execute(GradleThreadBuildActionExecuter.java:36)\n\tat org.gradle.tooling.internal.provider.GradleThreadBuildActionExecuter.execute(GradleThreadBuildActionExecuter.java:25)\n\tat org.gradle.tooling.internal.provider.StartParamsValidatingActionExecuter.execute(StartParamsValidatingActionExecuter.java:59)\n\tat org.gradle.tooling.internal.provider.StartParamsValidatingActionExecuter.execute(StartParamsValidatingActionExecuter.java:31)\n\tat org.gradle.tooling.internal.provider.SessionFailureReportingActionExecuter.execute(SessionFailureReportingActionExecuter.java:55)\n\tat org.gradle.tooling.internal.provider.SessionFailureReportingActionExecuter.execute(SessionFailureReportingActionExecuter.java:41)\n\tat org.gradle.tooling.internal.provider.SetupLoggingActionExecuter.execute(SetupLoggingActionExecuter.java:47)\n\tat org.gradle.tooling.internal.provider.SetupLoggingActionExecuter.execute(SetupLoggingActionExecuter.java:31)\n\tat org.gradle.launcher.daemon.server.exec.ExecuteBuild.doBuild(ExecuteBuild.java:65)\n\tat org.gradle.launcher.daemon.server.exec.BuildCommandOnly.execute(BuildCommandOnly.java:37)\n\tat org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:104)\n\tat org.gradle.launcher.daemon.server.exec.WatchForDisconnection.execute(WatchForDisconnection.java:39)\n\tat org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:104)\n\tat org.gradle.launcher.daemon.server.exec.ResetDeprecationLogger.execute(ResetDeprecationLogger.java:29)\n\tat org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:104)\n\tat 
org.gradle.launcher.daemon.server.exec.RequestStopIfSingleUsedDaemon.execute(RequestStopIfSingleUsedDaemon.java:35)\n\tat org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:104)\n\tat org.gradle.launcher.daemon.server.exec.ForwardClientInput$2.create(ForwardClientInput.java:78)\n\tat org.gradle.launcher.daemon.server.exec.ForwardClientInput$2.create(ForwardClientInput.java:75)\n\tat org.gradle.util.Swapper.swap(Swapper.java:38)\n\tat org.gradle.launcher.daemon.server.exec.ForwardClientInput.execute(ForwardClientInput.java:75)\n\tat org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:104)\n\tat org.gradle.launcher.daemon.server.exec.LogAndCheckHealth.execute(LogAndCheckHealth.java:55)\n\tat org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:104)\n\tat org.gradle.launcher.daemon.server.exec.LogToClient.doBuild(LogToClient.java:63)\n\tat org.gradle.launcher.daemon.server.exec.BuildCommandOnly.execute(BuildCommandOnly.java:37)\n\tat org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:104)\n\tat org.gradle.launcher.daemon.server.exec.EstablishBuildEnvironment.doBuild(EstablishBuildEnvironment.java:84)\n\tat org.gradle.launcher.daemon.server.exec.BuildCommandOnly.execute(BuildCommandOnly.java:37)\n\tat org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:104)\n\tat org.gradle.launcher.daemon.server.exec.StartBuildOrRespondWithBusy$1.run(StartBuildOrRespondWithBusy.java:52)\n\tat org.gradle.launcher.daemon.server.DaemonStateCoordinator$1.run(DaemonStateCoordinator.java:297)\n\tat org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)\n\tat org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)\n\tat org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:56)\n```\n\nBtw. 
the link to install instructions in this repository is outdated.\n\nWhat should I do to add Realm to my app?\n\nI'm using:\n\n<img width=\"628\" alt=\"Screenshot 2021-03-20 at 00 19 41\" src=\"https://user-images.githubusercontent.com/964414/111850960-f868d000-8911-11eb-9618-0bcd4351844e.png\">\n\nMy project gradle:\n\n```\nbuildscript {\n ext {\n compose_version = '1.0.0-beta02'\n }\n repositories {\n jcenter()\n google()\n mavenCentral()\n }\n dependencies {\n classpath \"com.android.tools.build:gradle:7.0.0-alpha10\"\n classpath \"org.jetbrains.kotlin:kotlin-gradle-plugin:1.4.30\"\n classpath \"io.realm:realm-gradle-plugin:10.3.1\"\n }\n}\n\ntask clean(type: Delete) {\n delete rootProject.buildDir\n}\n```\n\nMy app gradle:\n\n```\nplugins {\n id 'com.android.application'\n id 'kotlin-android'\n id 'kotlin-kapt'\n id 'realm-android'\n}\n\nandroid {\n compileSdkVersion 30\n buildToolsVersion \"30.0.3\"\n\n\n defaultConfig {\n applicationId \"com.example.mycomposeapplication\"\n minSdkVersion 26\n targetSdkVersion 30\n versionCode 1\n versionName \"1.0\"\n\n testInstrumentationRunner \"androidx.test.runner.AndroidJUnitRunner\"\n }\n\n buildTypes {\n release {\n minifyEnabled false\n proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'\n }\n }\n compileOptions {\n sourceCompatibility JavaVersion.VERSION_1_8\n targetCompatibility JavaVersion.VERSION_1_8\n }\n kotlinOptions {\n jvmTarget = '1.8'\n useIR = true\n }\n buildFeatures {\n compose true\n }\n composeOptions {\n kotlinCompilerExtensionVersion compose_version\n kotlinCompilerVersion '1.4.30'\n }\n}\n\ndependencies {\n\n implementation 'androidx.core:core-ktx:1.3.2'\n implementation 'androidx.appcompat:appcompat:1.2.0'\n implementation 'com.google.android.material:material:1.3.0'\n implementation \"androidx.compose.ui:ui:$compose_version\"\n implementation \"androidx.compose.material:material:$compose_version\"\n implementation \"androidx.compose.ui:ui-tooling:$compose_version\"\n implementation 'androidx.lifecycle:lifecycle-runtime-ktx:2.3.0'\n implementation 'androidx.activity:activity-compose:1.3.0-alpha04'\n testImplementation 'junit:junit:4.+'\n androidTestImplementation 'androidx.test.ext:junit:1.1.2'\n androidTestImplementation 'androidx.test.espresso:espresso-core:3.3.0'\n androidTestImplementation \"androidx.compose.ui:ui-test-junit4:$compose_version\"\n\n // For the viewModel function that imports them into activities\n implementation 'androidx.activity:activity-ktx:1.3.0-alpha04'\n\n // For the ViewModelScope if using Coroutines in the ViewModel\n implementation 'androidx.lifecycle:lifecycle-viewmodel-ktx:2.3.0'\n implementation 'androidx.lifecycle:lifecycle-runtime-ktx:2.3.0'\n}\n```",
"username": "Lukasz_Ciastko"
},
{
"code": "",
"text": "@Lukasz_Ciastko : Yes, I am aware of that and has responded to you on the same.",
"username": "Mohit_Sharma"
}
] | Article on building an Android App - an Emoji Garden with friends! | 2021-03-19T06:41:37.193Z | Article on building an Android App - an Emoji Garden with friends! | 4,330 |
|
null | [] | [
{
"code": "",
"text": "Just generated a new code for the GitHub education $50 offer but it’s not letting me redeem it.\nI keep getting this error: “The coupon is either applied before start date or after end date”",
"username": "nxsi"
},
{
"code": "",
"text": "Hi Tyson and welcome to the forums! I will reach out to you via DM for more info.",
"username": "Aiyana_McConnell"
},
{
"code": "",
"text": "Hey, facing the same issue. Can you DM me, please?",
"username": "s4gor"
},
{
"code": "",
"text": "Welcome, Imran! Yes, I will DM you shortly.",
"username": "Aiyana_McConnell"
}
] | Can't redeem GitHub Educational code | 2023-10-17T03:35:36.197Z | Can’t redeem GitHub Educational code | 266 |
null | [
"queries",
"python"
] | [
{
"code": "<COLLECTION>.update_one({'list': 'a'}, {'$addToSet': {'list': 'a'}}, upsert=True)<COLLECTION>.update_one({'list': {'$in': ['a']}}, {'$addToSet': {'list': 'a'}}, upsert=True)Cannot apply $addToSet to non-array field. Field named 'list' has non-array type string<COLLECTION>.update_one({'list': ['a']}, {'$addToSet': {'list': 'a'}}, upsert=True)<COLLECTION>.update_one({'list': {'$in': ['a', 'b']}}, {'$addToSet': {'list': 'a'}}, upsert=True)",
"text": "On MongoDB 6 with an empty collection, running these queries won’t work:\n<COLLECTION>.update_one({'list': 'a'}, {'$addToSet': {'list': 'a'}}, upsert=True)\n<COLLECTION>.update_one({'list': {'$in': ['a']}}, {'$addToSet': {'list': 'a'}}, upsert=True)Only if the document doesn’t exist, will MongoDB assume the field is a string and raise this error:\nCannot apply $addToSet to non-array field. Field named 'list' has non-array type stringThese will work because MongoDB assumes the key is an array:\n<COLLECTION>.update_one({'list': ['a']}, {'$addToSet': {'list': 'a'}}, upsert=True)\n<COLLECTION>.update_one({'list': {'$in': ['a', 'b']}}, {'$addToSet': {'list': 'a'}}, upsert=True)I tried searching for the solution everywhere and people say it’s solved around MongoDB 3.2~ which it probably either wrong or a fix for a different issue.\nIt’s probably not related to ‘pymongo’, but that’s the package I used.Does anyone know a good solution or how to get around this issue?",
"username": "ashley.majar"
},
{
"code": "<COLLECTION}.update_one( { 'list' : { '$in' : [ 'a' , null ] } },\n { '$addToSet' : { 'list' : 'a' } } ,\n upsert=True)\n",
"text": "how to get around this issue?This workaround will not work if you want to allow null in your array. If you want to allow null you may choose any value that you know should not be in the array. The empty object {} also seems to work.By the way it is the fact that you wroteThese will work because MongoDB assumes the key is an array:\n…\n.update_one({‘list’: {‘$in’: [‘a’, ‘b’]}}, {‘$addToSet’: {‘list’: ‘a’}}, upsert=True)that gave me the idea to try with null rather than ‘b’.",
"username": "steevej"
}
] | $addToSet not working in update_one with upsert=True when defined in query too | 2023-10-23T15:37:13.487Z | $addToSet not working in update_one with upsert=True when defined in query too | 357 |
null | [
"node-js"
] | [
{
"code": "numberInvalid type. Expected: type: undefined, bsonType: [double,null], given: [integer int long number mixed]150",
"text": "I am using Realm React SDK with typescript. I have a field that is defined as GeoPoint on my schema located on the Mongo Atlas. in there, there is a field named coordinates that is defined as double.However, there is chance that a coordinate is a flat number, i.e. 150. In that case, when I try to set that number of my typescript with type number. The server side will reject my insert request with the message of Invalid type. Expected: type: undefined, bsonType: [double,null], given: [integer int long number mixed]How can I convert the number 150 to double so that the server will not reject my insert.",
"username": "Chongju_Mai"
},
{
"code": "",
"text": "coordinates that is defined as doubleWe made need some clarity; you’ve got a field called ‘coordinates’ (plural) but it only stores one double value, is that correct?Internally, both a double and an int map to a Java Number so the reason for the error is not clear. But, what’s being assigned doesn’'t appears to match the error.Can you please update your question to include your model and the code you’re attempting to use to write to that model?",
"username": "Jay"
},
{
"code": "testgeoLocationGeoPoint{\n \"properties\": {\n \"geoLocation\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"type\": {\n \"bsonType\": \"string\"\n },\n \"coordinates\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"double\"\n }\n }\n },\n \"required\": [\n \"type\"\n ]\n },\n \"_id\": {\n \"bsonType\": \"objectId\"\n }\n }\n}\nreact nativeintawait mongoUser.mongoClient('mongodb-atlas').db('main').collection('test').insertOne({\n geoLocation: {\n type: 'Point',\n coordinates: [1.0, 2.0],\n }\n });\ninsert not permittedrole \"readAndWriteAll\" in \"main.test\" does not have insert permission: could not validate document: geoLocation.coordinates.0: Invalid type. Expected: type: undefined, bsonType: [double,null], given: [integer int long number mixed] geoLocation.coordinates.1: Invalid type. Expected: type: undefined, bsonType: [double,null], given: [integer int long number mixed]\nreact",
"text": "For sure. I have created a simple collection that can repro this issue.Create a collection test with a validation schema: (Note that the type of geoLocation is GeoPoint)Then on the client side (react native), try to insert a new record that have int.The insert will failed with message insert not permitted. When go to Mongo Atlas log, I can see error with the messageNow the question is that, what can I do on the react client side so that the number can be understand as a double.",
"username": "Chongju_Mai"
},
{
"code": "const { Double } = require('bson');\nconst value = new Double(1);\n",
"text": "Hi, sorry to hear about this. I think there are a few things we can do on our side to make this better, but first I will try to troubleshoot to get this working for you.Right now we only support GeoPoints that are of type Double. It seems that you are running into a known “feature” (I put it in quotes because I agree it is odd) that the NodeJS driver will always make a numeric type the smallest type that can be represented (in this case an integer). You can force the driver to send specific types though and they are exported in the package:See the docs here: mongodb\nAnd the code here: https://github.com/mongodb/js-bson/blob/77fac2a369c9009e88604a9dce0c688778826973/src/double.tsTherefore, I think if you wrap your script with a “Double” it should solve your problems.On our end, I will file 2 tickets as action items from this:Best,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "intimport {Double} from 'bson';\ntry {\n await mongoUser.mongoClient('mongodb-atlas').db('main').collection('test').insertOne({\n geoLocation: {\n type: 'Point',\n coordinates: [new Double(1.0), new Double(2.0)],\n },\n });\n} catch (e) {\n logger.error(e);\n}\n",
"text": "Hi Tyler,Thank you for your suggestion. However, when I try the workaround, I same error message are raise and I still not able to add the int number.",
"username": "Chongju_Mai"
},
{
"code": "",
"text": "Hi, I did not realize you are running this within an app-services function. Can you try using this: https://www.mongodb.com/docs/atlas/app-services/functions/globals/#mongodb-method-BSON.DoubleIf that does not work can you post the exact error you are seeing (or is it the exact same thing?) and a link to your application (the URL).Best,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "import {BSON} from 'realm';\n\nconst test = new BSON.Double(1.0);\nTS2339: Property Double does not exist on type typeof BSONDecimal128",
"text": "Hi,I am actually using the Realm React Native SDK.As for your suggestion, that is very interesting. For some reason, Double is not exported vai the BSONThis will raise error TS2339: Property Double does not exist on type typeof BSON.However, they do export Decimal128. Not sure if that is the same as double.",
"username": "Chongju_Mai"
},
{
"code": "",
"text": "Okay, got it. Was a bit hard to decipher where this was running. Do you have a link to your application? And could you just modify the schema to allow integers / mixed? I can chat with the Javascript team but wrapping it in the original bson.Double() should work.",
"username": "Tyler_Kaye"
},
{
"code": "GeoPointGeoPointcoordinatesreact-js",
"text": "I don’t have a link to my application because it is being developed for mobile devices.I can not modify the schema because of the preset of GeoPoint. Once I select the type to be GeoPoint, the coordinates are auto-generated with the type double and cannot be modified.Should I open this as an issue on react-js repo on github?",
"username": "Chongju_Mai"
}
] | [Realm React, Typescript] bsonType: [double,null], given: [integer int long number mixed] | 2023-10-23T20:21:12.432Z | [Realm React, Typescript] bsonType: [double,null], given: [integer int long number mixed] | 280 |
null | [
"swift"
] | [
{
"code": "",
"text": "Hi team,I currently have a realm app in production using partitioned sync (Swift). I am planning on transitioning and releasing an update of the app on the App Store that will open the realm and sync using flexible sync.Ideally, I would like to still support sync for users who are on the current version of the app (running partition sync), until they update to the newer version of the app running flexible sync.If I have not specified in the codebase of the current version of the app to open and sync using flexible sync, how may you suggest I continue supporting syncing in the current version of the app when I terminate (partition) sync and re-enable (flexible) sync?I have heard whispers that this should be easy (https://www.mongodb.com/community/forums/t/partition-based-sync-transition-to-flexible-sync/115363) but I still can’t get my head around how this is possible.Thanks,Ben",
"username": "BenJ"
},
{
"code": "",
"text": "After further consideration, I’m thinking the only way I may be able to do this is by creating another app in App Services (which uses flexible sync), linking the existing atlas cluster to the new app and subscribing to the partition value, used for the current app.The only problem I can see in doing this comes when creating/authenticating users in the new flexible sync app. When a user is authenticated for the first time, app services stores the new user, with a new id, which will not be the same id as that given to the same user in the current version of the app.Fortunately, I have stored the partition value in the Keychain, so I am able to subscribe to the correct partition value. This however, does not fix the issue of realm storing users with a new id value, which obviously has complications when it comes time to issuing permissions.Further, when employing a second app to assist in solving this problem, I would like to sync the user’s current data stored in atlas back down to the newly created local realm on the user device.I’m still unable to find any help or guidance on this by the MongoDb team so unless anyone else has any thoughts/opinions to contribute (ANY HELP IS APPRECIATED), I’ll continue to post my progress here.",
"username": "BenJ"
},
{
"code": "",
"text": "@BenJ Hey there. We are working on a feature right now that will allow in-place migrations to flexible sync without needing to create a separate app and hopefully without changes to your client side code. We hope to have an initial version in the next quarter.To migrate yourself manually, yes, the procedure would be as you describe where you would create a new Device Sync app and attach it to the same Atlas cluster. You would then release a new version of the client app which has new the new app id and would sync via flexible sync APIs. Users would need to re-authenticate however which is why you’d want to abstract away your permissions and store it in some other format so that the same permissions and therefore the same data would be synced down to the client. We would recommend something like custom JWT with a 3rd party provider, custom user data, or a function auth.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "We hope to have an initial version in the next quarterHi @Ian_Ward, do you have any update regarding the release of this in-place migration to flexible sync?I’ve been an early adopter of the partition sync since its release and I’m using it in production. I have noticed the new “automatic client reset” feature - which is a godsend - but the doc only shows how to do this for flexible sync. I’m keen to migrate my app to flexible sync for that reason. I also think it’s the right direction for sync to move towards.Thanks in advance!",
"username": "Laekipia"
},
{
"code": "",
"text": "Soon… soon… hopefully in the next couple weeks. What SDK are you using?",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Soon… soon… hopefully in the next couple weeksHow exciting @Ian_Ward!I’m using the React Native SDK.",
"username": "Laekipia"
},
{
"code": "",
"text": "Looking forward to this for Swift and React Native. ",
"username": "Kurt_Libby1"
},
{
"code": "",
"text": "Hey all - support for a backwards-compatible migration from partition-based to flexible sync is now available! Note that there is a realm SDK component to this, so you’ll need to ensure that your end-users are on the latest version of the SDK in order to support connecting to the migrated app.See https://www.mongodb.com/docs/atlas/app-services/sync/migrate-sync-modes for additional requirements and instructions on how to perform the migration.",
"username": "Kiro_Morkos"
},
{
"code": "",
"text": "Thanks for update @Kiro_Morko, and for all the hard work that must have gone into this! If I read the docs correctly, this migration is not yet available for the React Native SDK? Any rough idea when this may become available?Keep up the great work!",
"username": "Laekipia"
},
{
"code": "",
"text": "Hi @Laekipia - this is now available in v11.10.0 of the react native SDK.",
"username": "Kiro_Morkos"
},
{
"code": "",
"text": "Hi @Ian_Ward,\nI have 2 questions regarding the in-place migration.What happens to the users who still use the current version of the app (i.e. the partition-sync). Will the app still work in partition-sync mode even though the app service has been migrated to flexible-sync configuration?Why I ask this is because, according to the Swift SDK I open a sync realm using flexibleConfiguration for the newer apps as mentioned. So I feel that apart from how I open the synced realm and specifying the subscriptions no other change would be required at client side of the code when I’m configuring flexible sync after doing the in-place migration.Your input would be really helpful regarding these two doubts that I have.Thanks,\nVijesh",
"username": "Vijesh_Krishna"
},
{
"code": "",
"text": "The migrator feature works without any SDK changes required - it simply changes the backend logic to flexible sync and interprets and partition-based sync queries as flexible. Of course, you should test this before rolling out to production.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "I would also note that older SDKs don’t automatically work with the migration. You’ll need to make sure your users have upgraded to a minimum version. For instance, if you’re using the Swift SDK, you’ll need a minimum version of 10.40.0. I have a production app where I have never made it mandatory to update and am waiting till most of my users are on an upgradable version before going ahead with the migration just to reduce the amount of customer support that will happen for those still running the older versions.",
"username": "Kurt_Libby1"
}
] | Migration from partition sync to flexible sync | 2022-12-18T00:04:03.791Z | Migration from partition sync to flexible sync | 2,694 |
[
"aggregation",
"queries",
"python"
] | [
{
"code": "",
"text": "Hello.,I have use case of storing large json files (200+GB) and chosen the mongoDB as a data management platform. I can be able to store using GridFS in to two collections (Files and Chucks) and I can be able to query all chucks for a given file as well, but the problem is actual data stored in binary with Binary.createFromBase64 format. But I want to apply aggregations and projection using mongoQuery. Will there be a way that I can convert “Binary.createFromBase64” to “string” when I’m projecting and apply aggregations?image1081×504 82.2 KB",
"username": "Venkat_Obillaneni"
},
{
"code": "",
"text": "GridFS is probably not the best way to store a 200GB JSON object. What’s the schema of these objects? It would likely be better to deconstruct the object into smaller pieces and store them in a regular collection so that they can be queried and updated. GridFS files are immutable and the contents are not queryable.",
"username": "Shane"
},
{
"code": "",
"text": "Make sense, here is the json schema, is there any tool in mongoDB that I can deconstruct these objects to smaller pieces or do I need to use any other programing languages for it like a python or so.image550×779 112 KB",
"username": "Venkat_Obillaneni"
},
{
"code": "",
"text": "I’m not aware of any tools to automatically suggest a schema for this kind of use case. My best suggestion would be to look into MongoDB Data Modeling resources, such as:For example, with a 200GB document some of the embedded lists and/or objects will need to be split out into other collection(s) using references: https://www.mongodb.com/docs/manual/reference/database-references/Getting to the appropriate schema will also depend on the read/write patterns of your app. Apologies if this is not as helpful as you expected. Perhaps someone else can offer some more specific advice?",
"username": "Shane"
},
{
"code": "",
"text": "Thank You for your comments.Unfortunately, I don’t have control on constructing this single large (200+GB) json document as we are consumer of these files and find way to extract the data out of it. Probably having this computation moving to spark would make sense and store extracted objects in SQL format.But I would like to hear from community to see if they propose any other ideas for this usecase.",
"username": "Venkat_Obillaneni"
},
{
"code": "",
"text": "I really do not understand why you could manipulate a single large JSON document into an SQL format while you cannot manipulate simply mongoimport that same JSON document.If you can do the SQL thing then you have control on what you do after you received the single large JSON document.Perhaps you would need something likejq 1.7 API documentation with instant search, offline support, keyboard shortcuts, mobile version, and more.But what ever you do, I am pretty sure that making small JSON documents from a big JSON file is much simpler than making SQL out of your big JSON file.",
"username": "steevej"
},
{
"code": "",
"text": "As Steevej said, why not just either pre-process the file into small files and then process those or process the large file but with JSONPath or something. I’ve not tested any of the JSONPath implementations in regards to parsing large files but you should be able to do that.\nYou could also always pre-process the data into a series of collections and then have another script to combine them as needed into final-form documents.So extract all the Provider groups and in_network and provider_reference data and then depending on how you want the document to look like, re-process them in Mongo, with indexes in place this could be pretty fast.\nThis would also allow you to discard any “fluff” that you don’t need that’s taking up space in that monster file.Another thought was to convert the JSON to XML and then use an XPath processor if some of those are better at handling large file sizes than the JSONPath ones.\nYou could also then look at a streaming SAX Parser or something.",
"username": "John_Sewell"
},
{
"code": "",
"text": "While reading John_Sewell reply which mentioned to extract in_network and provider_reference, it made me think that I might have skipped to quick the screenshot about the document.And I did.I just notice the presence of the field _id as an ObjectId. This is very mongodb specific. So I suspect that the 200GB json file you are receiving is a json file generated with mongoexport from another instance of mongod. This means you should be able to mongoimport it directly. You might need to play around with the option –jsonArray.So try mongoimport and report any issue and success.",
"username": "steevej"
},
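For reference, the suggested command would look roughly like the line below; whether it succeeds depends on each document fitting within the 16MB BSON limit. The URI, database, collection and file names are placeholders:

mongoimport --uri "mongodb://localhost:27017/rates" --collection raw_docs --file big.json --jsonArray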
{
"code": "",
"text": "Thank You for your input I will check out jq.",
"username": "Venkat_Obillaneni"
},
{
"code": "",
"text": "Thank You John, I prefer to have as multiple smaller json documents and have indexes on those to query faster for required data points extraction.",
"username": "Venkat_Obillaneni"
},
{
"code": "",
"text": "I have used mongofiles to import this large document and it’s got stored in binary format, which I have shared the screen shot in the original post, I took smaller document < 16MB for schema reference for the community. based on documentation jsonArray works for only smaller documents.",
"username": "Venkat_Obillaneni"
},
{
"code": "",
"text": "I have used mongofilesand it gotgot stored in binary formatBecause mongofiles is quite different from mongoimport.Sotry mongoimport and report any issue and success.",
"username": "steevej"
},
{
"code": "",
"text": "I have tried with smaller file (150MB) to test mongoimport and its failed with “document is too large” message. Will there be any other mongo tools that can split to smaller files while uploading to mongodb or do we need to use programing tools?image939×178 31.7 KB",
"username": "Venkat_Obillaneni"
},
{
"code": "",
"text": "Share the first few lines of the json document.",
"username": "steevej"
},
{
"code": "",
"text": "here we go…\nimage1887×239 12.8 KB",
"username": "Venkat_Obillaneni"
},
{
"code": "",
"text": "Is the closing curly brace at line 14013 closes the opening curly brace of line?Both jq and jsonpath should be able to extract the in_network massive array into separate files.",
"username": "steevej"
},
{
"code": "",
"text": "Yes, it’s closes of the opening brace of document. I will explore jq into seperate files.",
"username": "Venkat_Obillaneni"
},
{
"code": "",
"text": "Quick Update… I have used python to pre-process this large json document to smaller chucks and able to process to MongoDB.",
"username": "Venkat_Obillaneni"
}
] | Store and Query 200+GB json document from MongoDB | 2023-10-19T18:46:11.513Z | Store and Query 200+GB json document from MongoDB | 381 |
|
null | [
"spark-connector"
] | [
{
"code": "",
"text": "What are the possibilities of only inserting documents with _id that isn’t in the collection already, otherwise reject the document? , I have tried using spark-3.3.2, spark-mongo-connector 10X and instance version 5X with no success.",
"username": "Andani_Alex"
},
{
"code": "",
"text": "_id in mongodb has to be unique.. So it should work out of the box.",
"username": "Kobe_W"
},
{
"code": "",
"text": "So it sIt doesn’t, it throws a duplicate key error",
"username": "Andani_Alex"
},
{
"code": "/* insert documents in a temporary non-existing collection */\ndb.temp_collection.insertMany( documents )\n/* use $merge to keep existing document and insert new documents */\ndb.temp_collection.aggregate( [ { \"$merge\" : {\n \"into\" : \"permanent_collection\" ,\n \"on\" : \"_id\" ,\n \"whenMatched\" : \"keepExisting\" ,\n \"whenNotMatched\": \"insert\"\n} } ] )\ndb.aggregate( [\n { \"$documents\" : documents } ,\n { \"$merge\" : {\n \"into\" : \"permanent_collection\" ,\n \"on\" : \"_id\" ,\n \"whenMatched\" : \"keepExisting\" ,\n \"whenNotMatched\": \"insert\"\n } }\n] )\n\n",
"text": "If insertMany() throws a duplicate key error and does not insert all the non-existing documents what you may try:Alternatively, you may also use the $documents stage to avoid a temporary collection.",
"username": "steevej"
},
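Side note: another way to keep existing documents and insert only the new ones is an unordered insert, where duplicate-key errors are reported but do not stop the remaining inserts; a minimal mongosh sketch with illustrative names:

try {
  db.permanent_collection.insertMany(documents, { ordered: false });
} catch (e) {
  // duplicate _id values are reported here, but all non-duplicate documents are still inserted
  print('some documents were duplicates: ' + e.message);
}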
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Writing data do a collection | 2023-10-26T10:37:22.719Z | Writing data do a collection | 177 |
null | [
"queries"
] | [
{
"code": "{\n \"company\": {\n \"name\": \"Google\"\n },\n \"countries\": [\n {\n \"name\": \"India\",\n \"cities\": [\n {\n \"name\": \"Bengaluru\"\n ]\n }\n ]\n}\ndb.queries.update(\n {\n \"_id\": ObjectId(\"65320deff43afadac2cce910\")\n },\n {\n $set: {\n \"countries.0.cities.0.name\": \"Bengaluru\",\n \"countries.0.name\": \"India\",\n \"company.name\": \"Google\"\n }\n },\n {upsert:true}\n)\n{\n _id: ObjectId(\"65320deff43afadac2cce910\"),\n company: { name: 'Google'},\n countries: [\n {\n name: 'India'\n cities: [\n { name: 'Bengaluru'}\n ]\n }\n ]\n}\n{\n '_id': ObjectId(\"65320deff43afadac2cce910\"),\n 'company': { name: 'Google'},\n 'countries': {\n '0' : {\n 'name': 'India'\n \"cities': {\n '0': {\n 'name': 'Bengaluru'\n }\n }\n }\n }\n}\ncountriescountries.cities",
"text": "I am using $set to perform an update/upsertThis is my update query using $set to set array of documents. My understanding is dot notation with index number as a separator should treat these fields as an arrayMy expectation in DBActual entry in DBExpected countries and countries.cities to be set as arrays. However, they were interpreted as object and set accordingly in db.Kindly help understanding this.",
"username": "Raveendra_Bhat"
},
{
"code": "c.insertOne( { _id : 2 , \"countries\" : []})\nc.updateOne( { _id : 2 } , { $set : { \"countries.4.name\" : \"369\"}})\nc.findOne( { _id : 2 } )\n/* result, notice that the non-existing entries becomes null */\n{\n _id: 2,\n countries: [\n null,\n null,\n null,\n null,\n {\n name: '369'\n }\n ]\n}\n\n",
"text": "As far as I know, for the dot notation to work with array indexes, the array has to exists. Yours is an edge case where it does not work because the array does not exist before the update.For example:",
"username": "steevej"
}
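For completeness, one way to get real arrays from an upsert like the one above is to $set the array values as a whole rather than addressing indexes that do not exist yet; a sketch using the same values from the question:

db.queries.updateOne(
  { _id: ObjectId('65320deff43afadac2cce910') },
  { $set: {
      'company.name': 'Google',
      'countries': [ { name: 'India', cities: [ { name: 'Bengaluru' } ] } ]
  } },
  { upsert: true }
)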
] | $Set an array of documents using dot notation resulting in ambiguous behaviour | 2023-10-26T09:09:06.118Z | $Set an array of documents using dot notation resulting in ambiguous behaviour | 161 |
null | [
"sharding",
"database-tools",
"backup"
] | [
{
"code": "",
"text": "I want to perform oplog backups without using mongodump. is it possible?\nIf yes how can this backed up oplog useful in performing restore for sharded cluster ?",
"username": "rejith_mohan"
},
{
"code": "",
"text": "Hi @rejith_mohan,Thank you for reaching out with your question. So that I can provide the best information, I would like to know a little bit more about your environment first.Are you using the MongoDB community version currently? If yes, is this what you plan to continue using in the future?Is your environment sharded today or are you trying to prepare and understand more for sharding in the future?Do you have a specific Recovery Point Objective (RPO) requirement that you are trying to satisfy?",
"username": "Evin_Roesle"
},
{
"code": "",
"text": "I want to perform oplog backups without using mongodump.Why?If you are using self-managed mongo servers, you can use native disk snapshot utility if provided.",
"username": "Kobe_W"
},
{
"code": "",
"text": "want to perform oplog backups without using mongodump. is it possible?\nIf yes how can this backed up oplog useful in performing restore for sharded cluster ?Yes, it’s possible to perform oplog backups without mongodump. Oplog backups are useful for point-in-time recovery and migrating data to a new cluster.",
"username": "Lorenzo_G_Vick"
}
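For reference, the oplog is just a capped collection that can be read with a driver or mongosh instead of mongodump; a minimal sketch (the starting timestamp is illustrative, and on a sharded cluster this has to be run against each shard's replica set, not through mongos):

// mongosh: dump oplog entries newer than a known point in time
const oplog = db.getSiblingDB('local').getCollection('oplog.rs');
const since = Timestamp({ t: 1680000000, i: 0 });   // last backup point (illustrative)
oplog.find({ ts: { $gt: since } }).forEach(entry => printjson(entry));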
] | Oplog/incremental backup for point in time restore | 2023-04-07T08:40:06.597Z | Oplog/incremental backup for point in time restore | 968 |
null | [] | [
{
"code": "",
"text": "Hi,I have a trigger in mongodb atlas which will call an API with subcription key.\nconst response = await context.http.post({ url: ‘xxxx’,\nheaders: {‘Ocp-Apim-Subscription-Key’: [‘1234’]}, encodeBodyAsJSON: true}\nI dont want to use the subcription as above. Is there any way we can save this key somewhere and use it here.please help me in this… thanks in advance",
"username": "Rajalakshmi_R"
},
{
"code": "",
"text": "Hi @Rajalakshmi_R and welcome to MongoDB community forums!!As mentioned in the documentation, the Values and Secrets in Atlas app services, provides you the privilege to use the environment variables or secrets which could be used in the Atlas Triggers.\nYou can define the subscription key in your case as secrets or environment variable and the triggers can use it from the backend.If this is not what you are looking for, could you please help me understand the use case you are trying to achieve?Warm regards\nAasawari",
"username": "Aasawari"
}
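For reference, a minimal sketch of what that looks like in an Atlas Function/Trigger, assuming a Value (optionally linked to a Secret) named apiSubscriptionKey has been defined in App Services; the URL and body are placeholders:

exports = async function () {
  // resolved from App Services Values/Secrets at runtime, so the key is not hard-coded here
  const key = context.values.get('apiSubscriptionKey');
  return context.http.post({
    url: 'https://example.com/endpoint',
    headers: { 'Ocp-Apim-Subscription-Key': [key] },
    body: { ping: true },
    encodeBodyAsJSON: true
  });
};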
] | How to use the subcription key of an endpoint in atlas trigger | 2023-10-23T09:26:16.281Z | How to use the subcription key of an endpoint in atlas trigger | 199 |
null | [
"aggregation"
] | [
{
"code": " const cursor = collection.aggregate([ \n {\n $vectorSearch: {\n \"index\": \"default\",\n \"queryVector\": embedding,\n \"path\": \"promptEmbedding\",\n \"numCandidates\": 50,\n \"limit\": 10\n },\n }\n ];\n",
"text": "Greetings all, I was following the guide on using $vectorSearch with this code snippet below. I would like to ask where and how should we add the pre-filtering statements e.g. “genre”:“horror” in the aggregation pipeline? Thank you!",
"username": "anther"
},
{
"code": "db.embedded_movies.aggregate([\n {\n \"$vectorSearch\": {\n \"index\": \"default\",\n \"path\": \"plot_embedding\",\n \"filter\": {\n \"$and\": [{\n \"year\": {\n \"$gt\": 1955\n },\n \"year\": {\n \"$lt\": 1975\n }\n }]\n },\n \"queryVector\": [-0.0072121937,-0.030757688,0.014948666,-0.018497631,-0.019035352,0.028149737,-0.0019593239,-0.02012424,-0.025649332,-0.007985169,0.007830574,0.023726976,-0.011507247,-0.022839734,0.00027999343,-0.010431803,0.03823202,-0.025756875,-0.02074262,-0.0042883316,-0.010841816,0.010552791,0.0015266258,-0.01791958,0.018430416,-0.013980767,0.017247427,-0.010525905,0.0126230195,0.009255537,0.017153326,0.008260751,-0.0036060968,-0.019210111,-0.0133287795,-0.011890373,-0.0030599732,-0.0002904958,-0.001310697,-0.020715732,0.020890493,0.012428096,0.0015837587,-0.006644225,-0.028499257,-0.005098275,-0.0182691,0.005760345,-0.0040665213,0.00075491105,0.007844017,0.00040791242,0.0006780336,0.0027037326,-0.0041370974,-0.022275126,0.004775642,-0.0045235846,-0.003659869,-0.0020567859,0.021602973,0.01010917,-0.011419867,0.0043689897,-0.0017946466,0.000101610516,-0.014061426,-0.002626435,-0.00035540052,0.0062174085,0.020809835,0.0035220778,-0.0071046497,-0.005041142,0.018067453,0.012569248,-0.021683631,0.020245226,0.017247427,0.017032338,0.01037131,-0.036296222,-0.026334926,0.041135717,0.009625221,0.032155763,-0.025057837,0.027827105,-0.03323121,0.0055721425,0.005716655,0.01791958,0.012078577,-0.011117399,-0.0016005626,-0.0033254733,-0.007702865,0.034306653,0.0063854465,-0.009524398,0.006069535,0.012696956,-0.0042883316,-0.013167463,-0.0024667988,-0.02356566,0.00052721944,-0.008858967,0.039630096,-0.0064593833,-0.0016728189,-0.0020366213,0.00622413,-0.03739855,0.0028616884,-0.0102301575,0.017717933,-0.0041068504,-0.0060896995,-0.01876649,0.0069903834,0.025595559,0.029762903,-0.006388807,0.017247427,0.0022080203,-0.029117636,-0.029870447,-0.0049739266,-0.011809715,0.023243025,0.009510955,0.030004878,0.0015837587,-0.018524516,0.007931396,-0.03589293,0.013590919,-0.026361812,0.002922182,0.025743432,0.014894894,0.0012989342,-0.0016232478,0.006251016,0.029789789,-0.004664737,0.017812036,-0.013436324,-0.0102301575,0.016884465,-0.017220542,0.010156221,0.00014503786,0.03933435,0.018658947,0.016897907,0.0076961434,-0.029843561,-0.02021834,0.015056211,0.01002179,-0.0031994449,-0.03796316,-0.008133043,0.03707592,0.032128878,9.483648E-05,0.0017627194,-0.0007544909,0.006647586,0.020903936,-0.032559056,0.025272924,-0.012804501,0.019210111,0.0022987607,0.013301893,-0.0047218697,-0.022853177,-0.02162986,0.006788738,0.0092286505,0.024184039,-0.015419173,-0.006479548,-0.00180977,0.0060728956,-0.0030919004,0.0022449887,-0.004046357,0.012663349,-0.028579915,0.0047722813,-0.6775295,-0.018779935,-0.018484188,-0.017449073,-0.01805401,0.026630674,0.008018777,0.013436324,-0.0034683058,0.00070912065,-0.005027699,0.009658828,-0.0031792803,-0.010478854,0.0034951917,-0.011594627,0.02441257,-0.042533796,-0.012414653,0.006261098,-0.012266779,0.026630674,-0.017852364,-0.02184495,0.02176429,0.019263884,0.00984031,-0.012609577,-0.01907568,-0.020231783,-0.002886894,0.02706085,-0.0042345594,0.02265153,0.05769755,0.021522315,-0.014195856,0.011144285,0.0038077426,0.024573887,-0.03578539,-0.004476534,0.016521502,-0.019815048,0.00071836275,0.008173372,0.013436324,0.021885278,-0.0147604635,-0.021777734,0.0052595916,-0.011668564,-0.02356566,-0.0049974523,0.03473683,-0.0255149,0.012831387,-0.009658828,-0.0031036632,-0.001386314,-0.01385978,0.008294359,-0.02512505,-0.0012308789,0.008711093,0.03610802,0.01622
5755,0.014034539,0.0032431346,-0.017852364,0.017906137,0.005787231,-0.03514012,0.017207097,-0.0019542826,-0.010189828,0.010808208,-0.017408744,-0.0074944976,0.011009854,0.00887241,0.009652107,-0.0062409337,0.009766373,0.009759651,-0.0020819916,-0.02599885,0.0040665213,0.016064439,-0.019035352,-0.013604362,0.020231783,-0.025272924,-0.01196431,-0.01509654,0.0010233518,-0.00869765,-0.01064017,0.005249509,-0.036807057,0.00054570363,0.0021777733,-0.009302587,-0.00039362916,0.011386259,0.013382551,0.03046194,0.0032380936,0.037801843,-0.036807057,-0.006244295,0.002392862,-0.01346321,-0.008953068,-0.0025861058,-0.022853177,0.018242212,-0.0031624765,0.009880639,-0.0017341529,0.0072054723,0.014693249,0.026630674,0.008435511,-0.012562525,0.011581183,-0.0028768117,-0.01059312,-0.027746446,0.0077969665,2.468059E-05,-0.011151006,0.0152712995,-0.01761039,0.023256468,0.0076625356,0.0026163526,-0.028795004,0.0025877862,-0.017583502,-0.016588718,0.017556617,0.00075491105,0.0075885993,-0.011722336,-0.010620005,-0.017274313,-0.008025498,-0.036376882,0.009457182,-0.007265966,-0.0048663826,-0.00494368,0.003616179,0.0067820163,0.0033775652,-0.016037554,0.0043320213,-0.007978448,-0.012925488,0.029413383,-0.00016583256,-0.018040568,0.004180787,-0.011453475,-0.013886666,-0.0072121937,0.006486269,0.008005333,-0.01412864,-0.00061796,-0.025635887,-0.006630782,0.02074262,-0.007192029,0.03906549,-0.0030885397,-0.00088976155,-0.022033151,-0.008758144,0.00049361185,0.009342916,-0.014988995,-0.008704372,0.014276514,-0.012300386,-0.0020063745,0.030892119,-0.010532626,0.019653732,0.0028583275,0.006163636,0.0071517,-0.017489402,-0.008448954,-0.004352186,0.013201071,0.01090231,0.0004110631,0.03306989,0.006916447,0.002922182,0.023888292,-0.009067334,0.012434817,-0.051298663,0.016279528,-0.02741037,0.026227381,-0.005182294,0.008153207,-0.026603786,0.0045571923,0.018067453,0.038016934,0.028042194,0.0077431942,0.015499831,-0.020298999,0.0013123773,-0.021334114,-0.026281154,-0.0012720482,-0.0045571923,0.006086339,0.0028952959,-0.003041489,0.007931396,-0.0005406625,-0.023444671,-0.0038715971,0.0070374343,-0.0019979726,0.024089938,0.0020903936,-0.024210924,0.007319738,-0.005995598,0.032478396,0.020998036,0.01654839,0.033876475,0.025098165,0.021132467,-0.017099554,-0.013516982,0.01306664,0.010525905,-0.02335057,-0.013543868,-0.03583916,0.021172797,-0.033607613,-0.0036094578,-0.007911232,-0.0054578763,0.013227956,0.00993441,0.025810648,0.02255743,-0.013678298,0.012273501,0.00040497174,0.0019072321,0.0008170851,0.01540573,0.015580489,0.005239427,0.003989224,-0.013254843,0.024708318,0.0046680975,-0.034360424,-0.0041942303,0.0077095865,-0.0053503322,-0.024399128,-0.02644247,0.0062476555,0.021885278,-0.0010922474,-0.014209299,0.018295985,0.0135640325,0.0033842868,0.0017812036,0.004735313,0.006486269,-0.008072549,0.009551284,0.007938119,0.0101696635,0.021750847,0.014034539,0.0071449787,-0.008448954,0.010841816,-0.008274195,-0.014531932,-0.0024785616,0.0018601815,0.009564727,-0.011130841,-0.020581303,0.012985982,0.019976366,-0.030542599,-0.021818062,-0.018551402,-0.0092286505,-0.024385685,0.0036901159,-0.0061367503,-0.00034048714,-0.007057599,-0.014558818,-0.022221355,0.023377456,0.026119838,-0.0008813597,0.004520224,0.0027843907,-0.022382671,0.0018248934,0.13313992,0.013685021,-6.170148E-05,0.015876237,0.005417547,-0.008314524,-0.019169783,-0.016494617,0.016844137,-0.0046412116,0.024305027,-0.027827105,0.023162367,0.0143034,-0.0029893972,-0.014626034,-0.018215327,0.0073264595,0.024331912,-0.0070777633,-0.0004259765,-0.00042345593,-0.00342629
62,-0.00423792,-0.016185427,-0.017946465,-5.9706024E-05,0.016467731,-0.014773907,-0.022664975,-0.009322752,-0.027585128,0.0020651878,-0.010532626,-0.010546069,0.009174879,-0.0011098915,0.026469355,0.022006266,-0.013039754,0.023458114,0.005481402,-0.00050705485,-0.012092019,0.0055990284,-0.007057599,-0.012266779,0.03253217,0.007071042,-0.01699201,0.06597847,-0.013436324,0.0070038266,-0.009981461,0.024829306,0.0067383265,0.0056292755,0.0018534599,-0.020057024,0.011735778,0.0025491375,-0.022194467,0.0012468424,-0.0051621296,-0.018457301,-0.008509448,-0.011594627,-0.0152712995,-0.001858501,-0.014921781,-0.0056696045,-0.0066979975,-0.02008391,0.0040093884,0.032935463,-0.0032935461,-0.0074205613,-0.014088311,-0.0014762144,-0.011218221,0.011984475,-0.01898158,-0.027208723,-0.008072549,0.010942639,0.0183632,0.04148524,-0.0009922648,-0.017086111,0.013483374,0.019841935,0.024264697,0.011601348,-0.0077431942,-0.020258669,-0.005770427,0.013429603,-0.011554297,-0.012831387,-1.4752561E-06,0.011594627,-0.012683514,-0.012824666,0.02180462,0.011023297,0.012468425,-0.0029860365,-0.0076289284,-0.021293784,0.005068028,0.017812036,0.0007708746,-0.008684208,0.0048126103,-0.0076558143,0.019169783,-0.0076558143,0.028579915,-0.011574462,-0.03196756,-0.0011334168,-0.030219967,0.023901735,0.014021097,-0.016776921,0.0030045207,-0.0019257163,-0.023579102,0.004197591,0.00012497831,-0.016803807,0.01915634,-0.010472132,-0.042130504,-0.038016934,-0.007702865,-0.0025861058,-0.010512462,-0.013537147,-0.013382551,-0.0036397045,0.0053032814,0.0046277684,-0.021952493,-0.016588718,-0.031886905,0.0058208387,-0.00043689896,-0.01337583,0.018349757,0.015244413,0.00900684,-0.017677605,0.01523097,0.010337702,-0.024426013,-0.021965936,-0.014182413,0.008596827,0.029628472,0.058611676,-0.015446059,0.021374442,-0.0095042335,0.00091748784,0.021132467,-0.011285436,-0.0035724894,-0.027907763,0.027302826,0.004184148,0.026281154,-0.0026802071,-0.015163755,0.005699851,0.023122039,0.0075415485,-0.020057024,-0.0109359175,-0.018309427,0.017529732,0.0020685487,-0.012441538,0.0023239665,0.012038247,-0.017543174,0.029332725,0.01399421,-0.0092488155,-1.0607403E-05,0.019371428,-0.0315105,0.023471557,-0.009430297,0.00022097006,0.013301893,-0.020110795,-0.0072928523,0.007649093,0.011547576,0.026805433,-0.01461259,-0.018968137,-0.0104250815,0.0005646079,0.031456728,-0.0020147765,-0.024224367,0.002431511,-0.019371428,-0.025017507,-0.02365976,-0.004318578,-0.04457714,0.0029826758,-0.020473758,-0.016118212,-0.00068181445,-0.03446797,-0.020715732,-0.04256068,-0.013792564,0.013873223,0.011413146,-0.002419748,0.0123877665,-0.0011115718,0.007978448,0.021441657,0.004405958,0.0042480025,0.022920392,-0.0067920987,0.011083791,-0.017529732,-0.03659197,-0.0066005355,-0.023888292,-0.016521502,0.009591613,-0.0008590946,0.013846337,-0.021092137,-0.012562525,-0.0028415236,0.02882189,5.3378342E-05,-0.006943333,-0.012226449,-0.035570297,-0.024547001,0.022355784,-0.018416973,0.014209299,0.010035234,0.0046916227,0.009672271,-0.00067635323,-0.024815861,0.0007049197,0.0017055863,-0.0051251613,0.0019391594,0.027665788,-0.007306295,-0.013369109,0.006308149,0.009699157,0.000940173,0.024842748,0.017220542,-0.0053032814,-0.008395182,0.011359373,0.013214514,0.0062711807,0.004110211,-0.019277327,-0.01412864,-0.009322752,0.007124814,0.0035119955,-0.024036165,-0.012831387,-0.006734966,-0.0019694061,-0.025367027,-0.006630782,0.016010666,0.0018534599,-0.0030717358,-0.017717933,0.008489283,0.010875423,-0.0028700903,0.0121323485,0.004930237,0.009947853,-0.02992422,0.021777734,0.00015081417,
0.010344423,0.0017543174,0.006166997,-0.0015467904,0.010089005,0.0111711705,-0.010740994,-0.016965123,-0.006771934,0.014464716,0.007192029,-0.0006175399,-0.010855259,-0.003787578,0.015647706,0.01002179,-0.015378844,-0.01598378,0.015741806,-0.0039119264,-0.008422068,0.03253217,-0.019210111,-0.014975552,0.0025810648,0.0035556855,8.449164E-05,-0.034172222,-0.006395529,-0.0036867552,0.020769505,0.009766373,-0.017543174,-0.013557311,0.0031994449,-0.0014577302,0.01832287,-0.009907524,-0.024654545,0.0049940916,0.016965123,0.004476534,0.022261683,-0.009369803,0.0015308268,-0.010102449,-0.001209874,-0.023807634,-0.008348132,-0.020312442,0.030892119,-0.0058309208,-0.005128522,-0.02437224,0.01478735,-0.011016576,-0.010290652,-0.00503106,0.016884465,0.02132067,-0.014236185,-0.004903351,0.01902191,0.0028179984,0.019505858,-0.021535758,-0.0038514326,0.0112115,0.0038682362,0.003217929,-0.0012770894,-0.013685021,-0.008381739,0.0025256122,0.029386498,0.018645504,0.005323446,-0.0032784226,-0.0043253,0.0007998612,0.019949479,0.025770318,-0.0030868594,0.018968137,-0.010236879,-0.005370497,-0.024748646,-0.014047982,0.005760345,-0.03610802,0.0042009517,-0.0034817487,0.003385967,0.006560206,-0.006294706,-0.02400928,-0.006140111,-0.0017980073,-0.012481867,-0.0033960494,-0.00097210024,0.014061426,-0.017596947,-0.023202697,0.0028499255,-0.016010666,-0.028149737,0.0024752007,-0.018941252,0.0056158323,-0.012912045,0.0054410724,0.003054932,0.019559631,-0.0048932685,-0.007823853,-0.017099554,0.025662774,0.02572999,0.004379072,-0.010223436,0.0031036632,-0.011755943,-0.025622444,-0.030623257,0.019895706,-0.02052753,-0.006637504,-0.001231719,-0.013980767,-0.02706085,-0.012071854,-0.0041370974,-0.008885853,0.0001885177,0.2460615,-0.009389968,-0.010714107,0.0326666,0.0009561366,0.022624645,0.009793258,0.019452088,-0.004493338,-0.007097928,-0.0022298652,0.012401209,-0.0036229007,-0.00023819396,-0.017502844,-0.014209299,-0.030542599,-0.004863022,0.005128522,-0.03081146,0.02118624,-0.0042177555,0.0032448152,-0.019936036,0.015311629,0.0070508774,-0.02021834,0.0016148458,0.04317906,0.01385978,0.004211034,-0.02534014,-0.00030309867,-0.011930703,-0.00207527,-0.021643303,0.01575525,-0.0042883316,0.0069231684,0.017946465,0.03081146,0.0043857936,3.646951E-05,-0.0214551,0.0089933975,0.022785962,-0.008106156,0.00082884775,-0.0006717322,-0.0025457768,-0.017059224,-0.035113234,0.054982055,0.021266898,-0.0071046497,-0.012636462,0.016965123,0.01902191,-0.0061737187,0.00076247274,0.0002789432,0.030112421,-0.0026768465,0.0015207445,-0.004926876,0.0067551304,-0.022624645,0.0005003333,0.0035523248,-0.0041337362,0.011634956,-0.0183632,-0.02820351,-0.0061737187,-0.022355784,-0.03796316,0.041888528,0.019626847,0.02211381,0.001474534,0.0037640526,0.0085228905,0.013140577,0.012616298,-0.010599841,-0.022920392,0.011278715,-0.011493804,-0.0044966987,-0.028741231,0.015782135,-0.011500525,-0.00027621258,-0.0046378504,-0.003280103,0.026993636,0.0109359175,0.027168395,0.014370616,-0.011890373,-0.020648519,-0.03465617,0.001964365,0.034064677,-0.02162986,-0.01081493,0.014397502,0.008038941,0.029789789,-0.012044969,0.0038379894,-0.011245107,0.0048193317,-0.0048563,0.0142899575,0.009779816,0.0058510853,-0.026845763,0.013281729,-0.0005818318,0.009685714,-0.020231783,-0.004197591,0.015593933,-0.016319858,-0.019492416,-0.008314524,0.014693249,0.013617805,-0.02917141,-0.0052058194,-0.0061838008,0.0072726877,-0.010149499,-0.019035352,0.0070374343,-0.0023138842,0.0026583623,-0.00034111727,0.0019038713,0.025945077,-0.014693249,0.009820145,-0.0037506097,0.00041127318
,-0.024909964,0.008603549,-0.0041707046,0.019398315,-0.024022723,-0.013409438,-0.027880875,0.0023558936,-0.024237812,0.034172222,-0.006251016,-0.048152987,-0.01523097,-0.002308843,-0.013691742,-0.02688609,0.007810409,0.011513968,-0.006647586,-0.011735778,0.0017408744,-0.17422187,0.01301959,0.018860593,-0.00068013405,0.008791751,-0.031618044,0.017946465,0.011735778,-0.03129541,0.0033607613,0.0072861305,0.008227143,-0.018443858,-0.014007653,0.009961297,0.006284624,-0.024815861,0.012676792,0.014222742,0.0036632298,0.0028364826,-0.012320551,-0.0050478633,0.011729057,0.023135481,0.025945077,0.005676326,-0.007192029,0.0015308268,-0.019492416,-0.008932903,-0.021737404,0.012925488,0.008092714,0.03245151,-0.009457182,-0.018524516,0.0025188907,-0.008569942,0.0022769158,-0.004617686,0.01315402,0.024291582,-0.001880346,0.0014274834,0.04277577,0.010216715,-0.018699275,0.018645504,0.008059106,0.02997799,-0.021576088,0.004846218,0.015741806,0.0023542133,0.03142984,0.01372535,0.01598378,0.001151901,-0.012246614,-0.004184148,-0.023605987,0.008657321,-0.025770318,-0.019048795,-0.023054823,0.005535174,-0.018161554,-0.019761277,0.01385978,-0.016655933,0.01416897,0.015311629,0.008919461,0.0077499156,0.023888292,0.015257857,0.009087498,0.0017845642,0.0013762318,-0.023713533,0.027464142,-0.014021097,-0.024681432,-0.006741687,0.0016450927,-0.005804035,-0.002821359,0.0056796866,-0.023189254,0.00723908,-0.013483374,-0.018390086,-0.018847149,0.0061905226,0.033365637,0.008489283,0.015257857,0.019694062,-0.03019308,-0.012253336,0.0021744126,-0.00754827,0.01929077,0.025044393,0.017677605,0.02503095,0.028579915,0.01774482,0.0029961187,-0.019895706,0.001165344,-0.0075281053,0.02105181,-0.009221929,0.023404341,-0.0028079161,-0.0037237236,0.02847237,0.0009821824,0.04629785,-0.017771706,-0.038904175,0.00869765,0.0016249281,0.020984594,-0.10867358,-0.008395182,-0.0010830053,0.008059106,-0.020097353,0.0020383017,0.008038941,-0.009047169,-0.007252523,0.0286068,-0.0037774958,-0.024923407,0.005279756,-0.009524398,0.011527412,-0.0020198175,0.019452088,0.014384058,-0.025609002,0.006025845,-0.030542599,0.016790364,0.019223554,-0.012434817,0.003901844,-0.007817131,-0.027612016,0.008314524,0.007938119,-0.0004868903,0.014747021,-0.009457182,0.014706692,-0.018847149,0.015311629,0.015647706,-0.0031288688,-0.0032717013,0.008879132,-0.034629285,0.0090337265,0.004382433,0.011305601,-0.028391711,0.0053268066,0.0003566608,-0.019169783,0.011507247,0.023592545,-0.006603896,-0.009685714,0.010714107,-0.027907763,0.006412333,0.0045706355,-0.029816674,0.0047958065,0.0018500991,-0.011500525,0.0030179636,0.015997224,-0.022140697,-0.0001849469,-0.014263071,0.011540854,-0.006607257,-0.01871272,-0.0038480717,-0.0024903242,-0.031214751,-0.0050478633,0.021481987,-0.012912045,0.028122852,-0.018605174,-0.00723908,0.0023609349,-0.0073331813,0.014935223,-0.005699851,-0.0068895607,-0.015244413,0.029789789,-0.02458733,0.0004453009,0.0015577129,0.0048596608,0.009376524,-0.011984475,-0.014518489,0.015647706,0.0068794787,0.0065534846,0.003107024,-0.01973439,0.027383484,-0.015459502,-0.006318231,0.020863606,-0.0021357639,-0.0076692575,-0.021266898,-0.046862457,0.025326697,0.016521502,-0.0036833945,0.0029860365,-0.016306413,0.026496243,-0.016803807,0.008724537,-0.0025407355,-0.027302826,0.017798591,0.0060796174,-0.014007653,-0.01650806,-0.0095042335,0.009242094,-0.009342916,0.010330981,0.009544563,0.018591732,0.0036867552,0.0194252,0.0092488155,-0.007823853,0.0015501512,-0.012031525,0.010203271,-0.0074272826,-0.020258669,0.025662774,-0.03032751,0.014854565,0.0108350
94,0.0007708746,0.0009989863,-0.014007653,-0.012871716,0.023444671,0.03323121,-0.034575514,-0.024291582,0.011634956,-0.025958521,-0.01973439,0.0029742739,0.0067148013,0.0022399474,0.011802994,0.011151006,-0.0116416775,0.030166194,0.013039754,-0.022517102,-0.011466918,-0.0033053088,0.006156915,0.004829414,0.006029206,-0.016534945,0.015325071,-0.0109359175,0.032854803,-0.001010749,0.0021155993,-0.011702171,-0.009766373,0.00679882,0.0040900465,-0.019438643,-0.006758491,-0.0040060277,0.022436442,0.025850976,0.006150193,0.018632062,-0.0077230297,-0.015298186,-0.017381858,0.01911601,-0.005763706,-0.0022281848,-0.031994447,0.0015972018,0.028848775,0.014572261,-0.0073264595,-0.009551284,-0.0052058194,0.014518489,-0.0041068504,0.010754436,0.0055519775,-0.005804035,-0.0054007433,0.028579915,-0.01791958,-0.015284742,0.036807057,0.015069654,-0.0023810994,-0.0038648755,0.0015467904,-0.0037136413,0.0023458113,0.019008467,-0.011547576,-0.010001626,0.012347437,0.0155267175,0.01907568,-0.003041489,-0.0132414,0.017449073,0.00060073606,-0.008536334,0.008233866,-0.0085430555,-0.02365976,0.024089938,-0.0034615842,-0.006580371,0.008327967,-0.01509654,0.009692436,0.025635887,0.0020282194,-0.04022159,-0.0021290423,-0.012407931,-0.0021727323,0.006506434,-0.005320085,-0.008240587,0.020984594,-0.014491603,0.003592654,0.0072121937,-0.03081146,0.043770555,0.009302587,-0.003217929,0.019008467,-0.011271994,0.02917141,0.0019576435,-0.0077431942,-0.0030448497,-0.023726976,0.023377456,-0.006382086,0.025716545,-0.017341528,0.0035556855,-0.019129453,-0.004311857,-0.003253217,-0.014935223,0.0036363439,0.018121226,-0.0066543072,0.02458733,0.0035691285,0.0039085653,-0.014209299,0.020191453,0.0357585,0.007830574,-0.024130266,-0.008912739,0.008314524,-0.0346024,-0.0014005973,-0.006788738,-0.021777734,0.010465411,-0.004012749,-0.00679882,0.009981461,-0.026227381,0.027033964,-0.015567047,-0.0063115098,0.0023071626,0.01037131,0.015741806,-0.020635074,-0.012945653],\n \"numCandidates\": 150,\n \"limit\": 10\n }\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"title\": 1,\n \"plot\": 1,\n \"released\": 1,\n \"score\": { $meta: \"vectorSearchScore\" }\n }\n }\n])\n",
"text": "Click on “Filter Example” at https://www.mongodb.com/docs/atlas/atlas-vector-search/vector-search-stage/It looks like you can use MQL to filter on other fields.",
"username": "Mannie_Schumpert"
},
{
"code": "",
"text": "Great, thanks @Mannie_Schumpert ! Exactly what i was looking for, looks like the docs were freshly updated.",
"username": "anther"
},
{
"code": "\"filter\": {\n \"location_id\": { \"$eq\": location_id }\n}\n",
"text": "Greeting all.\nHow to filter by an ObjectId?\nI’ve tried the following but it produces an error:where location_id is an ObjectId.\nThe error is the following: “pymongo.errors.OperationFailure: Operand type is not supported for $vectorSearch: objectId”.\nThank you!",
"username": "Mathieu_Morgensztern"
}
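A possible workaround, assuming the error above means ObjectId operands are simply not accepted by the pre-filter (an assumption based only on the error message), is to store a string copy of the id next to the vector and filter on that instead; the field name and value below are illustrative:

{
  "$vectorSearch": {
    "index": "default",
    "path": "embedding",
    "queryVector": query_embedding,
    "filter": { "location_id_str": { "$eq": "653bcd3b9a7f2e0012ab34cd" } },
    "numCandidates": 150,
    "limit": 10
  }
}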
] | $vectorSearch with pre-filtering | 2023-10-07T00:22:10.198Z | $vectorSearch with pre-filtering | 378 |
null | [
"connecting",
"golang",
"serverless"
] | [
{
"code": "",
"text": "I have several micro services connecting with a Mongo Atlas database, using different dbNames and collections. I’ve changed the service from the free tier Mongo Atlas version to the paid Serverless version. At the exact moment I moved to the Serverless version, occasionally (I would say twice per hour), all the services get blocked from the database and can not realize any operation over the MongoDB, receiving this specific error:Failed with error: connection(“SERVER_URI”) unable to write wire message to network: write tcp IP:PORT->IP:PORT: write: broken pipeThere are some operations that I need to repeat instantly in order to not to lose information, and so I have two questions:Any answer would be appreciated, thanks!",
"username": "Tomas_Gomez"
},
{
"code": "",
"text": "I’m getting the same issue",
"username": "Felipe_Garcia"
},
{
"code": "",
"text": "Hello @Tomas_Gomez and @Felipe_Garcia ,Welcome to The MongoDB Community Forums! To understand your use case better, I would like to know a few details such as:You can take a look at below thread as a similar error has been discussed thereLastly, I would recommend you go through below documentation to understand more about retryable read/write operations and how MongoDB can you help you build resilient applications that can withstand network outages and failover events.Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "@Tarun_Gaur, we have the same problem. We are using MongoDB Atlas serverless and a Go service running on GCP Cloud Run using the latest Mongo driver (v1.11.2).“Error while executing aggregation pipeline: connection(dev-xxx-lb.xxx.mongodb.net:27017[-3]) unable to write wire message to network: write tcp xxxxx:51945->xxxx:27017: write: broken pipe”Furthermore, we haven’t enabled retries explicitly, but the latest Golang driver has this enabled by default. I think this behavior can happen with any type of query, but most of the time we see it happen with aggregation pipeline queries, which make up the bulk of the queries we do.The thread you referenced does seem to discuss a similar problem, but without any solution, though.",
"username": "Maarten_Baijs"
},
{
"code": "",
"text": "@Tarun_Gaurthanks",
"username": "Felipe_Garcia"
},
{
"code": "",
"text": "I would advise you to bring this up with the Atlas chat support team. They may be able to check if anything on the Atlas side could have possibly caused this broken pipe message. In saying so, if a chat support is raised, please provide them with the following:Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "We also encountered the same problem, but it was not resolved by retrying. You can check if the maximum connection idle time (SetMaxConnIdleTime) is set. If this value is not set, the connection pool will not actively release some connections that have been interrupted by the MongoDB server.",
"username": "xiongyan_zhong"
}
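For reference, the idle-timeout idea above can also be applied through a standard connection-string option, which works the same way across drivers; the value below is illustrative:

mongodb+srv://user:pass@dev-xxx-lb.xxx.mongodb.net/?retryWrites=true&w=majority&maxIdleTimeMS=60000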
] | Recurrent error with Atlas Serverless MongoDB connection - Error: "unable to write wire message to network: write tcp" | 2023-02-14T15:39:10.566Z | Recurrent error with Atlas Serverless MongoDB connection - Error: “unable to write wire message to network: write tcp” | 2,003 |
null | [
"aggregation",
"data-modeling",
"mongodb-shell"
] | [
{
"code": "db.collection_name.aggregate([\n {\n \"$match\": {\n \"field_name\": {\n //Some Conditions\n },\n \"field_name2\": {\n // list of object ids to be filtered out\n }\n },\n {\n \"$sort\": {\n \"name\": 1\n }\n },\n {\n \"$facet\": {\n \"new_createdField\": [\n {\n \"$skip\": 0\n },\n {\n \"$limit\": 10\n }\n ],\n \"totalCount\": [\n {\n \"$count\": \"count\"\n }\n ]\n }\n },\n {\n \"$project\": {\n \"new_createdField\": 1,\n \"totalCount\": {\n \"$arrayElemAt\": [\n \"$totalCount.count\",\n 0\n ]\n }\n }\n }\n],{\n$hint: ({\nfield_name: 1,\nfield_name2: 1,\nname: 1\n})\n);\n",
"text": "Hello All,\nI was running an aggregate, on which I am using the hint operator of Mongodb. But when I ran a query with $hint it took nearly 7.2 seconds, and when I ran the same query with hint, it took nearly 3 seconds.I am unable to find the reason why it is happening. Can you please let me know what can be the reason or am I executing the query in the wrong manner?Query:The same query is executed with the hint-operator.\nAdditional Information\nfield_name, field_name2, and name are already in Compound Index.",
"username": "Mehul_Sanghvi"
},
{
"code": "$hintfindaggregatehint$hinthint",
"text": "Hi @Mehul_Sanghvi,The $hint operator has been deprecated since MongoDB 3.2 and was likely only ever supported for find operations.When performing an aggregate, as can be seen in the syntax documentation a hint field can be used to pass an index hint to the operation.I am unable to find the reason why it is happening. Can you please let me know what can be the reason or am I executing the query in the wrong manner?Replace $hint with hint (no leading $ sign) and you should get the desired result.",
"username": "alexbevi"
},
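For reference, a minimal mongosh sketch of passing the hint as an aggregate option (collection and index fields taken from the question; the pipeline is elided):

db.collection_name.aggregate(
  [ /* ...pipeline stages... */ ],
  { hint: { field_name: 1, field_name2: 1, name: 1 } }
)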
{
"code": "The $hint operator forces the query optimizer to use a specific index to fulfill the query. Specify the index either by the index name or by document.\nUse $hint for testing query performance and indexing strategies. The mongo shell provides a helper method hint() for the $hint operator.\n",
"text": "Hi @alexbevi\nOkay. One more thing is that in the documentation of the hint operator, it is stated :Does this mean that if I don’t have any index and use a hint operator in my database, it would forcefully create an index on that field just for experimental or testing purposes?",
"username": "Mehul_Sanghvi"
},
{
"code": "$hint",
"text": "No, using $hint with an index specification that doesn’t exist will not create the index",
"username": "alexbevi"
},
{
"code": "",
"text": "Okay. Thank You @alexbevi",
"username": "Mehul_Sanghvi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | What's the difference between $hint and hint? | 2023-10-25T11:52:11.153Z | What’s the difference between $hint and hint? | 196 |
null | [
"queries",
"node-js",
"crud",
"next-js",
"data-api"
] | [
{
"code": "const { MongoClient, ServerApiVersion, ObjectId } = require('mongodb');\nrequire('dotenv').config()\nconst jwt = require('jsonwebtoken');\nconst express = require('express');\nconst app = express();\nconst cors = require('cors');\nconst port = process.env.PORT || 5000;\n\napp.use(cors());\napp.use(express.json())\n\n\nconst uri = `mongodb+srv://${process.env.DB_USER}:${process.env.DB_PASS}@ass-11.8xhsqle.mongodb.net/?retryWrites=true&w=majority&appName=AtlasApp`;\n\n// Create a MongoClient with a MongoClientOptions object to set the Stable API version\nconst client = new MongoClient(uri, {\n serverApi: {\n version: ServerApiVersion.v1,\n strict: true,\n deprecationErrors: true,\n }\n});\n\n// JWT token verify \nconst verifyJWT = (req, res, next) => {\n const authorization = req.headers.authorization;\n if (!authorization) {\n return res.status(401).send({ error: true, message: 'unauthorized access' });\n }\n const token = authorization.split(' ')[1];\n jwt.verify(token, process.env.ACCESS_TOKEN_SECRET, (err, decoded) => {\n if (err) {\n return res.status(401).send({ error: true, message: 'unauthorized access' })\n }\n req.decoded = decoded;\n next();\n })\n}\n\n\nasync function run() {\n try {\n // Connect the client to the server (optional starting in v4.7)\n await client.connect();\n const servicesCollection = client.db('carService').collection('services');\n const bookingCollection = client.db('carService').collection('booking');\n\n // jwt\n app.post('/jwt', (req, res) => {\n const user = req.body;\n console.log(user);\n const token = jwt.sign(user, process.env.ACCESS_TOKEN_SECRET, { expiresIn: '1h' });\n console.log(token);\n res.send({ token });\n })\n\n app.get('/services', async (req, res) => {\n const options = {\n sort: {\n service_id: 1\n }\n }\n const cursor = servicesCollection.find({}, options);\n const result = await cursor.toArray();\n res.send(result);\n })\n // load a specific data for booking page\n app.get('/services/:id', async (req, res) => {\n const id = req.params.id;\n const query = { _id: new ObjectId(id) }\n\n const options = {\n // Include only the `title` and `imdb` fields in the returned document\n projection: { title: 1, price: 1, service_id: 1, img: 1 },\n };\n\n const result = await servicesCollection.findOne(query, options);\n res.send(result);\n })\n // user all booking show page \n // bookings routes\n app.get('/userOrder', verifyJWT, async (req, res) => {\n const decoded = req.decoded;\n\n if (decoded.email !== req.query.email) {\n return res.status(403).send({ error: 1, message: 'forbidden access' })\n }\n\n let query = {};\n if (req.query?.email) {\n query = { email: req.query.email }\n }\n const result = await bookingCollection.find(query).toArray();\n res.send(result);\n })\n\n app.post('/addService/:id', async (req, res) => {\n const booking = req.body;\n console.log(booking);\n const result = await bookingCollection.insertOne(booking);\n res.send(result);\n });\n\n app.patch('/userOrder/:id', async (req, res) => {\n const id = req.params.id;\n const filter = { _id: new ObjectId(id) };\n const updatedBooking = req.body;\n const updateDoc = {\n $set: {\n status: updatedBooking.status\n },\n };\n const result = await bookingCollection.updateOne(filter, updateDoc);\n res.send(result);\n })\n\n app.delete('/userOrder/:id', async (req, res) => {\n const id = req.params.id;\n const query = { _id: new ObjectId(id) }\n const result = await bookingCollection.deleteOne(query);\n res.send(result);\n })\n // Send a ping to confirm a successful 
connection\n await client.db(\"admin\").command({ ping: 1 });\n console.log(\"Pinged your deployment. You successfully connected to MongoDB!\");\n } finally {\n // Ensures that the client will close when you finish/error\n // await client.close();\n }\n}\nrun().catch(console.dir);\n\n\n\napp.get('/', (req, res) => {\n res.send('Car repair is running!');\n})\n\napp.listen(port, (req, res) => {\n console.log(`${port} listening this site!`);\n})\n",
"text": "I created a simple CRUD project with Reactjs. For the server side, I use Express and Vercel to deploy the server codes. I use MongoDB for a database. In Mongodb I create a database and create 2 collections for relevant usage. However, after deploying the code to Vercel, MongoDB does not provide data suddenly. This means, the vercel API working well. Suddenly, 1 hour or 30 minutes later if I go to the API, then the Mongodb may never hit. After some reloading, I got data from MongoDB. The same issue again after 5 minutes or 10 minutes later.here are my server-side codes:The main issue is why it is taking time or delay to provide data. I checked everything was okay but I did not find out solution. Here is my vercel deploy API link: https://car-solution.vercel.app/servicesIf you see Cannot GET /services then reload a few times. Then it will show data from MongoDB. this is my problem. Why is data not showing the first time? Why do I need to reload? Kindly help me!!",
"username": "Md_Tanvir_Parvej_Badhon"
},
{
"code": "",
"text": "Hi @Md_Tanvir_Parvej_Badhon,Welcome to the MongoDB Community The main issue is why it is taking time or delay to provide data. I checked everything was okay but I did not find out solution. Here is my vercel deploy API link: https://car-solution.vercel.app/services I’ve tested the link you provided, and it seems to be working well on my end. Have you tried accessing it using a different browser, incognito mode, or on another system? I would recommend clearing your browser cache and trying to run the API again. It should work as expected after doing so.In case the issue persists please feel free to reach out.Best regards,\nKushagra",
"username": "Kushagra_Kesav"
},
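One pattern worth checking in a serverless deployment like Vercel (mentioned only as a common cause of this symptom, not as a confirmed diagnosis) is to create the MongoClient once per runtime instance and reuse it across invocations instead of reconnecting on every request; a minimal sketch reusing the DATABASE_URI environment variable from the post above:

const { MongoClient } = require('mongodb');

let clientPromise;                       // cached across invocations in the same runtime

function getClient() {
  if (!clientPromise) {
    clientPromise = new MongoClient(process.env.DATABASE_URI).connect();
  }
  return clientPromise;
}

module.exports = async function handler(req, res) {
  const client = await getClient();
  const services = await client.db('carService').collection('services').find({}).toArray();
  res.status(200).json(services);
};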
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongodb delay to provide data by vercel API | 2023-10-26T06:56:19.119Z | Mongodb delay to provide data by vercel API | 231 |
null | [
"node-js"
] | [
{
"code": "new UUID()new Binary()Document ID: [{sub_type 4} {buffer [{0 0} {1 0} {2 0} {3 0} {4 0} {5 0} {6 0} {7 0} {8 0} {9 0} {10 0} {11 0} {12 0} {13 0} {14 0} {15 0} {16 0} {17 0} {18 0} {19 0} {20 0} {21 0} {22 0} {23 0} {24 0} {25 0} {26 0} {27 0} {28 0} {29 0} {30 0} {31 0} {32 0} {33 0} {34 0} {35 0} {36 0} {37 0} {38 0} {39 0} {40 0} {41 0} {42 0} {43 0} {44 0} {45 0} {46 0} {47 0} {48 0} {49 0} {50 0} {51 0} {52 0} {53 0} {54 0} {55 0} {56 0} {57 0} {58 0} {59 0} {60 0} {61 0} {62 0} {63 0} {64 0} {65 0} {66 0} {67 0} {68 0} {69 0} {70 0} {71 0} {72 0} {73 0} {74 0} {75 0} {76 0} {77 0} {78 0} {79 0} {80 0} {81 0} {82 0} {83 0} {84 0} {85 0} {86 0} {87 0} {88 0} {89 0} {90 0} {91 0} {92 0} {93 0} {94 0} {95 0} {96 0} {97 0} {98 0} {99 0} {100 0} {101 0} {102 0} {103 0} {104 0} {105 0} {106 0} {107 0} {108 0} {109 0} {110 0} {111 0} {112 0} {113 0} {114 0} {115 0} {116 0} {117 0} {118 0} {119 0} {120 0} {121 0} {122 0} {123 0} {124 0} {125 0} {126 0} {127 0} {128 0} {129 0} {130 0} {131 0} {132 0} {133 0} {134 0} {135 0} {136 0} {137 0} {138 0} {139 0} {140 0} {141 0} {142 0} {143 0} {144 0} {145 0} {146 0} {147 0} {148 0} {149 0} {150 0} {151 0} {152 0} {153 0} {154 0} {155 0} {156 0} {157 0} {158 0} {159 0} {160 0} {161 0} {162 0} {163 0} {164 0} {165 0} {166 0} {167 0} {168 0} {169 0} {170 0} {171 0} {172 0} {173 0} {174 0} {175 0} {176 0} {177 0} {178 0} {179 0} {180 0} {181 0} {182 0} {183 0} {184 0} {185 0} {186 0} {187 0} {188 0} {189 0} {190 0} {191 0} {192 0} {193 0} {194 0} {195 0} {196 0} {197 0} {198 0} {199 0} {200 0} {201 0} {202 0} {203 0} {204 0} {205 0} {206 0} {207 0} {208 0} {209 0} {210 0} {211 0} {212 0} {213 0} {214 0} {215 0} {216 0} {217 0} {218 0} {219 0} {220 0} {221 0} {222 0} {223 0} {224 0} {225 0} {226 0} {227 0} {228 0} {229 0} {230 0} {231 0} {232 0} {233 0} {234 0} {235 0} {236 0} {237 0} {238 0} {239 0} {240 0} {241 0} {242 0} {243 0} {244 0} {245 0} {246 0} {247 0} {248 0} {249 0} {250 0} {251 0} {252 0} {253 0} {254 0} {255 0}]} {position 0}] \" _ id\" : {\n\t\t\t\"$binary\": {\n\t\t\t\t\"base64\": \"qr8jwljDRkOBerhppKh8Gw==\",\n\t\t\t\t\"subType\": \"04\"\n\t\t\t}\n\t\t}\n",
"text": "I am trying to insert an object with a _id that has a UUID type. I am using the BSON library for it and the function new UUID(), I also tried with new Binary(), but it is not working, instead my _id returns an object like it:Document ID: [{sub_type 4} {buffer [{0 0} {1 0} {2 0} {3 0} {4 0} {5 0} {6 0} {7 0} {8 0} {9 0} {10 0} {11 0} {12 0} {13 0} {14 0} {15 0} {16 0} {17 0} {18 0} {19 0} {20 0} {21 0} {22 0} {23 0} {24 0} {25 0} {26 0} {27 0} {28 0} {29 0} {30 0} {31 0} {32 0} {33 0} {34 0} {35 0} {36 0} {37 0} {38 0} {39 0} {40 0} {41 0} {42 0} {43 0} {44 0} {45 0} {46 0} {47 0} {48 0} {49 0} {50 0} {51 0} {52 0} {53 0} {54 0} {55 0} {56 0} {57 0} {58 0} {59 0} {60 0} {61 0} {62 0} {63 0} {64 0} {65 0} {66 0} {67 0} {68 0} {69 0} {70 0} {71 0} {72 0} {73 0} {74 0} {75 0} {76 0} {77 0} {78 0} {79 0} {80 0} {81 0} {82 0} {83 0} {84 0} {85 0} {86 0} {87 0} {88 0} {89 0} {90 0} {91 0} {92 0} {93 0} {94 0} {95 0} {96 0} {97 0} {98 0} {99 0} {100 0} {101 0} {102 0} {103 0} {104 0} {105 0} {106 0} {107 0} {108 0} {109 0} {110 0} {111 0} {112 0} {113 0} {114 0} {115 0} {116 0} {117 0} {118 0} {119 0} {120 0} {121 0} {122 0} {123 0} {124 0} {125 0} {126 0} {127 0} {128 0} {129 0} {130 0} {131 0} {132 0} {133 0} {134 0} {135 0} {136 0} {137 0} {138 0} {139 0} {140 0} {141 0} {142 0} {143 0} {144 0} {145 0} {146 0} {147 0} {148 0} {149 0} {150 0} {151 0} {152 0} {153 0} {154 0} {155 0} {156 0} {157 0} {158 0} {159 0} {160 0} {161 0} {162 0} {163 0} {164 0} {165 0} {166 0} {167 0} {168 0} {169 0} {170 0} {171 0} {172 0} {173 0} {174 0} {175 0} {176 0} {177 0} {178 0} {179 0} {180 0} {181 0} {182 0} {183 0} {184 0} {185 0} {186 0} {187 0} {188 0} {189 0} {190 0} {191 0} {192 0} {193 0} {194 0} {195 0} {196 0} {197 0} {198 0} {199 0} {200 0} {201 0} {202 0} {203 0} {204 0} {205 0} {206 0} {207 0} {208 0} {209 0} {210 0} {211 0} {212 0} {213 0} {214 0} {215 0} {216 0} {217 0} {218 0} {219 0} {220 0} {221 0} {222 0} {223 0} {224 0} {225 0} {226 0} {227 0} {228 0} {229 0} {230 0} {231 0} {232 0} {233 0} {234 0} {235 0} {236 0} {237 0} {238 0} {239 0} {240 0} {241 0} {242 0} {243 0} {244 0} {245 0} {246 0} {247 0} {248 0} {249 0} {250 0} {251 0} {252 0} {253 0} {254 0} {255 0}]} {position 0}]when I want a result similar to this structure:I am not using mongo library because the node.js Driver is not compatible with it, and I am using it inside a Mongo Function",
"username": "Ana_Carolyne_Matos_Nascimento"
},
{
"code": "",
"text": "I am facing the same problem. I am trying to add a document whose ID is UUID inside a function but it’s not working. Here is what I am trying:\n_id: new UUID(Buffer.from(‘CBlFiADzQruCocFpG9841Q==’, ‘base64’).toString(‘hex’)),",
"username": "Mariana_Zagatti"
},
{
"code": "",
"text": "Hey @Mariana_Zagatti/@Ana_Carolyne_Matos_Nascimento,I am trying to add a document whose ID is UUID inside a function but it’s not working. Here is what I am trying:\n_id: new UUID(Buffer.from(‘CBlFiADzQruCocFpG9841Q==’, ‘base64’).toString(‘hex’)),May I ask what error message you’re encountering? Are you currently saving the data in a MongoDB collection? I’d like to understand if there’s a particular use case that necessitates generating UUIDs manually, as MongoDB can generate them automatically when you save a document.However, please share your workflow and a code snippet so, we can assist you better.Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
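For comparison, outside of an Atlas Function the Node.js driver does produce the Binary subtype 4 shape shown above directly; a minimal sketch (connection string and names are placeholders, and the BSON helpers available inside Atlas Functions may differ):

const { MongoClient, UUID } = require('mongodb');

async function run() {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const coll = client.db('test').collection('docs');
  await coll.insertOne({ _id: new UUID() });   // stored as { "$binary": { "subType": "04", ... } }
  await client.close();
}
run().catch(console.error);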
] | Inserting _id as UUID using node Js | 2023-10-25T18:25:29.941Z | Inserting _id as UUID using node Js | 217 |
null | [
"node-js"
] | [
{
"code": "",
"text": "I have created free cluster and db on Mongo Atlas but I try to connect my app by mongoose.connect(‘mongodb+srv://Ahmed:Admin123@st.0qt0i.mongodb.net/bktrans?retryWrites=true&w=majority’)And the below error appears when I run rpm start on my API:(Unauthorized)not authorized on admin to execute command {listIndexes: “consumers“, cursor:{ }, $clusterTime: { clusterTime: {1634734692 10}, signature: { hash: {203 165 207 4 203 170 4 127 37 213 33 4 100 167 170 44 201 49 111 36} }} } …",
"username": "Ahmed_Habib"
},
{
"code": "",
"text": "Hi Ahmed ,I got this error recently while connecting from a node app on Heroku to MongoDB Atlas - I had issue with connection string .I tried using below the below format - and it worked.mongodb://:@-cluster0-shard-00-00.scxjr.mongodb.net:27017,-cluster0-shard-00-01.scxjr.mongodb.net:27017,-cluster0-shard-00-02.scxjr.mongodb.net:27017/?ssl=true&replicaSet=atlas--shard-0&authSource=admin&retryWrites=true&w=majorityIt worked.I got this from MongoDB Atlas tab → Deployment side bar → Databases → on right Connect button inside . Connect to your application section → Select Driver - Node.js and Version 2.2.12 or later - below it will show the example connection string.Please let me know if this solved your issue",
"username": "Paulson_Edamana"
},
{
"code": "",
"text": "God bless you. *Tears of joy :_ ))",
"username": "Amir_Abbas_Mousavi"
},
{
"code": "",
"text": "hello, I am getting same kind of error like:\n(Unauthorized) not authorized on admin to execute command { create: “sample”, capped: true, size: 5242880, writeConcern: { w: “majority” }, lsid: { id: {4 [133 92 127 66 175 140 79 35 174 118 57 40 59 137 39 110]} }, $clusterTime: { clusterTime: {1678037479 5}, signature: { hash: {0 [235 83 26 247 241 208 26 217 203 154 205 184 108 184 228 33 93 229 204 81]}, keyId: 7153350679543676928.000000 } }, $db: “admin” }i tried multi[ple ways for creating collection in db. This error i got is when i got connected to mongodb compass and tried to build a connection in db.\nThere are 3 radio buttons available with that, but cant choose which one to choose.\nPlease help me out.",
"username": "Gauravi_Raut"
},
{
"code": "",
"text": "I think you are trying to create your collection in admin db\nConnect to test db and try\nAlso share your compass connect string after hiding sensitive information like password,cluster address",
"username": "Ramachandra_Tummala"
},
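For reference, a minimal mongosh sketch of the suggestion above; the database name is illustrative, the point being that the collection is created in an application database rather than in admin (the capped size matches the error message):

use test
db.createCollection('sample', { capped: true, size: 5242880 })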
{
"code": "",
"text": "hi\nI use your method to get connect string. But what it gives me is mongodb://zmz2:123@ac-64yh6ax-shard-00-00.pbddalw.mongodb.net:27017",
"username": "mingzi_zhang"
}
] | (Unauthorized) not authorized on admin to execute command | 2021-10-20T13:28:35.345Z | (Unauthorized) not authorized on admin to execute command | 31,140 |
null | [
"aggregation",
"replication",
"sharding"
] | [
{
"code": "",
"text": "We have deployed a sharded mongodb cluster on kubernetes and now we want to stress test that, if anyone of u have done benchmarking and stress testing of mongodb it will be very helpful if you can suggest me siome better tools , I tried sysbench but it seems non of the recentt blogs or support is there, so if any other better options are available please tell me.",
"username": "Vatsal_Sharma"
},
{
"code": "",
"text": "You could start with mgenerate to generate test documents.",
"username": "steevej"
},
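For reference, a small sketch of generating test documents and loading them, assuming the mgeneratejs npm implementation of mgenerate; the template, count and URI are illustrative:

npm install -g mgeneratejs
mgeneratejs '{ "name": "$name", "age": "$age", "created": "$date" }' -n 100000 \
  | mongoimport --uri "mongodb://localhost:27017/loadtest" --collection docs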
{
"code": "",
"text": "@Vatsal_Sharma We have run 1000+ performance measurements for MongoDB and MongoDB Atlas. (we are doing it as our profession at benchANT). We are usually using YCSB or the BenchBase Benchmarking Suite.It always depends on which workload type you want to test.If you need professional help to run benchmarks, please let me know.",
"username": "Jan_Ocker"
}
] | Performance testing of MOngodb | 2023-10-09T16:20:43.632Z | Performance testing of MOngodb | 319 |
null | [] | [
{
"code": " MongoDB shell version v5.0.3\n connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\n Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :\n connect@src/mongo/shell/mongo.js:372:17\n @(connect):2:6\n exception: connect failed\n exiting with code 1\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.249+07:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23017, \"ctx\":\"listener\",\"msg\":\"removing socket file\",\"attr\":{\"path\":\"/tmp/mongodb-27017.sock\"}}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.249+07:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the global connection pool\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.249+07:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784906, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the FlowControlTicketholder\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.249+07:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20520, \"ctx\":\"SignalHandler\",\"msg\":\"Stopping further Flow Control ticket acquisitions.\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.249+07:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784908, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the PeriodicThreadToAbortExpiredTransactions\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.249+07:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784909, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the ReplicationCoordinator\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.249+07:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784910, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the ShardingInitializationMongoD\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.249+07:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784911, \"ctx\":\"SignalHandler\",\"msg\":\"Enqueuing the ReplicationStateTransitionLock for shutdown\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.249+07:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784912, \"ctx\":\"SignalHandler\",\"msg\":\"Killing all operations for shutdown\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.249+07:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4695300, \"ctx\":\"SignalHandler\",\"msg\":\"Interrupted all currently running operations\",\"attr\":{\"opsKilled\":3}}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.249+07:00\"},\"s\":\"I\", \"c\":\"TENANT_M\", \"id\":5093807, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down all TenantMigrationAccessBlockers on global shutdown\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.249+07:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784913, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down all open transactions\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784914, \"ctx\":\"SignalHandler\",\"msg\":\"Acquiring the ReplicationStateTransitionLock for shutdown\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":4784915, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the IndexBuildsCoordinator\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784916, \"ctx\":\"SignalHandler\",\"msg\":\"Reacquiring the ReplicationStateTransitionLock for shutdown\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784917, \"ctx\":\"SignalHandler\",\"msg\":\"Attempting to mark clean shutdown\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, 
\"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the MigrationUtilExecutor\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"MigrationUtil-TaskExecutor\",\"msg\":\"Killing all outstanding egress activity.\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784923, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the ServiceEntryPoint\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down free monitoring\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20609, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down free monitoring\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784927, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the HealthLog\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784928, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the TTL monitor\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":3684100, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down TTL collection monitor thread\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":3684101, \"ctx\":\"SignalHandler\",\"msg\":\"Finished shutting down TTL collection monitor thread\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784929, \"ctx\":\"SignalHandler\",\"msg\":\"Acquiring the global lock for shutdown\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784930, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the storage engine\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22320, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down journal flusher thread\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22321, \"ctx\":\"SignalHandler\",\"msg\":\"Finished shutting down journal flusher thread\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.250+07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22322, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down checkpoint thread\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.251+07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22323, \"ctx\":\"SignalHandler\",\"msg\":\"Finished shutting down checkpoint thread\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.251+07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20282, \"ctx\":\"SignalHandler\",\"msg\":\"Deregistering all the collections\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.251+07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22261, \"ctx\":\"SignalHandler\",\"msg\":\"Timestamp monitor shutting down\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.251+07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22317, \"ctx\":\"SignalHandler\",\"msg\":\"WiredTigerKVEngine shutting down\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.251+07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22318, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down session sweeper thread\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.251+07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", 
\"id\":22319, \"ctx\":\"SignalHandler\",\"msg\":\"Finished shutting down session sweeper thread\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.251+07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795902, \"ctx\":\"SignalHandler\",\"msg\":\"Closing WiredTiger\",\"attr\":{\"closeConfig\":\"leak_memory=true,\"}}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.252+07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"SignalHandler\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1634722334:252129][142289:0x7f6df9745700], close_ckpt: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 18, snapshot max: 18 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 7\"}}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.274+07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795901, \"ctx\":\"SignalHandler\",\"msg\":\"WiredTiger closed\",\"attr\":{\"durationMillis\":23}}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.274+07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22279, \"ctx\":\"SignalHandler\",\"msg\":\"shutdown: removing fs lock...\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.274+07:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"SignalHandler\",\"msg\":\"Dropping the scope cache for shutdown\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.274+07:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":4784926, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down full-time data capture\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.274+07:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20626, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down full-time diagnostic data capture\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.279+07:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"SignalHandler\",\"msg\":\"Now exiting\"}\n {\"t\":{\"$date\":\"2021-10-20T16:32:14.279+07:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":0}}\n",
"text": "Very sorry for my short question but I don’t know how to describe my problem. Here is info when I try to run mongo shell:And here are some logs:",
"username": "MAY_CHEAPER"
},
{
"code": "",
"text": "but when I run sudo mongod -f /etc/mongod.conf, I can connect to mongo shell",
"username": "MAY_CHEAPER"
},
{
"code": "sudo mongodsudomongodmongod",
"text": "Hi @MAY_CHEAPER,Were you able to find a solution to your issue? Your log snippet starts after shutdown has initiated, so I think the most interesting log lines are missing.If sudo mongod works fine, my first guess would be that there are problems with file & directory permissions that are ignored when you use sudo to start the mongod process as the root user.If so, I recommend fixing file & directory permissions so your mongod process can run as an unprivileged user.Regards,\nStennie",
"username": "Stennie_X"
},
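For anyone hitting the same symptom (mongod starts under `sudo` but not as a service), a minimal permissions fix might look like the sketch below. It assumes a default package install where the service runs as the `mongod` user and the paths match `storage.dbPath` and `systemLog.path` in `/etc/mongod.conf`; substitute your own paths and service user if they differ.

```sh
# Give the service user back ownership of the data and log directories,
# remove any stale socket file left behind by a root-owned run, then restart.
sudo chown -R mongod:mongod /var/lib/mongo /var/log/mongodb
sudo rm -f /tmp/mongodb-27017.sock
sudo systemctl restart mongod
sudo systemctl status mongod
```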
{
"code": "",
"text": "I dont know what reason is but after many search on google, I reinstall MongoDB and it works fine. Thanks for your support.",
"username": "MAY_CHEAPER"
},
{
"code": "",
"text": "@Stennie_X I have the same issue when I am logged in as root to my system and then installing mongodb wondering why will I have permission issues ? and how do i give permission if at all to resolve this ? after i exec to my pod",
"username": "Shilpa_Agrawal"
},
{
"code": "[root@srv ~]# mongo\nMongoDB shell version v5.0.15\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\nError: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :\nconnect@src/mongo/shell/mongo.js:372:17\n@(connect):2:6\nexception: connect failed\nexiting with code 1\n[root@srv ~]# netstat -an | grep 27017\n[root@srv ~]#\nation: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\n# Where and how to store data.\nstorage:\n dbPath: /var/lib/mongo\n journal:\n enabled: true\n# engine:\n# wiredTiger:\n\n# how the process runs\nprocessManagement:\n fork: true # fork and run in background\n pidFilePath: /var/run/mongodb/mongod.pid # location of pidfile\n timeZoneInfo: /usr/share/zoneinfo\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 127.0.0.1,<my server ip for example 178.1.1.1> # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.\n\n\nsecurity:\n#authorization: enabled\n\n#operationProfiling:\n\n#replication:\n\n#sharding:\n\n## Enterprise-Only Options\n\n#auditLog:\n",
"text": "Hello friends\nI have the same problem and I have been searching for about 2 days and I did not get any result and my problem is not solved. Please help me to solve my problem.\nThe error that is displayed for me:i use Centos 7.6 with cpanel,litespeed,cloudlinuxI also checked the port, but it doesn’t show anything:and the /etc/mongod.conf file is as follows:",
"username": "Hesam_Ramezani"
},
{
"code": "",
"text": "Your mongod should be up & running for you to connect to it\nIf you installed it as service need to start the service first\nIf not installed as service you need to start your mongod manually from command line with appropriate parameters",
"username": "Ramachandra_Tummala"
},
{
"code": "[root@srv ~]# sudo service mongodb start\nRedirecting to /bin/systemctl start mongodb.service\nFailed to start mongodb.service: Unit not found.\n[root@srv ~]# service mongod status\nRedirecting to /bin/systemctl status mongod.service\n● mongod.service - MongoDB Database Server\n Loaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)\n Active: failed (Result: exit-code) since Mon 2023-02-27 16:57:53 +0330; 2h 20min ago\n Docs: https://docs.mongodb.org/manual\n Main PID: 1047495 (code=exited, status=2)\n\nFeb 27 16:57:53 srv.sayansite.com systemd[1]: Started MongoDB Database Server.\nFeb 27 16:57:53 srv.sayansite.com systemd[1]: mongod.service: main process exited, code=exited, status=2/INVALIDARGUMENT\nFeb 27 16:57:53 srv.sayansite.com systemd[1]: Unit mongod.service entered failed state.\nFeb 27 16:57:53 srv.sayansite.com systemd[1]: mongod.service failed.\n",
"text": "The Mongo service is installed but does not start:and status:pleas help me.",
"username": "Hesam_Ramezani"
},
{
"code": "service mongod statussudo service mongodb start",
"text": "What is funny is that you use the correct name forservice mongod statusbut the wrong one tosudo service mongodb start",
"username": "steevej"
},
{
"code": "sudo service mongodb start",
"text": "sudo service mongodb startYou are right, but it will be redirected to service mongod start.",
"username": "Hesam_Ramezani"
},
{
"code": "Redirecting to /bin/systemctl start mongodb.serviceFailed to start mongodb.service: Unit not found.",
"text": "it will be redirected to service mongod start.To mongodb.service perhaps as indicated by the warning:Redirecting to /bin/systemctl start mongodb.servicewhich is still wrong, otherwise you would not get an error message that saysFailed to start mongodb.service: Unit not found.As mentioned, you are using the correct name, that is mongod when you are querying the status but the wrong name, that is mongodb when you try to start the service.",
"username": "steevej"
},
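To make the naming explicit: the unit installed by the MongoDB packages is mongod.service, so the start, enable and status commands should all use that name. If the unit then still fails with status=2/INVALIDARGUMENT, that usually points at an invalid option in /etc/mongod.conf, and running mongod in the foreground as the service user is a quick way to surface the exact complaint. A hedged sketch (paths assume the default package layout):

```sh
sudo systemctl start mongod        # not "mongodb"
sudo systemctl status mongod
sudo journalctl -u mongod -n 50    # recent service log lines
# Validate the config by running it directly as the service user:
sudo -u mongod mongod -f /etc/mongod.conf
```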
{
"code": "mv /opt/homebrew/var/mongodb /opt/homebrew/var/mongodb-old\nbrew reinstall mongodb-community@4.4\nbrew services restart mongodb/brew/mongodb-community@4.4\n",
"text": "A reinstall helped me as well, but I had to move mongodb out of the way first. Something about the directory being in place that was preventing mongo from grabbing it again. I’m using brew…",
"username": "leemr"
}
] | Connect mongod exiting with code 1, can not see specific any error in log file | 2021-10-26T06:27:29.355Z | Connect mongod exiting with code 1, can not see specific any error in log file | 12,260 |
null | [
"aggregation"
] | [
{
"code": "//Test\n{\n _id: '12345',\n student_id: '3350235',\n dictionary_id: '12345',\n test_score: 35,\n test_percentile: 56,\n term_name: 'fall',\n test_grade: '4',\n school_year: 2024,\n risk_level: 'someRisk'\n}\n\n//Dictionary\n{\n _id: '12345',\n default_bands: [\n {\n _id: '1',\n term_name: \"*\", //wildcard\n grade: \"*\", //wildcard\n field: \"percentile\",\n bands: [\n {\n _id:'1',\n low:0,\n high:50,\n background_color:'red'\n },\n {\n _id:'2',\n low:51,\n high:89,\n background_color:'green'\n },\n {\n _id:'3',\n low:90,\n high:100,\n background_color:'blue'\n }\n ]\n },\n {\n _id: '2',\n term_name: \"fall\", \n grade: \"4\", \n field: \"score\",\n bands: [\n {\n _id:'1',\n low:0,\n high:335,\n background_color:'red'\n },\n {\n _id:'2',\n low:336,\n high:768,\n background_color:'green'\n },\n {\n _id:'3',\n low:769,\n high:1000,\n background_color:'blue'\n }\n ]\n },\n {\n _id: '3',\n term_name: \"fall\", \n grade: \"*\", //wildcard \n field: \"risk_level\",\n bands: [\n {\n _id:'1',\n low:'highRisk',\n high:'highRisk',\n background_color:'red'\n },\n {\n _id:'2',\n low:'someRisk',\n high:'someRisk',\n background_color:'green'\n },\n {\n _id:'3',\n low:'noRisk',\n high:'noRisk',\n background_color:'blue'\n }\n ]\n }\n ]\n}\n//Output\n{\n school_year:2024,\n test_grade: '4',\n term_name: 'fall',\n band_color: 'green',\n students:35, //Number of students who fall into this band\n}\ndb.tests.aggregate([\n { $match: { dictionary_id: \"12345\", school_year: 2024, test_grade: '4', term_name: 'fall', student_id: \"3350235\" } }, //filtered down to one student for testing\n {\n $lookup: {\n from: \"dictionary\",\n localField: \"dictionary_id\",\n foreignField: \"_id\",\n as: \"ad\"\n }\n },\n { $unwind: \"$ad\" },\n { $unwind: \"$ad.default_bands\" },\n {\n $match: { //This is the stage in question\n $expr: { //[v,v]\n $and: [\n { $ne: [\"$ad.default_bands.term_name\", { $literal: \"*\" }] },\n { $ne: [\"$ad.default_bands.term_grade\", { $literal: \"*\" }] },\n { $eq: [\"$ad.default_bands.field\", { $literal: \"percentile\" }] }\n ]\n }\n }\n },\n { $unwind: \"$ad.default_bands.bands\" },\n {\n $match: {\n $expr: {\n $and: [\n { $gte: [\"$test_percentile\", { $toDecimal: \"$ad.default_bands.bands.low\" }] },\n { $lt: [\"$test_percentile\", { $toDecimal: \"$ad.default_bands.bands.high\" }] }\n ]\n }\n\n }\n },\n {\n $project: {\n _id: 0,\n student_id: \"$student_id\",\n school_year: \"$school_year\",\n test_grade: \"$test_grade\",\n term_name: \"$term_name\",\n test_score: \"$test_score\",\n test_percentile: \"$test_percentile\",\n background_color: \"$ad.default_bands.bands.color\",\n dictionary_id: \"$dictionary_id\"\n }\n },\n {\n $group: {\n _id: {\n test_grade: \"$test_grade\",\n term_name: \"$term_name\",\n school_year: \"$school_year\",\n background_color: \"$background_color\",\n dictionary_id: \"$dictionary_id\"\n },\n count: { $sum: 1 }\n }\n },\n {\n $project: {\n _id: 0,\n school_year: \"$_id.school_year\",\n test_grade: \"$_id.test_grade\",\n term_name: \"$_id.term_name\",\n background_color: \"$_id.background_color\",\n students: \"$count\"\n }\n }\n])\nCombination Structure (in my head) [term_name, grade]\n#1 [*,*] //both term and grade are wildcard\n#2 [v,*] //term has a value ('fall', etc), and grade is a wildcard\n#3 [*,v] //term is wildcard and grade has a value\n#4 [v,v] //both term and grade have values\n#5 [null,null] //This shouldn't happen - but try to account for it?\n{\n $match: {\n $expr: { \n $cond: {\n \n if: { //[*,*]\n $and: [\n {$eq: 
[\"$ad.default_bands.term_name\",{$literal:\"*\"}]},\n {$eq: [\"$ad.default_bands.term_grade\",{$literal:\"*\"}]},\n {$eq: [\"$ad.default_bands.field\",{$literal:\"percentile\"}]}\n ]},\n then: {\n $and: [\n {$eq: [{$literal:\"*\"}, \"$ad.default_bands.term_name\"]},\n {$eq: [{$literal:\"*\"}, \"$ad.default_bands.grade\"]},\n {$eq: [{$literal:\"percentile\"}, \"$ad.default_bands.field\"]}\n ]\n \n },\n else:\n { \n if: { //[v,*]\n $and: [\n {$ne: [\"$ad.default_bands.term_name\",{$literal:\"*\"}]},\n {$eq: [\"$ad.default_bands.term_grade\",{$literal:\"*\"}]},\n {$eq: [\"$ad.default_bands.field\",{$literal:\"percentile\"}]}\n ]},\n then: {\n $and: [\n {$eq: [\"$term_name\", \"$ad.default_bands.term_name\"]},\n {$eq: [{$literal:\"*\"}, \"$ad.default_bands.grade\"]},\n {$eq: [{$literal:\"percentile\"}, \"$ad.default_bands.field\"]}\n ]\n \n },\n else:\n { \n if: { //[*,v]\n $and: [\n {$eq: [\"$ad.default_bands.term_name\",{$literal:\"*\"}]},\n {$ne: [\"$ad.default_bands.term_grade\",{$literal:\"*\"}]},\n {$eq: [\"$ad.default_bands.field\",{$literal:\"percentile\"}]}\n ]},\n then: {\n $and: [\n {$eq: [{$literal:\"*\"}, \"$ad.default_bands.term_name\"]},\n {$eq: [\"$test_grade\", \"$ad.default_bands.grade\"]},\n {$eq: [{$literal:\"percentile\"}, \"$ad.default_bands.field\"]}\n ]\n \n },\n else:\n { \n if: { //[v,v]\n $and: [\n {$ne: [\"$ad.default_bands.term_name\",{$literal:\"*\"}]},\n {$ne: [\"$ad.default_bands.term_grade\",{$literal:\"*\"}]},\n {$eq: [\"$ad.default_bands.field\",{$literal:\"percentile\"}]}\n ]},\n then: {\n $and: [\n {$eq: [\"$term_name\", \"$ad.default_bands.term_name\"]},\n {$eq: [\"$test_grade\", \"$ad.default_bands.grade\"]},\n {$eq: [{$literal:\"percentile\"}, \"$ad.default_bands.field\"]}\n ]\n \n },\n else:\n { \n \n } \n } \n } \n } \n }\n } \n \n }\n },\n",
"text": "Hi,First off, I am mid-level in terms of aggregations. Not an expert, but not a Noob either. But, I’m stuck.Here’s my scenario. I’ve got a collection of test documents in an education app. Those documents contain a score, and a percentile. They also contain a term (Fall, Winter, Spring) and a grade level. I also have a collection of test definitions (dictionary) - that contain templates for indicator bands (my best attempt at describing this). For example, 0 to 50 is low, 51 to 89 is proficient, 90-100 is advanced.The collections look like this:I’m trying to write an aggregation that will output the following:So, the goal here is to take the test record, lookup the associated dictionary record, find the proper default band group to use, and then evaluate the score to determine what background color to use.If you notice, in the dictionary, there are two fields (term_name and test_grade) that can either have a value, or a wildcard (*). I have the aggregation working perfectly if both fields have the wildcard, or both fields have a value - separately. No problem. However, once I try to include mixed case scenarios - it stops working, and I’m not sure why.This is what I’ve got so far.This is fantastic and fast. However, in the “stage in question” above, I need to account for all possibilities - including no possibilities.I’ve tried to use the $cond operator to set up a series of $if statements to run through the possibilities.However, it just skips everything. I’m trying to get to “if x conditions exist, use y set of criteria for the match.”It’s not working.I suspect I’m close - or very very far off.Adding the bands as a subdocument on the test record is not an option. These are flexible and can change. I need them applied at runtime.Any thoughts or insights would be greatly appreciated!",
"username": "Kyle_Holder"
},
{
"code": "{$facet: {\n ww: [\n {$match: {\n $expr:{\n $and: [\n {$eq:[\"$ad.default_bands.term_name\",{$literal:\"*\"}]},\n {$eq:[\"$ad.default_bands.grade\",{$literal:\"*\"}]},\n {$eq:[\"$ad.default_bands.field\", {$literal:\"percentile\"}]}\n ]\n }\n }}\n ],\n wv: [\n {$match: {\n $expr:{\n $and: [\n {$eq:[\"$ad.default_bands.term_name\",\"$term_name\"]},\n {$eq:[\"$ad.default_bands.grade\",{$literal:\"*\"}]},\n {$eq:[\"$ad.default_bands.field\", {$literal:\"percentile\"}]}\n ]\n }\n }}\n ],\n vw: [\n {$match: {\n $expr:{\n $and: [\n {$eq:[\"$ad.default_bands.term_name\",{$literal:\"*\"}]},\n {$eq:[\"$ad.default_bands.grade\",\"$test_grade\"]},\n {$eq:[\"$ad.default_bands.field\", {$literal:\"percentile\"}]}\n ]\n }\n }}\n ],\n vv: [\n {$match: {\n $expr:{\n $and: [\n {$eq:[\"$ad.default_bands.term_name\",\"$term_name\"]},\n {$eq:[\"$ad.default_bands.grade\",\"$test_grade\"]},\n {$eq:[\"$ad.default_bands.field\", {$literal:\"percentile\"}]}\n ]\n }\n }}\n ]\n }},\n\n {\n $project: {\n result: {\n $setUnion: [ \"$ww\", \"$wv\", \"$vw\", \"$ww\" ]\n }\n }\n }, \n {$unwind:\"$result\"},\n",
"text": "OK. I figured it out - it’s quite nice.Facets to the rescue!",
"username": "Kyle_Holder"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Help with Aggregation - self-referencing join | 2023-10-26T17:38:13.371Z | Help with Aggregation - self-referencing join | 147 |
null | [
"security",
"field-encryption"
] | [
{
"code": "",
"text": "Can we enable to encryption in community edition\nCan anyone please confirm",
"username": "Siva_Gandham1"
},
{
"code": "",
"text": "Encryption in transit (TLS), yes.Encryption at rest, no, this is only supported by Enterprise Edition. You can encrypt with OS/Filesystem tools though.In-Use Encryption for Queryable Encryption and Client Side Field Level Encryption are also available but Automatic Encryption is an Enterprise Edition feature.All these are described in the manual under encryption in the Security section.",
"username": "chris"
}
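For the encryption-in-transit part, which is available in the Community edition, the usual place to turn it on is mongod.conf. A minimal sketch, with placeholder certificate paths you would replace with your own files:

```yaml
net:
  port: 27017
  tls:
    mode: requireTLS
    certificateKeyFile: /etc/ssl/mongodb.pem   # server certificate + private key (placeholder path)
    CAFile: /etc/ssl/mongodbCA.pem             # CA used to validate peer certificates
security:
  authorization: enabled
```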
] | Can we enable to encryption in community edition | 2023-10-26T06:39:20.132Z | Can we enable to encryption in community edition | 185 |
[
"containers",
"kotlin"
] | [
{
"code": "org.bson.codecs.configuration.CodecConfigurationException: Can't find a codec for CodecCacheKey{clazz=class com.example.kotlinmongo.SampleEntity, types=null}.\n",
"text": "Good day All,\nI am using the latest Kotlin Coroutine driver (org.mongodb:mongodb-driver-kotlin-coroutine:4.10.2) to connect to my local MongoDB instance (running locally via docker).\nFollowing the quick start guides I am able to connect to DB, send a ping and create a new empty collection with no issues.But when I try to insert a new record to my new collection, I get a codec not found error:My entity type is a fairly trivial data class, which contains a single String field and according to the following page kotlin data classes are supposed to be serialized/de-serialized out-of-the-box using a default codec.I tried a data class only with primitive types, I tried it with @BsonId field, tried with and without kotlinx serialization library in the project - same result.\nI am using JDK17 and Kotlin 1.8.\nAm I missing something or found a bug?Here’s a sample code repo to reproduce the problem:Contribute to Legga/kotlin-mongo development by creating an account on GitHub.Appreciate your help!",
"username": "Legga_L"
},
{
"code": "",
"text": "Update: it appears that this issue is only occurring when running in a SpringBoot.\nI tried to run my code from a plain gradle project and everything worked fine.\nNote: I did not use Spring Data (which has its own mongo libs) at all.\nHaven’t got to the bottom of why this is happening under Spring though…",
"username": "Legga_L"
},
{
"code": "api(\"org.mongodb:bson-kotlin:4.11.0\")package cz.pos.library.mongo\n\nimport com.mongodb.ConnectionString\nimport com.mongodb.MongoClientSettings\nimport com.mongodb.reactivestreams.client.MongoClients\nimport org.bson.UuidRepresentation\nimport org.bson.codecs.configuration.CodecRegistries.fromProviders\nimport org.bson.codecs.configuration.CodecRegistries.fromRegistries\nimport org.bson.codecs.configuration.CodecRegistry\nimport org.bson.codecs.kotlin.DataClassCodecProvider\nimport org.springframework.context.annotation.Bean\nimport org.springframework.context.annotation.Configuration\n\n@Configuration\nclass MongoConfig {\n\n var databaseName: String = \"appDatabase\"\n var connectionString: String = \"mongodb://admin:admin@localhost:27017/\"\n\n fun getClientSettings() = MongoClientSettings.builder()\n .uuidRepresentation(UuidRepresentation.STANDARD)\n .applyConnectionString(ConnectionString(connectionString))\n .codecRegistry(getCodecRegistry())\n .build()\n\n fun getCodecRegistry(): CodecRegistry {\n return fromRegistries(\n MongoClientSettings.getDefaultCodecRegistry(),\n fromProviders(DataClassCodecProvider())\n )\n }\n\n @Bean\n fun mongoClient() = MongoClients.create(getClientSettings())\n\n @Bean\n fun mongoDatabase() = mongoClient().getDatabase(databaseName)\n\n}\n",
"text": "I found the solution: You need to register a codec from api(\"org.mongodb:bson-kotlin:4.11.0\").\nAnd then:",
"username": "Jan_Rahm"
}
] | Kotlin Coroutine Driver can't serialise a simple data class | 2023-09-02T12:53:50.870Z | Kotlin Coroutine Driver can’t serialise a simple data class | 504 |
|
null | [
"aggregation",
"node-js",
"mongoose-odm"
] | [
{
"code": "const db = mongoose.connection.useDb(\"company\")\nconst collection = db.model('...\n",
"text": "Hi community,We have a NodeJS + Express Rest API application that manages requests for multiple clients. We have separated our customers data in own databases. Based on the incoming request, we can easily decide which database we have to use.Now let’s consider how we can aggregate database connections to avoid high memory consumption:The simplest but, in our opinion, least performant solution: We create a new connection to the customer database per REST call in order to access customer-specific data.We create a Mongoose instance and switch databases using .useDb() as follows:Is it ensured that the data does not mix if there are two incoming connections from different customers at the same time and the database is changed in between?The same database is used but different collection prefixes are used. However, there is no easy way to determine data consumption per customer (compared to database size).Another consideration was to introduce a “customer key” and store all data in the same collection. However, a level of security feels lost there. Same loss of consumption reporting feature.Are there any better approaches? Any ideas about this?I’m looking forward to a swarm of knowledgeable reactions Greetings\nTimo",
"username": "tkempken"
},
{
"code": "useDb",
"text": "Hello @tkempken, Welcome to the MongoDB community forum,Is it ensured that the data does not mix if there are two incoming connections from different customers at the same time and the database is changed in between?I think you can use useDb method to switch the database because it uses a single connection pool, it does not mix data because nodejs is built on a non-blocking, event-driven architecture, which allows it to handle multiple concurrent client requests without conflicts.I am writing an article about multiple Mongodb connections in a single application, but currently, I have this example in GitHub.",
"username": "turivishal"
},
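To make the single-pool idea concrete, a minimal Express sketch of the useDb() pattern might look like the following. The tenant header name, the example schema and the connection string are assumptions for illustration, and it assumes Mongoose 6 or later; the useCache option keeps Mongoose from rebuilding the tenant-scoped connection object on every request.

```js
const express = require('express');
const mongoose = require('mongoose');

const productSchema = new mongoose.Schema({ name: String, price: Number });
const app = express();

// Hypothetical mapping from the incoming request to a tenant database name.
const tenantDbName = (req) => `tenant_${req.header('x-tenant-id') || 'default'}`;

app.use((req, res, next) => {
  // Same underlying connection pool, different database per request.
  const db = mongoose.connection.useDb(tenantDbName(req), { useCache: true });
  req.Product = db.models.Product || db.model('Product', productSchema);
  next();
});

app.get('/products', async (req, res) => {
  res.json(await req.Product.find().lean());
});

mongoose.connect('mongodb://localhost:27017/admin')
  .then(() => app.listen(3000));
```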
{
"code": "",
"text": "You’ve refactored your application into a multi-tenant setup, mainly to ensure that each client has access only to their specific data. While implementing multi-tenancy with a “customer key” is relatively straightforward, the crucial aspect is how you plan to utilize and manage this data.As your user base grows, you’ll likely encounter challenges related to data and schema merging. Ensuring proper data segregation and access control from the outset is essential to avoid complications as your system scales.For example if you pan to provide data for using in some power BI - different data makes your life easer.",
"username": "Donis_Rikardo"
}
] | NodeJS Express with separate databases | 2023-10-25T06:57:13.878Z | NodeJS Express with separate databases | 176 |
null | [
"node-js",
"mongoose-odm"
] | [
{
"code": " // config variables\n const PORT: string = config.get('PORT');\n const MODE: string = config.get('MODE');\n const SESSION_SECRET: string = config.get('SESSION_SECRET');\n const CLIENT_DOMAIN: string = config.get('CLIENT_DOMAIN');\n const ORDER_CLIENT_DOMAIN: string = config.get('ORDER_CLIENT_DOMAIN');\n const AUTH_COOKIE_MAX_AGE: number = config.get('AUTH_COOKIE_MAX_AGE');\n const mongoUrl: string = config.get('mongoURI');\n \n const app = express();\n const server = createServer(app);\n const io = initializeSocketServer(server, {\n \tcors: {\n \t\torigin: [CLIENT_DOMAIN, ORDER_CLIENT_DOMAIN],\n \t},\n });\n \n //-----------------------------Middleware Start------------------//\n connectDB();\n",
"text": "I’m working on a Node.js project using Mongoose and MongoDB, and I have a specific use case where I need to create dynamic database connections for different branches. Here’s what I’m trying to achieve:I have a main database called adminBranches, which stores information\nabout all the available branches.When a user logs in, I want to fetch the list of branches available\nto that user.The user should then be able to select a branch to work with, and the\napplication should establish a connection to the corresponding\nbranch-specific database.The selected database connection should be stored in the user’s\nsession for the duration of their session.I’ve attempted to set up a global connection, but I’m now looking for guidance on how to create these dynamic connections for each user and branch. Can anyone provide insights or code examples on how to achieve this in Node.js and Mongoose?For now I have a connection like that:",
"username": "Donis_Rikardo"
},
{
"code": "",
"text": "create dynamic database connections for different branchesDo you mean to different servers/clusters or simply different database/collection name space?Different database/collection is really just a name and this should be easily part of the session data.Different servers/clusters (cluster to make the rest of text lighter) is a different ball game. Switching to different database/collection within the same cluster is trivial; database and collection names can easily be part of the session data since it is only a identifier. Switching to a different cluster connection is more involve because there are internal structures (like send/receive buffers/credentials) associated with each connections. Some design patterns like momoization could help achieving this. In the literature, it is mostly use to cache the evaluation of a complex function but I consider establishing a connection a complex function. Is it like a pool of connection pool where each name is associated to a different pool.But your sample seems to imply a single URI so your case is the trivial one where use a map to map (I do not know how to express that in any other way) a branch name into a list of database/collection names.",
"username": "steevej"
},
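A small sketch of that memoization idea, where each branch gets exactly one cached connection (and therefore one pool), might look like this with Mongoose 6+. The branchUri() lookup is hypothetical — in practice it would resolve the branch against the adminBranches database — and the pool size is only an example.

```js
const mongoose = require('mongoose');

const branchConnections = new Map();

// Hypothetical: resolve a branch name to its cluster URI (e.g. from adminBranches).
async function branchUri(branchName) {
  return process.env[`BRANCH_URI_${branchName.toUpperCase()}`];
}

async function getBranchConnection(branchName) {
  if (!branchConnections.has(branchName)) {
    const uri = await branchUri(branchName);
    // createConnection opens its own pool, so memoize one connection per branch.
    const conn = await mongoose.createConnection(uri, { maxPoolSize: 10 }).asPromise();
    branchConnections.set(branchName, conn);
  }
  return branchConnections.get(branchName);
}

module.exports = { getBranchConnection };
```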
{
"code": "",
"text": "In general, I already implemented it and it is not just a change in the name of the connection.So the basics of multi-tenant here: Multitenant Node.js Application with mongoose (MongoDB) | by Prakhar Jain | Connected Lab TechBlog | Medium.So, the most of painful for me here was accepting what for each database we should have connections with ALL models for each but in cache. Provided solution is not working for population, because when it do population we should set that model to use (that tenant).so in code unfortunately we have to register ALL model in first request for that tenant.",
"username": "Donis_Rikardo"
}
] | Creating Dynamic MongoDB Connections in Node.js and Mongoose | 2023-10-20T23:10:58.226Z | Creating Dynamic MongoDB Connections in Node.js and Mongoose | 251 |
null | [
"atlas-cluster",
"next-js"
] | [
{
"code": "MONGODB_URI = mongodb+srv://<user>:<password>@rgcluster.x7ixk.mongodb.net/?retryWrites=true&w=majorityMONGODB_URI = mongodb+srv://<user>:<password>@rgcluster.x7ixk.mongodb.net/sample_mflix?retryWrites=true&w=majorityok: 0,\n code: 8000,\n codeName: 'AtlasError',\n [Symbol(errorLabels)]: Set(1) { 'HandshakeError' }\n}\n",
"text": "Hi all, I’m new to MongoDB and Next.js and am following the MongoDB Next.js integration tutorial I have changed the env file name to .env.local and I am replacing the MONGODB_URI with the same I got from MongoDB atlas, connect to your application, and clicking on Drivers.I have tried with this URI:\nMONGODB_URI = mongodb+srv://<user>:<password>@rgcluster.x7ixk.mongodb.net/?retryWrites=true&w=majorityI also tried with this URI to mention the the collection name:\nMONGODB_URI = mongodb+srv://<user>:<password>@rgcluster.x7ixk.mongodb.net/sample_mflix?retryWrites=true&w=majorityboth are not working and showing that I am not connected to the database.I also get this error in the terminal:I have disabled my firewall as I thought it might be affecting the connection, but no success.Do you know what is the problem?",
"username": "Remy_Ghalayini"
},
{
"code": "mongoshmongodb+srv://test:test@cluster0.4zdj5vb.mongodb.net/?retryWrites=true&w=majority",
"text": "Hi @Remy_Ghalayini and welcome to MongoDB community forums!!Based on the error message you have posted in the above thread, it seems there could be an authentication error.\nIt would be helpful if you could clarify more on the following:Below is an example that shows the correct connection string to use while making the connection with the database:mongodb+srv://test:test@cluster0.4zdj5vb.mongodb.net/?retryWrites=true&w=majority\nwhere test and test are my username and password.I believe the documentation on Configuring user on Atlas and Whitelisting the IPs would be a good starting point to read.Warm Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "It would be helpful if you could clarify more on the following:Hi Aaasawari, answering your question:Note that I am using windows 11.Screenshot 2023-10-20 1701451650×418 15.1 KB",
"username": "Remy_Ghalayini"
},
{
"code": "{ ok: 0,\n code: 8000,\n codeName: 'AtlasError',\n connectionGeneration: 0,\n [Symbol(errorLabels)]: Set(2) { 'HandshakeError', 'ResetPool' }\n}\n ⚠ Fast Refresh had to perform a full reload. Read more: https://nextjs.org/docs/messages/fast-refresh-reload\nMongoServerError: bad auth : Authentication failed.\n",
"text": "Hi @Remy_Ghalayini and thank you for getting back.I tried to follow the same documentation and with correct username and password I was successfully able to make the connection.\nHowever, when I mis spelled my username in the .env. I see the same error as yours.Also, I see the error logs as:Could you share the error logs that you are observing while you perform the npm start.\nAlso, could you share the node js version that you are using.Regards,\nAasawari",
"username": "Aasawari"
},
{
"code": "MongoServerError: bad auth : authentication failed\n at Connection.onMessage (C:\\Users\\remyg\\Programming\\githubRepositories\\ShoppingList_Next_mongodb\\node_modules\\mongodb\\lib\\cmap\\connection.js:207:30)\n at MessageStream.<anonymous> (C:\\Users\\remyg\\Programming\\githubRepositories\\ShoppingList_Next_mongodb\\node_modules\\mongodb\\lib\\cmap\\connection.js:60:60)\n at MessageStream.emit (node:events:514:28)\n at processIncomingData (C:\\Users\\remyg\\Programming\\githubRepositories\\ShoppingList_Next_mongodb\\node_modules\\mongodb\\lib\\cmap\\message_stream.js:132:20)\n at MessageStream._write (C:\\Users\\remyg\\Programming\\githubRepositories\\ShoppingList_Next_mongodb\\node_modules\\mongodb\\lib\\cmap\\message_stream.js:33:9)\n at writeOrBuffer (node:internal/streams/writable:399:12)\n at _write (node:internal/streams/writable:340:10)\n at Writable.write (node:internal/streams/writable:344:10)\n at TLSSocket.ondata (node:internal/streams/readable:785:22)\n at TLSSocket.emit (node:events:514:28)\n at addChunk (node:internal/streams/readable:343:12)\n at readableAddChunk (node:internal/streams/readable:316:9)\n at Readable.push (node:internal/streams/readable:253:10)\n at TLSWrap.onStreamRead (node:internal/stream_base_commons:190:23) {\n ok: 0,\n code: 8000,\n codeName: 'AtlasError',\n [Symbol(errorLabels)]: Set(1) { 'HandshakeError' }```",
"text": "hi Aasawari,Thanks for the reply. I am using node v20.7.0. I am definitely sure of my username and password and double-checked them multiple times. I have no console errors, but I get this error in the terminal when i navigate to localhost.",
"username": "Remy_Ghalayini"
},
{
"code": "",
"text": "Hi @Remy_Ghalayini\nThank you for sharing the logsBased on the error logs, it seems there is an authentication error in the connection URI.\nCan you help me with the code snippet where you are trying to make the connection (possibly the .env file).Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Hi Aasawari,I found the issue. I hardcoded the uri in server/index.js file and I was successfully connected. When I tried to debug the issue, I noticed that my password was not retrieved properly from .env.local file. The reason is that my password contains a special character “$” and this was not read properly. So I changed my password and it is now working.Thank you for your help and assistance.",
"username": "Remy_Ghalayini"
}
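Two general take-aways from that root cause: reserved characters in a password ($, @, :, /, %) should be percent-encoded before they are placed in a connection string, and some .env loaders treat $ as the start of a variable reference, so the raw value may need quoting or escaping in the file. A small sketch of the encoding side (the cluster host below is a placeholder):

```js
// Percent-encode credentials before building the URI so reserved characters survive parsing.
const user = encodeURIComponent(process.env.MONGODB_USER);
const pass = encodeURIComponent(process.env.MONGODB_PASSWORD); // "p$ss@1" -> "p%24ss%401"
const uri = `mongodb+srv://${user}:${pass}@cluster0.example.mongodb.net/?retryWrites=true&w=majority`;
```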
] | Cannot connect to MongoDB atlas when following the Next.js MongoDB tutorial | 2023-10-17T09:59:58.755Z | Cannot connect to MongoDB atlas when following the Next.js MongoDB tutorial | 366 |
null | [
"queries",
"java",
"crud",
"android"
] | [
{
"code": "Bson filter = Filters.eq(\"_id\", userid);\nBson update = Updates.combine(\n Updates.inc(\"numberToIncrement\", 1),\n Updates.push(\"animalArray\", \"fish\")\n );\nusersCollection.updateOne(filter , update).getAsync(task -> { ...\nBSON_CODEC_NOT_FOUND(realm::app::CustomError:1100): Could not resolve codec for SimpleEncodingFilter\norg.bson.codecs.configuration.CodecConfigurationException: Can't find a codec for CodecCacheKey{clazz=class com.mongodb.client.model.Filters$SimpleEncodingFilter, types=null}.\n at org.bson.internal.ProvidersCodecRegistry.lambda$get$0$org-bson-internal-ProvidersCodecRegistry(ProvidersCodecRegistry.java:87)\nBSON_CODEC_NOT_FOUND(realm::app::CustomError:1100): Could not resolve codec for CompositeUpdate\norg.bson.codecs.configuration.CodecConfigurationException: Can't find a codec for CodecCacheKey{clazz=class com.mongodb.client.model.Updates$CompositeUpdate, types=null}.\n at org.bson.internal.ProvidersCodecRegistry.lambda$get$0$org-bson-internal-ProvidersCodecRegistry(ProvidersCodecRegistry.java:87)\npackagingOptions {\n exclude 'META-INF/native-image/org.mongodb/bson/native-image.properties'\n}\nExecution failed for task ':app:mergeDebugJavaResource'.\n> A failure occurred while executing com.android.build.gradle.internal.tasks.MergeJavaResWorkAction\n > 2 files found with path 'META-INF/native-image/org.mongodb/bson/native-image.properties' from inputs:\n - E:\\.gradle\\caches\\transforms-3\\a6834347cc909f58f15e170b45aed9c5\\transformed\\jetified-mongodb-driver-core-4.11.0.jar\n - E:\\.gradle\\caches\\transforms-3\\7887ee97f50a4509db80ce209ca44106\\transformed\\jetified-bson-4.11.0.jar\n Adding a packagingOptions block may help, please refer to\n https://developer.android.com/reference/tools/gradle-api/8.1/com/android/build/api/dsl/ResourcesPackagingOptions\n for more information\n",
"text": "Hello I am using mongo realm for my Android Studio application. When using .updateOne method to update a Document inside one of my “users” Collection, I use this code:In Android Studio it shows me no errors, but when I try to run the code, the error for “Filters” comes:and a similar error emerges for the “Updates”-part when I replace “Bson filter = Filters.eq(”_id\", userid);\" with “Document filter = new Document(”_id\", userid);\":My specs:\nAndroid Gradle Plugin Version 8.1.2 . Gradle Version 8.4 . Compile SDK Version 34. Java Version 17. For implementation I use in my build.gradle: implementation ‘org.mongodb:mongodb-driver-sync:4.10.2’\n.Additional information:\nBefore getting this error, I had to include in my build.gradle:because I got the error:Can anyone help me in this ??? I am desperate and haven’t found helpful info about this affair yet.\nBest regards and thanks in advance <3\n",
"username": "Murian-Coush_Addem"
},
{
"code": "usersCollection = mongoDatabase.getCollection(\"users\");CodecRegistry defaultJavaCodecRegistry = MongoClientSettings.getDefaultCodecRegistry();\n usersCollection = mongoDatabase.getCollection(\"users\").withCodecRegistry(defaultJavaCodecRegistry);\n",
"text": "17:37 CET just found the solution. The problem laid in the initialization of the usersCollection. Before, I just did:usersCollection = mongoDatabase.getCollection(\"users\");Now, I changed the code to incorporate a CodecRegistry:My god this solution might seem obvious to people, but I really have burned some hours to find that out. But there’s just no other people having that precitation online, so you gotta find out what is the cause yourself ",
"username": "Murian-Coush_Addem"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Android Studio: BSON_CODEC_NOT_FOUND error when using "Updates" or "Filters" | 2023-10-26T11:34:17.998Z | Android Studio: BSON_CODEC_NOT_FOUND error when using “Updates” or “Filters” | 158 |
null | [
"database-tools",
"backup"
] | [
{
"code": "2023-08-21T09:18:15.777+0100\tFailed: error creating intents to dump: error creating intents for database config: error getting collections for database `config`: (Unauthorized) not authorized on config to execute command { listCollections: 1, filter: {}, cursor: {}, lsid: { id: UUID(\"5552942c-94f3-4c0b-ac87-63c67b236814\") }, $db: \"config\" }\nmongodump100.8.0db 6.0",
"text": "I am trying to dump my db but am getting the following error:Reading this post it seems like this can be caused by my mongodump version not being compatible with my db version (7.0.0). My mongodump version is 100.8.0 the latest, which in its own docs only advertises support up to db 6.0. How can I dump my db?",
"username": "Josh_Kirk"
},
{
"code": "",
"text": "it seems is’n’t not compatible with version 7.0 or at least this is what the official documention said: https://www.mongodb.com/docs/database-tools/installation/installation/#mongodb-server-compatibilityplease, if you find something else, share with me to my email: velasquez.julio@gmail.com.Thanks!",
"username": "Julio_Velasquez"
},
{
"code": "",
"text": "Try mongodump without authorisation (If mongodump version issues it will not dump the documents)If without authorisation it’s work create other user and provide root permissionLast restart machine",
"username": "Bhavya_Bhatt"
},
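Independent of the version question, the "not authorized on config" error can also be sidestepped by scoping the dump to the application database, or by granting the dumping user the built-in backup role, rather than dropping authentication entirely. A sketch with placeholder credentials and database name:

```sh
# Dump only one database, so no privileges on the internal `config` database are needed:
mongodump --uri="mongodb://backupUser:secret@localhost:27017/?authSource=admin" \
          --db=myAppDb --out=./dump
```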
{
"code": "",
"text": "Thanks @Bhavya_Bhatt! I see that the installation docs for the Database Tools now do say that MongoDB 7.0 is supported. Could it be that the docs linked by @Josh_Kirk have not been updated to reflect this change?",
"username": "Max_Anderson"
},
{
"code": "",
"text": "I opened an issue with MongoDB, it seems to be a docs issue. You can follow it here https://jira.mongodb.org/browse/SERVER-80299",
"username": "Josh_Kirk"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongodump authorization error on MongoDB 7.0.0 | 2023-08-21T08:28:58.574Z | Mongodump authorization error on MongoDB 7.0.0 | 634 |
null | [
"java",
"android",
"kotlin"
] | [
{
"code": "",
"text": "Hello,Sorry in advance if this isn’t the right place to post this, I couldn’t find a real issue tracker (maybe I didn’t search enough). Please redirect me if that is the case.I have been using Java Realm in the past for an Android app, that I migrated recently to Kotlin Multiplatform in order to make it compatible with iOS. After testing, it appears that by running it in an iOS environment, my memory was getting saturated by a leak, which affected both the RAM, and the disk (which it would fill up entirely in a few weeks of testing, all 64Gb). So I made a really simple downgraded version of it which showcases what exactly fails in it. In short, by calling the realm.open and realm.close methods, I leak memory (instances of realm_scheduler according to my memory graph on Xcode, but anything related to realms seems to create a way too large amount of instances in it).Here’s a repo to reproduce the error : GitHub - ALXgba/realmKMMIssue: Memory leak using Kotlin API in a KMM project, when run in iOS environment.As you can see, my Kotlin code only opens a realm instance with a test class before closing it. And I do this every .1s in my iOS code. That’s it. If the same operation is done in its Android counterpart, there is no memory leak. (Sorry, I realize that I didn’t edit this code to also execute the same way in Android. If you wish to test it, you can simply instantiate a RealmLeaker and have it run RealmLeaker.leak() in a while loop).Possible duplicate without resolution → Memory leak in iOS Realm sdk - #3 by Rajat_Dhasmana",
"username": "gba_ALX"
},
{
"code": "",
"text": "Hi @gba_ALXThis should have been fixed by Fix realm scheduler leak by clementetb · Pull Request #1463 · realm/realm-kotlin · GitHub available from 1.11.0. Would you mind giving it a try and report back?",
"username": "Claus_Rorbech"
},
{
"code": "",
"text": "Tried updating realms to 1.11.0, the issue is still there. I pushed my modifications to the git repo as well.",
"username": "gba_ALX"
},
{
"code": "",
"text": "@Claus_Rorbech I opened a proper issue, please see KMM Realm Memory Leak (RAM + Disk) issue · Issue #1501 · realm/realm-kotlin · GitHub",
"username": "gba_ALX"
},
{
"code": "",
"text": "For anyone experimenting this issue :\nThe issue has been closed by the following pull request : Fix darwin native leak by clementetb · Pull Request #1530 · realm/realm-kotlin · GitHub\nIt isn’t pushed in an actual release yet, but using the 1.11.2-SNAPSHOT linked with its current branch fixes the problem. (GitHub - realm/realm-kotlin at releases; see “Using Snapshots” in the README.md for more on that).",
"username": "gba_ALX"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm Memory Leak (RAM + Disk) issue | 2023-09-05T09:26:15.860Z | Realm Memory Leak (RAM + Disk) issue | 480 |
null | [] | [
{
"code": "\"Error getting resource : POST https://cloud.mongodb.com/api/atlas/v1.0/groups/6523c0dd3746682b04c83c9b/databaseUsers: 400 (request \"ATLAS_INVALID_AUTH_SETTINGS\") Invalid authentication settings. Users\ncannot be configured for multiple authentication mechanisms (SCRAM, MONGODB_AWS).\"\n .dbUserProps(DatabaseUserProps.builder()\n .awsiamType(CfnDatabaseUserPropsAwsiamType.ROLE)\n .username(\"role name\")\n .databaseName(\"admin\")\n .roles(List.of(RoleDefinition.builder()\n .databaseName(\"admin\")\n .roleName(\"readWriteAnyDatabase\")\n .build()))\n .build())\n",
"text": "Creating Atlas cluster cloudformation resource with CDK fails on the database user creation. Is something missing or it is a configuration on the organization:CDK code snippet:",
"username": "Nikita_Levi"
},
{
"code": "",
"text": "Hi @Nikita_Levi and welcome to MongoDB community forums!!Based on the above error message, it looks like you are trying to set the IAM role while creating the user.\nAs mentioned in the API documentation, you would need to set the ARN in the username section.Please let us know if this works.\nIf not, could you help me with the documentation you are referring to create the user?Warm regards\nAasawari",
"username": "Aasawari"
},
{
"code": "awsIamUser{\n \"AWSTemplateFormatVersion\": \"2010-09-09\",\n \"Description\": \"Returns all database users that belong to the specified project. To use this resource, the requesting API Key must have the Project Read Only role. This resource doesn't require the API Key to have an Access List.\",\n \"Parameters\": {\n \"ProjectId\": {\n \"Type\": \"String\",\n \"Description\": \"\",\n \"ConstraintDescription\": \"\"\n },\n \"Profile\": {\n \"Type\": \"String\",\n \"Description\": \"\",\n \"ConstraintDescription\": \"\"\n }\n },\n \"Mappings\": {},\n \"Resources\": {\n \"ScramUser\": {\n \"Type\": \"MongoDB::Atlas::DatabaseUser\",\n \"Properties\": {\n\"MongoAtlasDatabaseUser\": {\n \"Type\": \"MongoDB::Atlas::DatabaseUser\",\n \"Properties\": {\n \"AWSIAMType\": \"ROLE\",\n \"DatabaseName\": \"auditLog\",\n \"LdapAuthType\": \"NONE\",\n \"X509Type\": \"NONE\",\n \"ProjectId\": \"64e9191111111118119fcfb6\", //Our project ID (Updated differently)\n \"Roles\": [\n {\n \"DatabaseName\": \"admin\",\n \"RoleName\": \"readAnyDatabase\"\n }\n ],\n \"Username\": \"arn:aws:iam::071429502830:role/common-insurance-auto-ame-cancelautoaohuonmainlamb-SZT2B11YOZ2M\",\n \"Profile\": \"atlasMongoDbAuditLog\"\n },\n \"Metadata\": {\n \"aws:cdk:path\": \"MongoAtlas-Database-Stack-us-west-2/Mongo Atlas DatabaseUser\"\n }\n }\n",
"text": "Hi @Aasawari , we facing the same issue while adding an “IAM Role” as a Database User. We are following this documentation. https://github.com/mongodb/mongodbatlas-cloudformation-resources/blob/master/cfn-resources/database-user/README.mdThis is the template that’s expected (awsIamUser) and the template that’s created by our stack is almost the same template.",
"username": "Darmadeja_Venkatakrishnan"
},
{
"code": "",
"text": "Hi @aasawari_sahasrabuddhe\nThe username is set to an IAM role ARN. We use Mongo Atlas CDK resources to create a new cluster: GitHub - mongodb/awscdk-resources-mongodbatlas: MongoDB Atlas AWS CDK Resources.\nI see that a default user gets created together with the project - ‘atlas-user’ with SCRAM authentication method. This could be causing the error.",
"username": "Nikita_Levi"
}
] | Atlas CloudFormation resources fails on user creation in CDK | 2023-10-09T09:14:53.360Z | Atlas CloudFormation resources fails on user creation in CDK | 438 |
null | [
"java",
"connecting",
"spring-data-odm"
] | [
{
"code": "Timed out after 30000 ms while waiting for a server that matches ReadPreferenceServerSelector{readPreference=primary}.\n\nClient view of cluster state is {type=REPLICA_SET, servers=[\n {address=mongo01:27017, type=UNKNOWN, state=CONNECTING*},\n {address=mongo02:27017, type=REPLICA_SET_SECONDARY, roundTripTime=2.5 ms, state=CONNECTED},\n {address=mongo03:27017, type=REPLICA_SET_SECONDARY, roundTripTime=1.1 ms, state=CONNECTED}\n];\n",
"text": "I am getting the following errorusing spring data mongoits a multitenant environment\nchange/create the mongoclient connection per request. Database/user/password change as tenant request.Issue is intermittent and keeps happening randomly.",
"username": "Kushal_Somkuwar"
},
{
"code": "mongo01",
"text": "Hi @Kushal_Somkuwar,Looking at the logging message - the driver is having trouble connecting to the primary node. The client view of the cluster state shows the state as seen from the Java driver (which spring data uses underneath).Connecting appears to take longer than the 30s timeout to connect to the mongo01 node. So things to check are the server logs - has there been a change in the replicaset topology - eg new primary and secondaries? Also have there been any networking issues between the application and the mongodb nodes?Ross",
"username": "Ross_Lawley"
},
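A quick way to check the topology point from the database side is to run rs.status() in the shell against any reachable member and compare it with the host list the driver reports:

```js
// Prints each member's address, state (PRIMARY/SECONDARY/...) and health flag.
rs.status().members.forEach(m => print(m.name, m.stateStr, m.health));
```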
{
"code": "",
"text": "Hi @Ross_Lawley ,so application connection with mongodb works fine initially but if the data json is big than it become slow for application and start showing the above error.we have checked the server logs there so no such of connection and I primary node is running fine there is no issue with cluster as well we suspect the same but why would it connect initially if there is a network issue.Kushal",
"username": "Kushal_Somkuwar"
},
{
"code": "",
"text": "Hi @Kushal_Somkuwar,If the driver initially connects then later cannot connect, then that points to some networking issue. It may be intermittent and not happen all the time. But from the drivers perspective it is timing out trying to select a server. The log message just shows you the current view of the servers after selection has failed, which is it is in the process of connecting to the mongo1 node.Ross",
"username": "Ross_Lawley"
},
{
"code": "",
"text": "Hi @Ross_LawleyI also think the same but we are not getting any exception on mongodb server logs and application just get that error. Is there a way to identify what is exactly is the network issue ?",
"username": "Kushal_Somkuwar"
},
{
"code": "",
"text": "@Kushal_Somkuwar We are facing a similar issue and we are connecting to mongo from Google Cloud.\nDid you find out what was the issue? If so, how did you resolve it?",
"username": "Sandhya_Kripalani"
},
{
"code": "",
"text": "@Ross_Lawley Kindly help. Below is the stack trace\n“No server chosen by ReadPreferenceServerSelector{readPreference=primary} from cluster description ClusterDescription{type=REPLICA_SET, connectionMode=MULTIPLE, serverDescriptions=[ServerDescription{address=xxxx, type=UNKNOWN, state=CONNECTING, exception={java.lang.OutOfMemoryError: Java heap space}}, ServerDescription{address=xxx, type=UNKNOWN, state=CONNECTING, exception={java.lang.OutOfMemoryError: Java heap space}}, ServerDescription{address=xxxx, type=REPLICA_SET_SECONDARY, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=8, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=1401391508, setName=xxx’, canonicalAddress=xxx, hosts=[xxx], passives=[], arbiters=[], primary=xxxx’, tagSet=TagSet{[Tag{name=‘nodeType’, value=‘ELECTABLE’}, Tag{name=‘provider’, value=‘GCP’}, Tag{name=‘region’, value=‘EASTERN_US’}, Tag{name=‘workloadType’, value=‘OPERATIONAL’}]}, electionId=null, setVersion=6, lastWriteDate=Wed Apr 13 03:33:15 UTC 2022, lastUpdateTimeNanos=6819537012507}]}. Waiting for 30000 ms before timing out”",
"username": "Sandhya_Kripalani"
},
{
"code": "java.lang.OutOfMemoryError: Java heap space",
"text": "Hi @Sandhya_Kripalani,I noticed this java.lang.OutOfMemoryError: Java heap space in the error message.I’ve never seen that reported before in such a way (as part of the No server chosen exception) but its a sign that your JVM needs more resources.Ross",
"username": "Ross_Lawley"
},
{
"code": "",
"text": "@Sandhya_Kripalani you will need to check mongoclient and the number of connections it is making with MongoDB it should be on the higher side. This issue comes when your application keep creating mongoclient object and not closing the existing mongoclient connections. Hope that will help",
"username": "Kushal_Somkuwar1"
},
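To illustrate the "build it once and reuse it" point, a minimal sketch with the sync Java driver could look like this (the connection string is a placeholder). Each MongoClient owns its own connection pool, so application code should share a single instance instead of constructing one per request:

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;

public final class MongoHolder {
    // One client -> one connection pool for the whole application.
    private static final MongoClient CLIENT =
            MongoClients.create("mongodb://appUser:secret@host1:27017,host2:27017/?replicaSet=rs0&authSource=admin");

    private MongoHolder() {}

    public static MongoClient client() {
        return CLIENT;
    }
}
```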
{
"code": "org.springframework.data.mongodb.core.ReactiveMongoTemplate - Streaming aggregation: [{ \"$match\" : { \"type\" : { \"$in\" : [\"INACTIVE_SITE\", \"DEVICE_NOT_BILLED\", \"NOT_REPLYING_POLLING\", \"MISSING_KEY_TECH_INFO\", \"MISSING_SITE\", \"ACTIVE_CIRCUITS_INACTIVE_RESOURCES\", \"INCONSISTENT_STATUS_VALUES\", \"TEST_SUPERVISION_RANGE\"]}}}, { \"$project\" : { \"extractionDate\" : 1, \"_id\" : 0}}, { \"$group\" : { \"_id\" : null, \"result\" : { \"$max\" : \"$extractionDate\"}}}, { \"$project\" : { \"_id\" : 0}}] in collection kpi\norg.mongodb.driver.cluster - No server chosen by com.mongodb.reactivestreams.client.internal.ClientSessionHelper$$Lambda$1183/0x000000080129dd60@292878db from cluster description ClusterDescription{type=REPLICA_SET, connectionMode=MULTIPLE, serverDescriptions=[ServerDescription{address=10.235.79.213:7915, type=UNKNOWN, state=CONNECTING}, ServerDescription{address=10.235.79.89:7915, type=UNKNOWN, state=CONNECTING}]}. Waiting for 30000 ms before timing out\nluster - No server chosen by ReadPreferenceServerSelector{readPreference=primary} from cluster description ClusterDescription{type=REPLICA_SET, connectionMode=MULTIPLE, serverDescriptions=[ServerDescription{address=10.235.79.213:7915, type=UNKNOWN, state=CONNECTING}, ServerDescription{address=10.235.79.89:7915, type=UNKNOWN, state=CONNECTING}]}. Waiting for 30000 ms before timing out\ncom.obs.dqsc.interceptor.GlobalExceptionHandler - Timed out after 30000 ms while waiting for a server that matches ReadPreferenceServerSelector{readPreference=primary}. Client view of cluster state is {type=REPLICA_SET, servers=[{address=10.235.79.213:7915, type=UNKNOWN, state=CONNECTING}, {address=10.235.79.89:7915, type=UNKNOWN, state=CONNECTING}]; nested exception is com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting for a server that matches ReadPreferenceServerSelector{readPreference=primary}. Client view of cluster state is {type=REPLICA_SET, servers=[{address=10.235.79.213:7915, type=UNKNOWN, state=CONNECTING}, {address=10.235.79.89:7915, type=UNKNOWN, state=CONNECTING}]\norg.springframework.web.servlet.mvc.method.annotation.ExceptionHandlerExceptionResolver - Resolved [org.springframework.dao.DataAccessResourceFailureException: Timed out after 30000 ms while waiting for a server that matches ReadPreferenceServerSelector{readPreference=primary}. Client view of cluster state is {type=REPLICA_SET, servers=[{address=10.235.79.213:7915, type=UNKNOWN, state=CONNECTING}, {address=10.235.79.89:7915, type=UNKNOWN, state=CONNECTING}]; nested exception is com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting for a server that matches ReadPreferenceServerSelector{readPreference=primary}. 
Client view of cluster state is {type=REPLICA_SET, servers=[{address=10.235.79.213:7915, type=UNKNOWN, state=CONNECTING}, {address=10.235.79.89:7915, type=UNKNOWN, state=CONNECTING}]]\norg.springframework.web.servlet.DispatcherServlet - Completed 500 INTERNAL_SERVER_ERROR\n @Override\n public MongoClient reactiveMongoClient() {\n ConnectionString connectionString = new ConnectionString(\n \"mongodb://\" + username + \":\" + password +\n \"@\" + host1 + \":\" + port\n + \",\" + host2 + \":\" + port +\n \"/?authSource=\" + authSource +\n \"&replicaSet=\" + replica\n );\n\n MongoClientSettings mongoClientSettings = MongoClientSettings.builder()\n .applyToSslSettings(builder ->\n builder.enabled(true).invalidHostNameAllowed(true)\n )\n .applyToConnectionPoolSettings(\n builder -> builder.minSize(10)\n .maxSize(100)\n .maxWaitTime(8,TimeUnit.MINUTES)\n )\n .applyToSocketSettings(builder -> builder.applySettings(SocketSettings.builder().connectTimeout(5,TimeUnit.MINUTES).readTimeout(5,TimeUnit.MINUTES).build()))\n .applyConnectionString(connectionString)\n .build();\n\n return MongoClients.create(mongoClientSettings);\n }\n",
"text": "Hello @Ross_Lawley ,I’m having the same issue here, but I use reactive mongodb in a spring boot application.This is my logs messages:And this is my mongodb config:I tried to use different networks to avoid this problem, bit it always gives me the same error.\nCould you help me please?",
"username": "Hassan_EL-MZABI"
},
{
"code": "",
"text": "Hi Fellow developers,I have recently started seeing this issues again as soon as I switched to ReactMongoTemplate.\nIn my case I am doing multiple find operations as part of flux streams and the connection does not go through after first few subscriptions. It also many times bring the whole cluster down.Please tell me if someone managed to fix this issue.",
"username": "Prateek_Sumermal_Bagrecha"
},
{
"code": "",
"text": "Facing similar issue, any solution?",
"username": "sunny_tyagi"
},
{
"code": "",
"text": "Did you find the root cause as similar issue i am facing.",
"username": "sunny_tyagi"
},
{
"code": "Timed out after 30000 ms while waiting for a server that matches ReadPreferenceServerSelector{readPreference=primary}.serverSelectionTimeout",
"text": "Hi @sunny_tyagi welcome to the community!By “similar issue”, I assume you see an error message that looks like this:Timed out after 30000 ms while waiting for a server that matches ReadPreferenceServerSelector{readPreference=primary}.Is this correct?If yes, then this is a typical timeout error that is shared across all official drivers, where it tries to connect to the primary member of a replica set, and gives up when it cannot find one after 30 seconds (default). See MongoClient settings serverSelectionTimeout.Typically this is caused by network issues where the driver cannot reach the servers. It could be caused by network partition, security settings that prevent the client to reach the server (e.g. IP whitelisting issues), DNS issues, blocked port, among many others.If your app used to be able to connect without issues and now it cannot, then perhaps there is something different now. Things to check may include whether you’re connecting from the same network as before, whether there are DNS changes to the server, whether security settings was changed in the server, or any other network reachability issues that wasn’t there before.If you have checked everything and all seems to be in order, I would suggest you to consult your network administrator to troubleshoot your connectivity issues.Just to be complete, if anyone else is encountering this error in the future, it’s best to create a new topic describing the exact error encountered, along with more specific information such as MongoDB version, driver version, topology description, and the complete error message, since everyone’s situation is different and the same error may be the result of two very different causes Best regards\nKevin",
"username": "kevinadi"
},
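For reference, the 30-second default mentioned above can be adjusted on the client. This is only a minimal sketch with the Java driver (the hostnames and the 60-second value are placeholders, not recommendations); raising the timeout does not fix an unreachable server, it only changes how long the driver waits before giving up.

```java
import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import java.util.concurrent.TimeUnit;

// Sketch: raise the server selection timeout from the 30s default.
// The same thing can be done via the connection string option serverSelectionTimeoutMS=60000.
MongoClientSettings settings = MongoClientSettings.builder()
        .applyConnectionString(new ConnectionString("mongodb://host1:27017,host2:27017/?replicaSet=rs0"))
        .applyToClusterSettings(builder -> builder.serverSelectionTimeout(60, TimeUnit.SECONDS))
        .build();
MongoClient client = MongoClients.create(settings);
```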
{
"code": "",
"text": "It was fixed my increasing the socket settings to the approximate time taken for cluster to come up again or reelect a leader. Also increase maxSize to like 10, spring could not handle 100 connections adequately. Now process is a bit slow but still more reliable.",
"username": "Prateek_Sumermal_Bagrecha"
},
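A rough sketch of the kind of change described above, using the same builder methods as the config posted earlier in this thread; the exact values (a pool of ~10, timeouts near your cluster's failover time) are assumptions to tune, not recommendations.

```java
// Sketch only: adjust pool size and socket timeouts on the existing MongoClientSettings builder.
MongoClientSettings settings = MongoClientSettings.builder()
        .applyConnectionString(connectionString)
        // smaller pool than the original 100; ~10 was reportedly easier for Spring to handle here
        .applyToConnectionPoolSettings(builder -> builder.minSize(5).maxSize(10))
        // socket timeouts roughly matching the time the cluster needs to re-elect a primary
        .applyToSocketSettings(builder -> builder
                .connectTimeout(30, TimeUnit.SECONDS)
                .readTimeout(60, TimeUnit.SECONDS))
        .build();
```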
{
"code": "",
"text": "could you please add the code snippet ? i am also facing the same issue",
"username": "Madhan_Kumar"
},
{
"code": "",
"text": "Hi, my configuration for the services that is using the mongodb service, using application.properties, to contact it with the uri mongodb://mongodbservice:27015/dbname. I found in the logs it is calling localhost:27017. Any clue?",
"username": "Muhammed_Alghwell"
}
] | Timed out after 30000 ms while waiting for a server that matches ReadPreferenceServerSelector | 2021-08-12T13:35:14.822Z | Timed out after 30000 ms while waiting for a server that matches ReadPreferenceServerSelector | 84,535 |
[
"node-js",
"data-modeling",
"graphql",
"graphql-api"
] | [
{
"code": "\"description\": {\n \"bsonType\": \"object\",\n \"title\": \"description\",\n \"properties\": {\n \"short\": {\n \"bsonType\": \"object\",\n \"title\": \"formattedText\",\n \"properties\": {\n \"markdown\": {\n \"bsonType\": \"string\"\n }\n }\n },\n \"long\": {\n \"bsonType\": \"object\",\n \"title\": \"formattedText\",\n \"properties\": {\n \"markdown\": {\n \"bsonType\": \"string\"\n }\n }\n }\n }\n},\n",
"text": "I’m working with App Services and Realm. I’m defining my schema in Realm (typescript) then sync it to App Services. I noticed that when I have the same double embedded object in two collection schemas and I query those two related collections, the further embedded property is null. Sometimes you refresh the query and the value appears, some other times it disappears.Work around: remove the titles from the embedded objects. Something you can’t do from Realm (or I don’t know how ?) but then you graphQL schema is much more complex and prevents reuse (fragments, …)The graphQL query (counters.catalog.description.short is not supposed to be null)Screenshot 2023-10-26 at 11.17.081366×792 105 KBBoth description properties in the schema are as follow",
"username": "Olivier_Wouters"
},
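For reference, the workaround described above (dropping the title keys so that no shared formattedText type is generated) would look roughly like this. This is only a sketch of the schema fragment, and it comes at the cost of losing the reusable type for GraphQL fragments:

```json
"description": {
  "bsonType": "object",
  "properties": {
    "short": {
      "bsonType": "object",
      "properties": { "markdown": { "bsonType": "string" } }
    },
    "long": {
      "bsonType": "object",
      "properties": { "markdown": { "bsonType": "string" } }
    }
  }
}
```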
{
"code": "",
"text": "… in some cases (I got a property where it’s working, and it’s exactly the same schema …). I tried to reduce the case to a minimum, changed all of the property names but the issue stayed.",
"username": "Olivier_Wouters"
},
{
"code": "",
"text": "I created a repo with a minimum App Service app to reproduce the issue + a video showing is Contribute to olivier5741/mongo-app-service-double-embbed-issue development by creating an account on GitHub.",
"username": "Olivier_Wouters"
}
] | GraphQL API not supporting double embedded objects in schema | 2023-10-26T10:38:43.409Z | GraphQL API not supporting double embedded objects in schema | 185 |
|
null | [] | [
{
"code": "",
"text": "i see mongod [JournalFlusher] constandly doing high disk IO on my system. is there any way to optimize this?",
"username": "ton_test"
},
{
"code": "",
"text": "Hello @ton_test ,Welcome back to The MongoDB Community Forums! Can you please confirm which version of MongoDB are you running?Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "i am running version 5.0.8",
"username": "ton_test"
},
{
"code": "",
"text": "Same problem\nVersion 4.4.25",
"username": "Yunus_Dal"
}
] | Mongod [JournalFlusher] high disk IO | 2023-09-05T12:56:14.337Z | Mongod [JournalFlusher] high disk IO | 421 |
null | [
"flutter",
"dart"
] | [
{
"code": "",
"text": "Hi,While waiting for few critical issues which are blocking the release to be resolved (f.e. Realm-flutter causes iPhone to crash and restart on iOS 17 · Issue #1411 · realm/realm-dart · GitHub and .skip doesn't work well with .take · Issue #1409 · realm/realm-dart · GitHub) it appeared that the entirety of the Realm Dart team was laid off.So I would like to know if there is a future for Realm for Flutter or it’s development is dropped and we should be migrating to some other db.",
"username": "Andrew_M1"
},
{
"code": "",
"text": "This is pretty concerning, my company is about to migrate a large Flutter IOS app from Amplify to Atlas Device Sync, but with both @Kasper_Nielsen1 and @Desislava_Stefanova gone (plus the people I might not know of), who is left to maintain the Flutter/Dart support? Can someone from MongoDB clarify what the future of this library is?Flutter usage in general (not related to Realm/AppSync) is just shy of that of React Native in 2023 and I wouldn’t be surprised if it surpasses React Native soon. I my mind Flutter support is a must!\n\nGit commits (no commits since Sep 19, 2023)\nScreenshot 2023-10-22 at 12.06.261920×655 110 KB",
"username": "Steen_Laursen"
},
{
"code": "",
"text": "We are already using realm sync in production. \bI want to know what will happen with flutter support. If flutter support will disappear, we will have to reduce the usage of mongodb atlas and move to another service.",
"username": "minsu.lee"
},
{
"code": "",
"text": "Hey all - Product for Device Sync here. It is true that we’ve had a reduction on the team and we have communicated with those impacted directly. However, MongoDB does not plan to deprecate or discontinue our Flutter SDK. If you have any support issues, please file an issue on our support portal here MongoDB Support Hub | MongoDB - where we can triage and resolve HELP tickets directly. Going forward we will prioritize paid customer support tickets.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Hi Ian. Giving that Realm is not just “some lib”, but the core piece of technology of the mobile app, sorry for not finding you answer very reassuring.When picking the db, we’ve made an extensive research into the available options on the market and picked Realm almost entirely because of highest maintainability prospects among the document db competitors:But then comes the month of complete silence, fact that the whole dart team is gone and a forced response without any specific details regarding the future with the main point being “who needs to know - knows”. That is not a strong case for maintainability.What would be great to know is whether there would be anyone working on fixing issues in Realm Dart? Will there be new features/improvements implemented?Also I want to note that I appreciate what MongoDB has done and does for the community, and don’t want to sound ungrateful, especially not being a paid customer yet (though we were planning to enable sync when the app is properly migrated), but again, we are talking about crucial piece of software, where in case of a bug or a lack of proper maintenance, app just won’t work at all.",
"username": "Andrew_M1"
}
] | Realm Dart future plans | 2023-10-22T15:11:21.101Z | Realm Dart future plans | 490 |
null | [
"node-js",
"mongoose-odm",
"serverless",
"next-js"
] | [
{
"code": "",
"text": "I am using Next.js, deployed to Vercel’s serverless platform. Mongoose is the ORM.\nI am seeing connection times of ~1s and unsure why it is taking so long just to connect to the DB?",
"username": "Colton_Ehrman"
},
{
"code": "",
"text": "Where is your mongo server hosted? Is it an Atlas instance and if so what type is it?Can you reproduce the connection times when running locally as well as on Vercel’s platform?",
"username": "John_Sewell"
},
{
"code": "",
"text": "I am using a dedicated M10 Atlas instance with the default 3 servers. Yes, I can replicate the connection time in local. P75 is under 1MS, P95 is +500ms.",
"username": "Colton_Ehrman"
},
{
"code": "",
"text": "HiDid you find a solution, I have the same problem",
"username": "Martin_Ratinaud"
}
] | Mongoose DB connection very slow sometimes - Next.js (serverless w/ Vercel) | 2023-07-07T21:04:45.564Z | Mongoose DB connection very slow sometimes - Next.js (serverless w/ Vercel) | 926 |
null | [
"flutter",
"dart"
] | [
{
"code": "realmServices.realm.query<Item>(\"TRUEPREDICATE SORT(_id ASC)\").changes",
"text": "Hello,I have a small question regarding Realm Stream( realmServices.realm.query<Item>(\"TRUEPREDICATE SORT(_id ASC)\").changes). Considering this line of code: link, does it mean that if the collection is empty, the loader would keep spinning indefinitely? Is there a way to determine when the query has finished and the collection is empty, so that this can be displayed in the UI?",
"username": "shyshkov.o.a"
},
{
"code": "date == nulldata.resultsListView",
"text": "No, date == null means nothing has been pumped on the stream yet. The stream will pump the first value almost immediately. If the db is empty, then data.results will be empty, and an empty ListView will rendered in the code you link to.",
"username": "nielsenko"
},
{
"code": "@Riverpod(keepAlive: true)\nclass SalePointsStreamSlice extends _$SalePointsStreamSlice {\n late Realm realm;\n\n @override\n Stream<RealmResultsChanges> build() async* {\n realm = ref.read(realmDbServiceProvider);\n\n final allSalePoints = realm.all<SalePoint>().changes;\n\n final transformedStream = allSalePoints.transform(\n StreamTransformer<RealmResultsChanges<SalePoint>,\n RealmResultsChanges>.fromHandlers(\n handleData: (RealmResultsChanges<SalePoint>? data, sink) {\n /// causing infinite loading for empty collections.\n if (data != null) {\n sink.add(data);\n }\n /// workaround that is not ideal\n /// Future.delayed(const Duration(seconds: 10), () {\n /// sink.add(data);\n /// });\n /// if (data.results.isNotEmpty) {\n /// sink.add(data);\n /// }\n },\n ),\n );\n yield* transformedStream;\n }\n",
"text": "@nielsenko, thanks for a reply.\nMy issue is in …'almost immediately '. For the first sync between client and cloud if collection is big it would not immediately pull the data but Stream emits empty value at first emission and Im not able to implement loading screen. I’ve tried to .transform a stream to wait for the data but then I get an issue where for empty collections I end up with infinite loading I might be wrong in my assumptions, but if not is there any work around I could use?P.S. By the way, thanks for ‘Observable Flutter’ session. Nice one.",
"username": "shyshkov.o.a"
},
{
"code": " final token = TimeoutCancellationToken(const Duration(seconds: 3)); // or how long you are prepared to wait\n await realm.syncSession.waitForDownload(token);\nTimeoutCancellationTokenwaitForDownloadchangesriverpod",
"text": "If I understand correctly, you should use:This ensures that all inbound data is downloaded, as per current subscriptions. Be aware that this call will block, if you cannot access Atlas for some reason, hence it is important to pass a cancellation token to allow the operation to be aborted. Here I use a TimeoutCancellationToken with a 3s duration, but you can also hook it up to a cancel button in the UI or whatever.For the widget tree, you can use a future builder (for waitForDownload) that wraps a stream builder (for changes). Or use riverpod to orchestrate.And thanks! I’m glad you could enjoyed the MongoDB & Realm episode on Observable Flutter",
"username": "nielsenko"
},
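A minimal sketch of the widget arrangement described above (Flutter with the realm package; Item stands in for your own RealmObject, and the cancellation-token handling from the snippet above is omitted for brevity):

```dart
// Assumes: import 'package:flutter/material.dart'; import 'package:realm/realm.dart';
Widget buildItems(Realm realm) {
  return FutureBuilder<void>(
    future: realm.syncSession.waitForDownload(), // pass a cancellation token in real code
    builder: (context, snap) {
      if (snap.connectionState != ConnectionState.done) {
        return const Center(child: CircularProgressIndicator()); // initial download in progress
      }
      return StreamBuilder<RealmResultsChanges<Item>>(
        stream: realm.all<Item>().changes,
        builder: (context, snap) {
          if (!snap.hasData) return const Center(child: CircularProgressIndicator());
          final results = snap.data!.results;
          if (results.isEmpty) return const Center(child: Text('No items yet'));
          return ListView.builder(
            itemCount: results.length,
            itemBuilder: (context, i) => ListTile(title: Text('${results[i]}')),
          );
        },
      );
    },
  );
}
```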
{
"code": "waitForDownload(...)getProgressStream(...)",
"text": "Last questions .\nwaitForDownload(...) - ‘Waits for the Session to finish all pending downloads.’ Doest it mean that it waits for download for ALL ADDED SUBSCRIPTIONS, so If I’ve added 5 sub from 5 collections it will wait for all data to sync? Same question about getProgressStream(...).\nAsking this because I’ve noticed that when Im adding/removing subscriptions and initializing fresh synced realm amount of bytes stays the same.",
"username": "shyshkov.o.a"
},
{
"code": "getProgressStream",
"text": "One last answer then Yes to both, but last I checked there was an issue with realm-core (or really the Atlas Device Sync service), that makes the events pumped on getProgressStream inaccurate and mostly pointless.For full disclosure, I no longer work for MongoDB. Just hanging around of old habit.",
"username": "nielsenko"
},
{
"code": "",
"text": "Thanks a lot @nielsenko. I wish you all the best in a ‘new journey’. GOOD LUCK.",
"username": "shyshkov.o.a"
},
{
"code": "",
"text": "@Desislava_St_Stefanova any help would be appreciated.\nIs there any way to resolve my issue with first SYNC. (When sync starts it emits empty value that cause issues with loading, no data state).\nI’ve tried to use the solution Kasper suggested but TimeoutCancellationToken class doesn’t exist.",
"username": "shyshkov.o.a"
},
{
"code": "TimeoutCancellationToken",
"text": "Thank you @shyshkov.o.a. BTW, you need to import the cancellation_token package to get TimeoutCancellationToken.Like @Kasper_Nielsen1 for me, the @Desislava_St_Stefanova account belonged to desistefanova when she was still with MongoDB. She will no longer be notified by using that tag.However, perhaps she still responds to her community account @Desislava_Stefanova?",
"username": "nielsenko"
},
{
"code": "await realm.syncSession.waitForDownload()flutter: 2023-10-26T12:36:25.825513: [INFO] Realm: Realm sync client ([realm-core-13.17.2])\nflutter: 2023-10-26T12:36:25.826543: [INFO] Realm: Platform: iOS Darwin 23.0.0 Darwin Kernel Version 23.0.0: Fri Sep 15 14:43:05 PDT 2023; root:xnu-10002.1.13~1/RELEASE_ARM64_T6020 arm64\nflutter: 2023-10-26T12:36:25.829123: [INFO] Realm: Connection[1]: Session[1]: Binding '/Users/olegshyshkov/Library/Developer/CoreSimulator/Devices/27F2D33B-7F82-4BE0-A0C4-6376B29B1C8C/data/Containers/Data/Application/E3E88586-969A-4526-AF2A-A23B564654E7/Documents/mongodb-realm/devicesync-uedcn/64c3ecfc915240965aef5c7d/default.realm' to ''\nflutter: 2023-10-26T12:36:25.829214: [INFO] Realm: Connection[1]: Session[1]: client_reset_config = false, Realm exists = true, client reset = false\nflutter: 2023-10-26T12:36:25.829287: [INFO] Realm: Connection[1]: Connecting to 'wss://ws.eu-central-1.aws.realm.mongodb.com:443/api/client/v2.0/app/devicesync-uedcn/realm-sync'\nflutter: 2023-10-26T12:36:25.869792: [INFO] Realm: Connected to endpoint '3.124.220.115:443' (from '192.168.1.138:58446')\nflutter: 2023-10-26T12:36:26.360776: [INFO] Realm: Connection[1]: Connected to app services with request id: \"653a412ab77a706c9936b7d1\"\nflutter: 2023-10-26T12:36:27.955290: [INFO] Realm: Connection[1]: Session[1]: Begin processing pending FLX bootstrap for query version 0. (changesets: 1, original total changeset size: 2348)\nflutter: 2023-10-26T12:36:27.956953: [INFO] Realm: Connection[1]: Session[1]: Integrated 1 changesets from pending bootstrap for query version 0, producing client version 10 in 1 ms. 0 changesets remaining in bootstrap\nflutter: >>> waitForDownLoad FINISHED. Amount of Items in the collection: 0\nflutter: 2023-10-26T12:36:32.257177: [INFO] Realm: Connection[1]: Session[1]: Begin processing pending FLX bootstrap for query version 1. (changesets: 2, original total changeset size: 89509)\nflutter: 2023-10-26T12:36:32.269323: [INFO] Realm: Connection[1]: Session[1]: Integrated 2 changesets from pending bootstrap for query version 1, producing client version 16 in 12 ms. 0 changesets remaining in bootstrap\n",
"text": "Tested await realm.syncSession.waitForDownload() and it seems to work not as expected. It resolves before the actual download finishes. @Mongo_BD could you help?\nLOGS:",
"username": "shyshkov.o.a"
}
] | Realm Stream changes | 2023-10-09T12:33:46.566Z | Realm Stream changes | 471 |
null | [
"node-js"
] | [
{
"code": "",
"text": "hello community! i am new to MongoDB, i wanna know if mongodb is good for storing images and how i can store images in mongodb. I am using Node, Express framework and reactjs. Pls i need a perfect tutorial on how i can work with images. Thanks a bunch.",
"username": "Phemmyte_Electronic_Designs_Arduino"
},
{
"code": "",
"text": "Look at https://docs.mongodb.com/manual/core/gridfs/",
"username": "steevej"
},
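If you do want to keep files in MongoDB itself, a minimal GridFS sketch with the Node.js driver looks roughly like this (the database, bucket and file names are placeholders):

```js
const { MongoClient, GridFSBucket } = require('mongodb');
const fs = require('fs');

async function uploadImage() {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const bucket = new GridFSBucket(client.db('mydb'), { bucketName: 'images' });

  await new Promise((resolve, reject) => {
    fs.createReadStream('./photo.png')
      .pipe(bucket.openUploadStream('photo.png', { metadata: { source: 'demo' } }))
      .on('error', reject)
      .on('finish', resolve); // the file now lives in images.files / images.chunks
  });

  await client.close();
}
```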
{
"code": "destinationdestination",
"text": "No, MongoDB is not a good place for storing files. If you want to store files, you should use storages like Amazon S3 or Google Could Storage.The good practice is to store the files in a storage and then to just save the URL of the uploaded image in the MongoDB.When it comes to Node.js and Express, I would suggest multer package middleware, which is primarily used for uploading files. If you would use Amazon S3 for a storage, there is also a plug-in multer-s3 package that can be helpful.You can check this video on how to configure Multer and implement image upload on your Node.js server. Once the image is uploaded, Multer will return data to you with destination field included. You can then save that destination in the MongoDB.",
"username": "NeNaD"
},
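A small sketch of the pattern described above for Express: multer handles the upload and only the file's location is stored in MongoDB. The route, database and field names are placeholders, and with multer-s3 you would persist req.file.location instead of req.file.path.

```js
const express = require('express');
const multer = require('multer');
const { MongoClient } = require('mongodb');

const upload = multer({ dest: 'uploads/' }); // swap for multer-s3 storage in production
const client = new MongoClient(process.env.MONGODB_URI);
const app = express();

app.post('/images', upload.single('image'), async (req, res) => {
  const doc = {
    filename: req.file.originalname,
    url: req.file.path,          // req.file.location when using multer-s3
    uploadedAt: new Date(),
  };
  const { insertedId } = await client.db('mydb').collection('images').insertOne(doc);
  res.json({ id: insertedId, url: doc.url });
});

app.listen(3000);
```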
{
"code": "",
"text": "i am very grateful. I hope it solves my problem.",
"username": "Phemmyte_Electronic_Designs_Arduino"
},
{
"code": "",
"text": "I had a task last week that required me to upload images to a database. I’ve S3 buckets to store images in the past but I tried MongoDB this time. So, I used a package called express-fileuploader, very similar to Multer, and uploaded my file to a temp directory in the server, from there I grabbed it using the native fs methods and uploaded it to the DB. The image file was uploaded as binary data, ultimately when getting the data you get it as a buffer base 64 if I’m not mistaken. JS can render using different methods.In conclusion, use an S3 bucket or google cloud storage. Uploading to MongoDB is not as complicated as it sounds, but you’ll definitely save some time using other services.",
"username": "Kevin_Grimaldi"
},
{
"code": "",
"text": "Wish people could be much of a use like you",
"username": "MyName_MyLastname"
},
{
"code": "",
"text": "can u explain why mongo is not a good place to store files does that include image files ?",
"username": "J_C4"
},
{
"code": "",
"text": "hey can u provide some code to do so i am new to node and mongo",
"username": "J_C4"
}
] | Storing images in MongoDB | 2022-03-16T10:12:35.360Z | Storing images in MongoDB | 29,336 |
null | [] | [
{
"code": "",
"text": "Hi,Newbie here and really, really new!Is there any MongoDB command that will display information about my current connection after I connected to the database, i.e. username, database and hostt that I am connected to?I found db.currentOp() but this list all sessions. There is also db.hostInfo() but I am just after the hostname. I can’t find one for the current user that I am login as. My version from db.version() shows 3.2.7.",
"username": "Edwin_49627"
},
{
"code": "",
"text": "My version from db.version() shows 3.2.7.i can no longer find the manual for this version.",
"username": "Kobe_W"
},
{
"code": "db.hostInfo()mongodb.getMongo()getHostName()_isWindows()",
"text": "Hi @Edwin_49627,Hostname is available from db.hostInfo() on v3.2 as shown here. I am not sure there’s much available via the mongo shell itself. You can find any available here. You may also be able to see connection info with db.getMongo().There’s also getHostName() which is a native method that the shell supports. You can also loosely determine the operating system type with _isWindows()",
"username": "Jacob_Latonis"
},
{
"code": "",
"text": "Hi @Jacob_LatonisThanks for your reply.\nTurns out a simple hostname() will do and db will show the database that I am connected to.\nSo, now am just running db or hostname() to check.There doesn’t appear to be an option to check who I am login as, do you know of any?\nBasically, I want to check who am I when I am connected to the database?Something like db.runCommand({connectionStatus : 1}) or any command as plain as showing just the user account ",
"username": "Edwin_49627"
}
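For anyone landing here later: the connectionStatus command mentioned above does report the user(s) authenticated on the current connection. A quick shell sketch, with the output trimmed and the user name purely illustrative:

```js
db.runCommand({ connectionStatus: 1 })
// { authInfo: { authenticatedUsers: [ { user: "edwin", db: "admin" } ],
//               authenticatedUserRoles: [ { role: "readAnyDatabase", db: "admin" } ] },
//   ok: 1 }

db             // current database
db.getMongo()  // connection / host the shell is attached to
```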
] | How to find information about 'my' current connection? | 2023-10-19T04:08:51.076Z | How to find information about ‘my’ current connection? | 234 |
null | [
"android",
"kotlin"
] | [
{
"code": "",
"text": "I’m building an android app, where I’ve integrated an Admob SDK to reward my users with Coins after they view an ad. I’m also using MongoDB Device Sync and authenticating users with it as well. I’m using rewarded ads, and when the callback triggers, I’m updating Coins value for the current user, and increasing it’s value.My question is related to security. Is it safe for my users that they do have write permission to add/update their Coins value? Or it will be easy for them to spoof my app, and add custom number of coins on their own? How can I properly handle this situation?",
"username": "111757"
},
{
"code": "",
"text": "How can I properly handle this situation?Using server side verification should be a way to achieve this. You could set up a Custom HTTP endpoint to receive the Google callbacks.",
"username": "otso"
},
{
"code": "",
"text": "That sounds like a nice solution, especially because I’m already have a database on atlas. However I don’t have experience with Javascript, and I’m for the first time working to implement the SSV for admob callback. I’m not really sure how all of that would work and function, there are not so many information about that process out there on the internet, except that link that you’ve sent.I’ve been searching all around, but couldn’t find any example to learn from. Do you happen to have/know a real example that showcases how to achieve that?Thank you for your answer btw, I really appreciate it!",
"username": "111757"
},
{
"code": "",
"text": "Do you happen to have/know a real example that showcases how to achieve that?Unfortunately I do not, you might want to ask this from your Google representative. I’d start with firing the callback and verifying the parameters it sends in the receiving function - you’ll also need to setup SSV for your rewarded ad unit. Hope this helps!",
"username": "otso"
}
] | Admob Rewarded Ads and MongoDB Sync? | 2023-10-25T09:28:31.740Z | Admob Rewarded Ads and MongoDB Sync? | 194 |
null | [] | [
{
"code": "services: [\n\n{\n name: \"repair tire\"\n},\n{\n name: \"oil change\"\n}\n\n]\n",
"text": "HiI have an array of objects likeI set a MongoDB atlas full text search with autocomplete on “services.name” but when i search using autocomplete it doest not show anything.",
"username": "Pushaan_Nayyar"
},
{
"code": "",
"text": "Did you find a solution for this yet? I am experiencing the same kind of problem…",
"username": "Jakob_Noggler"
},
{
"code": "",
"text": "Hello!Thanks so much for your questions about Atlas Search. Can you please share the sample code of your search query and your index, so I can have a deeper look?Karen",
"username": "Karen_Huaulme"
},
{
"code": "{ \"name\" : \"Phani\", \"phones\" : [ { \"phoneNumber\" : \"123456789\" } ] }\n{ \"name\" : \"Yuva\", \"phones\" : [ { \"phoneNumber\" : \"987654321\" } ] }\n{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"phones\": {\n \"fields\": {\n \"phoneNumber\": {\n \"tokenization\": \"edgeGram\",\n \"type\": \"autocomplete\"\n }\n },\n \"type\": \"document\"\n }\n }\n }\n}\n",
"text": "Hello Karen, we are facing similar kind of issue. Please find details below:Sample collection data:Autocomplete search index on phoneNumber:Query used:{ $search: {“autocomplete”: {“query”: “1234”,“path”: “phones.phoneNumber”}} }=> returns no result",
"username": "Yuva_Phani_Chennu"
},
{
"code": "",
"text": "I’m facing the same issue. It works on a simple nested document, but not for an array of nested documents. It works with “text” instead of “autocomplete” but that requires the exact string which is not really possible.",
"username": "Shadoweb_EB"
},
{
"code": "",
"text": "Hello everybody, same issue here Somebody find a solution ? Please help, because this is very impactful. Thanks in advance.",
"username": "Nicolas_Sandoz"
},
{
"code": "",
"text": "Please Karen, help us with this topic ",
"username": "Nicolas_Sandoz"
},
{
"code": "\"fuzzy\": { \"maxEdits\": 2 }",
"text": "Hello, everyone. Thank you so much for the code and index definitions. You are correct that this does not currently work, but this is a particular use case we are actively working on right now. @Shadoweb_EB, you could include a fuzzy object into the text- but you would have to get close to the end of the string for this to work.\n\"fuzzy\": { \"maxEdits\": 2 }\nSo this is not exactly what you need.\nHopefully, we will have a resolution soon.",
"username": "Karen_Huaulme"
},
{
"code": "",
"text": "How much time until it will be supported?",
"username": "Omer_Yaari"
},
{
"code": "",
"text": "Hello Karen, Thanks for your answer. Do you think it’s going to take a long time ? Do you have approximatively an idea on how much time ? Many of us are waiting for a solution to this problem. Also, could you posted here once a solution has been found ? Thank you in advance for your help.",
"username": "Nicolas_Sandoz"
},
{
"code": "",
"text": "Hello. It should be done this quarter. Please vote on the issue on feedback.mongodb.com, you should be notified when it is worked out. Allow autocomplete search on multiple fields using a wildcard path or by specifying multiple fields in the path – MongoDB Feedback EngineKaren",
"username": "Karen_Huaulme"
},
{
"code": "",
"text": "Hi, I voted on the linked enhancement request. There aren’t any updates on that ticket, though. Can you provide an update on progress? I would love this feature. Thanks!",
"username": "Kellen_Busby"
},
{
"code": "",
"text": "Yes, and it’d be great if this limitation were mentioned in the docs. I losta bunch of time on this today for nothing. Thanks.",
"username": "Warren_Wonderhaus"
},
{
"code": "",
"text": "Actually I was wrong it is in the docs, kind of. According to the index definition page (https://docs.atlas.mongodb.com/atlas-search/index-definitions/#array), you can’t even use autocomplete for an array of strings, let alone an array of docs:“You can’t use the autocomplete type to index fields whose value is an array of strings.”FWIW, I’ve got an array of ‘tags’ for my use case. Attempting to refactor it as a string that can be searched using autocomplete instead.",
"username": "Warren_Wonderhaus"
},
{
"code": "",
"text": "Hi, when is it planned to be possible to use autocomplete on array of objects?\nThanks in advance",
"username": "Alexander_Wieland"
},
{
"code": "",
"text": "Hi.\nWhen do you plan to implement autocomplete on arrays?\nThank you.",
"username": "V_C1"
},
{
"code": "autocompleteautocompleteautocomplete",
"text": "Hi @Alexander_Wieland and @V_C1 - Welcome to the community.At this point in time as per the autocomplete documentation:Alex, I believe the second dot point is related to your question. V_C1, if this is also the case for you too (array of documents), then please refer to the docs link. If not, I would create a new topic and advise your use case details including sample documents.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hi Jason.\nWhat is the point to index something if one cannot search for it?\nAre there any plans to implement such a search?\nThank you.",
"username": "V_C1"
},
{
"code": "{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"name\": {\n \"type\": \"autocomplete\"\n },\n \"roles\": {\n \"dynamic\": true,\n \"type\": \"embeddedDocuments\",\n \"fields\": {\n \"externalId\": {\n \"type\": \"autocomplete\"\n },\n \"phone\": {\n \"type\": \"autocomplete\"\n }\n }\n }\n }\n }\n}\ndb.entities.aggregate([\n {\n \"$search\": {\n \"compound\": {\n \"should\": [\n {\n \"autocomplete\": {\n \"query\": \"search query input\",\n \"path\": \"name\"\n }\n },\n {\n \"embeddedDocument\": {\n \"path\": \"roles\",\n \"operator\": {\n \"compound\": {\n \"should\": [\n {\n \"autocomplete\": {\n \"path\": \"roles.externalId\",\n \"query\": \"search query input\"\n }\n },\n {\n \"autocomplete\": {\n \"path\": \"roles.phone\",\n \"query\": \"search query input\"\n }\n }\n ]\n }\n }\n }\n }\n ]\n }\n }\n },\n {\n $addFields: {\n \"score\": { $meta: \"searchScore\" }\n }\n }])\n\n",
"text": "Hi Jason, thanks for your response. Since I had a hard time making it, here is an implementation of this use case that works for me (it might help others):The search index JSON on mongo AtlasThe query",
"username": "Robin_FERRE"
},
{
"code": "",
"text": "Thank you Robin.\nThis the solution.",
"username": "V_C1"
}
] | Atlas Search Autocomplete on an array of object | 2020-12-30T19:51:51.603Z | Atlas Search Autocomplete on an array of object | 9,595 |
null | [
"change-streams"
] | [
{
"code": "{\nid: 139234234,\ncount: 12,\ntype: apple,\nbuyer: alex,\n}\nbuyerchangeStream",
"text": "When the buyer got updated, I don’t want to receive the update information from changeStream, is it possible ignore some of the fields when using change stream?",
"username": "WONG_TUNG_TUNG"
},
{
"code": "db.collection.watch(\n [\n {\n $match: {\n operationType: \"update\",\n \"updateDescription.updatedFields.buyer\": {\n $exists: false\n }\n }\n }\n ]\n)\n",
"text": "The change stream watch accepts a pipeline filter as an argument, so you can do some thing like this:",
"username": "chris"
},
{
"code": "buyer: [subdocument] updateDescription: {\n updatedFields: {\n 'buyer.2.name': \"alex\",\n 'buyer.2.price': 1.4,\n 'buyer.2.max': 12146,\n }}\n",
"text": "What if I want to ignore the field is buyer: [subdocument] , the updateDescription is:I want to ignore all the update&remove related to the buyer field. Thank you.",
"username": "WONG_TUNG_TUNG"
},
{
"code": "",
"text": "Can you give an example of the documents and the updates occurring on them?",
"username": "chris"
},
{
"code": "id: 139234234,\ncount: 12,\ntype: apple,\nbuyer: [{name: \"peter\", price: 1.23, max: 129}, \n{\"alex\", price: 1.4, \"max\": 12146}]\n}\ndb.updateMany({ id: 139234234 }, [\n {\n $set: {\n buyer: {\n $concatArrays: [\n {\n $filter: {\n input: \"$buyer\",\n cond: {\n { $ne: [\"$$this.name\", \"john\"] },\n },\n },\n },\n [\n {\n name: \"john\",\n price: 12,\n max: 234,\n },\n ],\n ],\n },\n },\n },\n ]);\n",
"text": "Document example:Updates occurring on it:",
"username": "WONG_TUNG_TUNG"
},
{
"code": "updateFieldsbuyer",
"text": "Is there anything you need from me? I want to ignore all updateFields that starting with buyer.",
"username": "WONG_TUNG_TUNG"
},
{
"code": "buyerbuyer",
"text": "Are you wanting to ignore any updates that have buyer in them or are you wanting any update not to include buyer ?",
"username": "chris"
},
{
"code": "buyerbuyer",
"text": "I want any update not to include buyer, although I will not update/delete buyer with other field(both of them you mentioned will work for me in this case), I want to decrease the workload of change stream by not monitoring the buyer field from update and delete.",
"username": "WONG_TUNG_TUNG"
},
{
"code": "",
"text": "May I have any update in here, thank you.",
"username": "WONG_TUNG_TUNG"
},
{
"code": "db.collection.watch(\n [\n {\n $project: {\n \"updateDescription.updatedFields.buyer\": 0\n }\n }\n ]\n)\n",
"text": "This could be as simple as:",
"username": "chris"
},
{
"code": " updateDescription: {\n updatedFields: {\n 'buyer.2.name': \"alex\",\n 'buyer.2.price': 1.4,\n 'buyer.2.max': 12146,\n }}\n",
"text": "By applying your solution, I will also receive the above update.",
"username": "WONG_TUNG_TUNG"
}
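One possible (untested) sketch for this case: inspect the keys of updateDescription.updatedFields with $objectToArray and drop events whose changed keys start with buyer (including dotted paths like buyer.2.name). removedFields could be checked the same way if needed, and non-update events still pass through because of the $ifNull fallback.

```js
db.collection.watch([
  {
    $match: {
      $expr: {
        $not: {
          $anyElementTrue: {
            $map: {
              input: { $objectToArray: { $ifNull: ["$updateDescription.updatedFields", {}] } },
              as: "f",
              in: { $regexMatch: { input: "$$f.k", regex: "^buyer(\\.|$)" } }
            }
          }
        }
      }
    }
  }
])
```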
] | How to ignore tracking certain field when using change stream | 2023-10-13T18:45:35.297Z | How to ignore tracking certain field when using change stream | 370 |
null | [
"aggregation",
"queries",
"node-js"
] | [
{
"code": "",
"text": "Hello fellow community members! I’m excited to share my latest article: Pagination in MongoDB: The Only Right Way to Implement it (Avoid Common Mistakes).I noticed that many articles, tutorials, and courses do not implement pagination correctly, leading to issues such as data inconsistency and decreased performance. So, in this article I showed how to implement the pagination correctly in MongoDB with the use of Aggregation Framework, as well as how to avoid common mistakes.I’ve put a lot of effort into creating and I would be thrilled if you could take a look and provide me with any feedback you have. Your input would be greatly appreciated as I am just starting my blog.Thank you for your time and support!",
"username": "NeNaD"
},
{
"code": "",
"text": "Thanks for sharing.An alternative way of paging is described at MongoDB Pagination, Fast & Consistent | by Mosius | The Startup | Medium.",
"username": "steevej"
},
{
"code": "",
"text": "Hey @steevej,Thanks for sharing that article. I just checked it. I personally don’t like 2 things from that approach:I would say that article covers only specific use-case, and not the pagination in general. But it’s definitely a nice solution for the use-case it covers. ",
"username": "NeNaD"
},
{
"code": "",
"text": "Thanks.Your 2 points are valid and something that needs to be considered.",
"username": "steevej"
},
{
"code": "",
"text": "Hello can some one help how to do pagination in mongodb .I used facet but it’s take too much time",
"username": "Anjana_Varandani"
},
{
"code": "",
"text": "@Anjana_Varandani Hi,Can you share your model and your aggregate pipeline?",
"username": "NeNaD"
},
{
"code": " ]\n }\n}\n",
"text": "[\n{\n“$match”: {\n“customerStatus”: {\n“$nin”: [\n“Not Prospective”,\n“Not Interested”\n]\n},\n“ownerId”: {\n“$in”: [\nObjectId(“63c63c5d04f654e24dbec031”),},\n{\n“$sort”: {\n“name”: 1\n}\n},\n{\n“$facet”: {\n“customers”: [\n{\n“$skip”: 0\n},\n{\n“$limit”: 10\n}\n],\n“totalCount”: [\n{\n“$count”: “count”\n}\n]\n}\n},\n{\n“$project”: {\n“customers”: 1,\n“totalCount”: {\n“$arrayElemAt”: [\n“$totalCount.count”,\n0\n]\n}\n}\n}\n]",
"username": "Anjana_Varandani"
},
{
"code": "",
"text": "@Anjana_VarandaniYou should create indexes for the following fields:You can even consider creating one compound index.",
"username": "NeNaD"
},
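A sketch of what that could look like for the pipeline posted above. The collection name customers is an assumption, and the compound-index field order follows the usual equality-sort-range guideline; validate the result with explain():

```js
db.customers.createIndex({ ownerId: 1, name: 1, customerStatus: 1 });
// or individual indexes if the compound one does not fit your other queries:
db.customers.createIndex({ ownerId: 1 });
db.customers.createIndex({ name: 1 });
db.customers.createIndex({ customerStatus: 1 });
```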
{
"code": "",
"text": "still take time we have 1.6M Records",
"username": "Anjana_Varandani"
}
] | Pagination in MongoDB: Right Way to Do it VS Common Mistakes | 2023-01-12T16:12:22.802Z | Pagination in MongoDB: Right Way to Do it VS Common Mistakes | 1,549 |
null | [
"aggregation",
"queries",
"data-modeling",
"time-series"
] | [
{
"code": "{\n \"timestamp\": {\n \"$date\": {\n \"$numberLong\": \"1697522494362\"\n }\n },\n \"meta\": {\n \"session\": \"ccb8277b-2598-45ca-9edc-f4ad0726cf36\",\n \"page\": \"/home\",\n \"app\": \"4c1780ef-1cc3-4bba-95d5-4767ceb6c16c\",\n \"type\": \"button_click\",\n \"userId\": \"3e084afd-787f-4a11-a0b8-bbc01fa88e70\"\n },\n \"count\": 2\n}\n{\n \"timestamp\": {\n \"$date\": {\n \"$numberLong\": \"1697522494362\"\n }\n },\n \"meta\": {\n \"session\": \"ccb8277b-2598-45ca-9edc-f4ad0726cf36\",\n \"type\": \"button_click\"\n },\n \"count\": 2,\n \"page\": \"/home\",\n \"app\": \"4c1780ef-1cc3-4bba-95d5-4767ceb6c16c\",\n \"userId\": \"3e084afd-787f-4a11-a0b8-bbc01fa88e70\"\n}\n",
"text": "I am trying to save click stream data in a timeseries collection. There are some parameters which I want to add in meta data like belowI am trying to model the data which is feasible for the queries which can be like belowExample DocumentWith appropriate indices, I am able to get good query performance, but wanted a suggestion if the meta fields I selected are appropriate as it will create too many buckets.\nMy application has around 30K active users who will visit at-least a single page everydaySince type and sessionId are unique, can I move the other fields outside the meta and can use them as part of aggregation pipeline match filters ?Also, from storage, which design is optimal ?Since meta is stored only once and only the non-meta field are in documents, having the page, app and userId outside will increase the single document size.I am inclined towards storing the data inside the meta, but wanted to know if having too many buckets will be a performance hit when the number of documents increase?Expecting around 6-8M documents per day and max number days the data will be stored is around 45",
"username": "Abhishek_Gn"
},
{
"code": "bucketMaxSpanSecondsbucketRoundingSeconds",
"text": "Hi @Abhishek_Gn and welcome to MongoDB community forums!!The granularity field in time series definition would play an important role in forming the buckets and eventually the effect the performance of the application.\nChoosing granularity is often a trade off between storage and query efficiency. If you store with a very coarse granularity you may have to scan more when reading.\nAlso, if you are on version 6.3 and above, you can make use of bucketMaxSpanSeconds and bucketRoundingSeconds to specify bucket boundaries and more accurately control how time series data is bucketed.However, this would also depend on your use cases and the queries that you are trying to perform on the collections.I am inclined towards storing the data inside the meta, but wanted to know if having too many buckets will be a performance hit when the number of documents increase?In a case where you consistently insert data with varied metafield values, there will be an increase in the number of data buckets and you may not be able to fully utilise the storage capacity of these specific buckets, which can lead to increased query latency and decreased throughput.Based on the two designed that you have shared and the granularity defined, the collection schema which results into lesser buckets and give you optimal performance, you should choose the schema based on that.\nIf however, one of the design results onto latency issues, you shouldWarm Regards\nAasawari",
"username": "Aasawari"
},
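A sketch of the collection definition this points at (MongoDB 6.3+). The collection name, the one-hour bucket span and the 45-day TTL are assumptions taken from the thread and should be tuned against real query patterns:

```js
db.createCollection("click_events", {
  timeseries: {
    timeField: "timestamp",
    metaField: "meta",
    bucketMaxSpanSeconds: 3600,   // must equal bucketRoundingSeconds when both are set
    bucketRoundingSeconds: 3600
  },
  expireAfterSeconds: 60 * 60 * 24 * 45  // keep roughly 45 days of data
});
```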
{
"code": "",
"text": "Thank you for the input @AasawariMy requirement has to have query capability on all fields which are currently part of meta. Since indexes cannot be created on non-meta fields, query lookup time is drastically increasing if I move the other keys outside meta.Also, as per your suggestion, increasing the granularity is one approach which I would want to test out and see the results.One question regarding granularity - Is granularity only used for bucket’s storage grouping, or is there any data de-duplication applied while storing the data depending on the timestamp.For example: If I have multiple events coming in every minute for the same meta field, will there be any loss of data if I change the granularity to minutes from seconds ?",
"username": "Abhishek_Gn"
}
] | Need suggestion to store click stream data | 2023-10-18T12:38:07.750Z | Need suggestion to store click stream data | 239 |
null | [
"aggregation"
] | [
{
"code": "db.createRole(\n {\n role: \"supp_export\",\n privileges: [\n { resource: { db: \"\", collection: \"export\" }, actions: [ \"createCollection\", \"createIndex\", \"dropCollection\" ]},\n\t { resource: { db: \"\", collection: \"export2\" }, actions: [ \"createCollection\", \"createIndex\", \"dropCollection\" ]}\n ],\n roles: []\n }\n)\n \ndb.createUser({\n user: \"ro_test\",\n pwd: \"ro_test\",\n roles: [ { role: \"readAnyDatabase\", db: \"admin\" }]\n});\t\t\n \ndb.grantRolesToUser(\n \"ro_test\",\n [ { role: \"supp_export\", db: \"admin\" } ]\n);\n",
"text": "I use mongodb 4.4, and I am trying to give RO users privilleges to export result to csv.\nI have done custom role and granted it to user:I authorized as ro_test and tried to create “export” collection, it was success operation, but in aggregation pipeline I got an error on “out” step:not authorized on test to execute command { aggregate: “movies”, pipeline: [ { $out: “export” } ], allowDiskUse: true, cursor: {}, maxTimeMS: 60000, lsid: { id: UUID(“b0a59b57-b8b6-43a8-a37f-497a83c6ed46”) }, $clusterTime: { clusterTime: Timestamp(1698241761, 2), signature: { hash: BinData(0, DB8554C97E979B3CDD7135650156BE51478028F7), keyId: 7241112465017143305 } }, $db: “test” }\nAs I unserstand, “out” creates new collection.",
"username": "Asel_Magzhanova"
},
{
"code": "",
"text": "$out will need permissions to create a collection as far as I’m aware. We have a situation setup with a temporary working database on production where developers can create collections on that for working with data which works well.",
"username": "John_Sewell"
},
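A sketch of one way to unblock this (not verified against the exact setup above): besides creating the collection, $out also needs insert and remove on the target namespace, so the custom role could grant those actions in addition to the create/drop ones already listed.

```js
db.getSiblingDB("admin").createRole({
  role: "supp_export_out",
  privileges: [
    {
      resource: { db: "", collection: "export" },
      actions: [ "createCollection", "createIndex", "dropCollection", "insert", "remove" ]
    }
  ],
  roles: []
});
db.getSiblingDB("admin").grantRolesToUser("ro_test", [ { role: "supp_export_out", db: "admin" } ]);
```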
{
"code": "",
"text": "May you are using older mongo version try with 6 version or above version",
"username": "Bhavya_Bhatt"
}
] | "not authorized to execute command aggregate out" with createCollection | 2023-10-25T15:11:10.103Z | “not authorized to execute command aggregate out” with createCollection | 159 |
null | [
"dot-net",
"golang"
] | [
{
"code": "",
"text": "This post was flagged by the community and is temporarily hidden.",
"username": "MBee"
},
{
"code": "",
"text": "Here’s the compatibility table for Go Driver - https://www.mongodb.com/docs/drivers/go/current/compatibility/#std-label-golang-compatibility",
"username": "Rishabh_Bisht"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB 7.0 Support | 2023-10-25T21:22:15.529Z | MongoDB 7.0 Support | 193 |
null | [
"app-services-user-auth",
"php",
"api"
] | [
{
"code": "{\n \"error\": \"multiple authentication methods used\",\n \"error_code\": \"InvalidParameter\",\n \"link\": \"[log link]\"\n}\nfunction register($req)\n{\n global $apiKey;\n\n $url = \"https://asia-south1.gcp.data.mongodb-api.com/app/application-0-gfdpr/endpoint/user\";\n $init = curl_init($url);\n curl_setopt($init, CURLOPT_HTTPHEADER, [\n \"apikey: \" . $apiKey,\n \"Content-Type: application/json\",\n ]);\n curl_setopt($init, CURLOPT_RETURNTRANSFER, true);\n curl_setopt($init, CURLOPT_CUSTOMREQUEST, 'POST');\n $data = json_encode([\n \"email\" => $req['email'],\n \"username\" => $req['username'],\n \"password\" => password_hash($req['password'], PASSWORD_DEFAULT),\n ]);\n curl_setopt($init, CURLOPT_POSTFIELDS, $data);\n $response = curl_exec($init);\n curl_close($init);\n var_dump($response);\n}\n",
"text": "I am currently attempting to authenticate using an API key at the app service endpoint in MongoDB Atlas, but I encountered an error:I am using the ‘POST’ method. I have tried to investigate this issue on several forums, and they have suggested ensuring that only the API key is active, which I have already done. Strangely, I have used a similar approach with the API key in several previous codes, and it worked successfully, providing a ‘success’ response.Here is the code I am using:",
"username": "surya_agung"
},
{
"code": " curl_setopt($init, CURLOPT_CUSTOMREQUEST, 'POST');\n $data = json_encode([\n \"email\" => $req['email'],\n \"username\" => $req['username'],\n \"password\" => password_hash($req['password'], PASSWORD_DEFAULT),\n ]);\n curl_setopt($init, CURLOPT_POSTFIELDS, $data);\nContent-Type",
"text": "I suspect that what this is doing and what you think it is doing are two different things, and further that the Content-Type header doesn’t match this clause.Review PHP: curl_setopt - Manual",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "The problem has been resolved, I replaced the key (username, password) inside the body with anything other than username/password.thanks…",
"username": "surya_agung"
}
] | Received an error from the MongoDB Atlas endpoint: "multiple authentication methods used" | 2023-10-22T17:36:09.857Z | Received an error from the MongoDB Atlas endpoint: “multiple authentication methods used” | 203 |
null | [] | [
{
"code": "",
"text": "I hope I am putting this in the right place. We have a DB with about 700 documents in it for our sales team. I am looking for a dead simple GUI/web app to give to the sales team so they can query the DB for the specific items they are looking for in the documents, i.e. territory, products currently purchased or a combination of terms to search on.I have been googling my fingers off and everything I see seems to be waaaay more complicated than what I am looking for. Is there anything like this in the real world or am I going to have to code something up?Thanks!",
"username": "Phillip_DuMas"
},
{
"code": "",
"text": "Hello @Phillip_DuMas,The simplest way to query MongoDB data is to use the GUI tool, MongoDB Compass. You can connect to the database and start accessing the data and write the required queries. The Compass can be downloaded on to your computer, and start using it. Note that its a desktop GUI tool. The User Manual has further instructions:",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "I agree, and I use compass for 95% of my desktop management but it would take a week to train the sales team to use it, and even then I don’t think they would understand. I am thinking of a website front end with canned queries where they just fill in the search terms or something.",
"username": "Phillip_DuMas"
},
{
"code": "",
"text": "Simple search GUI has text inputs to enter some search text, lists to select from, radio buttons and check boxes - some combination of these widgets. The user enters, selects and checks some of these and click a button and wait to get a result in table or a list or just a number as a message.When the user requests for a result, the inputs from the user are sent to the back-end / application where the inputs are transformed into a MongoDB query. The query is further sent to the database server, where it is executed and the result is sent back to the application and then the user interface.This is your application. This can be a desktop application like the Compass or a web application.I agree that such an application will be easy to use - as it is totally customized for your needs.Now, this application has two main requirements - (1) the data being queried (the data in your MongoDB database) and (2) the queries which you want for your sales team. And, you have the data and the requirements.You can possibly build such an application in-house in a few days and start using it. And you will also have an opportunity to improve and enhance it as it is being used.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Prasad this is EXACTLY what I am looking for. I lack the expertise so I will see if I can find someone on Fiverr or rent a coder to do it for me.Thank you!Phil",
"username": "Phillip_DuMas"
},
{
"code": "",
"text": "Wish it works well for you !",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Hi, i am working on such Web GUI for mongodb:MongoVision: Revolutionizing MongoDB Development. Your all-in-one web-based MongoDB GUI, solving Compass limitations. Streamline queries, aggregations, and data management with innovative features. Auto-backups before write operations, intuitive...just made it live, but It is still in progress.\nMongoVision is all-in-one web-based MongoDB GUI, solving Compass limitations. Streamline queries, aggregations, and data management with innovative features. Auto-backups before write operations, intuitive foreign key navigation, intellisense, schema detection, and workspaces. Empower developers with flexible access control. Unleash the full potential of MongoDB development.",
"username": "Paras_Saxena"
},
{
"code": "",
"text": "I am one of the many clicks on the link your provided.But I am hesitant to try your application because it is not clear what you do with the credentials we specify to access our databases. May be the landing page could provide a link that states how you handle the credentials.I will definitively create a test user with read-only privilege to see how it work.Thanks for sharing.",
"username": "steevej"
}
] | Dead simple web gui for end users | 2022-02-24T20:25:03.902Z | Dead simple web gui for end users | 3,237 |
null | [
"compass",
"sharding",
"migration"
] | [
{
"code": "I NETWORK [listener] connection accepted from 10.180.12.59:47654 #5069275 (30995 connections now open)Query failed with error code 6 and error message 'End of file' on server",
"text": "Hi all, we are having an issue with shard balancing/migration, and are looking for any insight into the cause, how it can be fixed, or experience others have had with similar issues. We are using MongoDB 3.6.23 (I know it’s old; we hope to update it soonish).Our shard balancer has been running without issue for over a year. However, we had a maintenance window last week, after which we started to see issues (incidentally we did not touch the database during this maintenance beyond creating a new index on a collection).The issue is this: when migrating records for a particular collection (not the one we created a new index on btw), we see a major disruption in our application servers’ ability to query the database. The database becomes effectively unresponsive while the migration occurs.And just to highlight - this ONLY happens for a single collection. Migration of all collections are succeeding, but only this single collection correlates with the disruption we’re seeing. The only unique characteristic I could see with the problem collection is that it is shared on the “_id” field whereas all others are sharded on a “userId” field instead.By monitoring the issue, we have been able to observe a timeline of behaviors:I NETWORK [listener] connection accepted from 10.180.12.59:56484 #5187456 (32636 connections now open)\nI - [listener] pthread_create failed: Resource temporarily unavailable\nW EXECUTOR [conn5187456] Terminating session due to error: InternalError: failed to create service entry worker threadSo yeah, we are perplexed about why we would suddenly start to see this issue after a seemingly innocuous maintenance window. At first, we thought this was a bug in our application server code or perhaps a malicious user, but simply disabling balancing of the problem collection causes the problem to disappear entirely. However, we would of course like to be able to continue balancing our collections for the health of the cluster.Has anyone encountered this behavior before? Seeing that it is specifically caused by shard migration, it seems like maybe a bug? The only thing that comes to mind immediately to fix this is to try cycling the config server primary or a “brute force” option is maybe to just restart the entire cluster during a maintenance window.",
"username": "Clark_Kromenaker"
},
{
"code": "",
"text": "Sorry, quick update/clarification. After running some more tests, we found that this issue is not related to a specific collection in the DB. Even with balancing disabled on the apparent problem collection, the exact same issue arose, but with a different collection.So, some issue here where the data migration process between shards is somehow causing the cluster to become unresponsive to queries and spool up a massive number of connections/threads while migrating chunks between shards.",
"username": "Clark_Kromenaker"
},
{
"code": "",
"text": "Just to follow up here, we tried a few different things, and this ultimately fixed the problem:We discovered that some of the machines in the cluster were using a different patch revision than other machines. A few machines were still using v3.6.5, while others were using the latest patch revision v3.6.23.After updating all machines to use v3.6.23, we found that this issue no longer occurred.Though we don’t know the exact cause, this leads us to believe that some incompatibility between these revisions was causing problems.",
"username": "Clark_Kromenaker"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Shard Migration Leads to Service Disruption | 2023-10-02T22:42:55.832Z | Shard Migration Leads to Service Disruption | 325 |
null | [] | [
{
"code": "",
"text": "Hi,\nI am trying to create a chart with the out type as “Table” so that I am display the content of a collection as a tablular out put. We want to add hyperlinks to some of the table output fields so that the user can click and go to that. At a collection level here does not seem to be a data type called URL but is there a possibility to make this work where the tabular output has some hyperlinks.Regards\nRavi",
"username": "Ravi_Shankar5"
},
{
"code": "",
"text": "Hi @Ravi_Shankar5, today we don’t support urls in tables but we are enabling this functionality early next year.",
"username": "Avinash_Prasad"
}
] | Charts : Support for Table column which is a URL | 2023-10-16T20:38:07.666Z | Charts : Support for Table column which is a URL | 205 |
null | [
"queries",
"python"
] | [
{
"code": "",
"text": "Hi! The use-case I have here is that I have a large volume of data I’m returning from a collection with one query – up to several million documents. I’m running a series of calculations on this data and generating visualizations with it. Each document is < 300B, but each query is returning millions. I’ve already indexed what I can in the collection and am using projections to return a subset of the document fields, and my actual query time itself (the execution time in the .explain() method) is reasonable. However, I’m seeing really slow times when I try to convert the cursor to a data structure that I can work with, like a list. Is there any way to better optimize this process? I’ve tried using batching and parallelization, but neither was very effective. Any advice at all is appreciated, thank you so much!",
"username": "Grace_Zhang"
},
{
"code": "",
"text": "Do your calculation with aggregation rather than downloading document and using python.",
"username": "steevej"
},
{
"code": "",
"text": "hi! i would do calculations with aggregation, but i need to have all of the fields accessible to be used in other places as well, or if i’m plotting visualizations that need all the fields, i need every field returned as well.",
"username": "Grace_Zhang"
},
{
"code": "",
"text": "If the explain plan indicates that the query time is reasonable then the issue is really about transferring million of documents from the server to your application.If mongod and your application runs on different machine then you need a faster network.If mongod and your application runs on the same machine then you need more RAM and/or CPU.If your mongod is on Atlas, then you might want to try Charts for your visualisation.More details about your system is needed to provide any more comments.One last thing aboutreally slow times when I try to convert the cursor to a data structureHow to you convert? One document at a time or you download all documents in an array and then converting all the array? The latter might require more memory because all documents in raw form and all documents in your data structure are present at the same time. Converting one by one, might reduce the memory requirement.",
"username": "steevej"
}
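A small PyMongo sketch of the one-document-at-a-time approach mentioned above (the connection string, collection and field names are placeholders). Iterating the cursor keeps memory bounded, while batch_size only tunes the number of network round trips, not the total amount transferred:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client["mydb"]["measurements"]

total = 0.0
count = 0
# project only what you need, and iterate instead of calling list(cursor)
for doc in coll.find({}, {"_id": 0, "value": 1}, batch_size=10_000):
    total += doc["value"]
    count += 1

print(total / count if count else 0.0)
```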
] | Performance issues with processing large volumes of data from a cursor | 2023-10-23T23:20:10.818Z | Performance issues with processing large volumes of data from a cursor | 198 |
null | [
"queries",
"python"
] | [
{
"code": "{\n ...\n search_preferences: {\n ...\n bad_job_titles: []\n }\n}\ndb[collection].update_one({\"id\": id}, {\"$set\": {\"search_preferences.bad_job_titles\": [\"xyz\"]}})\n{\n ...\n search_preferences: {\n ...\n bad_job_titles: []\n },\n search_preferences.bad_job_titles: [\"xyz\"]\n}\n",
"text": "I have a document with nested objects in the following format:Now, we update the nested field using the update_one method and $set operator.It works mostly, but sometimes in production we are seeing documents where instead of updating the bad_job_titles, there’s a separate field in the document as “search_preferences.bad_job_titles” in the follow format:I have been scratching my head, but not sure what’s happening here, and why it would happen only to some documents.",
"username": "Adarsh_Pugalia"
},
{
"code": "",
"text": "In principal running the same code with the same data leads to the same results. I do not see any reason why $set would start to ignore the dot notation.What I suspect is one of the following scenario:1 - someone used Compass to create the field with the dot\n2 - someone used $setField to create the field with the dot",
"username": "steevej"
}
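If such documents do turn up, one possible cleanup sketch (MongoDB 5.0+, untested here, collection name is a placeholder) uses an update pipeline with $unsetField, since the stray key contains a dot and cannot be addressed with plain dot notation:

```js
db.users.updateMany(
  { "search_preferences.bad_job_titles": { $exists: true } },  // dot notation also matches the nested field; refine the filter as needed
  [
    {
      $replaceWith: {
        $unsetField: {
          field: { $literal: "search_preferences.bad_job_titles" },  // literal key containing a dot
          input: "$$ROOT"
        }
      }
    }
  ]
)
```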
] | Pymongo update_one issue with nested fields | 2023-10-24T05:53:45.825Z | Pymongo update_one issue with nested fields | 204 |
null | [
"unity"
] | [
{
"code": "var dbMatchgames = realm.All<DBMatchgame>().Where(i => i.PlayerIDs.Contains(MyApp.CurrentUser.Id));\nrealm.Subscriptions.Add(dbMatchgames, new SubscriptionOptions() { Name = \"MyMatchgame\" });\n\nContains",
"text": "In Unity, I’m using Realm to sync with my MongoDB database. When I try to filter the results using the following code:I get an error saying that the Contains method is not supported. How can I avoid downloading all the documents in a collection?",
"username": "Scoz_Auro"
},
{
"code": "var myUserId = MyApp.CurrentUser.Id;\n\nvar dbMatchgames = realm.All<DBMatchgame>().Filter(\"ANY PlayerIDs == $0\", myUserId);\n",
"text": "You can use the string-based query syntax:",
"username": "nirinchev"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongodb Realm doesn't support LINQ contain and any? | 2023-10-25T03:17:22.462Z | Mongodb Realm doesn’t support LINQ contain and any? | 147 |
null | [] | [
{
"code": "",
"text": "Hi there, is it possible to create our own custom shape schemes for a Choropleth chart or are you only limited to built in librarys? For example I’m working with a wireless ISP and they have their customers defined within custom regions. I would like to make a Choropleth chart with these regions.Thank you",
"username": "spencerm"
},
{
"code": "",
"text": "Sorry this is not supported at the moment. You may want to suggest this in the MongoDB Feedback Engine to see if others are looking for something similar.",
"username": "tomhollander"
},
{
"code": "",
"text": "Hey @tomhollander , is there a way to show the tooltip directly on countries rather than hover? in Choropleth maps?",
"username": "Kavya_B"
},
{
"code": "",
"text": "No you can’t, but you can reduce the opacity of the shaded regions and have the map labels show through.",
"username": "tomhollander"
},
{
"code": "",
"text": "Hi @tomhollander . Will search indexes be utilized in mongodb charts and lookups in charts?\nI have created search indexes on certain fields, and building the chart directly on dashboard. Will they be used and my queries be optimized ? I get alerts for maximum cpu utilization on loading charts, wiill that be eliminated??",
"username": "Kavya_B"
},
{
"code": "$search$match$lookup",
"text": "I believe search indexes are only used when you use the $search pipeline stage. Charts never does this on its own, but you could put this stage into a chart or data source query if you wanted to.Regular indexes can be used for filters ($match stage) or lookups ($lookup stage) built into charts.Tom",
"username": "tomhollander"
}
] | Custom Shape Schemes | 2022-04-18T13:45:02.779Z | Custom Shape Schemes | 2,295 |
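For context on the point above that Atlas Search indexes are only exercised through the $search stage, here is a hedged sketch of such a pipeline, run through pymongo for illustration; the same stages could be placed in a chart or data source query. The URI, database, collection, index name, and field names are all placeholders.

```python
from pymongo import MongoClient

coll = MongoClient("<atlas-uri>")["sample_db"]["customers"]  # hypothetical names

pipeline = [
    # $search must be the first stage; it is the only stage that uses the Atlas Search index.
    {"$search": {
        "index": "default",                               # hypothetical search index name
        "text": {"query": "fiber", "path": "plan_name"},  # hypothetical query and field
    }},
    {"$project": {"plan_name": 1, "region": 1}},          # hypothetical fields for the chart
]
for doc in coll.aggregate(pipeline):
    print(doc)
```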
null | [
"database-tools",
"backup",
"time-series"
] | [
{
"code": "2023-10-25T17:02:49.062+0000 writing soiltech.system.buckets.v2SignalData to signal_dump/soiltech/system.buckets.v2SignalData.bson.gz\n2023-10-25T17:02:49.208+0000 Failed: error getting count from db: (Unauthorized) not authorized on soiltech to execute command { count: \"system.buckets.v2SignalData\", lsid: { id: UUID(\"6d9b51c5-21a4-47ff-9b2c-3813da9b6913\") }, $db: \"soiltech\" }\n",
"text": "Hello, I am trying to backup a time-series collection with mongodump. All of my other collections are dumping out fine. However the time series collection thows this error:Using DB version: 5.0.6Any insight appreciated, thank you!",
"username": "dev_ops9"
},
{
"code": "",
"text": "Hi @dev_ops9,\nDo you have the necessary permissions to perform operations in that database?Regards",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "Hi @Fabio_Ramohitaj,Yes, it should have the necessary permisions as an “admin” role, its the only role. That role has “find” action which includes “count” I believe.All other collections (97 of them) dump fine. It is just this one collection that has a problem and it happens to be the only time-series collection.",
"username": "dev_ops9"
},
{
"code": "",
"text": "Hi @dev_ops9,\nTo remove all doubt can you create a root user and try to dump that collection with it?\nIf it doesn’t work, can you try to update the database tools to the latest version and then retry the backup operation?Regards",
"username": "Fabio_Ramohitaj"
}
] | Error dumping time-series collection with mongodump | 2023-10-25T17:56:45.691Z | Error dumping time-series collection with mongodump | 157 |
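A hedged sketch of Fabio's suggestion above: create a temporary user with the built-in root role and retry the dump with it. The user name, password, and connection details are placeholders, and the mongodump invocation is shown only as a comment; adjust it to your own deployment before use.

```python
from pymongo import MongoClient

admin_db = MongoClient("mongodb://localhost:27017")["admin"]  # hypothetical URI

# Create a temporary user with the built-in root role (remove it after the test).
admin_db.command(
    "createUser",
    "backup_root",                                   # hypothetical user name
    pwd="change-me",                                 # hypothetical password
    roles=[{"role": "root", "db": "admin"}],
)

# Then retry the dump with that user, for example:
#   mongodump --uri="mongodb://backup_root:change-me@localhost:27017/?authSource=admin" \
#             --db=soiltech --gzip --out=signal_dump
```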
null | [
"queries"
] | [
{
"code": "var METERS_PER_MILE = 1609.34\n\ndb.restaurants.find({ location: { $nearSphere: { $geometry: { type: \"Point\", coordinates: [ -73.93414657, 40.82302903 ] }, $maxDistance: 5 * METERS_PER_MILE } } })\nUncaught:\nMongoServerError: error processing query: ns=test.restaurantsTree: GEONEAR field=location maxdist=8046.7 isNearSphere=0\nSort: {}\nProj: {}\n planner returned error :: caused by :: unable to find index for $geoNear query\n$geoWithindb.restaurants.find({ location: { $geoWithin: { $centerSphere: [[-73.93414657, 40.82302903], 5 / 3963.2] } } })...\n {\n _id: ObjectId(\"55cba2476c522cafdb053b24\"),\n location: { coordinates: [ -73.9805679, 40.7659436 ], type: 'Point' },\n name: 'Cafe Atelier (Art Students League)'\n },\n {\n _id: ObjectId(\"55cba2476c522cafdb053b25\"),\n location: { coordinates: [ -73.965531, 40.765431 ], type: 'Point' },\n name: \"Donohue'S Steak House\"\n }\n]\nType \"it\" for more\n",
"text": "Just trying to follow the tutorial https://www.mongodb.com/docs/manual/tutorial/geospatial-tutorial/#sorted-with--nearsphereI’ve followed every step, until the last one:Sorted with $nearSphere\nYou may also use $nearSphere and specify a $maxDistance term in meters. This will return all restaurants within five miles of the user in sorted order from nearest to farthest:Which I got:Yet, finding the very location with $geoWithin works as expected:db.restaurants.find({ location: { $geoWithin: { $centerSphere: [[-73.93414657, 40.82302903], 5 / 3963.2] } } })What might be the problem? thx",
"username": "MBee"
},
{
"code": "var METERS_PER_MILE = 1609.34\n\ndb.restaurants.find({ location: { $nearSphere: { $geometry: { type: \"Point\", coordinates: [ -73.93414657, 40.82302903 ] }, $maxDistance: 5 * METERS_PER_MILE } } })\nUncaught:\nMongoServerError: error processing query: ns=test.restaurantsTree: GEONEAR field=location maxdist=8046.7 isNearSphere=0\nSort: {}\nProj: {}\n planner returned error :: caused by :: unable to find index for $geoNear query\n2dsphere$geoWithin$geoIntersects2dspheremongoshdb.restaurants.createIndex({ location: \"2dsphere\" })\ndb.neighborhoods.createIndex({ geometry: \"2dsphere\" })\n",
"text": "Hello @MBee ,I was able to replicate the error after I deleted the 2dsphere index.As mentioned in the DocumentationA geospatial index, and almost always improves performance of $geoWithin and $geoIntersects queries.Because this data is geographical, create a 2dsphere index on each collection using mongosh:Kindly create the index and try again.Let me know in case you face any issues, would be happy to help you! Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "Yep. I missed the creating 2dsphere index step, and having created the index everything works when I tried again.Thanks!BTW, I’ve followed every step in the tutorial https://www.mongodb.com/docs/manual/tutorial/geospatial-tutorial/#sorted-with--nearsphere, until the last one.I.e., it’s interesting that all commands except the last one work without a 2dsphere index .",
"username": "MBee"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Find location with $nearSphere | 2023-10-19T21:41:16.447Z | Find location with $nearSphere | 220 |
null | [
"aggregation",
"atlas-search"
] | [
{
"code": "{\n index: 'test',\n \"compound\": {\n \"must\":[\n {\n \"wildcard\": {\n \"query\": \"blo*\",\n \"path\": \"Text\",\n allowAnalyzedField: true\n }\n },\n {\n text: {\n query: \"blo\",\n path: \"Text\",\n fuzzy: {'maxEdits':2.0}\n }\n }\n ]\n }\n}\n",
"text": "I would like to do both wildcard and fuzzy search for a text in one call to Atlas cluster. Is there an example I can follow to do it?This is something I tried:Thank you,\nSupriya",
"username": "Supriya_Bansal"
},
{
"code": "{\n index: 'test',\n \"compound\": {\n \"must\":[\n {\n \"wildcard\": {\n \"query\": \"blo*\",\n \"path\": \"Text\",\n allowAnalyzedField: true\n }\n },\n {\n text: {\n query: \"blo\",\n path: \"Text\",\n fuzzy: {'maxEdits':2.0}\n }\n }\n ]\n }\n}\n index: 'default',\n \"compound\": {\n \"must\":[\n {\n \"wildcard\": {\n \"query\": \"Soho*\",\n \"path\": \"name\",\n allowAnalyzedField: true\n }\n },\n {\n text: {\n query: \"Clan\",\n path: \"summary\",\n fuzzy: {'maxEdits':2}\n }\n }\n ]\n }\n}\nsample_airbnb.listingsAndReviews \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"name\": {\n \"analyzer\": \"lucene.keyword\",\n \"type\": \"string\"\n },\n \"summary\": {\n \"type\": \"string\"\n }\n }\n }\n}\n",
"text": "Hi @Supriya_Bansal I tried this query and wasn’t able to reproduce. Maybe revisit your index definition. Here’s my test query:Using the sample_airbnb.listingsAndReviews sample dataset in Atlas, I also used this index definition:I hope this helps. Let me know! ",
"username": "Marcus"
},
{
"code": "",
"text": "Hi @Marcus … in your reply i noticed that field (path) is different in the wildcard and fuzzy.\nI have a similar requirement but need to perform the Wildcard and fuzzy logic on the SAME field.\nSo the value of “path:name” under both wildcard and fuzzy does not return any results for the sample_airbnb. listingsAndReviews.I tried the autocompleter index where wildcard works perfectly fine but as soon as I add Fuzzy logic into the mix, the SCORE is equal to 1 for all the documents matched, which defeats the whole purpose.So in Atlas Mongodb is it possible to perform Wildcard match search along with Fuzzy search with appropriate score values ?",
"username": "Sunil_Manoharan"
}
] | Wildcard and Fuzzy together | 2021-02-16T19:55:05.145Z | Wildcard and Fuzzy together | 2,801 |
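Regarding combining wildcard and fuzzy matching on the same field: a hedged sketch of one way the two clause types can target the same path with compound.should, so either clause can match and contribute to the score. It is shown with pymongo for illustration; the index name, path, and query terms are placeholders, and whether the resulting scores behave as needed for ranking would have to be tested.

```python
from pymongo import MongoClient

coll = MongoClient("<atlas-uri>")["sample_airbnb"]["listingsAndReviews"]  # placeholders

pipeline = [
    {"$search": {
        "index": "default",  # hypothetical search index name
        "compound": {
            "should": [
                {"wildcard": {"query": "Soho*", "path": "name", "allowAnalyzedField": True}},
                {"text": {"query": "Soho", "path": "name", "fuzzy": {"maxEdits": 2}}},
            ],
            "minimumShouldMatch": 1,
        },
    }},
    {"$project": {"name": 1, "score": {"$meta": "searchScore"}}},
    {"$limit": 5},
]
for doc in coll.aggregate(pipeline):
    print(doc)
```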
null | [] | [
{
"code": "",
"text": "Hello everyone,We have recently migrate to version 7.0.2, as result we observe a terible performance in queries that we have in the same way in our application for more than 3 years.\nWe upgrade the atlas cluser from a M20 to a M40, but it continues with a lower response time than before.Follows a simple query that is with this terrible issue for our users. find: Object\nOrgId: AAAAAAAAA\nPId: BBBBBB\nAttributeId: _CASO\nValue: Object\n$regex: .JOAO DA .\n$options: i\nsort: Object\nValue: 1Any are facing the same issue?I could migrate to atlas search, but it is a legacy system version. We will not invest to do that, it is waiting for planing the migration.",
"username": "Cleiton_dos_Santos_Garcia"
},
{
"code": ".explain(\"allPlansExecution\")",
"text": "Hi @Cleiton_dos_Santos_Garcia!My name is Chris and I’m presently a Senior Product Manager here at MongoDB after spending a number of years in our Technical Services organization. We are sorry to hear that you’re encountering issues since the upgrade and would definitely like to help figure out what is going on here.Unfortunately there is not enough information at this point for us to draw any conclusions about what might be happening or to make any recommendations. In terms of next steps, it would be very helpful if you could:Alternative methods for continuing the discussion and getting further help can be found in Request Support.Thanks again for letting us know about your experience and for the proactive steps that you have taken to attempt to mitigate the issue thus far.Best,\nChris",
"username": "Christopher_Harris"
},
{
"code": "",
"text": "I was in version 6.x probably the last as it is managed by Atlas, now it is 7.x\nAccording our users it was 10x faster.\nUnfortunatly, I do not have any environment at version 6 anymore.Follows the explain\n{\n“explainVersion”: “2”,\n“queryPlanner”: {\n“namespace”: “DBX.AttributeValues”,\n“indexFilterSet”: false,\n“parsedQuery”: {\n“$and”: [\n{\n“OId”: {\n“$eq”: “AAAAAAA”\n}\n},\n{\n“AttributeId”: {\n“$eq”: “_CASO”\n}\n},\n{\n“PId”: {\n“$eq”: “BBBBBBBB”\n}\n},\n{\n“Value”: {\n“$regex”: “JOAO ALBERTO ROBERTO”,\n“$options”: “i”\n}\n}\n]\n},\n“queryHash”: “DC93163D”,\n“planCacheKey”: “C05B5A91”,\n“maxIndexedOrSolutionsReached”: false,\n“maxIndexedAndSolutionsReached”: false,\n“maxScansToExplodeReached”: false,\n“winningPlan”: {\n“queryPlan”: {\n“stage”: “FETCH”,\n“planNodeId”: 2,\n“inputStage”: {\n“stage”: “IXSCAN”,\n“planNodeId”: 1,\n“filter”: {\n“Value”: {\n“$regex”: “JOAO ALBERTO ROBERTO”,\n“$options”: “i”\n}\n},\n“keyPattern”: {\n“AccessGroupId”: 1,\n“ProcessId”: 1,\n“AttributeId”: 1,\n“Value”: 1\n},\n“indexName”: “AccessGroupId_1_ProcessId_1_AttributeId_1_Value_1”,\n“isMultiKey”: false,\n“multiKeyPaths”: {\n“AccessGroupId”: ,\n“ProcessId”: ,\n“AttributeId”: ,\n“Value”: \n},\n“isUnique”: false,\n“isSparse”: false,\n“isPartial”: true,\n“indexVersion”: 2,\n“direction”: “forward”,\n“indexBounds”: {\n“AccessGroupId”: [\n“[\"AAAAAAA\", \"AAAAAAA\"]”\n],\n“ProcessId”: [\n“[\"BBBBBBBB\", \"BBBBBBBB\"]”\n],\n“AttributeId”: [\n“[\"_CASO\", \"_CASO\"]”\n],\n“Value”: [\n“[\"\", {})”,\n“[/JOAO ALBERTO ROBERTO/i, /JOAO ALBERTO ROBERTO/i]”\n]\n}\n}\n},\n“slotBasedPlan”: {\n“slots”: “$$RESULT=s31 env: { s2 = Nothing (SEARCH_META), s30 = PcreRegex(/JOAO ALBERTO ROBERTO/i), s9 = IndexBounds(\"field #0[‘AccessGroupId’]: [CollationKey(0x63613632633736652d386131312d346239632d616538612d303765363438653637343639), CollationKey(0x63613632633736652d386131312\"…), s3 = Timestamp(1698251553, 80) (CLUSTER_TIME), s28 = true, s13 = {\"AccessGroupId\" : 1, \"ProcessId\" : 1, \"AttributeId\" : 1, \"Value\" : 1}, s4 = 1698251555018 (NOW), s1 = TimeZoneDatabase(Pacific/Pohnpei…Africa/Mbabane) (timeZoneDB), s29 = /JOAO ALBERTO ROBERTO/i, s17 = Nothing }”,\n“stages”: “[2] nlj inner [s23, s24, s25, s26, s27] \\n left \\n [1] filter {(((s5 == s29) ?: false) || (regexMatch(s30, s5) ?: false))} \\n [1] branch {s28} [s5, s23, s24, s25, s26, s27] \\n [s6, s8, s10, s11, s12, s13] [1] ixscan_generic s9 s12 s8 s10 s11 lowPriority [s6 = 3] @\"0726b45e-2098-44ce-b055-3972a03c5877\" @\"AccessGroupId_1_ProcessId_1_AttributeId_1_Value_1\" true \\n [s7, s14, s20, s21, s22, s13] [1] nlj inner [s15, s16] \\n left \\n [1] project [s15 = getField(s18, \\“l\\”), s16 = getField(s18, \\“h\\”)] \\n [1] unwind s18 s19 s17 false \\n [1] limit 1 \\n [1] coscan \\n right \\n [1] ixseek s15 s16 s22 s14 s20 s21 [s7 = 3] @\"0726b45e-2098-44ce-b055-3972a03c5877\" @\"AccessGroupId_1_ProcessId_1_AttributeId_1_Value_1\" true \\n right \\n [2] limit 1 \\n [2] seek s23 s31 s32 s24 s25 s26 s27 @\"0726b45e-2098-44ce-b055-3972a03c5877\" true false \\n”\n}\n},\n“rejectedPlans”: \n},\n“executionStats”: {\n“executionSuccess”: true,\n“nReturned”: 0,\n“executionTimeMillis”: 34551,\n“totalKeysExamined”: 3313762,\n“totalDocsExamined”: 0,\n“executionStages”: {\n“stage”: “nlj”,\n“planNodeId”: 2,\n“nReturned”: 0,\n“executionTimeMillisEstimate”: 34550,\n“opens”: 1,\n“closes”: 1,\n“saveState”: 3728,\n“restoreState”: 3728,\n“isEOF”: 1,\n“totalDocsExamined”: 0,\n“totalKeysExamined”: 3313762,\n“collectionScans”: 0,\n“collectionSeeks”: 0,\n“indexScans”: 
0,\n“indexSeeks”: 1,\n“indexesUsed”: [\n“AccessGroupId_1_ProcessId_1_AttributeId_1_Value_1”,\n“AccessGroupId_1_ProcessId_1_AttributeId_1_Value_1”\n],\n“innerOpens”: 0,\n“innerCloses”: 0,\n“outerProjects”: ,\n“outerCorrelated”: [\n{\n“low”: 23,\n“high”: 0,\n“unsigned”: false\n},\n{\n“low”: 24,\n“high”: 0,\n“unsigned”: false\n},\n{\n“low”: 25,\n“high”: 0,\n“unsigned”: false\n},\n{\n“low”: 26,\n“high”: 0,\n“unsigned”: false\n},\n{\n“low”: 27,\n“high”: 0,\n“unsigned”: false\n}\n],\n“outerStage”: {\n“stage”: “filter”,\n“planNodeId”: 1,\n“nReturned”: 0,\n“executionTimeMillisEstimate”: 34550,\n“opens”: 1,\n“closes”: 1,\n“saveState”: 3728,\n“restoreState”: 3728,\n“isEOF”: 1,\n“numTested”: 3313761,\n“filter”: \"(((s5 == s29) ?: false) || (regexMatch(s30, s5) ?: false)) \",\n“inputStage”: {\n“stage”: “branch”,\n“planNodeId”: 1,\n“nReturned”: 3313761,\n“executionTimeMillisEstimate”: 28242,\n“opens”: 1,\n“closes”: 1,\n“saveState”: 3728,\n“restoreState”: 3728,\n“isEOF”: 1,\n“numTested”: 1,\n“thenBranchOpens”: 1,\n“thenBranchCloses”: 1,\n“elseBranchOpens”: 0,\n“elseBranchCloses”: 0,\n“filter”: \"s28 \",\n“thenSlots”: [\n{\n“low”: 6,\n“high”: 0,\n“unsigned”: false\n},\n{\n“low”: 8,\n“high”: 0,\n“unsigned”: false\n},\n{\n“low”: 10,\n“high”: 0,\n“unsigned”: false\n},\n{\n“low”: 11,\n“high”: 0,\n“unsigned”: false\n},\n{\n“low”: 12,\n“high”: 0,\n“unsigned”: false\n},\n{\n“low”: 13,\n“high”: 0,\n“unsigned”: false\n}\n],\n“elseSlots”: [\n{\n“low”: 7,\n“high”: 0,\n“unsigned”: false\n},\n{\n“low”: 14,\n“high”: 0,\n“unsigned”: false\n},\n{\n“low”: 20,\n“high”: 0,\n“unsigned”: false\n},\n{\n“low”: 21,\n“high”: 0,\n“unsigned”: false\n},\n{\n“low”: 22,\n“high”: 0,\n“unsigned”: false\n},\n{\n“low”: 13,\n“high”: 0,\n“unsigned”: false\n}\n],\n“outputSlots”: [\n{\n“low”: 5,\n“high”: 0,\n“unsigned”: false\n},\n{\n“low”: 23,\n“high”: 0,\n“unsigned”: false\n},\n{\n“low”: 24,\n“high”: 0,\n“unsigned”: false\n},\n{\n“low”: 25,\n“high”: 0,\n“unsigned”: false\n},\n{\n“low”: 26,\n“high”: 0,\n“unsigned”: false\n},\n{\n“low”: 27,\n“high”: 0,\n“unsigned”: false\n}\n],\n“thenStage”: {\n“stage”: “ixscan_generic”,\n“planNodeId”: 1,\n“nReturned”: 3313761,\n“executionTimeMillisEstimate”: 27994,\n“opens”: 1,\n“closes”: 1,\n“saveState”: 3728,\n“restoreState”: 3728,\n“isEOF”: 1,\n“indexName”: “AccessGroupId_1_ProcessId_1_AttributeId_1_Value_1”,\n“keysExamined”: 3313762,\n“seeks”: 1,\n“numReads”: 3313762,\n“indexKeySlot”: 12,\n“recordIdSlot”: 8,\n“snapshotIdSlot”: 10,\n“indexIdentSlot”: 11,\n“outputSlots”: [\n{\n“low”: 6,\n“high”: 0,\n“unsigned”: false\n}\n],\n“indexKeysToInclude”: “00000000000000000000000000001000”\n},\n“elseStage”: {\n“stage”: “nlj”,\n“planNodeId”: 1,\n“nReturned”: 0,\n“executionTimeMillisEstimate”: 0,\n“opens”: 0,\n“closes”: 0,\n“saveState”: 3728,\n“restoreState”: 3728,\n“isEOF”: 0,\n“totalDocsExamined”: 0,\n“totalKeysExamined”: 0,\n“collectionScans”: 0,\n“collectionSeeks”: 0,\n“indexScans”: 0,\n“indexSeeks”: 0,\n“indexesUsed”: [\n“AccessGroupId_1_ProcessId_1_AttributeId_1_Value_1”\n],\n“innerOpens”: 0,\n“innerCloses”: 0,\n“outerProjects”: ,\n“outerCorrelated”: [\n{\n“low”: 15,\n“high”: 0,\n“unsigned”: false\n},\n{\n“low”: 16,\n“high”: 0,\n“unsigned”: false\n}\n],\n“outerStage”: {\n“stage”: “project”,\n“planNodeId”: 1,\n“nReturned”: 0,\n“executionTimeMillisEstimate”: 0,\n“opens”: 0,\n“closes”: 0,\n“saveState”: 3728,\n“restoreState”: 3728,\n“isEOF”: 0,\n“projections”: {\n“15”: \"getField(s18, \"l\") \",\n“16”: \"getField(s18, \"h\") \"\n},\n“inputStage”: {\n“stage”: “unwind”,\n“planNodeId”: 1,\n“nReturned”: 
0,\n“executionTimeMillisEstimate”: 0,\n“opens”: 0,\n“closes”: 0,\n“saveState”: 3728,\n“restoreState”: 3728,\n“isEOF”: 0,\n“inputSlot”: 17,\n“outSlot”: 18,\n“outIndexSlot”: 19,\n“preserveNullAndEmptyArrays”: 0,\n“inputStage”: {\n“stage”: “limit”,\n“planNodeId”: 1,\n“nReturned”: 0,\n“executionTimeMillisEstimate”: 0,\n“opens”: 0,\n“closes”: 0,\n“saveState”: 3728,\n“restoreState”: 3728,\n“isEOF”: 0,\n“limit”: 1,\n“inputStage”: {\n“stage”: “coscan”,\n“planNodeId”: 1,\n“nReturned”: 0,\n“executionTimeMillisEstimate”: 0,\n“opens”: 0,\n“closes”: 0,\n“saveState”: 3728,\n“restoreState”: 3728,\n“isEOF”: 0\n}\n}\n}\n},\n“innerStage”: {\n“stage”: “ixseek”,\n“planNodeId”: 1,\n“nReturned”: 0,\n“executionTimeMillisEstimate”: 0,\n“opens”: 0,\n“closes”: 0,\n“saveState”: 3728,\n“restoreState”: 3728,\n“isEOF”: 0,\n“indexName”: “AccessGroupId_1_ProcessId_1_AttributeId_1_Value_1”,\n“keysExamined”: 0,\n“seeks”: 0,\n“numReads”: 0,\n“indexKeySlot”: 22,\n“recordIdSlot”: 14,\n“snapshotIdSlot”: 20,\n“indexIdentSlot”: 21,\n“outputSlots”: [\n{\n“low”: 7,\n“high”: 0,\n“unsigned”: false\n}\n],\n“indexKeysToInclude”: “00000000000000000000000000001000”,\n“seekKeyLow”: \"s15 \",\n“seekKeyHigh”: \"s16 \"\n}\n}\n}\n},\n“innerStage”: {\n“stage”: “limit”,\n“planNodeId”: 2,\n“nReturned”: 0,\n“executionTimeMillisEstimate”: 0,\n“opens”: 0,\n“closes”: 0,\n“saveState”: 3728,\n“restoreState”: 3728,\n“isEOF”: 0,\n“limit”: 1,\n“inputStage”: {\n“stage”: “seek”,\n“planNodeId”: 2,\n“nReturned”: 0,\n“executionTimeMillisEstimate”: 0,\n“opens”: 0,\n“closes”: 0,\n“saveState”: 3728,\n“restoreState”: 3728,\n“isEOF”: 0,\n“numReads”: 0,\n“recordSlot”: 31,\n“recordIdSlot”: 32,\n“seekKeySlot”: 23,\n“snapshotIdSlot”: 24,\n“indexIdentSlot”: 25,\n“indexKeySlot”: 26,\n“indexKeyPatternSlot”: 27,\n“fields”: ,\n“outputSlots”: \n}\n}\n},\n“allPlansExecution”: \n},\n“command”: {\n“find”: “AttributeValues”,\n“filter”: {\n“AccessGroupId”: “AAAAAAA”,\n“ProcessId”: “BBBBBBBB”,\n“AttributeId”: “_CASO”,\n“Value”: {\n“$regex”: {},\n“$options”: “i”\n}\n},\n“$db”: “”\n},\n“serverInfo”: {\n“host”: “”,\n“port”: ,\n“version”: “7.0.2”,\n“gitVersion”: “02b3c655e1302209ef046da6ba3ef6749dd0b62a”\n},\n“serverParameters”: {\n“internalQueryFacetBufferSizeBytes”: 104857600,\n“internalQueryFacetMaxOutputDocSizeBytes”: 104857600,\n“internalLookupStageIntermediateDocumentMaxSizeBytes”: 104857600,\n“internalDocumentSourceGroupMaxMemoryBytes”: 104857600,\n“internalQueryMaxBlockingSortMemoryUsageBytes”: 104857600,\n“internalQueryProhibitBlockingMergeOnMongoS”: 0,\n“internalQueryMaxAddToSetBytes”: 104857600,\n“internalDocumentSourceSetWindowFieldsMaxMemoryBytes”: 104857600,\n“internalQueryFrameworkControl”: “trySbeEngine”\n},\n“ok”: 1,\n“$clusterTime”: {\n“clusterTime”: {\n“$timestamp”: “7293935030840066050”\n},\n“signature”: {\n“hash”: “fmLmFFOv5b16TyWEmH7QdMqgU8M=”,\n“keyId”: {\n“low”: 553,\n“high”: 1686852637,\n“unsigned”: false\n}\n}\n},\n“operationTime”: {\n“$timestamp”: “7293935030840066050”\n}\n}",
"username": "Cleiton_dos_Santos_Garcia"
}
] | After migrate to version 7.0.2 using Atlas we are facing slow queries with regex | 2023-10-25T11:30:49.832Z | After migrate to version 7.0.2 using Atlas we are facing slow queries with regex | 175 |
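For anyone who wants to capture the explain output Chris asked for in this thread without using mongosh, here is a hedged pymongo sketch of running explain with "allPlansExecution" verbosity. The URI, database, collection, and filter values are placeholders modelled on the first post; substitute your own.

```python
from pymongo import MongoClient

db = MongoClient("<atlas-uri>")["DBX"]  # hypothetical URI and database name

explain = db.command(
    "explain",
    {"find": "AttributeValues",  # hypothetical collection name
     "filter": {"OrgId": "AAAAAAAAA", "PId": "BBBBBB", "AttributeId": "_CASO",
                "Value": {"$regex": "JOAO DA", "$options": "i"}}},
    verbosity="allPlansExecution",
)
stats = explain["executionStats"]
print(stats["executionTimeMillis"], stats["totalKeysExamined"], stats["totalDocsExamined"])
```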
null | [
"queries"
] | [
{
"code": "configdb.changelog.find(){\n _id: 'myserver:27017-2022-12-20T05:28:58.272-08:00-63a1b89a2cccc07e737b297',\n server: 'myserver:27017',\n shard: 'shardSrc',\n clientAddr: '',\n time: ISODate(\"2022-12-20T12:28:58.272Z\"),\n what: 'moveChunk.from',\n ns: 'mydb.mycol',\n details: {\n 'step 1 of 6': 0,\n 'step 2 of 6': 7,\n 'step 3 of 6': 115,\n 'step 4 of 6': 2182,\n 'step 5 of 6': 14,\n 'step 6 of 6': 8625,\n min: { _id: Long(\"-8078899907612850299\") },\n max: { _id: Long(\"-8078890509510611981\") },\n to: 'shardDst',\n from: 'shardSrc',\n note: 'success'\n }\n }\n {\n _id: 'myserver:27017-2022-12-20T05:28:56.578-08:00-63a1b898asdaae95291a541a',\n server: 'myserver:27017',\n shard: 'shardDst',\n clientAddr: '',\n time: ISODate(\"2022-12-20T12:28:56.578Z\"),\n what: 'moveChunk.to',\n ns: 'mydb.mycol',\n details: {\n 'step 1 of 8': 2,\n 'step 2 of 8': 1455,\n 'step 3 of 8': 4,\n 'step 4 of 8': 719,\n 'step 5 of 8': 0,\n 'step 6 of 8': 11,\n 'step 7 of 8': 0,\n 'step 8 of 8': 6935,\n min: { _id: Long(\"-8078899907612850299\") },\n max: { _id: Long(\"-8078890509510611981\") },\n to: 'shardDst',\n from: 'shardSrc',\n note: 'success'\n }\n }\n",
"text": "Hi,\nI’m investigating the performance of the slow balancer in my cluster and some steps are clearly taking longer than others, however, I can’t find anywhere an explanation of what each step of moveChunk.from and moveChunk.to means, could somebody explain or provide documentation?What I do is on the config DB use the following query db.changelog.find() and such example responses are returned:Thanks for your time",
"username": "Bartosz_Dabrowski1"
},
{
"code": "",
"text": "@Bartosz_Dabrowski1 All moveChunk steps mentioned at mongo/README.md at master · mongodb/mongo · GitHub\nSeems cloning & commit steps running for a longer duration. What is the version of your mongo? Do you see any latencies, or replication lag in source, destination & config servers?",
"username": "K.V"
},
{
"code": "tofrommoveChunk.to _id: ‘xxx-bc66fe-2:27017-2023-10-24T13:46:57.676+00:00-6537cad16a36f38b3c1d01fa',\n server: ‘xxx-bc66fe-2:27017',\n shard: 'shard-two',\n clientAddr: '',\n time: ISODate(\"2023-10-24T13:46:57.676Z\"),\n what: 'moveChunk.to',\n ns: 'config.system.sessions',\n details: {\n 'step 1 of 8': 1,\n 'step 2 of 8': 0,\n 'step 3 of 8': 1,\n 'step 4 of 8': 1,\n 'step 5 of 8': 0,\n 'step 6 of 8': 11,\n 'step 7 of 8': 0,\n 'step 8 of 8': 15,\n min: { _id: { id: new UUID(\"17000000-0000-0000-0000-000000000000\") } },\n max: { _id: { id: new UUID(\"17400000-0000-0000-0000-000000000000\") } },\n to: 'shard-two',\n from: 'shard-one',\n note: 'success'\n }\n }\nmoveChunk.from {\n _id: ‘xxx-bc66fe-2:27017-2023-10-24T14:06:33.851+00:00-6537cf696a36f38b3c1d8453',\n server: ‘xxx-bc66fe-2:27017',\n shard: 'shard-two',\n clientAddr: '',\n time: ISODate(\"2023-10-24T14:06:33.851Z\"),\n what: 'moveChunk.from',\n ns: 'config.system.sessions',\n details: {\n 'step 1 of 6': 0,\n 'step 2 of 6': 4,\n 'step 3 of 6': 10,\n 'step 4 of 6': 7,\n 'step 5 of 6': 15,\n 'step 6 of 6': 26,\n min: { _id: { id: new UUID(\"56000000-0000-0000-0000-000000000000\") } },\n max: { _id: { id: new UUID(\"56400000-0000-0000-0000-000000000000\") } },\n to: 'shard-three',\n from: 'shard-two',\n note: 'success'\n }\n }\ntopmongodmongos",
"text": "@K.V Although I am not the original poster, I am running into a similar issue on MongoDB 6.0, where RemoveShard takes 30 minutes, even with no databases written to the cluster. (3 shards, one config server) somehow, chunk migration is taking a long time.(I have played around with removing and adding shards, so I apologise if the field to and from are confusing.)\nSpecifically here are my values for moveChunk.to:and moveChunk.from:looking at top on each shard (and config server), I see low consumption from mongod and mongos processes.Can you help me identify which steps these are? And what issues they might correspond with? I had a hard time identifying which step corresponds to which action via the link you sent.",
"username": "Mia_Altieri"
},
{
"code": "_waitForDelete",
"text": "More info _waitForDelete is unset in my charm, so I find it odd that the cleanup phase might be blocking here. When I did some digging in the docs I got a little more confused.From docs: \"By default, the balancer does not wait for the on-going migration’s delete phase to complete before starting the next chunk migration. \"But from the docs “MongoDB can perform parallel data migrations, but a shard can participate in at most one migration at a time.”So when a single shard is draining, I dont understand how \" the balancer does not wait for the on-going migration’s delete phase to complete\" if \"a shard can participate in at most one migration at a time.”",
"username": "Mia_Altieri"
},
{
"code": "sh.status() database: { _id: 'config', primary: 'config', partitioned: true },\n collections: {\n 'config.system.sessions': {\n shardKey: { _id: 1 },\n unique: false,\n balancing: true,\n chunkMetadata: [ { shard: 'shard-two', nChunks: 1024 } ],\n chunks: [\n 'too many chunks to print, use verbose if you want to force print'\n ],\n tags: []\n }\nmongod2023-10-25T11:49:21Z charmed-mongodb.mongod[4753]: {\"t\":{\"$date\":\"2023-10-25T11:49:21.835+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn179\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"admin.$cmd\",\"command\":{\"_shardsvrMoveRange\":\"config.system.sessions\",\"toShard\":\"shard-one\",\"min\":{\"_id\":{\"id\":{\"$uuid\":\"83400000-0000-0000-0000-000000000000\"}}},\"waitForDelete\":false,\"epoch\":{\"$oid\":\"6538e3c125ae3b07d28d5eaf\"},\"fromShard\":\"shard-three\",\"maxChunkSizeBytes\":200000,\"forceJumbo\":2,\"secondaryThrottle\":false,\"writeConcern\":{\"w\":1,\"wtimeout\":0},\"$clusterTime\":{\"clusterTime\":{\"$timestamp\":{\"t\":1698234561,\"i\":1}},\"signature\":{\"hash\":{\"$binary\":{\"base64\":\"BDUe5gGly3g4iPDSiuCwtTRJTLU=\",\"subType\":\"0\"}},\"keyId\":7293828730399490051}},\"$configTime\":{\"$timestamp\":{\"t\":1698234561,\"i\":1}},\"$topologyTime\":{\"$timestamp\":{\"t\":1698233824,\"i\":2}},\"mayBypassWriteBlocking\":false,\"$db\":\"admin\"},\"numYields\":0,\"reslen\":236,\"locks\":{},\"writeConcern\":{\"w\":1,\"wtimeout\":0,\"provenance\":\"clientSupplied\"},\"remote\":\"10.18.246.22:60792\",\"protocol\":\"op_msg\",\"durationMillis\":536}}\nshard_key shardKey: { _id: 1 },",
"text": "I am now getting closer to understanding the issue, as I have now been able to consistently re-create the issue. The issue occurs when I remove the shard that hosts the chunks for the config database.From sh.status():From logs of mongod on the shard that is getting removed:I looked at a similar post on the forum and saw a message from @Doug_Duncan suggesting looking at how shard_key is defined. From what I can tell the shard key for this collection is shardKey: { _id: 1 }, I still am new to shard keys and need to read up on the significance of this. But perhaps that might be the issue",
"username": "Mia_Altieri"
},
{
"code": "mongodmongos",
"text": "So now I have a new question, I’ve read about config.system.sessions and since this is setup + monitored by mongod/mongos it makes me wonder two things:From my understanding it takes 20-30 minutes to drain this collection when it is on a single shard. 10ish minutes when it is balanced.",
"username": "Mia_Altieri"
}
] | Explain steps moveChunk operation in config db changelog | 2022-12-20T14:17:14.672Z | Explain steps moveChunk operation in config db changelog | 1,526 |
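A hedged sketch of pulling the same migration timings programmatically and checking the balancer state, which can help correlate slow steps with ongoing balancing activity. It uses the standard balancerStatus command and the config.changelog collection already referenced in this thread; the mongos address is a placeholder, and it is shown with pymongo for illustration.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://<mongos-host>:27017")  # hypothetical mongos address

# Is a balancing round currently in progress?
print(client.admin.command("balancerStatus"))

# Recent chunk migrations with their per-step timings (same documents shown above).
recent = (client["config"]["changelog"]
          .find({"what": {"$in": ["moveChunk.from", "moveChunk.to"]}})
          .sort("time", -1)
          .limit(10))
for entry in recent:
    print(entry["time"], entry["what"], entry["details"])
```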
null | [] | [
{
"code": "https://*.my-subdomain.netlify.app",
"text": "Hi, we are using deploy previews for our PRs, and each one has its own subdomain.Is it possible to use wildcards within the allowed_request_origins property?Ideally we should be able to set something like https://*.my-subdomain.netlify.app",
"username": "Cesar_Varela"
},
{
"code": "",
"text": "No one? How do people work around this? No one uses deploy previews? People use deploy previews and just add allowed origins manually?",
"username": "Cesar_Varela"
},
{
"code": "",
"text": "@Cesar_Varela did you find any solution to this?I’m facing the same problem here using Vercel.",
"username": "Guima_Ferreira"
},
{
"code": "",
"text": "Nope, and no answers more than a year later… it is a bit discouraging around here.",
"username": "Cesar_Varela"
},
{
"code": "",
"text": "Felt, also using Vercel and have this exact problem. I’ll come back here if I ever find a workaround.",
"username": "Samuel_Ifeanyi"
},
{
"code": "",
"text": "+1 Mongo team! Very important to app workflows",
"username": "beef1"
}
] | Using wildcars within allowed_request_origins | 2021-11-22T20:33:12.810Z | Using wildcars within allowed_request_origins | 2,465 |
null | [
"aggregation",
"java",
"spring-data-odm"
] | [
{
"code": "pathidsQuery query = new Query(Criteria.where(\"ancestors\").is(null).and(\"path\").in(ids));\nvar data = mongoTemplate.findDistinct(query, \"path\", Orders.class, String.class);\n",
"text": "I’ve a mongo collection having around 550,000 documents. Document has an array field path on which i have a below query in my java code. This field is indexed.Problem is ids in below query can go up to 6000 causing the query to take ~8 secs. Tried to use aggregator to bring it down but no luck.Could some one please guide here what else can be done.",
"username": "Arun_Raj_Singh"
},
{
"code": "",
"text": "Hi @Arun_Raj_Singh and welcome to MongoDB community forums!!Based on the above information shared, could you help me with some details:Regards\nAasawari",
"username": "Aasawari"
}
] | Optimize mongo $in query | 2023-10-24T01:57:07.242Z | Optimize mongo $in query | 188 |
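To help with the index-usage and timing questions raised above, it may be useful to capture an explain plan for the equivalent distinct operation. A hedged sketch, shown with pymongo rather than Spring Data for illustration; the database and collection names are placeholders, and the list of ids is a stand-in for the ~6000 real values.

```python
from pymongo import MongoClient

db = MongoClient()["mydb"]                 # hypothetical database name
ids = ["id-%d" % i for i in range(6000)]   # stand-in for the ~6000 ids

plan = db.command(
    "explain",
    {"distinct": "orders",                 # hypothetical collection name
     "key": "path",
     "query": {"ancestors": None, "path": {"$in": ids}}},
    verbosity="executionStats",
)
stats = plan["executionStats"]
print(stats["executionTimeMillis"], stats["totalKeysExamined"], stats["totalDocsExamined"])
```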
null | [] | [
{
"code": "",
"text": "Hello,After performing a minor upgrade from MongoDB 6.0.8 to 6.0.11 I’m receiving the bellow spam in logs.\n{“t”:{“$date”:“2023-10-16T08:30:36.811+00:00”},“s”:“I”, “c”:“-”, “id”:4939300, “ctx”:“monitoring-keys-for-HMAC”,“msg”:“Failed to refresh key cache”,“attr”:{“error”:“KeyNotFound: No keys found after refre\nsh”,“nextWakeupMillis”:22200}}\n{“t”:{“$date”:“2023-10-16T08:30:59.031+00:00”},“s”:“I”, “c”:“-”, “id”:4939300, “ctx”:“monitoring-keys-for-HMAC”,“msg”:“Failed to refresh key cache”,“attr”:{“error”:“KeyNotFound: No keys found after refre\nsh”,“nextWakeupMillis”:22400}}\n{“t”:{“$date”:“2023-10-16T08:31:21.442+00:00”},“s”:“I”, “c”:“-”, “id”:4939300, “ctx”:“monitoring-keys-for-HMAC”,“msg”:“Failed to refresh key cache”,“attr”:{“error”:“KeyNotFound: No keys found after refre\nsh”,“nextWakeupMillis”:22600}}\n{“t”:{“$date”:“2023-10-16T08:31:44.060+00:00”},“s”:“I”, “c”:“-”, “id”:4939300, “ctx”:“monitoring-keys-for-HMAC”,“msg”:“Failed to refresh key cache”,“attr”:{“error”:“KeyNotFound: No keys found after refre\nsh”,“nextWakeupMillis”:22800}}\n{“t”:{“$date”:“2023-10-16T08:32:06.883+00:00”},“s”:“I”, “c”:“-”, “id”:4939300, “ctx”:“monitoring-keys-for-HMAC”,“msg”:“Failed to refresh key cache”,“attr”:{“error”:“KeyNotFound: No keys found after refre\nsh”,“nextWakeupMillis”:23000}}\n{“t”:{“$date”:“2023-10-16T08:32:29.891+00:00”},“s”:“I”, “c”:“-”, “id”:4939300, “ctx”:“monitoring-keys-for-HMAC”,“msg”:“Failed to refresh key cache”,“attr”:{“error”:“KeyNotFound: No keys found after refre\nsh”,“nextWakeupMillis”:23200}}\n{“t”:{“$date”:“2023-10-16T08:32:53.114+00:00”},“s”:“I”, “c”:“-”, “id”:4939300, “ctx”:“monitoring-keys-for-HMAC”,“msg”:“Failed to refresh key cache”,“attr”:{“error”:“KeyNotFound: No keys found after refre\nsh”,“nextWakeupMillis”:23400}}\n{“t”:{“$date”:“2023-10-16T08:33:16.535+00:00”},“s”:“I”, “c”:“-”, “id”:4939300, “ctx”:“monitoring-keys-for-HMAC”,“msg”:“Failed to refresh key cache”,“attr”:{“error”:“KeyNotFound: No keys found after refre\nsh”,“nextWakeupMillis”:23600}}Cluster status look ok and no error was observed there.Any ideas why is happening this?Thank you!",
"username": "Flavius-Bogdan_Ciocirdel"
},
{
"code": "",
"text": "Hello,Any news?Thank you!",
"username": "Flavius-Bogdan_Ciocirdel"
}
] | No keys found after refresh | 2023-10-16T09:02:22.100Z | No keys found after refresh | 241 |
null | [
"canberra-mug"
] | [
{
"code": "",
"text": "We will be launching the Canberra MongoDB User Group on Wednesday 25th October - in conjunction with our friends at AWS.We are delighted to announce that we will be joined by Shahab Qamar from the National Film and Sound Archives (NFSA)!How MongoDB helped NFSA model its catalogue data into MongoDB Atlas, covering:Shahab Qamar, Software Engineering Manager, NFSA\nShahab currently leads the software engineering team at the NFSA which develops innovative solutions for the whole organisation in areas of workflow automation, search (lexical, LLM-based), speech recognition, named entity recognition, knowledge graphs and API development at scale. Previously, Shahab worked with Government and Academic institutions delivering content management and search related solutions.\nShahab is always on the lookout for interesting engineering problems that are meant to make our lives a little easier. Shahab also holds a PhD in Machine Learning in Smart Transportation Systems. In his thesis, he proposed an algorithm that assists drivers find suitable parking spots on busy CBD streets using queuing theory and neural nets.RSVP Here: Inaugaural MongoDB Canberra Meetup, Wed, Oct 25, 2023, 5:30 PM | MeetupEvent Type: In-Person\nLocation: AWS Office: 64 Northbourne Ave 64 Northbourne Ave · Canberra, AC",
"username": "Harshit"
},
{
"code": "",
"text": "Thank you everyone for joining and being a part of the first Canberra MongoDB User Group.\nPlease join the group to stay abreast of the upcoming events.IMG_66111920×1440 243 KB\nIMG_66091920×1440 178 KB",
"username": "Harshit"
}
] | Canberra MUG: Inaugaural MongoDB Canberra Meetup | 2023-10-23T10:58:36.554Z | Canberra MUG: Inaugaural MongoDB Canberra Meetup | 317 |
null | [
"aggregation",
"queries"
] | [
{
"code": "",
"text": "Currently we use AWS open search for our searching use case, but we want to try atlas search, but I’m not sure if we can give weightage to every path we want to search on, like there is query term called ‘HAT’ and I want to search it on multiple path with different different weightage, but I’m not able to find the way to do it on atlas, as I can boost my search but that work for a whole query not for each fields. As I’m very new to Atlas Search, any one please help me out in this.So basically we are migrating from AWS OpenSearch to Atlas Search, so we need to change our query accordingly without affecting our logic.",
"username": "Hemendra_Chaudhary"
},
{
"code": "",
"text": "Hi @Hemendra_Chaudhary and welcome to MongoDB community forums!!Based on the information share, it would be helpful if you could share more details on the expectations and requirements with examples like:However to begin with, the documents which are returned by Atlas search queries are assigned a score based on the relevance. I believe, the documentations on Score the Documents in the Results and How to Run Atlas Search Compound Queries with Weighted Fields would be a good starting point to learn more about scoring and weights in Atlas search.Warm regards\nAasawari",
"username": "Aasawari"
}
] | How to search query using search index with weightage in each path | 2023-10-23T14:18:37.715Z | How to search query using search index with weightage in each path | 192 |
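In the spirit of the weighted-fields tutorial linked above, here is a hedged sketch of per-field weighting using compound.should with a score boost on each clause, so matches in one path count more than matches in another. The index name, paths, boost values, and collection details are placeholders, and it is shown with pymongo for illustration.

```python
from pymongo import MongoClient

coll = MongoClient("<atlas-uri>")["shop"]["products"]  # hypothetical names

pipeline = [
    {"$search": {
        "index": "default",  # hypothetical search index name
        "compound": {
            "should": [
                # Matches in the title count three times as much as matches in the description.
                {"text": {"query": "HAT", "path": "title",
                          "score": {"boost": {"value": 3}}}},
                {"text": {"query": "HAT", "path": "description",
                          "score": {"boost": {"value": 1}}}},
            ],
        },
    }},
    {"$project": {"title": 1, "score": {"$meta": "searchScore"}}},
]
for doc in coll.aggregate(pipeline):
    print(doc)
```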
null | [
"aggregation",
"node-js"
] | [
{
"code": " const client = new MongoClient(this.configService.get('MONGO_LINK') || '');\n const namespace = 'sample_mflix.embedded_movies';\n const [dbName, collectionName] = namespace.split('.');\n const collection = client.db(dbName).collection(collectionName);\n\n const vectorStore = new MongoDBAtlasVectorSearch(\n new OpenAIEmbeddings({\n openAIApiKey: this.configService.get('OPENAI_API_KEY'),\n }),\n {\n collection,\n indexName: 'default', // The name of the Atlas search index. Defaults to \"default\"\n textKey: 'plot', // The name of the collection field containing the raw content. Defaults to \"text\"\n embeddingKey: 'plot_embedding', // The name of the collection field containing the embedded text. Defaults to \"embedding\"\n },\n );\n\n const resultOne: any = await vectorStore.similaritySearch(question, 4);\n console.log(resultOne);\n{\n \"mappings\": {\n \"fields\": {\n \"plot_embedding\": [\n {\n \"dimensions\": 1536,\n \"similarity\": \"cosine\",\n \"type\": \"knnVector\"\n }\n ]\n }\n }\n}\n",
"text": "Hello,since today I’m getting this error message: “$vectorSearch is not allowed or the syntax is incorrect, see the Atlas documentation for more information”and this is my function that get called.And this is my search index:I use langchain 0.0.166What do I missing?",
"username": "Eugen_Gette"
},
{
"code": "MongoDBAtlasVectorSearchLangChain$vectorSearch{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"embedding\": {\n \"dimensions\": 1536,\n \"similarity\": \"cosine\",\n \"type\": \"knnVector\"\n },\n \"location\": {\n \"type\": \"geo\"\n }\n }\n }\n}\n{\n \"$vectorSearch\": {\n \"index\": \"vector_geo_index\",\n \"path\": \"embedding\",\n \"numCandidates\": 10,\n \"limit\": 2,\n \"queryVector\": [...]\n }\n}\n$vectorSearchknnBeta{\n \"$search\": {\n \"index\": \"vector_geo_index\",\n \"knnBeta\": {\n \"vector\": [...],\n \"path\": \"embedding\",\n \"k\": 2\n }\n }\n}\n",
"text": "I have the same problem however my code worked a few weeks ago. I also use MongoDBAtlasVectorSearch from LangChain. I did some research and found out that even direct querying from Atlas UI page using $vectorSearch doesn’t work anymore.My search index looks like thisThere is also a geo-index part here but it doesn’t matter. Here is the search query:Error: Failed to run this query. Reason: $vectorSearch is not allowed or the syntax is incorrect, see the Atlas documentation for more informationThis did work previously. Also, the old (deprecated in favor of $vectorSearch) knnBeta operator works fine:My guess it is not related to Langchain. It is probably something on the Atlas side. Docs say that the vector search is a preview feature and not recommended for prod. Maybe it is not very stable at the moment.",
"username": "Alexey_Zelenovsky"
},
{
"code": "",
"text": "Hey Alexey_Zelenovsky, I am also facing the same issue. Not able to perform vector search with mongoDB using langchain documentation and even MongoDBs own documentation . Can you suggest some other ways using which I can store the vector embeddings in database and perform a vector search on same.",
"username": "Deepak_Dev"
},
{
"code": "MongoDBAtlasVectorSearch$vectorSearch$searchknnBetadb = client[database_name]\nmy_collection = db[\"collection_name\"]\nembeddings = OpenAIEmbeddings()\nvectorstore = MongoDBAtlasVectorSearch(my_collection, embeddings, index_name=\"my_search_index\")\nsearch_result = self.vectorstore.similarity_search(query=\"tacos\", k=2)\ndb = client[database_name]\nmy_collection = db[\"collection_name\"]\nembeddings = OpenAIEmbeddings()\nmongo_query = [\n {\n \"$search\": {\n \"index\": \"vector_geo_index\",\n \"knnBeta\": {\n \"vector\": embeddings.embed_query('tacos'),\n \"path\": \"embedding\",\n \"k\": 2,\n }\n }\n }\n ]\n\nsearch_result = my_collection.aggregate(mongo_query);\nvectorstore.similarity_search",
"text": "@Deepak_Dev, I decided not to use MongoDBAtlasVectorSearch because there is no way I can switch it from $vectorSearch to $search with the knnBeta operator.I replaced thiswith something like this:This code does the same as vectorstore.similarity_search internally.\nI don’t know if I’m allowed to share a link to my Github here. DM me if you want to see the whole implementation.",
"username": "Alexey_Zelenovsky"
},
{
"code": "",
"text": "@Alexey_Zelenovsky , You can share your github link over here only. As In this community I can’t DM you or you can tell me a way through which I can connect with you and get the the github link. And one more thing, KnnBeta operator is deprecated now. according to below mentioned documentation.https://www.mongodb.com/docs/atlas/atlas-search/knn-beta/#:~:text=You%20can%20run%20kNN%20queries,than%20the%20value%20for%20%24limit%20.",
"username": "Deepak_Dev"
},
{
"code": "\"langchain\": \"^0.0.165\",",
"text": "I’ve kind of “solved” it for me by using an older version of langchain.This problem occurs since version \"langchain\": \"^0.0.165\",\nSo just use version “langchain”: “^0.0.164”,",
"username": "Eugen_Gette"
},
{
"code": "$vectorSearch",
"text": "Hi everybody, I think it might not be related to LangChain, as I tried the $vectorSearch tutorial provided my MongoDB using their own movies dataset, and I’m facing the exact same issue.",
"username": "Victor_Carrara"
},
{
"code": "",
"text": "Same here, @Victor_Carrara , It is not langchain issue.",
"username": "Deepak_Dev"
},
{
"code": "knnBeta$vectorSearch$vectorSearchknnBeta$vectorSearchknnBeta",
"text": "It is true that knnBeta is deprecated in favor of $vectorSearch. However, the $vectorSearch itself is a Preview feature which means it is not stable. So switching to the knnBeta operator is a temporary solution until the $vectorSearch starts working properly.Here is my implementation on Github of the vector search with knnBeta if anybody is looking for an example.",
"username": "Alexey_Zelenovsky"
},
{
"code": "",
"text": "I’m not using LangChain, but running into this problem as well. Deeply disappointing, with all the “you don’t need a vector database” messaging Mongo’s been using.",
"username": "Mannie_Schumpert"
},
{
"code": "“$vectorSearch is not allowed or the syntax is incorrect, see the Atlas documentation for more information”$vectorSearchLangChainknnBeta",
"text": "Hello all, apologies for the error and for the delayed response. To clarify, the error that you’re seeing here “$vectorSearch is not allowed or the syntax is incorrect, see the Atlas documentation for more information” is due to an issue in M0-M5 Clusters which will be resolved after our deployment tomorrow, it is not related to LangChain.The origin of this error stems from a slight difference in how our shared tier clusters work when compared to dedicated tier, hence clusters M10 and above are not affected.To provide a bit more context, $vectorSearch is our target interface for Vector Search in the long term, and as such we’ve updated LangChain to utilize this new syntax. That said, we do not currently have plans to remove support in Atlas for knnBeta, and will not do so until every Cluster in Atlas is running a version that supports $vectorSearch. We will provide additional guidance ahead of any changes to support for knnBeta.And one last clarification, the entire Vector Search service is in Preview, whether it be through $vectorSearch or knnBeta.Finally, for anyone who would like to chat in more depth please feel free to follow up here or reach out to me directly at Benjamin.flast@mongodb.com.",
"username": "Benjamin_Flast"
},
{
"code": "",
"text": "Thank you so much, Benjamin!\nI’d much prefer to keep my company’s vector search needs here with our preferred vendor, so a near-future resolution of this issue is very good news.\n(BTW, our production cluster is M10, but our dev cluster, where I was working when I encountered the error, is on a shared tier.)",
"username": "Mannie_Schumpert"
},
{
"code": "",
"text": "Hey Benjamin! thanks for the update. I still have the problem and before to restart everything, I would like to know if the fix has been deployed?\nif so, how can i know / check if my cluster has been updated?Thanks",
"username": "Venturio_Labs"
},
{
"code": "$vectorSearch",
"text": "Hey Benjamin, do you know when this $vectorSearch gonna be released? It has been a blocker for our project. We have vector DB in pinecone but were planning to move to MongoDB for vector search capability and now we are basically stuck. Would really appreciate it if you let us know the timeline.This was hyped but now has been a real pain for developers like us who trusted the MongoDB ecosystem. Would appreciate it if Mongo could communicate it better.",
"username": "810dded0833b489ba4ca2b46bb0e6d0"
},
{
"code": "knnBeta",
"text": "Hello All,Just to provide a quick update, the update has been made, it’s just making it through all of the testing environments. This should be available on shared tier clusters (M0-M5) by the end of day Monday.As a quick reminder, this can be accessed on dedicated cluster M10 and above today. Or you can continue to query utilizing the knnBeta operator until Monday.",
"username": "Benjamin_Flast"
},
{
"code": "",
"text": "Hi Everyone!Thank you all for your patience, this issue is now resolved. You should be able to use the $vectorSearch stage on shared tier clusters (M0-M5) running on MongoDB versions 6.0.11 and 7.0.2.",
"username": "Henry_Weller"
},
{
"code": "",
"text": "I am still getting this same error. I am on version 7.0.2 M30. What am I missing? Your previous message 5 days ago says M10 and above can access today. This is massively frustrating …",
"username": "Dustin_Dier"
},
{
"code": "",
"text": "You’ll have to upgrade to 7.0.2",
"username": "Henry_Weller"
},
{
"code": "",
"text": "I just updated my message please review it.",
"username": "Dustin_Dier"
},
{
"code": "",
"text": "It is working for me now",
"username": "Vitor_Batista"
}
] | $vectorSearch is not allowed | 2023-10-13T08:57:21.063Z | $vectorSearch is not allowed | 1,039 |
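For reference, once a cluster runs a version that supports it, the $vectorSearch stage discussed in this thread can be issued directly from a driver. A hedged pymongo sketch follows; the URI, index name, and the query vector are placeholders (the path and dimensions mirror the sample_mflix example from the first post), and the dummy zero vector would be replaced with a real embedding of the query text.

```python
from pymongo import MongoClient

coll = MongoClient("<atlas-uri>")["sample_mflix"]["embedded_movies"]  # hypothetical URI

query_vector = [0.0] * 1536  # placeholder for the embedding of the query text

pipeline = [
    {"$vectorSearch": {
        "index": "default",            # hypothetical vector search index name
        "path": "plot_embedding",
        "queryVector": query_vector,
        "numCandidates": 100,
        "limit": 4,
    }},
    {"$project": {"title": 1, "plot": 1, "score": {"$meta": "vectorSearchScore"}}},
]
for doc in coll.aggregate(pipeline):
    print(doc)
```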
null | [
"dot-net"
] | [
{
"code": "",
"text": "MongoDB Community Server: 5.0.18\nTest Application uses .NET CSharp Driver 2.21.0Periodically in the MongoDB logs I see a message “Deprecated operation requested. The client driver may require an upgrade in order to ensure compatibility with future server versions.”However I am using a fairly recent version of the CSharp driver.My theory is that the driver is detecting the max wire version of the server - 13 I believe for MongoDB 5.0 - and is issuing a command using a deprecated format for this reason. I am hoping that upgrading to a newer version of the MongoDB server (say 6.0+) would lead to the CSharp driver detecting a higher max wire version, sending a command using the new message format and therefore the deprecation warning message would be eliminated.Wondering if anyone could confirm this is indeed what is happening? Thanks.",
"username": "David_O_Shea"
},
{
"code": "OP_QUERYgetLastError{\n \"t\":{\n \"$date\":\"2022-04-29T15:28:30.850-06:00\"},\n \"s\":\"W\",\n \"c\":\"COMMAND\",\n \"id\":5578800,\n \"ctx\":\"conn1384\",\n \"msg\":\"Deprecated operation requested. For more details see https://dochub.mongodb.org/core/legacy-opcode-compatibility\",\n \"attr\":{\n \"op\":\"getLastError\",\n \"clientInfo\":{\n \"application\":{\"name\":\"MongoDB Shell\"},\n \"driver\":{\"name\":\"MongoDB Internal Client\",\"version\":\"5.0.7\"},\n \"os\":{\"type\":\"Darwin\",\"name\":\"Mac OS X\",\"architecture\":\"x86_64\",\"version\":\"21.4.0\"}\n }\n }\n}\ngetLastErrormongomongod",
"text": "Hi, @David_O_Shea,Welcome to the MongoDB Community Forums. Thank you for reaching out about the “Deprecated operation requested” warning when using the .NET/C# Driver. This warning indicates that a legacy op code such as OP_QUERY or deprecated command such as getLastError was received by the server. The following is an example:We can see that the deprecated command was getLastError and that it was sent by the legacy mongo shell version 5.0.7.Searching through my mongod logs, I do not see any deprecated commands coming from the .NET/C# Driver. Please file a CSHARP JIRA ticket with the relevant complete log lines so that we can investigate further.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "{\"t\":{\"$date\":\"2023-10-25T08:47:03.465+00:00\"},\"s\":\"W\", \"c\":\"NETWORK\", \"id\":23234, \"ctx\":\"conn13587\",\"msg\":\"No SSL certificate provided by peer\"}\n{\"t\":{\"$date\":\"2023-10-25T08:47:03.719+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn13587\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"x.x.x.x:36848\",\"client\":\"conn13587\",\"doc\":{\"application\":{\"name\":\"MyTestApp\"},\"driver\":{\"name\":\"mongo-csharp-driver\",\"version\":\"2.21.0.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"Linux 5.10.192-183.736.amzn2.x86_64 #1 SMP Wed Sep 6 21:15:41 UTC 2023\",\"architecture\":\"x86_64\",\"version\":\"5.10.192-183.736.amzn2.x86_64\"},\"platform\":\".NET 7.0.4\"}}}\n{\"t\":{\"$date\":\"2023-10-25T08:47:03.719+00:00\"},\"s\":\"D1\", \"c\":\"ACCESS\", \"id\":20226, \"ctx\":\"conn13587\",\"msg\":\"Returning user from cache\",\"attr\":{\"user\":{\"user\":\"mytestapp_user\",\"db\":\"admin\"}}}\n{\"t\":{\"$date\":\"2023-10-25T08:47:03.719+00:00\"},\"s\":\"D1\", \"c\":\"ACCESS\", \"id\":20226, \"ctx\":\"conn13587\",\"msg\":\"Returning user from cache\",\"attr\":{\"user\":{\"user\":\"mytestapp_user\",\"db\":\"admin\"}}}\n{\"t\":{\"$date\":\"2023-10-25T08:47:03.719+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn13587\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"admin.$cmd\",\"appName\":\"MyTestApp\",\"command\":{\"isMaster\":1,\"helloOk\":true,\"client\":{\"application\":{\"name\":\"MyTestApp\"},\"driver\":{\"name\":\"mongo-csharp-driver\",\"version\":\"2.21.0.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"Linux 5.10.192-183.736.amzn2.x86_64 #1 SMP Wed Sep 6 21:15:41 UTC 2023\",\"architecture\":\"x86_64\",\"version\":\"5.10.192-183.736.amzn2.x86_64\"},\"platform\":\".NET 7.0.4\"},\"compression\":[],\"saslSupportedMechs\":\"admin.mytestappdb\",\"speculativeAuthenticate\":{\"saslStart\":1,\"mechanism\":\"SCRAM-SHA-256\",\"payload\":{\"$binary\":{\"base64\":\"biwsbj1jY3NqcG1vcmdhbmZ4c19xYSxyPUFqXVJQPUA8YUdlJ2IpWjMvO2FE\",\"subType\":\"0\"}},\"options\":{\"skipEmptyExchange\":true},\"db\":\"admin\"},\"$readPreference\":{\"mode\":\"secondaryPreferred\"},\"$db\":\"admin\"},\"numYields\":0,\"reslen\":1159,\"locks\":{},\"authorization\":{\"startedUserCacheAcquisitionAttempts\":2,\"completedUserCacheAcquisitionAttempts\":2,\"userCacheWaitTimeMicros\":1},\"remote\":\"x.x.x.x:36848\",\"protocol\":\"op_query\",\"durationMillis\":0}}\n{\"t\":{\"$date\":\"2023-10-25T08:47:03.911+00:00\"},\"s\":\"W\", \"c\":\"COMMAND\", \"id\":5578800, \"ctx\":\"conn13587\",\"msg\":\"Deprecated operation requested. The client driver may require an upgrade in order to ensure compatibility with future server versions. 
For more details see https://dochub.mongodb.org/core/legacy-opcode-compatibility\",\"attr\":{\"op\":\"query\",\"clientInfo\":{\"application\":{\"name\":\"MyTestApp\"},\"driver\":{\"name\":\"mongo-csharp-driver\",\"version\":\"2.21.0.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"Linux 5.10.192-183.736.amzn2.x86_64 #1 SMP Wed Sep 6 21:15:41 UTC 2023\",\"architecture\":\"x86_64\",\"version\":\"5.10.192-183.736.amzn2.x86_64\"},\"platform\":\".NET 7.0.4\"}}}\n{\"t\":{\"$date\":\"2023-10-25T08:47:03.911+00:00\"},\"s\":\"D1\", \"c\":\"ACCESS\", \"id\":20226, \"ctx\":\"conn13587\",\"msg\":\"Returning user from cache\",\"attr\":{\"user\":{\"user\":\"mytestapp_user\",\"db\":\"admin\"}}}\n{\"t\":{\"$date\":\"2023-10-25T08:47:03.911+00:00\"},\"s\":\"I\", \"c\":\"ACCESS\", \"id\":20250, \"ctx\":\"conn13587\",\"msg\":\"Authentication succeeded\",\"attr\":{\"mechanism\":\"SCRAM-SHA-256\",\"speculative\":true,\"principalName\":\"mytestapp_user\",\"authenticationDatabase\":\"admin\",\"remote\":\"x.x.x.x:36848\",\"extraInfo\":{}}}\n{\"t\":{\"$date\":\"2023-10-25T08:47:03.912+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn13587\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"admin.$cmd\",\"appName\":\"MyTestApp\",\"command\":{\"saslContinue\":1,\"conversationId\":1,\"payload\":\"xxx\",\"$readPreference\":{\"mode\":\"secondaryPreferred\"},\"$db\":\"admin\"},\"numYields\":0,\"reslen\":265,\"locks\":{},\"authorization\":{\"startedUserCacheAcquisitionAttempts\":1,\"completedUserCacheAcquisitionAttempts\":1,\"userCacheWaitTimeMicros\":3},\"readConcern\":{\"level\":\"local\",\"provenance\":\"implicitDefault\"},\"remote\":\"x.x.x.x:36848\",\"protocol\":\"op_query\",\"durationMillis\":0}}\n{\"t\":{\"$date\":\"2023-10-25T08:47:03.913+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn13587\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"mytestappdb.message_store\",\"appName\":\"MyTestApp\",\"command\":{\"find\":\"message_store\",\"filter\":{\"NotificationId\":\"f61a34c1-c5a7-4b25-8f4b-b0a60090c2bb\"},\"limit\":1,\"$db\":\"mytestappdb\",\"lsid\":{\"id\":{\"$uuid\":\"b78aeed8-3a39-4df6-a2a2-401cb33ce1ff\"}},\"$clusterTime\":{\"clusterTime\":{\"$timestamp\":{\"t\":1698223623,\"i\":17}},\"signature\":{\"hash\":{\"$binary\":{\"base64\":\"gGCqNiRGGh1rsEooZOmnE5UIXuM=\",\"subType\":\"0\"}},\"keyId\":7236338067701760005}}},\"planSummary\":\"IXSCAN { NotificationId: 1 }\",\"keysExamined\":0,\"docsExamined\":0,\"cursorExhausted\":true,\"numYields\":0,\"nreturned\":0,\"queryHash\":\"FF227F32\",\"planCacheKey\":\"00A59619\",\"reslen\":242,\"locks\":{\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":1}},\"Global\":{\"acquireCount\":{\"r\":1}},\"Mutex\":{\"acquireCount\":{\"r\":1}}},\"readConcern\":{\"level\":\"local\",\"provenance\":\"implicitDefault\"},\"storage\":{\"data\":{\"bytesRead\":3737,\"timeReadingMicros\":11}},\"remote\":\"x.x.x.x:36848\",\"protocol\":\"op_msg\",\"durationMillis\":0}}\n",
"text": "Before opening a Jira ticket - our further investigation indicates that this deprecated operation requested warning is associated with a MongoDB driver connection attempt. The following are the logs we have captured when this occurs with SetLogLevel(1):",
"username": "David_O_Shea"
}
] | Deprecated operation requested with latest CSharp driver | 2023-10-20T14:34:06.144Z | Deprecated operation requested with latest CSharp driver | 263 |
null | [
"java",
"kotlin"
] | [
{
"code": "return list.where{ //group 1\n ($0.quadrant == quadrant &&\n $0.due_date < Date() &&\n $0.due_date != nil &&\n $0.complete == false &&\n $0.deleted == false)\n \n || //or\n \n// group 2\n ($0.quadrant == \"q2\" &&\n $0.due_date < Date() &&\n $0.due_date != nil &&\n $0.complete == false &&\n $0.deleted == false)}\n",
"text": "I am in the process of converting from Realm-Java to Realm-Kotlin, and I had a question about grouping queries/filter. Right now I have my realm-java code return a realm result that matches 2 possible sets of queries.How would I write this in realm-kotlin so that it will return result that are either in group 1’s queries or group 2’s queries?Thanks",
"username": "Deji_Apps"
},
{
"code": "return realm.where(Tasks::class.java)\n .equalTo(\"quadrant\", quadrant)\n .lessThan(\"due_date\", now).and()\n .isNotNull(\"due_date\").and()\n .equalTo(\"complete\", false).and()\n .equalTo(\"deleted\", false)\n \n .or()\n \n .equalTo(\"quadrant\", \"q2\")\n .lessThan(\"due_date\", now).and()\n .isNotNull(\"due_date\").and()\n .equalTo(\"complete\", false).and()\n .equalTo(\"deleted\", false)\n \n .sort(mySort.sort!!, mySort.direction!!)\n .findAllAsync()\n\n",
"text": "I just realized I was working on the IOS version of my app as well and pasted in swift code instead of Kotlin. it should actually be:",
"username": "Deji_Apps"
}
] | Grouping queries/filter | 2023-10-23T11:45:52.175Z | Grouping queries/filter | 182 |
null | [
"compass",
"mongodb-shell"
] | [
{
"code": "",
"text": "How compare UUID in mongodb-shell script ?\nUUID is Object type without equals method, can’t compare with ‘==’ or ‘===’. Comparing .toString() also not work correctly.",
"username": "Kurako"
},
{
"code": "uuid1.toString('hex') === uuid2.toString('hex')",
"text": "@Kurako Something like uuid1.toString('hex') === uuid2.toString('hex') should work here ",
"username": "Anna_Henningsen"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Compare UUID in mongodb-shell script | 2023-10-25T09:19:37.809Z | Compare UUID in mongodb-shell script | 166 |
null | [] | [
{
"code": "",
"text": "Hi, I am Divyansh Agrawal, CSE Sophomore from Gurgaon. I am new to the group and wanted to know how can I stay updated about upcoming virtual or in-person meetups for MongoDB. I am really curious to connect with industry leaders and learn from them.",
"username": "Divyansh_Agrawal"
},
{
"code": "",
"text": "Hello @Divyansh_AgrawalHead over to User Groups & Events - MongoDB Developer Community Forums and see if there are any in your region.Also look at Events | MongoDB for MongoDB events or events where MongoDB will be present.",
"username": "chris"
},
{
"code": "",
"text": "Heya @chris ,\nThanks for the links . As told, I found my user group and joined it. Can’t wait to attend a event soon and connect with likeminded people and industry leaders.\nThanks",
"username": "Divyansh_Agrawal"
},
{
"code": "",
"text": "This post was flagged by the community and is temporarily hidden.",
"username": "James_Cork"
}
] | Updates for Meetups | 2023-05-16T19:38:55.987Z | Updates for Meetups | 807 |