{"_id": 0, "text": "TMP Square Characters when adding Chinese I'm trying to implement the Chinese language in my app, but I'm seeing nothing but squares. I've tried to create the character set from a file and from hex ranges, but the characters can't be added. For example, there are 3000 characters but only 17 are included."} {"_id": 0, "text": "Simulate parallel dimensions subspaces in Unity without colliding coordinate systems I've built an online game with Unity with a very large open world. The server (which can be a local player, or run as a standalone application without a local player) simulates everything around players, so, for example, if there is an object or NPC at coordinates (100,34,20) but there aren't any players nearby, then the object is not simulated. Objects are streamed dynamically from a database or generated procedurally depending on the object. Problem I want to enable players to go to other worlds. Of course it is impractical if not impossible to have global coordinates shared across all of these different worlds due to floating point limitations. I'll have to share the limited coordinate system across the simulated worlds. The issue is the server would still need to render and handle physics of multiple worlds, but could now potentially have conflicts, with objects and/or people in one world appearing in other worlds. Essentially I need a way to divide up these \"worlds\" into parallel dimensions that are executed simultaneously on the same server (in one Unity scene). Parallel Dimensions A partial solution would be to have some sort of indicator, such as an integer (I'll call it \"worldIndex\") on each object to indicate which \"world\" that object exists in. This would work for clients, since the server would be able to tell the clients only about objects that are in their world. There would still be a problem on the server itself. Every world needs to be simulated in the same coordinate space. 
For a local player I could use the \"worldIndex\" to hide objects that aren't in the local player's current \"world\". But I still need to simulate physics for every world. Unity's Layers could be used to mask physics interactions between objects. However, I would then need at least one layer per world, and I already use multiple layers for various different interactions. My thoughts are I would need something like a layer I could specify on each GameObject, or some way to group a bunch of GameObjects together and instruct the physics system to only allow interactions within the group. But as far as I know no such mechanism exists within Unity. Subspace Pockets Another thought I've had is that rather than using the Unity coordinate space directly, I could slice it up into \"subspaces\" or pockets. Each player would be in their own subspace, unless a player physically enters the same space that is already being simulated by another player, in which case that player would join the same subspace. But then I would have to shift around the coordinate systems, and it would increase the complexity of merging players into and out of these subspaces. I would also have to generate subspaces dynamically as players enter new areas that are not currently being simulated. Even if each world is extremely large, it is not necessary to use the entire Unity coordinate system around each player; a few thousand kilometers is more than enough. Overall I think something along these lines is probably the best solution. Multiple Servers I could run one server per \"world\" and have a hand-off process to make it transparent to players. But then I'd have to run multiple server instances, and players would not be able to host their own server, at least not without a lot of additional overhead. 
Question So my question is: what is the best method for essentially running multiple, parallel simulations within a single Unity scene?"} {"_id": 0, "text": "Gamemaker vs Unity2d Speed of Development I know coding, so programming is not a problem. Money is not a problem at all; I can get both GameMaker Master Collection and Unity Pro. I want to make a 2d game. So which one is better for speed of development? On which one would I be able to finish my project earlier?"} {"_id": 0, "text": "Make a light only affect one object I'm wondering if it's possible to make a light source only affect one object? I know that it's possible to use layers and culling masks, but then I would need one layer for every single object, which is not possible. Is there a way, for example, that I could make only children or siblings be affected by a light source? Or another solution? Have a nice day! Update 1"} {"_id": 0, "text": "How to clamp the rotation between two values in unity 3d How can I clamp the rotation of a cube between two values and then rotate between them gradually? Details of what I want: When I press the A or D keys, the cube should rotate on the Y axis from 0 to 13 deg gradually, and when the key is released, the cube goes back to a straight position"} {"_id": 0, "text": "Object goes straight through game object without colliding The \"shot\" is supposed to collide with the left bumper and then be destroyed, but the shot goes right through it. Both objects have a box collider. The following is the code I am using: #pragma strict var speed : int = 2; var collided_with : GameObject; function Update () { transform.Translate(Vector3(-1 * speed * Time.deltaTime, 0, 0)); } function OnCollisionEnter (col : Collision) { if (collided_with.tag == \"Left\") Destroy(gameObject); } I have made sure that all the tags are assigned correctly and that there are no spelling errors. The variable collided_with is also assigned to the left bumper. 
What am I doing wrong?"} {"_id": 0, "text": "Unity Smooth player input direction when changing camera Preface I'm very new to Unity. I'm working on an exercise using fixed cameras, like how they work in the PSX Final Fantasy games. I have the camera scripts working fine, but it's an aspect of the character control that I'm struggling with. My character moves relative to the camera's facing direction, but when the camera suddenly changes to another angle, my running direction instantly and abruptly changes too. In the Final Fantasy games it sort of keeps going roughly in the same direction you were briefly. input = new Vector2(Input.GetAxis(\"Horizontal\"), Input.GetAxis(\"Vertical\")); input = Vector2.ClampMagnitude(input, 1); // get camera direction; need to add something here for in-between camera switches! camF = Camera.main.transform.forward; camR = Camera.main.transform.right; camF.y = 0; camR.y = 0; camF = camF.normalized; camR = camR.normalized; transform.position += (camF * input.y + camR * input.x) * Time.deltaTime * 8; I saw someone with a similar game comment on an article with his solution, but I don't quite know how to put it into code. The quote: \"Player just gets a rotation based on where the camera is looking and multiplies the character's motor force by that .forward and .right. I only update that rotation after a big delta in input so you can smoothly move between shots.\" This is his small (online) game which he's talking about; I'm trying to mimic the movement, basically: https://ocias.com/games/foggy-shore Can anyone give me some pointers? Update: It doesn't quite play how it does in the demo game, but it's getting there. 
input = new Vector2(Input.GetAxis(\"Horizontal\"), Input.GetAxis(\"Vertical\")); input = Vector2.ClampMagnitude(input, 1); if (!cameraChanged) { lastCameraInput = input; camF = Camera.main.transform.forward; camR = Camera.main.transform.right; camF.y = 0; camR.y = 0; camF = camF.normalized; camR = camR.normalized; } else if (Vector3.Dot(input, lastCameraInput) < 0f) { cameraChanged = false; } transform.position += (camF * input.y + camR * input.x) * Time.deltaTime * 8;"} {"_id": 0, "text": "Add parabola curve to straight MoveTowards() movement I have some units shooting arrows; right now they move in a straight line to the target. I want them to move like a real arrow, i.e. using a parabola curve. However I don't want to use physics. The math I need to do seems to be beyond me; I've tried to convert a 2d solution I found online and another semi-similar solution with no luck. My current straight movement: private void Update() { var nextPos = Vector3.MoveTowards(transform.position, target, speed * Time.deltaTime); transform.LookAt(nextPos, transform.forward); transform.position = nextPos; if (Vector3.Distance(nextPos, target) < .1f) Arrived(); } An example of a 2d version I tried to replicate: Here."} {"_id": 0, "text": "Why does this raycast code give me a NullReferenceException? The error in the code is \"NullReferenceException: Object reference not set to an instance of an object\" for if (hitxx.collider.tag == \"soldier\") Please help! Thanks in advance. void Update () { Physics.Raycast (eyeenemy.transform.position, (eyeenemyfront.transform.position - eyeenemy.transform.position) * 20, out hitxx); Debug.DrawRay (eyeenemy.transform.position, (eyeenemyfront.transform.position - eyeenemy.transform.position) * 20, Color.green); if (hitxx.collider.tag == \"soldier\") { gameObject.transform.LookAt (soldier.transform.position); Debug.Log (\"chuchu\"); } }"} {"_id": 0, "text": "Why is my mouse disappearing at runtime? I'm using Unity 5.3.5f1, and every time I play my game in the editor, my mouse disappears. 
I really need to test some of my parameters in the inspector but I can't because the mouse is gone. How can I fix this, and if possible, make it permanent?"} {"_id": 1, "text": "How to correctly implement 'layered lighting' with Box2D Lights How does one only allow Box2D Lights to affect one and only one OrthographicCamera? After researching, I found the following answer. This answer goes into detail about how to prevent one camera from being lit by the RayHandler by blending layers of the FBO object. I'm unsure if this answer will produce the results I require. If it indeed does create the results I require, how can I do so more simply and elegantly? For example: 1) In this image you can see ambient light is set to this.rayHandler.setAmbientLight(new Color(0.1f, 0.1f, 0.1f, 0.25f)) 2) There is a point light on the far right with a purple colour. 3) There is a star background on a different OrthographicCamera to the other objects. When looking at this image, you notice that the stars are brighter near the planet due to the point light casting light on the background. However, I require the inverse. The stars should be brighter when light is not drowning them out. Knowing that the star background is on a different OrthographicCamera, how can I allow the texture to be completely unaffected by light, so the ambient light of the RayHandler does not dim the stars further away from the planet? The texture used for the stars contains transparency. The black is created using Gdx.gl.glClearColor(0f, 0f, 0f, 1) Preferably, I would like to be able to simply place cameras into two lists, affected by light and not affected by light. How do I produce the results I require? I can manage to get the lighting to ignore the star background by rendering it after the RayHandler has been updated and rendered; however, this obviously causes the star background to be drawn over the top of everything, like so. After achieving this, I was reminded of the OpenGL depth buffer. 
I enabled it using Gdx.gl.glEnable(GL20.GL_DEPTH_TEST). I then attempted to change the z coordinate of the SpriteBatch used in the camera for the star background by using SpriteBatch.getTransformMatrix().idt().translate(0f, 0f, 1f); however, after playing with the values of the other sprite batches I still can not get the effect I require. It is possible that the above method did indeed work, although I am aware that using the depth buffer incorrectly can cause issues with alpha blending, making it seem like my problem was not fixed because the planet became translucent."} {"_id": 1, "text": "clamp a 2D coordinate to fit within an ellipse I need to clamp a 2D coordinate to fit within an ellipse. Call of Duty Modern Warfare 2 does something similar, where capture points are translated from a 3D vector in the world to a 2D screen coordinate, and then the 2D coordinates are clamped within an ellipse. When the capture points are in view, they're within the bounds of the ellipse. When they're behind you, they are clamped to be within the bounds of the ellipse. Given a 2D coordinate that could be off screen, etc., what is the math behind clamping it within an ellipse?"} {"_id": 1, "text": "Can't import Sun openGL I've been working with JOGL and I've run into a problem. The problem is when I try to import com.sun.opengl.* in Eclipse, Eclipse doesn't recognize it. Does anyone know where this library is? Did I miss a step during installation? I'm working with the latest build. Here are the included libraries: gluegen-rt-natives-windows-i586.jar (JOGL/lib), gluegen-rt.jar (JOGL/lib), gluegen.jar (JOGL/lib), jogl-all-natives-windows-i586.jar (JOGL/lib), jogl-all.jar (JOGL/lib). This is within my JOGL project folder: gluegen-rt.dll
In OpenCL, you can do this using some implementation-defined constants, but so far I wasn't able to find something similar for OpenGL."} {"_id": 1, "text": "How do I build Assimp with MinGW? How can I build Assimp with CMake and MinGW? I tried, but I don't get a functioning library... Details of my attempt I am trying to build the Open Asset Import Library (Assimp) but I have been running into problems. The assimp documentation is really poor and expects you to know exactly what you are doing. The developers haven't been particularly helpful either. I hope someone here has successfully built assimp and can let me know where I am going wrong. I suspect that I have several problems that are contributing to my failure. I am using 64 bit Windows 8.1 Pro and MinGW version 4.8.1. The first thing I tried was downloading assimp 3.1.1 and boost 1.57. I extracted both folders and tried to use CMake to generate the makefile for MinGW. I haven't used CMake before, and the assimp instructions are \"use CMake as you normally would\", so I have no idea if I configured it right. I pointed BOOST_ROOT to the boost folder I extracted from the download, set it to build static libraries, and generated the makefile. I then tried running the makefile and got a number of errors. The first was that IFCReaderGen.cpp.obj had too many sections and was too big. After some googling, I found a workaround was to set CMAKE_BUILD_TYPE to release. That seemed to work and it finished the build, but I only got the files assimp32.dll and libassimp.dll.a, which I thought was odd because I was expecting lib/release/libassimp.a to be generated as per the details on the assimp website. Though the website might also be wrong or out of date. I linked with -lassimp.dll and that allowed me to build my program. However, it crashed upon start-up; the error that appeared immediately at start-up was \"Program has stopped working\" (there was no additional info.) 
I guessed that this was a dll problem, which was odd because I had (tried) to build the static libraries through CMake. I copied assimp32.dll into my executable folder. This time, the program wouldn't crash but the screen would be blank. I'm guessing that there was something wrong with the library I built that was causing it to link incorrectly. At this point, I deleted everything to try a fresh start. I tried to follow this article. I downloaded assimp 3.1.1 and boost 1.57 and extracted them. I opened cmd, changed to the boost root directory and ran bootstrap.bat mingw. I then ran b2 --build-dir=\"C:/Libraries/boost/\" variant=release link=static address-model=32 toolset=gcc The result of this was 598 targets updated, 3 targets skipped, 2 targets failed. I now have a folder C:/Libraries/boost/boost/bin.v2 with two folders, libs and standalone, but I'm not certain what my BOOST_ROOT directory is anymore. I opened CMake, selected the assimp folder I had extracted, and configured the following: BOOST_ROOT=\"C:/Libraries/boost/boost/\" ASSIMP_BUILD_STATIC_LIB=TRUE ASSIMP_BUILD_TESTS=FALSE ASSIMP_ENABLE_BOOST_WORKAROUND=FALSE BUILD_SHARED_LIBS=FALSE I then pressed configure and got the error that the boost libraries were not found. I'm only guessing what these CMake settings do, as I can't find any documentation for assimp. I'd like to build some version of assimp that I can then link to and use in a simple test program. At some point I will go back and build the shared libraries, but first I just want to get something working and understand how to do it again. Can someone see what has gone wrong?"} {"_id": 1, "text": "OpenGL GL_LINE_STRIP produces arcs with visible line edges Another member mentioned an issue while drawing arcs with GL_LINE_STRIP. He has GL_LINE_SMOOTH enabled. The question is, how could he avoid the tiny gaps between vertices without increasing their number? Edit: The original discussion was on SO. 
Click here for more info."} {"_id": 1, "text": "How to create OpenGL models like the ones created by blender's wireframe modifier? How could I programmatically create wireframe models like this out of a triangular mesh? What would be the algorithm behind it? (Source) Creating an additional mesh using lines instead of triangles, or just using a single mesh with the typical geometry shader using barycentric coordinates, are the most straightforward approaches. But they're far away from being as cool as the results shown in the above link. So I was wondering how difficult it would be to create a new mesh like the one shown in the above link; would it be possible to achieve by just having a simple triangle soup?"} {"_id": 1, "text": "Creating a frustum for culling in world space glm matrices I need to do frustum culling where the bounding boxes are in world space, to determine which entities get to be updated/drawn. I was trying to use the classic projection-view matrix plane extraction method, but it doesn't seem to work with perspective matrices created by GLM. Is this method appropriate for world space culling? It seems like it would be (it takes the eye position into account, and the projection matrix shapes the frustum). I've only looked at the near/far plane extraction so far and they're wrong for a frustum sitting at the origin (both have a c component that's negative, which means near and far are facing the same direction). Also, since the d components can't match the near/far clipping values with this method, is it wrong for world space culling?"} {"_id": 1, "text": "3D Camera manager tool is there any? Are there some libraries or example code of a camera manager tool that would move the camera using some smooth lines (B-splines or similar)? So basically I request an animation to move to a new eye/center position, and the tool takes care of the rest?"} {"_id": 1, "text": "Different shader programs but same buffer drawing I've set up 3 different instances of the same shader code (e.g. 
I've compiled the code into 3 separate programs). For each program, I've gone through the whole route of something like this: Setup: gl.useProgram(program); const buffer = gl.createBuffer(); const location = gl.getAttribLocation(program, aName); Upload (each uses the buffer/location it created in Setup): gl.useProgram(program); gl.bindBuffer(gl.ARRAY_BUFFER, buffer); gl.bufferData(gl.ARRAY_BUFFER, data, usagePattern); gl.vertexAttribPointer(location, nComponents, dataType, false, 0, 0); gl.enableVertexAttribArray(location); Draw: gl.useProgram(program); gl.vertexAttribPointer(location, 2, gl.FLOAT, false, 0, 0); gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4); However, no matter what I pass into data, e.g. different sized quads, all three are the same size. Notably, changes to other uniforms/attributes like matrices and color do change per object. How do I change this so that the changes to data per instance will be drawn properly? (I realize if they actually use the same shader I should maybe keep a reference to one program and use some sort of buffer pool, but I'm not there yet, just getting started with WebGL)"} {"_id": 2, "text": "How can I increase framerate, when drawing tiles to an HTML canvas? I am using the HTML 5 canvas to make a simple platformer game. I am currently drawing the tiles using a for loop that runs through a list of tiles and checks if they will be drawn to the screen. 
for (var i = 0; i < tiles.length; i++) { if (tiles[i].x * scale > pos.x - (canvas.width / 2) && tiles[i].x < (pos.x * scale) + (canvas.width)) { canvas.drawImage(tiles[i].x - pos.x, tiles[i].y - pos.y, scale, scale, \"img/Tiles/\" + tiles[i].Tile + \".png\"); } } I am also using this custom library to draw the image to the screen: this.drawImage = function(x, y, width, height, src, alpha) { if (alpha) ctx.globalAlpha = alpha; if (document.getElementById(src) == undefined) document.head.innerHTML += '<img width=\"16px\" height=\"16px\" src=' + src + ' id=' + src + '></img>'; var img = document.getElementById(src); ctx.drawImage(img, x, y, width, height); ctx.globalAlpha = 1.0; document.getElementById(src).outerHTML = \"\"; } When I run the for loop, the frame rate of the game seems to drop. Is there an alternate option to going through every element of the array that I'm not aware of?"} {"_id": 2, "text": "UV Texture coordinates out of 0,1 (WebGL) I'm creating a 3D game with WebGL and I'm using Wavefront Objects as a base model format. I recently found some models, the texture coordinates of which are out of the typical 0,1 range, and I really don't know how to handle them (in code). I know of course this is a known issue; texture coordinates can be outside of 0,1 and the solution is to use the GL_REPEAT wrap mode. But this is no solution for me, since I have a lot of non-power-of-2 textures and I need to use the CLAMP_TO_EDGE wrap mode (the solution to non-pow-2 textures). I tried to bypass this issue by coding, so as to convert the out-of-range coordinates into 0,1 , but with not much success (although somewhat). 
Here's what I did: I looped the \"vt\" entries (from my parser) and the cases I'm checking are these (pseudo), with U,V texture coordinates as they come from the file: if (U < 0) U = U + Math.abs(Math.floor(U)) if (U > 1) U = U - Math.floor(U) if (V > 0) V = Math.ceil(V) - V if (V < 0) V = Math.abs(Math.floor(V) - V) The above fragment will take a U,V of unknown range and will convert it into 0,1 , also taking care of the WebGL texture coordinate system (0,0 bottom left for WebGL but 0,1 for Wavefront; the 3rd case above). The numbers are calculated correctly, but the result is not the expected one, and I'm afraid there is a misunderstanding on my side of how the GL_REPEAT mode works. So my question(s): Can this be solved by code, so I can use CLAMP_TO_EDGE? In case there's no way, is there any program that can take a model with REPEAT'ed coordinates and produce one in the 0,1 range? I'm already using Blender but could not find a setting for this in the exporter."} {"_id": 2, "text": "How can I draw lines and curves on a canvas that's not rendered? I need to draw lines and curves on a canvas that is not rendered on the screen but just updates the array of pixels that would otherwise be rendered. I need this to separate my game engine from my renderer. This has proven rather difficult for my case because I don't use sprites that are loaded, but I draw lines and curves on a canvas. I have used the p5.js library to accomplish my game, but p5 doesn't let you work without a \"setup()\" and \"draw()\" function. I don't want any draw loop or something because that defeats the purpose of separating my game engine from my rendering. I want to use p5 to render after I have all my game logic separate, so I can just call nextFrame() or something from my game logic inside draw() at a constant 60 fps. In p5 it is possible to use createGraphics(), which works outside of the canvas and doesn't have to render, but again, p5 needs a draw() loop to work. 
Does anyone know how I can accomplish this?"} {"_id": 2, "text": "Separating .js elements .js element communication So I have 2 questions regarding the code below: function goldClick(number) { gold = gold + number; document.getElementById(\"gold\").innerHTML = gold; } function buyMiner() { var minerCostG = Math.floor(100 * Math.pow(2.1, miner)); var minerCostW = Math.floor(10 * Math.pow(3.1, miner)); if (gold > minerCostG) { if (wood > minerCostW) { miner = miner + 1; gold = gold - minerCostG; wood = wood - minerCostW; document.getElementById('miner').innerHTML = miner; document.getElementById('gold').innerHTML = gold; document.getElementById('wood').innerHTML = wood; var nextCostMG = Math.floor(100 * Math.pow(2.1, miner)); document.getElementById('minerCostG').innerHTML = nextCostMG; var nextCostMW = Math.floor(10 * Math.pow(3.1, miner)); document.getElementById('minerCostW').innerHTML = nextCostMW; } } } window.setInterval(function() { goldClick(miner) }, 1000) Question 1: Is it possible to separate these functions (goldClick, buyMiner, window interval) into their own documents, so that if I have, say, 50 different XXXXclick functions, 50 buyXXXX functions, and 50 interval functions, I have just 3 .js documents instead of 50? Question 2: If it is possible to separate them, how would I ensure that they are communicating with each other and variable values are not being confused or sent to the wrong <span> in the displaying HTML?"} {"_id": 2, "text": "Tile Map Coordinates I now have this code: http://jsfiddle.net/DK67k/2/ It is a 2D tile map, and when you click on a tile you get its coordinates in an alert. But to get precise coordinates you need to click on the top left of the tile (tiles are 16x16), and if I click on the bottom right of a tile I get the second tile's coordinates. Does anyone have an idea how to fix this?"} {"_id": 2, "text": "How to access files in Chrome So, I am making a game-like HTML file that needs to load a few assets. I have a loading animation which uses the timing of the loading files to create a progress bar. This all goes fine. 
The files all load, and the app launches. However, none of the files can access each other, as they are all in different script tags. And I cannot merge them, because the JS is never actually put into the HTML. So what I need to do is access the local file system (it uses the file:// protocol) and read the file, then stick the text into a script tag. However, all of the examples I have found on the web throw this error: XMLHttpRequests can only access (list of protocols not including file) What I need is a JS way to access the files with no input from the user (other than opening the application). How can this be done?"} {"_id": 2, "text": "Scrolling Box2D DebugDraw I'm developing a game using Box2D (the javascript implementation Box2DWeb), and I would like to know how I can pan the debug draw. I know the usual answer is \"don't use debug draw, it's just for debugging\". I'm not; however, not all my objects are on the same screen, and I'd like to see where they are in the physics representation. How can I pan the debug drawing? As you can see, the debug draw stuff is shown on the top left, but it only shows a small part of the world. Here is an example of what I mean: http://onedayitwillmake.com/ChuClone The game is open source, if you'd like to poke through and note whether I'm doing something that is obviously wrong: https://github.com/onedayitwillmake/ChuClone Here's my hacky way that I'm using now to scroll the b2DebugDraw view, in which I added the properties offsetX and offsetY into b2DebugDraw"} {"_id": 2, "text": "How to fix this gameover scene audio transition? I created a gameover scene in a crafty.js project but it does not run. Crafty.scene('gameover', function() { Crafty.background(\"#ccc\"); Crafty.audio.stop(\"bg\"); }) var gameOver = function () { Crafty.scene('gameOver') } The background music (\"bg\") doesn't stop playing once the game ends. 
How do I fix this?"} {"_id": 2, "text": "I need a map generation algorithm for tower defense I'm making a simple JavaScript tower defense game. I've experimented with some map generation myself, but I can't seem to get one that works properly. What I want is to provide a start point (and possibly an end point) and have an algorithm generate a random path between them in a tower-defense-like style. Examples"} {"_id": 2, "text": "Phaser How to call a function inside another function? So I actually have the following code that works: var player; var box_tnt; function create () { this.physics.add.collider(player, box_tnt, hitTnt, null, this); } // the function hitTnt stops the game because the player died function hitTnt (player, boxes) { this.physics.pause(); console.log('Game Over!'); textGameOver.setText('GAME OVER'); player.setTint(0xff0000); player.anims.play('default'); gameOver = true; } and I want to do something like: var player; var box_tnt; function create () { this.physics.add.collider(player, box_tnt, hitTnt, null, this); } // the function hitTnt stops the game because the player died function hitTnt (player, boxes) { gameOver(); // other stuff here } function gameOver () { this.physics.pause(); console.log('Game Over!'); textGameOver.setText('GAME OVER'); player.setTint(0xff0000); player.anims.play('default'); gameOver = true; } but I get the following error message: TypeError: gameOver is not a function Please, do you have any ideas how to do it properly?"} {"_id": 3, "text": "MVC How To Handle Animations? I am working on a turn-based game that utilizes the model, view, controller design pattern to separate logic from input from rendering. I am still a little new to the pattern, but from my understanding I have laid out the following MVC system. The model(s) comprise a set of pawns with game flow and interaction logic. Pawn models know how to interact with each other and the model world, but only when signaled to do so via function calls. (i.e. 
pawn.move(startLocation, endLocation) or pawn.attack(enemyPawn)) The view(s) make up everything that is viewable by the player. This includes menus, visual representations of the pawn models, and any user interface components like cursors and buttons. The view(s) query the model for data to draw visual representations of pawns. The view(s) can also dispatch user interface events to any observers interested in these events. The controller(s) handle all transition and input logic. When interesting user interface or input events occur, the controller decides what the model should do and signals the model to do it. The controller also handles swapping out of views when necessary. I know there are many ways to implement MVC and I am open to suggestions if something seems off with this design, but the real problem I am having is how to handle animations in the view. Say the user commands a pawn to move 5 spaces North and 1 space West. I want an animation to show this over several seconds. How would I handle something like that?"} {"_id": 3, "text": "Skeletal Animation No initial bone rotation I am currently trying to implement animations in my game. I am having an issue where the bones do not rotate properly. In my program, I apply an inverse bind pose to each bone to move them to bone space. Then I recursively calculate the transformations for each bone for the keyframe. The bone transformation looks like this: bone.Transform = InverseRootGlobal * Global * bone.InversePose where InverseRootGlobal is the inverted root node transformation, Global is the recursively calculated transformation, and bone.InversePose is the inverse pose matrix. It appears that all is correct apart from the bones' initial rotation. When I move the bones to bone space, they lose their rotation from the bind pose and 'point up'. Here's what it looks like. If it helps, I am using Assimp to import the data. 
Thanks in advance."} {"_id": 3, "text": "Objective-C Cocos2d moving a sprite I hope someone knows how to do the following with cocos2d. I want a sprite to move, but not in a single line, by using [cocosGuy runAction:[CCMoveTo actionWithDuration:1 position:location]] What I want is the sprite to do some kind of movements that I preestablish. For example, at some point I want the sprite to move, for instance, up and then down, but in a curve. Do I have to do this with flash like this document says? http://www.cocos2d-iphone.org/wiki/doku.php/prog_guide:animation Does \"animation\" in this page mean moving sprites or what? thanks"} {"_id": 3, "text": "Stop current animation and re play immediately? When the player hits something, I'm playing a hit animation from the animation player. However, when the player hits something while the hit animation is playing, the new hit is just ignored and the previous hit animation keeps playing. I need to stop the animation, but couldn't find any related information. How do I stop the animation and play it again immediately?"} {"_id": 3, "text": "3ds Max Rigs to Cinema 4D I exported the rig from 3ds Max as an FBX file and opened it with Cinema 4D. Everything works great, but the model has a different look to it. Also the mask on her head has a different texture, but it was imported with only 1 object. Just wondering if it can be fixed; if not, it's ok."} {"_id": 3, "text": "Efficient sprite animation display methods? I'm making a game sprite, with animations for transitioning to different states, such as idle, sleeping, etc. I'm wondering what the most efficient way to play the animations is. As of now, I have a couple of ideas: store all frames as separate PNG images, and display only the frames for the current state (e.g. play frames 1-5 for idle, 5-10 for transitioning, 10-15 for sleeping); make the animations into GIFs and play the one corresponding to the current state; merge them all into a spritesheet. 
Please let me know which would be the best approach, or if there's a better way to do things. I'm new to handling images and animation with code, so I'm not sure what the advantages and disadvantages are. Thanks! In case the information helps anyone, I'm planning to use Python for this (although I could also do Java or C, if there are good solutions in those languages), and the sprite image size isn't large; it's pixel art."} {"_id": 3, "text": "Guidelines when rigging a character Just wondering how many IK (Inverse Kinematics) bones you should apply to a game character before it starts to become a resource hog. Basically, should you try to get rid of IK bones and animate without them? Using IK bones makes things easier for the animator, but might cause unnecessary use of resources and possibly a bottleneck when calculating bone and mesh transformations. I'm using Blender, and a couple of animation tutorials suggest using IK controllers, and they indeed help a lot, but when importing the character into a game they might be plain pointless, since they are only used to help the animator."} {"_id": 3, "text": "What are the pros and cons of these voxel data file formats? .VXL .VOX .KVX .KV6 .V3A .V3B I am trying to decide whether it's worth going with any of the above, or some other, or if I should roll my own. The deciding factors, in order of importance, are: animation support (I am aware this is a difficult aspect when it comes to voxels), using voxel deltas or numerical transform descriptions; simplicity (or at least a concise format); compression. From what I can tell, the Tiberian Sun VXL format is the only one designed for animation, but Ken Silverman of Voxelstein3D fame claimed that VXL does not support animation (\"in a single file\", were his words). So I wonder if maybe there are two different .VXL formats, since it seems an obvious choice of file extension for voxel data... one could be from a medical imaging context. 
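A note on the sprite-animation question above (separate PNGs vs GIFs vs a spritesheet): GIFs are poorly supported by most game frameworks, so a single spritesheet with per-state frame ranges is the usual choice. A minimal Python sketch of that bookkeeping — the names `Animator` and `FRAME_RANGES` are illustrative, not from any particular library:

```python
# Map each animation state to a half-open frame range in one spritesheet.
FRAME_RANGES = {
    "idle": (0, 5),        # frames 0..4
    "transition": (5, 10),
    "sleeping": (10, 15),
}

class Animator:
    def __init__(self, fps=10):
        self.fps = fps          # playback speed in frames per second
        self.state = "idle"
        self.elapsed = 0.0      # seconds spent in the current state

    def set_state(self, state):
        if state != self.state:
            self.state = state
            self.elapsed = 0.0  # restart the new clip from its first frame

    def update(self, dt):
        self.elapsed += dt

    def current_frame(self):
        start, end = FRAME_RANGES[self.state]
        length = end - start
        # advance at self.fps frames per second, looping within the range
        return start + int(self.elapsed * self.fps) % length
```

Rendering then just blits the sub-rectangle for `current_frame()` out of the one sheet, which keeps a single texture bound instead of juggling many small files.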
I do need someone with solid experience of voxel formats to come and comment on the practical pros and cons, from their experience. Consider this question the same way you might compare JPG to PNG to GIF."} {"_id": 3, "text": "Animation with non uniform frame sizes How do you know when to draw the next frame if its bounding box is larger or smaller than the current one? For example, in KOF, some characters have elastic arms and things of the like. How are these handled in a game?"} {"_id": 3, "text": "Calculate smooth transition between local space angles I'm trying to calculate the direction of a moving character (to rotate its animation). I have something like this: the character can have 4 cardinal directions. The rotation is calculated from the current cardinal direction. So, for example, for north we have 90 degrees from N to W and 90 degrees from N to E. From E to N we have 90 degrees, and from E to S we have 90 degrees. The problem occurs when the character is switching directions. It happens at about half the distance between cardinal directions. At this point the current rotation switches from a negative to a positive number (or the other way around), and there is a noticeable hitch. When going from E to N, for example, it switches to the N direction while it was at about 45 degrees, which in N space places it for a frame or two between the N and W directions. Why do I calculate angles in cardinal direction space? Animations are prepared for strafing or moving backward (or any other angle), and I need to assume that the direction of the animation is 0 degrees and calculate from there. Is there a way to smoothly change the angle between cardinal directions? For now I have tried lerping between them when switching (which improved the overall look but is far from a smooth transition), and tried to track from which direction the switch is coming, but it didn't really improve anything. 
FVector Right = Character->GetActorRightVector();
FVector Forward = Character->GetActorForwardVector();
FTransform Transform = Character->GetTransform();
FVector CurrentAcceleration = CMC->GetCurrentAcceleration();
FVector CurrentVelocity = CMC->Velocity;
FVector AccelerationDirection = CurrentAcceleration.GetSafeNormal2D();
FVector VelocityDirection = CurrentVelocity.GetSafeNormal2D();
FVector LocalAcceleration = Transform.InverseTransformVectorNoScale(AccelerationDirection);
FVector LocalVelocity = Transform.InverseTransformVectorNoScale(VelocityDirection);
float DeltaTime = Output.AnimInstanceProxy->GetDeltaSeconds();
float Atan2Angle = FMath::Atan2(LocalVelocity.Y, LocalVelocity.X);
const int32 ChildIndex = FMath::RoundToInt((Atan2Angle / (2 * PI)) * 4) % 4;
float VelAngle = FMath::RadiansToDegrees(VelQuat.GetAngle());
float OldOrient = CurrentOrient;
EFCardinalDirection Dir = static_cast<EFCardinalDirection>(ChildIndex);
EFCardinalDirection OldDir = static_cast<EFCardinalDirection>(OldDirection);
float DirDot = 0;
// main switch within which I determine from where to calculate the angle
switch (Dir)
{
case EFCardinalDirection::N:
{
    FQuat ForwardQuat = FQuat::FindBetween(Forward, CurrentVelocity);
    float OrientN = FRotator(ForwardQuat).Yaw;
    CurrentOrient = FMath::FInterpConstantTo(OldOrient, OrientN, DeltaTime, 300.0f);
    break;
}
case EFCardinalDirection::E:
{
    FQuat LeftQuat = FQuat::FindBetween(Right, CurrentVelocity);
    float OrientE = FRotator(LeftQuat).Yaw;
    CurrentOrient = FMath::FInterpConstantTo(OldOrient, OrientE, DeltaTime, 300.0f);
    break;
}
case EFCardinalDirection::S:
{
    FQuat BackQuat = FQuat::FindBetween(Forward * (-1), CurrentVelocity);
    float OrientS = FRotator(BackQuat).Yaw;
    CurrentOrient = OrientS;
    break;
}
case EFCardinalDirection::W:
{
    FQuat RightQuat = FQuat::FindBetween(Right * (-1), CurrentVelocity);
    float OrientW = FRotator(RightQuat).Yaw;
    CurrentOrient = FMath::FInterpConstantTo(OldOrient, OrientW, DeltaTime, 300.0f);
    break;
}
default:
    break;
}"} {"_id": 4, "text": "Optimizing the economy of a resource management game I am an amateur game designer. 
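For the cardinal-direction hitch described in the question above, a common fix is to keep a single wrap-aware yaw and always step it along the shortest arc, so the value never jumps at the -180/+180 seam regardless of which cardinal space it was measured in. A Python sketch of the idea (constant-rate stepping in the spirit of FInterpConstantTo; the 300 deg/s rate is taken from the snippet, everything else is illustrative):

```python
def wrap_degrees(a):
    """Map any angle to the half-open range [-180, 180)."""
    return (a + 180.0) % 360.0 - 180.0

def interp_angle_constant(current, target, dt, rate_deg_per_sec=300.0):
    """Step `current` toward `target` along the shortest arc,
    moving at most rate*dt degrees per call. Because the delta is
    wrapped first, crossing the -180/+180 seam produces no jump."""
    delta = wrap_degrees(target - current)
    step = rate_deg_per_sec * dt
    if abs(delta) <= step:
        return wrap_degrees(target)
    return wrap_degrees(current + (step if delta > 0 else -step))
```

Interpolating one wrapped angle like this, instead of separate per-cardinal yaw values, removes the negative-to-positive flip that causes the visible hitch.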
I am designing a resource management game involving real money. This is my first time designing such a game. I explain the design and mechanics below. I need feedback from experienced game designers regarding i) the sustainability of the core economic loop of this game, and ii) concrete suggestions to de-risk or fine-tune it (if you can think of any). My game is about digging up treasure. The treasure is real: players can cash out. The objective of the game is to make as much money as possible. Players utilize digging machines (excavators) to unearth treasure. The attributes of these digging machines are: Digging prowess: most machines are mediocre diggers, i.e. they unearth small treasure. A few rare machines can dig up really big treasure. And then there are other types in between these two extremes. Scarcity: only a finite number of machines are available in this game world. Transparency: machines' visual design varies in accordance with their digging prowess, i.e. players can easily spot which machine is better than the rest. Main mechanics: When a new player joins this game, they utilize real money to buy one or multiple such machines. Once bought, each machine can either be stored away (unused), or put to work. Note that when a machine accumulates 1 hour of digging time, it unearths treasure. The amount unearthed is always proportional to the machine's digging prowess. Putting machines to work has an upside and a downside. Upside: machines that are put to work unearth treasure (which players can actually claim). Downside: working machines can be forcefully bought by another player (for the machine's current value + 5% profit). No permission is needed. This permanently increases the value of the machine. Which means, if you want to snatch it back, you pay a further 5% increment on top of everything. Storing machines away has an upside and a downside. Upside: nobody can forcefully buy them from the player. Downside: they don't help the player earn anything. How do we finance the treasure finds? 
There is a pool of money in the back end that finances each and every treasure find. We finance this pool in two ways. The first time a digging machine is bought, 90% of the proceeds are routed to this pool (10% are pocketed by the game developer). I earlier mentioned that whenever a machine is forcefully bought, a 5% profit is paid by the buyer to the unwitting seller. We route 10% of that profit to the treasure pool, 10% to the game developer, and 80% to the unwitting seller of the machine. It would be great to get feedback from experienced game designers regarding the economic viability of the game's core loop. How do we make it sustainable? What are the economic risks? How can they be quantified? Are there any risk minimization tactics we can bake in?"} {"_id": 4, "text": "Is it correct to calculate values in percentage using the total number of iterations? Trying to clarify the problem: imagine a factory that produces soccer balls. If you produce 1 ball, it will take a certain number of game cycles. For the sake of simplicity, let's say it could take between 5 and 15 cycles (depending on various conditions, like the number of workers, quality of workers and so on). Once you make a certain amount of soccer balls, the process is considered done. Now, I want to display a value in percentage for costs, and a value for defective balls made. With \"cost\", I am trying to display how much I spent (materials, workers' wages and so on) at the moment. The number in currency is self-explanatory, but I believe a percentage value could help to know if you are using too many workers, or burning too many resources, resulting in a waste of money. Calculating the percentage of defective balls is relatively simple, since every X amount of time, when I refresh, I can check the balls produced against the defective ones, and the result follows the proportion: total produced : 100 = defective : percentage of defective. 
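On the percentage question above: a percentage is just part/whole x 100, and both numbers must share a unit for the result to mean anything. The defective rate divides balls by balls, which is fine; dividing money by cycle counts mixes units, so a cost percentage is better computed against a money baseline such as a budgeted or expected cost. A tiny sketch (the budget baseline and example numbers are assumptions, not from the question):

```python
def percentage(part, whole):
    """part and whole must be in the same unit for the result to mean anything."""
    if whole == 0:
        raise ValueError("whole must be non-zero")
    return part / whole * 100.0

# defective rate: both numbers are counts of balls
defective_pct = percentage(5, 200)       # -> 2.5 (%)

# cost rate: compare money against money (actual vs budgeted cost),
# not money against cycle counts, which mixes units
cost_pct = percentage(90.0, 120.0)       # -> 75.0 (%)
```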
But for the costs, things are different, so I was planning to use the number of cycles to calculate the percentage for costs, as in: cycles : 100 = current costs : percentage of costs. Would this be considered correct? Since the number of cycles for a whole batch of balls may vary slightly, this may represent that when there are fewer cycles, you spent less money, hence the percentage will be smaller."} {"_id": 4, "text": "How to compare different states of my game? (RPG) I'm working on an RPG, specifically the battle system. I want to compare different scenarios with each other. For example, what if I play move A then move B? Or two times move A? Or...? I'm using a tree structure to generate all possible scenarios that can happen during the battle. There is no movement involved; I'm using a battle system as in FF7, where only skill choice matters, with 4vs4 units. The trick is, the battle is turn based but not static. There is an Action Point bar for each unit (filling at different speeds based on the unit's stats), and the unit with the most AP (only if at 100) can play. If no unit is at 100, a \"turn\" will happen and all units will gain some AP based on their speed. If no unit can play, a new turn will happen... until a unit is at 100 AP. Each skill takes a different amount of AP to use. At turn 0, for example, we may have a unit with 140 AP that will play two times in a row, for 2 actions of 20 AP each. Then other units at 100 AP may play. It means that when I'm running a simulation of all possible outcomes of a battle for my AI, I will have states (nodes in the tree) with a number of turns very different from other nodes at the same depth. I don't know how to compare these nodes at all. I was using HP at first, but it is not enough. To prove it, consider this example: a 1 vs 1 match. Unit A (the opponent of the AI) has only 1 move, a small attack that requires 20 AP. 
Unit B (the AI) has two moves: a small attack that takes 40 AP (and overall hits for very low damage compared to the opponent's attack), and a second move that does nothing but consume 20 AP, and that move can only be used during the 1st turn. I will then have to compare two states. State 1: AI uses its attack. Opponent uses his attack. Units are at 60/80 AP respectively. 20 game turns pass and they are now at 80/100 AP. Opponent uses his attack. 20 game turns pass and they are at 100/100 again and we are at turn 40. State 2: AI uses its move that does nothing. Opponent uses his attack. Units are at 80/80 AP. 20 game turns pass, they are now at 100/100 and we are at turn 20. If I compare only HP, in state 1 the AI will have an overall score lower than in state 2, since it got attacked twice in state 1 and only once in state 2. If I only compare the HP difference between the AI & its opponent, I would choose state 2, since the AI lost less HP than in state 1. Running the simulation after states 1 and 2 will continue to show the same difference, since after that point both units can only use one attack. In the end, it makes my AI choose a stupid move because she thinks she is less likely to lose by selecting it. Granted, both moves will eventually result in the defeat of the AI since its main attack is weak, but I'd like my AI to nonetheless select its attack skill as its first move, and not a skill that does nothing. I've tried weighting the overall gradient of score between the root state and the state I'm simulating by the number of turns, but it does not work in all cases (it greatly depends on the number of turns and the difference from the root). I was thinking of adding the AP ratio to my evaluation of states, but in this example we can see that in states 1 & 2 the units are at the same AP anyway, so it doesn't provide extra information. 
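One way to make states reached after different turn counts comparable is to score damage rates rather than totals, i.e. divide the HP swing by the turns elapsed. A sketch of that normalization (function name and sign convention are illustrative):

```python
def state_score(ai_hp_lost, foe_hp_lost, turns_elapsed):
    """Compare damage *rates*, not totals, so states reached after
    different numbers of turns are comparable. Higher is better for the AI."""
    turns = max(turns_elapsed, 1)  # guard against division by zero at the root
    return (foe_hp_lost - ai_hp_lost) / turns
```

With d = the opponent's damage per hit and w = the AI's weak hit, State 1 scores (w - 2d)/40 while State 2 scores (0 - d)/20 = -2d/40, so State 1 ranks higher whenever w > 0 and the do-nothing move no longer wins the comparison.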
Anyone got ideas on how to compare my states from different turns?"} {"_id": 4, "text": "Player experience using a server in a single player adventure to prevent cheating I'm making a game where the user can sell items he finds during the \"single player\" adventure online. I was looking at ways to prevent any malicious user from creating rare and expensive items from scratch to sell them on the net. I tried obfuscation, but sooner or later someone will successfully break into my code, and I can't afford that, for obvious reasons. Items are stored in files, so I thought about making the client add a digital signature to generated files, but I have to keep the key on the client machine, so same problem: if someone finds his key, he can apply the digital signature algorithm to generate a valid signature for any file he creates. So I ended up thinking the only secure option was to make the single player adventure interact with the server (everyone around here tends to say this is the way to go, like this subject; in my opinion the accepted answer explains it well). But what about the user-friendly aspect? My game is divided in two parts. First, the \"real\" single player adventure, where you can cheat all you want because everything is stored on your PC and the server won't ever use it ( 24h); then you beat the final boss and you're notified you're able to go online to battle against other players with the stuff you acquired on the single player map. At this point of the adventure, my idea was to record everything the player does in single player mode onto the server, so whenever he plays a tournament, his stuff is verified. I can't verify stuff just before a tournament begins because overpowered items can be obtained but are very rare; I want to systematically know whether it was hard-earned or cheated. 
Do you think it would be acceptable for a player to play \"single player mode using the internet\" if he was told this will allow him later to rank up online? The single player map will be kind of mixed with the multiplayer area, but maybe I can think about a feature to play totally offline, but then you won't be able to use your \"online items\" during this time. EDIT: My app might output a txt file (representing an item) with its corresponding digital signature, so I just need a way to be sure that it's my app (and not a modified copy) that created a given file. That's why I'm reading about trusted computing and memory curtaining right now, but maybe I should head to security.stackexchange?"} {"_id": 4, "text": "Creating levels on iPhone I am thinking about a new game idea for the iPhone, and presumably all the levels would have to be made programmatically. So I'm wondering what the best method to approach it is. Say I have a game with a standard, typical 2d, top down maze. This maze is simply made up of walls, multiple rectangles on screen, making paths. Do I have to manually work out the position of every rectangle and put it in with an x and y position, or is there a better way? I see this taking forever; there must be certain ways developers approach these situations to get through them efficiently and quickly, surely? Thanks."} {"_id": 4, "text": "How to design a simple turn based combat system with multiple difficulties? I am developing a small, turn based game with spaceships. The user controlled spaceship has two modes, let's call them offense and defense mode. The user can freely switch between modes. There are three types of enemy ships. Whenever the user's ship meets one or more enemy ships, a fight calculation should take place, which can have exactly two outcomes: either the user's ship wins (and does not carry away any damage), or the game is over. Encounters are kinda, but not completely, random, so one can expect a certain enemy and change the mode accordingly. For reference, multiple enemy encounters at once might happen, and as a rough idea, the userShip usually is expected to lose if it has to fight 2-5 enemies. Like 2 are too much if the active fight mode is disadvantageous, whereas the userShip can win against up to four ships if the mode is advantageous. For encounters with multiple enemy types, the numbers shall be somewhere in between. My requirements for the combat system are: completely deterministic; one mode must not be clearly superior to the other mode. 
Ideally, one mode is advantageous against a certain enemy type, while the other mode is advantageous against another enemy type (not sure what to do about the third enemy type though); must be somehow scalable, i.e. there should be three difficulty settings which shall be taken into account; ships have three attributes: Attack, Defense, Speed (however, this isn't set in stone and could be changed at will). My best idea so far: assign each ship type base stats (Attack/Defense/Speed, floating point numbers): userShip (10/10/10), enemy 1 (5/10/15), enemy 2 (15/5/10), enemy 3 (10/15/5); create a modifier table that multiplies the userShip's base stats by a multiplier depending on a) the userShip's mode and b) the enemyShip's type; have the combat system compare each of the three stats (if multiple enemies are involved, their combined strength), and depending on the difficulty setting, the userShip wins if the difference in at least 2 of the 3 attributes is smaller than some threshold. That way, I would have the scalability. However... I was unable to design the modifier table in a way that the modifiers actually make a noticeable difference AND the user has an incentive to switch between modes when he expects a certain enemy in the next turn. The biggest problem is the binary nature of the combat outcome. It makes the floating point multipliers negligible most of the time. Also, I feel like I have way too many parameters (fight mode, attack, defense, speed, difficulty setting, modifier) for such a simple calculation, but I cannot really remove any other than the modifier, and they all should be at least a little bit relevant to the combat outcome. Also, I'm not sure how to deal with the third ship type, as there are only two modes. If the mode does not matter at all, that's poor design I think. I could add a third mode for the userShip, however that would introduce even more parameters for the combat. Unsure about that. 
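One deterministic shape that keeps the mode choice meaningful is to apply the mode-vs-type multiplier to the player's stats before comparing them against the summed enemy stats, and to fold difficulty in as an additive margin rather than another multiplier. All numbers below are placeholders to tune, not a balanced design:

```python
BASE = {"player": (10, 10, 10), "e1": (5, 10, 15),
        "e2": (15, 5, 10), "e3": (10, 15, 5)}

# mode -> enemy type -> multiplier applied to the player's stats
MODIFIER = {
    "offense": {"e1": 1.5, "e2": 0.7, "e3": 1.0},
    "defense": {"e1": 0.7, "e2": 1.5, "e3": 1.0},
}

# difficulty shifts the bar the player's stats must clear
DIFFICULTY_MARGIN = {"easy": -5.0, "normal": 0.0, "hard": 5.0}

def player_wins(mode, enemies, difficulty="normal"):
    """Deterministic resolution: scale the player's stats by the averaged
    mode-vs-type modifier, compare against the summed enemy stats per
    attribute, and win the fight by winning 2 of the 3 comparisons."""
    margin = DIFFICULTY_MARGIN[difficulty]
    mod = sum(MODIFIER[mode][e] for e in enemies) / len(enemies)
    wins = 0
    for i in range(3):
        enemy_total = sum(BASE[e][i] for e in enemies)
        if BASE["player"][i] * mod > enemy_total + margin:
            wins += 1
    return wins >= 2
```

With these placeholder numbers, offense beats a lone e1 but loses to a lone e2 on normal difficulty (and wins on easy), so switching modes and the difficulty setting both visibly matter despite the binary outcome.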
Any ideas how to design a combat system under these conditions?"} {"_id": 4, "text": "How would you define this mechanic in narrative based games? I'm researching a certain game mechanic that seems very commonplace in modern narrative based games. The mechanic is defined (by me) as a user action that the player can take while in a dialogue or a cutscene, which affects the course of the narrative (and sometimes has a long lasting impact on game progression in general). The interaction can be of several types: press a button, click a character, hit a sequence of keys, button mashing. Here's an example from Game of Thrones: https://www.youtube.com/watch?v=ykq213pPD6Q&feature=youtu.be&t=181 In Mass Effect it is referred to as an interrupt: https://www.youtube.com/watch?v=9TgTLJXoD o Here's one from King's Quest: https://www.youtube.com/watch?v=KhOkqn6qE2g I'm looking for more info on this type of mechanic. Specifically: Is there an accepted name for it? What other games have this mechanic, and use it in an interesting way? (I would appreciate links to videos demonstrating it.) What types of interactions (such as the ones I've described) have you seen implemented in this mechanic? Thanks :) (BTW if you think this game design related question is more suitable for StackExchange's Arqade, do let me know and I'll ask to move it there.)"} {"_id": 4, "text": "What are the other three \"game dynamics\"? Seth Priebatsch recently gave a TED talk entitled Building the game layer on top of the world. In it, Seth described four game dynamics, techniques used by game designers to make games fun and addictive. The four dynamics that Priebatsch described were: Appointment dynamic, in which to succeed one must return at a predefined time to take a predetermined action (real life example: happy hour); Influence and status, the ability of one player to modify the behavior of another through social pressure. 
(Example: different color credit cards as a reflection of status.) Progression dynamic, in which success is granularly displayed and measured through the process of completing itemized tasks (example: the LinkedIn profile progress bar); Communal discovery, wherein an entire community is rallied to work together to solve a challenge (example: finding interesting content on Digg.com). Seth explained these four game dynamics in his talk and added that his company has an additional three. Does anyone know (or have any theories about) what the other three game dynamics are?"} {"_id": 4, "text": "How do you handle loss aversion in probability based games? The psychological phenomenon of loss aversion refers to how players feel losses twice as powerfully as victories. For example, Bite Fight's PvP is a simulation based on probabilities related to character skills, and players voice this feeling many times per week in the community forums. If you don't want to create a pay to win game, but you do want to let the worst players win often enough to feel good about it, how can you do that? The question has two parts: How do you handle it technically? Do you use some kind of math technique or a memory based simulation to avoid many losses in a row for a certain player? How do you handle it from the community point of view? What do you do about complaints like this on public forums?"} {"_id": 5, "text": "Huge procedurally generated 'wilderness' worlds I'm sure you all know of games like Dwarf Fortress: massive, procedurally generated wilderness and land. Something like this, taken from this very useful article. However, I was wondering how I could apply this to a much larger scale; the scale of Minecraft comes to mind (isn't that something like 8x the size of the Earth's surface?). Pseudo infinite, I think, would be the best term. The article talks about fractal Perlin noise. 
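On the technical half of the loss-aversion question above: a common memory-based approach is a streak breaker (sometimes called a pity timer), where each consecutive loss nudges the underdog's win chance upward until they finally win, then it resets. A minimal sketch; the base/step/cap values are arbitrary tuning knobs, not taken from any real game:

```python
class StreakBreaker:
    """Memory-based fairness sketch: after each loss, the player's win
    chance is nudged up until they win, then it resets to the base rate."""

    def __init__(self, base=0.30, step=0.10, cap=0.80):
        self.base, self.step, self.cap = base, step, cap
        self.losses = 0  # current losing streak length

    def win_chance(self):
        # each loss in the streak adds `step`, clamped at `cap`
        return min(self.base + self.step * self.losses, self.cap)

    def record(self, won):
        self.losses = 0 if won else self.losses + 1
```

The simulation stays probability-driven; only the probability itself remembers the streak, which caps how long a bad run can last without making outcomes feel scripted.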
I am in no way an expert on it, but I get the general idea (it's some kind of randomly generated noise which is semi coherent, so not just random pixel values). I could just define regions X by X in size, add some region loading type stuff, and have one bit of noise generating a region. But this would result in just huge numbers of islands. On the other extreme, I don't think I can really generate a supermassive sheet of Perlin noise. And it would just be one big island, I think. I am pretty sure Perlin noise, or some noise, would be the answer in some way. I mean, the map is really nice looking. And you could replace the ASCII with tiles and get something very nice looking."} {"_id": 5, "text": "What is the difference between dynamic generation and procedural generation? When I think of a dynamically generated game, I think of things like Diablo with randomly generated levels. When I think of a procedurally generated game, I think of things like Flappy Bird and other infinite runners. But both of these just randomize a level. Is it that procedurally generated games are constantly being generated and dynamically generated games are all generated up front? Or are these terms interchangeable? What is the difference between a dynamically generated game and a procedurally generated game?"} {"_id": 5, "text": "How to build a better game save? I am making a roguelike card game like Dream Quest, and I want to improve my dungeon level save method. Here's what my levels look like. I checked Dream Quest saves, but I don't like their method because it uses a simple text file for all the information. It's also really messy, repetitive and not extendable. I used the format below for the first couple of months: \"Wall\", \"Wall\", \"Room\", \"Room\", \"Room\", \"Merchant\", \"Room\", \"Room\", \"Wall\", \"Wall\", , \"Wall\", \"Wall\", ... But at the time I did not have multiple enemies, merchants, items. 
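One way out of the enemy1/enemy2/enemy3 enum trap in the save question above is to store each cell as a record with a kind plus free-form properties, so new varieties become data instead of new enum members. A sketch using JSON (the field names and example content are illustrative, not from the game):

```python
import json

# Each cell stores a kind plus free-form properties, so a new enemy or
# merchant variety needs a new record, not a new enum member.
level = {
    "width": 2, "height": 2,
    "cells": [
        {"kind": "wall"},
        {"kind": "room"},
        {"kind": "enemy", "id": "goblin", "hp": 12},
        {"kind": "merchant", "id": "potion_seller"},
    ],
}

def save_level(level):
    return json.dumps(level)

def load_level(text):
    data = json.loads(text)
    # sanity check: the flat cell list must fill the whole board
    assert len(data["cells"]) == data["width"] * data["height"]
    return data
```

The loader still switches on "kind", but variants within a kind are looked up by "id" in a data table, so content grows without touching the save code.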
I simply read the file using an enum which matched the strings used in the save format (item, enemy, merchant etc.), then passed them into an array to create the board. As I started to create different kinds of enemies, items, and merchants, my current approach wouldn't support this. I don't want to create enums like enemy1, enemy2, enemy3... How can I improve the scalability & legibility of this save game format, so I don't have to keep adding new enum categories for each variety of content?"} {"_id": 5, "text": "Generating a set of islands in real time With this method I can create multiple separated islands just by randomly choosing a point as the center for a gradient. Now what I want to do is to have a system of real time generation like the one in Minecraft. When I generate a new chunk, some random points inside it are chosen to be the centers of gradients. However, if one of these points is positioned near the border of the chunk, the gradient will cross the border and affect the other chunk. But if that other chunk was already generated and visited by the player, I couldn't update it to include the new gradient, so I would end up with something that looks like this. What could be a method to avoid this? 
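For the chunk-border problem in the island question above, one fix is to derive every gradient center deterministically from (seed, cell coordinates) instead of choosing them randomly when a chunk is first generated: any chunk can then recompute its neighbours' centers and apply their influence, so both sides of a border agree without ever updating an already-visited chunk. A sketch of that hashing scheme (function names are illustrative):

```python
import hashlib

def cell_feature_point(seed, cx, cy):
    """Deterministically place one gradient center inside grid cell
    (cx, cy). Because the position depends only on (seed, cx, cy),
    every chunk computes the exact same point for this cell."""
    h = hashlib.sha256(f"{seed}:{cx}:{cy}".encode()).digest()
    fx = h[0] / 255.0  # fractional offset inside the cell, in [0, 1]
    fy = h[1] / 255.0
    return (cx + fx, cy + fy)

def centers_near(seed, cx, cy):
    """All centers that can influence cell (cx, cy): its own plus the
    8 neighbours', so gradients spilling across a border are identical
    no matter which chunk evaluates them."""
    return [cell_feature_point(seed, cx + dx, cy + dy)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
```

This is the same trick Perlin/Worley-style noise uses: nothing is stored per chunk, so generation order and player visits stop mattering.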
Especially since my previous attempts have been too \"blocky\". You'll notice that I'm describing many of the qualities with not quite measurable adjectives (seems, feels, believable, etc.), so I'm having a hard time translating them into instructions and ultimately into an algorithm. Are there any tried and true algorithms for town generation? I understand this seems too broad, so consider this: if I asked for an algorithm for generating maps of continental land masses, I'd get references to Perlin and other noise algorithms right away, closely followed by Voronoi. I've seen questions like this one, but they seem to have a more concrete idea already in mind (i.e. 2x2 houses, a fixed number of houses, canal and road placement restrictions). What I'd like to have is something less constrained, except maybe for the grid layout, which should be a lot easier for a first attempt than, say, an L-system."} {"_id": 5, "text": "Workflow for creating spherical heightmaps I'm wondering how I can create a spherical heightmap. In my ideal workflow, you'd see a 3D sphere that you can edit using procedural terrain techniques (like World Creator), and that spherical terrain can be exported into six seamless images, based on whatever layout you want, which fold up into a cube in game to create your planet. Another option would be if you can at least tell the program you want specific images to tile with other images, such as the designated 'front, right, back, left' images tiling with each other and with the 'up, down' images seamlessly. Specifically, I'm making planets for a game that wraps six images into a cube and 'inflates' them into a planetary terrain. I've made a few basic heightmap sets in a program called Scape, and flipped them so at least four of the images mirror each other, then I edited the interior of those maps to all be different so the landscape isn't repetitive. 
However, manually editing the top and bottom maps is very daunting to get perfect, and I'm creating a dozen or more planets. I'd like to know how to make the 'front, right, back, left' images tile with the 'up and down' images, so that I can edit the interiors of the images in a 3D program of choice. I've been directed to this site from search queries many times, but I'm a newbie to all things code, so I have no clue how to use things such as the Diamond Square method or any algorithms to achieve seamless textures (what program do you actually use, literally a Python script you write in Notepad or something?)."} {"_id": 5, "text": "Floating point determinism with respect to procedural generation, clustering and GPU offloading I've been designing a distributed procedural generation system for a while now in my spare time, and one of the problems I've been thinking about recently, with respect to the broader architecture, is that of floating point determinism: inconsistency arises when you can't ensure that you have a homogeneous cluster of machines running a persistent world. Part of my design requires that any given machine can regenerate procedural content as needed, allowing for unimportant but resource hungry content to be destroyed and recreated as needed on whatever machine is available to do the work. I have been running on the basic idea that I will use high precision 32/64 bit integers for most things, and that generally works fine, but the standard coherent noise algorithms all use floating point values in their calculations. Do I need to implement custom non floating point versions of all those algorithms (i.e. using longs) or is there a better approach? Should I be using fixed point types for this kind of thing, and if so, how does that impact my desire to have the option to offload some of the PCG work to the GPU, when possible? Also, are fixed point types fast enough for heavy use within a game engine?
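For the six-faces-of-a-cube workflow above, one approach that avoids hand-matching edges entirely is to generate every face from the same 3D noise field: map each pixel of each face to a direction on the unit sphere, then sample noise by that 3D position. Shared edges then agree by construction. A sketch of the face-to-direction mapping (the face naming and the stand-in noise function are my own convention, not from the question):

```python
import math

def face_uv_to_dir(face, u, v):
    """Map a face name and u, v in [-1, 1] to a unit direction on the sphere."""
    x, y, z = {
        "front": (u, v, 1.0),
        "back":  (-u, v, -1.0),
        "right": (1.0, v, -u),
        "left":  (-1.0, v, u),
        "up":    (u, 1.0, -v),
        "down":  (u, -1.0, v),
    }[face]
    n = math.sqrt(x * x + y * y + z * z)
    return (x / n, y / n, z / n)

def height(face, u, v):
    """Sample a cheap stand-in for 3D noise by direction; any real 3D noise works."""
    x, y, z = face_uv_to_dir(face, u, v)
    return 0.5 + 0.5 * math.sin(3.0 * x) * math.sin(5.0 * y) * math.cos(2.0 * z)
```

Seams vanish because a pixel on a shared edge maps to the same 3D point from both faces (for example, the right edge of the front face and the adjacent edge of the right face), so the sampled height matches exactly.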
Can I ignore floating point precision issues if I can get away with a maximum level of precision? i.e. If I'm happy to use no more than, say, six decimal places of a result, does that keep me safe across different machines and architectures? Note: my engine is purely the simulation side of things. It doesn't matter to me if there are slight rendering inconsistencies when a user is playing the game; all that matters is that the procedurally generated source data is consistently regenerated no matter which machine in a cluster is doing the work."} {"_id": 5, "text": "fBm alteration based on derivative and noise value I'm still working on my procedural terrain generation and kind of sniffing ideas from Sean Murray (No Man's Sky programmer). So I watched this GDC17 presentation, but some things just don't make sense to me. Altering amplitude based on currentGain AND noise value? This just makes all heights (large noise val like 1) do no alteration and the small ones make the amplitude really tiny. How does that represent Altitude Erosion? The audio also doesn't help much, as he isn't going into code detail.
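For the determinism question above: one way to sidestep floating point entirely is integer-only value noise with fixed-point interpolation, since integer arithmetic is bit-exact on every conforming machine and maps cleanly to GPU integer ops. A minimal 1D sketch using 16.16 fixed point (the hashing constants and names are illustrative, not from any particular library):

```python
FP = 16          # 16.16 fixed point
ONE = 1 << FP

def lattice(seed, i):
    """Deterministic pseudo-random lattice value in [0, ONE) via integer hashing."""
    h = (i * 374761393 + seed * 668265263) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return (h ^ (h >> 16)) % ONE

def smoothstep_fp(t):
    """Fixed-point 3t^2 - 2t^3 for t in [0, ONE]."""
    t2 = (t * t) >> FP
    t3 = (t2 * t) >> FP
    return 3 * t2 - 2 * t3

def value_noise_fp(seed, x_fp):
    """Value noise at fixed-point coordinate x_fp; every operation is integer-only."""
    i, t = x_fp >> FP, x_fp & (ONE - 1)
    a, b = lattice(seed, i), lattice(seed, i + 1)
    w = smoothstep_fp(t)
    return a + (((b - a) * w) >> FP)
```

The same bit pattern comes out on every machine in the cluster, which is the property floats cannot guarantee across heterogeneous hardware and compilers.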
Any ideas to make sense out of this?"} {"_id": 5, "text": "2D Procedural resource generation I'm currently building a Mars colonization game and I'm wondering if anyone can point me to resources on how to procedurally create resources and biomes within a 2d game. Much like in a game like Factorio, where different resources and such are spread around the map. Thanks in advance!"} {"_id": 6, "text": "2D Car Simulation with Throttle Linear Physics I'm trying to make a simulation game for an automatic cruise control system. The system simulates a car on varying inclinations and throttle speeds. I've coded up to the car physics but these do not make sense. The dynamics of the simulation are specified as follows: a = V' - V; T - (k1)V - (k2) = ma; V' = (1 - (k1/m))V + T - (k2/m). Where T = throttle position, k1 = viscous friction, V = speed, V' = next speed, k2 = m g sin(angle of incline), a = acceleration, m = mass. Notice that the angle of incline in the equation is not chopped up by sin or cos. Even the equation for acceleration isn't right. Can anyone correct them or am I misinterpreting the physics? Source \"scribd.com doc 105335356 124 INDUSTRIAL APPLICATIONS\" Page 508. Number 13.2."} {"_id": 6, "text": "How can velocity be normalized after a collision if a projectile needs to maintain its height? I want a thrown disc to travel along the same height, even after collisions. The problem is, there's an edge case that I'm not sure how to deal with. The lowest tech solution would probably be to quickly interpolate the rotation of the disc so that it's flat, and also interpolate its height so that it never collides with another object in a way that would impact its height, but that might be a bit heavy handed. I worry that it would look really bad if you're holding the disc at an angle when releasing it for a throw, and it just suddenly \"swooshes\" flat mid flight. Would that seem janky?
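The cruise-control model above becomes easier to sanity-check when written as an explicit Euler step of m dV/dt = T - k1 V - m g sin(incline). A small sketch (the parameter values are made up for illustration, not from the cited textbook):

```python
import math

def step_speed(v, throttle, dt, m=1200.0, k1=60.0, g=9.81, incline=0.0):
    """One Euler step of m * dv/dt = T - k1*v - m*g*sin(incline)."""
    a = (throttle - k1 * v - m * g * math.sin(incline)) / m
    return v + a * dt

def simulate(v0, throttle, seconds, dt=0.01, **kw):
    """Integrate speed forward from v0 for the given duration."""
    v = v0
    for _ in range(int(seconds / dt)):
        v = step_speed(v, throttle, dt, **kw)
    return v
```

On level ground the speed converges to the terminal value T/k1, which gives a quick unit check: with T = 3000 N and k1 = 60 the steady-state speed is 50 m/s, and any positive incline lowers it.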
If I don't do that, and I simply retain the disc's rotation as it flies along its path, then it might collide with an object's edge in a way where the correct bounce angle would send it upwards or downwards, which I don't want. But if I simply hard code the y velocity to 0, then in those instances the lateral velocity becomes very low. Should I just check for a \"minimum\" speed, and increase it to that if it falls below that? Because the issue is that in the case where it bounces upward, there's a good chance that the correct bounce angle, if the y velocity is ignored, results in the disc continuing in the same direction rather than bouncing. So I'm honestly leaning towards extending the collision boxes for these objects really far above and below where the disc could possibly be thrown; that way, regardless of its orientation, it would always be the side of a cylinder colliding either with the side of a box or one of its vertical edges, and the velocity change should thus (hopefully) always look like it makes sense."} {"_id": 6, "text": "How can I resolve collisions a little better? I'm currently developing a physics engine and I'm not sure the best way to go about resolving my problem. I have a little box that I can move around with in my scene. When I'm resolving collisions, I take in the two bodies that are colliding, determine what both of their normal forces are and apply them (I also multiply them by a small number; if I don't, I can't continue to walk around the scene). Anyways, while I'm resolving the collision I do this: a.Position -= a.Velocity; b.Position -= b.Velocity; I know this isn't a very good way to resolve the collision, but I'm just not sure how to do it correctly. How should I go about resolving collisions correctly?
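For the disc question above, instead of zeroing the y velocity after a full 3D bounce, you can flatten the contact normal first: drop its vertical component, renormalize, and reflect only the horizontal velocity. The bounce then always preserves horizontal speed, so no "minimum speed" patch is needed. A sketch with (x, y, z) tuples and y up (the function name and the degenerate-contact policy are my own):

```python
import math

def bounce_flat(vel, normal):
    """Reflect velocity about the contact normal with its vertical part removed.

    vel, normal: (x, y, z) tuples, y is up. Returns the new velocity with y = 0.
    """
    nx, nz = normal[0], normal[2]
    n = math.hypot(nx, nz)
    if n < 1e-9:
        # Purely vertical contact (floor/ceiling): keep course, stay level.
        return (vel[0], 0.0, vel[2])
    nx, nz = nx / n, nz / n
    vx, vz = vel[0], vel[2]
    d = vx * nx + vz * nz              # velocity component along flattened normal
    return (vx - 2.0 * d * nx, 0.0, vz - 2.0 * d * nz)
```

Because the reflection happens entirely in the horizontal plane, an edge hit that "should" deflect the disc up or down instead deflects it sideways at full speed.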
Also, here is the whole method I use to resolve collision at the moment: private void ResolveCollision(Body a, Body b) { // Calculate normal forces (N = m*g) Vector2 normalForceA = this.CalculateNormalForce(a); Vector2 normalForceB = this.CalculateNormalForce(b); // Resolve collision, which is super buggy and obviously a bad idea. a.Position -= a.Velocity; b.Position -= b.Velocity; // Apply normal forces a.ApplyForce(-normalForceA * 11); b.ApplyForce(-normalForceB * 11); } EDIT: This is for a 2D game. EDIT: Here is a video of what's going on when I applied TomTsagk's answer. The video is here"} {"_id": 6, "text": "Box2D Asteroids Like Spaceship Physics I'm new to Box2D (JavaScript) (and the world of physics mathematics) so I'm having a hard time working out the methods I need to use to make a spaceship act like it does in the classic arcade game Asteroids. I've tried varying methods and nothing works quite right (as I obviously don't understand what I'm doing). For example, in Asteroids you can rotate the ship but it turns at a constant rate and won't continue to spin after you let go of the key, unlike the rest of the ship's physics. So I tried SetPositionAndAngle(position, angle + 0.1) as an example to make the ship turn without having it rotate after key up; however, when I then press the up key to make the ship go forward, it turns left as well as going forwards (via ApplyForce). Can anybody give some tips on how I should be dealing with Asteroids type spaceship physics?"} {"_id": 6, "text": "What is a simple deformer in which vertices deform linearly with control points? In my project I want to deform a complex mesh, using a simpler 'proxy' mesh. In effect, each vertex of the proxy collision mesh will be a control point / bone, which should deform the vertices of the main mesh attached to it depending on weight, but where the weight is not dependent on the absolute distance from the control point but rather distance relative to the other affecting control points.
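For the physics-engine question above, the usual replacement for nudging positions by velocity is an impulse along the contact normal: compute the relative velocity along the normal and apply an equal and opposite impulse scaled by restitution and the inverse masses. A minimal 2D sketch of the standard textbook formula (not the asker's engine API; positional correction for overlap is omitted):

```python
def resolve_collision(va, vb, ma, mb, normal, restitution=0.5):
    """Return post-impulse velocities for bodies a and b.

    va, vb: (x, y) velocities; normal: unit contact normal pointing from a to b.
    """
    nx, ny = normal
    rel = (vb[0] - va[0]) * nx + (vb[1] - va[1]) * ny  # relative vel along normal
    if rel > 0:
        return va, vb                     # already separating, do nothing
    inv_ma, inv_mb = 1.0 / ma, 1.0 / mb
    j = -(1.0 + restitution) * rel / (inv_ma + inv_mb)  # impulse magnitude
    return ((va[0] - j * inv_ma * nx, va[1] - j * inv_ma * ny),
            (vb[0] + j * inv_mb * nx, vb[1] + j * inv_mb * ny))
```

With equal masses and restitution 1, a head-on collision swaps the velocities, which is a handy sanity check; in a real engine you would also push the bodies apart along the normal by a fraction of the penetration depth.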
The point of this is to preserve complex three dimensional features of the main mesh while using physics implementations which expect something far simpler: low resolution, single surface, etc. Therefore, the vertices must deform linearly with their respective weighted control points (i.e. no falloff fields, or all the mesh features will end up collapsed), as if each vertex was linked to a point on the plane created by the attached control points and deformed with it. I have tried implementing the weight computation algorithm in this paper (page 4) but it is not working as expected, and I am wondering if it is really the best way to do what I want. What is the simplest way to 'skin' an arbitrary mesh to another arbitrary mesh? By skin I mean I need an algorithm to determine the best control points for a vertex, and their weights."} {"_id": 6, "text": "Box2d Body to follow mouse movement I am trying to get a Box2D body to rotate following the mouse. Here's an image to clarify what I mean. The red and blue circles are the current point of the mouse and the body (of corresponding color) moving/rotating to follow it. Basically the rectangle should rotate with its one end pointed towards where the mouse pointer is.
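For the proxy-mesh deformer above, a simple baseline before the paper's method is normalized inverse-distance weighting over the k nearest control points: each weight depends on a control point's distance relative to the others (not absolutely), the weights sum to 1, and the vertex moves as a plain linear blend of its controls. A sketch (all names are mine, and this is the naive baseline, not the cited paper's algorithm):

```python
def skin_weights(vertex, controls, k=3, eps=1e-9):
    """Inverse-distance weights over the k nearest control points; weights sum to 1."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(vertex, c)) ** 0.5, i)
        for i, c in enumerate(controls)
    )[:k]
    if dists[0][0] < eps:                 # vertex sits exactly on a control point
        return {dists[0][1]: 1.0}
    inv = {i: 1.0 / d for d, i in dists}
    total = sum(inv.values())
    return {i: w / total for i, w in inv.items()}

def deform(vertex, rest_controls, posed_controls, k=3):
    """Move the vertex by the weighted average of its control points' offsets."""
    w = skin_weights(vertex, rest_controls, k)
    offset = [0.0] * len(vertex)
    for i, wi in w.items():
        for axis in range(len(vertex)):
            offset[axis] += wi * (posed_controls[i][axis] - rest_controls[i][axis])
    return tuple(v + o for v, o in zip(vertex, offset))
```

Because the deformation is a weighted sum of control-point offsets, a vertex lying on a control point tracks it exactly, and translating all controls rigidly translates the mesh, with no falloff field involved.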
Here's my code so far: World world; Body body, bullet; Box2DDebugRenderer debugRenderer; OrthographicCamera camera; @Override public void create() { world = new World(new Vector2(0, 0f), true); debugRenderer = new Box2DDebugRenderer(); float w = Gdx.graphics.getWidth(); float h = Gdx.graphics.getHeight(); BodyDef bodyDef = new BodyDef(); bodyDef.type = BodyDef.BodyType.DynamicBody; bodyDef.position.set(10, 10); body = world.createBody(bodyDef); PolygonShape shape = new PolygonShape(); shape.setAsBox(2, 5); FixtureDef fixtureDef = new FixtureDef(); fixtureDef.shape = shape; fixtureDef.density = 1f; Fixture fixture = body.createFixture(fixtureDef); shape.dispose(); Gdx.input.setInputProcessor(this); camera = new OrthographicCamera(200, 100 * (h / w)); camera.position.set(camera.viewportWidth / 2f, camera.viewportHeight / 2f, 0); camera.update(); } @Override public void render() { Gdx.gl.glClearColor(1, 1, 1, 1); Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT); world.step(Gdx.graphics.getDeltaTime(), 6, 2); camera.update(); debugRenderer.render(world, camera.combined); } @Override public boolean mouseMoved(int screenX, int screenY) { int mouseX = screenX; int mouseY = Gdx.graphics.getHeight() - screenY; float rotateAngle = MathUtils.radiansToDegrees * MathUtils.atan2((float) mouseY - (float) body.getPosition().y, (float) mouseX - (float) body.getPosition().x); body.setTransform(body.getPosition(), rotateAngle / 57.2957795f); return true; } And here's a gif of how it appears now. As you can see, the body gets skewed and also doesn't rotate properly. What am I missing?"} {"_id": 6, "text": "Build a convex hull from a given mesh in Bullet According to this tutorial, a convex hull is the most accurate shape one can build from a mesh? I have two questions regarding this: How do I build a convex hull from a given, complex mesh in Bullet? Should this be done offline? How do most people do this? (that is, create a collision shape from a mesh in games)"} {"_id": 6, "text": "Precision problem when doing collision detection?
I've got this problem, and I don't know what may be the cause of it. I finished reading this article on sphere triangle collision. My character (the sphere) stops before it reaches the triangle, and then over like 2 seconds it slowly reaches the \"expected\" spot where I really want it to stop. Thing is, I tried switching my whole algorithm from floats to doubles, and the problem stays. So I'm not really sure if that's a number precision problem, although after much debugging I think that there could be a problem with the precision of the next calculation (t0 and t1): Real signedDistToTrianglePlane = trianglePlane.signedDistanceTo(colPackage->basePoint); Real normalDotVelocity = glm::dot(trianglePlane.normal(), colPackage->velocity); t0 = ((Real)-1.0 - signedDistToTrianglePlane) / normalDotVelocity; t1 = ((Real)1.0 - signedDistToTrianglePlane) / normalDotVelocity; Could the dot product function be a trap for precise calculations? (signedDistanceTo() uses glm::dot as well) Here's my collision detection and response algorithms. One last thing to note is that on page 47 in the article, the author uses a verySmallDistance number where he only updates some variables if distanceToCollision is bigger than it. I don't understand these lines. I've been struggling with this issue for a week now. Any help or idea on the subject will be highly appreciated!"} {"_id": 6, "text": "multiplayer networking with physics I'm curious how multiplayer networking with physics is implemented in racing games. We have a physical world with multiple fast moving vehicles controlled by different people. Let's say that vehicles have weapons and can shoot each other (Twisted Metal, Vigilante v8). I'm anxious about hits and collisions. Authoritative server or a better alternative?"} {"_id": 6, "text": "What to call an Object that falls only when it's collided with? I am currently working on Objects/Blocks for my 2D Jump'n'Run.
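The t0/t1 computation above comes from intersecting a unit sphere swept along the velocity with the triangle's plane: contact spans the times when the signed distance passes +1 and -1. A standalone sketch of just that step, as I read the article's formulas, which is also a convenient place to probe precision in isolation:

```python
def plane_sweep_interval(signed_dist, normal_dot_velocity):
    """Times at which a unit sphere's center is at distance +1 / -1 from a plane.

    Returns (t0, t1) with t0 <= t1, or None when moving parallel and not embedded.
    """
    if abs(normal_dot_velocity) < 1e-12:
        # Parallel to the plane: colliding for all time if embedded, never otherwise.
        return (0.0, 1.0) if abs(signed_dist) < 1.0 else None
    t0 = (-1.0 - signed_dist) / normal_dot_velocity
    t1 = (1.0 - signed_dist) / normal_dot_velocity
    return (t0, t1) if t0 <= t1 else (t1, t0)
```

For a sphere 3 units above the plane moving straight at it with unit speed, contact should begin after travelling 2 units and end after 4, which the test below checks; if your engine's values drift from that, the problem is upstream of these two divisions.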
Now I want to add a new type of block, which falls down only if the player / an actor collides with it. But I don't know what to call this type. Or, more generally: objects that only do stuff / change behaviour if interacted with. How should I name the first type, and the second more general type?"} {"_id": 7, "text": "Android 2d scrolling background I am a very beginner in game development. All I want to achieve in the beginning is a free scrollable background like in a strategy game, with my custom graphics. In my case it is supposed to be a 2d city map where the user can select buildings. How can I achieve that as easily as possible in Android? With OpenGL? Do you have any examples, especially for this scrolling functionality?"} {"_id": 7, "text": "How do I store level data in Android? I'm building a game where enemies come in waves. I want to create a file where I can define data about the waves (# of enemies, spawn times, speeds, etc.). I come from a background in iOS and would normally use something like a plist. What would be the best way to do something like this in Java and Android?"} {"_id": 7, "text": "how to play a video with Android or AndEngine I don't know if it's possible to show a video like the videos in Angry Birds (something in Flash or another format) in the middle of a game with Android or AndEngine, or are they animations that are made with Sprites and a more involved programming process? How do they show those videos?"} {"_id": 7, "text": "How do I find an optimal FPS for best user experience while preserving battery life? I'm developing an Android game. The \"graphics\" I'm using need little CPU/GPU, so they run with high FPS. Because saving energy is important on Android devices, I want to limit the frame rate (if deltaTime < limitDeltaTime -> Thread.sleep(...)). How high should the frame rate be to get an optimal balance between saving resources and a \"fluid\" gameplay experience?
At which point is the user experience optimal or no longer improving?"} {"_id": 7, "text": "What's wrong with this OpenGL ES 2.0 shader? I just can't understand this. The code works perfectly on the emulator (which is supposed to give more problems than phones), but when I try it on a LG E610 it doesn't compile the vertex shader. This is my log error (which contains the shader code as well). EDITED Shader: uniform mat4 u_Matrix; uniform int u_XSpritePos; uniform int u_YSpritePos; uniform float u_XDisplacement; uniform float u_YDisplacement; attribute vec4 a_Position; attribute vec2 a_TextureCoordinates; varying vec2 v_TextureCoordinates; void main() { v_TextureCoordinates.x = (a_TextureCoordinates.x + u_XSpritePos) * u_XDisplacement; v_TextureCoordinates.y = (a_TextureCoordinates.y + u_YSpritePos) * u_YDisplacement; gl_Position = u_Matrix * a_Position; } Log reports this before loading/compiling the shader: 11-05 18:46:25.579 D/memalloc(1649): /dev/pmem: Mapped buffer base:0x51984000 size:5570560 offset:4956160 fd:46 11-05 18:46:25.629 D/memalloc(1649): /dev/pmem: Mapped buffer base:0x5218d000 size:5836800 offset:5570560 fd:49 Maybe it has something to do with that memalloc? The phone is also giving a constant error while plugged in: ERROR: FBIOGET ESDCHECKLOOP fail, from msm7627a.gralloc Edited: \"InfoLog\" refers to glGetShaderInfoLog, and it's returning nothing. Since I removed the log in a previous edit I will just say I'm looking for feedback on compiling shaders. Solution / More questions: Ok, the problem seems to be that either ints are not working (generally speaking) or that you can't mix floats with ints. That brings me to the question: why on earth is glGetShaderInfoLog returning nothing? Shouldn't it tell me something is wrong on those lines? It surely does when I misspell something. I solved it by turning everything into floats, but if someone can add some light into this, it would be appreciated.
Thanks."} {"_id": 7, "text": "openGL ES change the render mode from RENDERMODE_WHEN_DIRTY to RENDERMODE_CONTINUOUSLY on touch I want to change the render mode from RENDERMODE_WHEN_DIRTY to RENDERMODE_CONTINUOUSLY when I touch the screen. What I need: Initially the object should be stationary; after touching the screen, it should move automatically. The motion of my object is a projectile motion and it is working fine. What I get: Force close and a NULL pointer exception. My code: public class BallThrowGLSurfaceView extends GLSurfaceView { MyRender _renderObj; Context context; GLSurfaceView glView; public BallThrowGLSurfaceView(Context context) { super(context); // TODO Auto generated constructor stub _renderObj = new MyRender(context); this.setRenderer(_renderObj); this.setRenderMode(RENDERMODE_WHEN_DIRTY); this.requestFocus(); this.setFocusableInTouchMode(true); glView = new GLSurfaceView(context.getApplicationContext()); } @Override public boolean onTouchEvent(MotionEvent event) { // TODO Auto generated method stub if (event != null) { if (event.getAction() == MotionEvent.ACTION_DOWN) { if (_renderObj != null) { Log.i(\"renderObj\", _renderObj + \"lll\"); // Ensure we call switchMode() on the OpenGL thread. queueEvent() is a method of GLSurfaceView that will do this for us. queueEvent(new Runnable() { public void run() { glView.setRenderMode(RENDERMODE_CONTINUOUSLY); } }); } } return true; } return super.onTouchEvent(event); } } PS: I know that I am making some silly mistakes in this, but cannot figure out what it really is."} {"_id": 7, "text": "Rotating each quad in a batch separately?
Background: In my app I use the following code to rotate my quad. Code: // Rotate the quad Matrix.setIdentityM(mRotationMatrix, 0); Matrix.translateM(mRotationMatrix, 0, centreX, centreY, 0f); Matrix.rotateM(mRotationMatrix, 0, angle, 0, 0, 0.1f); Matrix.translateM(mRotationMatrix, 0, -centreX, -centreY, 0f); And then apply the matrices: // Combine the rotation matrix with the projection and camera view Matrix.multiplyMM(mvpMatrix2, 0, mvpMatrix, 0, mRotationMatrix, 0); // get handle to shape's transformation matrix mMVPMatrixHandle = GLES20.glGetUniformLocation(iProgId, \"uMVPMatrix\"); // Apply the projection and view transformation GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix2, 0); The above code works great when I'm drawing single quads. Problem: However, if I have a few quads with the same texture to draw, I batch them up and draw them with one call to glDrawArrays. I can't work out if it is possible to rotate each individual quad before drawing them (or how to do it if it is possible). I realise they will all be rotated the same amount at the same time, but this isn't an issue. Rotation method: public void rotateBatchQuads(int[] coordinates, int angle) { for (x = 0; x < coordinates.length; x += 2) { // Center of quad (along the x) float centreX = coordinates[x] + quadWidth / 2; // pseudo code float centreY = coordinates[x + 1] + quadHeight / 2; // pseudo code, center of quad (along the y) // Rotate the quad Matrix.setIdentityM(mRotationMatrix, 0); Matrix.translateM(mRotationMatrix, 0, centreX, centreY, 0f); Matrix.rotateM(mRotationMatrix, 0, angle, 0, 0, 0.1f); Matrix.translateM(mRotationMatrix, 0, -centreX, -centreY, 0f); } }"} {"_id": 7, "text": "How to fit a bitmap to a random size rect so it does not stretch? (Android Studio) I'm trying to fit a bitmap to a random size rect, but I don't want the bitmap to be stretched out. I've tried using BitmapShader and tiling it, but it becomes animated and not fixed. Here is my code without BitmapShader.
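Regarding the batching question above: the translate-rotate-translate pattern can be applied on the CPU per quad before batching. Transform each quad's corners by T(center) * R(angle) * T(-center) yourself, write the transformed positions into the vertex buffer, and draw the whole batch with an identity model matrix; each quad then gets its own rotation in a single draw call. A 2D sketch of the per-corner arithmetic (not the libGDX/GLES API, just the math):

```python
import math

def rotate_quad(corners, cx, cy, degrees):
    """Rotate (x, y) corner tuples around (cx, cy); same as T(c) * R * T(-c)."""
    a = math.radians(degrees)
    cos_a, sin_a = math.cos(a), math.sin(a)
    out = []
    for x, y in corners:
        tx, ty = x - cx, y - cy           # T(-c): move pivot to origin
        rx = tx * cos_a - ty * sin_a      # R(angle)
        ry = tx * sin_a + ty * cos_a
        out.append((rx + cx, ry + cy))    # T(c): move back
    return out
```

Done this way, the batch needs no per-quad uniform changes, which is the whole point of batching; the trade-off is re-uploading vertex positions whenever an angle changes.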
public void draw(Canvas canvas) { Paint paint = new Paint(); canvas.drawBitmap(spike1, null, rectangle, paint); canvas.drawBitmap(spike1, null, rectangle2, paint); } Here is my code with BitmapShader: public void draw(Canvas canvas) { Paint paint = new Paint(); paint.setShader(new BitmapShader(spike1, Shader.TileMode.REPEAT, Shader.TileMode.REPEAT)); canvas.drawRect(rectangle, paint); canvas.drawRect(rectangle2, paint); } Here is a picture of my problems. Can somebody tell me what I need to do so the image is fixed but not stretched?"} {"_id": 7, "text": "How do I update the OpenGL lighting equation in my fragment shader to make my texture less glossy and more like a fabric I built a model in Blender, and am currently trying to import it into my Android app using Assimp together with OpenGL. I don't have any issues with the importing, but my goal is to make the object look as realistic as possible, and this is where my problem lies... I have implemented normal mapping using both the diffuse and normal maps of my texture, but after rendering, the object looks smooth and glossy. Yes, the bumps were rendered well and I like them, but I need the material to look more like a fabric (cloth textile). How can I do this? Do I need to update my lighting equation, or is the issue with my original diffuse texture?
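For the fit-without-stretching question above, the usual fix is to compute a destination rect that preserves the bitmap's aspect ratio: scale by min(dstW/srcW, dstH/srcH) and center the result inside the target (letterboxed "fit inside"; Android's Matrix.ScaleToFit.CENTER does the equivalent). A sketch of the arithmetic (names are mine):

```python
def fit_rect(src_w, src_h, dst_left, dst_top, dst_right, dst_bottom):
    """Largest centered rect inside dst with the same aspect ratio as src."""
    dst_w, dst_h = dst_right - dst_left, dst_bottom - dst_top
    scale = min(dst_w / src_w, dst_h / src_h)   # uniform scale, no distortion
    w, h = src_w * scale, src_h * scale
    left = dst_left + (dst_w - w) / 2.0
    top = dst_top + (dst_h - h) / 2.0
    return (left, top, left + w, top + h)
```

Drawing the bitmap into the computed rect scales it uniformly; for a repeating spike strip, the alternative would be BitmapShader tiling with a per-rect translation set on the shader's local matrix so the pattern doesn't appear to move.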
Here is my original diffuse texture. Here is my rendered model. Here's my fragment shader: void main() { // obtain normal from normal map in range [0, 1] vec3 normal = texture2D(normalMap, textureCoords).xyz; // transform normal vector to range [-1, 1] normal = normalize(normal * 2.0 - 1.0); // this normal is in tangent space // get diffuse color vec3 color = texture2D(textureSampler, textureCoords).xyz; // ambient vec3 ambient = 0.1 * color; // diffuse vec3 lightDir = normalize(tangentLightPos - tangentFragPos); float diff = max(dot(lightDir, normal), 0.0); vec3 diffuse = diff * color; // specular vec3 viewDir = normalize(tangentViewPos - tangentFragPos); vec3 reflectDir = reflect(-lightDir, normal); vec3 halfwayDir = normalize(lightDir + viewDir); float spec = pow(max(dot(normal, halfwayDir), 0.0), 32.0); vec3 specular = vec3(0.2) * spec; gl_FragColor = vec4(ambient + diffuse + specular, 1.0); } Here's my vertex shader: mat3 transpose(mat3 m) { return mat3(m[0][0], m[1][0], m[2][0], m[0][1], m[1][1], m[2][1], m[0][2], m[1][2], m[2][2]); } void main() { gl_Position = mvpMat * vec4(vertexPosition, 1.0); vec3 fragPos = vec3(model * vec4(vertexPosition, 1.0)); vec3 T = normalize(normalMatrix * tangent); vec3 N = normalize(normalMatrix * normal); T = normalize(T - dot(T, N) * N); vec3 B = cross(N, T); mat3 TBN = transpose(mat3(T, B, N)); // values for fragment shader tangentLightPos = TBN * lightPos; tangentViewPos = TBN * viewPos; tangentFragPos = TBN * fragPos; textureCoords = vertexUV; }"} {"_id": 7, "text": "Phone complains that identical GLSL struct definition differs in vert/frag programs When I provide the following struct definition in linked frag and vert shaders, my phone (Samsung Vibrant, Android 2.2) complains that the definition differs. struct Light { mediump vec3 position; lowp vec4 ambient; lowp vec4 diffuse; lowp vec4 specular; bool isDirectional; mediump vec3 attenuation; // constant, linear, and quadratic components }; uniform Light u_light; I know the struct is identical because it's included from another file. These shaders work on a Linux implementation and on my Android 3.0 tablet.
Both shaders declare \"precision mediump float;\" The exact error is: Uniform variable u_light type/precision does not match in vertex and fragment shader. Am I doing anything wrong here, or is my phone's implementation broken? Any advice (other than file a bug report)?"} {"_id": 8, "text": "2D platformers: why make the physics dependent on the framerate? \"Super Meat Boy\" is a difficult platformer that recently came out for PC, requiring exceptional control and pixel perfect jumping. The physics code in the game is dependent on the framerate, which is locked to 60fps; this means that if your computer can't run the game at full speed, the physics will go insane, causing (among other things) your character to run slower and fall through the ground. Furthermore, if vsync is off, the game runs extremely fast. Could those experienced with 2D game programming help explain why the game was coded this way? Wouldn't a physics loop running at a constant rate be a better solution? (Actually, I think a physics loop is used for parts of the game, since some of the entities continue to move normally regardless of the framerate. Your character, on the other hand, runs exactly fps/60 as fast.) What bothers me about this implementation is the loss of abstraction between the game engine and the graphics rendering, which depends on system specific things like the monitor, graphics card, and CPU. If, for whatever reason, your computer can't handle vsync, or can't run the game at exactly 60fps, it'll break spectacularly. Why should the rendering step in any way influence the physics calculations? (Most games nowadays would either slow down the game or skip frames.) On the other hand, I understand that old school platformers on the NES and SNES depended on a fixed framerate for much of their control and physics. Why is this, and would it be possible to create a platformer in that vein without having the framerate dependency?
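The constant-rate physics loop raised in the question above is usually implemented with an accumulator: render as fast as you like, but consume the accumulated frame time in fixed-size simulation steps (this follows the widely cited "fix your timestep" pattern; the clamp constant is my own choice):

```python
DT = 1.0 / 60.0  # fixed simulation step, independent of render rate

def run(frame_times, update, max_steps=5):
    """Drive fixed-size updates from variable frame times; returns total steps run."""
    accumulator, steps = 0.0, 0
    for frame in frame_times:
        accumulator += min(frame, max_steps * DT)  # clamp runaway frames
        while accumulator >= DT:
            update(DT)
            accumulator -= DT
            steps += 1
    return steps
```

Whether frames arrive at 30 Hz or 240 Hz, the simulation advances in identical DT increments, so character speed no longer depends on the framerate; the leftover accumulator fraction is what you would use to interpolate rendering between the last two physics states.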
Is there necessarily a loss of precision if you separate the graphics rendering from the rest of the engine? Thank you, and sorry if the question was confusing."} {"_id": 8, "text": "Making Edges of Hand Drawn Tileset Match Up I'm making a tile based platformer. A friend at college is doing all my art, and so far it's beautifully hand drawn. The problem is, I can't seem to make the edges of the tilesets match up. He measures them meticulously to get sections to match up, but, due to variations in his art medium, tiles that aren't next to each other in his drawing don't match up. For example, he's drawn a basic 9 slice tileset, and the top to center, left top to left center, right top to right center, etc. all connect perfectly because they're touching each other in his drawing. But when I clip the tiles apart and put them together in other variations, or try to put other \"change up\" tiles together (any tile that wasn't touching in his drawing), the larger features fit together because of his measurements, but you can see an obvious seam where they join. I've been using the heal tool in my graphics editor to fix the seams and re export the tiles with edited edges to fit together correctly, and it works well, but when I heal the edges for one seam, it makes another seam appear. Note: I don't want a solution to fix the seams at runtime using graphical filter magic or something on the code side. I'm looking for a solution to process the images and export them so that they fit together. EDIT: Here's one of the tiles for reference. It tiles along the bottom of a block of stone. And here's a 9 slice thumbnail of an early stage of the tileset. Edit again: To be clear, it's not the big-ish things (stone sides, grass, etc.) connecting that's my problem. My artist already measures everything to make it match up. It's the variations in shading and impreciseness of the physical medium that's the problem."} {"_id": 8, "text": "What does the term 'photorealistic' really mean?
I was wondering about the term 'photorealistic' in regard to rendering, and how it is used. Is it used to describe a shader (or set of shaders) that have certain quantifiable features? Or any rendering that's not meant to be abstract, like the cartoon effect seen in Borderlands? Or is it just a subjective term meaning 'really really realistic'?"} {"_id": 8, "text": "Creating graphics at different angles for sprites I am developing a Java game that uses sprites for the graphics. It's just a top down shooter and our ship sprites look like the following. These work fine, but they're hard to create, and the artist who created them is no longer working on the game. I could easily create a top down view of a ship in Photoshop myself, but I'm not sure how to get all the angles. What do you think would be the best program or approach for me to create more?"} {"_id": 8, "text": "Why are huge polygon amounts bad? It is always said that the polygon count of a single model must be as low as possible when it comes to realtime simulations such as computer games. (Or at least lower than when rendering a movie.) I am fully aware that this must be done in order to save performance. But aside from that information I cannot find why huge polygon amounts must be avoided. (In short: I know that polygons eat performance. I want to know why they eat performance.) So my question would be: What happens when a frame is rendered? The polygons are surely somehow processed in the graphics card. What happens there? If possible I would like to have some links to sites containing this information."} {"_id": 8, "text": "Drawing the same mesh or drawing the same material? I was wondering. Suppose I have 1000 grass meshes. They all have the same material, but I create them separately, because they look slightly different, because they have different heights. Does my GPU speed up if I only draw one mesh over and over again?
Or is only the material switching and uniform setting the main problem? So: should I consider drawing only one mesh 1000 times, or is it OK to have a lot of different meshes sharing the same material?"} {"_id": 8, "text": "How many polygons for smartphone hardware starting from 2015? I already know that the answer is \"depends\": it depends on whether the hardware is low, mid, or high end. So, I'll try to reformulate: is a 100K concurrent polycount in a 3d game for smartphones too much? What is a good compromise? Thanks"} {"_id": 8, "text": "How do games deal with Z sorting partially transparent foliage textures? I was busy implementing basic transparency in a prototype I'm working on when something occurred to me. In order for a given texture's transparency to work as expected, the (semi-)transparent texture must be drawn after whatever is behind it, right? Well, if we take for example a tree or shrub in a game like Skyrim, the texture(s) that make up the foliage on that tree or shrub must include some transparency somewhere, right? A vertex perfect leaf model would be far too resource intensive. But the player can move around, and sometimes through, any plant at will, thus changing the relative position of all textures to the camera. Doesn't this mean that the game has to constantly Z sort all textures, both between models and even within a single plant model, whenever the player moves (so potentially every single frame)? Isn't that very resource intensive? How do games with lots of partially transparent textures deal with this?"} {"_id": 8, "text": "Gravity independent of game updates per second Edit: Just for clarification, my sprite's 'movement' isn't the problem. If I set my Time variable to 4 seconds, then it will cross the screen in exactly 4 seconds regardless of logic update rate per second, rendering rate per second or screen resolution. So I am pretty sure I'm scaling the sprite's movement correctly.
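On the foliage question above: engines really do re-sort transparent geometry every frame, but only at coarse granularity (per quad, cluster, or object, not per pixel), and much vegetation avoids sorting entirely by using alpha-tested "cutout" shaders, which discard pixels instead of blending and therefore work with the depth buffer in any order. For the surfaces that do need alpha blending, the per-frame sort itself is cheap; a sketch of the far-to-near ordering step:

```python
def draw_order(camera, centers):
    """Return indices of transparent quads sorted far-to-near from the camera.

    camera and centers are (x, y, z) tuples; squared distance avoids the sqrt.
    """
    def dist_sq(c):
        return sum((a - b) ** 2 for a, b in zip(c, camera))
    return sorted(range(len(centers)), key=lambda i: dist_sq(centers[i]), reverse=True)
```

Sorting a few thousand quad centers per frame is negligible next to rendering them, which is why the brute-force approach is acceptable in practice.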
What I'm pretty sure I'm not doing correctly is scaling acceleration. Original Question: I'm trying to implement gravity in my 2D platformer and am having a few problems understanding how to keep it consistent when I change my updates per second. Here's what I have. My game loop overview: Currently, my game loop renders at the maximum rate allowed by the device, and the updates are 'clamped' to an upper limit (at the moment, 60 per second). I am working on the assumption that most of the time my game will have no problem hitting this, even if the actual rendering dips. Thus I am doing all of my calculations based on a delta time derived from this fixed 'ticksPerSecond' value. I don't know for sure that this will remain at 60; I may decide at some point during development to lower this upper limit. My gravity variable declarations and initial values: At the moment, I have float spriteYTime = 7f; this is the (initial) amount in seconds that this sprite will take to move from the top of the screen to the bottom.
float fallAccel = .5f; this is the value that will be subtracted from the sprite's fall time (to make it fall ever faster). float terminalVelocity = 1.5f; cap speed at this rate (1.5 seconds). My sprite's position is worked out on dt as follows:

// Delta time
float ticksPerSecond = 60;
float dt = 1f / ticksPerSecond;
// Velocity
spriteYVel = 1 / spriteYTime;
// Update position
spriteYReal = spriteYReal + (spriteYVel * dt);
// Convert to screen coordinates (will be drawn at this Y coordinate)
spriteYScreen = (int) (spriteYReal * height);

My gravity code:

// If my sprite's state is 'f' (meaning falling) then apply gravity
if (sprite.getState('f') == true) {
    // Calculate new position
    spriteYReal = spriteYReal + (spriteYVel * dt);
    // Convert to screen coordinates
    sprite.yScreen = (int) (spriteYReal * r.height);
    // Reduce time by fallAccel amount & update velocity value based on the new
    // time value (so the sprite falls slightly faster this frame compared to the last)
    spriteYTime = spriteYTime - fallAccel;
    spriteYVel = 1 / spriteYTime;
    // Check that speed isn't faster than terminal velocity
    if (spriteYTime < terminalVelocity) { spriteYTime = terminalVelocity; }
}

The problem: Now, this does work, but if I change my ticksPerSecond value it goes wrong (it falls at different rates). I know that 'Earth normal' gravity is approximately 9.8 meters per second per second, but this is measured 'per second', whereas I (think) I need to work with 'per frame'. This is where I'm getting confused. So, take this example: if I set my initial Time value to 4 seconds, then if it remained constant at this speed it would take 4 seconds to reach the bottom. If I changed my ticksPerSecond, because the time value is worked out using delta, it would still take 4 seconds. But if I apply 'fallAccel' to the Time value (i.e. subtract it), it goes wrong when I change the ticksPerSecond value. Why?
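For what it's worth, the frame-rate dependence comes from subtracting fallAccel once per tick: that makes the acceleration proportional to the tick rate. If acceleration is scaled by dt when updating velocity, and velocity is scaled by dt when updating position (semi-implicit Euler), the distance fallen becomes nearly independent of the tick rate. A minimal sketch, with hypothetical numbers:

```python
def simulate_fall(ticks_per_second, duration=1.0, gravity=9.8):
    """Semi-implicit Euler: acceleration is scaled by dt into velocity,
    and velocity is scaled by dt into position, so the per-tick step
    shrinks as the tick rate grows."""
    dt = 1.0 / ticks_per_second
    y = vel = 0.0
    for _ in range(int(duration * ticks_per_second)):
        vel += gravity * dt   # speed gained this tick
        y += vel * dt         # distance fallen this tick
    return y

# Distance fallen in one second is (nearly) the same at any tick rate:
print(simulate_fall(30))   # ~5.06
print(simulate_fall(60))   # ~4.98
print(simulate_fall(240))  # ~4.92
```

The small residual differences are ordinary Euler integration error; the key point is that nothing here is tuned to a particular ticksPerSecond.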
How can I get this to fall at the same rate regardless of the value of 'ticksPerSecond'? Any help in understanding this would be appreciated."} {"_id": 8, "text": "Geometry Shader and Stream Output with DirectX 11 I am having trouble trying to send vertices generated in the Geometry Shader to Stream Output. What I am trying to accomplish is to generate vertices from the Geometry Shader and store them in a vertex buffer so that I can use the vertex buffer to draw later. I read that I have to use the CreateGeometryShaderWithStreamOutput function to create a Geometry Shader that can send vertices to Stream Output instead of the rasterization stage. This is how I am trying to use it:

device->CreateGeometryShaderWithStreamOutput(this->mGSBlobSO->GetBufferPointer(), this->mGSBlobSO->GetBufferSize(), so_decl, 1, &stride, 1, D3D11_SO_NO_RASTERIZED_STREAM, NULL, &this->mGeometryShaderSO);

I am getting an E_INVALIDARG at this line. I am specifying D3D11_SO_NO_RASTERIZED_STREAM because I think this means that I do not want to send data to the rasterizer, but I am not sure. When I replace D3D11_SO_NO_RASTERIZED_STREAM with a 0, I do not get this runtime error, but I do not get the result I want. How can I set up the geometry shader to store vertices in a vertex buffer in Stream Output?"} {"_id": 9, "text": "Cocos2d Check if place is free before moving (all objects) Is there a method in Cocos2d like CGRectIntersectsRect, except instead of limiting it to one sprite, it checks for ALL objects?"} {"_id": 9, "text": "Creating multiple fixtures in one body I want to create this type of fixture in one body. Here the square and circle are different fixtures but attached to one body. How do I do this?"} {"_id": 9, "text": "What is the correct way of changing the image of an existing CCSprite? How do I correctly change a sprite to show another image? Or to have another texture? This is the way I do it now: when I need to change the sprite image I increase a state variable.
So I have one picture for 0, another for 1 and another for 2. First I tried just saying mySprite = [CCSprite spriteWithFile:@\"sprite.png\"], but that didn't end up too well, since I was making new objects every frame and it got messy pretty fast. So I added a check. There is another variable, stateChanged, and it starts at zero. So I compare state to stateChanged before actually making any sprites, and if it differs I do the magic. But here comes the problem: somehow this sprite won't flip. And I believe the problem is in this method.

- (void) updateState {
    if (stateChanged != state) {
        if (state == 0) {
            currentSprite = [CCSprite spriteWithFile:@\"eggclosed.png\"];
            [self addChild:currentSprite];
        }
        if (state == 1) {
            currentSprite = nil;
            [self removeAllChildrenWithCleanup:YES];
            currentSprite = [CCSprite spriteWithFile:@\"eggopen.png\"];
            [self addChild:currentSprite];
        }
        if (state == 2) {
            currentSprite = nil;
            [self removeAllChildrenWithCleanup:YES];
            currentSprite = [CCSprite spriteWithFile:@\"yoshi.png\"];
            [self addChild:currentSprite];
        }
        stateChanged = state;
    }
}

TL;DR: How do I correctly change the image of a sprite? I have a Pet object that has a CCSprite property, and I sometimes need to change the sprite to look another way. How do I do that?"} {"_id": 9, "text": "Projectile immediately intersecting with target I have a fairly simple and somewhat cookie cutter method for testCollisions (FYI BasicProjectile is a CCNode with a CCSprite member).
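As a sanity check on logged CGRect values like the ones in this question, here is a plain-Python equivalent of an axis-aligned rect intersection test (treating each rect as origin x, y plus width, height). Rects {{10, 10}, {20, 20}} and {{13.5, 34}, {27, 68}} do not overlap, since 10 + 20 < 34 on the y axis; so if a hit still registers immediately, it is worth checking whether the rects actually compared at fire time are these values at all, and whether both boundingBoxes are measured in the same coordinate space:

```python
def rects_intersect(a, b):
    """Axis-aligned intersection test for rects given as (x, y, width, height)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

projectile = (10.0, 10.0, 20.0, 20.0)
destination = (13.5, 34.0, 27.0, 68.0)
print(rects_intersect(projectile, destination))  # False: 10 + 20 < 34 on the y axis
```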
for (BasicProjectile *basicProjectile in self.projectiles) {
    NSLog(@\"projectile sprite %@\", NSStringFromCGRect(basicProjectile.sprite.boundingBox));
    NSLog(@\"destination sprite %@\", NSStringFromCGRect(basicProjectile.destination.sprite.boundingBox));
    if (CGRectIntersectsRect(basicProjectile.sprite.boundingBox, basicProjectile.destination.sprite.boundingBox)) {
        [basicProjectile.destination setHealth:(basicProjectile.destination.hp - basicProjectile.damage)];
        [self.projectiles removeObject:basicProjectile];
        [self removeChild:basicProjectile cleanup:YES];
    }
}

Those NSLog statements output:

2013-03-23 10:02:59.124 GridWars[25268:c07] projectile sprite {{10, 10}, {20, 20}}
2013-03-23 10:02:59.124 GridWars[25268:c07] destination sprite {{13.5, 34}, {27, 68}}

The main problem is that the collision happens immediately. It should take a few frames to travel from the source to the destination. So although the logic of setting the health works fine, the sprite itself is never seen on the screen, because the very second it's fired it is removed. From what I understand about CGRectIntersectsRect, those 2 rects listed above do not intersect. Am I wrong? Are they intersecting? What can be causing these projectiles to immediately intersect with their target? BasicProjectile class (CCNode superclass):

- (id)initWithSourceAndDestination:(GameCharacters *)Source destination:(GameCharacters *)Destination {
    if (self = [super init]) {
        self.damage = 25;
        self.sprite = [CCSprite spriteWithFile:@\"Projectile.png\"];
        [self addChild:self.sprite z:1000];
        self.source = Source;
        self.destination = Destination;
    }
    return self;
}

GameCharacters would look similar to this, except it has health and mana properties. WizardHero class, method Fire:

basicProjectile.position = self.position;
[self.hostLayer addChild:basicProjectile z:2021];
// Determine where we wish to shoot the projectile to
int realX;
// Are we shooting to the left or right?
CGPoint diff = ccpSub(basicProjectile.destination.position, basicProjectile.source.position);
if (diff.x > 0) {
    realX = (self.hostLayer.tileMap.mapSize.width * self.hostLayer.tileMap.tileSize.width) + (basicProjectile.contentSize.width / 2);
} else {
    realX = -(self.hostLayer.tileMap.mapSize.width * self.hostLayer.tileMap.tileSize.width) - (basicProjectile.contentSize.width / 2);
}
float ratio = (float) diff.y / (float) diff.x;
int realY = ((realX - basicProjectile.position.x) * ratio) + basicProjectile.position.y;
CGPoint realDest = ccp(realX, realY);
// Determine the length of how far we're shooting
int offRealX = realX - basicProjectile.position.x;
int offRealY = realY - basicProjectile.position.y;
float length = sqrtf((offRealX * offRealX) + (offRealY * offRealY));
float velocity = 380 / 1; // 380 pixels / 1 sec
float realMoveDuration = length / velocity;
// Move projectile to actual endpoint
id actionMoveDone = [CCCallFuncN actionWithTarget:self.hostLayer selector:@selector(projectileMoveFinished:)];
[basicProjectile runAction:[CCSequence actionOne:[CCMoveTo actionWithDuration:realMoveDuration position:realDest] two:actionMoveDone]];
[self.hostLayer.projectiles addObject:basicProjectile];

Basically, here is the class structure. Three main superclasses: GameProjectiles : CCNode, GameLevels : CCNode, GameChars : CCNode. Then the actual more specific classes (used on the GameLevels): BasicProjectile : GameProjectiles, Level1Layer : GameLevels, WizardHero : GameCharacters, RedEnemy : GameCharacters"} {"_id": 9, "text": "Recommended book for cocos2d? I'm an experienced programmer that recently got into iOS development by working through the Big Nerd Ranch book by Aaron Hillegass and Joe Conway. I loved the way the book was structured in terms of typing in the code and doing the challenges. I'm interested in learning more about iOS gaming and cocos2d, but am a complete newbie in terms of game development design.
There are a number of books on Amazon on cocos2d; can anyone recommend one in particular?"} {"_id": 9, "text": "Importance of a scripting engine in the Cocos2d game engine Each game engine is different and solves different problems in different ways, so engine design does vary greatly from engine to engine (even though a lot of principles are shared between engines). Cocos2D is a great product on its own, but it doesn't expose engine functionality to a scripting language like Lua, JavaScript, etc. My question: how important is it to integrate a scripting engine with Cocos2d?"} {"_id": 9, "text": "How to use generic level editors with my arbitrary code? I have a game, a Breakout clone. It has a Block class that contains a CCSprite as a property. Usually I made levels by manually setting those blocks to positions. Here comes the question: how do I use generic level editors with my custom code? All editors work with CCSprites and not my class. How can I overcome that?"} {"_id": 9, "text": "DPad style movement for AI without using A* What is the easiest way to implement DPad style movement (no diagonals) for AI without using an A* algorithm? I thought about having the enemy catch up to the player in the Y axis first, then the X axis (or vice versa), but then it would be too easy to evade the enemy. If it matters, I'm doing this in Objective C using Cocos2d. Any input would be appreciated."} {"_id": 9, "text": "I need some help with stopping individual sound effects in OALSimpleAudio I'm trying to find out how to stop a specific sound effect rather than having to stop all of them. I'm using OALSimpleAudio. I can't seem to find how to do it. Simple Google searches result in nothing, which means my searches are crap, it's REALLY easy to do and I'm ... stupid :p, or it isn't possible. I hope you guys can help me. In the background I have a humming engine sound, but when the object takes damage, the humming sound is stopped; that's not what I want.
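I can't confirm the exact OALSimpleAudio method names from memory, but the usual pattern in audio layers like this (and what ObjectAL's playEffect: appears to support via its return value) is: keep the handle that play returns, and stop that handle instead of stopping everything. The bookkeeping, sketched generically in Python with hypothetical names:

```python
class SoundMixer:
    """Toy mixer: playing an effect returns a handle, so callers can
    stop that one effect instead of stopping every effect."""
    def __init__(self):
        self._next_id = 0
        self._playing = {}          # handle -> effect name

    def play(self, name, loop=False):
        handle = self._next_id
        self._next_id += 1
        self._playing[handle] = name
        return handle               # caller keeps this to stop the sound later

    def stop(self, handle):
        self._playing.pop(handle, None)

    def is_playing(self, name):
        return name in self._playing.values()

mixer = SoundMixer()
engine = mixer.play("engine_hum", loop=True)   # background loop
hit = mixer.play("damage")                     # one-shot effect
mixer.stop(hit)                                # engine keeps humming
print(mixer.is_playing("engine_hum"))  # True
print(mixer.is_playing("damage"))      # False
```

The point is only the design: stopAllEffects is a global hammer, while a per-play handle gives per-sound control.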
case kStateTakingDamage:
    CCLOG(@\"Ship taking damage\");
    [self playTakingDamageSound];
    self.characterHealth = self.characterHealth - 1.0f;
    action = [CCActionAnimate actionWithAnimation:damageAnimation];
    [[GameManager sharedGameManager] stopSoundEffect];
    break;

- (void) playSoundEffect:(NSString *) soundEffect {
    [self playSoundEffect:soundEffect andLoop:NO];
}

- (void) playSoundEffect:(NSString *) soundEffect andLoop:(BOOL) loop {
    [[OALSimpleAudio sharedInstance] playEffect:soundEffect loop:loop];
}

- (void) stopSoundEffect {
    [[OALSimpleAudio sharedInstance] stopAllEffects];
}

- (id) isSoundPlaying {
    // TODO: find out how this works
}

"} {"_id": 9, "text": "How to create fun and interactive menus with cocos2d? Possible Duplicate: How to make animated buttons in a menu with Cocos2d? How do I make fun and interactive menus with cocos2d? All the tutorials are just for simple clickable menus. Can anyone help me learn this? It would be nice to see the menu flip or slide away; I know you can do this with scenes, but how do I make this with just the menu? Let's say the menu is a bunch of cards and I want them to flip over like the sub menu is on the backside of the card. How would I do this? Or, make the main menu slide in from the left, right, top or bottom side, and then when a button is selected it slides away the same way and switches to a sub menu (settings, score, etc.)? Anyone? David"} {"_id": 10, "text": "Image drawing tools (that aren't Photoshop) that work in the sRGB colorspace I'm looking for a way to create sRGB raster images. Now, I could use Inkscape and just have it render to a raster format, but are there any image drawing tools other than Photoshop that work in sRGB? Here's what I want to be able to do, more specifically. Let's say the image is all one color. I want to be able to pick color 0x808080, which is in the sRGB colorspace. I want the file to write an image where every pixel is 0x808080.
And then, when I read it, I can tell OpenGL that it's sRGB color data, so it will treat the 0x808080 color as being from the sRGB colorspace. The important part is that, when I'm selecting colors in the tool, the colors I'm selecting are from the sRGB colorspace."} {"_id": 10, "text": "Recording Xbox controller button presses? I would like to generate some sort of log file of Xbox gamepad events while playing an existing game on PC, such as Call of Duty. I am not developing a game necessarily; I just would like a static file of all button presses recorded during a session of gameplay. I am writing an application that will use this data as input. Can anyone recommend any software tools that could enable me to do this? Thanks in advance. I have Mac, Windows, and Ubuntu that I can use."} {"_id": 10, "text": "Tool to create a bitmap font from a TrueType font What tool do you use to convert TTF fonts to bitmap fonts? Following is a list of features I'm looking for. I'd be happy with any tool really, but one that had these features would be better: outputs a power-of-2 texture atlas; parsable config file (so I don't have to use their rendering library); outputs kerning information; cross platform; a rendering library would be great if it lets me make the actual OpenGL calls. So what tool have you used? Were you happy with it?"} {"_id": 10, "text": "Are bounding volumes created by artists?
I'm interested in where in the tool chain bounding boxes are created. Are they made by the artists? In the modeling tool? How are they exported? Can Maya/3DS Max/Blender mark geometry as a bounding box? How does that work in Collada/FBX? Or is there a tool which calculates them? Are they calculated at run time? I basically know how bounding volumes work, but I'm a little bit confused on where to get them from."} {"_id": 10, "text": "Sprites saved in INA and AN format? I'm not sure if this is a suitable question, but I found a game that had all its sprites and stuff saved in INA and AN format, and I got curious to know which program it is. Or is it some sort of compression or the like, and how is it usually done? Anyone know? Here is the content of one of the files: http://pastebin.com/EzUsFvdG Here is a hex dump of the above file: http://pastebin.com/CTErFBcS The above one is the AN file and the one below INA. Here is a hex dump of an INA file, which seems to be some sort of map or data file (I believe because of the layers info, etc.): http://pastebin.com/2fWn8izR"} {"_id": 10, "text": "What language and tools are good to start with for Linux game development? I would like to start learning about game development and I would like to do it using Linux. My main experience has been with Java and a bit of C++, but that was for applications and not games. Now that I want to try games, I'm wondering which IDE/language I should use on Linux: whether I should go with C++ and compiling files manually at the beginning, or maybe continue with Java/Eclipse (also, I think I can use Eclipse with C++), and I have also started reading about Python... I was thinking of starting with the typical \"Tetris\" just to understand how a game works internally, and I think I can achieve that with all the previous options, but what do you recommend, and how did you start game development under Linux?"} {"_id": 10, "text": "What level editor is Phil Fish using in Indie Game: The Movie?
Does anyone know what program Phil Fish is using to edit models of levels? Or is it something he built himself? I'm new to game development and was looking for something to design levels with besides my notebook."} {"_id": 10, "text": "What is a good tool for producing animated sprites? Has anyone come across a software package that allows you to build animations in a similar way to how you can in Flash (i.e. using techniques such as tweens & bones & easings, etc.) and then have the result exported as a sprite sheet?"} {"_id": 10, "text": "How is console game development done? I'm curious what the process is for developing for a game console such as Wii, PlayStation, or Xbox. Do I need to use some game engine and compile for each platform? What IDE is used? Any C++ IDE? Are all console games built in C++?"} {"_id": 11, "text": "create terrain with some vertical cliffs, natural arcades, caves using octree I am about to start creating the first terrain. I would like to create a terrain with vertical cliffs, natural arcades and caves. For instance, in one place the terrain might look like this. I want to discuss how to store this information about the terrain. I think a quadtree will not do the trick. From my understanding, with a quadtree perfectly vertical terrain is bound to be low quality in the vertical part, as you cannot subdivide it. So I thought that an octree should be the right tool. But I have a conceptual problem with octrees: I cannot understand how to store the triangle data in an octree structure. Understanding how to store data in a quadtree is easy: you subdivide the region into a lot of squares and you give a coordinate value to each vertex of the squares, then you just connect all the vertices with triangles. But what can you do when it comes to an octree? Which information should you store, conceptually, in the octree? I cannot guess the logic and the algorithm that lies behind octree terrain.
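One conceptual answer that seems workable: an octree doesn't store the geometry itself so much as references to it. Each node covers a cube of space, and a leaf keeps the indices of the triangles whose centroid (or bounding box) falls inside its cube, splitting into eight children when it gets full. A toy sketch, with hypothetical capacity and depth limits:

```python
import random

class OctreeNode:
    """Toy octree over triangle centroids: each leaf keeps the indices
    of the triangles whose centroid falls inside its cube."""
    CAPACITY = 8

    def __init__(self, center, half_size, depth=0, max_depth=5):
        self.center, self.half = center, half_size
        self.depth, self.max_depth = depth, max_depth
        self.items = []        # (triangle_index, centroid) pairs
        self.children = None   # eight OctreeNodes once split

    def insert(self, tri_index, centroid):
        if self.children is not None:
            self._child_for(centroid).insert(tri_index, centroid)
        elif len(self.items) < self.CAPACITY or self.depth >= self.max_depth:
            self.items.append((tri_index, centroid))
        else:
            self._split()
            self.insert(tri_index, centroid)

    def _child_for(self, p):
        # One bit per axis picks one of the eight octants.
        i = ((p[0] > self.center[0])
             | ((p[1] > self.center[1]) << 1)
             | ((p[2] > self.center[2]) << 2))
        return self.children[i]

    def _split(self):
        h = self.half / 2.0
        cx, cy, cz = self.center
        self.children = [OctreeNode((cx + (h if i & 1 else -h),
                                     cy + (h if i & 2 else -h),
                                     cz + (h if i & 4 else -h)),
                                    h, self.depth + 1, self.max_depth)
                         for i in range(8)]
        items, self.items = self.items, []
        for tri_index, centroid in items:   # redistribute into children
            self._child_for(centroid).insert(tri_index, centroid)

    def count(self):
        if self.children is None:
            return len(self.items)
        return sum(c.count() for c in self.children)

root = OctreeNode(center=(0.0, 0.0, 0.0), half_size=100.0)
random.seed(1)
for i in range(200):
    root.insert(i, (random.uniform(-100, 100),
                    random.uniform(-100, 100),
                    random.uniform(-100, 100)))
print(root.count())  # 200
```

Rendering or physics then queries the tree for the nodes intersecting a region and only touches the triangle indices they reference; vertical cliffs are no longer special, because subdivision happens along all three axes.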
Thank you"} {"_id": 11, "text": "Static \"LoD\" hack opinions I've been playing with implementing dynamic level of detail for rendering a very large mesh in XNA. It occurred to me that (duh) the whole point of this is to generate small triangles close to the camera, and larger ones far away. Given that, rather than constantly modifying or swapping index buffers based on a feature's rendered size or distance from the camera, it would be a lot easier (and potentially quite a bit faster) to render a single \"fan\" or flat wedge frustum shaped planar mesh that is tessellated into small triangles close to the near or small end of the frustum and larger ones at the far end, sort of like this (overhead view; pardon the gap in the middle, I drew one side and mirrored it). The triangle sizes are chosen so that all are approximately the same size when projected. Then, that mesh would be transformed to track the camera so that the Z axis (center vertical in this image) is always aligned with the view direction projected into the XZ plane. The vertex shader would then read terrain heights from a height texture and adjust the Y coordinate of the mesh to match a height field that defines the terrain. This eliminates the need for culling (since the mesh is generated to match the viewport dimensions) and the need to modify the index and/or vertex buffers when drawing the terrain. Obviously this doesn't address terrain with overhangs, etc., but that could be handled to a certain extent by including a second mesh that defines a sort of \"ceiling\" via a different texture. The other LoD schemes I've seen aren't particularly difficult to implement and, in some cases, are a lot more flexible, but this seemed like a decent quick and dirty way to handle height map based terrain without getting into geometry manipulation. Has anyone tried this?
Opinions?"} {"_id": 11, "text": "3rd party terrain editor I am looking for a good application for generating 3D terrains (really allowing generation and then some user editing). After a Google search I found many, but I need one that can export the entire terrain as a mesh. This, I was unable to find. (I need either .obj, .fbx or .3ds output.) The size of the file that is written and the price of the program are not major factors in my decision."} {"_id": 11, "text": "Real world terrain import into UE4 I am trying to create a basic game in Unreal Engine 4.8.1 and I wish to have it set in Dubai, UAE, but I cannot find a way that will successfully use the terrain data I have obtained (NASA Shuttle RADAR 90m) to create a 1:1 scale map in Unreal without creating huge world spikes. Does anyone know a method that can import the data without creating terrain spikes?"} {"_id": 11, "text": "anisotropic fog of war Here I am not talking about how to render fog of war, but how to model exploring of terrain in a more sophisticated way. In a straightforward approach, terrain explored by a game unit is simply represented by a circle whose radius is how far this unit can see, and the visible area is just a union of isotropic circles. But I can tell that this is not the case in popular games. Please take a look at the following screenshot from League of Legends, for example. The sight of view is anisotropic. The shape of the surrounding terrain is also taken into account. Any idea how this can be done?"} {"_id": 11, "text": "Heightmap based Terrain with a Road What's the best way to implement a detail feature, like a road, on heightmap based terrain? Update: It's a bit hard to see in the image, but the road descends from the top of the quarry to its base."} {"_id": 11, "text": "How do I simplify terrain with tunnels or overhangs? I'm attempting to store vertex data in a quadtree with C++, such that far away vertices can be combined to simplify the object and speed up rendering.
This works well with a reasonably flat mesh, but what about terrain with overhangs or tunnels? How should I represent such a mesh in a quadtree? After the initial generation, each mesh is roughly 130,000 polygons, and about 300 of these meshes are lined up to create the surface of a planetary body. A fully generated planet is upwards of 10,000,000 polygons before applying any culling to the individual meshes. Therefore, this second optimization is vital for the project. The rest of my confusion centers around my inexperience with vertex data: How do I properly loop through the vertex data to group vertices into specific quads? How do I conclude from vertex data what a quad's maximum size should be? How many quads should the quadtree include?"} {"_id": 11, "text": "Rendering smooth ground I'm attempting to render terrain made out of a triangle mesh. The problem is that whenever I have a northwest to southeast ramp in the terrain, I get this diamond pattern. The issue is that at the top and bottom of the ramp the terrain is more flat than at the middle, so if the ramp is aligned with the triangles, each triangle will have two light vertices (at the top and bottom) and one dark vertex (at the middle). How can I fix this effect? Subdividing the mesh helps, but doesn't fix it completely.
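A common fix for exactly this diamond artifact is to smooth the vertex normals on the CPU: accumulate each triangle's un-normalized face normal (whose length is proportional to the triangle's area, so larger faces weigh more) into all three of its vertices, then normalize, so a vertex on the crease averages the slopes of every face that touches it. A small sketch:

```python
def vertex_normals(vertices, triangles):
    """Average area-weighted face normals into per-vertex normals.

    vertices:  list of (x, y, z) tuples
    triangles: list of (i0, i1, i2) index tuples
    """
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    normals = [(0.0, 0.0, 0.0)] * len(vertices)
    for i0, i1, i2 in triangles:
        e1 = sub(vertices[i1], vertices[i0])
        e2 = sub(vertices[i2], vertices[i0])
        n = cross(e1, e2)                # length is proportional to triangle area
        for i in (i0, i1, i2):
            normals[i] = tuple(a + b for a, b in zip(normals[i], n))
    out = []
    for n in normals:                    # normalize the accumulated sums
        length = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5 or 1.0
        out.append((n[0] / length, n[1] / length, n[2] / length))
    return out

# Flat 1x1 grid in the XZ plane: every smoothed normal points straight up.
verts = [(float(x), 0.0, float(z)) for z in range(2) for x in range(2)]
tris = [(0, 2, 1), (1, 2, 3)]
print(vertex_normals(verts, tris))  # four copies of (0.0, 1.0, 0.0)
```

Pairing smoothed normals with per-fragment lighting (as the shaders in the question already do) removes the two-light-one-dark pattern, because the crease vertex no longer carries the slope of only one face.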
Vertex shader:

uniform mat4 projection;
attribute vec3 position;
attribute vec3 normal;
attribute vec4 color;
varying vec3 f_position;
varying vec3 f_normal;
varying vec4 f_color;
void main(void) {
    gl_Position = projection * vec4(position, 1.0);
    f_color = color;
    f_normal = normal;
    f_position = position;
}

Fragment shader:

varying vec3 f_position;
varying vec3 f_normal;
varying vec4 f_color;
uniform vec3 light_position;
void main(void) {
    float light = 0.5 + 0.5 * abs(dot(normalize(f_normal), normalize(light_position - f_position)));
    gl_FragColor = vec4(light, light, light, 1) * f_color;
}

"} {"_id": 11, "text": "What do sandbox games usually do when generating new terrain? I am trying to make a Minecraft clone. In my game, when the player moves to the boundary, new chunks are generated. What I do is create new chunks in my main thread and generate a new VBO to store them. But I find that this is quite inefficient, and you can obviously feel the FPS decreasing. What do these kinds of games usually do when generating new terrain? Do they use multiple threads, some particularly efficient algorithms, or compute shaders (I still don't know much about OpenGL)? I once tried to generate my new terrain in another thread, but I found I can't use GL operations to generate a VBO in another thread. I am thinking about pre-generating some VBOs in my main thread. Is that the right way? I want to know what most people do in this situation."} {"_id": 12, "text": "Black bars on 720p emulator? I'm making a game for Windows Phone 8 using XNA 4.0 in Visual Studio Express 2012. I'm trying to add a background image to the game with a size of 1366x768 (as I believe WP8 only supports up to 720p). Everything stretches well on the other emulators (WVGA and WXGA) until I use the 720p emulator, where I get black bars on the sides. How can I fix this, or is this normal?
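On the scaling question, the usual approach for a resolution you didn't design for is to compute a letterbox or pillarbox destination rectangle from the ratio of the screen to your virtual resolution, rather than stretching into the full viewport. The arithmetic, sketched in Python (the resolutions are just examples):

```python
def fit_viewport(screen_w, screen_h, virtual_w, virtual_h):
    """Scale a virtual resolution to fit the screen while preserving
    aspect ratio; returns (x, y, w, h) of the centered destination rect."""
    scale = min(screen_w / virtual_w, screen_h / virtual_h)
    w, h = round(virtual_w * scale), round(virtual_h * scale)
    return ((screen_w - w) // 2, (screen_h - h) // 2, w, h)

# 1366x768 content on a 1280x720 screen: aspect ratios nearly match,
# so the fitted rect fills the screen.
print(fit_viewport(1280, 720, 1366, 768))  # (0, 0, 1280, 720)
# On a 800x480 (WVGA) screen the aspect differs, so bars appear
# above and below the 450-pixel-tall image.
print(fit_viewport(800, 480, 1366, 768))   # (0, 15, 800, 450)
```

Since 1366x768 and 1280x720 are both almost exactly 16:9, visible side bars on the 720p emulator suggest the device is reporting a viewport narrower than expected, which is worth logging before adjusting any art.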
This is all the code I have used so far:

GraphicsDeviceManager graphics;
SpriteBatch spriteBatch;
Texture2D Background;

public Game1()
{
    graphics = new GraphicsDeviceManager(this);
    Content.RootDirectory = \"Content\";
    graphics.SupportedOrientations = DisplayOrientation.LandscapeLeft | DisplayOrientation.LandscapeRight;
    this.graphics.IsFullScreen = true;
    // Frame rate is 30 fps by default for Windows Phone.
    TargetElapsedTime = TimeSpan.FromTicks(333333);
    // Extend battery life under lock.
    InactiveSleepTime = TimeSpan.FromSeconds(1);
}

/// <summary>
/// Allows the game to perform any initialization it needs to before starting to run.
/// This is where it can query for any required services and load any non graphic
/// related content. Calling base.Initialize will enumerate through any components
/// and initialize them as well.
/// </summary>
protected override void Initialize()
{
    // TODO: Add your initialization logic here
    base.Initialize();
}

/// <summary>
/// LoadContent will be called once per game and is the place to load all of your content.
/// </summary>
protected override void LoadContent()
{
    // Create a new SpriteBatch, which can be used to draw textures.
    spriteBatch = new SpriteBatch(GraphicsDevice);
    Background = Content.Load<Texture2D>(\"SkyBackground\");
}

/// <summary>
/// UnloadContent will be called once per game and is the place to unload all content.
/// </summary>
protected override void UnloadContent()
{
    // TODO: Unload any non ContentManager content here
}

/// <summary>
/// Allows the game to run logic such as updating the world, checking for collisions,
/// gathering input, and playing audio.
/// </summary>
/// <param name=\"gameTime\">Provides a snapshot of timing values.</param>
protected override void Update(GameTime gameTime)
{
    // Allows the game to exit
    if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
        this.Exit();
    base.Update(gameTime);
}

/// <summary>
/// This is called when the game should draw itself.
/// </summary>
/// <param name=\"gameTime\">Provides a snapshot of timing values.</param>
protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.CornflowerBlue);
    // TODO: Add your drawing code here
    spriteBatch.Begin();
    drawBackground();
    spriteBatch.End();
    base.Draw(gameTime);
}

private void drawBackground()
{
    spriteBatch.Draw(Background, GraphicsDevice.Viewport.Bounds, Color.White);
}

"} {"_id": 12, "text": "Writing via content pipeline in Xbox game project I'm creating an Xbox application and I have this problem with the content pipeline. Loading .xnb files is not a problem, but I can't seem to find any helpful tutorials on writing via the content pipeline. I want to write an XML file whenever the user presses a custom made \"save\" button. I've searched the web for \"saving game state\" etc., but so far I haven't found a solution for my case. So, summarized: is there a way to write data (in XML format) via the content pipeline, if my Save() method is called? Cheerz"} {"_id": 12, "text": "Confused about Content Pipeline I'm attempting to get my head around the XNA content pipeline, and how I can use it to simplify my game code. Specifically, I want to define sprites, sprite sheets, and animations as assets and have them automatically available to me in game code. At the end of the day, this is what I want to be able to do: add a new file with a .sprite extension to my content project, call contentManager.Load<Sprite>(\"NameOfSprite\"), and get back a Sprite instance. My Sprite class looks like this:

public class Sprite
{
    public Texture2D Texture { get { ... } }
    public Rectangle? SourceRectangle { get { ... } }
    public Vector2 Origin { get { ... } }
}

The purpose of the Origin property is not important; it's specific to my game. The point is, I want it to be \"packaged up\" alongside the texture information. I've defined a SpriteData class as:

public sealed class SpriteData
{
    public string TextureFilename { get; set; }
    public Rectangle? SourceRectangle { get; set; }
    public Vector2 Origin { get; set; }
}

And I've added a test.sprite file to my content project as:

<?xml version=\"1.0\" encoding=\"utf-8\" ?>
<XnaContent xmlns:data=\"MyNamespace\">
  <Asset Type=\"data:SpriteData\">
    <TextureFilename>Sprites\Test.jpg</TextureFilename>
    <SourceRectangle>0 5 20 25</SourceRectangle>
    <Origin>30 2</Origin>
  </Asset>
</XnaContent>

I have defined a SpriteImporter as:

[ContentImporter(\".sprite\", DisplayName = \"Sprite Importer\", DefaultProcessor = \"SpriteProcessor\")]
public class SpriteImporter : ContentImporter<SpriteData>
{
    public override SpriteData Import(string filename, ContentImporterContext context)
    {
        using (var xmlReader = XmlReader.Create(filename))
        {
            return IntermediateSerializer.Deserialize<SpriteData>(xmlReader, filename);
        }
    }
}

And a SpriteProcessor as:

[ContentProcessor(DisplayName = \"Sprite Processor\")]
public class SpriteProcessor : ContentProcessor<SpriteData, SpriteData>
{
    public override SpriteData Process(SpriteData input, ContentProcessorContext context)
    {
        var spritePath = Path.GetDirectoryName(context.OutputFilename);
        var textureReference = new ExternalReference<Texture2DContent>(input.TextureFilename);
        // make sure the texture is built (not sure what else to do here)
        context.BuildAsset<Texture2DContent, Texture2DContent>(textureReference, string.Empty);
        return input;
    }
}

Now I'm stuck. If I don't implement a content writer/reader combination, then I won't get a Sprite out of the content manager; I'll get a SpriteData instead, which is not what I want. But my attempt to implement a SpriteWriter came up short. I have no idea how to get the texture information from the processor to the writer. Do I need to change the type from SpriteData to something else?
Is there a sample anywhere that demonstrates how to do this end to end?"} {"_id": 12, "text": "Client and Server game update speed I am working on a simple two player networked asteroids game using XNA and the Lidgren networking library. For this set up I have a Lidgren server maintaining what I want to be the true state of the game, and the XNA game is the Lidgren client. The client sends key inputs to the server, and the server processes the key inputs against game logic, sending back updates. (This seemed like a better idea than sending local positions to the server.) The client also processes the key inputs on its own, so as to not have any visible lag, and then interpolates between the local position and remote position. Based on what I have been reading, this is the correct way to smooth out a networked game. The only thing I don't get is what value to use as the time deltas. Currently every message the server sends, it also sends a delta time update with it, which is the time since the last update. The client then saves this delta time to use for its local position updates, so they can be using roughly the same time deltas to calculate position updates. I know the XNA game update gets called 60 times a second, so I set my server to update the game state at the same speed. This will probably only work as long as the game is working on a fixed time step, and will probably cause problems if I want to change that in the future. The server sends updates to clients on another thread, which runs at 10 updates per second to cut down on bandwidth. I do not see noticeable lag in movement, and over time, if no user input is received, the local and remote positions converge on each other as they should. I am also not currently accounting for any latency, as I am trying to go one step at a time. So my question is: should the XNA client be using its current game time to update the local game state and not be using time deltas sent by the server?
If I should be using the client's time delta between updates, how do I keep it in line with how fast the server is updating its game state?"} {"_id": 12, "text": "How to animate a destruction of a model? I have a model (see image) and I am trying to animate a destruction. But it doesn't seem possible, since XNA uses only bones to animate. So my question is: which workflow should I use to animate 4 independent objects (being one big model), which lie on top of each other? Regarding this model"} {"_id": 12, "text": "Strange anomalies when using WinForms with XNA I haven't seen any questions related to my issue, and this leads me to believe I'm missing something minor. However... The Problem: I'm creating a game that is effectively launched from a windows form. Consider the following code. static void Main(string[] args) { main = new TestGame(); /* The Game class. */ menu = new Menu(main); /* The form. */ while (true) { if (menu.ShowDialog() == DialogResult.OK) { if (menu.ShouldStartGame) main.RunGame(/* Some params */); else ExitGame(); break; } } } This loop works fine. My issue lies in the form itself. Firstly, DialogResult does not trigger at all. When the result from menu.ShowDialog() is printed, its result is \"Cancel\", which is not the option set for the button (OK.) As well as this, picture boxes, the form background colour and other such controls do not work properly. And the form appears to have a UI style from Windows versions before or during XP. Is this a known issue or am I missing something?"} {"_id": 12, "text": "Monogame Startup Memory Spike I am currently porting a game from WP7 using XNA to WP8.1 using MonoGame. I have been putting considerable effort into optimizing my game to be under the 185MB memory limit for low memory phones. I was making headway until I discovered a memory spike on startup. I continued to reduce the quality of my assets until the memory usage (while the game is running) is at about 140MB.
However, on startup, the memory usage spikes to 205MB, sits there for about 10 seconds, and then drops to standard operating memory usage. It does this every time. Increasing or decreasing the asset sizes only changes how high the spike goes and the usage in general, but it does not change the 10 second delay at all. One idea I thought about was to delay the user with a longer splash screen, giving me time to load all the assets without blowing the hatch. But this seems to me like detracting from the user experience out of convenience. Any ideas? Has anyone else had this problem?"} {"_id": 12, "text": "Creating refraction map problem I'm reading Riemer's tutorial on how to make a water effect and trying to translate it from XNA 3 to XNA 4. So far I figured out I have to create a shader in order to clip the planes for my refraction map. I'm new to HLSL and how to implement custom shaders into my program, so I am wondering what I am doing wrong here. float4x4 World; float4x4 View; float4x4 Projection; float4 ClipPlane0; void vs(inout float4 position : POSITION0, out float4 clipDistances : TEXCOORD0) { clipDistances.y = 0; clipDistances.z = 0; clipDistances.w = 0; position = mul(mul(mul(position, World), View), Projection); clipDistances.x = dot(position, ClipPlane0); } float4 ps(float4 clipDistances : TEXCOORD0) : COLOR0 { clip(clipDistances); return float4(0, 0, 0, 0); /* TODO whatever other shading logic you want */ } technique { pass { VertexShader = compile vs_2_0 vs(); PixelShader = compile ps_2_0 ps(); } } This is the fx file that I found on an XNA forum, just for reference. Major edit: I was able to get the program running successfully, but when I viewed the saved screenshot of the refraction map, nothing was clipped.
public void DrawRefractionMap(Effect clipEffect, Camera camera, GraphicsDevice device) { Plane refractionPlane = CreatePlane(waterHeight + 1.5f, new Vector3(0, -1, 0), camera, false); clipEffect.Parameters[\"ClipPlane0\"].SetValue(new Vector4(refractionPlane.Normal, refractionPlane.D)); device.SetRenderTarget(refractionRenderTarget); device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.Black, 1, 0); foreach (EffectPass pass in clipEffect.CurrentTechnique.Passes) { pass.Apply(); DrawTerrain(clipEffect, camera, device); } device.SetRenderTarget(null); refractionTexture = (Texture2D)refractionRenderTarget; FileStream stream = File.OpenWrite(\"Screenshot33.png\"); refractionTexture.SaveAsJpeg(stream, refractionTexture.Width, refractionTexture.Height); }"} {"_id": 12, "text": "How can I properly rotate a 2D vector in the \"flipped\" XNA client space? In my 2D XNA game, because SpriteBatch treats world space as client space and has the positive Y axis down and negative up, I've built my game's world space with that coordinate system too. However, I've hit a snag when I try to rotate a position around the origin using a matrix. var p1 = new Vector2(95f, 40f); var m = Matrix.CreateRotationZ(MathHelper.ToRadians(90)); var p2 = Vector2.TransformNormal(p1, m); This results in p2.X = -40 and p2.Y = 95. If there's an object that is positioned relative to another object and that other object rotates, the child object gets swung around the parent object in the opposite direction, because the matrix works in the positive Y axis up and negative down coordinate system. What's the best way to account for this? Negate the Y value before and after transforming? EDIT: To get more detailed, I am trying to do a transformation like this, where the child rotates with the parent. I thought I would be able to multiply the matrices from the child object up through each of its parents' transform matrices in order to get its final world position, scale, and rotation that can be passed to SpriteBatch.Draw.
Unfortunately the combined translations and rotations don't work out properly with the inverted Y axis."} {"_id": 12, "text": "How to clip cut off text in a textbox? I'm writing a textbox class for my XNA game and I'm using SpriteFont to draw the text. It's only a single line textbox, so when the text width exceeds the size of the rectangle box, it currently just keeps going. In Windows textboxes, extra text is cut off so that you may only see half a letter. I'd like to emulate this with my custom control. However, I see no overload of SpriteBatch.DrawString that would allow for anything that could cut off text that didn't fit within certain bounds. Does anyone know of a method that would allow for this? I'm still quite new to the XNA API, so I'm not sure what out there exists for this sort of thing. Thanks"} {"_id": 13, "text": "Dynamically Deformable Terrain In Game Engine I am looking for a game engine that is open to the public for free or at a paid price that allows for any reasonable way of doing deformable terrain over a network. The closest I have found to this is in UDK, where one can build a terrain in 3ds, cut it up, import different chunks into UDK, and fracture them. Unfortunately after a few hours' work I discovered that this doesn't seem to work too well for what I am trying to do. Can anyone recommend a game engine, or even a rendering engine, that supports this? Programming other features into a rendering engine is not an issue for me."} {"_id": 13, "text": "How do I use tiles and sprites together in an isometric scene? I'm trying to write a 2D isometric scene. Rendering order is complex, since tilemaps and sprites are different concepts. Rendering one of the 2 before the other will draw the scene incorrectly. Tiles and sprites both have some common data that can be used as render information. I thought of creating an extra object which simply holds the coordinates and texture data.
However, this also meant having to couple a tile or sprite to a render info object (something I haven't figured out). This adds complexity. However, I thought this way I could abstract over any renderable object. This tutorial defines tiles as any other sprite and then uses a topology algorithm to sort the scene every frame. I was actually thinking of using the pigeonhole algorithm by sorting the render info objects on their depth property. How is this usually done? I can't wrap my head around it. Bear in mind that I have no actual z depth to work with. Everything relies on the artificial depth from the x and y coordinates."} {"_id": 13, "text": "How to render Viva Pinata fur In the game Viva Pinata, cute virtual animals have color changing, paper cut like fur. It doesn't seem like shell rendering, because there are LOTS of animals in a scene, and shell rendering every one of them to render this fur sounds like a daunting process for a game. I tried to build a 3D model with each triangle, but that didn't seem like the right solution either. I am out of tricks in my pocket."} {"_id": 13, "text": "What are some of the more commonly used projectile rendering techniques? Couldn't find a duplicate question (bit surprising to me), but anywho... I'm starting to get near implementing the rendering of projectiles for my game. My question is: what are some good techniques for efficiently rendering projectiles? I would like emphasis on techniques that leave room for the projectiles to be \"rich\" and dynamic (cool to look at!). I'm also using DX11 for my rendering engine, so bleeding edge techniques that can make use of that would be much appreciated too. Thanks!"} {"_id": 13, "text": "What is the best PBR real time Fresnel function? I'm working on a physically based renderer, and I've come to a sort of crossroads regarding the Fresnel factor. I'm having trouble finding the best way to represent it.
I know that Schlick's Fresnel approximation is based off IOR, but IORs can go up to 38.6 for a metamaterial, and 4.05 for a natural element, which will make representing these in a 0 to 1 image difficult and confusing. I also noticed that no one really uses IOR maps. I also read a paper on Unreal's PBR integration, and I discovered that they initially wanted to use an F0 of 0.4 for non metals. What would be the F0 for metals in this case, and isn't the static value of 0.4 worth the limitations for that tiny bit of memory? I believe the F0 tends to the base color as it becomes more metallic, but I'd like confirmation. Finally, there's reflectivity or specular, as is used in modern PBR equations. Is there a standard for this, in regard to getting an F0? It seems arbitrary: is it a float value that directly maps to F0? I am not sure if there are any real reasons to not combine specular color and base color, as Unreal has done this before. I can't think of a single real reason, even for more stylized implementations. What is the best PBR real time Fresnel function?"} {"_id": 13, "text": "How to maintain char widths of non monospace fonts? Having a font via spritesheet (as PNG), the easiest way to render fonts from that is just showing chars as monospace, but as you can imagine, that looks not pretty with chars like l, i, and so on. Is there a slick way to maintain the width of every char? I already thought of storing it in an extra file, telling the width of each char in pixels. Pro: fast rendering. Con: that's one load of work for about 100 chars. I also gave \"counting\" the pixel width of each char on rendering it a thought. Pro: once that algorithm does its job, there's no work with it afterwards. Con: as there is pretty much font rendering going on each frame, this is plain bullshit performancewise. I'm open to suggestions and or known algorithms for this problem.
EDIT: TTF or some other real font is not an option, because they render way too pixelated at the small sizes needed. EDIT: Thanks to lorenzo gatti, I made simple marker pixels in the spritesheet like so. The distance between these gets counted on startup of the game, and the markers are replaced with transparent pixels. So no heavy additional logic in render which would slow things down, and startup time is not really slower than before. Thanks!"} {"_id": 13, "text": "How should I set camera in Blender for example to render a sprite which can be used in an isometric map? I am trying to render some 3D models in Blender and use them as texture sprites in a game which uses Apple's SpriteKit. I have an isometric map with tile size 32 as width and 24 as height. Since I use the same size tile all over the map (isometric), I need to use an orthographic projection I think (please tell me if I am wrong, but I was pretty sure). Anyway... I am using Blender to render my sprites (3D models), but I cannot set the camera direction to have the rendered image really fit the map. The attached picture shows the problem; take a look at the building. I have also attached the geometric math that I have used to create my render script in Blender. I found out that if I use a 32x24 tile I need to look at it from 29deg 29min 28sec, but it seems not alright, as the iOS simulator shows. Can anyone help me with this? How do I have to prepare my sprite to fit well on my isometric map? Those who have experience with isometric map games are welcome to answer, and I really appreciate it. Here is the code I wrote in Python to render the 3D model; I have also set the camera to orthographic manually.
import bpy cam = bpy.data.objects[\"Camera\"] def look_at(cam, point): loc_camera = cam.matrix_world.to_translation() direction = point - loc_camera (point the camera's '-Z' and use its 'Y' as up) rot_quat = direction.to_track_quat('-Z', 'Y') (assume we're using euler rotation) cam.rotation_euler = rot_quat.to_euler() meshObj = bpy.data.objects[\"plate\"] meshObj.rotation_mode = 'XYZ' meshObj.rotation_euler = (0, 0, 0) d = meshObj.dimensions (finding maximum dim of the object) objectScale = 1 if d[1] > d[2]: objectScale = d[1] / 16 elif d[2] > d[1]: objectScale = d[2] / 16 cam.rotation_mode = 'XYZ' cam.location = (20.416 * objectScale, 20.416 * objectScale, 20.416 * objectScale) cam.rotation_euler = (0.7853, 0.7853, 0.5147) cam.rotation_euler = (0.7853, 0.7853, 0) look_at(cam, meshObj.matrix_world.to_translation()) alamp = bpy.data.objects[\"Lamp\"] alamp.location = (17.416 * objectScale * 4, 17.416 * objectScale * 4, 20 * objectScale * 4) print(\"Scaling \", objectScale) bpy.context.scene.render.filepath = \"/Users/iman/Documents/Render/ISOBUILDING.png\" bpy.ops.render.render(write_still=True, use_viewport=True, scene=\"Camera\")"} {"_id": 13, "text": "When a Render Pass decides what textures it needs, how are shaders written? I am studying render graph architectures (I've seen the Frostbite presentation). A RenderPass has outputs (i.e. textures you draw to) and inputs. How are these inputs bound to the internal pipeline? Let's say I have an AO pass and it has normals and depth as the input. Do I just bind the texture to a register and that's it and sample it? What about the actual shaders (for drawing geometry) that also use textures?"} {"_id": 13, "text": "What is a texture atlas? I've heard about this concept, but what is it?"} {"_id": 13, "text": "Rendering models in isometric view How do I set up the rendering and camera for an isometric gameworld projection? And specifically, how do I get the images exactly the right size? What angles to use to get the exact 2:1 isometric view? Methods to set the camera on the right position?
Options to set, like anti alias off. I have tried many things: 45, 30, 35.264 degree angles. What I do is set the angle of the camera, then place the camera in front of the model, then use dolly, FOV and lens settings to get the left and right edges of the model lined up with the safe frame. Then I adjust the camera height so the bottom lines up with the bottom of the safe frame. But I keep getting jagged edges and not the isometric style of 2 units width to one unit height."} {"_id": 14, "text": "iPhone GLSL shader dynamic branching issue I am trying to pass an array of vec3 as a uniform and then iterate through them on each pixel. The size of the array varies with the situation, so I can't make the loop with a constant number of iterations. Here is the code: precision highp float; precision highp int; varying vec4 v_fragmentColor; varying vec4 v_pos; uniform int u_numberOfParticles; const int numberOfAccumsToCapture = 3; const float threshold = 0.15; const float gooCoeff = 1.19; uniform mat4 u_MVPMatrix; uniform vec3 u_waterVertices[100]; void main() { vec4 finalColor = vec4(0.0, 0.0, 0.0, 0.0); vec2 currPos = v_pos.xy; float accum = 0.0; vec3 normal = vec3(0, 0, 0); for (int i = 0; i < u_numberOfParticles; ++i) { vec2 dir2 = u_waterVertices[i].xy - currPos.xy; vec3 dir3 = vec3(dir2, 0.1); float q = dot(dir2, dir2); accum += u_waterVertices[i].z / q; } float normalizeToEdge = 1.0 - (accum - threshold) * 2.0; if (normalizeToEdge < 0.4) finalColor = vec4(0.1, normalizeToEdge + 0.5, 0.9 - normalizeToEdge * 0.4, 1.0); if (normalizeToEdge < 0.2) { finalColor = vec4(120.0 / 255.0, 245.0 / 255.0, 245.0 / 255.0, 1.0); float shade = mix(0.7, 1.0, normal.x); finalColor *= shade; } gl_FragColor = vec4(finalColor); } The problem is here: for (int i = 0; i < u_numberOfParticles; ++i) { vec2 dir2 = u_waterVertices[i].xy - currPos.xy; vec3 dir3 = vec3(dir2, 0.1); float q = dot(dir2, dir2); accum += u_waterVertices[i].z / q; } When I make the for loop like this for (int i = 0; i < 2; ++i) ...
I get double the framerate, even though u_numberOfParticles is also 2. Making it like this for (int i = 0; i < 100; ++i) { if (i == u_numberOfParticles) break; ... } gives no improvement. The only way I know to cope with this situation is to create multiple shaders. But the size of the array may vary from 1 to 40, and making 40 different shaders just because of the for loop is stupid. Any help or ideas how to deal with this situation?"} {"_id": 14, "text": "OpenGL ES 2.0 Repository of Quality Shaders Could I kindly ask you to suggest to me a repository of high quality OpenGL (OpenGL ES 2.0) vertex and fragment shaders, please? I am looking for pixel based lighting shaders (such as Phong) and similar. It would be nice to see more of them, to be able to choose between quality vs shader performance."} {"_id": 14, "text": "2D day night mapping I'm looking for this kind of effect MINUS the lights and snow (another problem). It needs to change depending on the time of year. Doesn't need snow or city lights. Now I'm pretty new to shaders (learnt them yesterday in my spare time) but so far I have achieved this. This moves across the screen, displaying light on both sides. Now I'm completely lost as to how I can make it seem like a light map, i.e. it needs to be more square and sharper (as it's going off the edges) and have the bottom and top times of year. I thought maybe I could pass a sphere mesh to the vertex shader? Or do something with the map normal. Or maybe use blending with two textures, but I've looked around and it looks extremely difficult. Code I have so far. Fragment Pixel shader: /* attributes from vertex shader */ varying vec4 vColor; varying vec2 vTexCoord; /* our texture samplers */ uniform sampler2D u_texture; /* diffuse map */ uniform sampler2D u_normals; /* normal map */ /* values used for shading algorithm... */
uniform vec2 Resolution; /* resolution of screen */ uniform vec3 LightPos; /* light position, normalized */ uniform vec4 LightColor; /* light RGBA alpha is intensity */ uniform vec4 AmbientColor; /* ambient RGBA alpha is intensity */ uniform vec3 Falloff; /* attenuation coefficients */ uniform float lightX; /* X Position of light. Can also feed Y for times of the year. */ void main() /* RGBA of our diffuse color */ vec4 DiffuseColor = texture2D(u_texture, vTexCoord); /* RGB of our normal map */ vec3 NormalMap = texture2D(u_normals, vTexCoord).rgb; int numberOfLights = 3; vec3 lightPoses[3]; vec3 lightDirections[3]; /* lightX goes to 0, then back to 1. */ lightPoses[0] = vec3(lightX - 1.0, LightPos.y, LightPos.z); lightPoses[1] = vec3(lightX, LightPos.y, LightPos.z); lightPoses[2] = vec3(lightX + 1.0, LightPos.y, LightPos.z); /* TODO Introduce one extra light for top and bottom. OR Figure out how to squash the Y. */ /* TODO Needs to be sharper light. */ vec3 Sum = vec3(0.0); /* Go through the lights. */ for (int index = 0; index < numberOfLights; index++) /* The delta position of light */ vec3 LightDir = vec3(lightPoses[index].xy - (gl_FragCoord.xy / Resolution.xy), LightPos.z); /* Correct for aspect ratio */ LightDir.x *= Resolution.x / Resolution.y; /* Make it bigger. (smaller the value the bigger.) */ LightDir /= vec3(0.55, 0.4, 1.0); /* Determine distance (used for attenuation) BEFORE we normalize our LightDir */ float D = length(LightDir); /* normalize our vectors */ vec3 N = normalize(NormalMap * 2.0 - 1.0); vec3 L = normalize(LightDir); /* Pre multiply light color with intensity. Then perform \"N dot L\" to determine our diffuse term */ vec3 Diffuse = (LightColor.rgb * LightColor.a) * max(dot(N, L), 0.0); /* pre multiply ambient color with intensity */ vec3 Ambient = AmbientColor.rgb * AmbientColor.a; /* Because there are more lights, take off total ambient power. */ Ambient *= vec3(1.0 / float(numberOfLights)); /* Calculate attenuation (The amount of fade the light has.) */
float Attenuation = 1.0 / (Falloff.x + (Falloff.y * D) + (Falloff.z * D * D)); /* the calculation which brings it all together */ vec3 Intensity = Ambient + Diffuse * Attenuation; vec3 FinalColor = DiffuseColor.rgb * Intensity; Sum += FinalColor; gl_FragColor = vec4(Sum, DiffuseColor.a); Vertex Shader: /* combined projection and view matrix */ uniform mat4 u_projTrans; /* \"in\" attributes from our SpriteBatch */ attribute vec4 a_position; attribute vec4 a_color; attribute vec2 a_texCoord0; /* \"out\" varyings to our fragment shader */ varying vec4 vColor; varying vec2 vTexCoord; void main() { vColor = a_color; vTexCoord = a_texCoord0; gl_Position = u_projTrans * a_position; }"} {"_id": 14, "text": "2D HLSL World position I'm trying to get the world position from my vertex shader to my pixel shader so that I can disable the shader once a preset X coordinate has been passed (no shading once I'm over X). Getting the screen position is not a problem so far, but despite my best efforts to look after it and implement examples, the calculations just don't return the preferred world positions I'm looking for. Update: So I got it to somewhat work; after compiling the shaders the output changes to such. Could anyone explain why this happens? I should mention that I'm really new to HLSL; I've only been scripting so far. Edit: Added matrices.
world = Matrix.Identity; view = Matrix.CreateScale(new Vector3(1, 0.75f, 0)) * Matrix.CreateTranslation(-playerpos.X, -playerpos.Y, 1); projection = Matrix.CreateOrthographicOffCenter(0, view.Width, view.Height, 0, 0, 1); Matrix halfPixelOffset = Matrix.CreateTranslation(-0.5f, -0.5f, 0); projection = halfPixelOffset * projection; texture lightMask; sampler mainSampler : register(s0); sampler lightSampler = sampler_state { Texture = lightMask; }; float4x4 World; float4x4 View; float4x4 Projection; struct vs2ps { float4 Pos : POSITION0; float4 TexCd : TEXCOORD0; float3 PosW : TEXCOORD1; }; vs2ps VS(float4 Pos : POSITION0, float4 TexCd : TEXCOORD0) { vs2ps Out; Out.Pos = mul(Pos, mul(mul(World, View), Projection)); Out.TexCd = TexCd; Out.PosW = mul(Pos, World); return Out; } float4 PixelShaderFunction(vs2ps input) : COLOR0 { float2 texCoord = input.TexCd; float4 screenPosition = float4(input.PosW, 1.0f); float4 lightColor = tex2D(lightSampler, texCoord); float4 mainColor = tex2D(mainSampler, texCoord); if (screenPosition.x < 3500) return (mainColor * lightColor); else return mainColor; }"} {"_id": 14, "text": "Matcap and BRDF Shading I just would like to know what's the difference between the Matcap shaders used in ZBrush, for example, and the Bidirectional Reflectance Distribution Function shader. Are the two techniques the same? Is Matcap done using BRDF, or are they different?"} {"_id": 14, "text": "vertex pixel shaders and \"materials\" What relationship is there, if any, between \"materials\" and vertex pixel shaders (or the \"effects\" that combine the latter two)? I have the impression that before the advent of HLSL, materials were explicitly handled by Direct3D or OpenGL. I also have the impression that with HLSL, many prior effects are now handled directly (and generically) via the rendering pipeline in vertex pixel shaders. In other words, I have the impression that materials are often little more than what DirectX would call an \"Effect.\" If so, this means that many models that specify materials would be transformed by a content pipeline into an \"Effect\".
Is this correct? Where can I get a little more history info that either corroborates or corrects the impression I have?"} {"_id": 14, "text": "What are the pros and cons of HLSL vs GLSL vs cg? What are the pros cons of the three?"} {"_id": 14, "text": "DirectX 11, using Tessellation Geometry shader in a single pass First of all, sorry for my poor English! With DirectX 11, I'm trying to create a random map entirely on the GPU. Using the hull shader stage, I'm managing LOD with tessellation. Using the domain shader stage, I'm generating the map (based on Perlin noise). Now my goal is to compute normals in the geometry shader (normal per vertex). For that, I must use vertex adjacency, like geometry shaders are capable of. But here is the problem... For tessellation, my primitives must be D3D11_PRIMITIVE_TOPOLOGY_3_CONTROL_POINT_PATCHLIST. But for the geometry shader with 6 vertices (triangle primitive and adjacency), I must use D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST_ADJ. I think I'm missing something... It must be possible to tessellate and use the results in the geometry shader... However, it's working with 3 points, but I cannot use the 3 others (they are 0.0, 0.0, 0.0). Thank you in advance for any help :)"} {"_id": 14, "text": "Wrap texels between desired values I'm using a basic pixel shader here: uniform sampler2D texture; uniform float pixel_threshold; void main() { float factor = 1.0 / (pixel_threshold + 0.001); vec2 pos = floor(gl_TexCoord[0].xy * factor + 0.5) / factor; gl_FragColor = texture2D(texture, pos) * gl_Color; } It works great in my game. But on a new enemy it doesn't. This enemy is different because it uses a texture atlas: one big texture holding more frames than just the character. The sampling gets messed up because as factor approaches 0, so does the xy coord, therefore sampling some pixels outside of the character's sheet. Likewise, when pixel_threshold gets small, the xy coord goes outside of the character's spot in the texture. Clamping is not the solution, because we want it to wrap around.
I've tried many ways of wrapping, but none seem to offer the desired effect of the original. I tried using 4 uniforms left, top, width, height and passing in the masking values used when drawing the sprite from the spritesheet. Vector2 size = mySprite.getSize(); float left = mySprite.getTextureRect().left / size.x; float top = mySprite.getTextureRect().top / size.y; float width = mySprite.getTextureRect().width / size.x; float height = mySprite.getTextureRect().height / size.y; myShader.setUniformf(\"left\", left); ... myShader.setUniformf(\"height\", height); How can I have the pixel blur shader stay within the texel ranges provided by the uniforms? Thanks."} {"_id": 14, "text": "HLSL's Tex2D for GLSL? I am trying to port an HLSL shader to GLSL. I'm just not quite sure how to convert this line: outA += Input.Color.a * tex2D(s, Input.TexCoord.xy + float2(-4.0 * pxSz.x * blurSize, 0)).a * 0.05; It's mostly the tex2D I'm having trouble with. In GLSL, it seems to work differently. I'm porting a horizontal blur: texture al_tex; sampler2D s = sampler_state { Texture = <al_tex>; }; int tWidth; int tHeight; float blurSize = 5.0; float4 ps_main(VS_OUTPUT Input) : COLOR0 { float2 pxSz = float2(1.0 / tWidth, 1.0 / tHeight); float4 outC = 0; float outA = 0; outA += Input.Color.a * tex2D(s, Input.TexCoord.xy + float2(-4.0 * pxSz.x * blurSize, 0)).a * 0.05; outA += Input.Color.a * tex2D(s, Input.TexCoord.xy + float2(-3.0 * pxSz.x * blurSize, 0)).a * 0.09; outA += Input.Color.a * tex2D(s, Input.TexCoord.xy + float2(-2.0 * pxSz.x * blurSize, 0)).a * 0.12; outA += Input.Color.a * tex2D(s, Input.TexCoord.xy + float2(-pxSz.x * blurSize, 0)).a * 0.15; outA += Input.Color.a * tex2D(s, Input.TexCoord.xy + float2(0, 0)).a * 0.16; outA += Input.Color.a * tex2D(s, Input.TexCoord.xy + float2(pxSz.x * blurSize, 0)).a * 0.15; outA += Input.Color.a * tex2D(s, Input.TexCoord.xy + float2(2.0 * pxSz.x * blurSize, 0)).a * 0.12; outA += Input.Color.a * tex2D(s, Input.TexCoord.xy + float2(3.0 * pxSz.x * blurSize, 0)).a * 0.09; outA += Input.Color.a * tex2D(s, Input.TexCoord.xy + float2(4.0 * pxSz.x * blurSize, 0)).a * 0.05; outC.a = outA; return outC; } Thanks"} {"_id": 15, "text": "How can I find all
connected objects of the same type recursively? In my game, when a user selects an object of some type, I'd like to search for all other objects of the same type that are connected to that first one. For example, if the user selects an object of type 2, I want to check the object next to that one to see if it is also of type 2, and so on in all directions (up, down, left and right, et cetera) until there are no more objects of that type connected. Does anyone know how I could do something like this? You can assume I have access to the set of connections from any given object."} {"_id": 15, "text": "Body lost a velocity component when hitting a wall I am creating a Breakout game using Box2D (LibGDX's, if anyone is interested). Everything works well until the ball hits the wall while moving at a very small angle. Please look at the image for details. I tried to set the wall friction to 0 and restitution to 1, as well as the ball's friction and restitution, but it still moves along the wall (I have no world gravity). This also happens with the vertical wall: if the angle is small enough, it will lose X velocity. How can I move the ball the way I expected? If there is no friction, what caused the problem? EDIT: In case of LibGDX only, this is a fix: World.setVelocityThreshold(float threshold). I set it to 0.1f, and it helped. If you encounter the problem, maybe 0 can help, but it is not recommended."} {"_id": 15, "text": "LibGDX sound playing problem I'm making a game and I want to play sounds in it. I have an AssetManager which loads the sounds, and I made a class which has a playSound(String sound) method. This method calls the get() method of the asset manager with the string, creates a Sound object and calls its play() method. The code: public void playSound(String sound) { Sound file = gameRenderer.assetsManager.get(\"sfx sounds \" + sound + \".wav\"); file.play(); } It works fine.
But then I noticed there is a log that occurs every time the sound is played: AUDIO_OUTPUT_FLAG_FAST denied by client. I also noticed a small \"tick\" sound at the end of the sound every time it's played in the game. I read about it, and from what I've learned it has something to do with the sample rate. I tried many types of sounds (44KHz, 48KHz, and lower values too), but it doesn't stop printing that log. I couldn't find a solution anywhere. If it really has something to do with the sample rate, does it mean I need to have different types of files of the same sound (with every sample rate)? If it does, my game would be heavy... I'm using a Nexus 5. Thanks!"} {"_id": 15, "text": "Using libraries in libgdx This is the first time I'm using libgdx and I've got one problem. I am confused by the web of dependencies and the number of projects, and I'm not sure how, and mostly where, to add a third party library I want to use. It should work both for desktop and android. How is this done?"} {"_id": 15, "text": "LibGDX buttons bounds are wrong I am making a simple GUI in libgdx and ran into this problem: when you click on the button, its bounds (?) are wrong, since if I click under it, the game registers it as a click. On the other end, if I click it at the top, it won't get the click. It seems like the bounds are a bit under the button. I tried to set them manually, but nothing. Also I tried changing sizes. Looked here too; this is almost the same, but no answer.
tabl = new Table(); stage = new Stage(); tabl.setSize(stage.getWidth() / 2, stage.getHeight() / 2); tabl.defaults().size(500, 40); g = game; Gdx.input.setInputProcessor(stage); skin = new Skin(); TextureAtlas te = new TextureAtlas(Gdx.files.internal(\"uiskin.atlas\")); skin.addRegions(te); skin.add(\"default font\", new BitmapFont()); skin.load(Gdx.files.internal(\"uiskin.json\")); Pixmap r = new Pixmap(100, 100, Format.RGBA8888); r.setColor(0xff0000ff); r.fillRectangle(1, 1, 13, 13); Texture s = new Texture(r); SpriteDrawable s1 = new SpriteDrawable(new Sprite(s)); TextButton gam = new TextButton(\"new games\", skin); gam.addListener(new InputListener() { @Override public boolean touchDown(InputEvent event, float x, float y, int pointer, int button) { // TODO Auto generated method stub g.setScreen(null); new Logger(\"e\").setLevel(10); return true; } }); super.getS LabelStyle tt = new LabelStyle(new BitmapFont(), Color.BLUE); Label l = new Label(\"d\", tt); tabl.add(gam); tabl.setDebug(true); stage.addActor(tabl);"} {"_id": 15, "text": "Tween animation with Universal Tween Engine LibGDX Let me preface by saying that I'm new to libgdx... I have an animation that I'm currently drawing to the screen using game.batch.draw(flyAnimation2.getKeyFrame(elapsedTime, true), 200, 200, 0, 0, 124, 90, 1.0f, 1.0f, 90), where flyAnimation2 is Animation flyAnimation2. This works correctly. I now want to use the Universal Tween Engine to tween it from one side of the screen to the other. I can't find any decent documentation on how to achieve this effect using an Animation. Any help would be appreciated!!"} {"_id": 15, "text": "Making a border around a translucent shape in LibGdx The recommended method for making a border around a shape using LibGdx is to draw a larger version of the shape to serve as the border, then draw the shape over the larger image.
shapeRenderer.setColor(borderColor); shapeRenderer.circle(x, y, radius + borderWidth); shapeRenderer.setColor(shapeColor); shapeRenderer.circle(x, y, radius); This works well most of the time, but if you want shapeColor to have any opacity less than 1, it fails. The problem is that you can see the borderColor through the shape, and the colors are blended. Is there any way to fix this issue?"} {"_id": 15, "text": "Statemachine to behaviour tree? Background I was able to convert simple statemachines like this... Into a BT looking like this (notation description)... root sequence After another playAnimation name idle sequence walks? Breaks current sequence if false playAnimation name walk sequence runs? playAnimation name run Goal But how do we convert more complex multi layered animation states into a behaviour tree? How would such a BT look? The difficult parts here are transitions between the different states, something I couldn't solve in a BT. Question Is it possible to convert every statemachine into a behaviour tree? What does the theory say, and how does it look in practice?"} {"_id": 15, "text": "let me know how to save an integer for game stage level? I am trying to make a game which has many stages. For example, if users have finished stage 3 of the game and then turn off the game, they can play the game at stage 3 anytime. Simply put, I tried to make this logic so that when you press the space bar, the level goes up. However, after turning the game off and on, lev will start at 0. I guess that every time the program runs, lev is initialized to 0. I tried to use Preferences. Please let me know solutions..
AppPreferences.java public class AppPreferences { private static final String PREFS_NAME = \"Adventure\"; private static final String PREF_LEVEL = \"Level\"; private Preferences preferences; protected Preferences getPrefs() { if (preferences == null) preferences = Gdx.app.getPreferences(PREFS_NAME); return preferences; } public void setPrefLevel(int level) { getPrefs().putInteger(PREF_LEVEL, level); getPrefs().flush(); } public int getPrefLevel() { return getPrefs().getInteger(PREF_LEVEL); } } PlayScreen.java public class PlayScreen implements Screen { int lev = 0; public void render(float delta) { if (Gdx.input.isKeyJustPressed(Input.Keys.SPACE)) { advanture.getPreferences().setPrefLevel(i); System.out.println(advanture.getPreferences().getPrefLevel() + \"!!!!!!!!!\"); i++; } } }"} {"_id": 15, "text": "Box2dMapObjectParser isometric map bodies have wrong position and size I'm using Box2dMapObjectParser to load a Tiled objects layer onto my IsometricTiledMapRenderer'ed TiledMap. It works OK with a UnitScale of 1, but the position is wrong and some shapes have weird rotation and size. Map in Tiled Map in game I've found another similar question, but the answer is unfortunately far from clear, at least to me. Relevant code: mParser.load(mWorld, mMap); mRenderer = new IsometricTiledMapRenderer(mMap, mParser.getUnitScale()); Notice the 'Ellipse' is not drawn at all, the Rectangle is way off, and Polygons/Polylines are just not positioned right. Did anyone experience the same issue and/or have any idea how to fix this?"} {"_id": 16, "text": "How should I use the Monte Carlo method to simulate some police cars that keep patrolling in a tile based map? I have a tile map (isometric) and I have some police cars that patrol all the time on the map; there are some buildings and other cars in the map too, so they are considered as constraints in my shortest path algorithm (below).
I used to randomly select a target for each police car and use the A* algorithm to find the path to the randomly selected destination tile, and this cycles over and over, because cars should move all the time! (except those that the gamer should control) The problem is the police cars' routes are correlated and they do not just traverse the area randomly; for instance, they do not usually cover the same area, nor do they repeat their own route, so my implementation based on random destination points wasn't a good idea and the result was not satisfying. Take a look at the following picture; it shows how randomly selecting the destinations worked for me. Obviously, it is dumb patrolling for sure, even a cockroach could do better! Suddenly, I remembered the Monte Carlo method and I thought it would solve my problem, because it does exactly what I wanted to do. However, maybe I should use Zobrist Hashing since it is tile based in nature. Does anyone have any idea how I should use these kinds of methods? Question in Brief How can I use Zobrist Hashing or the Monte Carlo method to set destination points for my patrolling cars (like police cars)? If anyone knows another way to implement a patrolling method, he/she is very welcome to post an answer; I am not sure if Zobrist Hashing is the best and worth implementing. First Edit (added following paragraph) I need to distribute the cars and their destination points in a homogeneous manner. Every kind of solution that gets me close to a good pattern could be my answer; of course there are a lot of methods to do so, but I don't have any clue."} {"_id": 16, "text": "How do I implement group formations in a 3D RTS? I managed to get pathfinding to work for a single unit, and I managed to avoid agent agent collision, but now I need to be able to send a group of agents to some location. This is my setup so far: Waypoint pathfinding The minimum distance between two nodes is a little bigger than the biggest bounding sphere radius allowed for an agent.
Agents avoid collisions with other agents by doing some steering behaviour I based on ClearPath. So now I need to send my agents somewhere in a group. I have read some posts saying that one way to do it is to create a group leader and give the other units offsets to his position. But then the problem is, what if the group formation cannot be achieved? e.g. you want to form a rectangle, but at the target position there is a structure nearby which prevents you from creating a rectangle setup."} {"_id": 16, "text": "A* Algorithm for Tactical RPGs? I'm messing around with writing a really poor tactical RPG in C++. So far I have a 2D tile map and just got the A* algorithm working based on the pseudocode on Wikipedia. But real tactical RPGs don't just find the best path on a flat plane and move there. They typically have limited move ranges and must climb up or down. If you've ever played Final Fantasy Tactics, these would be affected by the Move and Jump stats. This is where I get lost. How do I alter the A* algorithm so that it finds the best path toward a target, but the path is only so many tiles long? How should I take height differences and jump stats into account? How do I implement jumping over a gap? If it helps, right now my map is represented by a Vector of Tile objects. Each tile has pointers to the North, South, East, and West tiles, which are set to nullptr if no tile exists there, such as along the edge of the map or if a tile is set to non passable."} {"_id": 16, "text": "Shortest path to a road I have a road network and a vehicle that is currently off the roads. I want to find the shortest path to any road. An obvious solution is to run a pathfinding algorithm between the current vehicle location and all the points on the road, but that's hardly scalable.
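To make that concrete, here is roughly what the naive per-road-point approach looks like on a toy 4-connected grid (all names are my own invention, not my actual code): one full BFS per road cell, keeping the minimum distance.

```java
import java.util.ArrayDeque;

// Toy sketch of the naive "path to every road point" idea: one full BFS
// per road cell on a 4-connected grid, keep the minimum. Cost grows with
// the number of road cells, which is exactly the scalability problem.
public class NearestRoadNaive {
    public static int nearestRoadDistance(boolean[][] blocked, int sx, int sy, int[][] roadCells) {
        int best = Integer.MAX_VALUE;
        for (int[] road : roadCells) {
            int d = bfs(blocked, sx, sy, road[0], road[1]);
            if (d >= 0 && d < best) best = d;
        }
        return best == Integer.MAX_VALUE ? -1 : best;
    }

    // Plain breadth-first search; returns step count or -1 if unreachable.
    private static int bfs(boolean[][] blocked, int sx, int sy, int tx, int ty) {
        int w = blocked.length, h = blocked[0].length;
        int[][] dist = new int[w][h];
        for (int[] row : dist) java.util.Arrays.fill(row, -1);
        ArrayDeque<int[]> queue = new ArrayDeque<>();
        dist[sx][sy] = 0;
        queue.add(new int[]{sx, sy});
        int[][] dirs = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        while (!queue.isEmpty()) {
            int[] c = queue.poll();
            if (c[0] == tx && c[1] == ty) return dist[tx][ty];
            for (int[] d : dirs) {
                int nx = c[0] + d[0], ny = c[1] + d[1];
                if (nx >= 0 && ny >= 0 && nx < w && ny < h && !blocked[nx][ny] && dist[nx][ny] < 0) {
                    dist[nx][ny] = dist[c[0]][c[1]] + 1;
                    queue.add(new int[]{nx, ny});
                }
            }
        }
        return -1;
    }
}
```

On a 5x5 open grid with the vehicle at (0,0) and road cells at (0,4) and (4,4), the whole grid gets re-explored once per road cell just to learn the nearest road is 4 steps away; with thousands of road points this blows up, hence the question.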
I am curious to know if there is an algorithm out there that I could use to maximize the performance of this operation."} {"_id": 16, "text": "Pointy top hexagonal A* pathfinding I'm trying to create a game with a hex based map with the points at the top. I have most of it working, however the path finding is being a little awkward. The heuristic I'm using is called Euclidean I believe, and is like so: var dx:Number = destinationNode.c - node.c; var dy:Number = destinationNode.r - node.r; return Math.sqrt((dx * dx) + (dy * dy)); Node is the node the unit is currently on, c is the node's column number and r is its row number. I'm using these as simpler x and y coords. I'm trying to limit the unit to 3 hex moves in one round, so initially I thought it'd be as simple as: if the returned heuristic < 3, the unit can move to that hex; however, it's not working out quite like that. As you can see in the pic above, the bottom right selected hex with the \"1 9 3.162277\" is movable to in 3 moves, however the hex with \"9 1 3.162277\" on the far right would need 4 moves to reach it. Can anyone offer any advice on how to make this work? EDIT My problem was being caused because I was using a Cartesian coordinate system and was just staggering every other Y coord. Fixed this by making the Y axis go down at a 60 degree angle. Thanks to amitp for the links that showed me what I was doing wrong."} {"_id": 16, "text": "UE4 Navmesh precision I'm trying to make my very tiny man pathfind his way through a map of a school, but the navmesh won't recognize some of the hallways because they are very narrow, like this. So as you can see the man can fit, but the navmesh says no. Can anyone help? Keep in mind that I am very new to Unreal Engine, so please make your explanation easy to understand."} {"_id": 16, "text": "What is the most appropriate path finding solution for a very large procedurally generated environment?
I have been reading quite a bit in order to make the following choice: which path finding solution should one implement in a game where the world is procedurally generated, of really large dimensions? Here is how I see the main solutions and their pros/cons: 1) grid based path finding this is the only option that would not require any pre processing, which fits well. However, as the world expands, memory used grows exponentially up to insane levels. This can be handled in terms of processing paths, through solutions such as the Block A* or Subgoal A* algorithms. However, the memory usage problem is difficult to circumvent. 2) navmesh this would be lovely to have, due to its precision, fast path calculation and low memory usage. However, it can take an obscene pre processing time. 3) visibility graph this option also needs high pre processing time, although it can be lessened by the use of fast pre processing algorithms. Then, path calculation is generally fast too. But memory usage can get even more insane than grid based, depending on the configuration of the procedural world. So, what would be the best approach (others not present in this list are also welcome) for such a situation? Are there techniques or tricks that can be used to handle procedural infinite like worlds? Suggestions, ideas and references are all welcome. EDIT Just to give more details, one should see the application I am talking about as a very very large office level, where rooms are generated procedurally. The algorithm works like the following. First, rooms are placed. Next, walls. Then the doors, and later the furniture obstacles that go in each room. So, the environment can get really huge and with lots of objects, since new rooms are generated once the player approaches the boundary of the already generated area.
It means that there will be no large open areas without obstacles."} {"_id": 16, "text": "Use pathfinding algorithm in a navmesh with several endpoints I need to implement a pathfinding algorithm in a navmesh with several endpoints, and I would like to find the path to the closest endpoint. I suppose that Dijkstra's algorithm is the best solution for that, but it needs some adaptation. Could someone help me please?"} {"_id": 16, "text": "How to adapt pathfinding algorithms to restricted movement? Imagine car like movement where entities cannot turn on a dime. Say, for the sake of discussion, that when at speed they can turn 90 degrees per second. This would in many cases change the optimal path and therefore the pathfinding. It may even make 'usual' paths entirely impossible to traverse. Are there any pathfinding algorithms or movement planning algorithms that can keep this in mind, or are there simple ways to adapt the popular ones?"} {"_id": 16, "text": "How can I implement caching into my Rectangular Symmetry Reduction pathfinding algorithm? I have a working implementation of Rectangular Symmetry Reduction (see here and here for more information on RSR). I'm interested in further improving performance over raw A* by implementing caching of paths. My assumptions are as follows: we're working on a 4 connect square grid; the destination position is more important than the start position; because RSR is made possible via contiguous rectangular regions, the fastest path between two points in the same region is likely to be the same (but not always). It's at this point that I get stuck: I'm unsure how best to use those assumptions and the information generated by a path calculation to better future path calculations.
I've considered storing the path cost from the final waypoint (that is, where the resultant path leaves the first rectangle), but as far as I've been able to consider, that is only useful if I have the path cost from every other potential waypoint on the rectangle's edge: good if you've got a lot of rectangular rooms separated by doors (few open edges), less so if you're pathing around wide open fields with a few scattered rocks (many open edges). Essentially, my intuition is that going that route would potentially necessitate a lot of pathfinding before caching could provide any returns, so I'm looking for someone else's opinion on whether I'm on the right track, or if I'm missing a simpler way of implementing a performance boosting caching algorithm. Thoughts? I'm more than happy to discuss more of what I understand of the algorithm, or share specific code if requested."} {"_id": 17, "text": "MMO Some NPCs controlled clientside? I'm making an online game and considering whether to handle certain NPCs clientside. Is this common? For example, I play Elder Scrolls Online and it seems basic townspeople NPC locations aren't quite synced up with my friends'. I guess it makes sense, if they're only wandering around in a set area, for that to be handled clientside, right? So my question is, should I go with this method, or are there any good reasons why I shouldn't? Any examples of games that do, so I can see how they handle it?"} {"_id": 17, "text": "Node.js MMO process and or map division I am in the phase of designing an MMO browser based game (certainly not massive, but all connected players are in the same universe), and I am struggling with finding a good solution to the problem of distributing players across processes. I'm using node.js with socket.io. I have read this helpful article, but I would like some advice since I am also concerned with different processes.
Solution 1 Tie a process to a map location (like a map cell), and connect players to the process corresponding to their location. When a player performs an action, transmit it to all other players in this process. When a player moves away, he will eventually have to connect to another process (automatically). Pros Easier to implement Cons Must divide the map into zones; player reconnection when moving into a different zone is probably annoying; if one zone process is always busy (has players in it), it doesn't really load balance, unless I split the zone, which may not always be viable; there shouldn't be any visible borders Solution 1b Same as 1, but connect processes of bordering cells, so that players on the other side of the border are visible and such. Maybe even let them interact. Solution 2 Spawn processes on demand, unrelated to a location. Have one special process to keep track of all connected player handles, their location, and the process they're connected to. Then when a player performs an action, the process finds all other nearby players (from the special player location tracking node), and instructs their matching processes to relay the action. Pros Easy load balancing (spawn more processes); avoids player reconnection at borders between zones Cons Harder to implement and test; additional steps of finding players, and relaying the event action to another process; if the player location tracking process fails, all the others fail too. I would like to hear if I'm missing something, or am completely off track."} {"_id": 17, "text": "How do you design a record replay system for a frequently changing game? I'm working on a free MMORPG and I have a problem. I'm (with other people) developing a video recording system for the game. The idea is basically that we record all the packets sent & received with timestamps, plus some local data from the client, and then dump it in a file. For playing the video, we just emulate everything that's on the file.
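Roughly, the file we dump is a header plus timestamped packet records; here is a simplified sketch of the layout (all names invented, not our actual code). The open question is what to do with the version number on playback:

```java
import java.io.*;

// Simplified sketch of the replay container (invented names): a header
// carrying the game/protocol version, then timestamped packet records.
// On playback, readVersion() would let a single player dispatch to the
// right per-version decoder instead of shipping one player per release.
public class ReplayFile {
    static final int MAGIC = 0x52504C59; // "RPLY"

    public static byte[] write(int gameVersion, long[] stamps, byte[][] payloads) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeInt(MAGIC);
        out.writeInt(gameVersion);      // recorded once, checked on playback
        out.writeInt(payloads.length);
        for (int i = 0; i < payloads.length; i++) {
            out.writeLong(stamps[i]);   // capture timestamp in ms
            out.writeInt(payloads[i].length);
            out.write(payloads[i]);
        }
        return buf.toByteArray();
    }

    public static int readVersion(byte[] data) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        if (in.readInt() != MAGIC) throw new IOException("not a replay file");
        return in.readInt();
    }
}
```

Recording the version once in the header at least lets one player refuse a file it cannot handle, or dispatch to a per-version decoder, instead of guessing.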
We also have an option to export the video to avi with ffmpeg. The problem is that when we change between versions of the game, it is hard to maintain backwards compatibility for the video (commands added/removed, function changes, etc). Is there a good way to handle this problem, instead of having a bunch of different players and choosing the right one for each version of the video file? It would be helpful to know how other games handle this situation. Thanks for the help, sorry for my English."} {"_id": 17, "text": "Recommended RPG game that can be used to learn game modding? I would like to learn game development via game modding. RPG/MMORPG is one of my favorite genres. Could someone recommend me a good and recent (I would like to play the game as well) game to learn game modding? Something along the lines of Neverwinter Nights 2. Thank you very much for the responses. Appreciate it."} {"_id": 17, "text": "What are some alternative options for saving level data for a game like Hurtworld? For a multiplayer building crafting game like Hurtworld (where the player can build structures on the map), what are some alternative options or example data structures for saving the world data on the server? Level data including: a grid of squares that foundations can be built on, with a flag to indicate whether the square is empty or already has a foundation; walls etc. which have been built on the foundations, and multiple floors above; windows/doors which can be built on window frames/door frames; furniture/machines which can be placed on foundations/floors; other dynamic world data such as trees, animals, chests with items, etc. I feel like it's too much data to store in a database. Am I wrong? If it can be stored in a database, how can I get my head around the data structure I would need to set up? Is there some simplified example? Can each square in the world grid be represented by 1 row in the database?
And should there be a new row for each structure built on a square, with that row referencing the square/structure that it was built on?"} {"_id": 17, "text": "How should I collect user behavior data in an MMO? In an MMO, I'm trying to collect data about user behaviors for the purpose of tweaking the game rules to achieve maximum user satisfaction. Clearly one way to do it is to hand roll specific things into an application, much like one might use Console.WriteLine() to view the contents of a variable, or the StopWatch() class to see how long something takes to execute. But you can use a profiler in lieu of StopWatch, and you can use a debugger in lieu of WriteLine. How should I collect my data? Are there generalized techniques for instrumenting an application to observe user behavior, or some form of code instrumentation technique? Also, I'm only interested in collection techniques; you can assume that I already know how to transmit, store and analyze said data."} {"_id": 17, "text": "Is there a way to make a dynamic world such as an MMORPG horizontally scalable? Imagine an open world of 500 players with data changing as fast as 20 updates/player/second. The last time I worked on a similar MMORPG, it used SQL, so obviously it couldn't query the DB all the time. Instead, it loaded all players from the DB into memory as C++ objects and used them. That is, it scaled vertically. Would it be possible to make that server horizontally scalable instead? Is there a database designed to support that number of concurrent updates?"} {"_id": 17, "text": "How do I create a big multiplayer world in UDK? I want to create a big multiplayer world in UDK and I'm having a few difficulties. I created the biggest terrain possible, but then any terrain related action I do takes forever. However, I've seen videos of people making the same size terrain and working without a problem. My PC is strong enough, so maybe someone can tell me what I'm doing wrong.
I want to make it even bigger than the biggest terrain size, so I was thinking of doing level streaming, but then I read that streaming works server side, which means if I have a player on every terrain, all terrains will still be loaded; and I want to save as much memory as possible so it will work well online. Thanks for any help you can give."} {"_id": 17, "text": "Client Server MMOG data structures sync when joining playing After reading a few articles on MMOG architecture, there is still one point on which I cannot find much information: it has to do with how you keep server data in sync on the client, when you join, and while you play. A pretty vague question, I agree. Let me refine it. Let's say we have an MMOG virtual world subdivided into geographical cells. A player in a cell is mostly interested in what happens in the cell itself, and all the surrounding cells, not more. When joining the game for the first time, the only thing we can do is send some sort of \"database dump\" of the interesting cells to the client. When playing, I guess it would be very inefficient to do the same thing regularly. I imagine the best thing to do is to send \"deltas\" to the client, which would allow keeping the local database in sync. Now let's say the player moves, and arrives in another cell. The surrounding cells change, and for all the new cells the player subscribes to, the same technique as used when joining the game has to be used: some sort of \"database dump\". This mechanic of joining/moving in a cell based MMOG virtual world interests me, and I was wondering if there were tried and tested techniques in this domain. Thanks!"} {"_id": 17, "text": "game state update structure and distribution in MMO? I'm developing an MMO, which has to maintain a global game state. Now I want to distribute the updates which the players make to this state to players who can see those updates (area of interest etc.).
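To make the question concrete, something like this is the shape I have in mind for an action message (every field name here is my own guess, not from any framework):

```java
// Rough shape of an action/update message as I imagine it; all field
// names here are my own invention, not from any networking library.
public class GameAction {
    public enum Type { MOVE, ATTACK, USE_ITEM }

    public final Type type;        // what kind of action this is
    public final int entityId;     // who performed it
    public final float x, y, z;    // where it happened
    public final long clientTimeMs; // client timestamp; or should the server stamp on receive?

    public GameAction(Type type, int entityId, float x, float y, float z, long clientTimeMs) {
        this.type = type;
        this.entityId = entityId;
        this.x = x;
        this.y = y;
        this.z = z;
        this.clientTimeMs = clientTimeMs;
    }
}
```

The clientTimeMs field is only there to make the timestamp part of the question concrete; the server could equally stamp each message on receive.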
Is there a common structure format for these updates and for a global game state? Like, every action a player sends to the server has to include a type, the coordinates, etc. And are the updates which get distributed from the server to the players in the same format, or are they different? Also, should I include timestamps or use the receive time at the server to check which action came first and which second, etc.? And what should I send back to the player if he sends an invalid action?"} {"_id": 18, "text": "2d Circle different ARC's Collision detection I've read about the way to detect collision between 2 circles, or a circle and a line, using trigonometry. But now I would like to be able to detect which part of the circle's arcs I'm hitting. For example, I divided my circle into 4 different sized arcs. I would like to know which arc gets hit: 1, 2, 3, or 4? See image. What is the best collision detection algorithm for this? Or what is this called?"} {"_id": 18, "text": "LWJGL Collision Detection I cannot find LWJGL collision detection with a camera and walls, like making it so that you cannot walk through walls and other different shaped rectangular prisms and cubes. How do I set up LWJGL collision?"} {"_id": 18, "text": "Calculating the intersection of a fast moving circle and a line Inside my game, I need a way to test the collision between a line segment and a fast moving circle. I would say I need a line vs. capsule collision; however, the problem is I need the collision point. None of the existing formulas I can find can do quite what I need. I have tried line x line intersection and just moving the lines by the radius of the capsule, which sort of works, however the collision point found around the edges is off. Additionally, I can't find a way to work back from the collision point to the point that the circle first collided at. Above is sort of a picture of what I am hoping to achieve. The green capsule represents the circle's path.
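Here is a simplified version of the \"moving the lines by the radius\" attempt mentioned above (my own throwaway code): intersect the centre's movement segment with the wall line pushed out along its normal by the circle's radius. It gives a sensible time of impact against the middle of the wall, but it is exactly what goes wrong near the segment's endpoints:

```java
// Simplified version of the offset-line attempt: shift the wall segment
// along its normal by the circle's radius, then intersect the centre's
// movement segment p->q with that shifted infinite line. Correct away
// from the endpoints, wrong near the rounded end caps. Assumes a != b.
public class SweptCircleAttempt {
    // Returns t in [0,1] along the movement p->q where the centre crosses
    // the line through a->b offset by r along the normal, or -1 if none.
    public static double hitTime(double px, double py, double qx, double qy,
                                 double ax, double ay, double bx, double by, double r) {
        double nx = -(by - ay), ny = bx - ax;           // segment normal
        double len = Math.sqrt(nx * nx + ny * ny);
        nx /= len; ny /= len;
        double ox = ax + nx * r, oy = ay + ny * r;      // point on offset line
        double denom = (qx - px) * nx + (qy - py) * ny; // movement dot normal
        if (denom == 0) return -1;                      // moving parallel to wall
        double t = ((ox - px) * nx + (oy - py) * ny) / denom;
        return (t >= 0 && t <= 1) ? t : -1;
    }
}
```

For a wall from (0,0) to (10,0) with radius 1 and the centre moving from (5,5) straight down to (5,0), it reports t = 0.8, i.e. contact when the centre reaches y = 1.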
So really the problem here is finding that collision point, because a yes or no answer is trivial here. There are formulas for ray vs. capsule or box vs. capsule, however none of them will get me this collision point. Does anyone have any ideas on how I can figure this out?"} {"_id": 18, "text": "How can I properly detect my cars' collisions using hitboxes? I have created two car objects, and their current speed is represented as the len variable. This is the code, under the Blue Car Collision Event: if (place_meeting(x, y, obj_CarRed)) { // Checking Collision len = 0 // Current speed if (len < obj_carRed.len) { // Blue Car's current speed is greater than Red's myHP -= obj_carRed.len * 2 } } So I copied the code to the other object and replaced Red with Blue. When the Red car crashes into the Blue car, the Blue car takes damage because the Red car is traveling faster than the Blue car. But when the Blue car crashes into the Red car, the Red car won't take any damage. Later, I thought about using a Hitbox and having it as a parent for both cars, but I don't know if it would mess up the collisions, and I don't know how to tell the hitbox whether it's been created by the Blue Car or by the Red Car. I don't know if I should make a new hitbox for each car or one for both."} {"_id": 18, "text": "Is there an algorithm for boxcasting or thick ray casting? I am not sure what the correct name for this is, but here is my problem: I have implemented a ray casting function that works as expected. However, because I am using a loose grid structure (i.e., objects that overlap multiple grid cells are only put into one cell), the ray cast may miss certain objects. What I need, I think, is a box cast, where I need to get every grid cell the box hits from its initial position to its final position. Does anyone have such an algorithm? Or can anyone point out a solution to this problem? Thanks! This pic from the N tutorials is a nice summary of this problem. My ray cast implementation only hits the blue cells, but as you can see, it may miss certain objects.
They are only put into the grid cell where the x is. Just to add: I will be using this in a hot loop, so I have the additional requirement that this must be done efficiently. EDIT This picture illustrates why I think a box cast would work for a loose grid (the box must be bigger than the objects contained in the grid)."} {"_id": 18, "text": "What data should a generic collision detection system gather? I'm working on a relatively generic 2D AABB collision detection system for a game engine, and I've rewritten it more times than I'd like to admit, due to not calculating or recording specific details of each collision. Right now, this is what I'm collecting: collision time as a fraction of an update cycle in the game loop; location of the collision; ID of the colliding object. Each object has a Set that holds this data for each collision (I'm working with a component entity system) so other systems can use the recorded data. The problem I just ran into was that I needed to know which side of the object the collision takes place on. What other values or points of interest should I calculate or record, per collision? What do you look for in a standard collision detection system?"} {"_id": 18, "text": "Collisions between players in multiplayer racing game I'm creating a simple racing game (spaceships, no gravity) using p2.js, phaser and node.js. What I have done: the client receives the world state from the server; extrapolates other players based on the latest velocity/position from the server; checks if client side prediction was correct, and if not, applies the position from the server and reprocesses inputs that the server was not aware of; fixed physics step. The server receives inputs from clients, applies a fixed physics step, and sends the world state to each client. Now I'm struggling with collisions between players. The colliding player is jumping all the time during the collision. I think it's because client side prediction is not calculating results similar to the server's. The server doesn't know all inputs from the player (lag).
The player doesn't have the same position for the colliding player as the server (lag). Combining these two makes the client resolve the collision differently than the server, and when the world state arrives the player has to make a big correction."} {"_id": 18, "text": "Game Maker 8.0 Can I simulate any collision event within the step event of an object, including the \"other\" reference? Let's say I have two objects. One is \"obj wall\" and the other is \"obj player\". For the sake of argument, let us assume that I cannot just add a collision event to obj wall (I truly cannot, due to limitations embedded within the nature of the project itself). Is there a function or block that allows one to write a collision event within obj wall's step event? I know there are various collision detection functions, but what I need is one that acts identically to the collision event and allows me to access the \"other\" property. If this is not possible, are there any suitable alternatives (such as a function that detects collision and returns the instance id of an object colliding with the wall)? My primary issue here is that I want to be able to use collision in one instance of an object without having to have the collision event added. I'd prefer it to be as close as possible to the collision event so as to avoid unnecessary extra checks."} {"_id": 18, "text": "2D destructible terrain with collisions in MMO Task What I want is to create destructible terrain (like in Worms) and collisions with this terrain (with calculated normals) that will be fast enough to work on a server machine. Basically, let's say that I want to make Worms Online (I don't really), using a client-server model, not peer2peer. Approaches I read about People in other threads were suggesting using bitmaps and per pixel collisions, but is it fast enough to handle e.g. 100 games simultaneously? Another approach is using vectors to track the outline of the terrain, but it also seems really CPU heavy and I didn't find any good example of that.
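For reference, the bitmap approach people suggest boils down to something like this toy sketch (not real server code); the doubt is purely whether per-pixel lookups and carves like these stay cheap with around 100 games running at once:

```java
// Toy sketch of the bitmap approach: terrain is a solid mask, explosions
// carve circles out of it, and collision is a per-pixel lookup.
public class BitmapTerrain {
    private final boolean[][] solid;

    public BitmapTerrain(int w, int h) {
        solid = new boolean[w][h];
        for (boolean[] col : solid) java.util.Arrays.fill(col, true); // start fully solid
    }

    // Per-pixel collision test; out-of-bounds counts as empty.
    public boolean isSolid(int x, int y) {
        return x >= 0 && y >= 0 && x < solid.length && y < solid[0].length && solid[x][y];
    }

    // Carve a circular crater centred at (cx, cy) with radius r.
    public void explode(int cx, int cy, int r) {
        for (int x = cx - r; x <= cx + r; x++)
            for (int y = cy - r; y <= cy + r; y++)
                if (isSolid(x, y) && (x - cx) * (x - cx) + (y - cy) * (y - cy) <= r * r)
                    solid[x][y] = false;
    }
}
```

Surface normals would presumably come from sampling neighbouring pixels of the mask around the contact point, which this sketch does not attempt.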
So guys what is your proposal?"} {"_id": 18, "text": "Create bullet physics rigid body along the vertices of a blender model I am working on my first 3D game, for iphone, and I am using Blender to create models, Cocos3D game engine and Bullet for physics simulation. I am trying to learn the use of physics engine. What I have done I have created a small model in blender which contains a Cube (default blender cube) at the origin and a UVSphere hovering exactly on top of this cube (without touching the cube) I saved the file to get MyModel.blend. Then I used File gt Export gt PVRGeoPOD (.pod .h .cpp) in Blender to export the model to .pod format to use along with Cocos3D. In the coding side, I added necessary bullet files to my Cocos3D template project in XCode. I am also using a bullet objective C wrapper. (void) initializeScene physicsWorld CC3PhysicsWorld alloc init physicsWorld setGravity 0 y 9.8 z 0 Setup camera, lamp etc. .......... ........... Add models created in blender to scene self addContentFromPODFile \"MyModel.pod\" Create OpenGL ES buffers self createGLBuffers get models CC3MeshNode cubeNode (CC3MeshNode ) self getNodeNamed \"Cube\" CC3MeshNode sphereNode (CC3MeshNode ) self getNodeNamed \"Sphere\" Those boring grey colors.. cubeNode setColor ccc3(255, 255, 0) sphereNode setColor ccc3(255, 0, 0) float cVertexData (float )((CC3VertexArrayMesh )cubeNode.mesh).vertexLocations.vertices int cVertexCount (CC3VertexArrayMesh )cubeNode.mesh).vertexLocations.vertexCount btTriangleMesh cTriangleMesh new btTriangleMesh() for (int i 0 i lt cVertexCount 3 i 3) printf(\" n f\", cVertexData i ) printf(\" n f\", cVertexData i 1 ) printf(\" n f\", cVertexData i 2 ) Trying to create a triangle mesh that curresponds the cube in 3D space. 
int offset 0 for (int i 0 i lt (cVertexCount 3) i ) unsigned int index1 offset unsigned int index2 offset 6 unsigned int index3 offset 12 cTriangleMesh gt addTriangle( btVector3(cVertexData index1 , cVertexData index1 1 , cVertexData index1 2 ), btVector3(cVertexData index2 , cVertexData index2 1 , cVertexData index2 2 ), btVector3(cVertexData index3 , cVertexData index3 1 , cVertexData index3 2 )) offset 18 self releaseRedundantData Create a collision shape from triangle mesh btBvhTriangleMeshShape cTriMeshShape new btBvhTriangleMeshShape(cTriangleMesh,true) btCollisionShape sphereShape new btSphereShape(1) Create physics objects gTriMeshObject physicsWorld createPhysicsObjectTrimesh cubeNode shape cTriMeshShape mass 0 restitution 1.0 position cubeNode.location sphereObject physicsWorld createPhysicsObject sphereNode shape sphereShape mass 1 restitution 0.1 position sphereNode.location sphereObject.rigidBody gt setDamping(0.1,0.8) When I run the sphere and cube shows up fine. I expect the sphere object to fall directly on top of the cube, since I have given it a mass of 1 and the physics world gravity is given as 9.8 in y direction. But What is happening the spere rotates around cube three or times and then just jumps out of the scene. Then I know I have some basic misunderstanding about the whole process. So my question is, how can I create a physics collision shape which corresponds to the shape of a particular mesh model. I may need complex shapes than cube and sphere, but before going into them I want to understand the concepts."} {"_id": 19, "text": "SDL Multiple keyboard support I am making a game with multiplayer split screen mode using SDL. Basically, I like the idea of having each player plug in his own keyboard to the PC, set custom controls via options and being able to play it with controls that he likes. However, there's a problem. 
This code gets the keyboard event:
SDL_Event event;
SDL_PollEvent(&event);
SDL_Event has the member SDL_KeyboardEvent key, which has the member Uint8 which. In short: event.key.which. According to this, it should represent the keyboard device index. However, I've tried connecting three keyboards to my PC and pressing the buttons at the same time, and the result wasn't satisfying: they all had the same keyboard index. Is there a solution to this? Or am I missing something?"} {"_id": 19, "text": "How does this background scrolling code work? I'm glad that this code does exactly what I wanted it to do...But I can't make sense of it although I wrote it from scratch myself. What I wanted to do was create an infinitely scrolling background. What I figured I'd need to do was have it draw the entire background twice so that the second one comes in after the first one, and when the first one gets to the end it starts over behind the second one, and it would loop like this forever. However, after I wrote the code to make it scroll and tested it, it scrolls infinitely by just having the code draw the background once... I have the function to draw the background like so...
void drawBackground(int xStart, int yStart, SDL_Rect clip[]) {
    int x, y;
    for (y = yStart; y < SCREENH; y += TILEH)
        for (x = xStart; x < SCREENW; x += TILEW)
            drawClip(x, y, background, &clip[45]);
}
And then I use the function like this to scroll the background...
bgX -= 1; if (bgX == -SCREENW) bgX = 0;
bgY -= 1; if (bgY == -SCREENH) bgY = 0;
drawBackground(bgX, bgY, bgClip);
It works perfectly...I just don't understand what makes it loop around with no gaps.
I have a texture created with SDL TEXTUREACCESS TARGET access and I want to get all its pixels with SDL RenderReadPixels() function. One of the function parameters is pitch and I don't really know where can I can get it. Texture is created with dimensions of a previously created surface, but function call with surface gt pitch as pitch parameter generates EXC BAD ACCESS. Texture and surface dimensions are 800x600, and surface gt pitch returns 3200, which is strange for me, because I thought that pitch is the width of the texture in memory and expected surface gt pitch to be something like 1024."} {"_id": 19, "text": "Memory leak around SDL FreeSurface When I call my tile engine function, the amount of memory my program uses begins to spike at about 80 90 megabytes per second. The memory use continues to go up until the program crashes. The function is below. After tinkering with it, I figured out that I could reduce this bricking to about 15 megabytes per second if I called SDL FreeSurface every time I ran the function. But this confused me, because I thought that you are only supposed to use SDL FreeSurface when the program exits so that the images don't continue to occupy the memory. if I don't call SDL FreeSurface before I blit the tiles, do the previously blitted tiles continue to exist? If I don't how do I fix the aforementioned bug? The edit I made to bring the memory bricking down to 15 megabytes per second is commented out. SDL Surface loadedTile NULL SDL Surface modelIMAGE NULL void enviroment blitTiles(SDL Surface windowENVIRO, int tileAmount, int tileType , int tileXenviro, int tileYenviro, bool quitTiles) HDcounter 1 if(loadedTile ! 
NULL) SDL FreeSurface(loadedTile) loadedTile IBFobjectENVIRO.loadIMG(\"tileClipSheet.png\") for(int tiles 0 tiles lt tileAmount tiles ) switch(tileType tiles ) Black square (unpassable) case 1 IBFobjectENVIRO.blitIMG(tileXenviro 75, tileYenviro, windowENVIRO, loadedTile, 0, 0, 75, 75) HDcounter 1 forbiddenX HDcounter tileXenviro 75 forbiddenY HDcounter tileYenviro forbiddenSpriteWidth HDcounter 75 forbiddenSpriteHeight HDcounter 75 forbiddenSpriteDepth HDcounter 75 break Grey square case 2 IBFobjectENVIRO.blitIMG(tileXenviro 75, tileYenviro, windowENVIRO, loadedTile, 75, 0, 75, 75) break Brown square case 3 IBFobjectENVIRO.blitIMG(tileXenviro 75, tileYenviro, windowENVIRO, loadedTile, 150, 0, 75, 75) break Invisible square (unpassable) case 4 IBFobjectENVIRO.blitIMG(tileXenviro 75, tileYenviro, windowENVIRO, loadedTile, 225, 0, 75, 75) HDcounter 1 forbiddenX HDcounter tileXenviro 75 forbiddenY HDcounter tileYenviro forbiddenSpriteWidth HDcounter 75 forbiddenSpriteHeight HDcounter 75 forbiddenSpriteDepth HDcounter 75 break tileXenviro 75 if(tileXenviro gt level1ObjectENVIRO.level1Width) tileYenviro 75 tileXenviro tileXenviro level1ObjectENVIRO.level1Width if(quitTiles true) SDL FreeSurface(loadedTile)"} {"_id": 19, "text": "Render two images to an SDL window I want to create a menu for an SDL game, so I load an Image to an SDL Window as the background, then I try to load the next image which will be a button but it doesn't display the button to the window, it just displays the background. I'm a newbie so perhaps it is an obvious mistake. 
Here's what I got include lt stdio.h gt define SDL MAIN HANDLED include \"SDL.h\" int main(int argc, char argv ) int a 1 SDL Event event SDL Window window SDL Surface background surface SDL Texture background texture SDL Renderer background renderer SDL Surface button surface SDL Texture button texture SDL Renderer button renderer SDL Rect button pos button pos.h 65 button pos.w 320 button pos.x 0 button pos.y 0 SDL Init(SDL INIT VIDEO) window SDL CreateWindow( \"Cosmic Racing\", SDL WINDOWPOS CENTERED, SDL WINDOWPOS CENTERED, 320, 568, SDL WINDOW OPENGL ) background renderer SDL CreateRenderer(window, 1, 0) background surface SDL LoadBMP(\"Background.bmp\") background texture SDL CreateTextureFromSurface(background renderer, background surface) SDL RenderCopy(background renderer, background texture, NULL, NULL) SDL RenderPresent(background renderer) button renderer SDL CreateRenderer(window, 1, 0) button surface SDL LoadBMP(\"Play.bmp\") button texture SDL CreateTextureFromSurface(button renderer, button surface) SDL RenderCopy(button renderer, button texture, amp button pos, NULL) SDL RenderPresent(button renderer) while(a) SDL PollEvent( amp event) if(event.type SDL QUIT) a 0 SDL DestroyTexture(button texture) SDL FreeSurface(button surface) SDL DestroyRenderer(button renderer) SDL DestroyTexture(background texture) SDL FreeSurface(background surface) SDL DestroyRenderer(background renderer) SDL DestroyWindow(window) SDL Quit() return 0 BTW if you find additional mistakes please tell me. Thank you )"} {"_id": 19, "text": "How do I modify textures in SDL with direct pixel access? I'm trying to use SDL LockTexture and SDL UnlockTexture for directly editing pixels in a texture. I'm using SDL 2.0. 
Setting the pixel value using the following code doesn't modify the texture:
void *pixels; int pitch;
SDL_LockTexture(mytexture, NULL, &pixels, &pitch);
// Set pixel 100,100 to blue
((uint32_t *)pixels)[100 * 100] = 255;
SDL_UnlockTexture(mytexture);"} {"_id": 19, "text": "Are SDL games trivially portable from Linux to Windows? I have a small game made with SDL2 and I want to port it to Windows. Would I have to write a lot of ifdefs to port it, or will the very same code work on Windows and Linux? Or is it more complicated? Sorry if these are silly questions. I couldn't find existing resources on porting SDL games between desktop systems and I've never done this before!"} {"_id": 19, "text": "Separating logic update from render drawing code in a single thread using sleep I've read that the speed of game objects should not be hindered by FPS but instead should be based on time. How can I separate the update/draw code to maximize performance without limiting the drawing rate, and provide a constant logic update rate based on time? My current pseudo code is as follows:
loop {
    draw();
    if (ticksElapsed() > 100) {
        update();
        ticks = ticksElapsed();
    }
}
The problem is the drawing code hinders the performance of the update() rate. And it consumes 100% CPU, because if sleep is thrown in, it throws off both the drawing and logic functions. I am also using SDL and it doesn't seem to have a vsync option. I've also heard of the terms fixed and variable time stepping, however I'm not sure how that can be done with sleep()"} {"_id": 19, "text": "Input processing performance I'm building a game using SDL on the Linux platform. Now I want to read user input with SDL_GetKeyboardState, but my doubt is which is the best way: using a thread or a timer.
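On the fixed vs variable timestep question above: the standard pattern is an accumulator loop, where rendering runs freely and the logic catches up in fixed increments, so no sleep is needed for correctness. An illustrative sketch in plain Python; the function names and the injectable clock are mine:

```python
import time

DT = 1.0 / 60.0  # fixed logic step in seconds

def run(update, draw, dt=DT, now=time.perf_counter, frames=None):
    """Fixed logic timestep with free-running rendering.

    update(dt) always advances the simulation by the same dt;
    draw(alpha) runs once per loop iteration and can interpolate
    between the last two logic states using alpha in [0, 1).
    """
    previous = now()
    accumulator = 0.0
    rendered = 0
    while frames is None or rendered < frames:
        current = now()
        accumulator += current - previous
        previous = current
        while accumulator >= dt:  # catch up in fixed steps
            update(dt)
            accumulator -= dt
        draw(accumulator / dt)    # leftover fraction = interpolation factor
        rendered += 1
```

If the loop still burns too much CPU, a short sleep can be added after draw(); because the accumulator measures real elapsed time, the logic rate stays constant either way.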
I tried both and I think they work well."} {"_id": 20, "text": "Exporting Blender bones' transform matrix I use this simple python script to export bone transformations:
bones = armature.pose.bones
for bone in bones:
    SystemMatrix = Matrix.Scale(-1, 4, Vector((0, 0, 1))) * Matrix.Rotation(radians(-90), 4, 'X')
    if (bone.parent):
        Export_babylon.write_matrix4(file_handler, \"matrix\", bone.parent.matrix.inverted() * bone.matrix)
    else:
        Export_babylon.write_matrix4(file_handler, \"matrix\", SystemMatrix * bone.matrix)
I'm using a left handed, Y up system, but I must be forgetting something about rotation because the result is not correct. (Original, under Blender.)"} {"_id": 20, "text": "Level design modular vs single project I'm working in Blender and UE4 for a while now... I always have the dilemma of how to create my levels. My two ideas are a) modular, like lego blocks: create a wall, a floor, a wall with a hole for a window etc. inside Blender, and then, in UE4, make a level out of them. The advantage here is that I can arrange them however I want. The disadvantage is I always have a problem getting the elements to fit properly. Even if I scale one element up or down, everything else starts falling apart. b) single project: make a whole object, like a room or house, in Blender. Then everything connects nicely, but I'm always worried that it will be harder to edit, and I usually end up messing something up. For example, the whole house looks great, but inside the game a window is way too high, or the doors are too narrow to go through. My question: what should I do? Should I stick to one method and try to fix the disadvantage? If yes, how would I do that? Is one of the methods considered \"better\" or the \"industry standard\"?"} {"_id": 20, "text": "Working in large play area maps, what are the performance and practical differences between Unreal Terrain and a subdivided OBJ mesh terrain?
I've managed to reliably use SRTM Geotiff Raster Data with some modifications to create decent terrains in unreal, but there is a small problem with how I go about city building when I'm not using the actual terrain from unreal in my modelling program, especially with roads and other wide area constructs. If I were to convert my heightmap to a mesh object for use and manipulate in modelling programs instead of using the unreal terrain, what sort of difference in performance or stability would it make? What about load times? Would it make a large impact with object meshes for building exteriors included in the object mesh for the terrain, thus reducing the amount of information needed to load both the buildings and terrain by a small amount? EDIT title, player should have been play area"} {"_id": 20, "text": "Blender Edge Split Alternative What other options alternatives are there to \"edge split\" for smooth shading low poly game models. As I understand it, edge split does what it say's, its splits the mesh therefore increasing the vert count. I have considered manually marking the edges as sharp however I would imagine this leaves artifacts on the model(in game) ? Cheers."} {"_id": 20, "text": "Regarding BGE (Blender Game Engine), is it possible to generate a single .exe file from multiple files linked together? Would any of you guys or gals know the answer, please? I know one can generate a .exe (windows executable) file from a single blend file. My question is very simple Can anyone generate a single .exe file from multiple blender files of a same project? For instance, say I have a project folder with several subfolders and files from my project game. Each folder has specific elements like scenes, models, textures, etc. They are all linked together to form the final game. Now I want to generate a single .exe file for my game. Is this possible? Or, rather, should I make an entire game inside a single .blend file? 
(It seems like a very unlikely arrangement, maybe even impossible, but I'd rather ask). Or maybe even create a master .blend file from which to link all data, and when the project is finished pack everything before generating the executable (.exe)?"} {"_id": 20, "text": "Blender wont export textures (materials) to OGRE3D .mesh I am exporting a model from Blender to OGRE3D using the blender2ogre script. The desired output is a single .mesh file, but I know there should also be material scripts that I should copy to my Ogre path. The problem is that Blender does not export those files for me; it exports only one .mesh file (after I joined all objects). This is the model. In Blender it looks fine, but not in my demo Ogre app. I can use a little help here."} {"_id": 20, "text": "How to export an IK structure created in Blender 2.8 to Godot 3.1? I created this IK bone structure in Blender 2.8 and exported it as .escn. As you can see, in Blender it's working ok. Now, how do I use it in Godot? I've tried exporting with .dae (native), .dae (Better Collada), .escn, .glb. None of them worked... Is there any tutorial for using imported IK in Godot?"} {"_id": 20, "text": "Getting Model Movement from Blender to OGRE3D I just delved into game development. I have created a simple character in Blender, together with its walking animation. I am planning to import the model and the movement so that I can use it in OGRE3D. How can I achieve this? Will the import have to separate model and movement? Is there any good and comprehensive guide for this?"} {"_id": 20, "text": "Blender Cycles materials not showing in Second Life I ran into a problem after trying to import my own creation into Second Life. While I am not new to 3D modelling with Blender and other programs, I have always only created things for fun. Now I have decided to import a few of those creations into Second Life, but the materials I have used do not display in the preview at all. I have used the Cycles Render.
How do I solve my issue with Materials not displaying in Second Life after Exporting a Collada file from Blender Rendered in Cycles?"} {"_id": 20, "text": "Pixel perfect, sharp uv mapping I'm having trouble understanding how the uv map that Blender gives you works. It gives you a map with outlines which you should color in. But if I have a low res texture I don't know where I should color in exactly. Should I place my textures between the outlines, or should my textures overlap the outlines?"} {"_id": 21, "text": "What's the URL of the video showing an EA representative talking about game programming? The video shows an EA representative talking about how a programmer can get into the industry by showing some demos (in fact, he shows a physics demo presented by a candidate who wanted a job at EA). The demo shown depicts kind of a ragdoll made of yellow cobblestones. I don't remember if I watched it on YouTube or Vimeo, and I frankly cannot find it after a few hours of work. I just remember the fact that it was taken at a GDC or a SIGGRAPH convention. Thanks in advance. I'm really frustrated because I'd love to show this video to some fellow developers (coders)."} {"_id": 21, "text": "Experiences of test driven development in large projects I've used TDD in personal projects, but I wondered if anyone had any experience of using this approach across a large team? Was there resistance to the test first approach? Did you keep code coverage high for the whole project? How did coverage survive fixing those \"two in the morning and you ship in four hours\" crash bugs? Did you run tests on all platforms, or just assume that passing on one meant passing on the others? I love the decoupled code that TDD produces, and the large suite of regression tests you get for free, but if you only have a few in the team who don't want to contribute tests, won't everyone suffer?"} {"_id": 21, "text": "What are windows used for?
I have a very general question: In games, what use does the programming concept of a window have? Or, in other words, why do some game dev libraries offer interfaces through which to create multiple windows? Why would you need more than one window in a game? Are multiple windows used as different views/states of the game? (I.e. in game, main menu, pause menu, etc.)"} {"_id": 21, "text": "UDK Fixed AIController Pawn Height I already asked this question on the UDK forums, without much success though. I'm using a class derived from AIController to control my pawn in RTS style. My problem is that the pawn does not have a fixed Location.Z, so if his Velocity in the X and Y direction changes, the Velocity.Z also changes (for whatever reason). I have tried nulling the Velocity.Z in the tick method, using Move to move the character back to a constant height, and some other things, none of which worked. How can I solve this?"} {"_id": 21, "text": "why is Lua so important (frequently used) in game development I have written some small games for fun myself, but never used Lua. I have seen people discussing Lua's use in games everywhere. The question is: what benefit can I get from using Lua in game development? Can someone explain this a little bit to me?"} {"_id": 21, "text": "Best solution for \"level string\"? I have a game that generates a random level map at the start of the level. I want to implement some way to save and load the level. I was thinking maybe XML would be a good option for saving all the variables; then it would be easy for me to build something that can parse that XML and generate the exact same level. But XML is probably overkill for my needs. I remember back in the day with the old Sega console that didn't have the ability to save your game (I think the Worms game did it too), that they would give you a bunch of characters that you could write down. If you punched in that string later on, it would load the exact level.
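Picking up that save-code idea: if the level generator is seeded, the whole level compresses down to its seed, and the "bunch of characters" is just that seed written in a custom alphabet. A sketch of my own scheme in Python (base 32 here rather than base 60, to avoid ambiguous characters like 0/O and 1/I):

```python
import random

ALPHABET = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789"  # 32 symbols, no 0/O/1/I

def seed_to_code(seed):
    """Encode a non-negative integer seed as a short level string."""
    code = ""
    while True:
        seed, digit = divmod(seed, len(ALPHABET))
        code = ALPHABET[digit] + code
        if seed == 0:
            return code

def code_to_seed(code):
    """Inverse of seed_to_code."""
    seed = 0
    for ch in code:
        seed = seed * len(ALPHABET) + ALPHABET.index(ch)
    return seed

def generate_level(seed, size=8):
    """Deterministic generation: the same seed always yields the same map."""
    rng = random.Random(seed)
    return [[rng.randint(0, 3) for _ in range(size)] for _ in range(size)]
```

The key requirement is that generation be fully deterministic given the seed; then the code never needs to store the map itself, only the seed (plus perhaps a version number so old codes survive generator changes).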
Would a \"level string\" be a good option? Would it be some kind of \"base60\" conversion? How would I implement this?"} {"_id": 21, "text": "how to breakdown my game project? I am a newbie so when ever i start over i get stuck how to organize things (code) or what should be the order of my work. Can you share your experience of project granularity so i can make my mind what should be done first and so on."} {"_id": 21, "text": "naming conventions in game code vocabulary I'm just starting out as a game developer, and I'm finding some difficulty when naming classes. I usually call the class that handles everything World, Map, or Grid. Then there's the names, inside. For example, I'm making a Frogger clone, and I have a class for everything that moves on screen (I try to use MVC and I'm talking about the model here I have Sprite classes in the view), and I named it Object for lack of a better name. Then there's a subclass from Object which I named Frog (though I could name it Player) and another one for the things controlled by the computer, which I don't know what to call (it could be Enemy or Obstacle or something appropriate for cars and trucks). This class also includes the turtles, which are more like helpful objects, yet are not powerups. How should I manage my naming conventions?"} {"_id": 21, "text": "Building a unified interface for a swap chain in both DirectX 12 and Vulkan Most objects in DirectX 12 have natural analogues in Vulkan, e.g. VkInstance IDXGIFactory VkPhysicalDevice IDXGIAdapter VkDevice ID3D12Device VkQueue ID3D12CommandQueue VkCommandBuffer ID3D12CommandList However, when it comes to the swap chain, it's not clear how the entries in VkSwapchainCreateInfoKHR correspond to entries in DXGI SWAP CHAIN DESC1. 
Clearly, there is not always a 1:1 correspondence, but I would really like to know how I can implement a unified interface for both."} {"_id": 21, "text": "Limitations of p2p multiplayer games vs client server I am reading up on multiplayer game architecture. So far most of the articles I've found deal with the client server model. I'd like to know: what are the limitations of using a p2p architecture? What \"class\" of games is possible (or more common) to implement using it? Which aren't? And in general, what are its main differences and limitations compared with the client server model?"} {"_id": 22, "text": "Store and create game objects at positions along terrain I have a circular character that rolls down terrain like that shown in the picture below. The terrain is created from an array holding 1000 points. The ground is drawn one screen width in front and one screen width behind. So as the character moves, edges are created in front and edges are removed behind. My problem is, I want to create box2d bodies at certain locations along the path and need a way to store these creator methods or objects. I need some way to store a position at which they are created and some pointer to a function to create them, once the character is in range. I guess this would be an array of some sort that is checked each time the ground is updated, and then if in range, the function is executed and removed from the array. But I'm not sure if it's even possible to store pointers to functions with parameters included... any help is much appreciated!"} {"_id": 22, "text": "Box2D Do these simple objects need to be in the simulation? I would like to get input from the community regarding how best to represent simple objects in a Box2D based simulation. Some background Without going into too much detail, think of a top down game with a character and some 'food'. You can picture Pac-Man™, even.
Suppose I will simulate the character and the walls to be Box2D so the guy cannot leave the maze and I get collision feed back etc. My question Should the 'food' be part of the Box2D simulation? My thoughts The food doesn't need to be in the simulation because.. It doesn't move. As soon as the character collides with the food it is removed from the game. It should not apply any force to the guy, or anything, ever. Nor should anything ever need to apply force to it. It's extra computation that isn't necessary. The food could be in the simulation because.. I am already using Box2D. Box2D can handle the collision events for me. Again, Box2D can handle the collision events for me I don't really want to check for collisions myself. Box2D should let the food 'sleep' so it won't be much extra computation, correct? What do you guys think? Do the benefits outweigh the costs here? Are there other pros cons I'm missing? I look forward to getting feedback."} {"_id": 22, "text": "How to tell if a fixture is being squeezed by two other fixtures in Box2D? Let's say I had a ground fixture, a player box, and a kinematic block moving up and down. If the player stands under the block, weird things happen. I want to simply play an event where the player dies. How can I detect when the player box is crushed by the block moving down on it?"} {"_id": 22, "text": "web based networked game physics efficiency I am writing a web based game, using websockets. its pretty simple, two players shooting at each other with bullets that have acceleration from a starting point at an angle. The client side (frontend in web dev terms) uses phaser.io with p2js for physics. However after reading this post, i believe that it's smarter to do some of the physics for the players on the server side. Such as collision detections, and update the player positions at regular intervals to keep the two players at sync. 
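Back to the 'food' question above: since the food never moves and never applies forces, one option is to keep it out of the b2World entirely and run a plain overlap test per frame (in Box2D proper, the usual alternative is flagging the food fixture as a sensor, which reports contacts but exerts no forces). A minimal sketch in plain Python; the function and data layout are mine:

```python
# Pickups as (x, y, radius) tuples; the player is a circle.
def collect_food(player_x, player_y, player_r, food):
    """Remove and return every pickup overlapping the player circle."""
    eaten = []
    remaining = []
    for fx, fy, fr in food:
        dx, dy = fx - player_x, fy - player_y
        # Squared-distance test avoids the sqrt in a per-frame loop
        if dx * dx + dy * dy <= (player_r + fr) ** 2:
            eaten.append((fx, fy, fr))
        else:
            remaining.append((fx, fy, fr))
    food[:] = remaining  # mutate in place so callers see the shrunken list
    return eaten
```

With many pickups, the linear scan can be swapped for a spatial grid lookup, but for a maze's worth of dots the scan is already trivial next to a physics step.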
Now I have two questions. 1) If I use the Box2D JavaScript port on Node.js on the server side to simulate server side physics, then is it just as efficient as using the native C++ one? From what I have read so far it seems the JS ports aren't that well maintained, and they are JS ports; on the other hand, the JS code might be easier to deal with than the C++ code. 2) Using p2js on the client side and Box2D on the server, the server being authoritative and sending regular updates (once every second) as to the actual positions of the players: will the physics simulation be in sync (at least to the user)? Theoretically I don't see any reason for it not to be, given that both users have the exact same latency (this is assumed for the sake of simplicity)."} {"_id": 22, "text": "Reduce bounciness of dynamic bodies in Box2D In my 2D platformer game my player character is modeled using a Box2D dynamic body. One thing I thought feels off though is that even though the restitution of the fixture applied is zero, the player will bounce off static geometry when falling at an angle, kind of like a plastic box would perhaps, but certainly not like a body would. There isn't really much that Box2D allows me to tweak though except for density and restitution? I noticed that playing with the density has basically no effect on how the player moves, which I thought was odd too. Are there any realistic value ranges you would recommend? It's all a bit trial and error right now for me."} {"_id": 22, "text": "What's the best way to handle slopes for a platformer game using Box2D I would like to know if there is any known solution for handling the player's movement on slopes using the Box2D engine. I tried to do it using a circle as the player. Everything was fine until I tried to walk on slopes; the main problem is that, due to gravity, the circle does not stop on the slope. If somebody has tried this before, please help; I'll appreciate it.
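For the slope problem just above, one common trick is to cancel the component of gravity parallel to the surface while the character is grounded, so the circle neither slides downhill nor needs infinite friction. Here is the math only, sketched in plain Python; these are not Box2D API calls, and in Box2D the returned force would be applied each step while a ground contact exists:

```python
import math

def anti_slide_force(mass, gravity, normal):
    """Force that cancels gravity's tangential component on a slope.

    normal is the surface unit normal at the contact point.
    Returns (fx, fy); zero on flat ground, growing with slope angle.
    """
    gx, gy = 0.0, -gravity * mass        # gravity force vector
    nx, ny = normal
    dot = gx * nx + gy * ny              # signed normal component
    # tangential part = total minus normal part; apply its opposite
    tx, ty = gx - dot * nx, gy - dot * ny
    return -tx, -ty
```

The contact normal can be read from the contact manifold; the force magnitude works out to m·g·sin(theta) along the slope, which matches the intuition that steeper slopes need a stronger hold.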
If you have a better solution without the physics engine, that would be fine for me too. Thank you."} {"_id": 22, "text": "Libgdx explain steps for beginner to create and use image as a box2d body I am basically from a Corona background and new to LibGdx, familiar with Stage and Actor from scene2D. Now I want to use physics bodies from box2d in the game besides using Actors from scene2D. But I need to use images in the game as bodies, not shapes. Can anyone please explain the steps to use images as a body? Thanks."} {"_id": 22, "text": "GetContactList stops reporting collisions on welded bodies I have some strange problem with my game which uses Box2D as its physics engine, and I'm out of ideas on what I can do to solve it. My game is a class assignment where I need to build a simple game where the main character moves in a 2D environment while square blocks come from below him. Each time a collision occurs, that block is attached to the character using a weld joint; when three blocks of the same color are together, they annihilate themselves (an effect similar to Bejeweled). I'm using a recursive function to iterate through all the attached blocks of a given block to see if there are enough blocks for them to be deleted. I'm using the GetContactList function to iterate through the list of contacts to see which blocks are adjacent to each other. The results are quite disappointing; the blocks only get annihilated in a few cases. After a lot of debugging, I found the issue, but I still don't know how to solve it. My issue is that after some time, GetContactList STOPS returning contacts (returns NULL) for blocks that were already attached for some time. I spent some time reading the Box2D manual as well as some tutorials and still didn't find any clue of what is happening. Below there's a simplified version of the code that I wrote.
for (int a = 0; a < blocksList.size(); a++) blocksList[a].BuildConnections();
And in BuildConnections:
b2ContactEdge* edge = body->GetContactList();
while (edge != NULL) {
    if (/* long check to see if there's a block nearby */) {
        // add itself to the list to be annihilated
        globalList.push_back(this);
        // if there is, call BuildConnections again on the adjacent block
        adjacentBody->GetUserData()->BuildConnections();
    }
    edge = edge->next;
}
I know that there's another issue related to circular inclusions, but I'm fairly sure that this problem isn't causing the problem with the collisions. You can download my entire code from this page if you'd like: http://code.google.com/p/fellz/source/list"} {"_id": 22, "text": "box2d resize bodies around point I have a compound object, consisting of a b2Body, vector graphics, and a list of polygons which describe the b2Body's shapes. This object has its own transformation matrix to centralize the storage of transformations. So far everything is working quite fine, even scaling works, but not if I scale around a point. In the initialization phase of the object it is scaled around a point. This happens in this order: transform the main matrix, transform the vector graphics and the polygons, recreate the b2Body. After this function ran, the shapes and all the graphics are exactly where they should be, BUT after the first steps of the b2World the graphical stuff moves away from the body. When I ran the debugger I found out that the position of the body is (0, 0). The red dot shows the center of scaling; the first image shows the basic setup and the second the final position of the graphics. This distance stays constant for the rest of the simulation. If I set the position via myBody.SetPosition( sx, sy ), the whole scenario just plays out a bit more distant from the origin. Any idea how to fix this? EDIT: I dug deeper into the problem, and it lies in the fact that I must not scale the transform matrix for the b2Body shapes around the center, but set the b2Body's position back to the point after scaling. But how can I calculate that point?
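On the dx/dy question above: there is a closed form, so no clone matrix is needed. Scaling around a point p moves a translation t to t' = p + s·(t − p), hence the offset between the around-point result and an in-place scale (translation unchanged) is d = t' − t = (1 − s)·(p − t). A small sketch in plain Python, my notation:

```python
def scale_around_point(tx, ty, px, py, s):
    """New translation after scaling a transform by s around point (px, py)."""
    return px + s * (tx - px), py + s * (ty - py)

def correction(tx, ty, px, py, s):
    """(dx, dy) to add to an in-place-scaled transform so it matches
    a scale performed around (px, py): d = (1 - s) * (p - t)."""
    return (1 - s) * (px - tx), (1 - s) * (py - ty)
```

The same formula extends to non-uniform scale by applying it per axis, which covers the "set the b2Body's position back after scaling" step without ever touching a cloned matrix.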
EDIT 2 I dug even deeper and solved it, but this is a slow solution and I hope somebody understands what formula I need. Assuming I have a set of polygons relative to an origin as the basis shapes for a b2Body, scaling the whole object around a certain point is done in the following steps: I scale everything around the center except the polygons; I create a clone of the polygons' matrix; I scale this clone around the point; I calculate dx, dy as the differences clone.tx - original.tx and clone.ty - original.ty; I scale the original polygon matrix NOT around the point; I recreate the body; I create the fixture; I set the position of the body to dx and dy; done! So what I am interested in is a formula for dx and dy without cloning matrices, scaling the clone around a point, getting dx and dy, and finally scaling the vertex matrix."} {"_id": 22, "text": "Making the player walk on walls in box2d I'm making a game in Stencyl where players walk from left to right along randomly generated walls. I can't use waypoints, since the walls' shapes and positions are unpredictable. Here's a descriptive sketch. The red line is the player's path. The red circles are what I thought might make good waypoints until I scrapped the idea."} {"_id": 23, "text": "Do UDK mutators only apply on local games? I have developed a pretty simple mutator. It works exactly as intended in a local game against bots, but it doesn't work (nor do any of the stock mutators) on a listen server. Is this to be expected of UDK (May Beta Release, by the way)?"} {"_id": 23, "text": "Blender UV unwrapping causes texture to repeat in UDK material I've made a pillow-like object, unwrapped it in Blender, and then imported my mesh into UDK and applied a material. The material repeats, but I don't want that to happen; I want to cover the mesh with only one big texture, like this one for example. How do I do that?"} {"_id": 23, "text": "Is it possible to use a spherical collision component in UDK?
I have an object in UDK which has a SkeletalMesh. At certain times in the game, I want this object to continue rendering the SkeletalMesh, but I'd like it to use spherical collision temporarily. After reading a bunch about PrimitiveComponents, my understanding is that UDK supports cylindrical and box-like collision, but not spherical without using a static mesh. EDIT What I have now is a StaticMesh with a material that makes it invisible. I've added a StaticMeshComponent to my Pawns. I can shut off the Pawn's collision and turn on the StaticMesh collision, but it doesn't respond to impulses. I figure I'm missing something in how you turn on the RigidBody physics: CylinderComponent.SetActorCollision(false, false); SetPhysics(PHYS_RigidBody); RBCollisionComponent.SetStaticMesh(RBCollisionMesh); RBCollisionComponent.AddImpulse(InImpulse); RBCollisionComponent.WakeRigidBody();"} {"_id": 23, "text": "Why might a Projectile hitting a KActor only call HitWall, and not ProcessTouch? I have a Projectile subclass that seems to work fine. It travels, does Explode(), etc. But when it hits one of our KActors, I get a HitWall event but not a ProcessTouch event. So my question is, why would a colliding actor trigger HitWall, but not ProcessTouch?"} {"_id": 23, "text": "Hidden Loading with UDK I was wondering, how would I go about creating hidden loading scenes with UDK? For example, a character walks into an elevator, the elevator fakes movement, whilst the previous floor is destroyed and the next floor is loaded on top. I assume it's possible with UDK, since it's supposedly rather flexible, but I've never used UDK before (I decided to ask this question first to save me learning it all, finding out it isn't possible, then giving up). So yeah, is hiding the loading process possible?
And if so, how would I go about doing it?"} {"_id": 23, "text": "Unrealscript splitting a string Note, this is a repost from Stack Overflow (I have only just discovered this site). I need to split a string in UnrealScript, in the same way that Java's split function works. For instance, return the string \"foo\" as an array of chars. I have tried to use the SplitString function: array<string> SplitString( string Source, optional string Delimiter = \",\", optional bool bCullEmpty ) // Wrapper for splitting a string into an array of strings using a single expression. as found at http://udn.epicgames.com/Three/UnrealScriptFunctions.html but it returns the entire String. simulated function wordDraw() { local String inputString; local string whatwillitbe; local int b; local int x; local array<String> letterArray; inputString = \"trolls\"; letterArray = SplitString(inputString,, false); for (x = 0; x < letterArray.Length; x++) { whatwillitbe = letterArray[x]; `log('it will be ' $ whatwillitbe); b = letterArray.Length; `log('letterarray length is ' $ b); `log('letter number ' $ x); } } Output is: b returns 1, whatwillitbe returns trolls. However I would like b to return 6 and whatwillitbe to return each character individually. I have had a few answers proposed; however, I would still like to properly understand how the SplitString function works. For instance, if the Delimiter parameter is optional, what does the function use as a delimiter by default?
I understand that the best method may involve creating a class library and using the DLLBind feature, but I'm still a bit unclear on how to pass data to controls in the UI (I'd like to populate dropdowns and such with records from the database)."} {"_id": 23, "text": "How are the Unreal Development Kit (UDK) and Unreal Engine 4 (UE4) related? I'm thinking of learning Unreal Engine 4, but it costs money, and I want to try and keep costs as low as possible while I'm learning. In contrast, the Unreal Development Kit is free. How similar are the two? If I learn UDK first, how easily can I transition to UE4?"} {"_id": 23, "text": "How can I customize the DefaultInput.ini? I need to make some bindings false. The ones are .Bindings=(Name=\"W\", Command=\"GBA_MoveForward\") and the same for S, A, D, LEFT, RIGHT, and LeftControl."} {"_id": 23, "text": "Subclasses of GameInfo and Input in UDK We have two main subclasses of GameInfo for the two game types we have. I'm wondering if it's possible to get one of these to read a different .ini file, because we'd like to move a few of the controls to different buttons for that mode. It looks to me like it's just going to read Input.ini for this, regardless of the game type."} {"_id": 24, "text": "How to properly rotate towards a local point in Unity C#? (a local LookAt) I am having a nightmare trying to make a child object rotate towards a given point of its parent object, similar to what is possible at the world level when using LookAt. The problem is that most functions related to rotating in Unity C# do not work at the local level. The description of the function transform.Rotate can give the impression that it allows this, through passing the Space.Self parameter. However, to use that function one has to know the 3 angles between the 2 points of interest, and there is no function that allows such a calculation.
Could anyone please help implement a local LookAt?"} {"_id": 24, "text": "Render a 3D scene in multiple windows extended panoramic view Is there any resource on how to view a 3D scene from an application or a game in multiple windows or monitors? Each window should continue drawing from where the neighbouring one left off (in the end, the result should be a mosaic of the scene). My idea is to use a camera for each window and have a reference position and orientation for a meta camera object that is used to correctly offset the other cameras (e.g. like in the above figure, where the render targets of the two cameras reproduce the star when stitched together). Since there are quite some elements to consider (window specs, viewport properties, position and orientation of each render camera), what is the correct way to update the individual cameras considering the position and orientation of the central, meta camera? I currently cannot make the cameras present the scene contiguously (and I am reluctant to work out the transformations without checking whether this is the actual way of doing things)."} {"_id": 24, "text": "Camera view projection issue I made a simple OpenGL program but I can't figure out why the camera is not working; here's a little fragment of the Camera class: public Matrix4f getView() { // initializes the view matrix return new Matrix4f().lookAt( new Vector3f(0f, 0f, 1f), // camera position at (0,0,1) new Vector3f(0f, 0f, 0f), // camera target at (0,0,0) new Vector3f(0f, 1f, 0f)); // up axis set to \"worldUp\" (0,1,0) } public Matrix4f getProjection() { return new Matrix4f().perspective( (float) Math.toRadians(fieldOfView), // the fov has a value between 0f and 180f, by default I set it to 90 viewportAspectRatio, // the aspect ratio is equal to 1024/960 (screen height/screen width)... even if I've not understood what it is... 0.1f, 1000f); } // I've not really understood what near and far planes are...
public Matrix4f getMatrix() { // with this function I obtain the final camera matrix return getView().mul(getProjection()); } And this is how I handle the camera matrix in GLSL, created using camera.getMatrix(): gl_Position = camera * model * vec4(position, 1.0); Without the camera all is fine; here's the program running using gl_Position = model * vec4(position, 1.0); (Yeah, it's a cube.) But using the camera in the way I showed you before, increasing the FOV, I get this. Could anyone look at my code and tell me where I'm wrong? I would be really happy... :D"} {"_id": 24, "text": "GLM Camera attached to model moves in opposite direction from the model I have been working on a component based engine with nested game objects, each with their own transformations. Each game object calculates its position in the world based on its parent's world transformation multiplied by its own local one (ModelsWorldTransform = ParentsWorldTransform * LocalTransform). This all works great until I attach the camera component to a model; in this scenario, as the model the camera is attached to moves away from the origin, the camera moves in the opposite direction to the movement, whilst still seeming to be in the correct position according to its mat4. All local transformations are stored in a mat4 rather than three vec3's for position, rotation and scale, to save each update from having to generate new mat4's. Transformation component update code: m_world_transformation = m_local_transformation; // If we have a parent object if (m_parent_transform_component) m_world_transformation = m_parent_transform_component->GetWorldTransformation() * m_world_transformation; Camera position buffer being updated: BufferData.view = m_transformation_component->GetWorldTransformation(); m_camera_buffer->SetData(); I'm not sure what I'm missing; if anyone has any suggestions that would be appreciated."} {"_id": 24, "text": "Camera LookAt position with fixed screen position I have a perspective \"lookAt type\" camera.
I'm trying to compute a specific focus point of the camera, which should be placed in the middle of the screen. I also have a custom 3D point in world space, margin values in screen space, and the camera's tilt as inputs. I want to compute the lookAt point and camera distance, with the restriction that the input world-space point has to be visible in the middle of the free screen-space area (not affected by the margin values) and the camera has to have the given tilt. I'm able to compute this in 2D, but I'm lost in 3D. Any idea how to achieve this? Here's a sketch of the problem; the red and blue areas are margin values of 0.5f in both the x and y directions."} {"_id": 24, "text": "How to avoid gimbal lock I am trying to write code that rotates an object. I implemented it as: rotation about the X axis is given by the amount of change in the mouse's y coordinates, and rotation about the Y axis is given by the amount of change in the mouse's x coordinates. This method is simple and works fine until one of the axes coincides with the Z axis, in short, a gimbal lock occurs. How can I utilize the rotation around the Z axis to avoid gimbal lock?"} {"_id": 24, "text": "Relationship between Camera's view size, screen resolution and objects scale I've had this question for a long time and wanted to know the relationship between how an object is viewed through the camera depending on the resolution of the device, the camera's view size and the object's scale. So for example let's say I have a 100x100 square in the middle of the screen, the camera is looking directly at it, and let's say the square takes up 10% of a specific screen. If I wanted it to keep that relationship with the screen, do I have to change the square size depending on the screen size, change the camera's width and height, or change the resolution of the game?
In short, how do the resolution and the camera's width and height affect the scale and proportions of the objects on the screen?"} {"_id": 24, "text": "Blueprint For Switching Between Cameras In The Unreal Engine So basically I have a scene in the Unreal Engine which has multiple cameras within it. The cameras are intentionally static, as the application should behave more like a viewing gallery than a game. As the application is intended for mobile devices, the idea is that when a person touches the screen of their device, the rendering switches from the present camera to another one to show a different image. I have attempted to implement this action in the level blueprint with the following blueprint design (closer up). Obviously in the scene there are three camera actors which must be switched between, with the first camera viewed through being the 1st CameraActor, hence the connection of the EventBeginPlay node to the Set View Target With Blend node that the 1st camera actor is connected to. When the Play button is pressed, the scene viewed from the first camera is rendered to the screen as desired; however, when the left mouse button is clicked (equating to touch input on mobile devices), the scene does not switch to the second camera, and the question is why?"} {"_id": 24, "text": "How can I emulate Diablo 3's Isometric view using Perspective? Using DX11 and SimpleMath, I am building an isometric game like Diablo 3 in 3D, and I want to use a perspective camera that emulates their top down view. Projection = Matrix.CreatePerspectiveFieldOfView(1, width / height, 0.1f, 100.0f) But after this I am a bit unsure how I am supposed to rotate the camera. I assume I could just do p + Vector3(10, 10, 10) to get a 45 degree angle at any given p. How do I properly position and rotate the camera and point it at position p, to mimic the Diablo 3 view?"} {"_id": 24, "text": "Drag 3D camera with mouse cursor I have a camera that is moved by changing its X and Z coordinates.
However, the camera is rotated, so when an object moves by 1 unit in the world, the number of pixels it moves by isn't trivial to determine. How could I move the camera so that if the pointer is at a set point, that point will be under the cursor after the cursor moves? I have a PerspectiveCamera and the number of pixels that the pointer was dragged in each direction (X and Y) on the screen."} {"_id": 25, "text": "Will using a name very similar to a current Marvel superhero name for your game be a problem? I have heard that the word 'superhero' is trademarked by Marvel and DC and thus we can't use superhero as part of the name of a game we develop. But I wonder, do they also have exclusive rights to all the names of their superhero characters? For example, will people get in trouble if they name their games 'premagneto', 'superwolverine', 'exbeyonder' or 'the cyclop'?"} {"_id": 25, "text": "Is it legal to imitate a game without publishing it If you're developing a game that's very similar to a game that's already published, in both the game mechanics and the graphics, but you're not publishing it, instead keeping it on your own computer, is it legal to do so?"} {"_id": 25, "text": "Legal issues with NBA player names, profiles, logos? I am searching for this topic and I found a similar question (see Legal issues around using real players names and team emblems in an open source game), however I have specific questions and don't know where to ask. So I'll make an NBA quiz mini game and I need to know if there might be trouble using player profile images, names, team names and logos. I found about a dozen such mobile apps which use real names; some also use photos as well. In case I need permission or have to check the conventions, whom should I contact? Where can I find such regulations? Let's say I am allowed to use profile images, can I use the official images found on nba.com or Yahoo or any sports website? If not, where can I get all the player photos?
I sure can't capture them one by one. :) As for team logos, there are logo quiz apps which mention hundreds of logos of famous international companies; I don't think the app owner contacted each company. Any related information would help. Thanks"} {"_id": 25, "text": "How not to break licence laws? I have an idea for a game, and I have almost everything worked out concerning the coding. What interests me the most is how I can know whether that game can be published. As it would be for iOS and Android, these markets are of the greatest interest for me. I found on Apple's store something similar to what I would make, and it's published by some big game developer. How can I be sure I am not breaking any laws when publishing the game? Can I make a game like the board game 'Risk'? I don't have any knowledge concerning licensing. This game would be available for free, with an option to donate, so money isn't my objective (but it won't hurt)."} {"_id": 25, "text": "Which open source licenses can address these concerns for an open source game engine? I am on a team that is looking to open source an engine we are building. It's intended as an engine for online RPG style games. We're writing it to work on both desktop and Android platforms. I've been over to the OSI licenses page (http://opensource.org/licenses/category) to check out the most common licenses. However, this will be my first time going into an open source project and I wanted to know if the community had some insight into which licenses might be best suited. Key licensing concerns: Removing or limiting our liability (most already seem to cover this, but stating for completeness). We want other developers to be able to take part or all of our project and use it in their own projects with proper accreditation to our project. Licensing should not hinder someone's ability to quickly use the engine. They should be able to download a release and start using it without needing to wait on licensing issues. Game content (gfx, sound, etc.)
that is not part of the engine should be allowed to be licensed separately. If someone is using our engine, they can retain full copyright of their content, including engine-generated data. Our primary goal is exposure, which is why we're going open source to start with, both for the project and for the individuals developing it. Are there any licenses that can require accreditation visible to players? Update: As per the comments, I mean this to be a requirement for either a splash screen or placement in the credits, not a constantly visible logo. While I'd put our primary goal as exposure, for licensing the accreditation is less of a concern. From what I've read through (and have been able to understand), it doesn't seem like any of the licenses cover anything that is produced by the licensed software. Are there any that state this specifically, or does simply not mentioning it leave it open for other licensing? Are there any other concerns that we should consider? Has anyone had any issues using any of these licenses?"} {"_id": 25, "text": "About to release a free game on Google Play Store, any legal considerations I should take into account? I will release a small free game on the Google Play Store in the next few weeks. I was wondering if there were any important legal aspects I should consider before release. Here are a few details: I'm doing this alone. There's no code, graphic assets, or gameplay concepts I would like to protect in particular. I don't have any issues with people copying something from my game. I'm not sure right now, but maybe I'll add the possibility to make a small donation (so there will be IAP in a sense). I'm using a music track from http://opengameart.org . The author has specified that it isn't necessary to credit him, though I would like to."} {"_id": 25, "text": "Releasing a game without really having an officially registered company I'm thinking of releasing my iOS game without formally registering a company at all.
I'm not sure I even want to create a company. iTunes wants to know the copyright holder of my title. I have 2 questions regarding this: 1) Can I use my company name (already decided upon, if it will ever exist) and just register it later (if the need arises)? 2) Or should I just use my NAME as the copyright holder, regardless of whether I plan to formally register the company later or not?"} {"_id": 25, "text": "Art Style Guide External Image Sources? I've been looking into how to structure an art style guide (ASG); it's a document which helps guide artists on how a project's visuals should look. They often have sketches and illustrations from artists working on the project, though I am unsure about using external images. I am working on a commercial videogame project and I have been writing both a game design document (GDD) and currently an art style guide. Is it legal for me to add artwork I came across while browsing the internet into the ASG? I know that most artists can add external images outside of their project into a section talking about inspirations, but could this be done for areas that loosely talk about, say, an overall aesthetic of something, such as creature or environment design? For example, say I have a section which talks about the design of fantasy monsters in a chapter about characters. Would it be legally viable to use various images I come across to help show artists what I'm looking for in each creature? I'm guessing that it is legally viable because I don't think I'd be profiting off of an ASG. I hope somebody can verify this for me; I don't want to cause any trouble. I apologize for making this a bit wordy; I wish I knew how to articulate myself better.
I have contacted the owners and they will sell it to me for a fixed price to use commercially. However, I am on a very limited budget and don't really want to pay up front for the assets. What I want to know is how to defer the costs, contingent on whether I make any money. In other words, they give me the models, and then I promise to pay them back out of the first revenue. If the project dies, I'm not out any money. Other than my word as an honest guy, how could I convince them to agree to this? Would they charge me more because of the risk involved?"} {"_id": 25, "text": "Licensing Theme Music from other games As part of my game, I thought it would be fun to make a hidden level that pays tribute to Mario Bros (one of the earliest games I ever played). It would be themed in that way, with 8-bit graphics and question mark blocks, and completing the level would say \"Thank you but the princess is in another castle\" or such. For the soundtrack, I'm thinking of just overlaying the standard Mario theme music by playing it on a virtual keyboard using a different instrument or timing or something. My question is, am I legally safe? I'm not using anyone else's actual music; I'm just playing the same tune in a different way myself. Do I have to get licensing for this?"} {"_id": 26, "text": "SQLite Query from unknown table based on known column value single query I'm working on a database to back my online TCG. It's currently SQLite, but I'm going to transition it to a MySQL database later. Currently, I have three distinct types of cards, each with its own unique fields, for example the tables below. However, all cards have a single unique id that I use to refer to them.
TABLE cards (CARD_ID as INT, CARD_TYPE as INT, TYPE_ID as INT); TABLE creature_cards (CARD_ID as INT, TYPE_ID as INT, NAME as TEXT, FLAVOR as TEXT, ATTACK as INT, DEFENSE as INT, ELEMENT as INT, FLAGS as INT, MANA_COST as TEXT); TABLE spell_cards (CARD_ID as INT, TYPE_ID as INT, NAME as TEXT, FLAVOR as TEXT, SPELL_TYPE as INT, ELEMENT as INT, MANA_COST as TEXT); TABLE mana_cards (CARD_ID as INT, TYPE_ID as INT, NAME as TEXT, FLAVOR as TEXT, ELEMENT as INT). I'm trying to figure out how to build a query that will allow me to get all fields from a specific table without knowing the table I need to pull from ahead of time. Basically, I want to be able to supply a CARD_ID, query the cards table, and use the CARD_TYPE and TYPE_ID to return the row from creature_cards where CARD_TYPE is 1, spell_cards where CARD_TYPE is 2, and mana_cards where CARD_TYPE is 3. I'm currently doing this through two separate queries and doing the logic on the backend, but I'm wondering if there's a way to get the data all in one query, because my queries can be quite time consuming. Would it just be better to jam all card fields into one table, rather than keeping them in three separate tables?"} {"_id": 26, "text": "What time to display in text messages in multiplayer game? Say I have a multiplayer RTS game. There's a main server for each individual game and several clients connected to it. All packets are sent to the server first, and then the server retransmits them back to the clients. Say the server is located in one time zone and all of the clients are in different time zones. ClientA sends a text message in chat at 12:03; what times should be stamped for the other clients? Should his message be uniformly timestamped by the server (12:02), or should each client timestamp the message whenever it is received (12:04, 16:04, 03:03, etc.)? Bear in mind that all the messages are to be in the same order on all clients; the server takes care of that.
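For what it's worth, a common middle ground (a hedged sketch, not from the post; the helper names are made up): stamp each message exactly once on the server in UTC, so ordering is identical everywhere, and let each client convert that single authoritative stamp to its own time zone purely for display.

```python
from datetime import datetime, timezone, timedelta

def server_stamp():
    # The server stamps every chat message once, in UTC,
    # independent of any client's time zone.
    return datetime.now(timezone.utc)

def display_time(utc_stamp, client_utc_offset_hours):
    # Each client converts the single authoritative stamp
    # to its own zone only when rendering the message.
    local = utc_stamp + timedelta(hours=client_utc_offset_hours)
    return local.strftime("%H:%M")

# The same message renders differently per client but sorts identically:
stamp = datetime(2024, 1, 1, 12, 3, tzinfo=timezone.utc)
print(display_time(stamp, 0))   # "12:03"
print(display_time(stamp, 4))   # "16:03"
```

Because every client sorts by the one server stamp, message order never disagrees between clients, while each user still sees a familiar local clock time.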
So that's the question: use local time for each client, or use global server time to timestamp chat messages?"} {"_id": 26, "text": "Hobbyist game dev, want to create async multiplayer game. Are server costs manageable? I want to make the game I want to play, and that game happens to be an asynchronous multiplayer game. Think Hero Academy or Hearthstone, where there are 2 players and each submits moves to a central server. Alternatively (and not ideally), clients connect to each other and play is limited to a fixed window of connection, but there still needs to be a central server to acquire new opponents. That said, this is very much going to be a hobby type endeavor; I'm just one guy after all. This is not going to commercially support itself. If I ever have 100 players, that would be fantastic. Given that I'm going to have to eat the server costs myself, what are my options for getting a server? I don't need much bandwidth or CPU, due to the nature of async games: only the sending of the game state / current move, and perhaps some checking of legal moves. Going to Amazon or getting dedicated hosting seems drastically overkill for what I want. If it makes a difference, I'm intending to learn and use Unity for this."} {"_id": 26, "text": "Multiplayer Game Node.js syncing other players' position I'm creating a multiplayer platformer game with 4 players in multiple rooms. I've read a lot of articles and researched client prediction and lag compensation. I think I can somehow manage to understand how to update \"my client\". But what would be the best approach in a multiplayer platformer game to update \"other clients\"? This is what I'm thinking: Approach 1: the client sends input \"left\" to the server; the server increases the velocity of the player; the server sends the actual input \"left\" of this client to all other clients; other clients run the physics on the input.
Approach 2: the client sends the input to the server; the server adds velocity and updates the state; the game loop on the server updates the position of every single client and sends it back to everyone, to all clients; each client gets it and updates. Which would be the better approach? Or please, if there is a better approach, tell me, I am very stuck."} {"_id": 26, "text": "How can I prevent cheating in a distributed multiplayer game? A problem I've been thinking about recently is how it might be possible to create a multiplayer game without a centralised game server. Is it possible to distribute \"server\" responsibilities across players? I feel like it would be too easy for modified hacked clients to change the multiplayer experience; are there any ways around this?"} {"_id": 26, "text": "What is the best time to implement Multiplayer system? I have been starting a new online multiplayer game project this week, and as the title says: is it better to implement the multiplayer feature right away, or is it ok to implement it when the game already has some other stuff, like maps and physics?"} {"_id": 26, "text": "What data to exchange in multiplayer real time games? I am a hobbyist programmer and right now I am curious about what data is exchanged in a multiplayer session in real time games like StarCraft 2. I did a bunch of searches. I found gafferongames.com offering a very good overview of the issues to consider. Glenn in his article and comments gives a very strong case for using UDP over TCP, but SC2 obviously uses TCP. To quote Glenn: \"The problem with using TCP for games is that unlike web browsers, or email or most other applications, multiplayer games have a real time requirement on packet delivery. For many parts of your game, for example player input and character positions, it really doesn't matter what happened a second ago, you only care about the most recent data.\" So from his statement, I am guessing that his approach is to send the full game state of every unit on each frame.
If the server does not receive a player's input on the current frame, then it's just bad luck for that player. For God of War Ascension, on which he is lead network dev, this should work quite well I guess. For SC2, due to its replay capability, my gut feeling tells me that the underlying engine is a deterministic fixed-timestep \"user input playback machine\", where the only data exchanged is player inputs. Hence Glenn's statement may be completely irrelevant for SC2. Player input is important, and input sequence is even more important. I don't think it's feasible for SC2 to send the game state of 200 units and more at 30-60 FPS. Question: I could be wrong, but I have attempted to identify 2 possible types of data. What are other techniques? It would be good to quote the game if you will. EDIT: found this link about the StarCraft networking model"} {"_id": 26, "text": "Why do console versions of multiplayer games support fewer players than PC versions? Source: http://answers.yahoo.com/question/index?qid=20111205162848AAb1sl9 64 players on PC (32vs32) and bigger maps with more vehicles and buildings; 24 on consoles (12vs12) (it was planned to have a 32 player limit, but it was later reduced to 24). Why does the console version have a maximum of 24 players? Is it hardware related? One of the reasons given was that players can host games themselves. If that's true it's very sad that you cannot play 32vs32, because one or two guys want to host games."} {"_id": 26, "text": "Can various browsers be assumed to maintain predictable state accurately in multiplayer online gaming? With many games it is said that the server will assume that clients keep track of the world accurately. Assuming this is true, for a browser based multiplayer Space Invaders game you would only tell the client when new bullets appear or the player's ship moves, and everything else behaves in a predetermined manner in the JS client. It would be expected that positions would be the same in the browsers.
Do you think you could trust browsers to do this? I feel that timings could differ between rendering loops and cause positions to get out of sync, and I might just have the server maintain all the positions to make sure."} {"_id": 26, "text": "Client side prediction on FPS game I've recently attempted to develop a simple client prediction for an FPS based on Gaffer on Games' famous blog (http gafferongames.com game physics networked physics ). Now I've gotten to the point that everything works (more or less); my main problem is matching the message sent from the server with the appropriate snapshot on the client. I can use the last average ping time to find a very near state, but it will never be at exactly the same point in time as on the server. So my question is: how exactly can I sync and find the timestamp sent from the server to the client and find which snapshot is the correct one on the client?"} {"_id": 27, "text": "Is it possible to develop a game with Lua LÖVE and have the source code compiled? I've been looking at Lua and LÖVE for developing simple 2D games. Lua is interpreted, and I know it can be compiled to some point, but how secure is that against decompiling? Or is there a better way to distribute the game?"} {"_id": 27, "text": "How can I fix this code to spawn multiple types of enemies on the same point with the Corona SDK? I have this code to spawn a simple enemy outside the window local function spawnGrupo1() local color01 = display.newImage(\"assets/img/enemigos/blue.png\") color01.x = 40 color01.y = 50 physics.addBody( color01, { density = 5.0, friction = 0.0, bounce = 0 } ) end timer.performWithDelay( 1000, spawnGrupo1, 0) But I want to be able to spawn multiple enemies from that point, and I can't figure it out. I tried this, but it doesn't work local function spawnGrupo1() local numero = math.random(1, 3) if numero == 1 then local color01 = display.newImage(\"assets/img/enemigos/blue.png\") end if numero == 2 then local color01 = display.newImage(\"assets/img/enemigos/yellow.png\") end if numero == 3 then local color01 = display.newImage(\"assets/img/enemigos/red.png\") end color01.x = 40 color01.y = 50 physics.addBody( color01, { density = 5.0, friction = 0.0, bounce = 0 } ) end timer.performWithDelay( 1000, spawnGrupo1, 0) How can I approach the problem?"} {"_id": 27, "text": "how to display current time as a static value in lua pico8 Pico8 has a function time() that when called displays the current time from the start of the program, i.e. print(time(),0,0,14) prints the time at (0,0) with colour 14. However the function doesn't stop and keeps drawing the time each frame. I'm trying to figure out how I would draw the time without it increasing/changing. So if I printed this 5 seconds from the start of the program I'd want it to display 5, but not change from that 5 value. https pico 8.fandom.com wiki Time I don't know how to store a static value of this time as a variable. Although according to the wiki, assigning var = time() will cause the time at 0 to be stored. Another way of phrasing this is... how would I display the time of 10 seconds when the following print is triggered? So after 10 seconds \"time's up\" is displayed. How would I display the current time (10) as a static value as well? Maybe I don't know what the time is when the event is triggered, so how can I count using time()? function _init() last = time() end function _update() end -- empty update to use the game loop function _draw() cls() if (time() - last) > 10 then print(\"time's up!\", 44, 60, 7) end end"} {"_id": 27, "text": "Love2D how to zoom out map? I've set up a basic map with Tiled, imported it with the STI library (https github.com karai17 Simple Tiled Implementation) and everything looks in place. How can I get a zoom out feature now?
I can kinda zoom in using scalex and scaley and I tried the same procedure to zoom out. However, if I set e.g. scalex = 0.5, scaley = 0.5 the map actually scales in size, but it only fits 1/4 of the screen size, and the remaining 3/4 is just black space."} {"_id": 27, "text": "var = time() vs time() = var pico8 lua time() is a preset function in the pico8 game engine. I don't understand whatsoever why I get an error when I write time() = var But it's fine if I write (where var is any variable) var = time() What are the rules dictating order? This makes no sense to me. For some context consider a time function time_diff = 0 function time_change() if time() - time_diff > 1 then time_diff = time() end end If you change it to time() = time_diff ... it generates an error."} {"_id": 27, "text": "Game state management (Game, Menu, Titlescreen, etc) Basically, in every single game I've made so far, I always have a variable like \"current state\", which can be \"game\", \"titlescreen\", \"gameoverscreen\", etc. And then in my Update function I have a huge if current_state == \"game\" then game stuff ... elseif current_state == \"titlescreen\" then ... However, I don't feel like this is a professional, clean way of handling states. Any ideas on how to do this in a better way? Or is this the standard way?"} {"_id": 27, "text": "Any advantage of having chunks with sizes by the power of two? With my past experience of having my little game lag because of the size of the world, I have decided that in whatever next project I might choose to create, I will split said world into chunks. Now here comes my question: should the chunk's width, height and depth be a power of two? For example, in a little game called Minecraft, chunks have a height of 256 and a width/depth of 16. Both numbers are powers of two. Is there any advantage over having the chunk simply be 100x100x100? Minecraft was written in Java, but I'm working in Lua.
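One common alternative to the big if/elif chain in the state-management question above is a table of state objects keyed by name, each owning its own update logic; the main loop just dispatches to whichever state is current. A hedged Python sketch (class and method names are invented for illustration):

```python
class TitleScreen:
    def update(self, game):
        game.log.append("title")   # stand-in for title-screen logic

class Playing:
    def update(self, game):
        game.log.append("game")    # stand-in for gameplay logic

class Game:
    # The per-state branching lives in the state objects themselves,
    # so the main loop never grows a new elif per screen.
    def __init__(self):
        self.states = {"title": TitleScreen(), "game": Playing()}
        self.current = "title"
        self.log = []

    def update(self):
        self.states[self.current].update(self)

g = Game()
g.update()            # dispatches to TitleScreen.update
g.current = "game"    # switching state is a single assignment
g.update()            # dispatches to Playing.update
```

The same shape works in Lua with a table of tables; adding an enter/exit method per state is the usual next step.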
Just pointing that out in case it has any relevance."} {"_id": 27, "text": "Why is the button's code not running? My script for a slot machine isn't working. When the button in the Billboard GUI is clicked, the code below doesn't run. I am new to scripting and this is mostly based on wiki, Stack Exchange and Dev page info. local player = game.Players.LocalPlayer local textLabel = script.Parent local toggled = false local button = script.Parent textLabel.Text = \"Click to Play for 5 Cash!\" slot1Combos = {\"Bar\", \"Seven\", \"Seven\", \"Seven\", \"Cherry\", \"Cherry\", \"Cherry\", \"Cherry\", \"Orange\", \"Orange\", \"Orange\", \"Orange\", \"Orange\", \"Lemon\", \"Lemon\", \"Lemon\", \"Lemon\", \"Lemon\", \"Banana\", \"Banana\", \"Banana\", \"Banana\", \"Banana\"} slot2Combos = {\"Bar\", \"Seven\", \"Cherry\", \"Cherry\", \"Cherry\", \"Orange\", \"Orange\", \"Orange\", \"Orange\", \"Orange\", \"Orange\", \"Lemon\", \"Lemon\", \"Lemon\", \"Lemon\", \"Lemon\", \"Lemon\", \"Banana\", \"Banana\", \"Banana\", \"Banana\", \"Banana\", \"Banana\"} slot3Combos = {\"Bar\", \"Seven\", \"Cherry\", \"Cherry\", \"Cherry\", \"Orange\", \"Orange\", \"Orange\", \"Orange\", \"Orange\", \"Orange\", \"Lemon\", \"Lemon\", \"Lemon\", \"Lemon\", \"Lemon\", \"Lemon\", \"Banana\", \"Banana\", \"Banana\", \"Banana\", \"Banana\", \"Banana\"} local function onButtonActivated() if toggled == false then toggled = true x = math.random(1,23) y = math.random(1,23) z = math.random(1,23) g = false player.leaderstats[currencyName].Value = player.leaderstats[currencyName].Value - 5 if slot1Combos[math.floor(x)] == \"Bar\" and slot2Combos[math.floor(y)] == \"Bar\" and slot3Combos[math.floor(z)] == \"Bar\" then player.leaderstats[currencyName].Value = player.leaderstats[currencyName].Value + 60 textLabel.Text = \"You Got 3 Bars. You won 60 cash\" wait(5) textLabel.Text = \"Click to Play for 5 Cash!\" elseif slot1Combos[math.floor(x)] == \"Seven\" and slot2Combos[math.floor(y)] == \"Seven\" and slot3Combos[math.floor(z)] == slot2Combos[math.floor(y)] then player.leaderstats[currencyName].Value = player.leaderstats[currencyName].Value + 40 textLabel.Text = \"You Got 3 Sevens. You won 40 cash\" wait(5) textLabel.Text = \"Click to Play for 5 Cash!\" elseif slot1Combos[math.floor(x)] == slot2Combos[math.floor(y)] and slot3Combos[math.floor(z)] == slot2Combos[math.floor(y)] then player.leaderstats[currencyName].Value = player.leaderstats[currencyName].Value + 10 textLabel.Text = \"You Got 3 of a Kind. You won 10 cash\" wait(5) textLabel.Text = \"Click to Play for 5 Cash!\" elseif slot1Combos[math.floor(x)] == \"Cherry\" and slot2Combos[math.floor(y)] == \"Cherry\" and slot3Combos[math.floor(z)] == \"Cherry\" then player.leaderstats[currencyName].Value = player.leaderstats[currencyName].Value + 20 textLabel.Text = \"You Got 3 Cherries. You won 20 cash\" wait(5) textLabel.Text = \"Click to Play for 5 Cash!\" elseif slot1Combos[math.floor(x)] == \"Cherry\" and slot2Combos[math.floor(y)] == \"Cherry\" or slot3Combos[math.floor(z)] == \"Cherry\" and slot1Combos[math.floor(x)] == \"Cherry\" or slot2Combos[math.floor(y)] == \"Cherry\" then player.leaderstats[currencyName].Value = player.leaderstats[currencyName].Value + 3 textLabel.Text = \"You Got 2 Cherries. You won 3 cash\" wait(5) textLabel.Text = \"Click to Play for 5 Cash!\" elseif slot1Combos[math.floor(x)] == \"Cherry\" or slot2Combos[math.floor(y)] == \"Cherry\" or slot3Combos[math.floor(y)] == \"Cherry\" then player.leaderstats[currencyName].Value = player.leaderstats[currencyName].Value + 3 textLabel.Text = \"You Got 1 Cherry. You won 3 cash\" else textLabel.Text = \"Sorry, you didn't win\" wait(5) textLabel.Text = \"Click to Play for 5 Cash!\" end end button.Activated:Connect(onButtonActivated) end) This is the explorer tree with the script highlighted"} {"_id": 27, "text": "Obscuring stored info in a flat text file I have an idea for an addon for World of Warcraft which would basically be a minigame within the game itself. Eventually, I'd like to have players be able to compete against each other directly. The game would have some RPG like elements, particularly statistics and abilities, which it would be undesirable for the end users to modify. So the fact that this is a client side script, where everything (logic and data storage) is in flat text files, means that it's impossible for me to truly secure things. But I'd like to at least make it non trivial to alter things, e.g. so that you can't just open up a file in notepad and type in 'Strength = 256'. I'm looking for any ideas on how I might obfuscate the data, preferably something which is easy to implement as it's not really what I feel like spending my time on, but probably more sophisticated than ROT13. As an example, one idea I had, which I'm not sure is feasible (I have never made an addon for WoW before), is to serialize the data in base64."} {"_id": 27, "text": "How to use LuaJIT the same way as Lua in a C program? I'm using Lua in my C program, as a library. But I read that LuaJIT is a better implementation. Is it possible to replace it with LuaJIT with little change? How?"} {"_id": 28, "text": "Finding equivalent axial coordinates for a wrapping hexagonal map of radius n I'm creating a wrap around hexagonal map that will potentially render infinitely. With the method I'm using, I have an x y coordinate I use to find its equivalent axial coordinate with the equations Q = (sqrt(3)/3 * x - 1/3 * y) / hexSize R = (2/3 * y) / hexSize The problem is, my map is of radius n and it's possible for the Q,R coordinate to be far outside that radius.
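The equations quoted in the axial-coordinate question above are the standard pointy-top pixel-to-axial conversion. Here is that conversion as a runnable sketch; the cube-rounding step at the end is an assumption added for completeness (the fractional result must be snapped to an actual hex), not part of the original question:

```python
import math

def pixel_to_axial(x, y, hex_size):
    # Fractional axial coordinates from the question's formulas
    # (pointy-top layout): q = (sqrt(3)/3*x - 1/3*y)/size, r = (2/3*y)/size.
    q = (math.sqrt(3) / 3 * x - 1 / 3 * y) / hex_size
    r = (2 / 3 * y) / hex_size
    return cube_round(q, r)

def cube_round(q, r):
    # Snap fractional axial coords to the nearest hex via cube coordinates,
    # repairing whichever component drifted most so that x + y + z == 0.
    x, z = q, r
    y = -x - z
    rx, ry, rz = round(x), round(y), round(z)
    dx, dy, dz = abs(rx - x), abs(ry - y), abs(rz - z)
    if dx > dy and dx > dz:
        rx = -ry - rz
    elif dy > dz:
        ry = -rx - rz
    else:
        rz = -rx - ry
    return (rx, rz)   # back to axial (q, r)
```

With the conversion in hand, the wrapping step reduces to mapping an out-of-range (q, r) back into the radius-n region, which is a separate (and harder) problem.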
How can I find the equivalent Q,R coordinate within the radius n? I know methods for a single radius outside, but I'm talking about the potential of having a radius of 15 but a Q,R of something like (9999999,99999999). What would that coordinate be in relation to the center of the 15 radius hex region it'd be in?"} {"_id": 28, "text": "Turn Based AI Algorithm (Small Board, Two Steps) This is my Game Board. The Red Balls are the AI Controlled Actors. The Blue Balls are the Player Controlled Actors. The Yellow Cells are the locations from which the Red Balls can attack. Each Red Ball can do 2 actions: move+move, move+attack, or attack+attack. At no time can there be 2 minions on one cell, but a minion can arrive on a cell another just left. It's the AI's turn. It's planning the whole turn for all minions, each of them has the two moves. The objective is to maximize the number of melee attacks which can be performed in a single turn (2 actions) on the player balls. The problem I have in my current (brute force) implementation: sometimes it is advisable for a minion to move, although it is already in a melee position, to make space for another minion, which can then move into melee range as well. Example: in this situation it's more effective for the front minion to move once, so another minion can move into melee range as well. Do you know an effective algorithm for this job? Thank you for your time!"} {"_id": 28, "text": "Prevent collisions between mobs, npcs and units piloted by computer AI How to avoid mobile obstacles? Let's say we have character a starting at point A and character b starting at point B. Character a is headed to point B and character b is headed to point A. There are several simple ways to find the path (I will be using Dijkstra). The question is, how do I take preventative action in the code to stop the two from colliding with one another? Case 2: characters a and b start from the same point at different times. Character b starts later and is the faster of the two.
How do I make character b walk around character a without going through it? Case 3: let's say we have m such characters on each side and there is sufficient room to pass through without the characters overlapping with one another. How do I stop the two groups of characters from \"walking on top of one another\" and allow them to pass around one another in a natural, organic way? A correct answer would be any algorithm that, given the path to the destination and a list of mobile objects that block the path, finds an alternative path or stops, without stopping all units when there is sufficient room to traverse."} {"_id": 28, "text": "How to generate Bayer matrix of arbitrary size? In ordered dithering a Bayer matrix is used. How is that matrix generated? What algorithm can be used to generate a matrix of arbitrary size?"} {"_id": 28, "text": "How can I write my scoring system and add a score multiplier? I am new to the scripting world and I am currently trying to add a score multiplier to the game code I am working on. Right now the score is set to increase as the player runs across the level; however, I want to set it up so that the player gains X amount of score for every Y amount of distance traveled. I also want to add a multiplier for when they come into contact with certain objects for X amount of time. Any help would be greatly appreciated."} {"_id": 28, "text": "How to find all possible destinations within a certain walking distance of a unit? I have a standard 2D grid that I use for pathfinding. I already use A* to find the path of my units when I want to move them around. What I need to implement now is a preview of all possible destinations of the unit when it is selected. Here is what I want to achieve. The yellow/orange cells represent the possible destinations. I want to make this as optimized as possible, because it can be executed on very big maps and for multiple units. I have something in mind but I wonder if there is a better approach for this.
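The Bayer-matrix question above has a standard recursive answer: the 2n x 2n matrix is built from four scaled copies of the n x n one, which is why canonical Bayer matrices exist only for power-of-two sizes. A sketch:

```python
def bayer(n):
    # Returns the n x n Bayer threshold matrix, n a power of two.
    # Recurrence applied entry-wise: M(2k) = [[4M+0, 4M+2],
    #                                         [4M+3, 4M+1]]
    if n == 1:
        return [[0]]
    half = bayer(n // 2)
    m = [[0] * n for _ in range(n)]
    for i in range(n // 2):
        for j in range(n // 2):
            v = 4 * half[i][j]
            m[i][j] = v                      # top-left: 4M + 0
            m[i][j + n // 2] = v + 2         # top-right: 4M + 2
            m[i + n // 2][j] = v + 3         # bottom-left: 4M + 3
            m[i + n // 2][j + n // 2] = v + 1  # bottom-right: 4M + 1
    return m
```

Each n x n matrix contains every value 0..n*n-1 exactly once; for truly arbitrary (non-power-of-two) sizes there is no canonical Bayer matrix, and other ordered-dither masks (e.g. blue-noise masks) are used instead.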
Here is my idea: get a list of all cells that are walkable in a square around my unit. The square will be from Unit.X - Unit.Speed to Unit.X + Unit.Speed on the X axis and Unit.Y - Unit.Speed to Unit.Y + Unit.Speed on the Y axis. For each element in this list, execute an A* search to see if it is possible to find a path. By doing the search for only the cells that are within my unit's range I should be able to do this on big maps. Basically this is the same feature that we have in games like Heroes 3 and many other turn based games. Screenshot. I was also wondering when it is a good idea to execute this pathfinding logic: when you select a unit (which might cause a spike), or when something moves on the map, to update the list of all possible destinations of all units on the map."} {"_id": 28, "text": "Calculate 8 different directional input based on arrow keys combinations Considering I have four variables event-bound to each arrow key, each of which can be 0 or 1. My current approach to this issue is simply 8 nested ifs checking each combination of keys. Is there a more math-y, clever way to solve this issue?"} {"_id": 28, "text": "Action oriented AI evasion algorithm takes much time Evasion, the process of evading, is the opposite of chasing. Instead of trying to decrease the distance to the target we try to maximize it. It takes a lot of time when evading multiple objects simultaneously. I use BFS here. To make it faster, what algorithms should I use?"} {"_id": 28, "text": "Is there a known most efficient version of A* search algorithm? Is there a known 'most efficient' version of the A* search algorithm? I know some people write papers on the most efficient way to compute common operations; has this been done for A*?"} {"_id": 28, "text": "Real Time Dynamic Pathfinding? I'm currently doing some pathfinding research and my simulation is the following: I have a 3D scene with a start and end point represented, and I'm capable of creating navigation meshes, waypoints and polygons to aid with pathfinding.
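The bounded-square-plus-per-cell-A* idea in the movement-range question above is usually replaced by a single uniform-cost (Dijkstra) flood from the unit that stops at the movement budget: every reachable cell is visited once, instead of running A* once per candidate cell. A hedged sketch on a 4-connected grid (the grid encoding is an assumption):

```python
import heapq

def reachable(grid, start, budget):
    # grid[y][x]: movement cost to enter the cell, or None if blocked.
    # Returns {cell: cheapest cost} for every cell within the budget.
    dist = {start: 0}
    frontier = [(0, start)]
    while frontier:
        d, (x, y) = heapq.heappop(frontier)
        if d > dist[(x, y)]:
            continue  # stale heap entry, already found a cheaper route
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            in_bounds = 0 <= ny < len(grid) and 0 <= nx < len(grid[0])
            if in_bounds and grid[ny][nx] is not None:
                nd = d + grid[ny][nx]
                if nd <= budget and nd < dist.get((nx, ny), float("inf")):
                    dist[(nx, ny)] = nd
                    heapq.heappush(frontier, (nd, (nx, ny)))
    return dist
```

The keys of the returned dict are exactly the highlightable destination cells, and the stored costs double as the path lengths shown in Heroes-style previews. Recomputing on selection (one flood per selected unit) is typically cheap enough that recomputing for every unit on every map change is unnecessary.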
I've tried an A* algorithm and some of its variants and they work perfectly. However, now I'm more interested in 'dynamic' pathfinding. For example, while finding a path from point A to point B, if a new obstacle suddenly appears, I want my algorithm to immediately be able to re-plan a path and not start searching from scratch again. I've done some reading on the D* algorithm and am wondering if this would be appropriate for what I need, or whether it would be overkill. So my questions basically are: what algorithm would be best for real time dynamic pathfinding? Or what combination of techniques could I use instead?"} {"_id": 29, "text": "Love2D STI library not drawing map at all (black screen) I'm trying to use a pretty standard map with the Simple Tiled Implementation module for Love2D, but no matter what I try, I can't get it to display anything but a black screen. Here's my current code (keybindings added in an effort to see if the map has just been placed offscreen) local sti = require \"sti\" local mapPos = { x = 0, y = 0 } function love.load() map = sti(\"maps/map1.lua\") end function love.update(dt) if love.keyboard.isDown(\"w\") or love.keyboard.isDown(\"up\") then mapPos.y = mapPos.y - 5 end -- Move player down if love.keyboard.isDown(\"s\") or love.keyboard.isDown(\"down\") then mapPos.y = mapPos.y + 5 end -- Move player left if love.keyboard.isDown(\"a\") or love.keyboard.isDown(\"left\") then mapPos.x = mapPos.x - 5 end -- Move player right if love.keyboard.isDown(\"d\") or love.keyboard.isDown(\"right\") then mapPos.x = mapPos.x + 5 end map:update(dt) end function love.draw() -- Transform world love.graphics.translate(-mapPos.x, -mapPos.y) -- Draw world map:draw() end However, even a more basic attempt (copied from the STI tutorial linked from the README on the Github page) doesn't work, again giving a black screen -- Include Simple Tiled Implementation into project local sti = require \"sti\" function love.load() -- Load map file map = sti(\"map.lua\") end function love.update(dt) -- Update world map:update(dt) end function love.draw() -- Draw world map:draw() end My directory structure is basically just main.lua, an assets folder, and a maps folder. The maps folder contains the Tiled tile set, the Tiled .tmx maps and the Lua-exported maps. I'm on Mac OS X, using Love2D version 0.10.2. Am I missing something? Thanks."} {"_id": 29, "text": "Is there a name for this technique to put the tiles on the corner of a level? A very common approach in the development of old games is to put the tile set in the corner of a level, to use it as a reference. For example: on this level, the developers put a set of tiles in the upper left corner. But is there a name for this technique? I'm writing a manual about that and I would like to name it."} {"_id": 29, "text": "Store metadata for tile in Tiled Map Editor In my tilemap based game, I need to associate lights with light switches, buttons with doors etc. I am using the Tiled map editor (mapeditor.org), but I have yet to find a way to store these associations. My idea is to store a number with each tile, so I can have groups of tiles that interact with each other. Is there a way to store custom data with each tile in the Tiled map editor? Just to be clear: I don't want to store custom data with each tile type, but with individual instances of one tile type."} {"_id": 29, "text": "Touch Gesture Map Movement I really dig the way that the map is moved in Clash of Clans (https itunes.apple.com ca app clash of clans id529479190?mt 8)... I can recognize the pinch and finger movement needed for the zooming and the swiping around the map; what I'm looking for is a tutorial in any language that reveals how to test for the map going out of bounds when moved or scaled. My google fu has failed so far on this one, any hints? Right now I can perform the scale operations and the movement operations, but I can't figure out how to stop the map from going out of bounds when pinched or scaled.
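For the out-of-bounds problem in the pinch/pan question above, the usual fix is to clamp the camera after every move or scale: the visible window (view size divided by zoom) must stay inside the map rectangle. A language-neutral sketch in Python; all names are illustrative:

```python
def clamp_camera(cam_x, cam_y, zoom, view_w, view_h, map_w, map_h):
    # The world-space window we can see shrinks as zoom grows.
    vis_w = view_w / zoom
    vis_h = view_h / zoom
    # Keep the window's top-left corner inside [0, map - visible]. If the
    # map is smaller than the window, this pins the camera to 0 (you could
    # center the map instead).
    max_x = max(0, map_w - vis_w)
    max_y = max(0, map_h - vis_h)
    return (min(max(cam_x, 0), max_x), min(max(cam_y, 0), max_y))
```

Calling this at the end of every pan and pinch handler (after applying the gesture delta) is enough: the camera can never drift past an edge, no matter how the gestures interleave.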
My code is a crazy mess and makes me think I'm missing the fundamental logic of how this is done. (I am using Starling with AIR, but link a tutorial in any language if you have one.) Update: this is what I'm basing what I have so far on https github.com PrimaryFeather Starling Framework blob master samples demo src utils TouchSheet.as"} {"_id": 29, "text": "Are there definitive, unambiguous terms for hexagon tile orientations? I have been working on a tile map editor and plan to support two orientations for hexagons. I have seen various terms used, but these all seem ambiguous to me. Horizontal, Vertical (ambiguous: does this mean they line up horizontally, or stack horizontally?) Flat, Pointed (ambiguous: are they flat on the sides or on the top and bottom?) I would like to find unambiguous terms for the following two orientations for hexagons; ideally, these would be definitive and succinct (I could refer to these as \"Flat on Top\" and \"Pointed on Top\" but would prefer something more technical and authoritative). Edit: I was holding out for something more technical, but it's hard to argue with Amit. For me, \"pointed\" sounds more formal than \"pointy,\" so I am going to use the following (a decision reinforced by DMGregory's answer): Flat Top and Pointed Top."} {"_id": 29, "text": "How do I make my foreground tiles stand out more from the background? (the background implemented in the game) In the game that I work on, the background and the map tiles are pretty indistinguishable for new players, probably because they have similar colors and theme. Are there any tips or tricks to make the background more like a background and be distinguishable from the map tiles? The game has motion blur, which kinda helps: when the player is running the background gets blurry, therefore making it more distinguishable from the map tiles. However, that is clearly not enough. Any suggestions, tips or tricks will be greatly appreciated!
(purely the background) EDIT: Thank you for the suggestions and the ideas! Here are some variations of this background that I came up with. (darker and way more blurry) (dawn colors and some blur) (one of the coolest things I've done, blur) (one of the coolest things combined with dawn colors, blur)"} {"_id": 29, "text": "What map sizes did old 2D RTS games like C&C and WarCraft support? I'm looking for the size (in tiles) of the maps for the old 2D games Command and Conquer and Warcraft 1 and 2."} {"_id": 29, "text": "Hexagonal Game Board Model? Tile based games like Chess have a simple model: an array of arrays, each coordinate with an X and a Y value. This is easy to implement, and it is easy to figure out what is going on. Chess Board http www.chesscentral.com EasyEditor assets chess board blank.gif However, when I see games like Civilization 5, I am not sure how hexagonal worlds are implemented. I have a couple of ideas, but cannot confirm them. 1. Each row of hexagons is represented in the model by an array of tiles. This would be much like the first chess board, but would require more effort to figure out adjacent tiles. 2. Each tile only knows what tiles border it. When the game renders one tile, that tile loads the 6 around it, which repeats until the screen is filled with tiles. Are either of these guesses correct? If not, how do developers implement these hexagonal maps?"} {"_id": 29, "text": "Tiled tilemapper isometric I'm attempting to move from oblique mapping (45 degree angles) to isometric (30 degree angles) in Tiled. I made a test tile in Photoshop and made an isometric map, but my tile appears completely distorted, and the 'isometric' map just appears as a top down grid rotated 45 degrees. How do I create a proper isometric scene using Tiled? Hi, in your example (here), your Tiled grid is angled down. As you can see, mine appears top down.
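For the hexagonal-board-model question above, the array-of-arrays idea works well if the stored indices are axial coordinates: then "which tiles border this one" is a fixed offset table shared by every hex, and no per-tile neighbour links are needed. A sketch (the offsets are for axial coordinates and are the same regardless of flat-top vs pointed-top drawing):

```python
# The six axial-coordinate neighbour offsets of any hex (q, r).
AXIAL_DIRS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def neighbors(q, r):
    # Adjacency is pure arithmetic, so the board can stay a plain
    # 2D array (or dict) indexed by (q, r).
    return [(q + dq, r + dr) for dq, dr in AXIAL_DIRS]
```

This is why engines rarely use the "each tile loads the 6 around it" linked representation: a flat array plus an offset table gives the same adjacency with less bookkeeping.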
How do I configure this?"} {"_id": 29, "text": "Torque2D, Class vs Datablock I'm scripting my first game with Torque2D and have not fully understood the difference between \"Class\" and Datablock. To me it seems like a Datablock is similar to a struct in C/C++ or a Record in Pascal. If I create Datablocks with the keyword new, are they instantiated in the same way as a \"Class\"? I have a large TileMap and need to attach some information to each Tile. I was thinking to use a Datablock, as a struct, to attach this information to the tile's CustomData property. The two questions are: What is a Datablock, and should I use a Datablock or a \"Class\" for this tile information?"} {"_id": 30, "text": "Getting a Ranged AI \"In Range\" to shoot In my game, ranged units and melee units have the same behavior. Ranged units try to maintain a fixed distance r from their targets. If they're within r ± d of the target (where d is a small value) they stop and shoot it. Note that this means the ranged unit can't be closer than r - d. Melee units are the equivalent of ranged units, but with r set to zero. Right now, my AI for ranged units is as follows: 1. Find a path to the target using A*. 2. Walk along the path until you are within [r - d, r + d] of the target. 3. Stop and shoot. 4. If the target is dead, quit. If you are still within [r - d, r + d], goto 3. 5. Otherwise, move away from or towards the target until you are within [r - d, r + d] again. Goto 4. Here's a sort of diagram. My problem is, if there are obstacles and the target moves, step 5 causes a lot of problems. Ranged units will end up backing into walls, unable to shoot. If instead of going to step 5 I go back to step 1 (planning a path), the problem still isn't solved, because walking toward the target will only decrease the range, not increase it. How can I get ranged units to successfully follow and shoot their targets?"} {"_id": 30, "text": "How does an AI determine the bearing to follow within a nav mesh?
I've done some reading on nav meshes, and I understand how to generate a path of polygons to reach a goal. However, what I don't understand is how you determine the bearing to follow within each polygon. Without a central node to aim for, what do you aim for? I suppose you could cast a ray to the goal and then head to the point where that ray crosses into the next cell, but that would only work if that next cell is actually on your path. If your ray doesn't cross the edge into the next cell, do you instead plot a path to whichever corner of the edge is closest to the goal? I think that would get you the path shown in the 3rd diagram, but would it work in all cases? http udn.epicgames.com Three NavigationMeshReference.html"} {"_id": 30, "text": "Are there any pathfinding algorithms that would handle different movement types? I'm developing a bot for a BattleTech board game simulator (http en.wikipedia.org wiki BattleTech); it is turn based. The board is divided into hexagons, each of which has a different terrain type and elevation. You drive a robot which moves over them, to destroy other robots. I only know the Dijkstra and A* pathfinding algorithms, but the problem is that there are 3 types of movement: walking, running and jumping several hexagons (each of them has its own rules). Walk and run are almost the same. The best path could be a combination of each movement type. Here is an example of a map: http megamek.info sites default files isometric view.png Do you know a good algorithm for this complex pathfinding, or a way to combine A* results for each movement type?"} {"_id": 30, "text": "State machine interpreters I wrote my own state machine tool in C and at this point I'm faced with two choices for specifying state machines. Crafting a little language and writing an interpreter. Writing a compiler for that language. I know the advantages and disadvantages of each. I'd like to know what choices game programmers have made for their games.
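For the interpreter-vs-compiler question above, a common lightweight middle ground in games is a table-driven interpreter: the "little language" reduces to a transition table, so the interpreter is a dictionary lookup and the table itself can be loaded from data files. A sketch with invented states and events:

```python
# Transition table: (state, event) -> next state. This table IS the
# state machine; a DSL, editor, or config file only needs to emit it.
TRANSITIONS = {
    ("idle", "see_player"): "chase",
    ("chase", "lost_player"): "idle",
    ("chase", "in_range"): "attack",
    ("attack", "out_of_range"): "chase",
}

def step_fsm(state, event):
    # Unknown (state, event) pairs leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["see_player", "in_range", "out_of_range", "lost_player"]:
    state = step_fsm(state, event)
```

Compiling to code only starts to pay off when transitions carry nontrivial guard expressions or actions; for plain state/event tables the interpreter is simpler and hot-reloadable.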
If you've used a state machine in your game in any form, I'd be interested in knowing how you did it."} {"_id": 30, "text": "How can I apply steering behaviors to a car controlled with turning and acceleration? I feel like I've got my head around steering behaviors, but I'm having trouble applying them to a car. The steering behaviors return forces that one could apply to an object that can move in any direction, but a car can essentially only move forward and turn. I'm having trouble determining how hard the car should turn or how much it should accelerate forward based on the steering force. How can I translate a steering force into the car's input?"} {"_id": 30, "text": "How do I end a behavior tree's action early without evaluating the entire active branch? I'm reading about behavior trees, and one of the common recommendations I've seen is to maintain a reference to your running action so that you don't have to traverse the entire tree during every game loop. Without traversing the entire tree every time, how do I check if an action should end early? For example, suppose I have the following tree and the AI encounters an enemy that's closer than 10m, so he starts to aim his weapon. But then while he's aiming, his HP drops below 50%. I would want the agent to stop the aiming/firing sequence and move to the \"use medkit\" action. The only way that I can think of to accomplish that sort of behavior is to evaluate every node in the active branch starting from the root node during every game loop. But based on what I've read, evaluating the entire active branch every game loop is discouraged. Is there a way to be able to abort an action if a higher level node is no longer valid, while not spending a lot of processing power evaluating the entire branch every game loop?"} {"_id": 30, "text": "Neural Net Controlled Car I'm trying to make an AI car controlled by a neural net.
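One common answer to the behavior-tree abort question above is to keep the running action but re-evaluate only the cheap, high-priority guard conditions each tick (sometimes called observer aborts or conditional aborts); if a higher-priority guard fires, the running branch is interrupted without walking the whole tree. A minimal, hedged sketch with invented names:

```python
class Agent:
    def __init__(self):
        self.hp = 100
        self.running = None   # name of the currently running action

def tick(agent, guards, default_action):
    # guards: ordered (condition, action) pairs, highest priority first.
    # Only these cheap checks run every frame, not the entire tree.
    for condition, action in guards:
        if condition(agent):
            if agent.running != action:
                agent.running = action   # abort whatever was running
            return action
    if agent.running is None:
        agent.running = default_action
    return agent.running

guards = [(lambda a: a.hp < 50, "use_medkit")]
agent = Agent()
first = tick(agent, guards, "aim_and_fire")    # starts the aim sequence
agent.hp = 30                                   # takes damage mid-action
second = tick(agent, guards, "aim_and_fire")    # guard fires, aborts to medkit
```

The full tree is only re-traversed when an action finishes or a guard trips, which is the behaviour the "keep a reference to the running action" advice is pointing at.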
I saw these two videos, Neural Network Demo and Q-Learning and neural network in 2D car driving, and I want to replicate that. I already have the neural net code made, with a back propagation algorithm. The thing is, I don't know how to reinforce the learning of the net. What kind of value should I use to calculate the error? My car currently has 5 inputs (similar to the first video) and outputs 2 numbers. One is plugged into the rotation torque of the car: if it's positive it will rotate clockwise, if it's negative counter-clockwise, and how much it will turn is based on its magnitude; it ranges from -1 to 1, and I mapped it to a desired min/max rotation. The second output number is acceleration; it's from 0 to 1, but mapped to -10 to 100 (so it can reverse)."} {"_id": 30, "text": "Should enemies still attack if they cannot see the player? This may seem like a silly question, but let me explain this further. Consider a common stealth situation where the player is hidden from the enemy AI. The AI has vision and hearing, and if they either see or hear the player they will move towards the player's last known position. However if the player is behind cover (therefore the enemy cannot see them) and they attack with a silent weapon (therefore the enemy cannot hear them either), how should the enemy react? Realistically, the enemy should have an idea of the direction from which the attack struck them and therefore start moving in that direction. But I am wondering if this would then be detrimental to taking a stealth as opposed to an offensive approach. If the enemy turns around when hit, irrespective of whether they were hit with a loud or silent weapon, there is little benefit to using a silent weapon, since the loud ones will have more power anyway. On the other hand, if the enemy simply stands there while being hit several times by a silent pistol, it may make the AI seem a little stupid.
I suppose it comes down to balancing rewarding stealth gameplay in a stealth game against not making the AI seem silly at the same time. What should I do to balance this?"} {"_id": 30, "text": "How can I implement a \"20 Questions\" algorithm? Ever since childhood, I've wondered how the 20Q electronic game worked. You think of an object, thing, or animal (e.g. potato or donkey). The device then asks you a series of questions such as: Is it larger than a loaf of bread? Is it found outdoors? Is it used for recreation? For each question, you can answer yes, no, maybe, or unknown. I always imagined it worked with immense, nested conditionals (if statements). However, I think that's an unlikely explanation because of its complexity for the programmer. How would I implement such a system?"} {"_id": 30, "text": "How would one approach developing an AI for a trading card game How would one approach developing an AI for a trading card game (e.g. Magic: The Gathering, Yu-Gi-Oh!, etc.)? I'm not sure where to even begin. How would \"Easy\", \"Normal\", and \"Hard\" AI difficulties differ under the hood?"} {"_id": 31, "text": "Server hosting and costs I'm developing a game that will require renting a server. The server will be used to host scores, clans, friends (on/off), matchmaking, lobby, and chat. The game matches will be hosted by each player (to lower the cost). How much would a server like this cost? Any hosting recommendations? How much would it cost if the server hosts the game matches too? I want to know a base price (imagine a card game or turn based RPG, even though my game is real time)."} {"_id": 31, "text": "Implementing an online database I'd like to get into online games programming. I thought that as a start it'd be a good idea to implement an online database that would store the progress and score for a game I have made; I'll probably want to implement an account system too. My issue is, I can't seem to be able to ask Google the right question.
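The \"20 Questions\" device above is usually explained not as hand-written conditionals but as a learned decision tree: each internal node is a question, each leaf is a guess, and a wrong guess grows the tree. A minimal yes/no sketch (the question strings and API are illustrative assumptions):

```python
class Node:
    """One node of a learned '20 Questions' tree: either a question
    with yes/no children, or a leaf holding a guess."""
    def __init__(self, question=None, guess=None, yes=None, no=None):
        self.question, self.guess = question, guess
        self.yes, self.no = yes, no

def play(node, answer):
    # Walk the tree; `answer` maps a question string to True/False.
    while node.guess is None:
        node = node.yes if answer(node.question) else node.no
    return node.guess

def learn(leaf, new_question, new_thing):
    # Wrong guess: turn the leaf into a question that separates the old
    # guess from the player's actual object. This is how the tree grows
    # instead of relying on pre-written nested conditionals.
    old = Node(guess=leaf.guess)
    leaf.question, leaf.guess = new_question, None
    leaf.yes, leaf.no = Node(guess=new_thing), old
```

The commercial 20Q device reportedly uses a neural-network-like scoring scheme rather than a strict tree, but the tree version captures the core idea and handles \"maybe\" by adding more children per node.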
I don't know where to start. I've touched on some SQL and PHP, HTML and XSL, but to me they're just languages; I can't see the big picture. How do these things connect to form a working service? I'm not looking for a solution, I just don't know what I should learn. I'm not looking for knowledge on sockets, I'm familiar with network programming, I just don't understand the \"modern\" process of handling databases. I'd be very happy if somebody could lay out the structure of a database, how you put it out on the web, how you access the information and change it (not as a direct solution, just \"this is done by programming a filter in SQL\"), what languages are used for it, etc."} {"_id": 31, "text": "Multiplayer tile based movement synchronization I have to synchronize the movement of multiple players over the Internet, and I'm trying to figure out the safest way to do that. The game is tile based; you can only move in 4 directions, and every move moves the sprite 32px (over time, of course). Now, if I simply send this move action to the server, which broadcasts it to all players, then while the walk key is held down to keep walking, I have to take the next command and send it to the server and to all clients in time, or the movement won't be smooth anymore. I saw this in other games, and it can get ugly pretty quickly, even without lag. So I'm wondering if this is even a viable option. This seems like a very good method for single player though, since it's easy and straightforward (just take the next movement action in time and add it to a list), and you can easily add mouse movement (clicking on some tile) to add a path to a queue that's walked along.
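The command-queue idea mentioned above can be sketched very compactly: incoming direction changes and stops are buffered, and one command is consumed each time the sprite finishes its 32px step onto the next tile. Class and method names are illustrative assumptions, not an established API:

```python
from collections import deque

class TileMover:
    """Minimal sketch of queued tile movement. Positions are tile
    coordinates; one queued command is consumed per completed step."""
    def __init__(self, pos):
        self.pos = pos
        self.queue = deque()      # incoming (dx, dy) or 'stop' commands

    def on_network_command(self, cmd):
        # Saved until the current tile-to-tile walk finishes, so a
        # slightly late packet delays movement instead of desyncing it.
        self.queue.append(cmd)

    def on_step_finished(self):
        if not self.queue:
            return
        cmd = self.queue.popleft()
        if cmd == 'stop':
            return
        dx, dy = cmd
        self.pos = (self.pos[0] + dx, self.pos[1] + dy)
```

The trade-off is exactly the one discussed below: the queue absorbs jitter, at the cost of the remote sprite running one buffered step behind.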
The other thing that came to my mind was sending the information that someone started moving in some direction, and again once they stopped or changed direction, together with the position, so that the sprite will appear at the correct position, or rather so that the position can be fixed if it's wrong. This should (hopefully) only cause problems if someone really is lagging, in which case it's to be expected. For this to work out I'd need some kind of queue though, where incoming direction changes and the like are saved, so the sprite knows where to go after the current movement to the next tile is finished. This could actually work, but it kinda sounds overcomplicated. Although it might be the only way to do this without risk of stuttering. If a stop or direction change is received on the client side, it's saved in a queue and the character keeps moving to the specified coordinates before stopping or changing direction. If the new command comes in too late there'll be stuttering as well, of course... I'm having a hard time deciding on a method, and I couldn't really find any examples of this yet. My main problem is keeping the tile movement smooth, which is why other topics regarding synchronization of pixel based movement aren't helping too much. What is the \"standard\" way to do this?"} {"_id": 31, "text": "How to handle MANY enemies in networked P2P game? Let's suppose a game of 4 players, one of whom is the host. They will fight many enemies, along the lines of 20-40 at a time, among other things like sending their own state to the other players (position, rotation, shot this frame, crouched, etc). How do I handle the enemies? Does the host \"decide\" the enemies' state (again: position, rotation, isattacking, etc) and then send several messages to the other players so they sync their own game? Or do I \"divide\" the enemies: let's say 40 enemies and 4 players, with 10 enemies \"controlled\" by each player's game?
And then each player sends those messages to the other players so all enemies are in sync? Also, should I group the messages and send one big message instead of 40 little ones? How do I know how big the message can be (how much enemy info fits in each message)? Basically I'm asking what's the best way to handle a 4 player P2P game with many enemies on screen. Any good tip is appreciated."} {"_id": 31, "text": "Lag Compensation in a Real Time Game I have been trying to implement some lag compensation techniques for a real time game. I've found some good resources online, but I don't think I fully understand the server side part of the problem. The game is a simple 2D game, where the player moves an entity around the map. There will be other entities with their own behavior that are controlled by the server, but I made it so that their movement can be easily interpolated. The player input is quite simple: an analog stick and some commands (pick target, loot objective, etc.). The game runs at a fixed 10 TPS (it's on mobile and it's not that action packed; 10 TPS will suffice). The client will send the input state every tick (even if no input is present, i.e. the analog stick is in the default position). It is worth mentioning that only one player is connected at a time, so in a way this is not multiplayer, but I need to run this on the server as well to prevent cheating. I understand client side prediction and that's quite straightforward to implement. I am struggling with the server side part. As far as I can see I have 3 options. Option 1: The server waits for the input from the client, then computes the new game state and sends it to the client. This is a viable option only because there is only one player. Option 2: The server runs the game loop, waiting up to 100ms (10 TPS = 100ms per tick) for an input. If it gets one, it will be taken into consideration when computing the next state; otherwise it will be discarded. If an input arrives after the tick was done, it will be discarded.
If an input arrives during this timespan but the tick numbers don't match (the client sent input for tick 5, but the server is at tick 7), it will also be discarded. Option 3: The server runs the game loop, waiting up to 100ms (10 TPS = 100ms per tick) for an input. If it gets one, it will be taken into consideration when computing the next state; otherwise it will be discarded. In this case it will not care if the input is not for the current tick; it will apply it anyway. Option 1 is the easiest to implement and will be more consistent. The biggest issue with this is that I believe it's easy to cheat. Since the server is waiting for input, the player might be able to \"pause\" the game, analyze the situation, then dispatch the action, basically allowing a cheating player to play in slow motion. The client side reconciliation in this case seems easy: when the player moves, it will move instantly on the screen, and when it receives the updated state from the server it will do its reconciliation. Option 2 is a tad harder to implement (not by much, though), but it will be unplayable (literally, the player won't be able to move) if their ping is higher than 100ms. Option 3 has the same complexity as Option 2. It will work with pings higher than 100ms, but it will most probably lead to some frustration on the player's side since their inputs will not arrive in time. Is there a better way to do this?
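The Option 3 tick loop described above can be sketched as a fixed-rate loop that drains whatever input has arrived by tick time instead of discarding late packets. This is only a hedged illustration; `get_pending_inputs` and `step` stand in for the real network queue and game simulation:

```python
import time

def run_server_loop(get_pending_inputs, step, ticks, tick_dt=0.1):
    # Fixed 100ms server tick (tick_dt=0.1). Late inputs are applied
    # rather than dropped, so a >100ms ping still moves the player,
    # just one tick behind.
    state = None
    for tick in range(ticks):
        deadline = time.monotonic() + tick_dt
        inputs = get_pending_inputs()   # drain everything that arrived
        state = step(state, inputs, tick)
        # Sleep off the remainder of the tick so the rate stays fixed:
        # the loop never stalls waiting on the client (avoids the
        # slow-motion cheat of Option 1).
        remaining = deadline - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)
    return state
```

Because the loop never blocks on the client, the anti-cheat property of a fixed tick is preserved while high-ping players stay playable.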
What are your thoughts?"} {"_id": 31, "text": "Which server platform to choose I'm going to write a server for an online multiplayer game with these requirements: Pretty simple turn based game (think a card game) that is played entirely on the server (security reasons) Must be able to run multiple games (tables) with 4 players per table, but no lobby system required (another server takes care of that) Can support as many players at once as possible Might need multiple servers Chat between players Socket connection to a Flash AIR client Must be able to communicate with other servers (for player accounts and such) Now, I'm considering two options: Smartfox (or equivalent) A custom Java solution in something like Tomcat Why Smartfox? It handles multiple rooms and chat natively It presumably has solutions for well known multiplayer gaming issues Why custom? Smartfox has many unneeded functions, bad for performance Smartfox communicates with an XML based format, I could use a more efficient binary one Don't know if running the entire game model on the server is convenient with Smartfox's extension mechanism Multiple rooms and chat are easy to reimplement Tomcat or a lightweight container is easier to deploy than Smartfox Better IDE support for developing on Tomcat (automatic deploy, etc) What do you think? Are my assumptions correct? Do you have anything to add? Which option should I choose (or maybe a different one entirely)?"} {"_id": 31, "text": "How to measure packet latency? In the context of lag compensation, one needs to know when the command is instantiated on the client (this can also be named the \"command execution time\"). AFAIK, there can be 2 methods for this: 1) The client sends a timestamp with the command. 2) The client doesn't send any timestamps, but the server does something smart to calculate the command's instantiation time. About 1: Is this safe in terms of cheating?
About 2: According to Valve's paper, command execution time is as follows: Command Execution Time = Current Server Time - Packet Latency - Client View Interpolation. This means the server must know the packet latency. Another paper from Valve confirms this and says: Before executing a player's current user command, the server computes a fairly accurate latency for the player ... How can the server compute a \"fairly accurate latency for the player\"? The most naive and easiest approach would be sending pings regularly (less frequently than game commands and updates, though) to find an average of the latency and use it. Busy network traffic, latency fluctuation and the naivety of this method make me feel there must be a more elegant way. EDIT: Would UDP vs TCP change anything in this context?"} {"_id": 31, "text": "How to interpolate server updates on the client for multiplayer? I am implementing client side prediction and an authoritative server multiplayer architecture. I am following along with the series of articles from http://www.gabrielgambetta.com/entity-interpolation.html I am at a point of confusion and have a few questions. Within my client game loop I have a sequence of actions that update the local state: applyLocalUpdates(state, loopTime) reconcileLastRemoteUpdate(state, loopTime) applyLocalUpdates: This updates all the entities' positions based on their local physics and game logic simulation. reconcileLastRemoteUpdate: I am in the process of implementing this, and it is the point of confusion. My initial naive attempt takes the last received server state and sets each local entity's position to that last remote one. This produces a choppy update effect. When each server update comes in, the entities jump to match that last server position. This is where I am trying to add interpolation... I have interpolation methods like vec3.lerp(a,b,t) and quat.lerp(a,b,t) which I use to update the position vector and rotation quaternion.
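Going back to the latency question above: the regular-ping idea can be made less naive by smoothing the round-trip samples with an exponentially weighted moving average, the same way TCP maintains its SRTT estimate. A hedged sketch (class name, alpha, and the halved-RTT approximation are assumptions):

```python
class RttEstimator:
    """Smoothed round-trip estimate so one congested ping sample
    doesn't yank the latency value around."""
    def __init__(self, alpha=0.125):
        self.alpha = alpha    # TCP's classic SRTT gain
        self.srtt = None

    def on_pong(self, rtt):
        # First sample seeds the estimate; later samples blend in.
        if self.srtt is None:
            self.srtt = rtt
        else:
            self.srtt = (1 - self.alpha) * self.srtt + self.alpha * rtt
        return self.srtt

    def one_way_latency(self):
        # Common approximation: half the smoothed round trip.
        return None if self.srtt is None else self.srtt / 2
```

This also works the same over UDP or TCP, since it only needs matched ping/pong pairs; what differs is that TCP retransmissions can inflate individual samples, which the smoothing partly absorbs.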
My unknown points are: Which values do I pass for a and b in the interpolation? Is this the most recent local state and the last server state? What value do I pass in for t in the interpolation? This I'm highly confused by. Is it the difference in elapsed time between the last 2 server states? Is it the difference in elapsed time between the last local update and the last server update? Additionally, should I swap the order of execution between applyLocalUpdates and reconcileLastRemoteUpdate?"} {"_id": 31, "text": "Networking Packet Design When using a Client Server model, it's necessary for those two parts to communicate data back and forth. There is one specific area that I've been thinking about and am unsure about: putting certain information in a main update packet that is sent every n milliseconds, versus sending it once when it happens. An example would be a turn based strategy game. When one player's turn ends and the next player's turn starts, you could send one packet to notify the clients of the change, or every n milliseconds tell the client whose turn it is and let it determine when it changes. These are the pros I see to these approaches: Inside Update Packet: Harder to become desynced if packets are lost, whereas if sent as an individual packet and the client fails to receive it, it could create big desync issues. Individual Packet: Sends less data between the server and client since it only sends it once, instead of every n milliseconds. I'm fairly new to game design, so I'm not sure if there is a general consensus about the proper way to do this. There may also be another solution that I haven't thought of. Any input would be great."} {"_id": 31, "text": "Movement networking for a Worms like game I want to implement a strategy artillery game, similar to Worms Arcanists.
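For the lerp question above, the common pattern is to buffer recent server snapshots and render entities slightly in the past: a and b are the two server states that bracket the render time, and t is how far the render time sits between their timestamps. A hedged sketch with illustrative names:

```python
def interpolate_entity(snapshots, render_time):
    """snapshots: ordered list of (server_time, position) pairs.
    render_time: current time minus a small interpolation delay,
    so it normally falls between two buffered snapshots."""
    for (t0, p0), (t1, p1) in zip(snapshots, snapshots[1:]):
        if t0 <= render_time <= t1:
            # t is the fraction of the way from snapshot a to snapshot b.
            t = (render_time - t0) / (t1 - t0)
            return tuple(a + (b - a) * t for a, b in zip(p0, p1))
    # Outside the buffer (e.g. a dropped update): clamp to the newest.
    return snapshots[-1][1]
```

Under this scheme neither endpoint is the local predicted state: the local player uses prediction plus reconciliation, while remote entities use this snapshot-to-snapshot interpolation.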
As game development (and game networking especially) is new to me, I was wondering whether this would be a good, performant way to do the movement networking: The server has all clients' positions. Moving client to server: message (start moving, moving direction). The server now starts a simulation of the moving client's movement. Server to all clients except the originating one, every 100ms(?): message (set client position: client id, new position), the new position being calculated by the server's simulation. Clients lerp the moving client's position. Moving client to server: message (stop moving); the server's simulation ends. Server to all clients, including the originating one: message (set client position: client id, final position). All clients have the same position for the moving client in the end."} {"_id": 32, "text": "UI Automation for Flash Games Has anyone used or heard of a good Selenium like toolset for doing automated UI testing with Flash? I would ideally like to have the ability to record and playback events against my game and integrate this into our build server."} {"_id": 32, "text": "How public should you make your betas? I am approaching the point where I can release a playable beta of a very complex game I am making in Flash AS3. It is fairly involved and the deepest game I've made so far. I plan on hosting it on my personal site, but I want to try and get good feedback on the game elements so far. Is it worth it to expose this beta to as many places as possible before the release? What are good communities to do this? Please note this will be freeware in its final release"} {"_id": 32, "text": "Hide movieclip parts that are out of bounds I'm making a game where one takes a picture of their face to use on a character. I have them zoom in on the picture, but the BitmapData extends out of its movieclip. How do I assign bounds to my BitmapData?"} {"_id": 32, "text": "How to implement line of sight restriction in ActionScript? I have a problem with a game I am programming.
I am making some sort of security game and I would like to have a visual line of sight. The problem is that I can't restrict my line of sight so my cops can't see through the walls. Below you find the design, in which they can look through windows, but not walls. Further below you find an illustration of what my problem is exactly. This is what it looks like now. As you can see, the cops can see through walls. This is the map I would want to use to restrict the line of sight. So the way I am programming the line of sight now is just by calculating some points and drawing the sight accordingly, as shown below. Note that I also check for a hitTest using BitmapData to check whether or not my player has been spotted by any of the cops. private function setSight(e:Event = null):Boolean { var g:Graphics = copCanvas.graphics; g.clear(); for each (var cop:Cop in copCanvas.getChildren()) { var _angle:Number = cop.angle; var _radians:Number = (_angle * Math.PI) / 180; var _radius:Number = 50; var _x1:Number = cop.x + (cop.width / 2); var _y1:Number = cop.y + (cop.height / 2); var _baseX:Number = _x1 + (Math.cos(_radians) * _radius); var _baseY:Number = _y1 + (Math.sin(_radians) * _radius); var _x2:Number = _baseX - (25 * Math.sin(_radians)); var _y2:Number = _baseY + (25 * Math.cos(_radians)); var _x3:Number = _baseX + (25 * Math.sin(_radians)); var _y3:Number = _baseY - (25 * Math.cos(_radians)); g.beginFill(0xff0000, 0.3); g.moveTo(_x1, _y1); g.lineTo(_x2, _y2); g.lineTo(_x3, _y3); g.endFill(); } var _cops:BitmapData = new BitmapData(width, height, true, 0); _cops.draw(copCanvas); var _bmpd:BitmapData = new BitmapData(10, 10, true, 0); _bmpd.draw(me); if (_cops.hitTest(new Point(0, 0), 10, _bmpd, new Point(me.x, me.y), 255)) { gameover.alpha = 1; setTimeout(function():void { gameover.alpha = 0; }, 5000); stop(); return true; } return false; } So now my question is: does someone know how to restrict the view so that the cops can't look through the walls? Thanks a lot in advance.
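One way to keep sight cones from crossing walls is to march each ray outward and clip it at the first solid cell, then draw the sight polygon through the clipped endpoints instead of the fixed triangle. A hedged sketch in Python for brevity (the `walls` lookup, step size, and range are assumptions about the game's map representation):

```python
import math

def cast_ray(origin, angle, walls, max_dist=50, step=1.0):
    """March along the ray in small steps; stop at the first cell that
    `walls(x, y)` reports solid. Returns the visible distance, which
    becomes the clipped length of that sight-cone edge."""
    ox, oy = origin
    d = 0.0
    while d < max_dist:
        x = ox + math.cos(angle) * d
        y = oy + math.sin(angle) * d
        if walls(int(x), int(y)):   # hit a wall: sight stops here
            return d
        d += step
    return max_dist
```

Casting a fan of such rays across each cop's view angle and filling the resulting polygon gives wall-limited vision; windows can simply be cells that `walls` reports as non-solid.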
ps: I have already looked at this tutorial by Emanuele Feronato, but I can't use the code to restrict the visual line of sight."} {"_id": 32, "text": "Flash Origin Problem with Movie Clips I'm working on a game that uses Flash for the user interface. I'm running into an odd issue with the origin of a library movie clip. Here are the steps to reproduce: Create a new MovieClip and construct a single rectangle with both a fill and a stroke. Set the position of the rectangle to (0,0) and notice that the origin is at the center of the border stroke surrounding the rectangle. Change the position of the rectangle accordingly so that the origin is truly the top left (so if the stroke surrounding the rectangle was set to 10, adjust the position of the rectangle to 5,5). Enable 9 slice for the movie clip so that when you scale the rectangle it doesn't scale or distort the stroke surrounding the rectangle. Drag an instance of the movie clip to the stage and use the free transform tool to scale it to the width of the stage. It should visibly be as wide as the stage. Run the Flash file and verify that the width during execution is not as wide as it was in design view. So in a nutshell, it seems that when you use the free transform tool, it distorts the origin even though 9 slice is enabled and it shouldn't (I would think). I understand that Flash denotes the center of a stroke to be the starting point and not the edge, which is why I adjusted the origin accordingly while editing the movie clip. What I really need here is pixel perfect positioning for vector based components. I should be able to drag and drop a library movieclip that provides a window with a fancy border around it, scale it to whatever size I need, and place it where I need it to be without any odd positioning problems. Here are some screenshots of the problem: ( i.imgur.com 8WU2F.png ) is a shot of my window with 9 slice enabled. You can see that the origin is correct, being the top left of the window graphic.
( i.imgur.com ciK8l.png ) is a shot of my window after using free transform to scale it to the width of the stage. It's pretty much perfect in design view. ( i.imgur.com QFUiJ.png ) is a shot of the Flash application running. Notice that the window is no longer the width of the stage? I don't get it. It makes it very difficult to position, because in order to get this to look right while executing, I have to make the window wider than the stage in design mode. Any help is greatly appreciated. I'm sure it's something silly. This is pretty much bread and butter stuff that's been around since the early days of Flash."} {"_id": 32, "text": "Flash Game not working on Android I am not sure if I will be able to provide enough information for someone to answer this question, but any ideas might help. I am creating a tower defense game in Flash and I eventually want to make it run on Android. Just for testing purposes, I have been running it on Android's browser and different Android .swf player apps. Recently, the game stopped working correctly. When it gets to the second wave of enemies, they get about four tiles in and stop moving. I can get them to start moving again by constantly clicking on parts of the screen. It's almost like the game loop has quit updating. Trying to solve the problem, I have updated the game loop from Flash's standard events to a NativeSignal event (it didn't solve the problem, but the game runs much faster overall). The game works fine on my PC; I can't figure out the Android problem though. Any ideas or help would be appreciated. I didn't want to supply code since I wouldn't know where to start that would be helpful."} {"_id": 32, "text": "What is the standard way of delivering HTML5 games to portals and such? Let me explain what I mean by \"standard way of delivering\"... Think about Flash games sites. Flash games can be delivered as a single file, either hosted by the site, or, I guess, provided by someone else.
HTML5 games, on the other hand, don't have something so standard. Usually, they have their own page, and portals just link to that page. I think that it greatly hinders the purpose of that portal, because, well, you want people to stay on your site and look for other games. Now, I think that some kind of iframe way of delivering games would help solve this problem greatly. I saw some games doing that, and they were often included on tutorial sites to show a live example, which is obviously a great thing. So, is there a standard at all? Any suggestions? Can you create a game that just preloads itself in an iframe (I heard something about a \"single document\" or something)?"} {"_id": 32, "text": "Flash server, protocol protection I am making a Flash game that will interact heavily with the server. For quite some time now I have been using a 'request hashing' technique to make sure that request data hasn't been tampered with. This works pretty well.
However, in this game I'd like to go a little bit further and completely hide the protocol from the observer (right now it's plain JSON). I imagine that I could zip and encrypt the data (using one of the symmetric algorithms). That would make it pretty unreadable by a human, right (and also smaller)? And SWF encryption obfuscation should protect the encryption key (it is being done anyway). As a side benefit, that will also protect dynamically loaded resources from being saved directly to disk (or copied from the cache). Questions: Are there tools that allow you to simply dump the SWF with all its content, received and decrypted? If yes, this will render 'the side benefit' invalid. Do you think it's worth it to burn all that CPU power? To support my point, I will say that I like to inspect data being exchanged between client and server, and occasionally I find a bug or two which I can use to my benefit. But then there was a game that was sending and receiving some binary data. Being a lazy attacker, I decided not to analyze further. Otherwise, who knows what I could find ) Comments, ideas, criticism, suggestions? )"} {"_id": 32, "text": "Game networking topology dealing with host leaving I am working on a P2P game in Flash and I'm wondering what network topology would be most robust for dealing with people randomly joining and leaving. I was thinking the first user to join could be the host, but if the host leaves, the whole session would be killed. I am looking for a way to deal with this with no (or little) disruption."} {"_id": 33, "text": "How do I make 3d Sprites like Myth did 20 years ago? The game that Bungie cut its AAA chops on, Myth: The Fallen Lords, is about 20 years old now. https://en.wikipedia.org/wiki/Myth:_The_Fallen_Lords For the time, it had a pretty revolutionary way of doing lots of \"3d\" characters, hundreds on screen and interactive at a time.
To accomplish this, basically they used sprites for each character, choosing and skewing each sprite based on the camera perspective to make it appear 3d. I would really love to read and learn more about how this technique worked, as I think it can still be applicable to modern indie games, but I've never seen this technique used since. Does anyone know where I could read more on this? Here is a YouTube video of the game being played, showing the characters: https://youtu.be/PySrgfe6pr4?t=6m2s"} {"_id": 33, "text": "Alternative Font loaders in Monogame Framework While working on projects of mine, I have been finding that it is a huge pain to switch operating systems just to create a simple spritefont when using Monogame. I saw that the Nuclex Framework can load fonts which are clearer and sharper, and they can be used in a Monogame project as well. They load fonts using the FreeType library, which is very multiplatform and is used widely. Is there a fairly simple way to use the FreeType library or another library to render text suitable for use in a Monogame project, as an xnb file?"} {"_id": 33, "text": "How to determine the best approximate direction I want to determine which sprite to use when one agent is \"facing\" another agent. My game is 2d, and uses 8 directional movement. Deciding which sprite to use for movement is easy enough since there are only 8 options when moving from one square to another.
def direction(self, start, end): # method for determining facing direction s_width = int(start[0]); s_height = int(start[1]); e_width = int(end[0]); e_height = int(end[1]) # check directions: if s_height < e_height and s_width == e_width: return 'down' elif s_height < e_height and s_width < e_width: return 'downright' elif s_height == e_height and s_width > e_width: return 'left' elif s_height > e_height and s_width > e_width: return 'upleft' elif s_height > e_height and s_width == e_width: return 'up' elif s_height > e_height and s_width < e_width: return 'upright' elif s_height == e_height and s_width < e_width: return 'right' elif s_height < e_height and s_width > e_width: return 'downleft' It's verbose, but it gets the job done. start is the agent's current coordinate position, and as you might guess, end is the next step in the list of waypoints which makes up an agent's path. The problem is that using the same function for determining which sprite should be used when attacking another agent does not look very good at all. For example, if the target is down, but only one pixel to the right, we would want to show the 'down' sprite, but this function returns 'downright' instead. How can we change it to appropriately return the fitting sprite?"} {"_id": 33, "text": "What exactly are sprites and entities and what are the differences between the two? Well, I was looking at a few game based tutorials, as well as articles. I found out about two terms: entities and sprites. Now they seem related but different; what is each, and how are they the same and or different? I have some basic ideas already: An entity is an item thing object person any object that can be interacted with in the world. A sprite is like an entity, but usually for NPCs players monsters. Or a sprite can be the name of a picture assigned to a specific entity.
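A common fix for the facing question above is to bucket the angle to the target into one of eight 45-degree sectors instead of comparing coordinates sign by sign. A hedged sketch (names are illustrative; it assumes the same screen convention as the question, with y growing downward):

```python
import math

# Sector 0 is centered on 'right' (angle 0); each sector spans 45 degrees.
FACINGS = ['right', 'downright', 'down', 'downleft',
           'left', 'upleft', 'up', 'upright']

def direction(start, end):
    dx = end[0] - start[0]
    dy = end[1] - start[1]
    angle = math.degrees(math.atan2(dy, dx)) % 360
    # Offset by half a sector so e.g. 'down' covers 67.5..112.5 degrees,
    # which fixes the "one pixel to the right still shows 'down'" case.
    return FACINGS[int((angle + 22.5) // 45) % 8]
```

A target far below but one pixel to the right now lands well inside the 'down' sector, and the eight-way if/elif chain disappears entirely.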
Do I have the right idea, or are sprites and entities something completely different?"} {"_id": 33, "text": "how to change floor sprite in unreal engine I have started a 2D side scrolling game in Unreal Engine from the menu when you start a new game. The basic environment loads up and it works, but I don't know how to change the floor sprites and player sprites. Please help."} {"_id": 33, "text": "Where to hire 2D sprite artists? I'm looking for high quality original 2D sprite collections which are animated where appropriate. (IE not simply collections I can buy; we want original work). I've had a look online but am struggling to find out where I can hire such people! Where do I find such people? How much should I expect to pay?"} {"_id": 33, "text": "Sprite cut in half after placed on Scene I have noticed a very odd behaviour in Unity 5.2.1. When I drag and drop a Sprite into the Scene, it's cut in half like this. I tried with PNG and PSD formats, but it didn't help. Although when I flipped an image horizontally in PS and then imported it, it worked. Is this a Unity bug? Edit: It happens only when I set Max Size to 512 or more, but still, why?"} {"_id": 33, "text": "Finding relative x and y value on a sprite Gamemaker I'm having a little conundrum that may just result from a lack of knowledge of GameMaker's functionality. I've attached two images to aid explanation. I have a sprite of a turret (with a gun barrel) attached to a turret object, and at certain points in gameplay this object will spawn another object on top of it (let's call it the 'bullet object'). I would like the bullet object to spawn at the x and y coordinates at the end of the gun barrel. This would be easy to find as a pair of coordinates if the sprite was always stationary, in this configuration. Alas, it is not. It rotates like billy oh. This means that the x and y coordinates at the end of the gun barrel are constantly different. How do I find this constantly changing x and y coordinate?
I imagine (though am most likely wrong) that the initial x and y coordinates of the sprite are saved and can be found even when rotated. Is there a function that does this? Or do I need to write a script and then call it every time I want to spawn the bullet object? Thanks for your help."} {"_id": 33, "text": "Which isometric angles can be mirrored (and otherwise transformed) for optimization? I am working on a basic isometric game, and am struggling to find the correct mirrors. A mirror can be any form of transform. I have managed to get SE out of SW by scaling the sprite on the X axis by -1. The same applies for the NE angle. Something is bugging me: I should be able to also mirror N to S, but I cannot manage to pull this one off. Am I just too sleepy and trying to do the impossible, or is a basic -1 scale on the Y axis not enough? What is the commonly used mirror table for optimizing 8 angle (N, NE, E, SE, S, SW, W, NW) isometric sprites?"} {"_id": 33, "text": "Drawing Sprites with Artemis I am trying to trace the StarWarrior code (the Artemis tutorial). I cannot figure out how these sprites are being drawn. This is the code where the player's ship is initialized: /// <summary> The initialize player ship. /// </summary> private void InitializePlayerShip() { Entity entity = this.entityWorld.CreateEntity(); entity.Group = \"SHIPS\"; entity.AddComponentFromPool<TransformComponent>(); entity.AddComponent(new SpatialFormComponent(\"PlayerShip\")); entity.AddComponent(new HealthComponent(30)); entity.GetComponent<TransformComponent>().X = this.GraphicsDevice.Viewport.Width * 0.5f; entity.GetComponent<TransformComponent>().Y = this.GraphicsDevice.Viewport.Height - 50; entity.Tag = \"PLAYER\"; } Which line of code actually leads to the ship being drawn? The drawing takes place in PlayerShip.cs. Here is a commented version of the code with what I think is happening: /// <summary> The initialize player ship.
</summary> private void InitializePlayerShip() { Entity entity = this.entityWorld.CreateEntity(); // declare and initialize entity entity.Group = \"SHIPS\"; // Unused label entity.AddComponentFromPool<TransformComponent>(); // Add the component to store X and Y entity.AddComponent(new SpatialFormComponent(\"PlayerShip\")); // Add a component which is just a String (perhaps this has something to do with PlayerShip.cs?) entity.AddComponent(new HealthComponent(30)); // Add a health component with 30 hp entity.GetComponent<TransformComponent>().X = this.GraphicsDevice.Viewport.Width * 0.5f; // Initialize the TransformComponent's X entity.GetComponent<TransformComponent>().Y = this.GraphicsDevice.Viewport.Height - 50; // Initialize the TransformComponent's Y entity.Tag = \"PLAYER\"; // Unused label } So how is the actual sprite being drawn? Because I am not seeing it. :( If it is being drawn by entityWorld.Draw() in the Draw() method, then which line(s) of code put it in the world to be drawn? Or what is actually happening in entityWorld.Draw()? I believe the RenderSystem is the one that is doing the drawing, but I don't get how it is being called. I am sorry for the uninformed question, but I have made no progress after about 12 hours, and I desperately need some guidance!"} {"_id": 34, "text": "MonoGame renders texture in an almost compressed looking way I was working on a game in MonoGame, which I have been doing for quite some time now. As I was implementing UI, I was noticing some sort of weird scanlines. Investigating further, to my surprise my whole scene was covered in it! Notice how the texture, when zoomed in, has almost a JPEG square-compressed feel to it. I have no idea what is causing this. We were using a RenderTargetTexture instead of the backbuffer, so I figured I'd try it without, but the same happens. I checked if we were doing any weird Matrix transformations; we did not. In fact, I disabled them just to test it out.
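One common cause of the "skipped row of pixels" symptom in the MonoGame question above is point-sampled drawing where the source and destination sizes differ by even a pixel or two (e.g. a viewport or render-target mismatch): some source rows are simply never sampled. A hypothetical sketch of just the arithmetic, not MonoGame code; the 1080 vs. 1078 sizes are made up for illustration:

```python
# Map each destination row back to a source row, the way a truncating
# nearest-neighbour stretch would: some source rows are never sampled.
src_h, dst_h = 1080, 1078
sampled = {int(y * src_h / dst_h) for y in range(dst_h)}
missing = sorted(set(range(src_h)) - sampled)
print(missing)  # [539, 1079] -- two source rows are dropped entirely
```

With an exact 1:1 mapping (`dst_h == src_h`) the `missing` list is empty, which is why checking the real backbuffer/render-target dimensions against the image is usually the first diagnostic step.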
After that I thought \"half pixel offset\" (even though it should not really apply anymore post DX9), but that also did not solve a thing. Last but not least I thought maybe a MipMapping issue, but that would be weird since the exact same thing happens with the normal backbuffer. My Question: Does anyone here recognize this \"effect\", and have any clue what might be causing it? I'm basically just rendering a 1920x1080 image to a 1920x1080 backbuffer, but I can assure you it's not the image. When I for example draw a cursor and make it follow the mouse, at the points where pixels \"go missing\", even the cursor texture deforms. It might not be as clear in this picture, but notice how both the cursor and the orange curve basically \"skip\" a row of pixels; when I move the cursor to another location, it turns back to normal."} {"_id": 34, "text": "How can I render images to multiple windows without creating double textures in SDL2? Simply put: What is the simplest way to render an image to two different SDL2 windows? I'm pretty new to SDL2, but I have a nice little level editor working. I then decided to move the area with edit tools to a separate window. I immediately got the following error: Texture was not created with this renderer. Alas, it looks like each window needs to have its own SDL_Renderer, and each renderer needs to create its own textures for the images I want to display. IMG_LoadTexture() needs a renderer and will only render the resulting texture to that renderer. SDL_CreateRenderer() then needs a window as a parameter and doesn't seem to be able to ever render to another output window. So does this really mean I have to create separate textures of each of my images for each and every renderer window? Or is there a way to load graphics into textures that can be used by any renderer, or on any window?"} {"_id": 34, "text": "How does Texture Mapping work? I read the answers from What exactly is UV and UVW Mapping?
and How does UVW texture mapping work?, which are quite nice, but am still not 100% sure if I understand correctly. Let's start with a 2D example. So, say I have a triangle (obviously described by 3 vertices). Now my question is, how do I convert an (x,y) coordinate to the (u,v) coordinate of my texture? Since x,y could be any value in [0,n], with n being any real number, considering that it is in object space. But my texture coordinates are in [0,1]. How do I know how to map, let's say, (3,4) to (u,v)? If I know how to map the object coordinates to the texture coordinates, it is easy to interpolate the values, I assume (either using bilinear interpolation or barycentric interpolation). And then how would this work for 3D? Let's say in this case we would have a pyramid with 5 vertices (4 bottom, 1 tip). I guess the procedure would be similar, with the exception that I now have an additional depth value. But how does the mapping of a 2D texture work on a 3D object, when I don't have nice flat surfaces like on a pyramid, but instead have a circular surface, like a teapot? I hope I'm clear in my questions. I'm still a little confused myself. I just don't quite get the mathematical background of texture mapping. It would be enough if you could point me to some website with a good explanation, maybe with clear graphics and a step by step description. Thanks for your time!"} {"_id": 34, "text": "Where can I learn about Substance maps for 3ds Max 2012? A new feature in 3ds Max 2012 is Substance procedural textures. Are there any good online libraries or resources for substance maps?"} {"_id": 34, "text": "Basic terrain shader without using external texture I have this (right now I have the height map in a size x size 2D array and a 1D vector too). What I am trying to achieve is something like this, without using any textures, only plain colors. So basically smooth transitions and some shadow (using shaders).
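For the texture-mapping question above: there is no global (x,y) to (u,v) formula. Each vertex is assigned its own (u,v) by the modeller, and the interior of every triangle is filled in by barycentric interpolation, in 3D exactly as in 2D (a curved surface like a teapot is just many small triangles, each with per-vertex UVs). A minimal 2D sketch, with an illustrative triangle whose vertices carry made-up UVs:

```python
def barycentric(p, a, b, c):
    # Solve p = wa*a + wb*b + wc*c with wa + wb + wc = 1 (2D triangle).
    det = (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])
    wb = ((p[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (p[1] - a[1])) / det
    wc = ((b[0] - a[0]) * (p[1] - a[1]) - (p[0] - a[0]) * (b[1] - a[1])) / det
    return 1.0 - wb - wc, wb, wc

def interpolate_uv(p, verts, uvs):
    # Weight each vertex's (u,v) by the barycentric coordinates of p.
    wa, wb, wc = barycentric(p, *verts)
    return tuple(wa * ua + wb * ub + wc * uc
                 for ua, ub, uc in zip(*uvs))

verts = [(0.0, 0.0), (6.0, 0.0), (0.0, 8.0)]   # object-space triangle
uvs   = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]   # per-vertex texture coords
print(interpolate_uv((3.0, 4.0), verts, uvs))  # (0.5, 0.5)
```

Note how the question's example point (3,4) only gets a (u,v) relative to a particular triangle and its vertex UVs; GPUs perform this interpolation (plus a perspective correction) in the rasterizer.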
My vertex shader looks like this: #version 330 layout(location = 0) in vec3 Position; layout(location = 1) in vec3 Normal; layout(location = 2) in vec3 Color; out vec3 fragmentNormal; out vec4 ex_color, pos; out vec3 N; out vec3 v; void main() { pos = vec4(Position, 1); ex_color = vec4(Color, 1); fragmentNormal = Normal; v = vec3(gl_ModelViewMatrix * pos); N = normalize(gl_NormalMatrix * Normal); gl_Position = gl_ModelViewProjectionMatrix * vec4(Position, 1); } I have normals for all the vertices. Color is set simply in the C code based on height. Here is the fragment shader: in vec3 N; in vec3 v; in vec4 ex_color; void main(void) { vec3 L = normalize(gl_LightSource[0].position.xyz - v); vec4 Idiff = gl_FrontLightProduct[0].diffuse * max(dot(N, L), 0.0); Idiff = clamp(Idiff, 0.0, 1.0); gl_FragColor = Idiff * ex_color; } So I guess my problem is what formula should I use to mix the colors. I think I don't need to set the colors in the C code but in the shaders. Update: Here is the wireframe of the terrain. Update 2: Based on Babis' answer the result is: So the gradient is not \"projected\" onto the surface as I would like. What could cause this? Maybe my question wasn't clear."} {"_id": 34, "text": "Is it possible to use unnormalized texture coordinates from a GLES2 GLSL fragment shader? I want to look up a texel from my GLES2 GLSL fragment shader using unnormalized texture coordinates ([0, w], [0, h] instead of [0, 1], [0, 1]). The reason is that this texture is used as a look up table and I get precision problems with normalized coordinates. I see that GL_TEXTURE_RECTANGLE is not supported without extensions and neither is texelFetch(), so I have ruled out those options. Thanks!"} {"_id": 34, "text": "Total Texture memory size iOS OpenGL ES My team is running into an issue where the amount of texture memory allocated via glTexImage2D is high enough that it crashes the app (at about 400 MB for iPhone 5).
We're taking steps to minimize the texture allocation (via compression, using fewer bits per channel, and doing procedural shaders for VFX etc.). Since the app crashed on glTexImage2D, I felt like it's running out of texture memory (as against virtual memory). Is there any documentation or guideline on the recommended texture memory usage by an app (not just \"optimize your texture memory\")? AFAIK on iOS devices (and many Android devices) there's no dedicated VRAM, and our app process is still well within the virtual memory limit. Is this somehow related to the size of physical RAM? My searches so far have resulted only in info on max texture size and tricks for optimizing texture usage and such. Any information is appreciated."} {"_id": 34, "text": "OpenGL ES 2.0. Sprite Sheet Animation I've found a bunch of tutorials on how to make this work on OpenGL 1 & 1.1, but I can't find it for 2.0. I would work it out by loading the texture and using a matrix on the vertex shader to move through the sprite sheet. I'm looking for the most efficient way to do it. I've read that when you do the thing I'm proposing you are constantly changing the VBOs and that that is not good. Edit: Been doing some research myself. Came upon these two: Updating Texture and, referring to the one before, PBOs. I can't use PBOs since I'm using the ES version of OpenGL, so I suppose the best way is to make FBOs, but what I still don't get is if I should create a sprite atlas batch and make an FBO load texture for each frame, or if I should load every frame into the buffer and change just the texture directions."} {"_id": 34, "text": "How can I determine the extreme color values in a texture? I am looking for a way to determine the most extreme color values for all of the texels in a texture. So for a texture consisting only of black and white texels, the extreme values should be (0,0,0) and (1,1,1) expressed in RGB format. For a color gradient from red to green I should get the values (1,0,0) and (0,1,0).
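Just to pin down the definition being asked for in the extreme-colour question (the pair of texels farthest apart in RGB space), here is a brute-force baseline; a GPU version would instead run a parallel reduction, e.g. repeatedly downsampling while carrying per-pixel min/max candidates. The data below is made up for illustration:

```python
def extreme_texels(texels):
    # O(n^2) scan: return the two texels farthest apart in RGB space.
    best_pair, best_d2 = (texels[0], texels[0]), -1.0
    for i, p in enumerate(texels):
        for q in texels[i + 1:]:
            d2 = sum((a - b) ** 2 for a, b in zip(p, q))
            if d2 > best_d2:
                best_d2, best_pair = d2, (p, q)
    return best_pair

texels = [(0.2, 0.2, 0.2), (0.0, 0.0, 0.0), (1.0, 1.0, 1.0), (0.5, 0.1, 0.9)]
print(extreme_texels(texels))  # ((0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
```

Note the pairwise version is quadratic; if per-channel min/max (rather than the farthest pair) is acceptable, a single linear pass, or a standard GPU min/max mip chain, suffices.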
Now obviously I could do this on the CPU by iterating over all the pixels/texels of the texture and keeping track of the color values found to be farthest apart from each other, but this is probably relatively slow, so I am looking for a way to do this using GPU shaders. Is this possible using shaders? I am not experienced with GPGPU, so a solution in HLSL/GLSL would be preferred. Or maybe there is a fast algorithm I could use on the CPU?"} {"_id": 34, "text": "How to use large texture atlases in my shaders on mobile devices This is a frustrating discovery. I have ported my desktop game to mobile and discovered that floating point precision in my shaders is not good. I have large animations which I store per frame on a texture atlas. I try to keep as many non-unique frames in their own sections and piece them together, e.g. a sword that doesn't animate for 4 frames is a node that attaches to the hand piece. This preserves memory. However the texture atlases are so big that, when I use shaders, it breaks the animation completely.
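The precision loss in the atlas question is measurable: on many mobile GPUs a mediump float is IEEE half precision, whose 11-bit significand gives a representable step of 2^-11 near 1.0, already larger than one texel of a 4096-wide atlas (2^-12), so adjacent columns snap to the same coordinate. A quick check using Python's half-precision struct format (the 4096 atlas width is illustrative):

```python
import struct

def as_half(x):
    # Round-trip through IEEE 754 binary16, roughly what storing a
    # highp value in a mediump float does on many mobile GPUs.
    return struct.unpack('e', struct.pack('e', x))[0]

atlas_w = 4096
texel = 1.0 / atlas_w
u_a = as_half(4000 * texel)   # texel column 4000
u_b = as_half(4001 * texel)   # the very next column
print(u_b - u_a)  # 0.0: both columns collapse to the same half float
```

This is why the usual workarounds are smaller atlas pages, doing the frame-offset arithmetic on the CPU (or in the vertex shader at highp) and passing final UVs down, or keeping coordinates relative to the frame origin so they stay near zero where half floats are dense.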
Here's an example of one of the shaders that wraps the animated backgrounds (scrolling clouds): precision highp float; precision highp int; precision highp sampler2D; varying vec2 vTexCoord; varying vec4 vColor; uniform sampler2D texture; // this script animates and wraps an animated background from one atlas uniform float x; // frame start x uniform float y; // frame start y uniform float w; // frame width uniform float h; // frame height uniform float offsetx; // wrap by x amount uniform float offsety; // wrap by y amount void main() { vec2 origin = vec2(x, y); vec2 size = vec2(w, h); // Get the texture coordinate vec2 texCoord = vTexCoord; // Make its lower left be at (0,0) and its upper right be at (1,1) texCoord = (texCoord - origin) / size; // Apply the offset texCoord = texCoord + vec2(offsetx, offsety); // Apply the wrapping texCoord = fract(texCoord); // Convert back to texture atlas coordinates texCoord = (texCoord * size) + origin; gl_FragColor = texture2D(texture, texCoord) * vColor; } For most backgrounds, the animations are small and don't cause problems. But some backgrounds are large and their texture atlas is large. The floating point precision breaks down and the animation tears. I don't have much mobile experience. How can I use large texture atlases for animations without the x y coordinates breaking due to loss of floating point precision?"} {"_id": 35, "text": "How can I mod Minecraft 1.7.9? I've looked up a lot of tutorials on YouTube and all of them only work for versions of Minecraft prior to 1.7.9. I first got a Minecraft Coder Pack (MCP) off of this website, but then realized that it only decompiles Minecraft 1.6.4. Then I found a more recent MCP (that's not on the website for some reason) and it is version 9.03, downloaded here. This decompiles Minecraft version 1.7.2 (when I followed this video's instructions, I run the decompile.bat file and it says Json file not found in C:\Users\mike\AppData\Roaming\.minecraft\versions\1.7.2\1.7.2.json). Basically I can't decompile Minecraft 1.7.9, but I can decompile older versions.
However, I don't have any older versions downloaded onto my computer. I have only 1.7.9. Then I tried using Forge, but realized that most videos were using versions of Minecraft prior to 1.6.4, meaning they use the bin folder that does not exist anymore. Even after trying to figure that out as well, the decompiling would never work. I tried to do what this video did, but couldn't replicate it. Then I finally looked at this video about using Forge and I could replicate it, but this didn't decompile Minecraft. It just set up a workspace in Eclipse that I'm not sure how to use. TL;DR: I can decompile Minecraft 1.6.4 and 1.7.2 but I can't decompile version 1.7.9. Should I download an older version of Minecraft, wait for an MCP for 1.7.9, or something else? Is there something I'm missing, where I actually can decompile and mod Minecraft 1.7.9?"} {"_id": 35, "text": "Is there a way to drain durability from an item instead of a sword, for instance? I am modding Minecraft 1.8 using Eclipse with Forge, and I just wondered, is there a way to drain durability from a battery, for instance, instead of a piece of armor when hit? I have tried things like getting an item from the player's inventory and attempting to reduce its durability, so that would be a possibility; however, I was unable to find a way to do so."} {"_id": 35, "text": "How to get an AbstractClientPlayer for all players? I'm working on a Minecraft Forge mod (1.8.8). I have a custom (ownable) entity and want to set its texture to the texture of its owner, because it's a mini version of the owner. I found out that I can get the texture of players with AbstractClientPlayer.getLocationSkin(), but I can't figure out how to access either EntityOtherPlayerMP or EntityPlayerSP, which extend AbstractClientPlayer, in my custom renderer. Is there a way to get all AbstractClientPlayer, regardless of SP or MP?
I can access the GameProfile and I have the EntityPlayer of the owner."} {"_id": 35, "text": "Where do I store (or how do I refer to) textures for custom blocks? I'm making my first foray into Minecraft modding on Ubuntu using Minecraft Forge. I'm finding it a little hard to get started as many of the tutorials seem to refer to older versions of Minecraft (e.g. that use the BaseMod base class, which now appears to be deprecated). I've created a simple block type: package net.minecraft.block; import net.minecraft.block.*; import net.minecraft.block.material.*; public class RolyBlock extends Block { public RolyBlock(int par1, Material par2Material) { super(par1, par2Material); } } and added the following static initialiser to the Block class: public static final Block rolyBlock = new RolyBlock(174, Material.rock).setHardness(1.5F).setResistance(10).setStepSound(soundStoneFootstep).setUnlocalizedName(\"rolyBlock\").setCreativeTab(CreativeTabs.tabBlock).setTextureName(\"roly_block\"); I created a 256x256 PNG file called roly_block and placed it in Forge's MCP folder, in the same location as the other textures. However, when I start Minecraft (v. 1.6.5) via Eclipse, it is unable to locate the texture: 2014-03-14 11:38:34 [SEVERE] [Minecraft-Client] Using missing texture, unable to load minecraft:textures/blocks/roly_block.png I suspect this is because I put the texture in the directory forge/mcp/temp/src/minecraft/assets/minecraft/textures (where I found the other textures), where this temp folder is presumably a decompiled version of the Minecraft 1.6.5 .jar file. I can't actually remember how I ended up with this folder, though. My question is how do I add a new texture to my Eclipse mod development environment? Must I rebuild a .jar each time, and if so how? Or can I have Minecraft load textures from a directory? (I'm a complete Minecraft newbie, so there are many things about the configuration dev environment for Minecraft that are confusing me.
For example, if I run recompile.py from Forge's MCP folder, I get a swathe of errors relating to IconRegister not being found.)"} {"_id": 35, "text": "How do I make the player invulnerable using Fabric modding? I am making a Minecraft mod using the Fabric mod loader and I want to make the player invulnerable for a duration when they eat a certain food item. I don't know how to manipulate anything concerning entities or players, so I am asking for some guidance."} {"_id": 35, "text": "How to morph a player character into another entity in MC JAVA 1.16/1.17? I want to add morphs into my game. Is there any command, code, or resource I can use to make someone morph into an entity?"} {"_id": 35, "text": "Why does Minecraft have such terrible backward compatibility? Every release of Minecraft, even a minor release, is almost guaranteed to break some plugins or mods. Why is this so? I am asking this so that we could avoid the same architecture problem as Minecraft. I mean, why can't the upgrades also support previous versions of Minecraft? Minecraft requires an EXACT version number match in order to connect. I am pretty sure the protocol can be designed so that clients ignore unsupported commands. For example, if there is a new block introduced and the client doesn't understand it, they can just fall back to a default dummy block so that players can still play but start seeing dummy wireframe blocks or so. Once they start seeing too many wireframe blocks they know they should upgrade."} {"_id": 35, "text": "How can I read a portion of one Minecraft world file and write it into another? I'm looking to read block data from one Minecraft world and write the data into certain places in another. I have a Minecraft world, let's say \"TemplateWorld\", and a 2D list of Point objects. I'm developing an application that should use the x and y values of these Points as x and z reference coordinates from which to read constant-sized areas of blocks from the TemplateWorld.
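For reference on the "breakdown by hex address" part of this question: the Anvil region files that back a Minecraft world (r.X.Z.mca) do have a documented fixed layout. The first 4 KiB is a location table of 1024 big-endian 4-byte entries, indexed by the chunk's coordinates within the 32x32 region; each entry packs a 3-byte sector offset and a 1-byte sector count. A hedged sketch of reading one entry (illustrative only, not a full parser):

```python
import struct

def chunk_location(header, cx, cz):
    # The first 4096 bytes of a region file hold 1024 entries of
    # (3-byte sector offset, 1-byte sector count), big-endian,
    # indexed by the chunk's coordinates within the 32x32 region.
    index = (cx & 31) + (cz & 31) * 32
    entry = header[index * 4:index * 4 + 4]
    (word,) = struct.unpack('>I', entry)
    offset_sectors, sector_count = word >> 8, word & 0xFF
    return offset_sectors * 4096, sector_count * 4096  # byte offset, max size

# Fake header for demonstration: chunk (1, 0) lives at sector 2, 1 sector long.
header = bytearray(4096)
header[4:8] = (2 << 8 | 1).to_bytes(4, 'big')
print(chunk_location(bytes(header), 1, 0))  # (8192, 4096)
```

The chunk payload at that offset is zlib-compressed NBT, so in practice an existing NBT library is far less error-prone than hand-decoding block arrays.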
It should then write these blocks into another Minecraft world at constant y coordinates, with x & z coordinates determined based on each Point's index in the 2D list. The issue is that, while I've found a decent amount of information online regarding Minecraft world formats, I haven't found what I really need: more of a breakdown, by hex address, of where everything is. For example, I could have the TemplateWorld actually be a .schematic file rather than a world; I just need to be able to read the bytes of the file, know that the actual block data always starts at a certain address (or after a certain instance of FF, etc.), and how it's stored. Once I know that, it's easy as pie to just read the bytes and store them."} {"_id": 36, "text": "Projectile Rotation During Flight (Tank Game) I have a tank firing projectiles into the air. I want the projectiles to be rotated relative to their position along their flight path. See the diagram I drew in the image below.
At any given point I know the x and y of the projectile; z represents initial velocity, and theta is also known."} {"_id": 36, "text": "UE4 Event Tick and getting Forward Vector I'm trying to make a game where my character rotates towards the cursor (that works). He can choose between a few weapons (also works). One of the weapons works like a machine gun. My idea was to make a simple solution with Event Tick > gate (opened by pulling LMB and closed by releasing it) > gate (closed after firing and opened after some delay) > spawning the projectile, which leads to a sequence that closes/opens the second gate. It doesn't work perfectly. The projectile spawns with the right scale and location, but the rotation is always pointing straight forward. If the exact same blueprint is attached to anything else (straight up to the LMB pull or a timer), it works perfectly. However, doing the LMB pull defeats the whole purpose, as it's not a machine gun any more, and a timer... is less than perfect I believe. I liked my solution using gates but it just doesn't want to work. For example, this bit works with no problem (even though the spawning and rotation code is literally copy pasted). Does anyone have an idea on how to fix this?"} {"_id": 36, "text": "Rotate final image with projection matrix I'm trying to use space in my shadow (depth) maps in a more efficient way. If I could freely rotate the final image I get with the projection matrix I could save a lot of pixels, but I don't know if it is possible to do so. Is it?"} {"_id": 36, "text": "How do I determine the forward right direction when I have the \"up\" vector? Unity I am trying to fix a gameobject, e.g. a cube (which I am using as a sensor), on my character. I need the cube to rotate (with orientation control from the inspector; I have a public quaternion rotation variable set). I have identified 3 vertices using ray casts to fix my sensor onto the character.
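For the tank-projectile question above: the sprite's rotation at any instant is simply the direction of the velocity vector, and with the launch speed z and angle theta known, the velocity follows from basic ballistics. A sketch assuming a y-up world and an illustrative gravity constant g:

```python
import math

def projectile_angle(speed, theta_deg, t, g=9.81):
    # Velocity components at time t for a launch at angle theta:
    # vx is constant, vy decays under gravity (y-up convention).
    vx = speed * math.cos(math.radians(theta_deg))
    vy = speed * math.sin(math.radians(theta_deg)) - g * t
    return math.degrees(math.atan2(vy, vx))

print(round(projectile_angle(20.0, 45.0, 0.0), 1))    # 45.0 at launch
apex_t = 20.0 * math.sin(math.radians(45.0)) / 9.81
print(projectile_angle(20.0, 45.0, apex_t))           # approximately 0 at the apex
```

In an engine that already integrates the physics, the same idea reduces to `atan2(velocity.y, velocity.x)` each frame, so the closed-form version is only needed when the motion is driven analytically.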
I already have determined the normal of these vertices, which will be the \"up\" direction of the cube. I am wondering how I can determine the forward right vector to use in the LookRotation() function. I need to basically be able to rotate the cube ON the character even as the character moves during animation. This is my code, and as of now there is no rotation happening. I'm sure I am missing something and would really appreciate help. I really hope this is enough explanation. using System.Collections; using System.Collections.Generic; using UnityEngine; public class orientation : MonoBehaviour { // importing from Vertex Finder Script to use InterpolatedNormalInWorldSpace value private VertexFinder script; public Transform target; // to edit orientation of sensor in Unity Inspector public Quaternion rotation; // edit // Start is called before the first frame update void Start() { script = GetComponent<VertexFinder>(); } Quaternion TurretLookRotation(Vector3 approximateForward, Vector3 exactUp) { Quaternion rotateZToUp = Quaternion.LookRotation(exactUp, approximateForward); Quaternion rotateYToZ = Quaternion.Euler(90f, 0f, 0f); return rotateZToUp * rotateYToZ; } // Update is called once per frame void Update() { Vector3 relativePos = target.position - transform.position; // aligning axes of sensor to z axis and y axis respectively Quaternion rotation1 = TurretLookRotation(relativePos, script.interpolatedNormalInWorldSpace); // setting rotation of sensor to values entered in Inspector with respect to the above aligned axes transform.rotation = rotation1 * rotation; // edit } }"} {"_id": 36, "text": "How do I get a relative rotation between two positions? For example, I have two objects a & b. a's position is (12, 16, 2) and b's position is (14, 16, 3). I need to know how to find a relative rotation between them. By relative rotation I mean this: a's pitch and yaw so a is looking at b. I have no helping methods like lookingAt() etc. I only know a's x, y, z, yaw, pitch and b's x, y, z. Any help?"} {"_id": 36, "text": "XNA Rotating a Rectangle?
I am in the process of making a giant shooter game and I have got to the point where I needed to use rectangles to detect bullets and giants hitting the player. I did that; however, if you look at this image it shows the giant's and the player's rectangles. As you can see they are not rotating with the player; any idea how to fix this? Also, yes, I know they are not the same size as the sprites; that is because I set the size, as they were too big before. Thanks for any help :)"} {"_id": 36, "text": "THREE.JS why is the rotation only applied on the last axis used? my function: function rotateAroundWorldAxis2(object, radians1, radians2) { object.rotWorldMatrix.makeRotationX(radians1).makeRotationZ(radians2); /* old code for Three.JS pre r54: rotWorldMatrix.multiply(object.matrix) */ /* new code for Three.JS r55 (pre-multiply): */ object.rotWorldMatrix.multiply(object.matrix); object.matrix.copy(object.rotWorldMatrix); /* old code for Three.js pre r49: object.rotation.getRotationFromMatrix(object.matrix, object.scale) */ /* old code for Three.js pre r59: object.rotation.setEulerFromRotationMatrix(object.matrix) */ /* code for r59: */ object.rotation.setFromRotationMatrix(object.matrix); } my object was initialized like this: object.rotWorldMatrix = new THREE.Matrix4();"} {"_id": 36, "text": "Get 3D quad rotation matrix from points I have 4 3D points which represent an even quad in space. (So 3 points are sufficient.) I need to get all the individual transformations (translation, rotation, dimensions) so that I can build that quad in my CSS 3D engine. So the default normal vector for a quad is [0, 0, 1].
What I have so far: 4 points p0, p1, p2, p3. Normal: normal = cross(p1 - p0, p2 - p0). Axis and angle: axis = cross([0, 0, 1], normal), angle = arccos(dot([0, 0, 1], normal)). Rotation matrix: rotationMatrix = matrix4.fromAxisAngle(axis, angle). Width and height: width = sqrt((p2.x - p1.x)^2 + (p2.y - p1.y)^2 + (p2.z - p1.z)^2), height = sqrt((p1.x - p0.x)^2 + (p1.y - p0.y)^2 + (p1.z - p0.z)^2). Translation: x = (p0.x + p1.x + p2.x + p3.x) / 4, y = (p0.y + p1.y + p2.y + p3.y) / 4, z = (p0.z + p1.z + p2.z + p3.z) / 4. It almost works, but some rotations are wrong, or more specifically one rotation axis is missing. This is logical because the rotation is only built from the normal. So how can I get the full rotation? Here you see that the top and bottom quads are just rotated around the x axis but the y axis rotation is missing"} {"_id": 36, "text": "Camera movement in Super Hexagon I do realize there is no camera in OpenGL ES 2, but from my understanding you can simulate one using view and projection matrices. I'm using Android, by the way. Here's a video of the game, in case you are not familiar with it: https://www.youtube.com/watch?v=5mDjFdetU28 I am trying to understand how the world camera is rotated and tilted. Is that done purely by rotating and tilting the view matrix? So you'd have tilt and rotationSpeed fields that you'd use each frame in something like rotate(viewMatrix, angle * rotationSpeed) followed by rotate(viewMatrix, tilt)? How would you achieve a similar effect?"} {"_id": 37, "text": "cocos3d versus Unity for simple IOS 3D games?
Wondering if anyone here happens to have experience in doing some simple 3D based games apps for IOS, using cocos3d & Unity, and could give some pointers... questions I have are: GENERAL 1) It seems currently cocos3d has the most traction in terms of a free 3D games engine for IOS development? 2) If one wanted to step up from the free IOS games engine to a commercial one, it would seem Unity is a popular choice; however you would then not really be doing Apple Objective-C development, but rather Unity development, and pushing a button to pop out an IOS deployable artifact, correct? FOR MY SPECIFIC REQUIREMENT If I was interested in doing the following: develop a relatively simple 3D application for iPhone/iPad; have a small 'world' such as a room with a few basic items in it (table, cupboard); ability to 'drop' a basketball in the room and have it bounce around based on a physics engine & perhaps some user input (not sure what, let's say control wind direction); monitor it and react accordingly, e.g. say if it goes into a net which is in the room then add a point to the scoreboard. At a high level how would one do this using Unity versus using cocos3d? For example: 3) Which would be quicker to develop such a basic iPhone/iPad game simulation: Unity or cocos3d? 4) Would the 'building your world room' approach be different? Like in Unity would you build tables/cupboards within the Unity IDE, which would make it very quick, whereas with cocos3d would you have to do this programmatically? 5) Would the programming aspect of letting the ball go and monitoring it be different between the two? Or would you roughly in both cases just be setting up the world, releasing the ball in a certain position, and then engaging the physics engine? 6) Any advice/guidance re which tool approach to use? (Unity versus cocos3d) 7) Re using cocos3d for my requirement, what additional tools would you recommend above/beyond Xcode? (e.g.
just perhaps some sort of 3D design tool to help model the room and import/save somehow for use in cocos3d.) Basically I'm a hobbyist iOS developer who wants to do some 3D, so the concept of using a free open source approach such as cocos3d sounds great as (a) it's free, (b) I keep using my existing skill set re Objective-C etc., BUT the biggest unknown to me is whether by choosing cocos3d I would be giving myself say 20 hours of work to develop a 3D app animation that could be developed in 1 hour in one of these commercial products (e.g. Unity)?"} {"_id": 37, "text": "Game state from Web to iPhone version? I'm currently in the planning stages for a Web and iPhone game. I'm developing it with Adobe AS3 Air. I was wondering if it's possible for people to be able to play the Web version, save their state of play and then pick up again where they left off on the iPhone version? (and vice versa) How would that be achieved? The Web version will probably be on Facebook, so could I link both versions through their FB UID?"} {"_id": 37, "text": "Game server for an android iOS turn based board game I am currently programming an iPhone game and I would like to create an online multiplayer mode. In the future, this app will be ported to Android devices, so I was wondering how to create the game server? First of all, which language should I choose? How do I make a server able to communicate both with programs written in Objective-C and Java? Then, how do I effectively do it? Is it good if I open a socket per client (there'll be 2)? What kind of information should I send to the server? To the clients?"} {"_id": 37, "text": "Connect 4 Iphone game I'm new to game development and development in general. I want to make a Connect 4 game. I have made a few simple game tutorials and I decided I wanted to make my own game now. I figured a Connect 4 game should be a simple start for me; can anyone help me out with how to go about this, please? Should I use Cocos2d or write it in ObjC?
Thanks, David H."} {"_id": 37, "text": "Help implementing virtual d-pad Short Version: I am trying to move a player around on a tilemap, keeping it centered on its tile, while smoothly controlling it with a SneakyInput virtual joystick. My movement is jumpy and hard to control. What's a good way to implement this? Long Version: I'm trying to get a tilemap based RPG \"layer\" working on top of cocos2d iphone. I'm using SneakyInput as the input right now, but I've run into a bit of a snag. Initially, I followed Steffen Itterheim's book and Ray Wenderlich's tutorial, and I got jumpy movement working. My player now moves from tile to tile, without any animation whatsoever. So, I took it a step further. I changed my player.position to a CCMoveTo action. Combined with CCFollow, my player moves pretty smoothly. Here's the problem, though: Between each CCMoveTo, the movement stops, so there's a bit of jumpiness introduced between movements. To deal with that, I changed my CCMoveTo into a CCMoveBy, and instead of running it once, I decided to have it CCRepeatForever. My plan was to stop the repeating action whenever the player changed directions or released the d-pad. However, when the movement stops, the player is not necessarily centered along the tiles, as it should be. To correctly position the player, I use a CCMoveTo and get the closest position that would put the player back into the proper position. This reintroduces an earlier problem of jumpiness between actions. What is the correct way to implement a smooth joystick while smoothly animating the player and keeping it on the \"grid\" of tiles? Edit: It turns out that this was caused by a \"Bug Fix\" in the cocos2d engine."} {"_id": 37, "text": "How can I scroll sprites when swiping using Cocos2D?
I'm adding 3 sprites (layers) to a CCParallaxNode: CCSprite *BGLayer = [CCSprite spriteWithFile:@\"Layer.png\"]; [backgroundNode addChild:BGLayer z:2 parallaxRatio:layer1Speed positionOffset:ccp(screenSize.width/2, screenSize.height/2)]; How can I create the scroll when swiping?"} {"_id": 37, "text": "iPhone image asset recommended resolution dpi format I'm learning iPhone development and a friend will be doing the graphics animation. I'll be using cocos2d most likely (if that matters). My friend wants to get started on the graphics, and I don't know what image resolution or dpi or formats are recommended. This probably depends on if something is a background vs. a small character. Also, I know I read something about using @2x in image file names to support high-res iPhone screens. Does cocos2d prefer a different way? Or is this not something to worry about at this point? What should I know before they start working on the graphics?"} {"_id": 37, "text": "Game uses gamecenter in iphone can I design a fallback for 3G and prior devices?
If I develop a game using Game Center, does iTunes Connect lock me in to only supporting the 3GS and above with iOS 4.0 and above, or will it still allow sales of my game on older devices (as long as I build in a fallback so it never calls the Game Center framework)?"} {"_id": 37, "text": "cocos2d exclude GUI layer transformation I have a game layer that should be pannable and pinchable, and a GUI layer as an overlay that's excluded from the transformations. I added the pinch and pan gesture recognizers to the view and set the view.transform like so: view.transform = CGAffineTransformConcat(panTransformation, pinchTranslation); But it actually scales/moves the GUI layer too, of course, as it is a layer added to that view as well. Sure, I could do the transformations on the layer itself, but the touch coordinates are scaled as well, and calculating where the user actually tapped based on scale and translation looks like a big overhead to me. Can I somehow add a separate view and make the GUI layer its child? Or in any other way add a GUI layer that's excluded from the transformations?"} {"_id": 38, "text": "Is creating a separate thread for each game session a bad idea? I'm currently working on an IOCP game server. My game is basically just like Diablo 3: 1 to 4 players join a separate session. I did the basic IOCP preparations and now I'm working on the game session class. The reason why I'm stuck right now is that I'm not sure if I should create a separate thread for each game session. This concept is the first idea that I came up with, but I can't come up with other ideas to avoid creating so many threads when there are so many game sessions. So here's my question. Should I create a separate game logic loop (thread) for each game session? Because that sounds so inefficient when it comes to the situation where only one player joins each session. If there are some other ways to design the server, please let me know what I should look for to study.
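One alternative worth studying (a sketch, not a prescription): a single simulation thread can tick every active session in one loop, which avoids per-session thread overhead entirely. The class and function names below are invented for illustration; `max_ticks` exists only so the demo loop can terminate:

```python
import time

# Illustrative single-threaded session ticker (all names invented here).
class GameSession:
    def __init__(self, session_id):
        self.session_id = session_id
        self.players = []

    def update(self, dt):
        # Advance this session's game logic by dt seconds.
        pass

def run_server(sessions, tick_rate=30, max_ticks=None):
    """One loop ticks all sessions; no per-session thread needed.
    max_ticks limits the loop for demos/tests; a real server would run until shutdown."""
    dt = 1.0 / tick_rate
    ticks = 0
    while sessions and (max_ticks is None or ticks < max_ticks):
        start = time.perf_counter()
        for session in list(sessions):
            session.update(dt)
        ticks += 1
        elapsed = time.perf_counter() - start
        if elapsed < dt:
            time.sleep(dt - elapsed)   # don't spin; yield the CPU until the next tick
    return ticks
```

A thread pool over groups of sessions is a middle ground if one thread can't keep up, but thread-per-session rarely scales.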
Thanks for your help in advance and sorry for my bad English."} {"_id": 38, "text": "What type of loop code on game engines? Recently I worked on a game in the SpriteKit engine. My question is not about SpriteKit, but generally about game engines. When I write a loop and run it (e.g. while (i < 100000)), my CPU usage goes to 100%, but when I run the test game there is no such change in CPU usage. Why is this so? (We know game engines run in a loop that includes logic and graphics commands.)"} {"_id": 38, "text": "Implement an upper FPS limit in the gameloop I implemented my game loop as described in deWiTTER's article, using the last approach with an unlimited FPS and a constant game speed. My problem is that the unlimited FPS cranks up my CPU usage to nearly 100%. I understand that this is desired (as the hardware does as much as it can), but for my game, it's not really necessary (it's pretty simple). So I'd like to limit the FPS to some upper bound. Consider my game loop: private static final int UPDATES_PER_SECOND = 25; private static final int WAIT_TICKS = 1000 / UPDATES_PER_SECOND; private static final int MAX_FRAMESKIP = 5; long next_update = System.currentTimeMillis(); int frames_skipped; float interpolation; // Start the loop while (isRunning) { // Update game frames_skipped = 0; while (System.currentTimeMillis() > next_update && frames_skipped < MAX_FRAMESKIP) { // Update input, move objects, do collision detection... // Schedule next update next_update += WAIT_TICKS; frames_skipped++; } // Calculate interpolation for smooth animation between states interpolation = ((float)(System.currentTimeMillis() + WAIT_TICKS - next_update)) / ((float)WAIT_TICKS); // Render events repaint(interpolation); } How would I implement a maximum FPS? If I implement the repaint the way I implemented the update (or use a sleep instead of \"do nothing\" cycles), then the FPS is locked, right? That is not what I want.
I want the game to be able to work with a lower FPS (just as it does now), but limit the FPS to a maximum of, say, 250."} {"_id": 38, "text": "Speed, delta time and movement player.vx = scroll_speed * dt // Update positions player.x += player.vx player.y += player.vy I have a delta time in milliseconds, and I was wondering how I can use it properly. I tried the above, but that makes the player go fast when the computer is fast, and the player go slow when the computer is slow. The same thing happens with jumping. The player can jump really high when the computer is faster. This is sort of unfair, I think. Should I be doing this some other way? Thanks."} {"_id": 38, "text": "Game loops using Hard realtime systems vs Soft realtime systems I have read the article here about realtime systems and am looking for examples specific to game loops. Am I correct in saying: Hard realtime systems will lag and slow down gameplay, causing slow motion, if processing of AI, collision detection, or user input is delayed past the rendering deadline set by the realtime system. Example: must render every 1/30 sec but processing caused a delay to 1/20 sec. Soft realtime systems will render at 30 FPS regardless of the other subsystems' processing, but if there is a delay in AI, collision detection, or user input, the game will be presented in stop motion instead of slow motion every 1/30 sec."} {"_id": 38, "text": "How should I structure my menu game loop? I'm trying to decide on how to structure my main game loop. Every example I've seen of the game loop looks a bit like this: while (true) { UpdateGame(); DrawGame(); } i.e. it ignores the menu. Should my game loop look like this: if (ShowMenu() == Play) PlayGame(); PlayGame() { while (true) { UpdateGame(); DrawGame(); } } ShowMenu() { while (true) { HandleUserInput(); DrawMenu(); } } (i.e.
the menu and game have separate loops), or should I structure my loop as such: while (true) { switch (state) { case Menu: MenuLoop(); break; case Game: GameLoop(); break; } } GameLoop() { HandleUserInput(); DrawGame(); } MenuLoop() { HandleUserInput(); DrawMenu(); } i.e. one big outermost loop. (Or does it not really matter?) For what it's worth I'm using SDL/C++ (although I'd like to think that the question is language agnostic)"} {"_id": 38, "text": "Should I bother with SDL WaitEvent? When I wrote my first application in SDL, it looked like this: while (!quit) { SDL_PollEvent(&event); switch (event) { ... } } But then one time I left my app running while I went to do something else, and when I came back my laptop was boiling hot. I checked the system monitor and one of my cores was maxed out by my program! So I did some research, and now my loop looks like this: while (!quit) { SDL_WaitEvent(&event); switch (event) { ... } } Great, so now my program doesn't max the CPU. But now I need to introduce rendering, and the problem is, currently this loop runs one iteration per event. If input and other events don't occur, my program won't render the next frame, no matter where the rendering code goes in the loop. So that's no good. I could have threads, but that seems like a lot of extra complexity when I could just do: while (!quit) { SDL_PollEvent(&event); switch (event) { ... } SDL_Delay(10); } This won't melt the CPU, and I can render at about 100 frames per second. My questions are: Which approach \"should\" I use? I don't see any reason to not just use (3). What's the point of SDL_WaitEvent, if you can't use it the moment you want to do anything other than handle events in your game loop? I assume SDL_WaitEvent does something like approach (3) under the hood, so even though ostensibly approach (3) should introduce more latency, in practice it won't. Is this correct?
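The frame-pacing arithmetic behind approach (3) can be illustrated outside SDL. This Python sketch (names invented here) caps a loop to a target frame rate by sleeping out whatever time is left of each frame period, rather than sleeping a fixed amount:

```python
import time

def run_capped_loop(frames, target_fps=60, work=lambda: None):
    """Run `frames` iterations, sleeping out the remainder of each frame period.
    Returns total wall-clock time. Illustrative frame cap, not SDL itself."""
    period = 1.0 / target_fps
    start = time.perf_counter()
    for _ in range(frames):
        frame_start = time.perf_counter()
        work()                               # polling events, update, render would go here
        spent = time.perf_counter() - frame_start
        if spent < period:
            time.sleep(period - spent)       # analogous to SDL_Delay(ms), but adaptive
    return time.perf_counter() - start
```

Sleeping `period - spent` instead of a constant keeps the frame rate steady even when the work time varies.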
How low should the integer passed to SDL_Delay be to get a responsive app?"} {"_id": 38, "text": "Why does my sprite player move faster when I move the mouse? I'm trying to develop a simple game made with Pygame (Python library). I have a sprite object which is the player, and I move it using the arrow keys. If I don't move the mouse, the sprite moves normally, but when I move the mouse, the sprite moves faster (like x2 or x3). The player object is inside the charsGroup var. I've run the game in W7 and in Ubuntu. The same thing happens in both OSes. I have more entities which move, like NPCs and bullets, but they don't get affected, just the player. Given this, I think that the problem has a direct connection with the player moving system (arrow keys). Here is the update() method of the player object: def update(self): for event in pygame.event.get(): key = pygame.key.get_pressed() mouseX, mouseY = pygame.mouse.get_pos() if event.type == pygame.MOUSEBUTTONDOWN: self.bulletsGroup.add(Bullet(pygame.image.load(\"bullet.png\"), self.rect.x + (self.image.get_width() / 2), self.rect.y + (self.image.get_height() / 2), mouseX, mouseY, 50, 50)) if key[pygame.K_RIGHT]: if not self.checkCollision(): self.rect.x += 10 else: self.rect.x -= 10 if key[pygame.K_LEFT]: if not self.checkCollision(): self.rect.x -= 10 else: self.rect.x += 10 if key[pygame.K_UP]: if not self.checkCollision(): self.rect.y -= 10 else: self.rect.y += 10 if key[pygame.K_DOWN]: if not self.checkCollision(): self.rect.y += 10 else: self.rect.y -= 10 And here is the while loop: while True: if PLAYER.healthBase < 0: GAMEOVER = True if not GAMEOVER: mapTilesGroup.draw(SCREEN) charsGroup.update() charsGroup.draw(SCREEN) npcsGroup.update() npcsGroup.draw(SCREEN) drawBullets() for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() sys.exit() if GAMEOVER: myfont = pygame.font.SysFont(\"monospace\", 30) label = myfont.render(\"GAME OVER!\", 1, (255, 255, 0)) SCREEN.blit(label, (400, 300)) freq.tick(0) pygame.display.flip() I don't know what more you can need to help me, but
anything you need (more info or code), just ask for it!"} {"_id": 38, "text": "Delta times and frame lag in the game loop Let's say we have a standard gameloop like this, in pseudocode: while (true) { dt = GetDeltaTime(); Update(dt); Render(); } Here Update(dt) either uses a true variable timestep, or it determines how many cycles of a fixed timestep physics loop to execute based on dt. Now say we have the common case where we have a mostly constant framerate except for infrequent single frame hiccups, so let's say we have dt values like 1/60, 1/60, 1/60, 1/6, 1/60, 1/60, ... By the time our GetDeltaTime() detects the larger timestep in the fourth frame, we have already rendered and presented the fourth frame! So one frame will already have been rendered with a wrong (too small) timestep no matter what we do. So if we now use the larger dt = 1/6 to render the fifth frame, my understanding is that we artificially create a second frame where a wrong timestep is used, this time a too large one. I wonder if this problem is acknowledged anywhere. Wouldn't it be better, say, to use the averaged dt over the previous few frames to combat this? Here are some pictures to illustrate what I mean. I use the example of an object moving along a fixed axis with a constant speed, using a variable timestepping scheme. The problem is essentially the same with fixed timesteps, though. The plots have time on the x axis, and the object position on the y axis. Let's say the object is moving at 1 unit/s, and the framerate is 1 Hz. This is the ideal situation. Now let's say we have a frame where the time interval is 2 instead of 1. With a classical dt based scheme, we get this: So we have one frame where the velocity is perceived too low, and one where it is perceived too high and which corrects for the velocity in the previous frame. What if we instead, say, always use a constant (or very slowly changing) dt? We get this: The perceived velocity seems smoother using this approach to me.
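The averaging idea from the question can be made concrete. A small sketch (pure Python, names mine) that smooths dt over the last few frames, so a single hiccup is spread out instead of producing one large step:

```python
from collections import deque

class DeltaSmoother:
    """Average dt over the last `window` frames (illustrative sketch)."""
    def __init__(self, window=8):
        self.samples = deque(maxlen=window)

    def smooth(self, raw_dt):
        self.samples.append(raw_dt)
        return sum(self.samples) / len(self.samples)

# Example: steady 1/60 frames with one 1/6 hiccup in the middle.
smoother = DeltaSmoother(window=4)
dts = [1/60, 1/60, 1/60, 1/6, 1/60]
smoothed = [smoother.smooth(dt) for dt in dts]
```

The hiccup frame's smoothed dt lands between 1/60 and 1/6, trading a small positional error for a steadier perceived velocity, which is exactly the trade-off the question describes.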
Of course, the object position is now not the \"true\" one, but I think humans perceive abrupt changes in velocity more clearly than such small positional offsets. Thoughts? UPDATE: At least Ogre can do this: http://ogre.sourcearchive.com/documentation/1.6.4.dfsg1-1/classOgre_1_1Root_1f045bf046a75d65e6ddc71f4ebe0b2c.html So I guess I just got downvoted for people not understanding my question, which is rather frustrating."} {"_id": 39, "text": "Closest circle intersection within a triangle (Context: I use a Delaunay triangulation for pathfinding, and have circular points of interest, of varying sizes, being randomly placed around the map. Units need to be able to determine the closest such point of interest to path to, using a Dijkstra search across the triangulation.) Given a triangle, and a set of circles which overlap or intersect it, I'm trying to find the closest circle to a given point within the triangle. My first thought was to just take the distance from the point to each circle center, minus the radius. But there are some cases where that does not give the correct result. See the following image: Measuring from point P within the triangle, to circles A and B.
Using only the radius of each circle, as with d1 and d2, would result in circle A being considered closest. However, the closest edge of circle A used to measure d1 is outside of the triangle. Staying within the triangle, the closest edge from point P to circle A would be d3, which is much further away than d2, indicating that circle B is actually closest within the triangle. Is there a method for determining this difference, finding d3 for circles where d1 would be outside the triangle? How do you determine the intersection point between circle A and the triangle?"} {"_id": 39, "text": "How to know if two surfaces are in the same direction? In my code I create a mesh that is composed of multiple tiles. Those tiles can have shared edges, and I need to know if the normals of tiles that share an edge point in the same direction, because I calculate the normal of the edge to smooth the lighting. Before, I used Vector.Dot(), but sometimes I have two tiles on a curve that make it impossible to use this function, as you can see in the 4th image. The only information I have in the calculation is the 2 normals of the tiles. img 1: different direction img 2: different direction img 3: same direction img 4: same direction img 5: same direction"} {"_id": 39, "text": "exact point on a rotating sphere I have a sphere that represents the Earth, textured with real pictures. It's rotating around the x axis, and when the user clicks down it has to show me the exact place he clicked on. For example if he clicked on Singapore the system should be able to: understand that the user clicked on the sphere (OK, I'll do it with unProject); understand where the user clicked on the sphere (ray sphere collision?) and take into account the rotation; transform sphere coordinates to some coordinate system good for some web api service; ask the api (OK, this is the simpler thing for me). Some advice?"} {"_id": 39, "text": "How can I determine the first visible tile in an isometric perspective?
I am trying to render the visible portion of a diamond shaped isometric map. The \"world\" coordinate system is a 2D Cartesian system, with the coordinates increasing diagonally (in terms of the view coordinate system) along the axes. The \"view\" coordinates are simply mouse offsets relative to the upper left corner of the view. My rendering algorithm works by drawing diagonal spans, starting from the upper right corner of the view and moving diagonally to the right and down, advancing to the next row when it reaches the right view edge. When the rendering loop reaches the lower left corner, it stops. There are functions to convert a point from view coordinates to world coordinates and then to map coordinates. Everything works when rendering from tile 0,0, but as the view scrolls around the rendering needs to start from a different tile. I can't figure out how to determine which tile is closest to the upper right corner. At the moment I am simply converting the coordinates of the upper right corner to map coordinates. This works as long as the view origin (upper right corner) is inside the world, but when approaching the edges of the map the starting tile coordinates obviously become invalid. I guess this boils down to asking \"how can I find the intersection between the world X axis and the view X axis?\""} {"_id": 39, "text": "Find projecting triangle for UV mapping in RuneScape model format I am using an old RuneScape model format, also used by Thief and Quake. In this format, instead of specifying UV coordinates for each vertex of a triangle ABC, we specify a second trio of vertices PMN. Those vertices are then used to project UV texture coordinates onto ABC. Some previous Q&A explains this projection algorithm and how to reverse it. I have a mesh with UVs that I want to save in this format. To do that, I want to find a trio of vertices PMN for each triangle ABC that reproduces the correct UVs.
These PMN vertices are chosen from the collection of vertices already in my mesh. I could search every possible ordered triangle in my mesh, but that scales as O(n^3) and would be impractical for meshes with high vertex counts. How can I more efficiently find a PMN triangle that produces my desired UV coordinates on each triangle ABC?"} {"_id": 39, "text": "Find extreme points of a rotated ellipse function on a given axis How do I find the points where it is most extreme on the X and Y axis? For example, let's say I have an equation that describes an ellipse that is rotated: (x*RadiusX*Rx + y*RadiusX*Ux)^2 + (x*RadiusY*Ry + y*RadiusY*Uy)^2 = RadiusY^2 How can I find the points where it will be most extreme on each axis? Please keep in mind the values for the variables RadiusX, RadiusY, Rx, Ry, Ux, Uy are known. An example with values: ((x*1*0.70711) + (y*1*0.70711))^2 + ((x*1.414213*0.70711) + (y*1.414213*0.70711))^2 = 1.414213*1.414213"} {"_id": 39, "text": "How can I find the largest sphere that fits inside a frustum? How do you find the largest sphere that you can draw in perspective? Viewed from the top, it'd be this: Added on the frustum on the right, I've marked four points I think we know something about. We can unproject all eight corners of the frustum, and the centres of the near and far ends. So we know points 1, 3 and 4. We also know that point 2 is the same distance from 3 as 4 is from 3. So then we can compute the nearest point on the line 1 to 4 to point 2 in order to get the centre? But the actual math and code escapes me. I want to draw models (which are approximately spherical and which I have a miniball bounding sphere for) as large as possible.
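The 2D incircle that the cross-section approach reduces to has a closed form: for a triangle with vertices a, b, c and opposite side lengths A, B, C, the incentre is (A·a + B·b + C·c)/(A + B + C) and the inradius is area/s, with s the semi-perimeter. A sketch in Python for checking the math (not the asker's code):

```python
import math

def incircle(a, b, c):
    """Incentre and inradius of the triangle (a, b, c); each point is an (x, y) pair."""
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    A, B, C = dist(b, c), dist(c, a), dist(a, b)     # side length opposite each vertex
    s = (A + B + C) / 2                              # semi-perimeter
    cx = (A * a[0] + B * b[0] + C * c[0]) / (A + B + C)
    cy = (A * a[1] + B * b[1] + C * c[1]) / (A + B + C)
    area = math.sqrt(max(s * (s - A) * (s - B) * (s - C), 0.0))  # Heron's formula
    return (cx, cy), area / s
```

For the 3-4-5 right triangle with the right angle at the origin this gives centre (1, 1) and radius 1, which is easy to verify by hand.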
Update: I've tried to implement the incircle-on-two-planes approach as suggested by bobobobo and Nathan Reed: function getFrustumsInsphere(viewport, invMvpMatrix) { var midX = viewport[0] + viewport[2]/2, midY = viewport[1] + viewport[3]/2, centre = unproject(midX, midY, null, null, viewport, invMvpMatrix), incircle = function(a, b) { var c = ray_ray_closest_point_3(a, b); a = a[1]; // far clip plane b = b[1]; // far clip plane c = c[1]; // camera var A = vec3_length(vec3_sub(b, c)), B = vec3_length(vec3_sub(a, c)), C = vec3_length(vec3_sub(a, b)), P = 1 / (A + B + C), x = ((A * a[0]) + (B * a[1]) + (C * a[2])) * P, y = ((A * b[0]) + (B * b[1]) + (C * b[2])) * P, z = ((A * c[0]) + (B * c[1]) + (C * c[2])) * P; c = [x, y, z]; // now the centre of the incircle c.push(vec3_length(vec3_sub(centre[1], c))); // add its radius return c; }, left = unproject(viewport[0], midY, null, null, viewport, invMvpMatrix), right = unproject(viewport[2], midY, null, null, viewport, invMvpMatrix), horiz = incircle(left, right), top = unproject(midX, viewport[1], null, null, viewport, invMvpMatrix), bottom = unproject(midX, viewport[3], null, null, viewport, invMvpMatrix), vert = incircle(top, bottom); return horiz[3] < vert[3] ? horiz : vert; } I admit I'm winging it: I'm trying to adapt 2D code by extending it into 3 dimensions. It doesn't compute the insphere correctly; the centre point of the sphere seems to be on the line between the camera and the top left each time, and it's too big (or too close). Are there any obvious mistakes in my code? Does the approach, if fixed, work?"} {"_id": 39, "text": "a flexible data structure for geometries What data structure would you use to represent meshes that are to be altered (e.g. adding or removing faces, vertices and edges), and that have to be \"studied\" in different ways (e.g. finding all the triangles intersecting a certain ray, or finding all the triangles \"visible\" from a given point in space)? I need to consider multiple aspects of the mesh: their geometry, their topology and spatial information.
The meshes are rather big, say 500k triangles, so I am going to use the GPU when computations are heavy. I tried using arrays with vertices and arrays with indices, but I do not love adding and removing vertices from them. Also, using arrays totally ignores the spatial and topological information, which I may need when studying the mesh. So, I thought about using custom doubly linked list data structures, but I believe doing so will require me to copy the data to array buffers before going on the GPU. I also thought about using a BST, but I'm not sure it fits. Any help is appreciated. If I have been too fuzzy and you require other information, feel free to ask."} {"_id": 39, "text": "How can you tell if you are a large player on a large map or a small player on a small map? If everything is scaled by a constant factor, can you tell that the world is smaller or larger? I think you could look down at the ground and see that it's \"closer\". But how do you know what the correct distance should be? What visual clues give it away? Edit: The player controller has an FPS camera!"} {"_id": 39, "text": "Calculating angle a segment forms with a ray I am given a point C and a ray r starting there. I know the coordinates (xc, yc) of the point C and the angle theta the ray r forms with the horizontal, theta in (-pi, pi]. I am also given another point P of which I know the coordinates (xp, yp): how do I calculate the angle alpha that the segment CP forms with the ray r, alpha in (-pi, pi]? Some examples follow. I can use the atan2 function."} {"_id": 40, "text": "Passing an UAV to a Pixel Shader in DirectX11 I have a compute shader whose task is to take an input image and then blur it using a Gaussian filter approach.
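As background for such a Gaussian filter, the normalized kernel weights it samples with can be derived on the CPU; a sketch in Python (the 9-tap size and sigma are illustrative assumptions, not values from the question):

```python
import math

def gaussian_weights(taps=9, sigma=2.0):
    """Normalized 1D Gaussian weights for a separable blur (illustrative sketch)."""
    half = taps // 2
    raw = [math.exp(-(i * i) / (2.0 * sigma * sigma)) for i in range(-half, half + 1)]
    total = sum(raw)
    return [w / total for w in raw]

weights = gaussian_weights()
# The weights sum to 1, so the blur neither brightens nor darkens the image.
```

Such a table would typically be baked into a constant buffer alongside the per-tap offsets.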
The input and output for the compute shader look like this: // Input and output resources Texture2D<float4> InputMap : register(t0); RWTexture2D<float4> OutputMap : register(u0); The necessary steps for the compute shader setup are shown below: // Load SRV with the image first DX::ThrowIfFailed(CreateWICTextureFromFileEx(D3DHelper::GetDevice(), L\"Images/floor.jpg\", 0, D3D11_USAGE_DEFAULT, D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_SHADER_RESOURCE, 0, 0, 0, m_resource.GetAddressOf(), m_blurTexture.GetAddressOf())); // Create UAV D3D11_UNORDERED_ACCESS_VIEW_DESC descView; memset(&descView, 0, sizeof(descView)); descView.Format = DXGI_FORMAT_UNKNOWN; descView.ViewDimension = D3D11_UAV_DIMENSION_TEXTURE2D; descView.Texture2D.MipSlice = 0; DX::ThrowIfFailed(D3DHelper::GetDevice()->CreateUnorderedAccessView(m_resource.Get(), &descView, m_UAV.GetAddressOf())); // Blur image with compute shader Shader::SetComputeShader(\"Blur\", m_blurTexture, m_UAV, 16, 16, 1); Shader::UnbindComputeShader(); Now if we imagine that the work of the compute shader is done correctly, the last step would then be to bind the SRV/UAV containing the blurred image to the pixel shader stage for the object which will use the texture. However, this texture seems to end up completely black after the compute shader pass if I do the following binding: D3DHelper::GetDeviceContext()->PSSetShaderResources(0, 1, m_blurTexture.GetAddressOf()); I believe there are a few more steps missing, since I need to get the data from the UAV and not the SRV, since the UAV contains the output image. What would be the correct approach?"} {"_id": 40, "text": "FormatMessage not working for HRESULTs returned by Direct3D 11 I am using Windows 7 x64 and Visual Studio 2017 (v15.9.7). Say I try to create a swap chain using IDXGIFactory2::CreateSwapChainForHwnd and pass in DXGI_SCALING_NONE.
I will get the following message in the debug output (if I have enabled Direct3D debugging): DXGI ERROR: IDXGIFactory::CreateSwapChain: DXGI_SCALING_NONE is only supported on Win8 and beyond. DXGI_SWAP_CHAIN_DESC{ SwapChainType = ..._HWND, BufferDesc = DXGI_MODE_DESC1{ Width = 816, Height = 488, RefreshRate = DXGI_RATIONAL{ Numerator = 0, Denominator = 1 }, Format = B8G8R8A8_UNORM, ScanlineOrdering = ..._UNSPECIFIED, Scaling = ..._UNSPECIFIED, Stereo = FALSE }, SampleDesc = DXGI_SAMPLE_DESC{ Count = 1, Quality = 0 }, BufferUsage = 0x20, BufferCount = 2, OutputWindow = 0x0000000000290738, Scaling = ..._NONE, Windowed = TRUE, SwapEffect = ..._FLIP_SEQUENTIAL, AlphaMode = ..._UNSPECIFIED, Flags = 0x0 } MISCELLANEOUS ERROR #175 The function returns 0x887a0001 in the form of an HRESULT. If I put err,hr in the watch window, I get a nice error message there: ERROR_MOD_NOT_FOUND: The specified module could not be found. However, if I pass this HRESULT to FormatMessage, it just puts NULL in the output and returns 0. err,hr helpfully informs me that the new error is ERROR_MR_MID_NOT_FOUND: The system cannot find message text for message number 0x%1 in the message file for %2. My questions are: Why is FormatMessage not giving me the right error string (the one starting with ERROR_MOD_NOT_FOUND...)? Where is Visual Studio getting these pretty error strings from? Can I get them too? Who do I pay? PS. I am using the Windows 10 SDK version of DX11, not the older DirectX SDK version. Thus, I can't really link to dxerr.lib either. This is the code that is used to print the error message: LPTSTR error_text = NULL; FormatMessage(FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_ALLOCATE_BUFFER | FORMAT_MESSAGE_IGNORE_INSERTS, NULL, hr, MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT), (LPTSTR)&error_text, 0, NULL);
For the last few weeks, I've been trying to port a DX9 implementation of HDR rendering (tone mapping, bloom, stars, etc.) over to DX11. I believe I've got all the features working but I'm not getting good enough performance. I'd like to be able to render the whole effect in under 4ms on a fairly low powered GPU, but using D3D11 queries I'm noticing that it takes 0.5ms just to render a fullscreen quad with a solid color, and 1.0ms to render a fullscreen texture! And because tone mapping is the only part of the effect that uses a fullscreen texture, this makes it the most expensive! I'm already doing some optimisations with my limited graphics knowledge: I've disabled blending and depth testing, I make sure that the texture sampler uses sensible filtering settings, and I'm pretty sure that the effects of any state changes are negligible. I've heard that rendering 1 oversized triangle instead of 2 can yield some improvements, but I'm not sure if that will help me in this situation. Basically, does anyone have any suggestions to speed up rendering of a textured quad?"} {"_id": 40, "text": "Manually writing a dx11 tessellation shader I am looking for resources on what the steps are for manually implementing tessellation (I happen to be using Unity CG, but any help is appreciated). Today it seems that it is all the rage to hide most of the GPU code far away and use rather rigid simplifications such as Unity's SURFace shaders. And it seems useless unless you're doing superficial stuff. A little background: I have procedurally generated meshes (using marching cubes) which have quality normals but no UVs and no tangents. I have successfully written a custom vertex and fragment shader to do triplanar texture and bumpmap projection as well as some custom stuff (custom lighting, procedurally warping the texture for variation etc). I am using the GPU Gems book as reference.
Now I need to implement tessellation, but it seems I must calculate the tangents at runtime by swizzling normals (ctrl+f this in Gems: <normal.z, normal.y, normal.x>) before the tessellator gets them (during some sort of per-triangle geometry pass, which comes before vert and frag). And I also need to keep my custom vert/frag setup (with my custom parameters/textures being passed between them), so apparently I cannot use surface shaders. Can anyone provide some guidance?"} {"_id": 40, "text": "The steps in implementing Bézier triangle patches What are the steps in creating Bézier triangle patches? What steps would you do in order to create this in DirectX 11? Say I just input 3 vertices and create a simple triangle. Is this enough? Or should I create a triangle with 9 vertices, all of them at different heights so it would make a bumpy triangle, then apply Bernstein's formulas and make them smooth, so I get a smooth triangle, not all bumpy. A triangle like: My book says: Research and implement Bézier triangle patches. Luna, Frank D. (2012 05 21). Introduction to 3D Game Programming with DirectX 11 (Kindle Location 11901). Mercury Learning and Information. Kindle Edition. So what are the steps you would do in order to accomplish it? Please no \"coulds\""} {"_id": 40, "text": "Making a parenting system Each Entity can have one parent and any number of children. They have a position vector and a quaternion orientation. I know that I can make objects look like they're in a hierarchy by multiplying parent transforms all the way to the root, but how do I get the canonical coordinates out of those transformations? I need to have the real position of every Entity in the world in order to update their bounding meshes.
I would like to keep my drawing code for everything as just this: XMMATRIX transform = XMMatrixRotationQuaternion(orientation); transform = XMMatrixMultiply(transform, XMMatrixTranslation(position)); transform = XMMatrixMultiply(world, transform); model->Draw(context, states, transform, view, projection); i.e. without relying on visual tricks to get the hierarchy looking right, so no multiplications with parent transforms while rendering. How do I properly change a child's position and orientation with respect to the parent?"} {"_id": 40, "text": "Implementing a Deferred Renderer (Basic Understanding) I am trying to implement a deferred renderer in Direct3D 11. I am fairly new to this. I already bought a book, Practical Rendering & Computation with Direct3D 11. However, this book doesn't answer many of my questions. The book just says \"Call one of the Draw Commands to execute the Pipeline\". In the context of a deferred renderer, I would like to know how I can actually render the different GBuffers, merge them and apply actual lighting to my scene. Let's say my GBuffers should represent Diffuse, Specular and Normals. I understand that vertex shaders have constant buffers that represent my camera through matrices. Vertices get transformed in shaders into view space. How do I get my Diffuse/Specular/Normal information out of that? Do I have to execute the rendering pipeline for every GBuffer? Technically, do I just need to transform my vertices once in a VS and then execute my different GBuffer PS? The context object offers functions like OMSetRenderTargets. The Output Merger, however, is the last stage of the pipeline, not the first... The book itself just calls Present(0,0) exactly once and doesn't explain how you actually put things together. Sorry, quite a lot of different questions :("} {"_id": 40, "text": "Single pass separable gaussian blur problem I created a single pass gaussian blur using an HLSL compute shader.
I also want it to be separable, which means that first I perform the blur along the horizontal direction, write out the result to the texture, then perform the vertical blur with the horizontally blurred data. I do this by creating DeviceMemoryBarriers before and after writing out the blur results to the globallycoherent Texture2D. This is my shader: Texture2D<float4> input : register(t0); globallycoherent RWTexture2D<float4> input_output : register(u0); // Note: Shader requires feature: Typed UAV additional format loads! [numthreads(16, 16, 1)] void main(uint3 DTid : SV_DispatchThreadID) { // Query the texture dimensions (width, height) uint2 dim; input_output.GetDimensions(dim.x, dim.y); // Determine if the thread is alive (it is alive when the dispatch thread ID can directly index a pixel) if (DTid.x < dim.x && DTid.y < dim.y) { // Do bilinear downsampling first and write it out input_output[DTid.xy] = input.SampleLevel(sampler_linear_clamp, ((float2)DTid + 0.5f) / (float2)dim, 0); DeviceMemoryBarrier(); uint i = 0; float4 sum = 0; // Gather samples in the X (horizontal) direction [unroll] for (i = 0; i < 9; ++i) { sum += input_output[DTid.xy + uint2(gaussianOffsets[i], 0)] * gaussianWeightsNormalized[i]; } // Write out the result of the horizontal blur DeviceMemoryBarrier(); input_output[DTid.xy] = sum; DeviceMemoryBarrier(); sum = 0; // Gather samples in the Y (vertical) direction [unroll] for (i = 0; i < 9; ++i) { sum += input_output[DTid.xy + uint2(0, gaussianOffsets[i])] * gaussianWeightsNormalized[i]; } // Write out the result of the vertical blur DeviceMemoryBarrier(); input_output[DTid.xy] = sum; } } The problem is that the result flickers a bit, and has some errors in the image, too. It seems a bit like a thread group can't see writes by other groups. But there is a globallycoherent modifier before the RWTexture2D, which should flush the entire resource so that writes are visible in every thread group (MSDN). Indeed, if I remove that modifier, then the flickering becomes a whole lot worse than if I leave it there.
Here is a screenshot of the problem (notice the lines on the windmill; it also flickers on the whole image from time to time, which is not visible in a still shot). Does anyone here have an idea what I can do about it? (PS: the blur is performed when creating mipmaps, so I very much want to avoid multiple passes because it is already one pass for each mip)"} {"_id": 40, "text": "Tessellation Texture Coordinates Firstly some info: I'm using DirectX 11, C++, and I'm a fairly good programmer but new to tessellation and not a master graphics programmer. I'm currently implementing a tessellation system for a terrain model, but I have reached a snag. My current system produces a terrain model from a height map complete with multiple texture coordinates, normals, binormals and tangents for rendering. Now when I was using a simple vertex and pixel shader combination everything worked perfectly, but since moving to include a hull and domain shader I'm slightly confused and getting strange results. My terrain is a high detail model but the textured results are very large patches of solid colour. My current setup passes the model data into the vertex shader, then through the hull into the domain and then finally into the pixel shader for use in rendering. My only thought is that in my hull shader I pass the information into the domain shader per patch, and this is producing the large areas of solid colour because each patch has identical information. Lighting and normal data are also slightly off, but not as visibly as texturing. Below is a copy of my hull shader that does not work correctly, because I think the way that I am passing the data through is incorrect. If anyone can help me out by suggesting an alternative way to get the required data into the pixel shader, or by showing me the correct way to handle the data in the hull shader, I'd be very thankful!
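The flicker described above is consistent with DeviceMemoryBarrier() only synchronising memory within a thread group's view, not ordering the whole dispatch: a truly separable blur needs the horizontal pass to finish completely before any thread starts the vertical pass. A small pure-CPU Python sketch of that two-buffer structure (illustrative only):

```python
import math

def gaussian_kernel(radius, sigma):
    # Normalised 1D Gaussian weights, like gaussianWeightsNormalized above
    weights = [math.exp(-(i * i) / (2.0 * sigma * sigma))
               for i in range(-radius, radius + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def separable_blur(image, radius=4, sigma=2.0):
    # The horizontal pass writes a separate temporary buffer; the vertical
    # pass reads it only after the whole horizontal pass has finished. On
    # the GPU that split is what two dispatches provide and a per-group
    # barrier cannot.
    k = gaussian_kernel(radius, sigma)
    h, w = len(image), len(image[0])
    clamp = lambda v, hi: max(0, min(hi, v))
    tmp = [[sum(image[y][clamp(x + o, w - 1)] * k[o + radius]
                for o in range(-radius, radius + 1))
            for x in range(w)] for y in range(h)]
    return [[sum(tmp[clamp(y + o, h - 1)][x] * k[o + radius]
                 for o in range(-radius, radius + 1))
             for x in range(w)] for y in range(h)]
```

On the GPU this corresponds to dispatching the horizontal pass into an intermediate texture, then dispatching the vertical pass, with the API's usual resource synchronisation between the two dispatches; since this runs per mip anyway, the extra dispatch per mip is the price of a correct result.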
cbuffer TessellationBuffer { float tessellationAmount; float3 padding; }; struct HullInputType { float3 position : POSITION; float2 tex : TEXCOORD0; float3 normal : NORMAL; float3 tangent : TANGENT; float3 binormal : BINORMAL; float2 tex2 : TEXCOORD1; }; struct ConstantOutputType { float edges[3] : SV_TessFactor; float inside : SV_InsideTessFactor; }; struct HullOutputType { float3 position : POSITION; float2 tex : TEXCOORD0; float3 normal : NORMAL; float3 tangent : TANGENT; float3 binormal : BINORMAL; float2 tex2 : TEXCOORD1; float4 depthPosition : TEXCOORD2; }; ConstantOutputType ColorPatchConstantFunction(InputPatch<HullInputType, 3> inputPatch, uint patchId : SV_PrimitiveID) { ConstantOutputType output; output.edges[0] = tessellationAmount; output.edges[1] = tessellationAmount; output.edges[2] = tessellationAmount; output.inside = tessellationAmount; return output; } [domain(\"tri\")] [partitioning(\"integer\")] [outputtopology(\"triangle_cw\")] [outputcontrolpoints(3)] [patchconstantfunc(\"ColorPatchConstantFunction\")] HullOutputType ColorHullShader(InputPatch<HullInputType, 3> patch, uint pointId : SV_OutputControlPointID, uint patchId : SV_PrimitiveID) { HullOutputType output; output.position = patch[pointId].position; output.tex = patch[pointId].tex; output.tex2 = patch[pointId].tex2; output.normal = patch[pointId].normal; output.tangent = patch[pointId].tangent; output.binormal = patch[pointId].binormal; return output; } Edited to include the domain shader: [domain(\"tri\")] PixelInputType ColorDomainShader(ConstantOutputType input, float3 uvwCoord : SV_DomainLocation, const OutputPatch<HullOutputType, 3> patch) { float3 vertexPosition; PixelInputType output; // Determine the position of the new vertex.
vertexPosition = uvwCoord.x * patch[0].position + uvwCoord.y * patch[1].position + uvwCoord.z * patch[2].position; output.position = mul(float4(vertexPosition, 1.0f), worldMatrix); output.position = mul(output.position, viewMatrix); output.position = mul(output.position, projectionMatrix); output.depthPosition = output.position; output.tex = patch[0].tex; output.tex2 = patch[0].tex2; output.normal = patch[0].normal; output.tangent = patch[0].tangent; output.binormal = patch[0].binormal; return output; }"} {"_id": 41, "text": "What is a good approach for making a relationship between the HUD and Environment? What is an elegant, common way to build a relationship between the game scene environment world and the HUD that usually sits on top of it? I have thought about this for a while, and though there are ways to accomplish this that are easy to implement, they always seem to expose too much unrelated information between the two, or end up being too coupled. An example of this that I have used in the past would be to simply pass a reference from the HUD to the Environment and vice versa (which is obviously not a good approach for many reasons, especially when aiming for low coupling). What I want to end up with is a clean, uncoupled way to do things like: from the HUD, reference and manipulate objects within the environment, and change the state of the Environment itself; from the environment, update the HUD to reflect the current state of the environment. Maybe passing references between the two is the way to go about this, but I feel like there's some pattern that I am missing for this type of task. Any insights into past experience and implementations concerning this type of problem would be very helpful."} {"_id": 41, "text": "How can I output multiple sprite sheets from a single .fla? I have to produce three sprite sheets of differing sizes. To do so, I have three different .fla source files.
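A likely cause of the solid-colour patches in the shaders above is that the domain shader barycentrically interpolates only the position with uvwCoord while copying every other attribute from patch[0]. The same blend should apply to the UVs, normals and basis vectors; sketched in Python (illustrative, not HLSL):

```python
def interpolate_attribute(a0, a1, a2, uvw):
    # Barycentric blend, mirroring what the domain shader already does for
    # vertexPosition: attr = u * patch[0] + v * patch[1] + w * patch[2]
    u, v, w = uvw
    return tuple(u * x0 + v * x1 + w * x2 for x0, x1, x2 in zip(a0, a1, a2))
```

Applying this to output.tex, output.tex2, output.normal, output.tangent and output.binormal (renormalising the direction vectors afterwards) would give each generated vertex its own texture coordinates instead of the patch corner's, which also explains the slightly-off lighting.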
Obviously maintaining three files takes more time than maintaining one. Is there some way that I can use one .fla and produce three differently sized sprite sheets?"} {"_id": 41, "text": "Using Actionscript 3's Graphics API in Flixel My question sounds pretty simple but awfully, I couldn't find much information on the internet. How can I draw a circle in my FlxSprite? I couldn't find much information about drawing geometric objects or the usage of the classic graphics API in Flixel."} {"_id": 41, "text": "Drawing simulator Tracking mouse movement I was wondering what would be the logic behind coding a drawing simulator like the one on this website https www.mdbg.net chindict chindict.php?page chardict amp cdcanoce 0 amp cdqchi The paintbrush tool: when you click it and start drawing strokes that are similar to what a character is, it will automatically bring up a list of characters that leads toward what you are drawing. Once you are finished, the correct character will show up. My idea is that a set of coordinates is defined already, and once the mouse coordinates pass through these points, it will detect and provide all the possibilities and narrow them down even further. I think this is very complex, but what would be the right path or correct way to go about programming this concept?"} {"_id": 41, "text": "Perfect fit for isometric tiles in AS3 with BitmapData I'm creating a game which uses 320x160 sized isometric tiles. I've got an editor that allows me to take loaded in tiles and plot them. The map size is 8x8, and instead of placing down 64 movieclips and then moving them around, I thought I'd use the bitmapData.draw function to plot these mc's directly to a bitmap (as I'm not doing anything with them after they are put down anyway). That works fine, apart from a small line appearing between the tiles, along the edges.
I think this has something to do with anti-aliasing, because when I export the tile as a .png with no smoothing and use that tile in my editor, there's no faint line between the tiles, but that also makes the tile look pixelated overall, so I'm at a bit of a loss. I want the edges to fit perfectly together, so they need to be pixel perfect, but I want the interior of the tiles to be smoothed. Any ideas? Unless there's a way to solve this with the bitmapData approach itself? The only thing I can think of right now is to make the tiles slightly bigger than they need to be so they overlap slightly, but that's a bit of a fudge, which I want to avoid if possible."} {"_id": 41, "text": "Animating Tile with Blitting taking up Memory I am trying to animate a specific tile in my 2d Array, using blitting. The animation consists of three different 16x16 sprites in a tilesheet. Now that works perfectly with the code below. BUT it's causing memory leakage. Every second the Flash Player is taking up 140 kb more in memory.
What part of the following code could possibly cause the leak? The variable rect finds where on the 2d array we should clear the pixels; fillRect follows up by setting alpha 0 at that spot before we copy in the next sprite; tileType is a variable that holds what kind of tile the next tile in the animation is (from the tilesheet); drawTile() gets the sprite from the tilesheet and copyPixels it into the right position on the canvas. public function animateSprite():void { tileGround.bitmapData.lock(); if (anmArray[0].tileType > 42) { anmArray[0].tileType = 40; frameCount = 0; } var rect:Rectangle = new Rectangle(anmArray[0].xtile * ts, anmArray[0].ytile * ts, ts, ts); tileGround.bitmapData.fillRect(rect, 0); anmArray[0].tileType = 40 + frameCount; drawTile(anmArray[0].tileType, anmArray[0].xtile, anmArray[0].ytile); frameCount++; tileGround.bitmapData.unlock(); } public function drawTile(spriteType:int, xt:int, yt:int):void { var tileSprite:Bitmap = getImageFromSheet(spriteType, ts); var rec:Rectangle = new Rectangle(0, 0, ts, ts); var pt:Point = new Point(xt * ts, yt * ts); tileGround.bitmapData.copyPixels(tileSprite.bitmapData, rec, pt, null, null, true); } public function getImageFromSheet(spriteType:int, size:int):Bitmap { var sheetColumns:int = tSheet.width / ts; var col:int = spriteType % sheetColumns; var row:int = Math.floor(spriteType / sheetColumns); var rec:Rectangle = new Rectangle(col * ts, row * ts, size, size); var pt:Point = new Point(0, 0); var correctTile:Bitmap = new Bitmap(new BitmapData(size, size, false, 0)); correctTile.bitmapData.copyPixels(tSheet, rec, pt, null, null, true); return correctTile; }"} {"_id": 41, "text": "Workflow with Flash Pro CS6 and FlashDevelop Using fla and swc to store assets I am using this tutorial http www.flashdevelop.org wikidocs index.php?title AS3 FlexAndFlashCS3Workflow In past (older) versions of Flash Pro I was able to complete these steps: right click on the symbol in the Library panel, select \"Linkage...\" dialog, check \"Export for ActionScript\" and fill in the symbol name (ie.
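A plausible culprit for the growing memory in the blitting question above is that getImageFromSheet allocates a fresh Bitmap and BitmapData on every animation tick. A common fix is to cut each tile from the sheet once and reuse it; the caching shape, sketched in Python (names illustrative, not AS3 API):

```python
class TileCache:
    """Cache decoded tiles by sprite index so per-frame animation reuses one
    object instead of allocating a new bitmap every tick (the suspected
    source of the steadily growing memory in the code above)."""

    def __init__(self, loader):
        self._loader = loader   # function: sprite_type -> tile object
        self._tiles = {}
        self.misses = 0         # how many times we actually had to load

    def get(self, sprite_type):
        if sprite_type not in self._tiles:
            self._tiles[sprite_type] = self._loader(sprite_type)
            self.misses += 1
        return self._tiles[sprite_type]
```

In the AS3 version this would mean calling getImageFromSheet at most once per tileType, storing the results in a Dictionary or Array, and having drawTile read from that cache; with only three animation frames the cache stays tiny while the per-second allocations disappear.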
MySymbol_design or assets.MySymbol_design), do not change the base class (i.e. flash.display.MovieClip). Right now, I am stuck at that part. Any hints? What I wish to do is: use the fla for the artist to store assets; publish to a swc; extract the assets in FlashDevelop by creating an instance of their class. ... How is this done in CS6? To clear things up, this is what I see when I right click a Flash symbol"} {"_id": 41, "text": "Flash saves in Windows, not in Linux, FileReference.save() The code below compiles fine with the Flex 4 SDK on Fedora 15. Mouse click opens the dialog box, I click okay, and a file is saved, but the file is empty. I run the same SWF file (that was compiled on the Linux machine) on a Windows machine, and the created file contains the expected data. Then I broke the FileReference declaration out of the function into the class level, hoping to avoid a known bug and to keep a reference to the file alive outside the function. However, the same problem persists. Hoping to set up a workaround, I added the debug Flash player to my path and ran the file from Flash without the benefit of the browser, and it works. So now a Flex problem has become a Firefox problem, maybe owing to a shady procedure I used to install the plugin without really understanding what was happening. I am running Firefox 5.0. In essence my workflow is fixed, but perhaps people who performed the above will not be able to use projects with FileReference.save()? Should I be worried about this edge case?
WriteTheFile.as // Original code by Brian Hodge. Test to see if ActionScript Flash can write files. package { import flash.display.Sprite; import flash.events.Event; import flash.events.MouseEvent; import flash.utils.ByteArray; import flash.net.FileReference; public class WriteTheFile extends Sprite { private var xml:String; private var fr:FileReference; public function WriteTheFile():void { if (stage) init(); else addEventListener(Event.ADDED_TO_STAGE, init); } private function init(e:Event = null):void { removeEventListener(Event.ADDED_TO_STAGE, init); // Calling the save method requires user interaction and Flash Player 10 stage.addEventListener(MouseEvent.MOUSE_DOWN, onMouseDown); } private function onMouseDown(e:MouseEvent):void { fr = new FileReference(); fr.save(\"<xml><test>data</test></xml>\", \"filename.txt\"); } } } EDIT addressed answer in code above, but the same problem exists. EDIT This works on the same system when the standalone player is invoked. Therefore this is a browser (FF 5.0) plugin problem."} {"_id": 41, "text": "Best way to create neon glow line effect in AS3? What's the best way to create a neon glow line effect in Flash AS3? Say similar to what's going on in the game gravitron360 on the Xbox 360? Would it be a good idea to create movieclips with plain lines drawn in them and then apply a glow filter to them? Or perhaps just apply the glow filter to the entire movieclip layer the movieclips are on? Or just draw them manually and create a glow effect by converting the lines to fills and then softening edges? (wouldn't blend as well but would be the fastest CPU wise?) Thanks for any help"} {"_id": 42, "text": "Dragging and dropping sword in unreal engine 4 I am developing a 2d game using Unreal Engine 4 in which the user drags and drops a sword. I want the sword to follow the position of my finger. At the moment, when I drag the sword, it moves at an offset from my finger. The game has been packaged to an iPad and I have set up the functionality using blueprints.
Pseudo code: On Drag(): sword position on screen = finger position. On Drop(): position of sword = finger position on release from screen. How do I make the sword follow the position of my finger as I drag it?"} {"_id": 42, "text": "How to create opacity mask from colored bitmap? For the image (bitmap) below, if I do: Create a sprite with the image in Unreal Engine. Create an opacity mask on the sprite. Use this image, choose a specific color, such as blue, and make its corresponding region the only opacity region on the mask. My question is about the implementation of parts 2 and 3 in Unreal Engine; they are quite confusing. If anyone knows how to implement it in Unity, it would also be helpful. Thanks a lot."} {"_id": 42, "text": "How can I set my non convex object's collision to a more detailed one to allow other objects to go underneath? My problem is simple. I imported a 3D object (it's not convex) and I want the character to go under its legs. But this does not happen. There seems to be a way to do this, I just do not know which one. Vision of what would be a complex collision I've already changed all these options, but it did not work out. The closest I got was when I saw something about adding a physical material. But when I try, only a circle appears, and I still can not add one exactly equal to the Character. Links where I've already looked for help https docs.unrealengine.com en us Engine Content FBX StaticMeshes collision https docs.unrealengine.com en us Engine Physics PhysicalMaterials PhysMatUserGuide https forums.unrealengine.com development discussion content creation 18509 how can I make this have a more precise collision box"} {"_id": 42, "text": "Dynamic character in UE4 I've been creating simple characters in Blender for a while, and now I want to add dynamic details to them. I managed to make a cloak using UE4 cloth physics. However I have trouble with smaller things. As English isn't my primary language, I don't even know how to name this issue properly.
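For the sword-drag question above, the visible offset usually appears because the sprite is snapped to the raw touch position measured in a different space, or because the grab point is ignored. One common approach is to record the sword-to-finger offset when the drag starts and re-apply it every frame; a minimal sketch in Python (names illustrative, not Blueprint nodes):

```python
class DragController:
    """Keep a dragged object glued to the finger by recording the offset
    between the object and the touch point at the moment the drag starts."""

    def __init__(self, object_pos):
        self.object_pos = object_pos
        self._offset = (0.0, 0.0)
        self.dragging = False

    def begin_drag(self, finger_pos):
        # Remember where on the object the finger grabbed it
        self._offset = (self.object_pos[0] - finger_pos[0],
                        self.object_pos[1] - finger_pos[1])
        self.dragging = True

    def update(self, finger_pos):
        # Re-apply the grab offset every frame while dragging
        if self.dragging:
            self.object_pos = (finger_pos[0] + self._offset[0],
                               finger_pos[1] + self._offset[1])

    def end_drag(self):
        self.dragging = False
```

Setting the offset to (0, 0) in begin_drag would instead snap the sword's pivot directly under the finger; either way, make sure the finger position is converted into the same coordinate space (screen vs. world) the sword's position uses.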
Examples of things I want to achieve: a simple polygon ponytail that bounces while running; edges of clothes (like shorts) that are loose and also move while running; accessories like bracelets or dangling keys on wrists or hanging from clothes, which also move a bit while walking. Is this something I do in Blender and export, or is it something I add in Unreal? What should I look for? EDIT So far I found this thanks to a comment https docs.unrealengine.com en US Engine Animation NodeReference SkeletalControls AnimDynamics index.html But if anyone else has suggestions, please keep them coming."} {"_id": 42, "text": "Switch from left click to right click to make character walk in the Top Down Example, in UE4? When creating a new project in the Unreal Engine 4, you can choose to create a project already with some added features and things (Top Down Example). This project has a character (whose appearance I changed) and it moves around the map when I click the left mouse button. I looked at the blueprints of the project, but I did not identify where the command is that makes the character walk, so I'd like it to walk when I right click. All blueprints of the project: M_Cursor_Decal, TopDownCharacter, TopDownController, TopDownGameMode. The blocks circled in red are the ones I suspected could be where I should make the modification, but no information on them gave me the certainty that that was where I was supposed to make the change. I still do not create projects in C++ programming mode, because I do not have Visual Studio 2017. Some links from websites where I looked before coming here and that can help you to help me: https docs.unrealengine.com en us Engine Blueprints https www.youtube.com watch?v EFXMW UEDco The video is from 2015, so although it's good, I did not take it so well, because I know that with all this time that has passed, a lot is different.
Sorry for any translation errors, I'm not a native English speaker."} {"_id": 42, "text": "UE Sequencer Foot slide with character movement What is the best way to implement the character movement in the Sequencer so there is no foot slide? I would go with Root Motion, but the animations I have are in place. Thank you"} {"_id": 42, "text": "How to create smoke that spreads outward in all directions? I saw many tutorials about smoke grenades, but they just use volumetric smoke. I want to make something like the smoke grenade in PUBG, where the smoke spreads in all directions. How do I do that? The Unreal Engine 4 smoke grenade throws out smoke in a particular direction, like this. But I want something more like PUBG's that spreads smoke evenly in all directions"} {"_id": 42, "text": "What do you do with \"Player Start\" and \"Pawn\" objects when setting up the default VR game in Unreal? I followed the setup tutorial provided by Unreal from here, but after setting up the pawn and the VR GameMode you don't do anything with them. They are never added to the scene or anything, and the default game comes with a \"Player Start.\" Is the \"Player Start\" object supposed to be removed? If you remove that, then running the VR Preview shows the game from wherever your camera is in the viewport and there is no game location. Adding the pawn to the scene doesn't seem to do anything. The final image they show doesn't have the \"Player Start\" object in the scene at all, and they never mention it, so I'm unclear about how to complete this process. Adding the pawn to the scene seems to have no noticeable effect on how the game is run in VR Preview. What's the proper last step? After following the instructions exactly as is, the floor is way below the current start point and modifying the Z values as instructed has no effect on the floor.
Moving the \"Player Start\" does, but this could have been done from the very beginning without doing any of the setup with the pawn or the GameMode."} {"_id": 42, "text": "How do I expose a variable from a class in Unreal Engine 4 to blueprints? I would like to expose to blueprints the variable CurrentTouchInterface in the class APlayerController, just like the method APlayerController ActivateTouchInterface. So, I'm guessing I have to change the header file PlayerController.h from this The currently set touch interface UPROPERTY() class UTouchInterface CurrentTouchInterface To something like this The currently set touch interface UPROPERTY(BlueprintReadOnly, Category \"Game Player\") class UTouchInterface CurrentTouchInterface The point is, what do I need to do for the engine's editor to start showing the exposed variable CurrentTouchInterface in blueprints? I'm working with UE 4.9 downloaded from the Epic Games Launcher application."} {"_id": 42, "text": "Modular weapon in UE4 editor I want to create a modular weapon system. I already got most of it working, but I wonder how to make an improvement. Weapons in my game have different firing modes (semi and auto) and \"bullet\" modes (projectile and hitscan). Each weapon is it's own blueprint, and there I can select proper one in editor, so for example an Uzi would be auto histcan, while a rocket launcher would be semi projectile. (the firing mode actor gets created at spawn from the class selected in editor). I want to make it easier to edit by giving the weapon some variables, like spread and projectile blueprint. But not all of the variables apply to all the types of weapon... TLDR Can I create show different variables in the editor, depending on other variables? If I choose projectile type of weapon, can I also select the projectile type, but not show the option if I chose hitscan?"} {"_id": 43, "text": "What interchange format should I use for e.g. subtitles and UI texts? 
Are there any standard formats that can be used as the source material for game texts (i.e. subtitles, UI texts, etc.)? I can think of the following constraints: at least some style effects (e.g. bold, italics); at least some format string support (e.g. \"you found %d gold\", \"you found {count} gold\"); be translator friendly, either easy to grasp or standard enough that translation companies may already have experience with it. I know about the following, but none of them appear to support format strings. Are there any industry standards for that, or does every game or engine roll its own format? UE4 uses a custom rich text format (example: Hello <RichText.Emphasis>everyone</>!). Unity uses its own HTML like format (example: Hello <b>everyone</b>!). The srt format uses HTML like syntax, too (example: Hello <b>everyone</b>!). BBCode is another popular format (example: Hello [b]everyone[/b]!)."} {"_id": 43, "text": "zoom to cursor calculation I want to be able to zoom in and out of the map using the scroll wheel. I want to zoom towards the cursor like Google Maps does, but I'm completely lost on how to calculate the movements. So far, all I have is the resizing, but I now need to change the map position. What I have: map x and y, map width and height, cursor x and y. Any help would be most welcome."} {"_id": 43, "text": "Mapping decibel range to linear audible intervals I am working in an engine that encodes sound as decibels (dB). Let's assume the decibel range of human hearing is -60dB to 0dB. Audibility is not linear across this range. So my question is... How does one map a decibel range to a set of numbers (for example, 1 to 10), such that each number corresponds to (roughly) one linear step up in audibility?"} {"_id": 43, "text": "What is a GUI element with a filling up rectangle shape called? After every game of StarCraft 1, the statistics screen shows up. The part I like on this screen is the points counter.
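For the zoom-to-cursor question above, the usual trick is to solve for the new map offset that keeps the world point under the cursor fixed across the zoom change; a sketch in Python (assuming the mapping screen = map_pos + world * zoom, which matches having a map x/y plus a scale):

```python
def zoom_at_cursor(map_pos, zoom, cursor, new_zoom):
    """Return the new map top-left offset so the world point that was under
    the cursor stays under the cursor after the zoom changes.

    screen = map_pos + world * zoom   =>   world = (cursor - map_pos) / zoom
    """
    wx = (cursor[0] - map_pos[0]) / zoom
    wy = (cursor[1] - map_pos[1]) / zoom
    # Place that same world point back under the cursor at the new zoom
    return (cursor[0] - wx * new_zoom, cursor[1] - wy * new_zoom)
```

Hook this to the scroll wheel: multiply the zoom by a step factor (e.g. 1.1 per notch), call zoom_at_cursor with the old and new zoom, and apply both the new zoom and the returned offset in the same frame.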
It starts from zero and then counts up to the points you have achieved, with a filling up rectangle shape based on the number it represents. What is this type of counter called? For example, the Units bar for Alpha Squadron counts from 0 to 4975."} {"_id": 43, "text": "What is the specific name of this UI component? Is there a specific name for this kind of UI component? Sometimes it only renders a symbol (without text), so using a name like TextComponent is IMO not very fitting."} {"_id": 43, "text": "Immediate GUI yae or nay? I've been working on application development with a lot of \"retained\" GUI systems (below more about what I mean by that) like MFC, QT, Forms, SWING and several web GUI frameworks some years ago. I always found the concepts of most GUI systems overly complicated and clumsy. The amount of callback events, listeners, data copies, something-to-string-to-something conversions (and so on) was always a source of mistakes and headaches compared to other parts in the application (even with \"proper\" use of data bindings and models). Now I am writing computer games ). I have worked with one GUI so far: Miyagi (not well known, but basically the same idea as all the other systems). It was horrible. For real time rendering environments like games, I get the feeling that \"retained\" GUI systems are even more obsolete. User interfaces usually don't need to be auto layouted or have windows resizable on the fly. Instead, they need to interact very efficiently with constantly changing data (like 3d positions of models in the world). A couple of years ago, I stumbled upon \"IMGUI\", which is basically like an immediate graphics mode, but for user interfaces. I didn't give it too much attention, since I was still in application development and the IMGUI scene itself seemed to be neither really broad nor successful.
Still, the approach they take seems to be so utterly sexy and elegant that it made me want to write something for the next project using this way of UI (I failed to convince anyone at work (...)). Let me summarize what I mean by \"retained\" and \"immediate\": Retained GUI: In a separate initialization phase, you create \"GUI controls\" like Labels, Buttons, TextBoxes etc. and use some descriptive (or programmatic) way of placing them on screen, all before anything is rendered. Controls hold most of their own state in memory, like X,Y location, size, borders, child controls, label text, images and so on. You can add callbacks and listeners to get informed of events and to update data in the GUI control. Immediate GUI: The GUI library consists of one shot \"RenderButton\", \"RenderLabel\", \"RenderTextBox\"... functions (edit: don't get confused by the Render prefix. These functions also do the logic behind the controls, like polling user input, inserting characters, handling character repeat speed when the user holds down a key and so on...) that you can call to \"immediately\" render a control (it doesn't have to be immediately written to the GPU. Usually it's remembered for the current frame and sorted into appropriate batches later). The library does not hold any \"state\" for these. If you want to hide a button... just don't call the RenderButton function. All RenderXXX functions that have user interaction, like buttons or checkboxes, have return values that indicate whether e.g. the user clicked the button. So your \"RenderGUI\" function looks like a big if/else function where you call or don't call your RenderXXX functions depending on your game state, and all the data update logic (when a button is pressed) is intermingled into the flow. All data storage is \"outside\" the gui and passed on demand to the Render functions. (Of course, you would split up the big functions into several ones or use some class abstractions for grouping parts of the gui.
We don't write code like in 1980 anymore, do we? )) Now I found that Unity3D actually uses the very same basic approach for their built in GUI system. There are probably a couple of GUIs with this approach out there as well? Still, when looking around, there seems to be a strong bias towards retained GUI systems. At least I haven't found this approach except in Unity3D, and the original IMGUI community seems to be rather... quiet. So, has anyone worked with both ideas and formed a strong opinion?"} {"_id": 43, "text": "AntTweakBar doesn't register SFML mouse events I'm trying to add a GUI for easier level editing in our game engine. We're using SFML for all the basic stuff (window management, input events etc). I've chosen AntTweakBar because it is a well known library with a few examples around. I was following the tutorial at AntTweakBar's website, and I was able to draw a simple bar with those example codes. However, mouse events received by SFML are not registered by AntTweakBar's TwEventSFML() function.
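To make the immediate-mode idea above concrete, here is a toy sketch in Python of the pattern the question describes: widgets are plain function calls, the library keeps no per-widget objects between frames, and a button reports a click through its return value (all names are illustrative; this is not any real IMGUI API).

```python
class ImGui:
    """Toy immediate-mode GUI core: no retained widget tree, only the
    current frame's input state and a draw list rebuilt every frame."""

    def __init__(self):
        self.mouse_pos = (0, 0)
        self.mouse_clicked = False
        self.draw_calls = []    # stand-in for the frame's render batch

    def begin_frame(self, mouse_pos, mouse_clicked):
        self.mouse_pos = mouse_pos
        self.mouse_clicked = mouse_clicked
        self.draw_calls = []    # everything is re-emitted each frame

    def button(self, label, x, y, w, h):
        # Emit the draw command and immediately answer "was I clicked?"
        self.draw_calls.append(("button", label, x, y, w, h))
        mx, my = self.mouse_pos
        hovered = x <= mx < x + w and y <= my < y + h
        return hovered and self.mouse_clicked
```

Usage reads exactly like the question's description: `if ui.button("Fire", 10, 10, 80, 20): fire()` — hiding the button is simply not calling the function that frame, and all game data stays outside the GUI.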
Here is an example code for input: sf::Event event; while (pWindow->pollEvent(event)) { // Check if the event should be handled by AntTweakBar int handled = TwEventSFML(&event, 2, 3); // for SFML version 2.3 if (!handled) { switch (event.type) { case sf::Event::MouseButtonPressed: // To check whether SFML received mouse button events properly if (event.mouseButton.button == sf::Mouse::Button::Left) { std::cout << \"Left button pressed\" << std::endl; std::cout << \"x \" << event.mouseButton.x << std::endl; std::cout << \"y \" << event.mouseButton.y << std::endl; } } } else { // To check whether TwEventSFML received events std::cout << \"FINALLY!\" << std::endl; } } When I press buttons, I can see \"FINALLY!\" showing up. I can also see that my mouse clicks are received by SFML. However, when I click on an AntTweakBar element (be it a button or the help section) it doesn't register it. (Also, I can't see \"FINALLY!\" when I use the mouse). Any help or ideas will be appreciated."} {"_id": 43, "text": "Interfaces 101 Making it Pretty I've made a few games which I've actually released into the wild. There's one particular issue I run into over and over, and that's the issue of the interface theme of the game. How can you make non-arbitrary, consistent decisions about the look and feel and interface of your game? For example, in version one of my Silverlight chemistry-based game, I made a (bad?) decision to go with a natural, landscape-style look and feel, which resulted in this UI Then in a later iteration, I got smarter and went with a more \"machiney,\" grungy look and feel, which resulted in this (final UI) I will definitely say that my taste in UI improved through iteration. But it was a result of a lot of initially arbitrary decisions followed by a lot of tweaking. Is there a better way to figure out how to theme your game? To give a parallel in writing, when you're writing a fantasy sci-fi novel, there are a lot of elements you need to describe.
While you can arbitrarily invent objects, creatures, places etc., you get a much more consistent world when you sit down for a few minutes and design the area, region, planet, or universe. Everything then fits together nicely, and you can ask yourself \"how would this work in this universe?\" Edit It seems like I didn't explain this well. Let me simplify the question in the extreme when I need to lay out a title screen (with background, fonts, skinned buttons) how do I decide how they should look? Green or blue? Grunge or not? Rounded or flat? Serif or sans serif? Take that question and explode it into your game as a whole. How do you figure out how things should look? What process do you use to make them consistent and non-arbitrary? Look at the screenshots again. I could have stuck to grass sky rocks, but metal seemed more fitting to the idea of chemical reactions and atoms."} {"_id": 43, "text": "How can I model a \"weapon overheating\" mechanic? I want to create a weapon overheating system very similar to the plasma rifle in Halo. You can watch a video of the plasma rifle firing. What I want to do is to create a flexible logic that can be used for multiple weapons for different in-game feelings. Here is my proposed system Create a bar that goes from 0 to 100. Each bullet has a heat value as an integer. Every bullet adds this value to the bar. The bar has a cooldown set as some value per second. If the bar goes over 100 with any bullet fired, the bullet still shoots but then the weapon is deactivated for a period defined as its \"cooldown.\" Now this system is very simple, but visually it will be very linear, with a simple growth and decay. I wanted to try to create a more exponential system, so that the bar would jump quickly to the middle, then hover near the hot top of the bar for a long period, to create a sense of anticipation just before it overheats.
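The mechanic described above can be prototyped outside any engine. Below is a minimal Python sketch (all names and tuning constants are mine, not from the question): the heat model stays linear, and only the displayed bar is run through an ease-out curve, which makes the bar leap toward the top early and then creep, giving the anticipation effect without touching the gameplay math.

```python
class HeatBar:
    """Linear heat model with a nonlinear display curve (illustrative sketch)."""

    def __init__(self, cooldown_per_sec=25.0, overheat_lockout=3.0):
        self.heat = 0.0                      # internal heat, 0..100, grows linearly
        self.cooldown_per_sec = cooldown_per_sec
        self.lockout = 0.0                   # seconds of forced cooldown remaining
        self.overheat_lockout = overheat_lockout

    def fire(self, heat_value):
        """Try to fire one bullet; returns False while the weapon is locked out."""
        if self.lockout > 0:
            return False
        self.heat += heat_value              # the bullet that crosses 100 still fires
        if self.heat >= 100.0:
            self.lockout = self.overheat_lockout
        return True

    def update(self, dt):
        if self.lockout > 0:
            self.lockout = max(0.0, self.lockout - dt)
            if self.lockout == 0:
                self.heat = 0.0              # fully cooled after the lockout
        else:
            self.heat = max(0.0, self.heat - self.cooldown_per_sec * dt)

    def display_fraction(self):
        # Ease-out curve: jumps quickly toward the top, then hovers there,
        # giving the "anticipation" feel without changing the game logic.
        t = min(self.heat, 100.0) / 100.0
        return 1.0 - (1.0 - t) ** 2.5        # exponent 2.5 is an arbitrary tuning value
```

With an exponent of 2.5, a half-full internal heat bar already displays as roughly 82% full; raising the exponent pushes the hover point even closer to the top.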
What would be a good formula to achieve that result?"} {"_id": 43, "text": "zoom to cursor calculation I want to be able to zoom in and out of the map using the scroll wheel. I want to zoom towards the cursor like Google Maps does, but I'm completely lost on how to calculate the movements. So far, all I have is the resizing, but I now need to change the map position. What I have: map x and y, map width and height, cursor x and y. Any help would be most welcome."} {"_id": 44, "text": "Godot Get button node in click event not working How can we get the button reference in the event function that is connected? My connect signals never work extends PanelContainer onready var hbox_container = $ScrollContainer/HBoxContainer var btn func _ready(): createbutton() func createbutton(): btn = Button.new() btn.set_name(\"button 1\") btn.text = \"button 1\" hbox_container.add_child(btn) btn.connect(\"toggled\", self, \"button_toggled\", [btn]) func button_toggled(toggled, target): print(\"which button \", target.get_name()) if toggled == true: print(\"Button is pressed\") else: print(\"Button is released\")"} {"_id": 44, "text": "How to redraw the node canvas when the zoom changes on the Godot editor? I have an addon that extends EditorPlugin and a companion script attached to a node. The companion script draws a rectangle with a size established by an exported property that can be edited from a gizmo on the editor. The general idea is that the plugin shows a handle that allows dragging the visual representation of the exported property, but it is the companion script that draws its representation on the editor. The only problem is that the node won't redraw itself when the zoom changes. Is there a way to fire a redraw event when the zoom changes instead of using _process(delta), since doing it every frame would increase CPU consumption dramatically for the Godot editor on a MacBook Air laptop.
This video shows the issue I'm describing: https://youtu.be/wv382kOCKHY"} {"_id": 44, "text": "Why isn't setting the global position working? Previously my items were relative to the player by node hierarchy. But for items like bombs this is not ideal, so I tried to add the items to the world scene, but when I try to adjust the position of the item to the player's position it doesn't work and I see the sword in the top-left corner of the screen, the world's origin. Entity Script func use_item(item): var newitem = item.instance() newitem.own = self newitem.global_position = global_position # not setting the position of the item correctly newitem.add_to_group(str(newitem.get_name(), self)) get_node(\"..\").add_child(newitem) if get_tree().get_nodes_in_group(str(newitem.get_name(), self)).size() > newitem.maxamount: newitem.queue_free() Sword Script func _ready(): type = own.type anim.connect(\"animation_finished\", self, \"destroy\") anim.play(str(\"swing\", own.spritedir)) if own.has_method(\"state_swing\"): own.state = \"swing\" func destroy(animation): if own.has_method(\"state_swing\"): own.state = \"default\" queue_free() It seems like it should work, but with an engine like Godot there could be several reasons why a certain thing is not working. https://github.com/MonkeyToiletLadder/wendingo Could it be that in my animation player I set the sword positions as key frames and that's why it's not working?"} {"_id": 44, "text": "How to get an autotile in Godot 3.1? I wonder if there's a way to get an autotile just like a single tile, using a get_autotile() GDScript function or something like that."} {"_id": 44, "text": "How should bullet reflections be implemented? I'm working on a tank game where bullets are reflected off of walls. The formula for a reflection is r = d - 2(d·n)n / ‖n‖², where d is the incoming vector and n is the normal of a wall. My bullets stick to the walls and then \"bounce\" off at the corners, an undesirable effect.
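The reflection formula quoted above can be sanity-checked numerically, independent of the engine. A small Python sketch (plain tuples instead of Godot's Vector2):

```python
def reflect(d, n):
    """Reflect an incoming vector d off a surface with normal n:
    r = d - 2 * (d . n) / (n . n) * n   (n does not need to be unit length)."""
    dot = d[0] * n[0] + d[1] * n[1]
    nn = n[0] * n[0] + n[1] * n[1]
    k = 2.0 * dot / nn
    return (d[0] - k * n[0], d[1] - k * n[1])

# A bullet moving down-right hits a floor whose normal points straight up:
print(reflect((1.0, -1.0), (0.0, 1.0)))  # -> (1.0, 1.0)
```

If a shot still sticks or double-flips at corners even though this math checks out, a common culprit is reflecting against two contact normals in the same frame rather than the formula itself.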
I'm honestly questioning my formula implementation at this point. func collision(): for i in get_slide_count(): var collision = get_slide_collision(i) if collision.collider.is_in_group(globals.wall_group): var normal = collision.normal velocity = velocity - ((2 * velocity.dot(normal)) / normal.length_squared()) * normal rotation = velocity.angle() + PI If I use the is_on_wall function and simply negate the velocity, this does produce a bouncing effect, but not a correct one. I still need the normal. I originally used rigid bodies, as seen in the image, but I have switched to kinematic bodies."} {"_id": 44, "text": "Select a block of tiles from tilemap by click and dragging I'm creating my first game in Godot 3.1, a top-down 2D sim game where you can build rooms on a space station. I am using a tileset I created inside a TileMap node. I placed a script inside the node that checks for the mouse position in the world, converts it into a tilemap coordinate, then loads a tile into that tilemap coordinate. func _input(event): if (event is InputEventMouse): var button = event.get_button_mask() if (button == 1): var tile_pos = world_to_map(event.position) var name = tileset.tile_get_name(0) set_cellv(tile_pos, tileset.find_tile_by_name(name)) Works like a champ! As long as the left mouse button is held down, any tile the mouse pointer touches is filled with the tile from the tileset. Now I am trying to expand this function so that I can click the world and drag across several tiles to form a rectangle or square, then have all of the tiles inside the selected area filled with the selected tile. func _input(event): if (event is InputEventMouse): var button = event.get_button_mask() if (button == 1): var tile_pos = world_to_map(event.position) if (dragStart == Vector2(-1, -1)): dragStart = tile_pos var name = tileset.tile_get_name(0) if (dragStart != Vector2(-1, -1) && dragStart != tile_pos): var newX = dragStart.x - tile_pos.x var newY = dragStart.y - tile_pos.y while (newX != 0 && newY !=
0): var next = Vector2(-1, -1) if newX > 0: next = Vector2(dragStart.x - 1, dragStart.y) newX = newX - 1 elif newX < 0: next = Vector2(dragStart.x + 1, dragStart.y) newX = newX + 1 elif newX == 0: if newY > 0: next = Vector2(dragStart.x, dragStart.y - 1) newY = newY - 1 elif newY < 0: next = Vector2(dragStart.x, dragStart.y + 1) newY = newY + 1 print(next) set_cellv(next, tileset.find_tile_by_name(name)) else: dragStart = Vector2(-1, -1) However, it seems to only load some of the tiles and functions sporadically. Am I not understanding how the system iterates through the while loop, or am I missing something else?"} {"_id": 44, "text": "How do I remove the window border in Godot? I want to show my Godot engine game in a borderless window, but not fullscreen. I want to just have the game view visible on screen with no title bar or window chrome."} {"_id": 44, "text": "Why does the kinematic body move in the opposite direction after I interpolate its movement, and how do I fix it? I have added some basic movement and some manual interpolation to a cube that is a kinematic body in Godot. extends KinematicBody var speed: int = 10 var slowdown_buffer = 0.2 var movement = Vector3(0,0,0) func _ready(): pass func interpolate(): if movement.x > 0: movement.x -= slowdown_buffer elif movement.x < 0: movement.x += slowdown_buffer else: movement.x = 0.0 func _physics_process(delta): if Input.is_action_pressed(\"left\"): movement.x = -speed elif Input.is_action_pressed(\"right\"): movement.x = speed else: interpolate() move_and_slide(movement) The problem is that when I move the cube using A and D, after the cube stops after the interpolation, it starts moving in the opposite direction with a non-increasing speed. How can I fix this?"} {"_id": 44, "text": "Godot Engine Why is baking light making my scene darker? I have an interior scene with some windows. The light is from an OmniLight near the ceiling, default environmental light from outside, and a desk lamp with emission.
Without baking light, the scene looks like this Consistent with the documentation for emission, the desk lamp is not affecting the surrounding objects. I want to bake the light to see the lamp's effect and to support low-end hardware. I followed the baked lightmaps tutorial and set the BakedLightmap's extents to encompass the entire room. After baking, the scene looks like this I can see the lamp's light, as expected, but now everything is too dark. It is unclear to me from Godot's documentation whether scenes include indirect light without baking. I have tried this with the OmniLight set to bake \"all\" and only indirect light. What am I missing here? UPDATE 10 April 2021 The problem seems to be that baking environmental light isn't working as expected. I was using Godot 3.2.3; now I'm using 3.3rc8. That gives the BakedLightmap options for environmental light, but the problem remains. I began troubleshooting by shuttering all the light sources as follows Hid the OmniLight, and set its \"Bake Mode\" property to \"disabled\" Unchecked \"use in baked light\" for the desk lamp Set the default environment's background and ambient energy to 0 Set the BakedLightmap's \"environment mode\" to \"Disabled\", verified \"Min Light\" is set to pure black, and cleared the \"Light Data\" When I run the game, I get exactly what I expect a black screen. If I bake light under these conditions, I continue to get a black screen, as expected. Here's where things get screwy. When I set the BakedLightmap's \"environment mode\" to \"Scene\" with the environment still without any light I get dim illumination."} {"_id": 44, "text": "How do I convert degrees to radians and vice versa in Godot? In a game I'm toying with in Godot engine I have angles given in degrees in some places and angles given in radians in other places. Sometimes I need to mix both angles (i.e. add them).
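For reference, the conversion itself is only a factor of π/180; Godot 3.x ships deg2rad()/rad2deg() built-ins that behave like the helpers below (worth confirming against the docs for your version). A quick Python sketch of what they have to do:

```python
import math

def deg_to_rad(degrees):
    # One full turn: 360 degrees == 2*pi radians, so scale by pi/180.
    return degrees * math.pi / 180.0

def rad_to_deg(radians):
    return radians * 180.0 / math.pi

# Mixing angles: convert to one unit before adding.
total = deg_to_rad(90.0) + math.pi / 4  # 90 deg + 45 deg, in radians
```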
What built-in function allows me to convert degrees to radians (and the other way around)? Or is there none? P.S. I realize that converting those is not rocket science and I could code something on my own, but Godot is already doing such conversions internally, so I would venture a guess that such a function already exists."} {"_id": 45, "text": "Elegantly transition from 4 grid to 6 grid and back I find in general map creation that I prefer hex grids for natural environments, but square grids for interior and urban constructed environments. Is there a smooth or elegant way to transition between these that doesn't require a scene break? Most importantly, is there an intuitive way? As long as they don't differ too vastly, the ratio of sizes between the two cell types is largely irrelevant to me. I'd also happily use several rows of transitional tiles if their borders can be procedurally determined. I also have no inherent preference for up-point or side-point hexes... I'd show some things I've already tried, but they are genuinely all varieties of the same eldritch mess..."} {"_id": 45, "text": "Can custom maps in Starcraft 2 automatically enable certain unit upgrades? If I create a custom map with the Starcraft 2 map editor, how can I specify that some unit upgrades are already researched? For example, if I want all the units to already have the weapons upgrade 1 at the beginning of the game, can I specify this in the editor? I've looked through the menus but didn't find anything that looked like it would relate to unit upgrades."} {"_id": 45, "text": "Good basemap generator I am currently making a board game and cannot seem to find any vector world map creators where I can select a region of the world to capture and have it download that section. I just want a base map, so no city names and preferably no borders. The map doesn't have to be vector if it is of good enough quality."} {"_id": 45, "text": "What are the most common ways of closing an open world?
In many open-world games, there are several ways to limit them, from original ones such as Desynchronisation (Assassin's Creed) to invisible walls. What are the most common ways to close open worlds?"} {"_id": 45, "text": "How can I handle a transition into building interiors? I am trying some stuff in Unreal Engine to make something approaching games like the Commandos series. But for now, I'm stuck with something I can't figure out how it is done: How are building interiors handled? Here's a screenshot to illustrate what I mean When you enter a building, everything becomes black, except the building's interior. The interior is isolated; there is no \"communication\" between the interior and the exterior, or between linked interiors. For example, if you shoot with a weapon and there is an enemy outside just behind the door, he can't hear you. Sometimes bigger buildings are divided into many parts, and each floor is a different \"section\" that is isolated from the others (you have a transition and can't communicate between them). Here is an image that explains it well Commandos 2 interior Teleport I first thought that you were teleported to a different portion of the map, or a different small map, but it is unlikely. If you look at a window, you are at the same time outside (you can see your character outside at the window) and inside (your character is leaning out the window). Teleporting would cause many sync issues. Section toggle Then, I thought that the interior is already handled in the exterior map, but completely hidden, and when you enter it, it \"deactivates\" the exterior by hiding everything outside and displaying the interior. Again, this is unlikely since maps are full 3D (you can rotate 360°) and exteriors are 2D. How can I create this kind of transition?
I've specified that I am on Unreal, but I'm searching more for the theoretical way rather than Unreal specifics."} {"_id": 45, "text": "how large should a map be in blender I'm learning Blender and am wondering what size (Blender units) I should be making my characters and maps. Should I consider 1 Blender unit as 1 inch, foot, or meter, or does it not matter?"} {"_id": 45, "text": "How to make a map (game world) based on a real world place or city? As a disclaimer before my real question, I have little experience in game development, only basic Unity knowledge. I noticed a lot of games are based on real-world places and cities, for example GTA 5 on Los Angeles, and Watch Dogs. What is the process of making such a virtual version of a real-world place or city? Or at least, what is a good starting point? I imagine you could get some sort of map data or satellite images and import these into a tool, from which you can start modelling the buildings etc. But if this is the case I don't know how to do that."} {"_id": 45, "text": "Where can I find real world map data for a game? I want to build a game with a concept similar to Ingress, where the game map is overlaid on a real one. Where can I find map data for an app like that? How (generally) do I integrate my own game objects into that map? Game objects need to move in real time, so having them as static markers on the map isn't sufficient. I thought I could use Google Maps, but as far as I can tell it would be impossible to create my own graphical style (like Ingress has done) with the official API provided by Google."} {"_id": 45, "text": "Fair spawning of players on a 2D grid As new players enter a game, they are given a location on a large grid. The first player is given the location in the center. Now, how should the second player be placed? Then, the third and so on? New players should be placed reasonably near some other players, but not so close as to cause bunching.
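One way to realize that placement rule is to walk the grid cells ring by ring around the center and take the first free cell. This Python sketch is illustrative only (square rings rather than a true spiral arc; all names are hypothetical):

```python
def spiral_offsets(max_ring):
    """Yield grid offsets around (0, 0): the origin first, then ring 1,
    ring 2, ... out to max_ring, walking each ring's perimeter."""
    yield (0, 0)
    for r in range(1, max_ring + 1):
        x, y = -r, -r                          # start at the ring's corner
        for dx, dy in ((1, 0), (0, 1), (-1, 0), (0, -1)):
            for _ in range(2 * r):             # each side of the ring has 2r steps
                yield (x, y)
                x, y = x + dx, y + dy

def place_player(occupied, max_ring=50):
    """Return the first free cell scanning outward from the center."""
    for cell in spiral_offsets(max_ring):
        if cell not in occupied:
            return cell
    return None                                # grid full out to max_ring
```

Each ring r contains 8r cells, so the scan visits (2r+1)² cells out to ring r; to keep spawns from bunching at the center you could start the scan at a minimum ring instead of at the origin.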
My current thinking is to trace a clockwise spiral around the first player, gradually moving further and further out from that initial point. But I keep thinking this must have been done before and that a general solution must exist. This is for a strategy resource game (similar to CoC or Throne Wars etc). How do I do this?"} {"_id": 45, "text": "Using a permutation table for simplex noise without storing it Generating Simplex noise requires a permutation table for randomisation (e.g. see this question or this example). In some applications, we need to persist the state of the permutation table. This can be done by creating the table, e.g. using def permutation_table(seed): table_size = 2**10 # arbitrary for this question l = range(1, table_size + 1) random.seed(seed) # ensures the same shuffle for a given seed random.shuffle(l) return l + l # see shared link for why l + l; it is a detail and storing it. Can we avoid storing the full table by generating the required elements every time they are required? Specifically, currently I store the table and call it using table[i] (table is a list). Can I avoid storing it by having a function that computes element i, e.g. get_table_element(seed, i). I'm aware that cryptography already solved this problem using block ciphers; however, I found it too complex to go deep and implement a block cipher. Does anyone know a simple implementation of a block cipher for this problem?"} {"_id": 46, "text": "How to make hard to hack leaderboards Okay, someone tasked me to make a leaderboard for their game. However, the one I have right now is really insecure. All it does is check if someone is in the top 100, and if they are, post the score to the server, where the server will completely trust the client to give a real score. My question is: How would I make it harder, or better, impossible, to send a fake high score, because right now they can just post a value to an easy-to-find endpoint.
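One partial mitigation for the bare-endpoint problem is to have the client sign each submission with a shared secret, so naive forgery of the POST body stops working. This is only a speed bump: in an interpreted client the secret is extractable, so server-side plausibility checks still matter. A hypothetical Python sketch (names and the secret value are mine):

```python
import hashlib
import hmac

SECRET = b"hypothetical-shared-secret"  # in practice: per-build, never in plain sight

def sign_score(player_id, score):
    # Sign exactly the fields the server will trust.
    message = ("%s:%d" % (player_id, score)).encode("utf-8")
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify_score(player_id, score, signature):
    expected = sign_score(player_id, score)
    # Constant-time comparison avoids leaking the digest byte by byte.
    return hmac.compare_digest(expected, signature)
```

A tampered score or player id then fails verification unless the attacker also re-derives the signature from the extracted secret.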
Clarifications: The game is running in an interpreted environment, therefore obfuscation is not practical, and the game has a lot of input (an hour's worth at least), so sending that to the server is also impractical."} {"_id": 46, "text": "save player achievements in local device As an indie developer with a zero-dollar budget for servers and backend, I wonder if there are ways to store the player's achievements on their mobile device?"} {"_id": 46, "text": "Running multiple game servers on single host I want to use a VPS/dedicated server for game hosting. I expect to host a lot of servers on it. For the connection, should I assign a unique port for every server or change IPs for every server? And how would these two options impact the game if I had multiple VPS/dedicated servers?"} {"_id": 46, "text": "How can one add a level to an already published ios android unity3D game on a daily basis? How can one add a level to an already published iOS/Android Unity3D game on a daily/weekly basis? I know this probably isn't feasible, but it won't harm to just make sure of it. Our game requires us to allow access to just one level a day/week. You may access levels that were released on previous days and play to your heart's content, but all players will get access to a new level every day/week. I know this may sound crazy, but the concept requires it, so just roll with me. Is there any way to achieve this for iOS/Android games built with Unity? Of course, we could package a whole game with 10 levels and allow access to each level after a set time, but the game will need to talk to a back-end CMS all the time to allow login and verification of users, make sure they aren't cheating, etc, over proper security and https (proper anti-cheat solutions are a must, and that's why maybe the HTML5 route is better).
We are now moving towards doing all this in HTML5 and completely bypassing Unity, but I was hoping to find a solution with Unity because 3D is better than 2D depending on the situation, and some levels could be 3D while others are 2D. Patching/updating every day to upload new levels won't go down well with anyone, the app stores or the users."} {"_id": 46, "text": "Should I share classes on the server with the client? I am fortunate in being able to use the same programming language on both the server and client (JavaScript). I would therefore like to share my code and classes between server and client, like Half-Life (and possibly others) did. To make things simple, suppose I have an MMO with a Player and an Enemy class, with a position, velocity, and health. var Player = function() { this.health = 100; this.x = 0; this.y = 0; }; var Enemy = function() { this.health = 100; this.x = 0; this.y = 0; }; I know ideally these should inherit from each other (or use a CES system). But my question is as follows: Since these classes will be used on the client and server (since both client and server need to know about these entities) with some but not all shared logic, would it not be crazy to define two separate classes on both the server and client? What are the advantages and disadvantages of sharing classes between server and client? Are there any helpful websites with tips/tricks for this approach?"} {"_id": 46, "text": "Is there a turn based game service (like GKTurnBasedMatch on iOS) for desktops? Game Center's GKTurnBasedMatch seems to provide a pretty robust service for handling turn-based games, and OpenFeint appears to have something similar. The problem is, I'd like something like this for the desktop. I was thinking of rolling my own REST-based service on Rails, but after looking at the GameKit documentation, I realized it would take longer than I'd like to make it solid.
I don't suppose something like this already exists that I missed in my searches?"} {"_id": 46, "text": "nodejs game server timer I'm new to game server development. I'm developing a card game server. How could I make a Node.js server wait for a few seconds and then push data to the client after an event fires? For example, in a poker game one player has about 20 seconds to make a move; after 20 seconds the server will auto-fold and then push a message to the other player. How could I make the server wait for 20 seconds and then do the next action?"} {"_id": 46, "text": "How to achieve game server redundancy? The game I'm developing has a simple game server which stores inventory and hero data for each user, as well as deciding the outcome of some random events. The server interacts with PostgreSQL and responds to HTTP queries. It runs on EC2 and Microsoft Azure, but these hosts sometimes become unreachable for a few minutes, and one week the EC2 server froze every day. What are the strategies to make sure a server is always available? Non-solution: Multiple servers with their own databases, and the client will try to connect to a different server after a timeout. Of course, their data will be absent if it's a different server. Solution 1: As above, except whenever player data changes, that change is stored, and each player's data is replicated to the other servers (bidirectionally). The disadvantage is that this solution is \"home grown\" and prone to mistakes. Solution 2: One primary server (let's leave sharding for later), and one or more backup servers, with the database being replicated. When the primary server becomes unreachable, a backup server is made into the primary server. Problems: this needs additional technology for directing queries to the right server, and also for deciding when a backup server needs to be promoted to primary. Can this \"manager\" also hang? It sure can if it's on AWS! That would bring the whole system down.
Solution 3: We could pay for database hosting with some guarantee of reliability, and have multiple servers but just one database. We would assume the database is reliable (but still back it up). The user could connect to any reachable server, and the servers would not need to worry about having stale data, since there will be only one database. (Again, sharding is a separate problem, and we could just multiply the whole setup without fundamentally changing its architecture.) So how is reliability achieved when individual servers aren't reliable? I like solution 3 for its simplicity, if a 3rd-party database can be relied upon."} {"_id": 46, "text": "Gameserver travel time checks I'm making a little RPG multiplayer game where each player can travel long distances. Traveling from point A to B can take up to 10 minutes and, once initiated, is an automatic process. Now, I am not quite sure how to go about this on the server side; the server has to be the one telling the clients whether the traveling player has arrived or not. Do I have to run some sort of timer where I check every X seconds/milliseconds whether all the traveling players have reached their destination, in order to update the DB? (The characters are saved on the server to prevent cheating.) The way I imagined it was simply to capture the timestamp at the start of travel, then use the distance to calculate what the time should be when the player will arrive, and then run some sort of timer that checks each traveling player's \"start time\", \"actual time\" and \"destination time\", but since I have never done this before I'm not sure if this is a good way of doing it?"} {"_id": 46, "text": "Serverside libs examples for html5 WebSocket in .net? Hey guys, I wanted to know what libraries or examples exist for WebSockets in .NET"} {"_id": 47, "text": "Event Based Entity Component System So, I'm new to ECS. The concept is very interesting in contrast to traditional OOP and heavy inheritance.
I'm working on a game right now that is open-sourced, but does not actually have a game loop. It works by sending and receiving packets, and acting upon those received packets. In that case, what would I be looking at in this relationship between Systems and Components? Let's say I have an incoming Movement packet that is telling me to move Player A 5 units to the right. And let's say I have a lot of different packet types (up to 100), each for different stuff, like making a purchase at an NPC store. Another one for purchasing at a player store. Or maybe inviting and expelling players from a party. Or inviting and expelling players from a guild or alliance. At that point, I start getting a little confused. I can definitely see similarities in... adding/expelling people from a party, guild, or alliance. But each of those packets contains different information at its core. I'm tempted to write a component that Guilds, Alliances, and Parties can use (like AddRemoveMemberComponent), but the information coming in from the packet makes me hesitant. How would those components be handled, specifically? Maybe I'm just entangling myself in traditional inheritance, so it's hard for me to see the uniqueness of the data. Would I have unique systems for explicitly handling Party, Guild, and Alliance expulsions and invitations? And how would I dynamically know what to do with these different packets coming in (different packets for alliance, guild, party)?"} {"_id": 47, "text": "How does Component Entity System Manage Game Modes? I would like to create a simple fire-and-shoot game using a Component Entity System (CES). This game has two game modes(1): play mode and settings mode. The play mode is the actual game itself, whereas the settings mode is where the player updates his sprite's settings, like type of weapon, inventories, etc. Now, how does one implement game modes in CES? How would one switch to different game modes?
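One common pattern (not the only CES answer) is to treat each mode as a set of systems and have a manager swap which set runs each tick, while the component data stays shared. A minimal Python sketch with made-up names:

```python
class Mode:
    """A game mode owns the list of systems that run while it is active."""

    def __init__(self, systems):
        self.systems = systems

    def update(self, world, dt):
        for system in self.systems:
            system(world, dt)


class ModeManager:
    """Switching modes just swaps which system list gets ticked."""

    def __init__(self, modes, initial):
        self.modes = modes
        self.current = initial

    def switch(self, name):
        self.current = name

    def update(self, world, dt):
        self.modes[self.current].update(world, dt)
```

In play mode the input/physics/render systems tick; switching to settings mode replaces them with menu systems, while entity data (weapons, inventory) remains shared between both modes.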
(1) The game mode I refer to is something like the one implemented in Final Fantasy I, which has four game modes (http://en.wikipedia.org/wiki/Final_Fantasy_(video_game)#Gameplay)"} {"_id": 47, "text": "Intersystem communication in an ECS game Apologies if this question has been answered before, but after relentless searching I couldn't find anything. As many have, I've recently jumped on the ECS bandwagon, and I am currently killing some time by making a modest ECS game. The game is a somewhat simple 2D platform game. It is programmed in plain JS. The layout of the game is essentially as follows: The core of the game is the Engine; the Engine runs the game loop and holds an EventManager, responsible for raising events, and an EntityManager, responsible for containing all components of all entities. All logic is done by systems. The systems register event handlers (i.e. member functions) with the EventManager for specific EventTypes, which get called when an event of that type is raised. For example this.eventManager.registerHandler(EventType.EVENT_RENDER, this.render, null, this) I recently switched from calling all systems' relevant functions explicitly in the Engine to this pattern. But over to my problem and question. For rendering I have a RenderSystem. This system contains references to several types of drawable/animatable components which it, you guessed it, renders. Until recently, this system also contained a reference to another system, MapSystem, which spatially indexes all entities. The reason for this reference was to be able to call a function along the lines of mapSystem.search(frameBound), effectively pruning away all entities not needing to be rendered. So I have a couple of questions regarding this: Is it very bad practice for systems to communicate directly? Something about it just doesn't smell right to me. I see how it might severely complicate your code if you have, say, 100 systems, and each system holds references to many of the other systems.
Hello, O(n²). If I am not to hold inter-system references, how do I perform communication like that described above? My initial thought was to create an entity whose primary purpose is to hold the necessary information (i.e. all entities currently in the frame), then let MapSystem write to, and RenderSystem read from, this entity. But this also seems rather unclean to me, especially as MapSystem's search function might be useful to call in many different contexts. Creating an entity per calling context doesn't really seem like a good way to go about it either. TL;DR: Is it bad for systems in an ECS game to communicate directly and hold references to each other?"} {"_id": 47, "text": "How do I best solve multiple component modifications via a single event? I'm trying to solve multiple component modifications via a single event. Is there a better way of handling this? I have a label entity that cares about when a shield entity's hitpoints component is modified (damage, regeneration, etc). The label will need to change its text to display the current % of shield remaining, and its text color to display the severity of shield remaining. When my ShieldSystem resolves the hitPoints component, the below code runs. For each type of modification I plan to listen for, I need to add a new if statement to the for loop. This means I have a bunch of duplicated code between systems if more than one wants to modify any listening text component, sprite component, etc.

    for (int i = 0; i < listeners.size(); i++) {
        if (listeners.count(MessageType::CHANGE_TEXT) == 1) {
            std::vector<Entity*> entities = listeners[MessageType::CHANGE_TEXT];
            for (int j = 0; j < entities.size(); j++) {
                Entity* e = entities[j];
                MessageInfo* messageInfo = new MessageInfo(MessageType::CHANGE_TEXT, e, \"Some Text\");
                MessageCenter::Dispatch(messageInfo);
            }
        }
        // Add an if (listeners.count(MessageType::CHANGE_TEXT_COLOR)) ...
        // Add an if (listeners.count(MessageType::CHANGE_SPRITE_RECT)) ... etc.
    }"} {"_id": 47, "text": "Should I load scenes from files, or hard code them?
In a game engine I'm working on, I'm using scenes similar to what you would find in Unity. The entities, in my game, are composed of reusable components and custom data which is linked to those components. Entities have children, and one parent. This is shown below:

    class Entity {
        // ...custom data here...
        let parent: Component
        var children: [Component]
        var components: [Component]
        // ...methods...
    }

If the entities did not have custom data, then creating a scene would be described by the following steps: loading a file into memory, creating a tree of entities, and adding components to them, with parameters listed in the file. However, this does not leave room for custom data. My other alternative is to create a scene by creating a scene object, then hard coding the entities in. It would look something a bit like this:

    var openingScene = Scene()
    openingScene.addEntity(Player())
    openingScene.addEntity(Car())
    openingScene.addEntity(Terrain())
    // ...

The downside to this is that everything is hard coded. What can I do to create scenes in a game more efficiently?"} {"_id": 47, "text": "How do I go about creating the \"Watcher\" component? Last weekend, I spent several hours writing a basic entity component model, in JavaScript. I was largely referencing the Ash ActionScript game engine. I successfully worked out how the systems, components, nodes, and entities all link up, but one thing that eludes me is how to implement the \"watcher\" that checks the entities for their components, creates nodes for them, adds those nodes to the various systems automatically, and watches for any changes in the components on an entity. I was a bit confused by the way it works in Ash, and all of the posts and tutorials on the subject that I have read seem to forgo the details of that part of the implementation. Right now, I currently have the following: Entity, which is assigned components. Engine, which keeps track of the systems and entities.
Node, which contains references to the components in an entity that we want the system to modify (I am currently creating these manually, and adding them to the system). Component, which contains the data to be modified by the system. System, which operates on the nodes, which in turn change the data in the entity. I also have various other classes, dedicated to holding lists of the various objects. How do I go about creating the \"Watcher\" component?"} {"_id": 47, "text": "How do we coordinate which order the systems get processed within an ECS? If I knew all of the systems at compile time, I could order them myself. However, I intend on having user defined mods. This adds a level of complexity, in that I (as the framework developer) don't know which mods will be developed, or installed on a particular user's machine. I've thought about adding some sort of execution ordering:

    // within core code
    systemRegistrar.register(PhysicsSystem.class).before(RenderingSystem.class)
    // within third party mod
    systemRegistrar.register(ModSystem.class).before(PhysicsSystem.class)

I've thought about an incidental ordering using some sort of message or event bus:

    // within core code
    class PhysicsSystem {
        public void update() {
            // do physics-y stuff
            eventBus.send(new PhysicsSystemUpdatedEvent());
        }
    }
    class RenderingSystem {
        public void handle(PhysicsSystemUpdatedEvent event) {
            // do rendering stuff
            eventBus.send(new RenderingSystemUpdatedEvent());
        }
    }

But this blurs the line between what will happen, and what could happen. Another thought that came into my mind is that we can be a little bit dirty and not care too terribly much about what order the systems get executed in, because even if we render before the physics are applied, the next frame will then render the physics update that was made in the previous frame. It's consistent, but it blurs the line between what happens within a frame."} {"_id": 47, "text": "Central logic in entity component architectures I've built an architecture based on the entity component system idea.
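For the ordering question above: the before() constraints form a dependency graph, and a topological sort at startup turns them into one concrete execution order, even when mods register systems the core never heard of. A hedged sketch in Python (the function names are illustrative; the sorting itself comes from the stdlib graphlib module, Python 3.9+):

```python
# Sketch: derive a system execution order from before() constraints
# via topological sort, so mods can insert themselves without the
# framework knowing the full system list at compile time.
from graphlib import TopologicalSorter

def order_systems(systems, before_constraints):
    """before_constraints: iterable of (a, b) meaning 'a runs before b'."""
    ts = TopologicalSorter({s: set() for s in systems})
    for a, b in before_constraints:
        ts.add(b, a)  # b depends on a, so a is placed first
    return list(ts.static_order())
```

Registering constraints like (ModSystem, PhysicsSystem) and (PhysicsSystem, RenderingSystem) yields an order with ModSystem first; contradictory constraints from two mods would raise graphlib.CycleError, which is exactly the signal you want at startup rather than an arbitrary silent order.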
So I got: Components, which just store data; Systems, which operate in isolation on Components; and Scripts, which are just objects of type Script attached to a ScriptComponent. A ScriptSystem will then execute the init() and update() functions on the scripts. However, I'm wondering where to place the central game logic, since I have to check a few things about the overall game process: holding the map, counting the points, checking whether the points are enough for the end screen, etc. Should I create a single entity with just such a script attached to it, or are there better solutions out there?"} {"_id": 47, "text": "Collision Response in Entity Component Systems This question seems to be a duplicate of mine, but I don't think it is. I'm trying to build a game using an ECS, but I want this ECS to be as simple as possible, therefore I am eschewing messages. My question is where the behaviours should go for collision responses. Not physical responses, but logical responses (e.g. bullet hurts player, player picks up health, explosion destroys enemies). Is there a big HandleCollisionsSystem, or does each type of collision have its own system, or should the logic be in one or both of the colliding entities' components (but a component is just data, right?) I'm sure there are several approaches, but for the moment I would like to err on the side of simplicity."} {"_id": 47, "text": "Game engine with good Lua entity creation management I'm looking for an engine that constructs its entities using Lua or another scripting language. This is in order to find inspiration and do it in my own engine as well. I know that CryEngine does use Lua to make its entities, but I wanted to know if there are some other alternatives that I can look up.
Thanks!"} {"_id": 48, "text": "Load destructible mesh at runtime Using the tutorial Components and Collision as a guide, to dynamically load a SphereShape the following is done:

    AClass::AClass() {
        // Create USphereComponent
        USphereComponent* sphere = NULL;
        sphere = CreateDefaultSubobject<USphereComponent>(TEXT(\"Root\"));
        sphere->InitSphereRadius(1.0f);
        sphere->SetCollisionProfileName(TEXT(\"PhysicsActor\"));
        sphere->SetSimulatePhysics(true);
        sphere->WakeRigidBody();
        // Create UStaticMeshComponent
        UStaticMeshComponent* mesh = NULL;
        mesh = CreateDefaultSubobject<UStaticMeshComponent>(TEXT(\"Mesh\"));
        mesh->SetupAttachment(RootComponent);
        // Load asset from filesystem
        #define ASSET TEXT(\"/Game/MobileStarterContent/Shapes/Shape_Sphere.Shape_Sphere\")
        static ConstructorHelpers::FObjectFinder<UStaticMesh> asset(ASSET);
        #undef ASSET
        if (asset.Succeeded()) mesh->SetStaticMesh(asset.Object);
        RootComponent = sphere;
    }

Now, when I try the same set of steps, but this time for an Actor that has a UDestructibleComponent instead of a USphereComponent:

    AClass::AClass() {
        // Create our UDestructibleComponent
        UDestructibleComponent* destructible = NULL;
        destructible = CreateDefaultSubobject<UDestructibleComponent>(TEXT(\"Root\"));
        destructible->SetCollisionEnabled(ECollisionEnabled::PhysicsOnly);
        destructible->SetSimulatePhysics(true);
        destructible->SetEnableGravity(false);
        destructible->WakeRigidBody(NAME_None);
        // Load asset from filesystem
        #define ASSET TEXT(\"/Game/MobileStarterContent/Shapes/Shape_Cube_DM\")
        static ConstructorHelpers::FObjectFinder<UDestructibleMesh> asset(ASSET);
        #undef ASSET
        if (asset.Succeeded()) UE_LOG(LogTemp, Warning, TEXT(\"It worked!\"));
        RootComponent = destructible;
    }

I receive the following errors:

    ConstructorHelpers.h:105:20: error: cannot initialize a parameter of type 'UObject*' with an lvalue of type 'UDestructibleMesh*'
        ValidateObject( Object, PathName, ObjectToFind );
    ConstructorHelpers.h:29:19: error: incomplete type 'UDestructibleMesh' named in nested name specifier
        UClass* Class = T::StaticClass();

I've also tried adding .Shape_Cube_DM to the end of the string, similar to how the first asset was imported, but no luck. What is the correct way to load a DestructibleMesh from the filesystem and apply it to the UDestructibleComponent?"} {"_id": 48, "text": "Does it make sense to use voxel editors only for meshing? I am a hobbyist game developer. If there is something that I really don't like, it is 3D modeling, in particular all the work that needs to be put into the \"pipeline\" for the creation of models, in particular UV mapping. Remembering games like Minecraft or \"Little Big Planet\", I was wondering if voxel editors could be used for just modeling. Let me explain: using a voxel editor to make each object I need for my games, and then exporting each one as FBX, with either marching cubes or, even better, dual contouring for meshing. Does an approach like this make sense in terms of performance? From what I can tell, I would need a voxel software which allows all of the following: dual contouring meshing; FBX or OBJ export of the meshed area; multiple materials. Now, it looks like such software doesn't exist, which makes me think that this approach to 3D modeling may be totally wrong. Any heads up on this?"} {"_id": 48, "text": "How to quickly create meshes that have cutouts of other meshes? I have a mesh and I would like to quickly create planes (or boxes) that have cutouts in the shape of the silhouette of that mesh, but rotated at various angles. I would like to have a system to which I can feed any mesh and it will output these cutout planes, either for random or predefined rotations of the mesh. In the game I'm making, you have a 3D object, like a chess piece, that you rotate and translate to try to fit it in a hole in a wall that has the shape of the object. But the hole can be in the shape of any rotation of that object.
For instance, in the case of a pawn, the hole can look like an upright pawn, so you wouldn't have to rotate the pawn at all, or it could just be a circle the size of the base of the piece, so you would have to rotate the pawn so it goes in either feet or head first. I want a program to procedurally generate walls with these kinds of holes for any mesh I give it."} {"_id": 48, "text": "Vertex Normals, Loading Mesh Data My test FBX mesh is a cube. From what I surmise, it seems that the cube is on the extreme end of this issue, but I believe that the same issue could occur in any mesh: each vertex has 3 normals, each pointing a different direction. Of course, loading in any type of mesh, potentially ones having thousands of vertices, I need to use indices and not duplicate shared verts. Currently, I'm just writing the normals to the vertex at the index that the FBX data tells me they go to, which has the effect of overwriting any previous normal data. But for lighting calculations I need more info, something that's equivalent to a normal per face, but I have no idea how this should be done. Do I average the 3 different verts' normals together, or what?"} {"_id": 48, "text": "Where can I find free or buy \"next gen\" 3D Assets? Usually I buy 3D assets from sites like turbosquid.com or similar. My problem is that I have lately implemented glow, normal maps, specular (and specular power) maps, and reflection maps, and I can't find any models that use those techniques. So where can I find or buy \"next gen\" assets (at least models/items with a normal map)? I have checked for similar posts, but those I found are about either free only, or 2D, or 'ordinary' 3D, so I hope this is not a duplicate."} {"_id": 48, "text": "How to make low poly ground look better I'm working in Unity and have created some low poly meshes using Blender. Currently, the meshes look good, but the ground seems very lacking. I am not sure how to make it feel natural or even look good in general.
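On the vertex-normal question above: for smooth shading, the usual answer is yes, accumulate each adjacent face's normal into its shared vertices and normalise the sum; for hard-edged meshes like a cube you instead duplicate vertices per face so each copy keeps its own face normal. A small sketch of the smooth case in plain Python (the data layout is hypothetical, with no FBX specifics):

```python
# Sketch: smooth per-vertex normals = sum of adjacent face normals,
# normalised. Faces are index triples into the vertex list.
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def normalize(v):
    l = math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)
    return (v[0]/l, v[1]/l, v[2]/l)

def smooth_normals(vertices, triangles):
    acc = [(0.0, 0.0, 0.0)] * len(vertices)
    for i0, i1, i2 in triangles:
        # Unnormalised face normal; its length weights large faces more.
        n = cross(sub(vertices[i1], vertices[i0]), sub(vertices[i2], vertices[i0]))
        for i in (i0, i1, i2):
            acc[i] = (acc[i][0] + n[0], acc[i][1] + n[1], acc[i][2] + n[2])
    return [normalize(n) for n in acc]
```

Leaving the face normal unnormalised before accumulation is a common trick: larger triangles then contribute more to the averaged vertex normal.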
Here is a current image. The ground needs to remain mostly flat due to the game placing flat objects on top of the plane, and the image shows the exact camera angle we are using. But, as you can see in the image, the ground is extremely lacking, and it doesn't look like ground but instead just a flat background color. I've tried textures, but they do not look good with the trees and low poly design. How can I make the ground look better?"} {"_id": 48, "text": "How do I import real world models into a game engine? What are the hardware and software tools required to import physical worlds into a game engine? Can I use an HD camera to do that? What do the popular game engines support?"} {"_id": 48, "text": "How to generate AABB, OBB, Sphere from polygon soup How can I generate an AABB, OBB and Sphere from a polygon soup, where the bounding volumes are defined as follows: AABB should be specified by min(x,y,z) and max(x,y,z); OBB should be specified by min(x,y,z), max(x,y,z), and a quaternion for rotation; Sphere is specified as position (x,y,z) and radius."} {"_id": 48, "text": "MeshParts: Why do they exist? For my current game I'm working on, I've decided to implement a custom model class to store my models in. The reason being that I want to make adding new models as painless as possible for the rest of the team, without needing to go through the XNA/Monogame content pipeline (them having the VS dev framework installed just to compile a few models, or passing them onto me to compile every time a change is made, would just get tedious further on, not to mention different timezones when testing slowing things down). To this end, I've been looking at the model structure of different frameworks: XNA/Monogame, SharpDX and AssImp.NET being the main ones. The structure for XNA/MG and SharpDX is Model → MeshCollection → MeshPartCollection, whereas Assimp only has Scene → MeshCollection. From previous experience in XNA, MeshParts seem kinda redundant.
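For the bounding-volume question above: the AABB is just the per-axis min/max over every vertex in the soup, and a serviceable (not minimal) sphere can be centred on the AABB centre with its radius reaching the farthest point. A tighter sphere would use Ritter's or Welzl's algorithm, and an OBB would additionally need an orientation, e.g. from PCA of the points, which this sketch omits:

```python
# Sketch: AABB as per-axis min/max; a cheap bounding sphere centred
# on the AABB centre. Points are (x, y, z) tuples from the soup.
import math

def aabb(points):
    mins = tuple(min(p[i] for p in points) for i in range(3))
    maxs = tuple(max(p[i] for p in points) for i in range(3))
    return mins, maxs

def bounding_sphere(points):
    mins, maxs = aabb(points)
    center = tuple((mins[i] + maxs[i]) / 2 for i in range(3))
    # Radius must still cover the farthest actual point, not just the
    # AABB corner, so scan the points once more.
    radius = max(math.dist(center, p) for p in points)
    return center, radius
```

Degenerate triangles and duplicate vertices in a polygon soup don't hurt here, since both volumes depend only on the set of vertex positions.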
None of my meshes ever had more than one, and a lot of the XNA examples I've seen only ever had one mesh part per mesh. From everything I've experienced and seen, MeshParts seem redundant, surplus to requirements, and just make for an extra level of complexity (should I use a loop, or just hardcode it to use the first element in the collection?). Is there some useful aspect of them that I am not aware of, or a particular use case or scenario where they are actually useful?"} {"_id": 49, "text": "How to promote my free JavaScript game? Possible Duplicate: Effective marketing strategies for independent game projects. I created a JavaScript game for browsers. Do you know easy ways to promote it?"} {"_id": 49, "text": "Is there a way to make money from indie downloadable games? It appears that there are ways to make money with Flash games through portal and aggregator sites and embedded ads. But I do my programming in C and C++. I've started a prototype which relies on a few existing C++ SDKs. The game would have to be downloadable. Is this just a labor of love, or are there any ways to make money from this type of game? Does anyone pay for shareware anymore? What other options are there?"} {"_id": 49, "text": "How should I prepare for pitching a game to potential sponsors? We have developed a mobile game, and are preparing ourselves for demo day. We will be presenting our game to potential sponsors, and we are having trouble deciding how to make a quality presentation. Specifically, what are the unique challenges and opportunities in pitching a game, in contrast with product demos generally? Being developers, we are worried we might wrongly focus on elements important to developers, but not to investors.
How can we go about identifying them? For demoing game play, in what situation should we consider switching to a separate app, embedding the game in a slide show, showing canned game footage, or something else?"} {"_id": 49, "text": "How can I record my vector graphics game without blurring the graphics? A lot of people asked for a trailer for my game, because screenshots do not do it justice. I have tested PlayClaw, Fraps, CamStudio, VirtualDub, and some other minor tools; none have produced a viable result. My game uses vector graphics and is designed to run at 60fps, so lossy compression of a regular screen capture video destroys the graphical appeal. How can I record gameplay without blurring my graphics and slowing down the framerate?"} {"_id": 49, "text": "Where to find passive advertisements? Imagine a soccer game. While the player is playing a match, he is seeing the billboards all the time. I want to include that kind of advertisement in my game (obviously, making a profit). Just a brand logo, not a banner the user has to click. Does this even exist? Is there anything AdMob-like which allows this kind of advertisement?"} {"_id": 49, "text": "Do I have to ask for permission to use real company logos for advertising props in the world in my sports game? I'm making a simple turn-based Android ball-flicking soccer game, and I was thinking of creating a theme like those in sports games, which, in soccer for example, usually have advertisement banners on the walls, a bit realistic but cartoony. This is usually encountered in most sports-based games on any gaming platform. I have a question regarding this topic. Is it required to ask for permission for some recognized ad-banner-designed walls (e.g. Adidas, Samsung, McDonalds, etc.)
to be used as props for the game field, like this one for example?"} {"_id": 49, "text": "Getting Early User Feedback on Games Disclaimer: It may be that I'm already doing all the \"right\" things, but just don't have enough traffic for it to pay off with people giving feedback. My question is: how do you attract early and quality feedback on games from end users? Ideally, I'm looking at a model where the seed of an idea comes from you (or from them, even) and you build it into a game, molding it along the lines that people tell you are best. Because you're just one opinion. And game developers have a reputation for doing weird and sub-par quality stuff, sometimes. I'm currently practicing the following: VIP List. I have a \"VIP\" mailing list (mostly friends) who agreed to try out game releases and give me feedback. They seem hesitant to say anything negative, and most of them are not really gamers, just doing it because they know me and like my niche. 2-4 Week Releases. I use a form of agile and release iterations every two to four weeks. This means every release is functionally small, but somewhat polished. Press Releases. With every release, I also post a small \"press release\" post identifying the good, the bad, what's next, and screenshots (along with a link to play the latest release version). I'm not getting that much feedback. What am I missing?"} {"_id": 49, "text": "Video recording and editing for game promo video I would like to screen record a beta of an RPG I've created. The format will be narration over video with the ability to edit.
It's an indie game I'm marketing, so the video quality and narration need to be semi-professional. I'm curious as to which video production packages are of good enough quality to accomplish this without breaking the bank (free is the best!). If there are no good packages, then separate software recommendations for screen recording, editing, and narration would be helpful! Notes: I have a Mac. I would rather do video recording first, then narration later as an overlay. I've used QuickTime in the past, and I'm not sure if there was something better. This video is for my Kickstarter campaign."} {"_id": 49, "text": "How to calculate the price for acquisition of a mobile game I searched long and hard through the Game Dev community here, and I didn't manage to find a good enough answer for a situation that I need help with. Here is the deal: I've spent about a year single-handedly developing a mobile game which is a mix between MMORPG and idle battle games. I wouldn't like to go into too much depth as to which game we are talking about and who is trying to buy it, but a company I have worked for before reached out to me with a proposal to buy the game from me, so they can reap the profit from it and develop it further, since they are a professional game dev company while I am a single person and can't really reach the game's full potential as far as growth and player base go. Now, the game is pretty beneficial for me, since I live in a region with a very low income standard, so the money I get from the game might not be much for many other people, but for me it is equal to about 1.5-2 monthly salaries (we are talking the average salary for this country). That is about 600 Euros of pure profit after I pay for all the expenses. Of course, this number fluctuates depending on different events and sales I release and general user activity. It is a free to play game, so the income is mostly from microtransactions for in-game rare currency and special event item pack purchases.
Now the issue I have is that I am not sure how to set a price for the acquisition. What feels right to me would be to calculate the potential average income from the game over the next 12 months, and add to this the average salary per month for a game dev of my qualifications, multiplied by the number of months it took me to develop it to its released state. That would round up to about 15k Euros. When I think about it, 15k Euros isn't a small price to pay, but the game will pay that off in the next 1-2 years, and it might even grow much faster after they make investments in advertising and release new features and so on. This means that after the second year they should be on a clean, juicy profit from that deal, and that sounds good to me as well, since 15k Euros for me is a decent sum of money considering that the game has been running for about 14 months. And let's not forget that it has been beneficial for me for the last few months, so it hasn't always been uphill. What I really need is advice as to whether this price of around 15k Euros is justified and a good deal for both sides, since I don't want to shoot way too high and blow the acquisition, but I also don't want to give the game away for a low price, since it really holds a lot of potential, and it is my first fully released game."}