Hello Dyson122! I just bought the MTE. Do you use any app to map the tile surfaces in the Sonic template, or do you do it manually?
Thank you!
I manually mapped the surfaces in Sonic Finale. It was tedious, but it only took an hour or so and the results were fantastic! I don’t think an app exists for this specific purpose, but I could be wrong about that.
Hi dyson,
I’ve been looking at the “Platformer - Sonic Finale 0v844” demo this afternoon, along with the corresponding TMX file in Tiled, in an attempt to piece everything together. I think I understand what’s going on, but I just wanted to clarify a few things regarding surfaces and collision detection.
As part of the learning process I sliced “GreenHill.png” back into 32 tiles of 256 x 256 pixels, and having zoomed in on each individual tile (well, just a couple) I can see several bright red pixels in each image tile whose y co-ordinates correspond with the integers in its “surfaceData” property array in Tiled.
All was well and good until I tried to understand how tile “19” was constructed - specifically, the tile with two platforms towards its top. In Tiled it’s the 20th tile (since its index begins at 0), which is row 5, col 4. Here’s the corresponding JSON…
"19": { "surfaceData":"[[17, 18, 20, 20, 20, 20, 18, 16, 20],[65, 66, 68, 68, 68, 68, 66, 65, 64]]", "surfaces":"[[1,8],[9,16]]", "walls":"[[8,2],[8,3],[8,4]]" }
If the “surfaceData” property contains two arrays, one for each platform, specifying the y co-ordinates across the width of the surface(s), what do the arrays in “surfaces” and “walls” represent in relation to the tile image? I’m also not quite getting how the x co-ordinate is defined or how the integers in “surfaces” directly relate to those in “surfaceData” (e.g. the first array in “surfaces” specifies 1 to 8, but the first array in “surfaceData” contains 9 numbers).
Hope you can help. Thanks.
Of course.
Walls don’t do anything. In the end, time did not allow for them, and when all was said and done they seemed unimportant to the purpose of the sample. You can ignore any wall arrays you see.
You’ll notice that both surfaceData arrays have one extra data point in them. The reason for this is that the code calculates the player’s Y position and rotation by drawing a line segment between two points. The red points and corresponding data points are on the left edge of each segment. If the surface is eight segments long, one extra point is required just off the right edge of the last segment to complete the final line segment.
The “segments” I’ve mentioned are all hard-coded to be 16 pixels across. The user code calculates the x coordinate of each data point from the array’s index and this hardcoded segment length. You’ll see a lot of multiplication and division by 16 happening in the detectFloor() function down towards line 138.
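To make that concrete, the lookup works roughly like this (a loose Lua sketch of the idea, not the exact code from the sample; floorAt is just an illustrative name, and I’ve used the first surface of tile 19 as the data):

local SEGMENT = 16 -- hard-coded segment width in pixels

-- first surface of tile 19: 8 segments, so 9 data points
local surfaceData = {17, 18, 20, 20, 20, 20, 18, 16, 20}

-- x is the player's horizontal offset within the surface, in pixels
local function floorAt(x)
    local i = math.floor(x / SEGMENT) + 1 -- Lua arrays are 1-based
    local t = (x % SEGMENT) / SEGMENT -- 0..1 across the segment
    local y1, y2 = surfaceData[i], surfaceData[i + 1]
    local y = y1 + (y2 - y1) * t -- interpolate along the line segment
    local angle = math.atan((y2 - y1) / SEGMENT) -- slope, for the player's rotation
    return y, angle
end

print(floorAt(40)) -- halfway across the third segment: y = 20, angle = 0

The extra ninth data point is what lets the last call, floorAt(x) for x in the eighth segment, still find a right-hand endpoint to interpolate toward.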
Thanks! That was a brilliantly clear explanation!