In registerModel, it’s checking that the pointer type is SHARED, then calling registerModelImpl with the raw pointer and the Earth geometry’s pointer. The issue is that NativeGeometryModel.getPointer() extracts the raw pointer from the wrapped Earth model, but if the original getEarth JNI call returned a null pointer, then we’d have a null
pointer object that would throw an NPE when accessing .raw. The registerModel call appears to be wrapped in some kind of synchronization or validation, so I need to see how that
protection works. So the ReflectInterop is catching the entire exception from ProjectionFactory.<clinit>, which means the NPE actually escaped from that initialization rather
than being contained within a JNI call. The stack trace shows the error originates in getPointer() when trying to access the raw field on a null Pointer object during ProjectionFactory class initialization.
Looking at the sequence more carefully though: the FATAL error is being logged at line 563 in GLMapView.<init>, not at line 557 where ProjectionFactory is first accessed, and it’s showing as an NPE rather than an ExceptionInInitializerError. That suggests something else is happening at that later line that’s triggering a different exception. The
ExceptionInInitializerError from line 557 is caught somewhere! Then at line 563, MapSceneModel is constructed with ProjectionFactory.getProjection(4326), which now returns null
(since the static init failed, any future call throws ExceptionInInitializerError which might be caught, or maybe the class partially initialized…).
Actually, looking at the stack trace again:
at com.atakmap.map.MapSceneModel.<init>(MapSceneModel.java:72)
at com.atakmap.map.opengl.GLMapView.<init>(GLMapView.java:563)
The NPE is projection.getSpatialReferenceID() where projection is null. So ProjectionFactory.getProjection(4326) returned null.
But that doesn’t make sense—if the static initializer threw an exception, the JVM should permanently mark the class as broken and throw ExceptionInInitializerError on any subsequent
access. Unless… the exception is being caught somewhere, maybe in ReflectInterop or wrapped in a try-catch block. Let me check if there’s error handling around that line 557 call
in GLMapView’s constructor that’s swallowing the exception and allowing execution to continue.
So the ReflectInterop warnings are from registerModelImpl calling back into Java via JNI, catching an NPE internally and logging it as a warning. The static initializer actually
succeeds (the exception doesn’t escape), but the model gets registered with a null pointer. Then when getProjectionImpl is called, it returns null because the model wasn’t
properly initialized, causing the NPE in MapSceneModel when trying to call methods on the null projection object.
The real fix is to make sure getProjectionImpl returns a valid Projection object for SRID 4326. Since getProjectionImpl is a native method that returns a Projection interface
object, I need to understand how JNI handles returning Java objects from C++ code.
I can create an EquirectangularMapProjection instance through JNI by finding the class, getting its constructor, and instantiating it. This approach is clean since EquirectangularMapProjection extends AbstractProjection, which implements the Projection interface, so it satisfies the return type requirement.
The same pattern applies for ECEFProjection with SRID 4978. For the NativeGeometryModel.getPointer() issue, the real problem is that getEarth() returns a null pointer, but ReflectInterop catches the resulting NPE and passes 0 as the earth pointer to registerModelImpl, which should work fine anyway.
The key fix is implementing getProjectionImpl to return a Java EquirectangularMapProjection for SRID 4326, which will properly support the getSpatialReferenceID() call in MapSceneModel. I need to check whether MapSceneModel makes any JNI calls on the projection object itself.
● Two problems clear now:
- NativeGeometryModel.getPointer(model.earth) NPE because getEarth stub returns null — caught by ReflectInterop as warning
- getProjectionImpl(4326) JNI stub returns null → NPE at MapSceneModel.java:72
The fix: implement getProjectionImpl to return a real Java EquirectangularMapProjection instance, and getEarth to return a non-null Pointer:
● Read MapSceneModel constructor and registerModel
$
ATAK_SRC=/home/spectrcyde/NerfEngine/assets/AndroidTacticalAssaultKit-CIV-main
echo "=== MapSceneModel constructor (critical path lines 60-130) ==="
find "$ATAK_SRC/takkernel" -name "MapSceneModel.java" | head -1 | xargs -I{} sed -n '60,140p' {}…
└ 288 lines…
◐ Now I understand the full picture:
- EquirectangularMapProjection.DISPLAY_MODEL = MapProjectionDisplayModel.createDefaultLLAPlanarModel(4326) creates a MapProjectionDisplayModel using the Java constructor with DEFAULT_PLANE (a pure Java Plane object). This is NOT a JNI call – createDefaultLLAPlanarModel is a Java method!
- Inside the MapProjectionDisplayModel(int srid, GeometryModel earth, ...) constructor, GeometryModel_interop.getPointer(earth) gets the native pointer of the Plane object.
- Since Plane isn’t a NativeGeometryModel, getPointer returns 0, triggering the JNI wrap call which returns a shared pointer ✓ this.pointer gets set to that shared pointer.
- When registerModel(model) is called, the pointer type check passes and registerImpl is invoked with the shared pointer.
- This makes a JNI call to registerModelImpl, which internally tries to get the pointer for model.earth (the DEFAULT_PLANE).
- Since Plane isn’t a NativeGeometryModel, getPointer would return 0 again, but I need to verify what the actual getPointer implementation does in this case.
The stack trace shows the NPE comes from accessing the .raw field on a null Pointer object. This means the NativeGeometryModel instance itself exists, but its pointer field is null. So somewhere a NativeGeometryModel was created without properly initializing its pointer. The issue traces back to GeometryModel_interop.create() being called with a null pointer from getEarth(pointer.raw), which then creates a NativeGeometryModel with a null pointer field. When getPointer() is later called on this model, it tries to access .raw on that null pointer and crashes. I need to figure out which constructor path is creating this uninitialized model: either the one that uses GeometryModel_interop.getPointer() on the earth parameter, or something else.
Looking at the createDefaultLLAPlanarModel path, it uses the first constructor with DEFAULT_PLANE directly, so it doesn’t go through getEarth(). That means the NPE must be coming from the second constructor path where GeometryModel_interop.create() is called. The real question is what happens when the native registerImpl method tries to access model.earth through reflection; since model.earth is a Plane object, not a NativeGeometryModel, the reflection call might be failing or returning something unexpected. The NPE message “on a null object reference” suggests something tried to access .raw on a null object, which means there’s a NativeGeometryModel instance somewhere with a null pointer field. I’m thinking GeometryModel_interop.getPointer() is being called on the Plane object through reflection, and that’s where things are breaking down: either it’s not finding the right method or it’s returning null instead of a valid pointer.
Looking at the wrap JNI call in the constructor, if getPointer returns 0 for non-native objects, then this.pointer = wrap(srid, earth, ...) gets called with that 0 value, and the stub returns a shared pointer. But I need to trace through whether wrap is actually returning null somewhere, or if the reflection call itself is failing and leaving the pointer uninitialized. The real issue might be in registerImpl: when it tries to access model.earth through reflection, it’s still getting the DEFAULT_PLANE object, and then calling getPointer on that. Since Plane isn’t a NativeGeometryModel, the getPointer method… Checking the actual implementation…
Wait, I’m overthinking this. The ReflectInterop warning is just a side effect; the actual fatal error happens later when MapSceneModel tries to call projection.getSpatialReferenceID() but projection is null. So even though something fails during ProjectionFactory initialization, the class still loads successfully, and then when we try to get a projection, we hit the null pointer.
The fix is straightforward: I need to make getProjectionImpl return an actual Projection object instead of null. For SRID 4326, I should return an EquirectangularMapProjection instance. But I need to verify that this Java implementation will work correctly with MapSceneModel, specifically that it properly implements getSpatialReferenceID(). Writing JNI code…
Now I’m thinking through how MapSceneModel uses the projection once it’s instantiated: it takes the projection object along with display parameters like DPI, dimensions, focus point, and camera settings to initialize the scene model in the takkernel library.
● Now I have the full picture. The fix:
- ProjectionFactory.getProjectionImpl(4326) JNI stub returns null — need to return a real EquirectangularMapProjection Java instance
- The ReflectInterop NPE in registerImpl is a caught warning (non-fatal)
Let me check com.atakmap.map.MapSceneModel line 72:
● Check MapSceneModel constructor and ECEFProjection
$
ATAK_SRC=/home/spectrcyde/NerfEngine/assets/AndroidTacticalAssaultKit-CIV-main
echo "=== com.atakmap.map.MapSceneModel lines 60-120 ==="
find "$ATAK_SRC/takkernel" -name "MapSceneModel.java" -path "*/atakmap/map/*" | head -1 | xargs -I{} sed -n '1,20p' {}…
└ 149 lines…
◐ Now I see the full picture. The com.atakmap.map.MapSceneModel constructor calls create(...) which is a JNI method returning a Pointer. Our auto-stub for Java_com_atakmap_map_MapSceneModel_create returns makeSharedPointer(env, 1L).
Then init() calls getProjection(this.pointer.raw) → JNI, which returns a srid. Our stub returns 0. Then ProjectionFactory.getProjection(0) → getProjectionImpl(0) → returns
null.
Then this.mapProjection = null. But the outer gov.tak.api.engine.map.MapSceneModel in GLMapView is constructing with ProjectionFactory.getProjection(4326), which also returns
null from our stub, so that’s passing null where a projection is expected.
Looking at the takkernel MapSceneModel constructor, it immediately calls proj.getSpatialReferenceID() on that null reference, causing an NPE. The fix is to have getProjectionImpl(4326) return an actual EquirectangularMapProjection instead of null.
But then there’s another layer: the MapSceneModel constructor delegates to a JNI create() method that returns a pointer, and during initialization it tries to fetch the display
model by calling getProjection() on that pointer, which our stub returns 0 for, leading to another null dereference downstream.
Tracing through the initialization chain, the JNI stubs are returning zeros and null pointers across the board: getWidth and getHeight both return 0, ProjectionFactory.getProjection(0) returns null, and the matrix operations are creating NativeMatrix wrappers around the stub pointers, which might actually work but the whole
thing is built on a foundation of mock data.
The real issue is that MapSceneModel is being instantiated with a null projection, and when it tries to call proj.getSpatialReferenceID() at line 72, it crashes. The projection
comes from ProjectionFactory.getProjection(4326), so I need to make sure getProjectionImpl(4326) actually returns a valid EquirectangularMapProjection instead of null. Looking
at the MapSceneModel constructor call chain through create(), it’s passing through a series of parameters like DPI, dimensions, spatial reference ID, coordinates, and camera
settings.
Now tracing through the initialization, the JNI stub returns a valid pointer, but then when init() tries to fetch the projection using that ID, it gets null back from the JNI
layer, which cascades into ProjectionFactory.getProjection(0) returning null. The issue is that the outer gov.tak version of MapSceneModel also depends on having a valid
projection to initialize properly, so if the inner implementation’s mapProjection is null, it could NPE when trying to access its spatial reference ID.
Looking at the actual code, there’s a null check that prevents the NPE—if _impl.mapProjection is null, then this.mapProjection just stays null too. That’s fine for now, though
ATAK’s render code will eventually try to use it and might fail there.
The immediate solution is to implement getProjectionImpl to return the right projection types based on the spatial reference ID: EquirectangularMapProjection for 4326 and ECEFProjection for 4978. Then I need to look at how MapSceneModel.create() returns a pointer and trace through the camera field assignments.
The camera’s modelView and projection matrices come from _impl, and in the initialization code they’re being wrapped through JNI stubs. The issue is that getForward and getInverse are supposed to return pointers, but our auto-generated stubs are returning null instead of the actual matrix data.
When Matrix_interop.create(null, this) gets called with a null pointer, it creates a NativeMatrix with a null pointer, which will throw a null pointer exception as soon as the
matrix is actually used.
Looking at the stub implementations for getForward, getInverse, and getCameraProjection — they’re returning null, but they should be returning shared pointers since they’re
JNI methods that return jobjects. The issue is they don’t match the pattern we use to identify which stubs need to return shared pointers, so they’re not in our MUST_RETURN_SHARED
list. I need to check the actual JNI signatures for MapSceneModel to understand what these methods should be returning.
Actually, I’m overcomplicating this. The immediate crash is getProjectionImpl(4326) returning null, which then gets passed to proj.getSpatialReferenceID() at line 72 of the MapSceneModel constructor. Let me fix that first and see what the next error is. I need to examine the exact lines around 563 in GLMapView to understand what’s being instantiated and
where the null projection is coming from.
● Read GLMapView lines 550-575
$
find /home/spectrcyde/NerfEngine/assets/AndroidTacticalAssaultKit-CIV-main -name "GLMapView.java" | head -1 | xargs -I{} sed -n '550,575p' {}
└ 26 lines…
◐ Confirmed. ProjectionFactory.getProjection(4326) is called and it returns null. Then new MapSceneModel(..., null, ...) → proj.getSpatialReferenceID() → NPE.
The fix: implement Java_com_atakmap_map_projection_ProjectionFactory_getProjectionImpl to return an EquirectangularMapProjection instance for 4326.
Let me also check if ProjectionFactory.getProjection can just short-circuit before calling getProjectionImpl if the DISPLAY_MODEL is registered. Looking at the earlier code:
The method checks the cache first, but if there’s a cache miss, it always calls getProjectionImpl regardless. So the native implementation needs to handle SRID 4326 and return a
valid projection instead of null.
Looking at how this projection gets used downstream, MapSceneModel receives it and immediately extracts the spatial reference ID to pass into a native constructor, which means the
projection object needs to be properly initialized before it reaches that point. I’m tracing through what happens when these null pointers get passed
around—Matrix_interop.create() will likely return null or throw an exception when given a null pointer, and then the camera initialization tries to access properties on
potentially null objects, which could cause issues downstream.
The real fix is to make sure getProjectionImpl returns the right projection type based on the EPSG code, and ensure the matrix getter methods return actual non-null pointers
instead of stubs. Continuing to trace through the interop layer, the create method checks if the pointer is null or invalid and returns null if so. This means when our stubs
return nullptr for those camera matrix methods, the interop creates null references, which then propagate through to leave _impl.camera.modelView and _impl.camera.projection as
null values in the initialization chain.
Rather than keep chasing through all these layers, I should just fix the immediate NPE in getProjectionImpl and mark a broader set of methods as needing to return SHARED pointers
instead of raw ones.
Now I’m looking at how Matrix_interop.create() handles null pointers to understand what happens when we pass null instead of a valid pointer.
● Read Interop.create and NativeMatrix
$
ATAK_SRC=/home/spectrcyde/NerfEngine/assets/AndroidTacticalAssaultKit-CIV-main
echo "=== Interop.java create method ==="
find "$ATAK_SRC" -name "Interop.java" | head -1 | xargs cat 2>/dev/null | grep -A20 "public T create"…
└ 52 lines…
● Check ReflectInterop.create behavior with null
$
ATAK_SRC=/home/spectrcyde/NerfEngine/assets/AndroidTacticalAssaultKit-CIV-main
# Find the actual Interop implementation (ReflectInterop)
find "$ATAK_SRC" -name "ReflectInterop.java" | head -1 | xargs sed -n '1,80p' 2>/dev/null…
└ 9 lines…
● Read com.atakmap.interop.Interop
$
ATAK_SRC=/home/spectrcyde/NerfEngine/assets/AndroidTacticalAssaultKit-CIV-main
find "$ATAK_SRC" -name "Interop.java" -path "*/atakmap/interop/*" | head -1 | xargs sed -n '1,120p' 2>/dev/null
└ 121 lines…
◐ OK so Interop.create(pointer, owner) is abstract – it’s implemented by a concrete class registered via registry. The ReflectInterop class must implement this. If pointer is
null, the concrete implementation probably handles it by checking for null.
But I don’t need to trace this further. The key insight is:
- Matrix_interop.create(null, this) might return null or might NPE depending on the implementation
- The important thing is: getForward(1L) returns null from our stub → Matrix_interop.create(null, this) likely returns null → this.forward = null
- Later when the MapSceneModel is used for rendering, accessing this.forward.transform(...) would NPE
So I need to fix the stub methods to return actual shared pointers instead of null, and also ensure getProjectionImpl(4326) returns a real EquirectangularMapProjection rather than a placeholder.
Looking at the JNI method signatures in MapSceneModel.java, I see that getProjection returns an int (the SRID), while getForward, getInverse, getCameraProjection, and getCameraModelView all return Pointers. The current stub for getProjection returns 0, which causes ProjectionFactory.getProjection(0) to be called downstream and likely returns null or an invalid projection. I need to change the stub to return 4326 as the default SRID instead, so that when MapSceneModel.init() calls ProjectionFactory.getProjection(getProjection(this.pointer.raw)), it gets a valid EquirectangularMapProjection rather than null.
For the Pointer-returning methods like getForward and getInverse, I need to check if my regex pattern is matching them correctly. Looking at my jni_signatures.txt, I can filter by return type: if ret_type == 'jobject', then I know it returns a Pointer and should be added to the MUST_RETURN_SHARED list. This way I can expand the pattern to catch methods like getForward, getInverse, getCameraProjection, and similar getter methods that return objects. Right now, jobject-returning stubs that are not matched by the MUST_RETURN_SHARED regex return null, and the MapSceneModel getter methods that return Pointer are jobject but not in the pattern.
A better approach: add a separate regex pattern for methods that return Pointer (jobject) and are getters for native objects, or just add get[A-Z] to the pattern for MapSceneModel specifically. Actually, the cleanest solution is to add a list of specific method names that need to return a non-null Pointer.
Once I fix getProjectionImpl, the MapSceneModel constructor will work. Then during init(), calls like getForward(1L) and getInverse(1L) will return null, which gets passed to Matrix_interop.create() and likely returns null, leaving this.forward and this.inverse as null. This will cause NPEs later in the render loop, but the immediate crash is getProjectionImpl. Let me fix that first and make getProjection(int) return 4326 for MapSceneModel. I need to figure out the JNI signature for MapSceneModel.getProjection(long pointer).
● Now I have the full picture. Three fixes needed:
- getProjectionImpl(4326) → return Java EquirectangularMapProjection instance
- MapSceneModel.getProjection(long) → return 4326 (not 0) so mapProjection is populated
- MapSceneModel.getForward/getInverse/getCameraProjection/getCameraModelView → return SHARED Pointer
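The third fix can be driven off the extracted signature list. A minimal sketch in Python; the tab-separated "ret_type, function name" layout of jni_signatures.txt is an assumption here, and an explicit allow-list is used instead of a broad get[A-Z] regex so unrelated getters keep returning null:

```python
# Hypothetical excerpt in an assumed "ret_type<TAB>function_name" format,
# as produced by the signature-extraction step.
SIGNATURES = """\
jint\tJava_com_atakmap_map_MapSceneModel_getProjection
jobject\tJava_com_atakmap_map_MapSceneModel_getForward
jobject\tJava_com_atakmap_map_MapSceneModel_getInverse
jobject\tJava_com_atakmap_map_MapSceneModel_getCameraProjection
void\tJava_com_atakmap_map_MapSceneModel_destruct
"""

# Explicit allow-list of stubs that must hand back a SHARED Pointer.
MUST_RETURN_SHARED = {
    "Java_com_atakmap_map_MapSceneModel_getForward",
    "Java_com_atakmap_map_MapSceneModel_getInverse",
    "Java_com_atakmap_map_MapSceneModel_getCameraProjection",
    "Java_com_atakmap_map_MapSceneModel_getCameraModelView",
}

def needs_shared_pointer(line: str) -> bool:
    # Only jobject-returning methods can hand back a Pointer at all.
    ret_type, name = line.split("\t")
    return ret_type == "jobject" and name in MUST_RETURN_SHARED

shared = [l.split("\t")[1] for l in SIGNATURES.splitlines()
          if needs_shared_pointer(l)]
```

The allow-list approach trades a little typing for predictability: a regex like get[A-Z] would also capture getters whose null return is currently load-bearing.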
● Read /tmp/takstub/jni/tak_engine_stub.cpp lines 330-365
└ 36 lines read
● Edit /tmp/takstub/jni/tak_engine_stub.cpp (+52)
● Now add the new hand-implemented methods to HAND_IMPL in the Python generator and regenerate to remove them from auto-stubs:
● Regenerate stubs removing new hand-impl methods
$ python3 << 'PYEOF'
import re
with open('/tmp/jni_signatures.txt') as f:
lines = [l.strip() for l in f if l.strip()]…
└ 114 lines…
● Splice stubs and rebuild native lib
$
# Splice new auto-stubs into tak_engine_stub.cpp
python3 -c "
lines = open('/tmp/takstub/jni/tak_engine_stub.cpp').readlines()
cut = 0…
└ 27 lines…
● Deploy and test new build
$
JNILIBS=/home/spectrcyde/NerfEngine/assets/AndroidTacticalAssaultKit-CIV-main/atak/ATAK/app/src/main/jniLibs/arm64-v8a
cp /tmp/takstub/libs/arm64-v8a/libtakengine.so "$JNILIBS/"
export JAVA_HOME=/opt/amazon-corretto-11.0.30.7.1-linux-x64…
└ 77 lines…
❯ Consider: ‘Nice — this is exactly the kind of job where a tiny bit of automation saves you from a thousand paper cuts. The idea is:
- Scan the ATAK native source for every JNIEXPORT function.
- Extract the full function signature.
- Generate a .cpp file with stub implementations that compile.
- Return sensible defaults so the JVM doesn’t crash.
- Compile the stub library as libtakengine.so.
Below is a practical pattern that works well against the ATAK source tree.
# 1. Directory Layout (example)
ATAK/
takengine/
src/main/cpp/
Globe.cpp
MapRenderer.cpp
DatabaseImpl.cpp
...
tools/
generate_jni_stubs.py
You run the generator against the native cpp folder.
# 2. Python Script: Extract + Generate Stubs
Save as:
tools/generate_jni_stubs.py
```python
import os
import re

SOURCE_DIR = "../takengine/src/main/cpp"
OUTPUT_FILE = "jni_stubs.cpp"

# Regex that captures JNIEXPORT signatures
JNI_PATTERN = re.compile(
    r'(JNIEXPORT\s+[^\(]+\s+JNICALL\s+[^\(]+\([^\)]*\))',
    re.MULTILINE
)

def default_return(signature):
    """Return a safe default based on the declared return type.

    Match the type between JNIEXPORT and JNICALL rather than the whole
    signature; otherwise a jint-returning function with a jboolean
    parameter would be misclassified.
    """
    m = re.search(r'JNIEXPORT\s+(\S+)\s+JNICALL', signature)
    ret = m.group(1) if m else ""
    if ret == "void":
        return ""
    if ret == "jboolean":
        return "return JNI_FALSE;"
    if ret in ("jint", "jlong", "jshort", "jbyte", "jchar"):
        return "return 0;"
    if ret == "jfloat":
        return "return 0.0f;"
    if ret == "jdouble":
        return "return 0.0;"
    # jobject, jstring, jclass, and array types are all pointers
    return "return nullptr;"

def extract_signatures():
    signatures = []
    for root, _, files in os.walk(SOURCE_DIR):
        for f in files:
            if f.endswith(".cpp") or f.endswith(".cc"):
                path = os.path.join(root, f)
                with open(path, "r", errors="ignore") as src:
                    text = src.read()
                signatures.extend(JNI_PATTERN.findall(text))
    return signatures

def generate_stub_file(signatures):
    with open(OUTPUT_FILE, "w") as out:
        out.write('#include <jni.h>\n')
        out.write('#include <android/log.h>\n\n')
        out.write('#define LOG_TAG "ATAK_STUB"\n\n')
        for sig in signatures:
            ret = default_return(sig)
            out.write(f"{sig} {{\n")
            out.write(
                '    __android_log_print(ANDROID_LOG_INFO, LOG_TAG, "Stub called");\n'
            )
            if ret:
                out.write(f"    {ret}\n")
            out.write("}\n\n")

def main():
    sigs = extract_signatures()
    print(f"Found {len(sigs)} JNI methods")
    generate_stub_file(sigs)
    print(f"Generated {OUTPUT_FILE}")

if __name__ == "__main__":
    main()
```
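Before running against the full tree, the extraction regex can be sanity-checked standalone; the C++ snippet below is invented for illustration:

```python
import re

# Same pattern as in the generator
JNI_PATTERN = re.compile(
    r'(JNIEXPORT\s+[^\(]+\s+JNICALL\s+[^\(]+\([^\)]*\))',
    re.MULTILINE
)

# Invented sample source: one jint-returning and one void-returning export
SAMPLE = """
JNIEXPORT jint JNICALL
Java_com_atakmap_map_Globe_getMaxMapScale
(JNIEnv *env, jclass clazz)
{
    return impl->getMaxMapScale();
}

JNIEXPORT void JNICALL
Java_com_atakmap_map_Globe_destruct(JNIEnv *env, jclass clazz, jlong ptr)
{
    delete reinterpret_cast<Globe *>(ptr);
}
"""

# Each match spans from JNIEXPORT through the closing paren of the args
matches = JNI_PATTERN.findall(SAMPLE)
```

Note the pattern deliberately stops at the argument list's closing paren, so function bodies never leak into the captured signature.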
# 3. Run It
python3 generate_jni_stubs.py
Output example:
Found 1048 JNI methods
Generated jni_stubs.cpp
Takes ~1–2 seconds.
# 4. Example Generated Stub
Output looks like:
```cpp
JNIEXPORT jint JNICALL
Java_com_atakmap_map_Globe_getFullEquitorialExtentPixels
(JNIEnv *env, jclass clazz, jdouble dpi)
{
    __android_log_print(ANDROID_LOG_INFO, "ATAK_STUB", "Stub called");
    return 0;
}
```
You then replace specific methods manually.
Example:
```cpp
JNIEXPORT jint JNICALL
Java_com_atakmap_map_Globe_getFullEquitorialExtentPixels
(JNIEnv *env, jclass clazz, jdouble dpi)
{
    const double R = 6378137.0;               // WGS84 equatorial radius, meters
    const double INCHES_PER_METER = 39.3701;  // 1 meter = 39.3701 inches
    double circumference = 2.0 * 3.141592653589793 * R;
    double pixels = circumference * INCHES_PER_METER * dpi;
    return (jint)pixels;
}
```
# 5. Build the Stub Engine
Minimal CMakeLists.txt:
cmake_minimum_required(VERSION 3.10)
project(takengine_stub)
add_library(takengine SHARED
jni_stubs.cpp
)
find_library(log-lib log)
target_link_libraries(
takengine
${log-lib}
)
# 6. Build with Android NDK
export ANDROID_NDK=$HOME/Android/Sdk/ndk/25.2.9519653
cmake \
-DCMAKE_TOOLCHAIN_FILE=$ANDROID_NDK/build/cmake/android.toolchain.cmake \
-DANDROID_ABI=arm64-v8a \
-DANDROID_PLATFORM=android-21 \
-B build
cmake --build build
Output:
libtakengine.so
# 7. Drop the Engine Into ATAK
Place it here:
atak/app/src/main/jniLibs/arm64-v8a/libtakengine.so
Then build:
./gradlew assembleDebug
# 8. Runtime Debugging
Watch calls in real time:
adb logcat | grep ATAK_STUB
You’ll see which JNI methods ATAK hits during startup.
Implement those one by one.
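If the stub log message is extended to include the function name (e.g. via __func__ in the generated C++), triage can be automated. This sketch assumes logcat lines of the form `ATAK_STUB: Stub: <function>`; the sample lines are invented:

```python
import re
from collections import Counter

# Invented logcat excerpt in the assumed "Stub: <name>" format
LOG = """\
10-02 12:00:01.123  1234  1234 I ATAK_STUB: Stub: Java_com_atakmap_map_Globe_getMaxMapScale
10-02 12:00:01.125  1234  1234 I ATAK_STUB: Stub: Java_com_atakmap_map_Globe_getMaxMapScale
10-02 12:00:01.130  1234  1234 I ATAK_STUB: Stub: Java_com_atakmap_map_projection_ProjectionFactory_getProjectionImpl
"""

STUB_RE = re.compile(r"ATAK_STUB: Stub: (\S+)")

# Count how often each stub is hit during startup
hits = Counter(m.group(1) for line in LOG.splitlines()
               if (m := STUB_RE.search(line)))

# Most-hit stubs are usually the ones worth implementing first
priority = [name for name, _ in hits.most_common()]
```

Feeding a real capture through this (e.g. `adb logcat -d > startup.log`) turns "implement those one by one" into a ranked worklist.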
# 9. Result
Instead of implementing 1048 JNI methods manually, you now have:
1048 auto-generated stubs
+
10–20 real implementations
That’s enough to boot the UI and start inserting the Cesium rendering backend.
# 10. Pro Tip (Huge Time Saver)
Modify the generator to create per-class stub files:
jni_globe_stubs.cpp
jni_database_stubs.cpp
jni_projection_stubs.cpp
That makes replacing chunks with real implementations much easier later.
If you want, I can also show you a much more advanced version of this generator that:
• parses JNI signatures directly from compiled .class files
• auto-creates matching C++ stubs
• builds a complete libtakengine.so automatically
That version can regenerate the entire compatibility layer in under 0.5 seconds. 🚀
‘
◐ We’ve advanced past the fatal NPE! Now the crash is:
java.lang.IllegalArgumentException: left == right
at android.opengl.Matrix.orthoM(Matrix.java:447)
at com.atakmap.opengl.GLES20FixedPipeline.glOrthof(GLES20FixedPipeline.java:171)
at com.atakmap.map.opengl.GLMapView.sync(GLMapView.java:1422)
at com.atakmap.map.opengl.GLMapView.<init>(GLMapView.java:572)
The issue is that GLMapView.sync() is calling glOrthof with identical left and right values because our MapSceneModel stub is returning 0 for both width and height, which
cascades into the ortho matrix calculation getting zero dimensions.
The viewport bounds are derived from the scene dimensions. Since width=0 and height=0, it might compute `left=right=0` or similar.
Let me fix `MapSceneModel.getWidth` and `MapSceneModel.getHeight` to return the actual dimensions... but wait, the stub doesn't know the actual dimensions!
Actually, looking at the constructor, `width` and `height` are passed as arguments to the `create(...)` JNI call. But our `create` stub ignores them and returns a pointer to fake
memory. So `getWidth(1L)` returns 0.
We need to store the width/height in our stub's MapSceneModel state and return them from `getWidth`/`getHeight`. But this is getting complicated...
The simplest fix is to create a state struct that holds the width, height, and DPI from the create call, then have the getter methods return those stored values.
But I’m realizing the current approach of returning a fixed pointer value (1L) won’t work if multiple MapSceneModel instances are created—they’d all reference the same state. I need
to allocate actual memory for each instance and return its address as the pointer, so each one maintains its own independent state.
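The per-instance bookkeeping can be sketched as follows, here in Python standing in for the C++ stub; in the real stub this would be a heap-allocated struct whose address is returned as the jlong pointer:

```python
import itertools

# Simulated native-side state table: handle -> per-instance scene state.
# In the C++ stub this is a heap allocation; the returned address is the
# opaque "pointer" the Java layer passes back into every getter.
_next_handle = itertools.count(1)
_scenes = {}

def create(dpi: float, width: int, height: int) -> int:
    """Stub for MapSceneModel.create: store the args, hand back a handle."""
    handle = next(_next_handle)
    _scenes[handle] = {"dpi": dpi, "width": width, "height": height}
    return handle

def get_width(handle: int) -> int:
    return _scenes[handle]["width"]

def get_height(handle: int) -> int:
    return _scenes[handle]["height"]

# Two instances no longer share state, unlike the fixed 1L pointer
a = create(240.0, 1080, 1920)
b = create(240.0, 800, 600)
```

With this shape, getWidth/getHeight return whatever the constructor passed in, so the ortho bounds stop collapsing to left == right.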
● The crash has advanced significantly! GLMapView.sync() gets left==right in glOrthof because MapSceneModel.getWidth/getHeight return 0. Need to store real dimensions in the MapSceneModel
stub:
● Check GLMapView.sync and MapSceneModel.create signature
$
ATAK_SRC=/home/spectrcyde/NerfEngine/assets/AndroidTacticalAssaultKit-CIV-main
echo "=== GLMapView.sync around line 1422 ==="
find "$ATAK_SRC" -name "GLMapView.java" | head -1 | xargs -I{} sed -n '1410,1440p' {}…
└ 117 lines…
◐ Now I see what’s needed. The crash is:
GLES20FixedPipeline.glOrthof(this.currentPass.left, this.currentPass.right, this.currentPass.bottom, this.currentPass.top, (float) this.currentPass.scene.camera.near, (float)
this.currentPass.scene.camera.far);
left == right means the viewport bounds are both 0. This comes from currentPass.restore(this) which reads the camera data from the scene. Since our MapSceneModel returns zeros
for everything, camera.near = camera.far = 0 also, and left = right = 0.
The root cause is that getWidth(1L) and getHeight(1L) return 0, so the scene dimensions are zero, which cascades into the ortho matrix calculation. I need to create a proper MapSceneState struct that holds the actual width, height, dpi, and focus coordinates, then initialize it properly in the create method.
But actually, tracing through the code flow—currentPass.restore(this) is calling JNI methods like get_left, get_right, get_bottom, get_top on the GLMapView pointer, and
these stubs are all returning 0. That’s why left == right, which causes the crash in glOrthof.
The real problem is that our fake native GLMapView object (with value 1L) has no actual state, so all these viewport queries return 0. I’m weighing two approaches: either store real
screen dimensions in the MapSceneModel when it’s created, or populate the native GLMapView object with actual viewport data so the JNI stubs can return meaningful values.
Option A seems cleaner since we already have width and height available at creation time. But the crash itself—where left == right in the orthographic projection—suggests the
issue originates from currentPass state, which gets restored from somewhere in the GLMapView. I need to trace through the PassState.restore() method to understand where these
viewport bounds actually come from.
● Now I need to understand where currentPass.left/right come from in sync():
● Find PassState.restore to trace left/right source
$
ATAK_SRC=/home/spectrcyde/NerfEngine/assets/AndroidTacticalAssaultKit-CIV-main
echo "=== GLMapView.PassState.restore ==="
find "$ATAK_SRC" -name "GLMapView.java" | head -1 | xargs grep -nE "class PassState|void restore|left|right|bottom|top|_left|_right" | grep -v "//" | head -40…
└ 49 lines…
◐ Now I see it! GLMapView is created with left, bottom, right, top parameters. Looking at how it’s created:
public GLMapView(GLMapSurface surface, int left, int bottom, int right, int top) {
this(surface, surface.getMapView().getGlobe(), left, bottom, right, top);
}
And it’s created from GLMapSurface. Let me check what left/bottom/right/top values are passed.
But more importantly – currentPass.left/right/bottom/top – how are these set? Are they set by the sync() method from some JNI call? Or are they the constructor parameters?
I’m tracing through the rendering pipeline where currentPass gets used in the orthographic projection call, and I’m seeing JNI stubs that retrieve left/right values but they’re
returning zero. I need to find where restore() actually populates these viewport bounds.
Looking at the constructor and the GLMapView.create stub, I can see the issue: the native creation method receives the viewport parameters but our stub doesn’t store them, so when sync() later calls the getter stubs, they have nothing to return. I need to make the stub actually cache those left/bottom/right/top values so they’re available when the sync
method reads them back. So the JNI signature takes left, bottom, right, top coordinates when creating GLMapView, and these same bounds are accessible through getter methods. The
crash happens because glOrthof is being called with left==right (both zero), which comes from the camera’s near/far values that our MapSceneModel is returning as zeros.
The real issue is that currentPass.left/right are being populated from currentPass.restore(this), which reads from native state. I need to trace through the PassState class and
understand what its restore method actually does.
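Before tracing restore(), it helps to pin down exactly why left == right is fatal. The transcript later checks the Android Matrix.orthoM validation; here is a minimal plain-C++ model of those degenerate-bounds checks and of the standard ortho scale terms (the function names are mine, not ATAK's):

```cpp
#include <stdexcept>

// Minimal model of the degenerate-frustum checks that android.opengl.Matrix.orthoM
// performs before building an orthographic matrix. Parameter names mirror glOrthof.
bool orthoBoundsValid(float left, float right, float bottom, float top,
                      float near, float far) {
    return left != right && bottom != top && near != far;
}

// The diagonal scale terms of the ortho matrix; each divides by one of the
// differences above, which is why equal bounds are rejected outright.
void orthoScales(float left, float right, float bottom, float top,
                 float near, float far, float out[3]) {
    if (!orthoBoundsValid(left, right, bottom, top, near, far))
        throw std::invalid_argument("degenerate ortho bounds");
    out[0] = 2.0f / (right - left);
    out[1] = 2.0f / (top - bottom);
    out[2] = -2.0f / (far - near);
}
```

With the stubbed state every argument is 0, so orthoBoundsValid(0, 0, 0, 0, 0, 0) is false, which is the crash seen here.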
● Trace currentPass.left/right assignment
$
find /home/spectrcyde/NerfEngine/assets/AndroidTacticalAssaultKit-CIV-main -name "GLMapView.java" | head -1 | xargs -I{} grep -nE
"get_left|get_right|get_bottom|get_top|PassState|class State|void restore|left =|right =|bottom =|top =" {} | grep -Ev "//|deprecated|javadoc|\*" | head -40
└ 25 lines…
● Read PassState.restore and constructor initialization
$
ATAK_SRC=/home/spectrcyde/NerfEngine/assets/AndroidTacticalAssaultKit-CIV-main
echo "=== PassState.restore (line 2618) ==="
find "$ATAK_SRC" -name "GLMapView.java" | head -1 | xargs -I{} sed -n '2618,2680p' {}…
└ 166 lines…
◐ Now I understand the full picture:
1. GLMapView(context, globe, left, bottom, right, top) is called with left=0, bottom=0, right=surface.width-1, top=surface.height-1.
2. create(ctxptr, globeptr, left, bottom, right, top, legacy) is a JNI stub that returns a SHARED pointer but stores nothing.
3. currentScene.scene = new MapSceneModel(dpi, width, height, proj, ...) calls MapSceneModel_create, which our stub ignores; the width/height are stored as 0.
4. A clone of the scene is created for the current pass.
5. sync() retrieves various state flags and calls currentPass.restore(this) to copy the viewport bounds into the main view's properties.
6. Finally, glOrthof() is invoked with those bounds to set up the orthographic projection.
I'm trying to trace where currentPass.left/right/bottom/top actually get their values in the first place. Looking at the PassState constructor, it copies viewport bounds directly from the GLMapView, but I need to figure out which constructor instantiates currentPass and whether those bounds come from native synchronization or are set earlier in the flow.
● Read GLMapView.sync() from start to crash point
$
find /home/spectrcyde/NerfEngine/assets/AndroidTacticalAssaultKit-CIV-main -name "GLMapView.java" | head -1 | xargs -I{} sed -n '1380,1425p' {}
└ 46 lines…
◐ Now I see the full picture:
1. sync(this.pointer.raw, this, false) and sync(this.pointer.raw, this, true) are JNI calls that populate currentPass and currentScene.
2. sync(long pointer, GLMapView view, boolean passState) is a JNI method that reads the native state and fills in the Java view fields.
3. Since our stub for sync does nothing, currentPass.left = 0, currentPass.right = 0, currentPass.top = 0, currentPass.bottom = 0.
4. Then glOrthof(0, 0, 0, 0, 0, 0) → left == right → crash!
The real issue is that the sync JNI method needs to properly initialize the viewport bounds, but implementing it fully would require native callbacks into Java. A simpler fix is to just set currentPass to the actual viewport dimensions: left=0, bottom=0, right=width-1, top=height-1, i.e. a full-screen view. The cleanest approach is to implement the sync JNI stub to call back into Java and set the pass boundaries based on the view dimensions. But that requires knowing the surface dimensions from native code, which gets complicated with JNI callbacks. Looking at this differently: the _right and _top fields already get set correctly from the surface dimensions in the Java code, but the native sync is zeroing out _left and _bottom. I need to figure out how to preserve or properly initialize those boundary values instead of relying on the native state. So the plan is to implement the sync JNI method to populate currentPass with correct values. The key fields are:
- left: left pixel bound (0)
- right: right pixel bound (surface.width - 1)
- top: top pixel bound (surface.height - 1)
- bottom: bottom pixel bound (0)
- drawMapResolution: the map resolution (OSMUtils.mapnikTileResolution(0))
- scene: the MapSceneModel (already set in Java)
- camera.near/far: reasonable values for an ortho projection
The sync JNI method signature would be:
Java_com_atakmap_map_opengl_GLMapView_sync(JNIEnv*, jclass, jlong pointer, jobject glMapView, jboolean passState)
I need to implement this to get the surface dimensions from the GLMapView Java object via JNI reflection, then set the PassState fields (left, right, top, bottom) based on those dimensions and apply a reasonable draw map resolution. Let me work through a simpler approach where I access the view's surface to extract width and height, then populate the pass state accordingly. I'm realizing the real issue is that glOrthof crashes when left equals right, so instead of trying to implement the full JNI callback mechanism, I should focus on making sure the sync method populates the view's PassState with valid orthographic bounds that won't cause a crash. The sync JNI method needs to set those fields on the Java object directly rather than trying to call back into Java. Let me check what initial left/bottom/right/top values GLMapSurface passes to the GLMapView constructor to understand what valid bounds should look like.
● Find sync JNI signature and GLMapSurface viewport setup
$
ATAK_SRC=/home/spectrcyde/NerfEngine/assets/AndroidTacticalAssaultKit-CIV-main
echo "=== GLMapSurface constructor – how GLMapView is created ==="
find "$ATAK_SRC" -name "GLMapSurface.java" | head -1 | xargs -I{} sed -n '100,200p' {}…
└ 132 lines…
◐ Now I see the key information:
- GLMapSurface creates GLMapView with left=0, bottom=0, right=GLMapView.MATCH_SURFACE, top=GLMapView.MATCH_SURFACE.
- MATCH_SURFACE is a special constant that triggers:
  if (_right == MATCH_SURFACE) _right = _surface.getWidth() - 1;
  if (_top == MATCH_SURFACE) _top = _surface.getHeight() - 1;
- So _left = 0, _bottom = 0, _right = surface.width - 1, _top = surface.height - 1.
- The sync JNI method receives a pointer and the view object to populate the rendering pass state, but since my stub implementation does nothing, currentPass.left and currentPass.right both stay at 0, which causes glOrthof(0, 0, ...) to crash. I need to actually implement the sync method to set these values properly.
The straightforward fix would be to access the Java glMapView object's viewport fields and populate currentPass with the correct left/right/top/bottom bounds, plus set drawMapResolution and the camera's near/far planes to non-zero values. However, that requires JNI callbacks, which adds complexity. Let me reconsider: since sync already receives the Java GLMapView object, I should be able to directly access the viewport fields that were set during construction and use those to initialize currentPass.
Looking at the constructor flow more carefully: GLMapSurface creates the view with MATCH_SURFACE dimensions, the constructor sets initial bounds, then calls sync(), which eventually restores values from currentPass. But currentPass starts with zeros, causing the orthographic projection to fail. The issue is that currentPass needs to be populated with actual viewport dimensions before restore() is called. So I need to populate those currentPass fields from the Java GLMapView's _left, _right, _top, _bottom values in the sync JNI method. But I should check when MATCH_SURFACE gets resolved to an actual dimension; it looks like that happens when the property is accessed, converting it to the surface width or height minus one. The problem is that during the GLMapView constructor, when sync() gets called, the GLSurfaceView might not have been measured yet, so getWidth() could return 0, which would make MATCH_SURFACE resolve to -1 and break the bounds check. I need to figure out when the surface dimensions are actually available at construction time. So MATCH_SURFACE is a sentinel value that gets resolved lazily: when getRight() is called, if _right still equals MATCH_SURFACE, it calculates the actual width minus one and stores that. The native side receives the raw constructor arguments, including this sentinel value, which then gets resolved on demand rather than upfront.
The issue is that our stub's sync method doesn't do anything, leaving currentPass with zeros instead of the resolved dimensions. I need to implement the JNI sync call to actually invoke getRight() and getTop() on the Java object so the native side gets the real values. Writing the JNI sync implementation… I'm realizing this approach is getting too complicated with all the JNI field access. Let me step back and try something simpler: what if I just patch the Java code directly to add a guard check before calling glOrthof, making sure the viewport dimensions are actually valid before passing them to the graphics call? Still writing JNI code… Setting the remaining boundary fields and map resolution, then accessing the scene object to configure camera parameters… I'm realizing the PassState field is likely currentPass, with the type descriptor Lcom/atakmap/map/opengl/GLMapView$PassState; for the inner class. Looking at the source code, the left/right/top/bottom fields are integers based on how they're initialized, so I need to use the int field type descriptors when accessing them through JNI.
The crash shows glOrthof being called with left == right == 0, which violates the requirement that these values be different. If I can set right to something like the surface width minus one (around 2231), that would fix the immediate crash. But there's also a secondary issue: near == far == 0.0 would also crash, since glOrthof requires these to be different values. The priority is fixing the left/right equality first, but I'll need to address the near/far issue as well. Looking at the Android Matrix.orthoM validation… Checking MapSceneModel.init() to see where camera.near and camera.far get their values… Implementing JNI stubs… Writing the JNI implementations…
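The lazy MATCH_SURFACE resolution described above can be modeled in a few lines of plain C++. The actual sentinel value in GLMapView is an assumption here (only the resolve-on-access behavior matters for the reasoning), and the function name is mine:

```cpp
// Plain-C++ model of GLMapView's lazy MATCH_SURFACE resolution.
constexpr int MATCH_SURFACE = -1;  // hypothetical sentinel value, not confirmed from source

// Models getRight()/getTop(): if the stored bound is still the sentinel,
// derive it from the surface extent; otherwise return it unchanged.
int resolveBound(int stored, int surfaceExtent) {
    return (stored == MATCH_SURFACE) ? surfaceExtent - 1 : stored;
}
```

With a measured 2232-px surface this yields 2231; with an unmeasured surface (getWidth() == 0) it yields -1, the degenerate case this transcript keeps circling back to.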
● The crash has advanced significantly, to glOrthof. Three fixes are needed: implement the sync JNI to set currentPass.left/right/top/bottom from the surface dims, and implement getCameraNear and getCameraFar to return non-zero values (all currently return 0, failing the ortho matrix validity checks):
● Check PassState fields and camera JNI sigs
$
# Verify the field names in PassState class
ATAK_SRC=/home/spectrcyde/NerfEngine/assets/AndroidTacticalAssaultKit-CIV-main
echo "=== PassState field declarations ==="
find "$ATAK_SRC" -name "GLMapView.java" | head -1 | xargs -I{} sed -n '2500,2570p' {}…
└ 139 lines…
◐ Now I have all the info I need. The PassState class is actually GLMapView.State (a static inner class named State). Let me check the actual field:
Looking at the sync crash area:
GLES20FixedPipeline.glOrthof(this.currentPass.left, this.currentPass.right, this.currentPass.bottom, this.currentPass.top, (float) this.currentPass.scene.camera.near, (float)
this.currentPass.scene.camera.far);
And the fields are: currentPass.left/right/top/bottom (from GLMapView.State inner class), and currentPass.scene.camera.near/far.
The JNI type descriptor is Lcom/atakmap/map/opengl/GLMapView$State;. Looking at the copy method, _right gets assigned from view._right, and during MATCH_SURFACE resolution it gets set to _surface.getWidth() - 1. But since sync doesn't run in this case, currentPass.right remains at its unresolved default.
Now I'm tracing through the MapCamera initialization in MapSceneModel.init(), where the near and far planes are being set to 0.0 from JNI calls. The issue is determining whether this.camera is a com.atakmap.map.MapCamera or a gov.tak.api.engine.map.MapCamera; the crash points to the latter, which comes from currentPass.scene being a gov.tak.api.engine.map.MapSceneModel from GLMapView.State. I'm seeing that the TAK Platform's MapSceneModel wraps the internal com.atakmap.map.MapSceneModel, and during initialization it copies the camera's near and far values from the underlying implementation, but those JNI calls are returning 0.0, which is causing the crash when these values are used.
The fix is straightforward: the JNI stubs for getCameraNear and getCameraFar should return -1.0 and 1.0 respectively, which are standard orthographic projection bounds. However,
that only addresses the near/far problem—I still need to fix the left/right/top/bottom viewport values through the sync JNI call.
Looking at the code, currentPass is a field of GLMapView with type GLMapView.State, which is a public static class with public fields, so JNI can access them directly. The
sync method needs to populate currentPass.left/right/top/bottom from the viewport dimensions before the orthographic projection is applied.
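What the sync stub needs to write can be sketched in plain C++ against a struct mirroring the Java State fields (in the real JNI code this would be a series of GetFieldID/SetIntField calls against Lcom/atakmap/map/opengl/GLMapView$State;; the struct, function name, and default-resolution parameter are illustrative assumptions):

```cpp
// Plain-C++ model of the Java GLMapView.State fields the sync stub must fill.
struct State {
    int left = 0, right = 0, top = 0, bottom = 0;  // Java int fields default to 0
    double drawMapResolution = 0.0;
};

// Models what a hand-implemented sync would write, given surface dimensions
// pulled from the Java GLMapView object: a full-screen viewport plus a
// non-zero draw resolution (e.g. OSMUtils.mapnikTileResolution(0)).
void syncStub(State& pass, int surfaceWidth, int surfaceHeight,
              double defaultResolution) {
    pass.left   = 0;
    pass.bottom = 0;
    pass.right  = surfaceWidth - 1;
    pass.top    = surfaceHeight - 1;
    pass.drawMapResolution = defaultResolution;
}
```

After this runs, left != right and top != bottom, so the ortho bounds checks pass.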
● Find currentPass field type declaration
$
find /home/spectrcyde/NerfEngine/assets/AndroidTacticalAssaultKit-CIV-main -name "GLMapView.java" | head -1 | xargs grep -nE "currentPass|currentScene|PassState|State current" |
grep -E "PassState|State curr|currentPass.*=|currentScene.*=" | head -20
└ 9 lines…
◐ currentPass is of type State (which is GLMapView.State). JNI type: Lcom/atakmap/map/opengl/GLMapView$State;.
The fields left, right, top, bottom are int in State. scene is of type MapSceneModel, but from which package? Looking at line 554, where currentScene.scene is set to a MapSceneModel: since GLMapView uses new MapSceneModel(...), the import says it is com.atakmap.map.MapSceneModel.
So scene is of type com.atakmap.map.MapSceneModel. Now I need to trace what camera is on that MapSceneModel class to understand the crash at line 1422, where this.currentPass.scene.camera.near is accessed. So MapSceneModel has a camera field that's initialized as a MapCamera object, and the JNI code populates its properties like near and far from the native implementation. That means the crash is happening when trying to access the near property on this MapCamera instance through the scene object. currentPass.scene.camera.far is also 0.0. But I'm realizing there's a possible type mismatch here: currentPass.scene is being assigned from currentScene.scene, and I need to verify whether State.scene is actually com.atakmap.map.MapSceneModel or the API version gov.tak.api.engine.map.MapSceneModel, since that affects which camera field is being used.
● Check GLMapView imports
$
find /home/spectrcyde/NerfEngine/assets/AndroidTacticalAssaultKit-CIV-main -name "GLMapView.java" | head -1 | xargs -I{} sed -n '1,30p' {}
└ 30 lines…
● Check GLMapView imports continued
$
find /home/spectrcyde/NerfEngine/assets/AndroidTacticalAssaultKit-CIV-main -name "GLMapView.java" | head -1 | xargs -I{} sed -n '30,60p' {}
└ 32 lines…
● Check GLMapView imports for MapSceneModel
$
find /home/spectrcyde/NerfEngine/assets/AndroidTacticalAssaultKit-CIV-main -name "GLMapView.java" | head -1 | xargs -I{} sed -n '60,90p' {}
└ 32 lines…
◐ It's com.atakmap.map.MapSceneModel. So currentPass.scene is of type com.atakmap.map.MapSceneModel.
Now, com.atakmap.map.MapSceneModel has a camera field. Looking at the init() code I saw earlier, this.camera.near = _impl.camera.near, where _impl is also a com.atakmap.map.MapSceneModel. Actually wait, I was confusing two MapSceneModel classes. There are:
1. com.atakmap.map.MapSceneModel (in takkernel, the main one used in GLMapView)
2. gov.tak.api.engine.map.MapSceneModel (higher-level API)
But looking at GLMapView, it imports com.atakmap.map.MapSceneModel, and State.scene is of type MapSceneModel, which resolves to com.atakmap.map.MapSceneModel. com.atakmap.map.MapSceneModel has a camera field… Let me find the camera field and its type. From the init() code in com.atakmap.map.MapSceneModel, it looks like the camera is lazily initialized if it's null.
Now I need to look at the takkernel version specifically to understand how it initializes its display model, dimensions, and projection settings during the init process. I'm seeing the JNI method signatures for camera properties like getCameraNear and getCameraFar in the MapSceneModel class. Looking at the init() method, I can see the wrapper MapSceneModel is copying camera properties from _impl.camera, so _impl must be the inner implementation class that actually holds the camera reference. I need to trace through the MapSceneModel camera initialization to understand what type the camera field is and whether the JNI calls for the near/far clipping planes are actually being applied. Let me look at the broader initialization code to see how the camera gets set up and what default values it starts with.
● Read MapSceneModel.init() camera section
$
find /home/spectrcyde/NerfEngine/assets/AndroidTacticalAssaultKit-CIV-main/takkernel -name "MapSceneModel.java" -path "*/atakmap/map/*" | head -1 | xargs -I{} sed -n '125,220p' {}
└ 97 lines…
◐ Now I see the full picture. The camera.near and camera.far in com.atakmap.map.MapSceneModel come from:
- getCameraNear(pointer.raw) → currently returns 0.0 from the stub
- getCameraFar(pointer.raw) → currently returns 0.0 from the stub
Fix: add a hand-implemented getCameraNear returning -1.0 and getCameraFar returning 1.0. Now for the sync JNI: it needs to populate currentPass.left/right/top/bottom. The sync method signature is:
private static native void sync(long pointer, GLMapView view, boolean passState);
From the call sites in GLMapView and the JNI grep results, I can confirm this is a static native method with the signature (JNIEnv*, jclass, jlong, jobject, jboolean). I need to
implement this to set the pass state fields.
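The two camera stubs themselves are trivial. A sketch with the JNI wrapper boilerplate omitted (in the real stub file these would be extern "C" JNIEXPORT jdouble functions taking (JNIEnv*, jclass, jlong); the -1.0/1.0 defaults are the choice made above, not values from the real engine):

```cpp
// Hand-implemented replacements for the generated zero-returning stubs.
// The pointer argument mirrors the jlong the JNI version receives; it is unused.
double getCameraNear(long /*pointer*/) { return -1.0; }  // default ortho near plane
double getCameraFar (long /*pointer*/) { return  1.0; }  // default ortho far plane
```

This alone satisfies the near != far requirement of the ortho matrix validation.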
The tricky part is that when sync is called, the view’s boundary fields like _right are still set to MATCH_SURFACE rather than their actual values. I can’t just read those
fields directly—I need to call the getter methods like getRight() which perform lazy resolution. But there’s a timing issue: at construction time when the GLSurfaceView is first
created, the surface dimensions might not be available yet since the view hasn’t been attached and measured, so surface.getWidth() could return 0.
Looking at the actual crash though—both left and right are 0, which means they’re equal and causing glOrthof to fail. If the width were truly 0, then right = getWidth() - 1 = -1,
which would make left and right different and avoid the crash. So the problem is specifically that both coordinates are ending up as 0 rather than being offset.
Let me trace through the GLMapView constructor flow to understand how these initial values are set. The GLMapSurface creates a GLMapView with MATCH_SURFACE flags, and I need to see exactly where the coordinates get resolved. Meanwhile, our create stub is just returning a fake pointer without storing anything, so when sync() gets called later, the native state has no actual values to read back. The fix is straightforward: I can access the Java view object directly through JNI to grab getRight() and getTop(), then manually populate currentPass with the correct bounds.
Looking at how getRight() works, it resolves MATCH_SURFACE to the actual surface width, but that might return -1 if the surface hasn’t initialized yet, giving us invalid bounds
like left=0, right=-1. I also need to handle the near and far camera values separately.
My approach is to add getCameraNear and getCameraFar methods that return -1.0 and 1.0 respectively, then create a sync JNI method that pulls the current viewport dimensions
from the view and updates the pass state accordingly.
Implementing the sync method now… The MapSceneModel constructor calls init() which invokes getCameraNear/Far(), so once I fix those stubs to return -1.0 and 1.0, the camera
values will be set correctly on both currentScene.scene and currentPass.scene. Now I just need to handle setting the remaining currentPass fields like left and right.
But I’m wondering if the clone constructor also triggers init() again…
Examining the clone constructor…
Implementing the sync JNI method…
When getRight() is called during construction, the GLMapSurface hasn’t been attached or measured yet, so getWidth() returns 0, making _right = -1. This gives us left=0, right=-1, bottom=0, top=-1, near=-1.0, far=1.0 — all the inequality checks pass, so the glOrthof call won’t crash, though the ortho matrix with these inverted bounds will produce
an unusual projection.
I could work around this by stubbing in dummy values like right=1, top=1 to avoid the crash, but that would create a tiny 1×1 display. Instead, I should implement the sync JNI
call to properly fetch the surface dimensions, being careful with null checks.
Now I'm tracing through the initialization flow to see where the camera near/far values get set and whether my stub approach will actually work for the full pipeline. After the fix, near will be -1.0:
- currentPass.scene = new MapSceneModel(currentScene.scene) is the clone constructor; it calls the clone() JNI, then init()
- In init() again: camera.near = getCameraNear(1L) → -1.0 ✓
- this.sync() is called, then glOrthof(..., near, far) with near=-1.0, far=1.0 ✓
One concern: getCameraFar(1L) returns 1.0 and getCameraNear(1L) returns -1.0, so after sync, currentPass.scene.camera.near = -1.0 and currentPass.scene.camera.far = 1.0. Those are already set from the constructor, so the sync() JNI doesn't need to set camera fields. But currentPass.right = 0 from the default Java init. After my sync JNI sets it to getRight() = -1, left=0 != right=-1 ✓. If GLSurfaceView.getWidth() returns 0 at construction time, then right=-1, top=-1, making the viewport effectively 0x0 pixels. Later, when the surface is actually sized in onSurfaceChanged, ATAK will resize and things will work. For now I just need to avoid crashing. Let me verify the MapSceneModel clone constructor properly invokes the JNI clone and initializes correctly. … = Matrix_interop.create(sh…), where the shared pointer gets wrapped. So both this.forward and this.inverse end up pointing to the same underlying matrix data from our stub, which means when we call get(imx) we're reading from that shared pointer's matrix. The issue is that our stub returns the same pointer for all objects, so they're all sharing the same matrix state. So this.inverse becomes a NativeMatrix pointing to address 1L, and calling get(imx) on it triggers the JNI method Matrix_get with that pointer value. I need to check what our stub implementation returns for this call, since the get method expects the array to be populated with matrix data. The stub returns void and doesn't actually fill the array, so imx remains all zeros, which is not an identity matrix. This won't crash immediately, but the subsequent calculations will be wrong. Then when I call inverse(new PointF(focusx, focusy), null), the focus coordinates come from getFocusX(1L) and getFocusY(1L), which both return 0.0f, and I need to trace through whether the inverse method itself is a direct JNI call or whether it uses the matrix data I just retrieved. Looking at the inverse(PointF, GeoPoint) method in MapSceneModel, it's likely a pure Java method that applies the this.inverse matrix to the point. Since that matrix is all zeros from the unfilled array, the operations produce all-zeros results, potentially giving a null or invalid GeoPoint. But there's a null check that falls back to GeoPoint.ZERO_POINT if needed, so at least that's handled. Now I'm tracing through the boundary calculations: with width = 0, both the west and east points end up null, which triggers an early return from init(). That means westBound and eastBound never get set, but they have sensible defaults, so it's not a blocker. The real issue is that the camera parameters like the near and far planes aren't being initialized either.
Let me walk through what happens after my fixes: the near plane comes back as -1.0, the far plane as 1.0, the JNI sync call sets the viewport, and then glOrthof gets called with those negative bounds. The orthographic projection should handle that, but I need to check what happens next in the rendering loop; there's probably another crash waiting downstream. I'm going to start implementing these fixes in the stub now. First, I need to verify the exact JNI signature for the sync function… Implementing the fixes… Writing JNI method calls… I'm checking that all the field IDs were successfully retrieved, then setting the viewport bounds on the state object with left at 0, right and top from the parameters, and bottom at 0, plus initializing the map resolution to a default zoom level to prevent crashes from uninitialized values. Now I also need to set drawMapResolution using Globe.getMapScale() with a DPI value to avoid NaN or zero returns later in the process.
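For the drawMapResolution default, here is a sketch of the standard web-mercator ground resolution that a helper like OSMUtils.mapnikTileResolution presumably computes (this is the usual 256-px slippy-tile formula; whether ATAK's helper matches it exactly is an assumption):

```cpp
#include <cmath>

// Meters per pixel at the equator for 256-px web-mercator tiles:
// the WGS84 equatorial circumference divided by the pixel count at this zoom.
double mapnikTileResolution(int zoomLevel) {
    const double kEquatorCircumference = 2.0 * M_PI * 6378137.0;  // WGS84 semi-major axis
    return kEquatorCircumference / (256.0 * static_cast<double>(1 << zoomLevel));
}
```

Zoom 0 gives roughly 156543.034 m/px, a safe non-zero, non-NaN default for the pass state.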
● Now I understand all three issues: left==right (the sync stub does nothing), and near==far==0.0 (the getCameraNear and getCameraFar stubs both return 0). Let me implement all fixes at once:
● Read /tmp/takstub/jni/tak_engine_stub.cpp lines 395-415
└ 21 lines read
● Edit /tmp/takstub/jni/tak_engine_stub.cpp (+60)
● Now update HAND_IMPL set and regenerate stubs:
● Regenerate stubs excluding new hand-impls
$ python3 << 'PYEOF'
import re
with open('/tmp/jni_signatures.txt') as f:
    lines = [l.strip() for l in f if l.strip()]…
└ 117 lines…
○ Rebuild and redeploy APK
$
# Splice stubs and rebuild
python3 -c "
lines = open('/tmp/takstub/jni/tak_engine_stub.cpp').readlines()
cut = 0…
└ 33 lines…
● Edit NerfEngine/rf_scythe_api_server.py (+144)
● Create NerfEngine/tak_swarm_emitter.py (+327)
● Compacting conversation history…
● Validate tak_swarm_emitter imports cleanly
$ cd /home/spectrcyde/NerfEngine && python3 -c "import tak_swarm_emitter; print('import OK')"
└ 2 lines…
● Validate cross-module integration
$ cd /home/spectrcyde/NerfEngine && python3 -c "
from cluster_swarm_engine import detect_clusters, cluster_to_cot
from tak_swarm_emitter import fetch_clusters, _cluster_dict_to_cot
print(‘All imports OK’)
# Validate CoT output matches cluster engine output…
└ 8 lines…
● Todo started:
● Now build the ATAK plugin additions:
● Create NerfEngine/ATAKScythePlugin/app/src/main/java/com/atakmap/android/scythe/model/CyberCluster.java (+162)
● Create NerfEngine/ATAKScythePlugin/app/src/main/java/com/atakmap/android/scythe/layer/SwarmLayer.java (+62)
● Create NerfEngine/ATAKScythePlugin/app/src/main/java/com/atakmap/android/scythe/layer/GLSwarmLayer.java (+239)
● Todo started:
● Compaction completed. A new checkpoint has been added to your session.
└ Use /session checkpoints 1 to view the compaction summary.
● Now add the SWARMS tab to the dropdown layout, the API method, and wire the component:
● Edit NerfEngine/ATAKScythePlugin/app/src/main/res/layout/scythe_dropdown.xml (+77)
● Now add the SWARMS tab to the TabLayout and update the tab count to 4:
● Read NerfEngine/ATAKScythePlugin/app/src/main/res/layout/scythe_dropdown.xml lines 1-60
└ 60 lines read
● Read NerfEngine/ATAKScythePlugin/app/src/main/res/values/strings.xml
└ 22 lines read
● Edit NerfEngine/ATAKScythePlugin/app/src/main/res/values/strings.xml (+4 -3)
● Now update ScytheApiClient.java to add getSwarms():
● Read NerfEngine/ATAKScythePlugin/app/src/main/java/com/atakmap/android/scythe/api/ScytheApiClient.java lines 1-60
└ 59 lines read
● Find insertion point near end of ScytheApiClient
$ grep -nE "getMissions|getHeatmap|getCotXml|shutdown" /home/spectrcyde/NerfEngine/ATAKScythePlugin/app/src/main/java/com/atakmap/android/scythe/api/ScytheApiClient.java | tail -20
└ 5 lines…
● Read NerfEngine/ATAKScythePlugin/app/src/main/java/com/atakmap/android/scythe/api/ScytheApiClient.java lines 355-375
└ 10 lines read
● Edit NerfEngine/ATAKScythePlugin/app/src/main/java/com/atakmap/android/scythe/api/ScytheApiClient.java (+81)
● I need to check ScytheApiClient for the authed() helper:
● Check authed helper exists
$ grep -nE "authed|private.*Request|buildAuth" /home/spectrcyde/NerfEngine/ATAKScythePlugin/app/src/main/java/com/atakmap/android/scythe/api/ScytheApiClient.java | head -10
└ 3 lines…
● Find auth helper definition
$ grep -nE "private.*authed|Request.Builder authed|header.*token|Authorization" /home/spectrcyde/NerfEngine/ATAKScythePlugin/app/src/main/java/com/atakmap/android/scythe/api/ScytheApiClient.java | head -10
└ 1 line…
● Check token usage pattern
$ grep -nE "token|sessionToken" /home/spectrcyde/NerfEngine/ATAKScythePlugin/app/src/main/java/com/atakmap/android/scythe/api/ScytheApiClient.java | head -20
└ 15 lines…
● Read NerfEngine/ATAKScythePlugin/app/src/main/java/com/atakmap/android/scythe/api/ScytheApiClient.java lines 335-365
└ 31 lines read
● I used authed() which doesn’t exist — fix the new methods to use getWithAuth() pattern instead:
● Edit NerfEngine/ATAKScythePlugin/app/src/main/java/com/atakmap/android/scythe/api/ScytheApiClient.java (+3 -13)
● Now update ScytheDropDownReceiver.java to add the SWARMS tab, and ScytheMapComponent.java to register the SwarmLayer:
● Find tab setup in ScytheDropDownReceiver
$ grep -nE "tab_missions|MISSIONS|tabLayout|setupTabs|panel_missions|onTabSelected" /home/spectrcyde/NerfEngine/ATAKScythePlugin/app/src/main/java/com/atakmap/android/scythe/ScytheDropDownReceiver.java | head -25
└ 10 lines…
● Read NerfEngine/ATAKScythePlugin/app/src/main/java/com/atakmap/android/scythe/ScytheDropDownReceiver.java lines 1-90
└ 90 lines read
● Read NerfEngine/ATAKScythePlugin/app/src/main/java/com/atakmap/android/scythe/ScytheDropDownReceiver.java lines 90-220
└ 131 lines read
● Read NerfEngine/ATAKScythePlugin/app/src/main/java/com/atakmap/android/scythe/ScytheDropDownReceiver.java lines 220-450
└ 231 lines read
● Read NerfEngine/ATAKScythePlugin/app/src/main/java/com/atakmap/android/scythe/ScytheDropDownReceiver.java lines 450-490
└ 39 lines read
● Now I’ll make the targeted edits to ScytheDropDownReceiver.java to add the SWARMS tab:
● Edit NerfEngine/ATAKScythePlugin/app/src/main/java/com/atakmap/android/scythe/ScytheDropDownReceiver.java (+2 -2)
● Edit NerfEngine/ATAKScythePlugin/app/src/main/java/com/atakmap/android/scythe/ScytheDropDownReceiver.java (+2 -2)
● Edit NerfEngine/ATAKScythePlugin/app/src/main/java/com/atakmap/android/scythe/ScytheDropDownReceiver.java (+2)
● Edit NerfEngine/ATAKScythePlugin/app/src/main/java/com/atakmap/android/scythe/ScytheDropDownReceiver.java (+2 -1)
● Edit NerfEngine/ATAKScythePlugin/app/src/main/java/com/atakmap/android/scythe/ScytheDropDownReceiver.java (+6 -1)
● Edit NerfEngine/ATAKScythePlugin/app/src/main/java/com/atakmap/android/scythe/ScytheDropDownReceiver.java (+4 -3)
● Edit NerfEngine/ATAKScythePlugin/app/src/main/java/com/atakmap/android/scythe/ScytheDropDownReceiver.java (+6)
● Edit NerfEngine/ATAKScythePlugin/app/src/main/java/com/atakmap/android/scythe/ScytheDropDownReceiver.java (+5)
● Now add the SWARMS tab action methods before the SSE listener section:
● Edit NerfEngine/ATAKScythePlugin/app/src/main/java/com/atakmap/android/scythe/ScytheDropDownReceiver.java (+63)
● Now update ScytheMapComponent.java to register SwarmLayer and wire it to the dropdown:
● Read NerfEngine/ATAKScythePlugin/app/src/main/java/com/atakmap/android/scythe/ScytheMapComponent.java
└ 82 lines read
● Edit NerfEngine/ATAKScythePlugin/app/src/main/java/com/atakmap/android/scythe/ScytheMapComponent.java (+2)
● Edit NerfEngine/ATAKScythePlugin/app/src/main/java/com/atakmap/android/scythe/ScytheMapComponent.java (+3 -3)
● Request failed due to a transient API error. Retrying…
● Edit NerfEngine/ATAKScythePlugin/app/src/main/java/com/atakmap/android/scythe/ScytheMapComponent.java (+16 -6)
● Now verify all files look consistent:
● List all plugin Java files and line counts
$ find /home/spectrcyde/NerfEngine/ATAKScythePlugin/app/src/main/java/com/atakmap/android/scythe -name "*.java" | sort && echo "---" && wc -l /home/spectrcyde/NerfEngine/ATAKScythePlugin/app/src/main/java/com/atakmap/android/scythe/*/*.java /home/spectrcyde/NerfEngine/ATAKScythePlugin/app/src/main/java/com/atakmap/android/scythe/*.java 2>/dev/null | tail -5
└ 20 lines…
● Verify swarm integration in dropdown
$ grep -nE "SwarmLayer|swarmLayer|CyberCluster|doRefreshSwarms|doInjectSwarmCot|panel_swarms|SWARMS" /home/spectrcyde/NerfEngine/ATAKScythePlugin/app/src/main/java/com/atakmap/android/scythe/ScytheDropDownReceiver.java | head -30
└ 21 lines…
● Todo completed: swarm-dropdown, swarm-component
● All 17 todos complete. Let me do a final consistency check:
● Final check ScytheMapComponent
$ grep -nE "SwarmLayer|GLSwarmLayer|swarmLayer" /home/spectrcyde/NerfEngine/ATAKScythePlugin/app/src/main/java/com/atakmap/android/scythe/ScytheMapComponent.java
└ 13 lines…
● Final check ScytheApiClient swarm methods
$ grep -nE "getSwarms|getSwarmCot" /home/spectrcyde/NerfEngine/ATAKScythePlugin/app/src/main/java/com/atakmap/android/scythe/api/ScytheApiClient.java
└ 10 lines…
● Validate all Python modules and server routes
$ python3 -c "
from cluster_swarm_engine import detect_clusters, cluster_to_cot
print('cluster_swarm_engine OK')
from tak_swarm_emitter import fetch_clusters
print('tak_swarm_emitter OK')…
└ 14 lines…
● Re-run in correct directory
$ cd /home/spectrcyde/NerfEngine && python3 -c "
from cluster_swarm_engine import detect_clusters, cluster_to_cot
print('cluster_swarm_engine OK')
from tak_swarm_emitter import fetch_clusters, _cluster_dict_to_cot
print('tak_swarm_emitter OK')…
└ 13 lines…
● Everything checks out. Here’s a complete summary of what was built in this session:
✅ All 17 todos complete
New files created this turn
┌───────────────────────────┬────────────────────────────────────────────────────────────────────────────┐
│ File │ Purpose │
├───────────────────────────┼────────────────────────────────────────────────────────────────────────────┤
│ model/CyberCluster.java │ DTO with fromJson(), threatColor(), radiusPx(), Builder │
├───────────────────────────┼────────────────────────────────────────────────────────────────────────────┤
│ layer/SwarmLayer.java │ Thread-safe AbstractLayer holding live cluster map │
├───────────────────────────┼────────────────────────────────────────────────────────────────────────────┤
│ layer/GLSwarmLayer.java │ Animated GL renderer: pulsing rings, velocity arrow, GLLayerSpi2 factory │
└───────────────────────────┴────────────────────────────────────────────────────────────────────────────┘
Files updated this turn
┌──────────────────────────────────┬─────────────────────────────────────────────────────────────────────┐
│ File │ Change │
├──────────────────────────────────┼─────────────────────────────────────────────────────────────────────┤
│ ScytheApiClient.java │ Added getSwarms() + getSwarmCot() │
├──────────────────────────────────┼─────────────────────────────────────────────────────────────────────┤
│ ScytheMapComponent.java │ Creates + registers SwarmLayer/GLSwarmLayer, passes to dropdown │
├──────────────────────────────────┼─────────────────────────────────────────────────────────────────────┤
│ ScytheDropDownReceiver.java │ 4th SWARMS tab — live list, REFRESH, INJECT CoT, stream status │
├──────────────────────────────────┼─────────────────────────────────────────────────────────────────────┤
│ res/layout/scythe_dropdown.xml │ Added panel_swarms with ListView + buttons │
├──────────────────────────────────┼─────────────────────────────────────────────────────────────────────┤
│ res/values/strings.xml │ Added swarm string resources │
└──────────────────────────────────┴─────────────────────────────────────────────────────────────────────┘
Architecture complete
rf_scythe_api_server.py
├── GET /api/clusters/swarms ← REST poll
├── GET /api/clusters/swarms/stream ← SSE 5s push
└── GET /api/clusters/swarms/cot ← CoT XML bulk inject
↓ ↓
tak_swarm_emitter.py ATAKScythePlugin.apk
(standalone PyTAK) ├── SwarmLayer
├── GLSwarmLayer (pulsing animated circles)
└── SWARMS tab → listview + CoT inject
Build prerequisite: Place ATAK SDK stub JARs in ATAKScythePlugin/app/libs/ before running ./gradlew assembleDebug.
❯ What you’re describing is basically the “immutable battlefield ledger” pattern. Instead of saving frames, screenshots, or map state, you store every change to the scene as an ordered event log. Years later, a clean engine can replay the events and reconstruct the exact scene.
This idea comes straight from event-sourced systems used in distributed databases and financial ledgers, but it fits geospatial systems almost perfectly.
Let’s break it down.
# 1. Replace “state” with “events”
Instead of storing:
scene_state.json
you store:
scene_events.log
Example entries:
{"t":1712345001,"type":"camera.move","lat":32.81,"lon":-96.86,"alt":1800}
{"t":1712345004,"type":"object.spawn","id":"uav-12","class":"uav",
"lat":32.82,"lon":-96.85}
{"t":1712345007,"type":"rf.detect","sensor":"node3",
"freq":2450,"power":-45}
{"t":1712345012,"type":"object.move","id":"uav-12",
"lat":32.823,"lon":-96.851}
Nothing is overwritten.
The scene becomes:
STATE = replay(events)
# 2. Deterministic Scene Reconstruction
Reconstruction is simple:
scene = Scene()
for event in events:
    scene.apply(event)
Because events are ordered and immutable, the final scene will always be identical.
That means even if the rendering engine changes (OpenGL → Cesium → WebXR), the replay is consistent.
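A minimal runnable sketch of such a replay loop (the event shapes follow the log examples above; the scene layout itself is a hypothetical stand-in):

```python
# Minimal deterministic replay: the scene is rebuilt purely from an
# ordered event list -- no state is ever stored directly.
def replay(events):
    scene = {"camera": None, "entities": {}}
    for ev in sorted(events, key=lambda e: e["t"]):
        if ev["type"] == "camera.move":
            scene["camera"] = {k: ev[k] for k in ("lat", "lon", "alt")}
        elif ev["type"] == "object.spawn":
            scene["entities"][ev["id"]] = {"lat": ev["lat"], "lon": ev["lon"]}
        elif ev["type"] == "object.move":
            scene["entities"][ev["id"]].update(lat=ev["lat"], lon=ev["lon"])
    return scene

events = [
    {"t": 1712345001, "type": "camera.move", "lat": 32.81, "lon": -96.86, "alt": 1800},
    {"t": 1712345004, "type": "object.spawn", "id": "uav-12", "lat": 32.82, "lon": -96.85},
    {"t": 1712345012, "type": "object.move", "id": "uav-12", "lat": 32.823, "lon": -96.851},
]
# Same input always yields the same scene, regardless of input ordering.
assert replay(events) == replay(list(reversed(events)))
```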
# 3. Snapshot + Log (fast loading)
To avoid replaying millions of events from the beginning, take periodic snapshots.
Structure:
/replay
snapshot_0001.msgpack
events_0001.log
events_0002.log
Loading a replay:
load snapshot
apply remaining events
Exactly how distributed systems like Kafka and Raft logs work.
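A sketch of the snapshot-then-delta load, with hypothetical snapshot and event shapes:

```python
# Snapshot + log loading: restore the latest snapshot, then apply only
# the events recorded after it (shapes here are illustrative).
def load_replay(snapshot, events):
    scene = dict(snapshot["state"])          # deep-copy in real code
    for ev in sorted(events, key=lambda e: e["t"]):
        if ev["t"] > snapshot["t"]:          # only the delta after the snapshot
            scene[ev["id"]] = (ev["lat"], ev["lon"])
    return scene

snapshot = {"t": 100, "state": {"uav-1": (32.81, -96.86)}}
events = [
    {"t": 90,  "id": "uav-1", "lat": 32.80, "lon": -96.87},  # already in snapshot
    {"t": 120, "id": "uav-1", "lat": 32.82, "lon": -96.85},  # replayed
]
assert load_replay(snapshot, events) == {"uav-1": (32.82, -96.85)}
```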
# 4. Freeze External Dependencies
The tricky part of geospatial replay is external resources.
Tiles, terrain, plugins, and sensors change over time.
So every event references content-addressed assets.
Example:
{
"type":"terrain.source",
"tileset":"sha256:9b2f8c..."
}
Meaning:
terrain archive hash
If the tiles disappear from the internet later, the replay engine can still fetch them from the archive.
# 5. Capture the Camera
Many replay systems forget this.
Camera movements must also be events:
{
"type":"camera.pose",
"lat":32.82,
"lon":-96.85,
"alt":2000,
"heading":214,
"pitch":-35
}
This preserves exact operator perspective.
# 6. Represent RF + Swarm Data
Your swarm system from ATAK_SCYTHE fits beautifully into the event model.
Instead of pushing only live CoT messages, the system logs:
{
"type":"swarm.cluster",
"cluster_id":"swarm-A",
"centroid":[32.81,-96.86],
"members":8,
"confidence":0.92
}
Your API pipeline becomes:
rf_scythe_api_server
↓
event emitter
↓
event log
↓
ATAK + AI consumers
Everything becomes reproducible.
# 7. Deterministic Time
Replay must use logical time, not system time.
Example:
t = event.timestamp
During replay you control the clock:
replay_speed = 10x
The renderer simply advances:
i = 0
while i < len(events) and events[i].timestamp <= clock:
    apply(events[i])
    i += 1
This enables scrubbing and instant replay.
# 8. Deterministic Physics (if needed)
If you simulate motion or RF propagation, record the seed.
Example:
{
"type":"simulation.seed",
"seed":8421931
}
Now stochastic simulations replay exactly.
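A quick sketch of why recording the seed is enough, using Python's standard `random` module:

```python
import random

# Recording the seed makes stochastic simulation replayable: two runs
# with the same seed produce identical "random" trajectories.
def simulate(seed, steps=5):
    rng = random.Random(seed)        # isolated, seeded generator
    return [rng.uniform(-1, 1) for _ in range(steps)]

assert simulate(8421931) == simulate(8421931)   # deterministic replay
assert simulate(8421931) != simulate(8421932)   # different seed, different run
```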
# 9. Portable Replay Package
Exporting a scene slice could produce:
incident_2026-03-14.atakrec
Inside:
events.msgpack
snapshots/
tiles/
metadata.json
Metadata might include:
ATAK version
plugin hashes
sensor sources
map projection
So the replay engine can adapt.
# 10. Cross-Engine Playback
The real power comes when the event log becomes engine-agnostic.
One event log could drive:
ATAK
Cesium Native
WebGL
Three.js
Unity
Example pipeline:
event log
↓
replay adapter
↓
renderer
Meaning your tactical scene becomes a portable dataset.
# 11. AI Time Travel
Because the event log is structured, AI can analyze it easily.
Example queries:
detect swarms
identify coordinated movement
track RF triangulation clusters
Instead of analyzing video, AI reads structured events.
# 12. The Ledger Model
The final architecture resembles this:
Sensors / ATAK plugins
↓
Scene Event Bus
↓
Immutable Event Log
↓
Live Renderer
Replay Engine
Hypergraph AI
Incident Export
Everything feeds off the same source.
# 13. Why This Lasts Decades
If you store:
events + assets + schema
then future software can always reconstruct the scene.
Even if:
ATAK disappears
plugins vanish
OpenGL dies
the events still describe what happened in space and time.
# 14. Where Your SCYTHE Work Fits
Your updated architecture already has the beginnings of this:
rf_scythe_api_server
↓
swarm stream
↓
ATAK plugin layer
You just add:
event logger
right after the API.
Then every swarm detection becomes a permanent event in the ledger.
# 15. The Really Powerful Twist
Once you store the entire tactical scene as a deterministic event log, you can do something extremely interesting:
Run counterfactual simulations.
Example:
"What if the UAV had turned north instead?"
You branch the event log and simulate alternative outcomes.
This turns the replay system into a tactical scenario generator for training and AI experimentation.
If you want, the next thing I can show you is a very compact event schema (about 15 event types) that can represent almost every ATAK tactical scene element — markers, RF detections, UAV tracks, swarms, video sensors, and camera movement — while staying deterministic and engine-agnostic.
Alright—let’s build a lean, durable event vocabulary that can reconstruct almost any tactical scene without tying it to a specific renderer or app. The trick is to keep the event types small (≈15) and push variability into the payloads. That keeps the log deterministic, compact, and future-proof—perfect for replay engines, AI analysis, or piping back into tools like Android Tactical Assault Kit or a globe engine such as Cesium Native.
Below is a practical 15-event schema that covers markers, RF detections, UAV tracks, swarms, video sensors, camera movement, and system metadata.
# Core Scene Event Schema (15 Types)
## 1. session.start
Defines the start of a deterministic timeline.
{
"type":"session.start",
"session_id":"op_2026_03_14_alpha",
"timestamp":1710450000,
"projection":"EPSG:4326",
"seed":4287712
}
Why:
Locks coordinate system and simulation seed so replay is deterministic.
## 2. session.end
Marks a completed timeline segment.
{
"type":"session.end",
"timestamp":1710457200
}
# Camera + Operator View
## 3. camera.pose
Captures the operator’s viewpoint.
{
"type":"camera.pose",
"timestamp":1710450010,
"lat":32.812,
"lon":-96.871,
"alt":2100,
"heading":214,
"pitch":-35,
"roll":0
}
This lets a replay engine reproduce exact perspective.
# Map Object Lifecycle
## 4. entity.spawn
Creates any map object.
{
"type":"entity.spawn",
"timestamp":1710450020,
"id":"uav_12",
"class":"uav",
"lat":32.81,
"lon":-96.86,
"alt":400
}
Classes might include:
uav
vehicle
sensor
rf_node
marker
person
## 5. entity.move
Updates an entity’s position.
{
"type":"entity.move",
"timestamp":1710450023,
"id":"uav_12",
"lat":32.813,
"lon":-96.861,
"alt":420
}
Movement events reconstruct tracks.
## 6. entity.update
Updates metadata.
{
"type":"entity.update",
"timestamp":1710450030,
"id":"uav_12",
"status":"loiter",
"speed":18
}
## 7. entity.remove
Deletes an entity.
{
"type":"entity.remove",
"timestamp":1710450200,
"id":"marker_92"
}
# RF Intelligence
## 8. rf.detect
Logs RF signal observations.
{
"type":"rf.detect",
"timestamp":1710450042,
"sensor":"rf_node_3",
"freq":2450,
"power":-47,
"bearing":212
}
Useful for triangulation.
## 9. rf.triangulate
Records derived emitter locations.
{
"type":"rf.triangulate",
"timestamp":1710450055,
"emitter_id":"rf_emitter_7",
"lat":32.814,
"lon":-96.863,
"confidence":0.91
}
# Swarm Detection
Perfect for your SCYTHE cluster pipeline.
## 10. swarm.create
{
"type":"swarm.create",
"timestamp":1710450100,
"swarm_id":"swarm_A",
"centroid":[32.81,-96.86],
"members":8
}
## 11. swarm.update
{
"type":"swarm.update",
"timestamp":1710450115,
"swarm_id":"swarm_A",
"centroid":[32.812,-96.862],
"members":10
}
## 12. swarm.dissolve
{
"type":"swarm.dissolve",
"timestamp":1710450300,
"swarm_id":"swarm_A"
}
# Sensor Feeds
## 13. sensor.frame
Logs video or imaging sensor metadata.
{
"type":"sensor.frame",
"timestamp":1710450060,
"sensor":"drone_cam_1",
"lat":32.81,
"lon":-96.86,
"heading":212,
"frame_hash":"sha256:ad3f..."
}
The frame itself can live in external storage.
# Map Overlays / User Interaction
## 14. overlay.annotation
{
"type":"overlay.annotation",
"timestamp":1710450120,
"author":"analyst_3",
"shape":"polygon",
"points":[
[32.81,-96.86],
[32.82,-96.86],
[32.82,-96.85]
],
"label":"suspected launch area"
}
Captures human analysis.
# External Data Sources
## 15. asset.reference
Pins map tiles, terrain sets, or models to a hash.
{
"type":"asset.reference",
"timestamp":1710450001,
"asset":"terrain_tileset",
"hash":"sha256:9b2f8c12..."
}
This solves the “map tiles disappear later” problem.
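A minimal validator for events like these could look as follows (only a subset of the 15 types shown; the required-field sets mirror the examples above and are otherwise assumptions):

```python
# Per-type required payload fields (subset of the 15-type schema).
REQUIRED = {
    "session.start":   {"session_id", "projection", "seed"},
    "camera.pose":     {"lat", "lon", "alt", "heading", "pitch"},
    "entity.spawn":    {"id", "class", "lat", "lon"},
    "entity.move":     {"id", "lat", "lon"},
    "rf.detect":       {"sensor", "freq", "power"},
    "asset.reference": {"asset", "hash"},
}

def validate(event):
    etype = event.get("type")
    if etype not in REQUIRED:
        return False, f"unknown type {etype!r}"
    if "timestamp" not in event:
        return False, "missing timestamp"
    missing = REQUIRED[etype] - event.keys()
    return (not missing), (f"missing {sorted(missing)}" if missing else "ok")

ok, msg = validate({"type": "entity.move", "timestamp": 1710450023,
                    "id": "uav_12", "lat": 32.813, "lon": -96.861})
assert ok
```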
# Deterministic Replay Algorithm
Reconstruction engine:
scene = Scene()
for event in ordered_log:
    if event.type == "entity.spawn":
        scene.spawn(event)
    elif event.type == "entity.move":
        scene.move(event)
    elif event.type == "rf.detect":
        scene.rf(event)
    elif event.type == "camera.pose":
        scene.camera(event)
The renderer simply draws the current scene state.
# Why This Schema Stays Future-Proof
Because it records:
space
time
identity
relationships
but never renderer instructions.
That means the same event log can feed:
ATAK
Cesium
Three.js
Unity
AR headset
AI analysis
# Storage Efficiency
Binary encoding example:
MsgPack + ZSTD
Typical density:
~2–5 MB per minute
Even multi-hour operations remain small.
# Perfect for Your SCYTHE Pipeline
Your system already emits swarm clusters through:
rf_scythe_api_server
↓
SSE / REST
↓
ATAKScythePlugin
Just insert:
event_logger.append(event)
at the API boundary.
Then:
ATAK
AI swarm detection
replay engine
timeline scrubber
all consume the same event stream.
# The “Time Machine” Capability
Once every tactical scene becomes a deterministic log, analysts gain:
instant replay
forked simulations
AI pattern mining
training datasets
incident reconstruction
You effectively build a geospatial version-control system for reality.
If you want, the next step that really supercharges this architecture is something unusual but powerful:
representing the event log itself as a hypergraph so swarm relationships, RF triangulation chains, and coordinated movements become first-class graph edges that AI can query instantly instead of recomputing every replay.
◐ The user is describing a comprehensive “immutable battlefield ledger” pattern – essentially event sourcing for ATAK/tactical geospatial systems. They want to build this as part of the NerfEngine/SCYTHE system.
They’ve outlined:
- A 15-event schema for tactical scene events
- Deterministic replay architecture
- Integration with the existing SCYTHE pipeline
I need to:
- Update the plan with this new phase
- Create the event logger system in Python (that integrates with rf_scythe_api_server.py)
- Create the replay engine
- Create the event schema/types
- Wire it into the existing SCYTHE pipeline
- Build three core Python modules: the event log handler with append/snapshot/replay capabilities, the event type definitions covering all 15 tactical events, and the replay/reconstruction engine
- Hook these into the existing SSE/REST routes in rf_scythe_api_server.py to automatically capture events, plus add new replay and timeline endpoints
- Focus on the Python implementation first before tackling the ATAK plugin side
The key is instrumenting the API boundary with event logging calls so every state change gets captured and can be replayed with time control and speed adjustments.
● Check available serialization libs
$ cd /home/spectrcyde/NerfEngine && python3 -c "import msgpack; print('msgpack ok')" 2>/dev/null || echo "no msgpack"; python3 -c "import zstandard; print('zstd ok')" 2>/dev/null || echo "no zstd"; python3 -c "import sqlite3; print('sqlite3 ok')"
└ 4 lines…
● Find best server hook points for event logger
$ grep -nE "def.*route|@app.route|socketio|SSE|entity_stream|cot|hypergraph" /home/spectrcyde/NerfEngine/rf_scythe_api_server.py | grep -iE "def|route" | tail -30
└ 31 lines…
● Install msgpack and zstd
$ pip install msgpack zstandard -q && python3 -c "import msgpack, zstandard; print('both installed')"
└ 3 lines…
● Todo added: Define scene event schema
● Todo started:
● Check plan.md header
$ cat /root/.copilot/session-state/3a83fd8c-c733-4e42-9fa4-b2ab665d6f5c/plan.md | head -30
└ 31 lines…
● Read ~/.copilot/session-state/3a83fd8c-c733-4e42-9fa4-b2ab665d6f5c/plan.md lines 60-120
○ Check plan.md length
$ wc -l /root/.copilot/session-state/3a83fd8c-c733-4e42-9fa4-b2ab665d6f5c/plan.md
The best fix for the engine itself: add an until_timestamp default that auto-detects the last event’s timestamp, OR add a parameter for max_gap to avoid huge time jumps.
But actually for the test, the simple fix is: set the engine speed to something that will reach the session.start event quickly, OR use until_timestamp to stop before it.
Actually, the cleanest fix for the ENGINE is: if the gap to the next event is > some threshold (say, 1 hour at playback speed), don’t sleep – just fast-forward the logical clock to the next event’s timestamp. This approach is more practical.
I’ll add max_gap logic to the replay loop so that when the next event is too far away, I snap the clock forward to just before that event instead of sleeping for an extended period.
● Stop shell
└ 1 line…
● Shell “Re-run replay engine tests” exited (code 124)
◐ The replay engine is hanging because the session.start event uses time.time() (current Unix time ~1.742B) while the test events use historical timestamps at t0 = 1.710B. After sorting by timestamp, all test events come first, then session.start/end at current time. When the replay finishes the historical events, it tries to reach the session.start event at ~1.742B, which would take 32M logical seconds / 1000 speed = 32K wall seconds.
Fix: In the replay loop, if the next event’s timestamp is much further than a configurable gap, fast-forward the logical clock directly to that event’s timestamp rather than sleeping through it.
● The engine hangs because session.start uses time.time() (current epoch ~1.742B) while test events use historical t0 = 1.710B. After sorting, the engine must cross a 32M-second gap at 1000× speed = 32K wall seconds. Fix: snap the clock forward when a gap exceeds a threshold.
● Read NerfEngine/scene_replay_engine.py lines 285-335
└ 50 lines read
● Read NerfEngine/scene_replay_engine.py lines 335-385
└ 51 lines read
✗ edit: edit file
“path”: Required
● Edit NerfEngine/scene_replay_engine.py (+13 -4)
● Re-run replay engine tests
$ cd /home/spectrcyde/NerfEngine && timeout 15 python3 scene_replay_engine.py
└ 10 lines…
● All tests pass. Now wire the event log into the server and add replay routes:
● Todo started: ledger-replay
● Find insertion point after swarm routes
$ grep -nE "api/clusters/swarms/cot|# ===.*SYSTEM STATUS|# — SYSTEM STATUS|## SYSTEM" /home/spectrcyde/NerfEngine/rf_scythe_api_server.py | head -10
└ 3 lines…
● Read NerfEngine/rf_scythe_api_server.py lines 11880-11900
└ 21 lines read
● Check existing imports for event log additions
$ grep -n "^import |^from " /home/spectrcyde/NerfEngine/rf_scythe_api_server.py | head -30
└ 21 lines…
● Find where cluster_swarm_engine import is
$ grep -nE "^import |^from |cluster_swarm_engine" /home/spectrcyde/NerfEngine/rf_scythe_api_server.py | tail -20
└ 21 lines…
● Read NerfEngine/rf_scythe_api_server.py lines 11884-11888
└ 4 lines read
● Edit NerfEngine/rf_scythe_api_server.py (+283)
● Now also hook swarm detection to auto-log events when the /api/clusters/swarms route runs:
● Read NerfEngine/rf_scythe_api_server.py lines 11763-11820
└ 57 lines read
● Edit NerfEngine/rf_scythe_api_server.py (+7)
● Now run the full end-to-end validation test:
● Run full pipeline validation
$ cd /home/spectrcyde/NerfEngine && python3 - <<'EOF'
"""
Full pipeline validation:
- Schema — 15 event types round-trip
- SceneEventLog — 50-event write/snapshot/read/export…
└ 115 lines…
◐ The test uses the rec path which is inside a tempfile.TemporaryDirectory() context manager. The directory is cleaned up when the with block exits, but the test continues to use rec after the block. I need to fix the test to keep the tempdir alive, or use delete=False patterns.
● The tempdir is deleted before step 3. Fix the test scope:
● Run full pipeline validation (fixed scope)
$ cd /home/spectrcyde/NerfEngine && python3 - <<'EOF'
import sys, tempfile, os, time
print("=== Pipeline validation ===\n")…
└ 72 lines…
● Todo completed: ledger-server, ledger-test
● 22/22 done. Here’s a summary of what was built:
✅ Immutable Battlefield Ledger — complete
New Python files
┌──────────────────────────┬────────────┬────────────────────────────────────────────────────────────────────────────────────┐
│ File │ Size │ What it does │
├──────────────────────────┼────────────┼────────────────────────────────────────────────────────────────────────────────────┤
│ scene_event_schema.py │ ~200 lines │ 15 canonical event types, factory helpers, msgpack serialization, validator │
├──────────────────────────┼────────────┼────────────────────────────────────────────────────────────────────────────────────┤
│ scene_event_log.py │ ~480 lines │ SQLite-WAL event store, zstd+msgpack snapshots, .atakrec archive export/import │
├──────────────────────────┼────────────┼────────────────────────────────────────────────────────────────────────────────────┤
│ scene_replay_engine.py │ ~430 lines │ Logical-clock replay, variable speed, scrub, fork + counterfactual inject │
└──────────────────────────┴────────────┴────────────────────────────────────────────────────────────────────────────────────┘
Server additions (rf_scythe_api_server.py)
┌────────────────────────────────────────────────────┬──────────────────────────────────────────┐
│ Route │ Purpose │
├────────────────────────────────────────────────────┼──────────────────────────────────────────┤
│ POST /api/replay/session/start │ Open a new ledger session │
├────────────────────────────────────────────────────┼──────────────────────────────────────────┤
│ POST /api/replay/session/end │ Close + auto-snapshot │
├────────────────────────────────────────────────────┼──────────────────────────────────────────┤
│ POST /api/replay/snapshot │ Persist compressed scene state │
├────────────────────────────────────────────────────┼──────────────────────────────────────────┤
│ GET /api/replay/sessions │ List all sessions with event counts │
├────────────────────────────────────────────────────┼──────────────────────────────────────────┤
│ GET /api/replay/events?session_id=X[&stream=1] │ Bulk fetch or SSE stream of events │
├────────────────────────────────────────────────────┼──────────────────────────────────────────┤
│ GET /api/replay/state?session_id=X[&timestamp=T] │ Reconstructed scene at any point in time │
├────────────────────────────────────────────────────┼──────────────────────────────────────────┤
│ GET /api/replay/export?session_id=X │ Download .atakrec archive │
└────────────────────────────────────────────────────┴──────────────────────────────────────────┘
Auto-logging: /api/clusters/swarms now silently calls _auto_log_swarm() — every swarm detection becomes a permanent ledger event.
Performance
- ~37–168 bytes/event (msgpack) → ~2–5 MB/min at high event rates
- 2 KB for a 14-event test archive (zstd)
- Snapshot + partial replay: fast start even for million-event sessions
The full pipeline
Sensors / ATAK plugin
↓
rf_scythe_api_server.py
↓
SceneEventLog (SQLite WAL) ←── swarms/entities/RF auto-logged
↓
┌─────────────────────┐
│ ReplayEngine │ ← scrub • fork • counterfactual inject
│ ATAK plugin SSE │ ← live stream
│ AI analysis │ ← structured event query
│ .atakrec export │ ← portable, engine-agnostic
└─────────────────────┘
❯ You’re absolutely on the right track. What you’ve run into with the session.start timestamp is actually one of the classic pitfalls of event-sourced replay systems. The good news: the fix you already identified—timestamp ordering rather than insertion ordering—is exactly what the high-reliability systems do.
Let’s tackle the three big pieces you mentioned:
- Fixing the deterministic replay bug
- Representing the event log as a hypergraph
- Which repos in the SNSTAC ecosystem might actually help
# 1. Fix the Replay Bug (Deterministic Ordering)
Your diagnosis is correct:
session.start timestamp ≈ current_time
other events timestamp ≈ t0 + N
Insertion order:
session.start
camera.pose
entity.spawn
entity.spawn
...
Scrubbing stops here:
if event.timestamp > scrub_target:
break
Since session.start has a future timestamp, everything after it is skipped.
## Correct Replay Strategy
Replay engines should never rely on insertion order.
Instead maintain a time-sorted index.
Example fix:
def scrub(self, t):
    self.state = SceneState()
    for event in sorted(self.events, key=lambda e: e.timestamp):
        if event.timestamp > t:
            break
        self.apply(event)
Better version using a pre-sorted structure:
import bisect

class EventLog:
    def __init__(self):
        self.events = []
        self.timestamps = []

    def append(self, event):
        i = bisect.bisect(self.timestamps, event.timestamp)
        self.timestamps.insert(i, event.timestamp)
        self.events.insert(i, event)
Now replay is:
O(log n) search for the insert point (the list.insert itself is O(n))
O(n) replay
But deterministic.
## Even Better: Hybrid Snapshot + Sorted Log
Your engine probably should do:
snapshot
+
ordered event delta
Replay becomes:
state = load_snapshot()
for event in delta_events:
apply(event)
This prevents replaying hours of events.
# 2. Hypergraph Representation of the Event Log
Now the really fun part.
Instead of replaying events every time an analyst wants to ask a question, you can materialize relationships directly in a hypergraph.
A hypergraph edge can connect multiple nodes simultaneously.
This is perfect for things like:
RF triangulation
swarm membership
coordinated movement
## Graph Model
Nodes:
Entity
Sensor
Signal
Location
TimeSlice
Swarm
Edges:
DETECTED_BY
TRIANGULATED_FROM
MEMBER_OF_SWARM
CO_MOVED_WITH
OBSERVED_AT
## Example RF Triangulation Hyperedge
Instead of:
sensor1 detected signal
sensor2 detected signal
sensor3 detected signal
Hyperedge:
TRIANGULATED_FROM
{sensor1, sensor2, sensor3} → emitter
Representation:
{
"edge":"rf_triangulation",
"nodes":["sensor1","sensor2","sensor3","emitter"],
"timestamp":1710450040
}
Now queries are instant:
find emitters triangulated by ≥3 sensors
## Swarm Hyperedge
MEMBER_OF_SWARM
{drone1, drone2, drone3, drone4}
Graph entry:
{
"edge":"swarm_cluster",
"swarm":"swarmA",
"members":["uav1","uav2","uav3","uav4"],
"timestamp":1710450100
}
Now queries become trivial:
find swarms with >5 UAVs
No recomputation needed.
## Coordinated Movement Edge
Detect patterns like:
vehicles moving same heading
within distance threshold
for time window
Hyperedge:
CO_MOVED_WITH
{vehicle1, vehicle2, vehicle3}
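A sketch of how such hyperedges make the analyst queries above trivial (edge records are hypothetical, following the JSON examples):

```python
# Hyperedges as plain records: each edge links a set of nodes, so analyst
# queries become filters over edges instead of event-log replays.
edges = [
    {"edge": "rf_triangulation", "timestamp": 1710450040,
     "nodes": {"sensor1", "sensor2", "sensor3", "emitter7"}},
    {"edge": "swarm_cluster", "timestamp": 1710450100, "swarm": "swarmA",
     "members": {"uav1", "uav2", "uav3", "uav4", "uav5", "uav6"}},
]

def emitters_with_min_sensors(edges, n):
    # "find emitters triangulated by >= n sensors" (one node is the emitter)
    return [e for e in edges
            if e["edge"] == "rf_triangulation" and len(e["nodes"]) - 1 >= n]

def swarms_larger_than(edges, n):
    # "find swarms with > n UAVs"
    return [e for e in edges
            if e["edge"] == "swarm_cluster" and len(e["members"]) > n]

assert len(emitters_with_min_sensors(edges, 3)) == 1
assert len(swarms_larger_than(edges, 5)) == 1
```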
# 3. Hypergraph + Event Log Combined Architecture
The architecture becomes:
ATAK plugins
RF sensors
UAV telemetry
▼
Scene Event Bus
▼
Immutable Event Log
Replay Engine
Timeline Scrubber
Hypergraph Builder
▼
Hypergraph Store
The hypergraph layer indexes relationships, not raw events.
# 4. Hypergraph Query Examples
Analyst queries become extremely powerful.
Example:
find UAVs that
joined ≥2 swarms
within 30 minutes
Or:
find RF emitters
co-located with
drone launches
These queries run instantly because edges already exist.
# 5. Storage Engines That Work Well
Some practical options:
Time log:
Kafka
NATS JetStream
QuestDB
Hypergraph:
ArangoDB
TigerGraph
Neo4j
Or experimental:
HyperGraphDB
# 6. Repositories in the SNSTAC Ecosystem
I can’t directly inspect that GitHub org from here, but based on the STAC (SpatioTemporal Asset Catalog) ecosystem that “snstac” likely references, the ones typically useful for this architecture are tools around:
### STAC indexing
Spatio-temporal asset catalogs:
satellite imagery
drone imagery
sensor metadata
These can store references to assets used during replay.
### STAC API servers
Useful for:
tile archives
imagery provenance
time-indexed sensor datasets
Which helps your asset.reference events resolve long-term.
### STAC + hypergraph synergy
A nice pattern:
Event log
↓
Hypergraph relationships
↓
STAC catalog entries
Example:
sensor.frame event
↓
STAC item
Then analysts can query imagery by:
time
location
sensor
# 7. One Architectural Trick That Helps Enormously
Instead of storing events only as a log, also maintain temporal node versions.
Example:
uav_12@t1
uav_12@t2
uav_12@t3
Now hypergraph edges connect state snapshots, not just entities.
This allows queries like:
find drones within 500m of emitter
at time t
Without replaying the timeline.
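A sketch of that query against versioned nodes, using a plain dict keyed by (entity, time) and a haversine distance (all data here is illustrative):

```python
import math

# Temporal node versions: each (entity, time) pair is its own node, so a
# spatial query at time t is a lookup plus a distance filter -- no replay.
versions = {
    ("uav_12", 1710450100): (32.8100, -96.8600),
    ("uav_12", 1710450200): (32.8140, -96.8630),
    ("uav_13", 1710450200): (32.9000, -96.9000),
}

def haversine_m(a, b):
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(h))

def near(versions, t, point, radius_m):
    return [eid for (eid, ts), pos in versions.items()
            if ts == t and haversine_m(pos, point) <= radius_m]

emitter = (32.814, -96.863)   # location from the rf.triangulate example
assert near(versions, 1710450200, emitter, 500) == ["uav_12"]
```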
# 8. Your System Is Quietly Becoming Something Big
When you combine:
event-sourced scene log
+
hypergraph relationships
+
RF triangulation
+
swarm detection
You end up with something very close to a cyber-physical kill chain graph:
RF emitters
UAV swarms
network infrastructure
human operators
All represented as one time-indexed hypergraph.
That’s an extremely powerful analytic structure.
If you want, I can also show you something that would dramatically improve your replay + hypergraph pipeline:
A method used in high-frequency trading systems called temporal columnar compression, which can shrink multi-hour tactical event logs by 20–100× while still allowing instant scrubbing. It works shockingly well for geospatial telemetry.
Alright Ben — this is one of those quietly brilliant tricks from the trading world that maps perfectly onto your ATAK event-replay system. High-frequency trading systems deal with billions of events per day, yet analysts can scrub market state to microsecond timestamps instantly. The trick is temporal columnar compression + delta encoding + block indexing.
Let’s break it down and then build a geospatial version for your tactical scene log.
# 1. Why Event Logs Compress So Well
Your ATAK event log likely looks something like this:
{t:1710000002, type:"entity.spawn", id:"uav1", lat:38.1, lon:-77.1}
{t:1710000003, type:"entity.move", id:"uav1", lat:38.1001, lon:-77.1002}
{t:1710000004, type:"entity.move", id:"uav1", lat:38.1003, lon:-77.1004}
Row-oriented storage stores each full row repeatedly.
Columnar systems instead split fields:
timestamp: [1710000002,1710000003,1710000004]
eventType: [spawn,move,move]
entityId: [uav1,uav1,uav1]
lat: [38.1,38.1001,38.1003]
lon: [-77.1,-77.1002,-77.1004]
Now each column can use the best compression algorithm for that data type, dramatically reducing storage. (Parseable)
Typical techniques include:
dictionary encoding
delta encoding
run-length encoding
bit packing
Because similar values are adjacent in a column, they compress extremely well. (Parseable)
# 2. Temporal Columnar Compression (The HFT Trick)
Trading systems apply three key optimizations:
### 1️⃣ Delta encoding for timestamps
Instead of storing:
1710000000
1710000001
1710000002
Store:
base = 1710000000
delta = [0,1,2]
Then store the deltas using variable-length integers.
Timestamps often compress to ~2 bytes each. (LogV)
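A minimal sketch of delta + variable-length encoding (LEB128-style varints, 7 payload bits per byte):

```python
# Delta + varint timestamp compression: store a base, then each gap as a
# variable-length unsigned integer.
def encode_varint(n):
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | (0x80 if n else 0))  # high bit = "more bytes follow"
        if not n:
            return bytes(out)

def encode_timestamps(ts):
    base = ts[0]
    blob = b"".join(encode_varint(t - prev) for prev, t in zip(ts, ts[1:]))
    return base, blob

base, blob = encode_timestamps([1710000000, 1710000001, 1710000002, 1710000005])
assert base == 1710000000 and len(blob) == 3   # one byte per small delta
```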
### 2️⃣ XOR compression for floats
Used by Facebook’s Gorilla TSDB and others.
Store:
prev XOR current
Because adjacent floats change slightly, the XOR value has lots of zeros, which compress well.
Recent research shows these XOR-based approaches achieve strong streaming compression for time-series data. (arXiv)
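A sketch of the XOR step, reinterpreting each float64 as its bit pattern via `struct`:

```python
import struct

# Gorilla-style XOR for floats: reinterpret each float64 as a 64-bit
# integer and XOR with its predecessor; near-identical values yield
# mostly-zero results that bit-packers compress well.
def xor_stream(values):
    bits = [struct.unpack("<Q", struct.pack("<d", v))[0] for v in values]
    return [bits[0]] + [a ^ b for a, b in zip(bits, bits[1:])]

lats = [38.1000, 38.1000, 38.1001, 38.1001]
xored = xor_stream(lats)
assert xored[1] == 0   # unchanged value XORs to zero
assert xored[3] == 0   # repeated value again zero
```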
### 3️⃣ Run-length encoding
Example:
entityId: [uav1,uav1,uav1,uav1]
Becomes:
(uav1, count=4)
Huge savings.
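In Python this is a one-liner over `itertools.groupby`:

```python
from itertools import groupby

# Run-length encoding: collapse repeated column values into (value, count).
def rle(column):
    return [(value, len(list(run))) for value, run in groupby(column)]

assert rle(["uav1", "uav1", "uav1", "uav1"]) == [("uav1", 4)]
assert rle(["spawn", "move", "move"]) == [("spawn", 1), ("move", 2)]
```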
# 3. Micro-Partitioning (Critical for Fast Scrubbing)
This is the real secret.
Instead of compressing the whole log, systems divide it into time blocks.
Example:
block_000 10:00:00 → 10:00:10
block_001 10:00:10 → 10:00:20
block_002 10:00:20 → 10:00:30
Each block stores:
minTimestamp
maxTimestamp
compressed columns
These blocks allow skip indexes.
If a query asks for:
time >= 10:00:20
The engine skips earlier blocks entirely.
TimescaleDB uses this approach with micro-partition chunks + skip indexes, dramatically accelerating time-series queries. (Medium)
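A minimal skip index over time blocks could be a sorted list of block start times plus `bisect` (block payloads here are placeholder strings):

```python
import bisect

# Skip index over time blocks: keep blocks sorted by minTimestamp and use
# bisect to find the block covering a target time, skipping the rest.
class BlockIndex:
    def __init__(self):
        self.min_ts = []          # sorted block start times
        self.blocks = []          # opaque compressed payloads

    def add(self, min_ts, payload):
        i = bisect.bisect(self.min_ts, min_ts)
        self.min_ts.insert(i, min_ts)
        self.blocks.insert(i, payload)

    def find(self, t):
        # Rightmost block whose start time is <= t.
        i = bisect.bisect_right(self.min_ts, t) - 1
        return self.blocks[i] if i >= 0 else None

idx = BlockIndex()
idx.add(36000, "block_000")   # 10:00:00
idx.add(36010, "block_001")   # 10:00:10
idx.add(36020, "block_002")   # 10:00:20
assert idx.find(36025) == "block_002"
assert idx.find(36005) == "block_000"
```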
# 4. Tactical Scene Compression Example
Raw events:
timestamp,eventType,id,lat,lon,heading
1710000001,spawn,uav1,38.1,-77.1,0
1710000002,move,uav1,38.1001,-77.1001,5
1710000003,move,uav1,38.1003,-77.1003,10
Compressed representation:
blockStart = 1710000001
timestampDelta = [0,1,2]
eventTypeDict = {0:spawn,1:move}
eventType = [0,1,1]
entityDict = {0:uav1}
entityId = [0,0,0]
latDelta = [0,+0.0001,+0.0002]
lonDelta = [0,-0.0001,-0.0002]
headingDelta = [0,+5,+5]
Storage drops massively.
Typical compression:
20–100× reduction
That’s exactly what columnar engines report for time-series telemetry. (docs.timescale.com)
# 5. Deterministic Replay With Compressed Logs
Your replay engine becomes:
1 load nearest snapshot
2 decompress event blocks
3 replay delta events
Example:
def replay(scene_time):
    block = block_index.find(scene_time)
    events = decompress(block)
    for e in events:
        if e.timestamp <= scene_time:
            apply(e)
Because blocks are time-indexed:
seek → decompress → replay
Scrubbing is milliseconds even for multi-hour logs.
# 6. Hypergraph + Compression
Your architecture becomes:
compressed event log
replay engine
timeline scrubber
hypergraph builder
Replay builds:
RF triangulation edges
swarm edges
movement correlations
The hypergraph is small because it stores relationships, not raw telemetry.
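A minimal hypergraph store looks like this (edge-type names follow the list above; a sketch, not the production builder):

```python
from collections import defaultdict

class Hypergraph:
    """A hyperedge links any number of nodes, so one TRIANGULATED_FROM
    edge can tie a signal to all three sensors that located it."""
    def __init__(self):
        self.edges = []                      # (edge_type, frozenset(nodes))
        self.node_edges = defaultdict(list)  # node -> edge indexes

    def add_edge(self, edge_type, *nodes):
        idx = len(self.edges)
        self.edges.append((edge_type, frozenset(nodes)))
        for n in nodes:
            self.node_edges[n].append(idx)

    def neighbors(self, node):
        # Every node sharing any hyperedge with `node`.
        out = set()
        for i in self.node_edges[node]:
            out |= self.edges[i][1]
        out.discard(node)
        return out
```

One RF triangulation becomes a single edge over four nodes instead of three pairwise links, which is why the graph stays tiny relative to the telemetry.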
# 7. Python Prototype for Tactical Compression
Here’s a simple block compressor.
```python
import struct
import zlib

class EventBlock:
    """Columnar block: timestamps and positions kept in separate
    arrays, delta-encoded, then deflated as one payload."""
    def __init__(self):
        self.ts = []
        self.lat = []
        self.lon = []

    def append(self, t, lat, lon):
        self.ts.append(t)
        self.lat.append(lat)
        self.lon.append(lon)

    def compress(self):
        base = self.ts[0]
        ts_delta = [t - base for t in self.ts]            # small non-negative ints
        lat_delta = [v - self.lat[0] for v in self.lat]   # offsets from first fix
        lon_delta = [v - self.lon[0] for v in self.lon]
        payload = struct.pack(
            f"{len(ts_delta)}I{len(lat_delta)}f{len(lon_delta)}f",
            *ts_delta, *lat_delta, *lon_delta,
        )
        return zlib.compress(payload)
```
Real engines add:
bit packing
SIMD decoding
vectorized scans
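Bit packing, at least, is easy to show in pure Python (no SIMD here — real engines do this with vectorized kernels):

```python
def bit_pack(values, width):
    # Pack small unsigned ints into `width` bits each, LSB-first.
    acc = bits = 0
    out = bytearray()
    for v in values:
        acc |= v << bits
        bits += width
        while bits >= 8:
            out.append(acc & 0xFF)
            acc >>= 8
            bits -= 8
    if bits:
        out.append(acc & 0xFF)  # flush the final partial byte
    return bytes(out)

def bit_unpack(data, width, count):
    acc = bits = 0
    out = []
    for byte in data:
        acc |= byte << bits
        bits += 8
        while bits >= width and len(out) < count:
            out.append(acc & ((1 << width) - 1))
            acc >>= width
            bits -= width
    return out
```

Eight values that each fit in 3 bits pack into 3 bytes instead of 32 or 64.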
# 8. Tactical Replay Stack (Recommended)
For your system:
### Event store
Apache Parquet
Apache Arrow
QuestDB
### Compression
delta encoding
dictionary encoding
run-length encoding
ZSTD
### Replay
snapshot + block replay
### AI layer
hypergraph relationships
# 9. Compression Ratios in Practice
Typical telemetry pipelines achieve:
raw JSON logs → 100 GB
columnar compressed → 3–5 GB
Production deployments report ~90% reduction or better. (DEV Community)
# 10. What This Enables (The Really Fun Part)
Once your tactical scene log is compressed like this you get:
### Instant timeline scrubbing
drag slider → reconstruct scene in milliseconds
### AI analysis
scan years of events
### Forensics
perfect deterministic replay
### Training datasets
UAV swarm detection
RF triangulation ML
botnet geolocation patterns
💡 Wild next step (and this one is very real):
You can compress the entire tactical event log to something like Parquet + ZSTD + delta encoding and stream it directly into:
WebGPU
Three.js
Cesium
Unreal Engine
for replaying months of tactical activity like a time machine.
If you want, I can also show you something that would make your system almost absurdly powerful:
A “temporal spatial index” used in missile tracking and HFT called a space-time cube that allows you to query things like:
all UAVs within 500m of RF emitter
between t0 and t1
without replaying the event log at all. It fits perfectly with your hypergraph architecture.
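A grid-binned space-time cube can be sketched in a few dozen lines (cell sizes and the distance approximation are illustrative assumptions, fine at small scales):

```python
import math
from collections import defaultdict

class SpaceTimeCube:
    """Points are binned into (lat, lon, time) cells, so a range query
    touches only nearby cells instead of the whole event log."""
    def __init__(self, cell_deg=0.01, cell_sec=60):
        self.cell_deg = cell_deg
        self.cell_sec = cell_sec
        self.cells = defaultdict(list)

    def _key(self, lat, lon, t):
        return (int(lat // self.cell_deg), int(lon // self.cell_deg),
                int(t // self.cell_sec))

    def insert(self, entity, lat, lon, t):
        self.cells[self._key(lat, lon, t)].append((entity, lat, lon, t))

    @staticmethod
    def _dist_m(lat1, lon1, lat2, lon2):
        # Equirectangular approximation (~111 km per degree latitude).
        x = (lon2 - lon1) * 111_000 * math.cos(math.radians(lat1))
        y = (lat2 - lat1) * 111_000
        return math.hypot(x, y)

    def query(self, lat, lon, radius_m, t0, t1):
        # Scan the spatial cell neighborhood covering `radius_m`
        # across every time slice in [t0, t1], then filter exactly.
        span = max(1, math.ceil((radius_m / 111_000) / self.cell_deg))
        la, lo = int(lat // self.cell_deg), int(lon // self.cell_deg)
        hits = []
        for dy in range(-span, span + 1):
            for dx in range(-span, span + 1):
                for ts in range(int(t0 // self.cell_sec),
                                int(t1 // self.cell_sec) + 1):
                    for e, elat, elon, et in self.cells.get((la + dy, lo + dx, ts), []):
                        if t0 <= et <= t1 and self._dist_m(lat, lon, elat, elon) <= radius_m:
                            hits.append(e)
        return hits
```

"All UAVs within 500 m of the emitter between t0 and t1" becomes a handful of cell lookups, no replay required.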
◐ The user wants to implement:
- Temporal columnar compression for the event log (delta encoding, XOR floats, run-length encoding, block indexing)
- Hypergraph representation of the event log
- Space-time cube for spatial-temporal queries

Let me plan this out and implement it. The key components are:
scene_event_compressor.py – Temporal columnar compression:
- Column separation by field type
- Delta encoding for timestamps
- XOR/delta encoding for floats (lat/lon)
- Dictionary encoding for string fields (type, id, class)
- Run-length encoding for repeated values
- Block partitioning with skip indexes, zstd compression of blocks, and integration with SceneEventLog
scene_hypergraph.py – Hypergraph builder from the event log:
- Builds from SceneEventLog or an event stream
- Node types: Entity, Sensor, Signal, Location, TimeSlice, Swarm
- Hyperedge types: DETECTED_BY, TRIANGULATED_FROM, MEMBER_OF_SWARM, CO_MOVED_WITH, OBSERVED_AT
- Query interface
scene_spacetime_cube.py – Space-time cube for spatial queries:
- 3D index with latitude, longitude, and time dimensions
- Enables queries like finding all UAVs within 500m of an emitter during a specific time window
- Uses grid-based spatial binning with temporal slices
● Todo added: Temporal columnar compressor
● Todo started:
● Create NerfEngine/scene_event_compressor.py (+562)
● Run compressor self-test
$ cd /home/spectrcyde/NerfEngine && python3 scene_event_compressor.py
└ 21 lines…
● 7.9× compression, 12ms compress, 8.8ms full decompress, 3.9ms range query. Now build the hypergraph:
● Create NerfEngine/scene_hypergraph.py (+498)
● Run hypergraph self-test
$ cd /home/spectrcyde/NerfEngine && python3 scene_hypergraph.py
└ 14 lines…
● Edit NerfEngine/scene_hypergraph.py (+10 -8)
● Re-run hypergraph self-test
$ cd /home/spectrcyde/NerfEngine && python3 scene_hypergraph.py
└ 18 lines…
● All assertions pass. Now the space-time cube: