architecture/network/activate

Activation chapter for Network execution policy.

This folder covers the moment when a graph already exists and the next question is: how should signals move through it right now? The same network may be stepped for ordinary inference, for training-aware forward passes, for zero-copy raw output reuse, or across a sequence of batch rows. Keeping those paths together makes the execution tradeoffs visible without mixing them into topology or serialization code.

The important split is between graph meaning and graph execution. Node and connection chapters explain what the structure is. activate/ explains how that structure is stepped: validate inputs, decide whether the slab fast path is still legal, preserve or skip training traces, and return outputs in the shape the caller requested.

A second useful lens is to read the public exports as four modes. activate() is the ordinary compatibility path. noTraceActivate() is the hot inference path for when trace bookkeeping would be wasteful. activateRaw() keeps typed-array reuse available when pooling matters more than boxed outputs. activateBatch() is the orchestration layer for repeated forward passes over many rows.

The performance lesson here is not "always choose the fastest path." It is "choose the narrowest path that still matches the caller's semantics." If a network is slab-ready, this chapter can exploit contiguous typed arrays. If a structural edit made that layout stale, the same boundary falls back to node traversal instead of forcing callers to understand storage internals first.
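Choosing between those semantics can be sketched in a few lines. Only the activate/noTraceActivate method shapes come from this chapter; the stub network below is purely illustrative.

```typescript
// Sketch: pick the narrowest activation path that still matches the caller's
// semantics. Only the method shapes come from this chapter; the stub is fake.
interface ActivatableNetwork {
  activate(input: number[], training: boolean): number[];
  noTraceActivate(input: number[]): number[];
}

function forward(
  net: ActivatableNetwork,
  input: number[],
  training: boolean
): number[] {
  // Training needs gradient traces; plain inference can skip that bookkeeping.
  return training ? net.activate(input, true) : net.noTraceActivate(input);
}

// Minimal stand-in so the dispatch is visible (not a real network).
const stub: ActivatableNetwork = {
  activate: (input) => input.map((v) => v * 2),
  noTraceActivate: (input) => input.map((v) => v * 2),
};
console.log(forward(stub, [0.5], false)); // logs: [ 1 ]
```

The caller states intent (training or not); the dispatch picks the cheapest path that honors it.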

flowchart LR
  classDef base fill:#08131f,stroke:#1ea7ff,color:#dff6ff,stroke-width:1px;
  classDef accent fill:#0f2233,stroke:#ffd166,color:#fff4cc,stroke-width:1.5px;

  Input[caller input]:::base --> Modes[activate chapter]:::accent
  Modes --> Trace[activate<br/>keep traces]:::base
  Modes --> NoTrace[noTraceActivate<br/>inference hot path]:::base
  Modes --> Raw[activateRaw<br/>typed output reuse]:::base
  Modes --> Batch[activateBatch<br/>repeat over rows]:::base

flowchart TD
  classDef base fill:#08131f,stroke:#1ea7ff,color:#dff6ff,stroke-width:1px;
  classDef accent fill:#0f2233,stroke:#ffd166,color:#fff4cc,stroke-width:1.5px;

  ActivateChapter[activate/]:::accent --> Validation[input validation and contexts]:::base
  ActivateChapter --> FastPath[slab fast path when layout is ready]:::base
  ActivateChapter --> Traversal[node traversal fallback]:::base
  ActivateChapter --> Buffers[pooled activation buffers]:::base

For background on why some activation paths preserve training traces while others skip them, see Wikipedia contributors, Backpropagation. This chapter sits at the forward-pass side of that story and decides how much training bookkeeping each call should carry along.

Example: use the no-trace path when you only need inference outputs.

const network = Network.createMLP(2, [3], 1);
const outputValues = network.noTraceActivate([0.2, 0.8]);

Example: run the same network over several input rows with one orchestration call.

const network = Network.createMLP(2, [3], 1);
const batchOutputs = network.activateBatch(
  [
    [0, 1],
    [1, 0],
  ],
  true,
);

Practical reading order:

  1. Start here for the public activation modes and their semantic differences.
  2. Continue into network.activate.core.utils.ts when you want the ordinary forward-pass pipeline.
  3. Continue into network.activate.raw.utils.ts and the no-trace helpers when typed-array reuse or inference hot paths are the next question.
  4. Finish with the context and helper files when you want the orchestration details behind validation, batching, and fallback behavior.

architecture/network/activate/network.activate.utils.ts

activate

activate(
  input: number[],
  training: boolean,
): number[]

Execute the main activation routine and return plain numeric outputs.

Parameters:

Returns: Output activation values.

activateBatch

activateBatch(
  inputs: number[][],
  training: boolean,
): number[][]

Activate the network over a mini‑batch (array) of input vectors, returning a 2‑D array of outputs.

This helper simply loops, invoking Network.activate (or its bound variant) for each sample. It is intentionally naive: no attempt is made to fuse operations across the batch. For very large batch sizes or performance‑critical paths, consider implementing a custom vectorized backend that exploits SIMD, GPU kernels, or parallel workers.

Input validation occurs per row to surface the earliest mismatch with a descriptive index.
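A minimal sketch of that loop, with hypothetical names standing in for the module's real validation helpers and error classes:

```typescript
// Naive batch sketch: validate and activate one row at a time so the first
// mismatch surfaces with its row index. All names here are illustrative.
function activateBatchSketch(
  activateRow: (input: number[]) => number[],
  expectedInputSize: number,
  batchInputs: number[][]
): number[][] {
  if (!Array.isArray(batchInputs)) {
    throw new TypeError("batch inputs must be an array of rows");
  }
  return batchInputs.map((row, rowIndex) => {
    if (!Array.isArray(row) || row.length !== expectedInputSize) {
      const got = Array.isArray(row) ? row.length : "undefined";
      throw new RangeError(
        `row ${rowIndex}: expected ${expectedInputSize} inputs, got ${got}`
      );
    }
    return activateRow(row); // one independent forward pass; nothing is fused
  });
}
```

Because validation happens inside the per-row loop, the thrown message carries the index of the earliest bad row.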

Parameters:

Returns: 2‑D array: outputs[i] is the activation result for inputs[i].

Example:

const batchOut = net.activateBatch([
  [0, 0, 1],
  [1, 0, 0],
  [0, 1, 0],
]);
console.log(batchOut.length); // 3 rows

activateRaw

activateRaw(
  input: number[],
  training: boolean,
  maxActivationDepth: number,
): number[]

Thin semantic alias to the network's main activation path.

At present this simply forwards to Network.activate. The indirection is useful for:

Parameters:

Returns: Implementation-defined result of Network.activate (typically an output vector).

Example:

const y = net.activateRaw([0,1,0]);

gaussianRand

gaussianRand(
  rng: () => number,
): number

Produce a normally distributed random sample using the Box-Muller transform.

Parameters:

Returns: Standard normal sample with mean 0 and variance 1.
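The Box-Muller transform itself is standard, so a self-contained sketch is easy to hedge. This mirrors the documented contract, not necessarily the module's exact source.

```typescript
// Box-Muller: turn two independent uniform(0,1) draws into one sample from
// the standard normal distribution (mean 0, variance 1).
function gaussianRandSketch(rng: () => number): number {
  let u = 0;
  while (u === 0) u = rng(); // reject 0 so Math.log(u) stays finite
  const v = rng();
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}
```

Injecting the uniform source as `rng` keeps the sampler deterministic under a seeded generator, which matters for reproducible weight noise.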

noTraceActivate

noTraceActivate(
  input: number[],
): number[]

Perform a forward pass without creating or updating training / gradient traces.

This is the most allocation‑sensitive activation path. Internally it will attempt to leverage a compact "fast slab" routine (an optimized, vectorized broadcast over contiguous activation buffers) when the Network instance indicates that such a path is currently valid. If that attempt fails (for instance because the slab is stale after a structural mutation) execution gracefully falls back to a node‑by‑node loop.

Algorithm outline:

  1. (Optional) Refresh cached topological order if the network enforces acyclicity and a structural change marked the order as dirty.
  2. Validate the input dimensionality.
  3. Try the fast slab path; if it throws, continue with the standard path.
  4. Acquire a pooled output buffer sized to the number of output neurons.
  5. Iterate all nodes in their internal order:
    • Input nodes: directly assign provided input values.
    • Hidden nodes: compute activation via Node.noTraceActivate (no bookkeeping).
    • Output nodes: compute activation and store it (in sequence) inside the pooled output buffer.
  6. Copy the pooled buffer into a fresh array (detaches user from the pool) and release the pooled buffer back to the pool.
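Steps 3–5 reduce to a fast-path-then-fallback shape. A hedged sketch, with placeholder callables standing in for the module's real slab and traversal helpers:

```typescript
// Sketch of "try the fast slab, fall back to node traversal". Both callables
// stand in for this module's real helpers.
function noTraceForwardSketch(
  tryFastSlab: () => number[] | null,
  traverseNodes: () => number[]
): number[] {
  let fastOutput: number[] | null = null;
  try {
    fastOutput = tryFastSlab(); // null when the slab layout is unavailable
  } catch {
    fastOutput = null; // a throwing fast path must never break the call
  }
  return fastOutput ?? traverseNodes();
}
```

The fast path is an optimization, never a requirement: a stale slab degrades to the traversal loop instead of surfacing an error to the caller.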

Complexity considerations:

Parameters:

Returns: Array of output neuron activations (length == network.output).

Example:

const out = net.noTraceActivate([0.1, 0.2, 0.3]);
console.log(out); // => e.g. [0.5123, 0.0441]

architecture/network/activate/network.activate.utils.types.ts

ActivateRuntimeNetworkProps

Runtime network view used by the object-graph activation pipeline.

This intentionally describes the internal fields activation reads/writes (training step, RNG, regularization knobs, and slab fast-path hooks).

ActivationOutputBuffer

Pooled activation output array type acquired from the shared activation array pool.

ActivationStats

Activation telemetry collected during a single activation pass.

BATCH_INPUTS_COLLECTION_ERROR_MESSAGE

Error message used when batch activation receives a non-array container.

BatchActivationContext

Shared state used by batch activation orchestration.

BatchRowActivationContext

Shared state used while validating and activating one row in a batch.

DEFAULT_MAX_ACTIVATION_DEPTH

Default hard limit for recursive activation depth in raw activation mode.

INITIAL_OUTPUT_WRITE_INDEX

Initial write index used when collecting output activations.

INPUT_NODE_TYPE

Node role label used by activation traversal for input neurons.

NetworkLayer

Layer container type used by the layered activation paths.

NetworkLayerNodes

Node collection type attached to a single network layer.

NO_TRACE_FAST_SLAB_TRAINING_FLAG

Training flag value used by no-trace fast slab eligibility checks.

NoTraceActivationContext

Shared state used by no-trace activation orchestration and helpers.

NoTraceNodeTraversalContext

Shared state used for node traversal during no-trace activation.

OUTPUT_NODE_TYPE

Node role label used by activation traversal for output neurons.

OUTPUT_WRITE_INDEX_INCREMENT

Increment applied after writing one output activation value.

RawActivationContext

Shared state used by raw activation orchestration.

SingleNodeNoTraceActivationContext

Shared state used while activating one node during no-trace traversal.

UNDEFINED_INPUT_LENGTH_TEXT

Fallback text for undefined input lengths when formatting validation errors.

WeightNoiseApplyResult

Marker returned by weight-noise application to drive safe restore logic.

WeightNoiseStats

Weight-noise telemetry collected during a single activation pass.

architecture/network/activate/network.activate.core.utils.ts

acquireOutputBuffer

acquireOutputBuffer(
  outputSize: number,
): ActivationArray

Acquire a pooled activation output buffer for the current output width.

Parameters:

Returns: Mutable pooled output buffer.

activate

activate(
  input: number[],
  training: boolean,
): number[]

Execute the main activation routine and return plain numeric outputs.

Parameters:

Returns: Output activation values.

activateLayer

activateLayer(
  currentLayer: default,
  layerIndex: number,
  inputVector: number[],
  isTraining: boolean,
): number[]

Activate one layer, routing input only for the first layer.

Parameters:

Returns: Layer activations.

activateLayeredNetworkWithDropout

activateLayeredNetworkWithDropout(
  network: default,
  runtimeNetwork: ActivateRuntimeNetworkProps,
  inputVector: number[],
  isTraining: boolean,
  outputBuffer: ActivationArray,
  stats: ActivationStats,
): void

Run layered activation with dropout masks and no stochastic-depth skips.

Parameters:

Returns: Nothing.

activateLayeredNetworkWithStochasticDepth

activateLayeredNetworkWithStochasticDepth(
  network: default,
  runtimeNetwork: ActivateRuntimeNetworkProps,
  inputVector: number[],
  isTraining: boolean,
  outputBuffer: ActivationArray,
  stats: ActivationStats,
): void

Run layered activation with stochastic-depth skipping and inverse-survival scaling.

Parameters:

Returns: Nothing.

activateNodeNetworkFallback

activateNodeNetworkFallback(
  network: default,
  runtimeNetwork: ActivateRuntimeNetworkProps,
  inputVector: number[],
  isTraining: boolean,
  outputBuffer: ActivationArray,
  stats: ActivationStats,
): void

Run fallback node-by-node activation for networks without explicit layer definitions.

Parameters:

Returns: Nothing.

activateNodesAndCollectOutputs

activateNodesAndCollectOutputs(
  nodes: default[],
  inputVector: number[],
  outputBuffer: ActivationArray,
): void

Activate raw nodes in order and collect output-node activations into output buffer.

Parameters:

Returns: Nothing.

applyDropConnect

applyDropConnect(
  network: default,
  runtimeNetwork: ActivateRuntimeNetworkProps,
  isTraining: boolean,
  stats: ActivationStats,
): void

Apply drop-connect masking and restore original weights where required.

Parameters:

Returns: Nothing.

applyFallbackHiddenDropout

applyFallbackHiddenDropout(
  hiddenNodes: default[],
  runtimeNetwork: ActivateRuntimeNetworkProps,
  dropoutProbability: number,
  isTraining: boolean,
  stats: ActivationStats,
): void

Apply fallback dropout for hidden nodes in raw node traversal mode.

Parameters:

Returns: Nothing.

applyFallbackWeightNoise

applyFallbackWeightNoise(
  network: default,
  runtimeNetwork: ActivateRuntimeNetworkProps,
  isTraining: boolean,
): void

Apply raw fallback weight noise to all connections using global standard deviation.

Parameters:

Returns: Nothing.

applyHiddenLayerDropout

applyHiddenLayerDropout(
  layer: default,
  rawActivations: number[],
  runtimeNetwork: ActivateRuntimeNetworkProps,
  dropoutProbability: number,
  isTraining: boolean,
  stats: ActivationStats,
): void

Apply dropout masks to hidden layer nodes and enforce at least one active node.

Parameters:

Returns: Nothing.

applyTrainingDropConnect

applyTrainingDropConnect(
  network: default,
  runtimeNetwork: ActivateRuntimeNetworkProps,
  stats: ActivationStats,
): void

Apply training-time drop-connect masks to each connection.

Parameters:

Returns: Nothing.

applyTrainingWeightNoise

applyTrainingWeightNoise(
  network: default,
  runtimeNetwork: ActivateRuntimeNetworkProps,
  isTraining: boolean,
): WeightNoiseApplyResult

Apply per-connection training noise for the main activation flow.

Parameters:

Returns: Applied-state information for downstream restore logic.

collectHiddenNodes

collectHiddenNodes(
  nodes: default[],
): default[]

Collect hidden nodes from a raw node list.

Parameters:

Returns: Hidden-only node list.

containsInvalidProbability

containsInvalidProbability(
  probabilities: number[],
): boolean

Check whether a probability vector contains values outside the (0, 1] interval.

Parameters:

Returns: True when one or more probabilities are invalid.
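That check fits in a one-liner; a sketch (not the module's exact code) showing how a negated comparison also rejects NaN:

```typescript
// True when any probability is outside (0, 1]. Writing the test as a negated
// conjunction means NaN also fails it, so malformed values are flagged too.
const containsInvalidProbabilitySketch = (probabilities: number[]): boolean =>
  probabilities.some((p) => !(p > 0 && p <= 1));
```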

createActivationStats

createActivationStats(
  totalConnections: number,
): ActivationStats

Create activation statistics container for the current pass.

Parameters:

Returns: Initialized activation stats object.

createWeightNoiseStats

createWeightNoiseStats(): WeightNoiseStats

Create the weight-noise statistics record with zeroed aggregates.

Returns: Zero-initialized weight-noise stats.

decideLayerSkip

decideLayerSkip(
  network: default,
  runtimeNetwork: ActivateRuntimeNetworkProps,
  currentLayerNodeCount: number,
  layerIndex: number,
  isTraining: boolean,
  previousLayerActivations: number[] | undefined,
): { shouldSkipLayer: boolean; surviveProbability: number; }

Decide whether a hidden layer should be skipped in stochastic-depth mode.

Parameters:

Returns: Skip decision and survival probability for the layer.
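A hedged sketch of that decision — the compatibility rule mirrors hasCompatibleSkipState below, and the names are illustrative:

```typescript
// Stochastic-depth sketch: during training, skip a hidden layer with
// probability 1 - surviveProbability, but only when the previous layer's
// activations can pass through unchanged (matching widths).
function decideLayerSkipSketch(
  rng: () => number,
  surviveProbability: number,
  isTraining: boolean,
  previousActivations: number[] | undefined,
  currentLayerNodeCount: number
): boolean {
  const passThroughCompatible =
    Array.isArray(previousActivations) &&
    previousActivations.length === currentLayerNodeCount;
  return isTraining && passThroughCompatible && rng() >= surviveProbability;
}
```

Gating on pass-through compatibility means a skip is only taken when the previous activations can serve as this layer's output verbatim.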

executeActivationPath

executeActivationPath(
  network: default,
  runtimeNetwork: ActivateRuntimeNetworkProps,
  inputVector: number[],
  isTraining: boolean,
  outputBuffer: ActivationArray,
  stats: ActivationStats,
): void

Execute one of the three activation branches: stochastic layers, standard layers, or raw nodes.

Parameters:

Returns: Nothing.

finalizeNodePathWeightNoiseRestore

finalizeNodePathWeightNoiseRestore(
  network: default,
  isTraining: boolean,
  appliedWeightNoise: boolean,
): void

Restore temporary weight-noise values for fallback node path only.

Parameters:

Returns: Nothing.

finalizeTrainingStepAndStats

finalizeTrainingStepAndStats(
  runtimeNetwork: ActivateRuntimeNetworkProps,
  stats: ActivationStats,
  isTraining: boolean,
): void

Finalize training counters and attach activation statistics to runtime state.

Parameters:

Returns: Nothing.

findSourceLayerIndex

findSourceLayerIndex(
  network: default,
  connection: default,
): number

Find the layer index containing a connection source node.

Parameters:

Returns: Layer index for source node, or -1 when not found.

gaussianRand

gaussianRand(
  rng: () => number,
): number

Produce a normally distributed random sample using the Box-Muller transform.

Parameters:

Returns: Standard normal sample with mean 0 and variance 1.

hasCompatibleSkipState

hasCompatibleSkipState(
  previousLayerActivations: number[] | undefined,
  currentLayerNodeCount: number,
): boolean

Validate whether previous activations can be reused as skip pass-through output.

Parameters:

Returns: True when pass-through activations are compatible.

hasLayeredNetwork

hasLayeredNetwork(
  network: default,
): boolean

Check whether the network has at least one explicit layer.

Parameters:

Returns: True when layered activation path should run.

hasLayeredNetworkWithStochasticDepth

hasLayeredNetworkWithStochasticDepth(
  network: default,
  runtimeNetwork: ActivateRuntimeNetworkProps,
): boolean

Check whether the network has layers and stochastic-depth configuration for layer skipping path.

Parameters:

Returns: True when stochastic-depth layer path should run.

hasOriginalWeightNoise

hasOriginalWeightNoise(
  connection: default,
): boolean

Check whether a connection already has an original weight-noise snapshot.

Parameters:

Returns: True when snapshot exists.

isHiddenLayer

isHiddenLayer(
  layerIndex: number,
  totalLayerCount: number,
): boolean

Check whether a layer index refers to a hidden layer in a layered network.

Parameters:

Returns: True when the layer is hidden.

persistOriginalWeightNoise

persistOriginalWeightNoise(
  connection: default,
): void

Store current connection weight before applying temporary weight-noise modifications.

Parameters:

Returns: Nothing.

prepareTopologyForActivation

prepareTopologyForActivation(
  runtimeNetwork: ActivateRuntimeNetworkProps,
): void

Ensure topological order is refreshed before activation when acyclic mode requires it.

Parameters:

Returns: Nothing.

recordSkippedLayer

recordSkippedLayer(
  network: default,
  stats: ActivationStats,
  layerIndex: number,
): void

Record a skipped layer in runtime and stats trackers.

Parameters:

Returns: Nothing.

releaseBufferAndCreateResult

releaseBufferAndCreateResult(
  outputBuffer: ActivationArray,
): number[]

Release pooled output buffer and return a detached plain array copy.

Parameters:

Returns: Plain array of output values.
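The detach-then-release pattern looks roughly like this; the pool interface here is an assumption for illustration, not the module's real pool API:

```typescript
// Sketch: copy pooled storage into a fresh array the caller owns, then hand
// the buffer back to a (hypothetical) pool for reuse by the next pass.
interface ActivationBufferPool {
  release(buffer: Float64Array): void;
}

function releaseAndDetachSketch(
  pool: ActivationBufferPool,
  outputBuffer: Float64Array
): number[] {
  const detached = Array.from(outputBuffer); // caller never holds pooled storage
  pool.release(outputBuffer);
  return detached;
}
```

Copying before releasing is what lets the pool safely overwrite the buffer on the very next activation pass.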

resetSkippedLayers

resetSkippedLayers(
  network: default,
): void

Clear the runtime list of skipped layers before current activation pass.

Parameters:

Returns: Nothing.

resolveConnectionNoiseStd

resolveConnectionNoiseStd(
  network: default,
  runtimeNetwork: ActivateRuntimeNetworkProps,
  connection: default,
  fallbackStandardDeviation: number,
): number

Resolve connection-specific weight-noise standard deviation, including per-hidden overrides.

Parameters:

Returns: Effective standard deviation for this connection.

resolveDynamicWeightNoiseStd

resolveDynamicWeightNoiseStd(
  runtimeNetwork: ActivateRuntimeNetworkProps,
): number

Resolve the training-step adjusted global weight-noise standard deviation.

Parameters:

Returns: Effective weight-noise standard deviation for current training step.

restoreDropConnectWeights

restoreDropConnectWeights(
  network: default,
): void

Restore drop-connect modified weights and normalize all masks back to one.

Parameters:

Returns: Nothing.

restoreOriginalDropConnectWeight

restoreOriginalDropConnectWeight(
  connection: default,
): void

Restore and clear original connection weight after drop-connect.

Parameters:

Returns: Nothing.

restoreOriginalWeightNoise

restoreOriginalWeightNoise(
  connection: default,
): void

Restore and clear the original weight-noise snapshot for a connection.

Parameters:

Returns: Nothing.

scaleActivations

scaleActivations(
  activations: number[],
  scaleFactor: number,
): number[]

Create a new activation vector by multiplying each activation by a scale factor.

Parameters:

Returns: Scaled activation vector.

setAllMasksToOne

setAllMasksToOne(
  nodes: default[],
): void

Set mask value to one for every node in a layer.

Parameters:

Returns: Nothing.

setDropConnectMask

setDropConnectMask(
  connection: default,
  dropConnectMask: number,
): void

Set drop-connect mask value for a connection.

Parameters:

Returns: Nothing.

setLastSampledNoise

setLastSampledNoise(
  connection: default,
  sampledNoise: number,
): void

Persist last sampled weight-noise value for a connection.

Parameters:

Returns: Nothing.

stashOriginalDropConnectWeight

stashOriginalDropConnectWeight(
  connection: default,
): void

Store original connection weight before drop-connect zeroing.

Parameters:

Returns: Nothing.

tryFastSlabActivation

tryFastSlabActivation(
  runtimeNetwork: ActivateRuntimeNetworkProps,
  inputVector: number[],
  isTraining: boolean,
): number[] | undefined

Attempt fast slab activation and safely fall back to regular activation on failure.

Parameters:

Returns: Fast slab output when available, otherwise undefined.

updateStochasticDepthFromSchedule

updateStochasticDepthFromSchedule(
  runtimeNetwork: ActivateRuntimeNetworkProps,
  isTraining: boolean,
): void

Update stochastic depth probabilities using a training schedule when valid.

Parameters:

Returns: Nothing.

validateInputVector

validateInputVector(
  network: default,
  inputVector: number[],
): void

Validate that the incoming input vector exists and matches expected input size.

Parameters:

Returns: Nothing.

validateNetworkNodes

validateNetworkNodes(
  network: default,
): void

Assert that the network contains nodes before executing activation routines.

Parameters:

Returns: Nothing.

writeLayerActivationsToOutput

writeLayerActivationsToOutput(
  layerActivations: number[] | undefined,
  outputBuffer: ActivationArray,
  outputSize: number,
): void

Copy final layer activations into the pooled network output buffer.

Parameters:

Returns: Nothing.

architecture/network/activate/network.activate.helpers.utils.ts

createBatchActivationContext

createBatchActivationContext(
  network: default,
  batchInputs: number[][],
  isTraining: boolean,
): BatchActivationContext

Build shared batch activation context for helper orchestration.

Parameters:

Returns: Fully populated batch activation context.

createNoTraceActivationContext

createNoTraceActivationContext(
  network: default,
  inputVector: number[],
): NoTraceActivationContext

Build shared no-trace activation context for helper orchestration.

Parameters:

Returns: Fully populated no-trace activation context.

createRawActivationContext

createRawActivationContext(
  network: default,
  inputVector: number[],
  isTraining: boolean,
  maximumActivationDepth: number,
): RawActivationContext

Build shared raw activation context for helper orchestration.

Parameters:

Returns: Fully populated raw activation context.

executeBatchActivation

executeBatchActivation(
  activationContext: BatchActivationContext,
): number[][]

Execute mini-batch activation with top-level shape validation and per-row checks.

The orchestration keeps behavior deterministic by validating the container first, then validating each row before delegating to the core network activation function.

Parameters:

Returns: Matrix of activation outputs.

executeNoTraceActivation

executeNoTraceActivation(
  activationContext: NoTraceActivationContext,
): number[]

Execute no-trace activation with a fast-path attempt and deterministic fallback traversal.

The orchestration follows a strict sequence: refresh order guarantees, validate input shape, try fast slab inference, then compute outputs through node traversal when needed.

Parameters:

Returns: Output activation vector detached from pooled storage.

executeRawActivation

executeRawActivation(
  activationContext: RawActivationContext,
): number[]

Execute raw activation through the network delegate using a compact orchestration flow.

This helper keeps the exported activation method focused on context creation while this module owns the execution path and future branching behavior.

Parameters:

Returns: Activation output vector from the network delegate.

architecture/network/activate/network.activate.errors.ts

NetworkActivateBatchInputsCollectionError

Raised when batch activation receives a non-array collection.

NetworkActivateCorruptedStructureError

Raised when activation is attempted on a network with invalid node structure.

NetworkActivateInputSizeMismatchError

Raised when activation input dimensionality does not match network expectations.

architecture/network/activate/network.activate.raw.utils.ts

activateViaNetworkDelegate

activateViaNetworkDelegate(
  activationContext: RawActivationContext,
): number[]

Delegate raw activation to the core network activation implementation.

Parameters:

Returns: Activation output vector.

activateWithSelectedReusePath

activateWithSelectedReusePath(
  activationContext: RawActivationContext,
): number[]

Select the raw activation execution path based on runtime reuse configuration.

Parameters:

Returns: Activation output vector.

executeRawActivation

executeRawActivation(
  activationContext: RawActivationContext,
): number[]

Execute raw activation through the network delegate using a compact orchestration flow.

This helper keeps the exported activation method focused on context creation while this module owns the execution path and future branching behavior.

Parameters:

Returns: Activation output vector from the network delegate.

architecture/network/activate/network.activate.batch.utils.ts

activateSingleBatchRow

activateSingleBatchRow(
  rowActivationContext: BatchRowActivationContext,
): number[]

Validate and activate one batch row.

Parameters:

Returns: Activation output vector for the row.

activateValidatedBatchRows

activateValidatedBatchRows(
  activationContext: BatchActivationContext,
): number[][]

Activate each row in a validated batch matrix.

Parameters:

Returns: Matrix of activation outputs.

assertBatchInputCollection

assertBatchInputCollection(
  batchInputs: number[][],
): void

Validate that the batch input collection is an array of rows.

Parameters:

Returns: Nothing.

assertBatchRowInputSize

assertBatchRowInputSize(
  rowActivationContext: BatchRowActivationContext,
): void

Validate one batch row dimensionality.

Parameters:

Returns: Nothing.

buildBatchRowInputSizeMismatchMessage

buildBatchRowInputSizeMismatchMessage(
  rowActivationContext: BatchRowActivationContext,
): string

Build a descriptive mismatch message for invalid batch row input dimensions.

Parameters:

Returns: Formatted error message for invalid row dimensionality.

executeBatchActivation

executeBatchActivation(
  activationContext: BatchActivationContext,
): number[][]

Execute mini-batch activation with top-level shape validation and per-row checks.

The orchestration keeps behavior deterministic by validating the container first, then validating each row before delegating to the core network activation function.

Parameters:

Returns: Matrix of activation outputs.

formatInputLengthForMessage

formatInputLengthForMessage(
  inputVector: number[],
): string

Convert input length into a display-safe string for error messaging.

Parameters:

Returns: Numeric length as string or predefined undefined text.

isBatchRowInputSizeValid

isBatchRowInputSizeValid(
  rowActivationContext: BatchRowActivationContext,
): boolean

Determine whether one batch row matches the expected input dimensionality.

Parameters:

Returns: True when row size is valid.

architecture/network/activate/network.activate.notrace.utils.ts

activateWithoutTraceUsingNodeIteration

activateWithoutTraceUsingNodeIteration(
  activationContext: NoTraceActivationContext,
): number[]

Execute no-trace activation through node traversal and pooled output collection.

Parameters:

Returns: Detached output activation vector.

assertInputMatchesNetworkInputSize

assertInputMatchesNetworkInputSize(
  activationContext: NoTraceActivationContext,
): void

Validate that the input vector length matches expected network input dimensionality.

Parameters:

Returns: Nothing.

buildInputSizeMismatchMessage

buildInputSizeMismatchMessage(
  activationContext: NoTraceActivationContext,
): string

Build a descriptive input mismatch message for activation validation errors.

Parameters:

Returns: Formatted mismatch error message.

canUseNoTraceFastSlab

canUseNoTraceFastSlab(
  activationContext: NoTraceActivationContext,
): boolean

Determine whether fast slab activation is available for no-trace execution mode.

Parameters:

Returns: True when slab execution is available for inference mode.

detachPooledOutputBuffer

detachPooledOutputBuffer(
  pooledOutputBuffer: ActivationArray,
): number[]

Clone pooled output storage into a detached plain array.

Parameters:

Returns: Detached output activation vector.

executeNoTraceActivation

executeNoTraceActivation(
  activationContext: NoTraceActivationContext,
): number[]

Execute no-trace activation with a fast-path attempt and deterministic fallback traversal.

The orchestration follows a strict sequence: refresh order guarantees, validate input shape, try fast slab inference, then compute outputs through node traversal when needed.

Parameters:

Returns: Output activation vector detached from pooled storage.

formatInputLengthForMessage

formatInputLengthForMessage(
  inputVector: number[],
): string

Convert input length into a display-safe string for error messaging.

Parameters:

Returns: Numeric length as string or predefined undefined text.

isInputVectorLengthValid

isInputVectorLengthValid(
  activationContext: NoTraceActivationContext,
): boolean

Check whether the input vector has a valid length for activation.

Parameters:

Returns: True when the input vector is an array with expected length.

refreshTopologicalOrderWhenRequired

refreshTopologicalOrderWhenRequired(
  activationContext: NoTraceActivationContext,
): void

Refresh cached topological order when acyclic mode is active and marked dirty.

Parameters:

Returns: Nothing.

tryActivateWithFastSlab

tryActivateWithFastSlab(
  activationContext: NoTraceActivationContext,
): number[] | null

Attempt fast slab activation and return null when slab execution is unavailable or fails.

Parameters:

Returns: Fast slab output when successful, otherwise null.

architecture/network/activate/network.activate.contexts.utils.ts

createBatchActivationContext

createBatchActivationContext(
  network: default,
  batchInputs: number[][],
  isTraining: boolean,
): BatchActivationContext

Build shared batch activation context for helper orchestration.

Parameters:

Returns: Fully populated batch activation context.

createNoTraceActivationContext

createNoTraceActivationContext(
  network: default,
  inputVector: number[],
): NoTraceActivationContext

Build shared no-trace activation context for helper orchestration.

Parameters:

Returns: Fully populated no-trace activation context.

createRawActivationContext

createRawActivationContext(
  network: default,
  inputVector: number[],
  isTraining: boolean,
  maximumActivationDepth: number,
): RawActivationContext

Build shared raw activation context for helper orchestration.

Parameters:

Returns: Fully populated raw activation context.

toNetworkInternals

toNetworkInternals(
  network: default,
): ActivateNetworkInternals

Convert a network instance into the activation internals interface used by helper modules.

Parameters:

Returns: Network internals view used by activation helper modules.

architecture/network/activate/network.activate.notrace.traversal.utils.ts

activateHiddenNode

activateHiddenNode(
  networkNode: default,
): void

Activate a hidden node without trace bookkeeping.

Parameters:

Returns: Nothing.

activateInputNode

activateInputNode(
  activationContext: SingleNodeNoTraceActivationContext,
): void

Activate an input node using the matching input vector value.

Parameters:

Returns: Nothing.

activateOutputNodeAndAdvanceIndex

activateOutputNodeAndAdvanceIndex(
  activationContext: SingleNodeNoTraceActivationContext,
): number

Activate an output node, write the activation value, and advance the output index.

Parameters:

Returns: Next output write index.

activateSingleNodeWithoutTrace

activateSingleNodeWithoutTrace(
  activationContext: SingleNodeNoTraceActivationContext,
): number

Activate one node and return the next output write index.

Parameters:

Returns: Updated output write index.

isInputNode

isInputNode(
  networkNode: default,
): boolean

Determine whether a node is an input-role node.

Parameters:

Returns: True when node role is input.

isOutputNode

isOutputNode(
  networkNode: default,
): boolean

Determine whether a node is an output-role node.

Parameters:

Returns: True when node role is output.

populatePooledOutputBufferFromNodes

populatePooledOutputBufferFromNodes(
  traversalContext: NoTraceNodeTraversalContext,
): void

Traverse nodes in activation order and write output activations into pooled storage.

This helper isolates traversal concerns from no-trace orchestration so the main flow can remain focused on high-level activation phases.

Parameters:

Returns: Nothing.

Generated from source JSDoc