
How These Pieces Came Together

LCC Fusion did not start with hardware. It started with an architectural idea.

When I first encountered NMRA Layout Command Control (LCC) several years ago, what stood out wasn’t the specification itself—it was the model. Inputs and outputs were intentionally decoupled. Devices didn’t need to know about each other. Everything communicated through something deliberately simple: events that were produced and consumed.

That single decision changed everything.

Events separated what happened from how it was detected and how it was acted upon. A button press, a sensor trigger, or a software condition could all produce the same event. Lights, sound, motion, or logic could all respond to it. Technologies stopped being tightly bound, and systems stopped collapsing under their own complexity.
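
A minimal, self-contained C++ sketch can make that decoupling concrete. It is not the OpenMRN or LCC API, and the event ID, bus, and handler names here are invented for the example; the shape, however, is the pattern described above: producers and consumers share only an event identifier, never a reference to each other.

```cpp
#include <cstdint>
#include <functional>
#include <iostream>
#include <unordered_map>
#include <vector>

// Illustrative 64-bit event identifier (LCC events are 64-bit; this value is made up).
constexpr uint64_t EVENT_BLOCK_OCCUPIED = 0x0501010118230001ULL;

// A minimal event bus: producers publish event IDs, consumers subscribe to them.
// Neither side knows the other exists; the event ID is the only shared contract.
class EventBus {
public:
    using Handler = std::function<void()>;

    void subscribe(uint64_t event_id, Handler handler) {
        handlers_[event_id].push_back(std::move(handler));
    }

    void produce(uint64_t event_id) {
        for (auto& handler : handlers_[event_id]) handler();
    }

private:
    std::unordered_map<uint64_t, std::vector<Handler>> handlers_;
};

int main() {
    EventBus bus;

    // Consumers: a signal and a sound module both react to the same event.
    bus.subscribe(EVENT_BLOCK_OCCUPIED, [] { std::cout << "Signal: set block to red\n"; });
    bus.subscribe(EVENT_BLOCK_OCCUPIED, [] { std::cout << "Audio: play arrival announcement\n"; });

    // Producers: an optical sensor, a panel button, or a software condition
    // can all raise the same event without knowing who will respond.
    bus.produce(EVENT_BLOCK_OCCUPIED);
    return 0;
}
```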

LCC wasn’t just a protocol—it was an architectural statement.

That philosophy became immediately practical through OpenMRN and OpenMRNLite, open-source implementations of LCC that provided a solid, working foundation. OpenMRN made the event-driven model tangible. It was consumable, well documented, and structured in a way that encouraged understanding rather than obscuring it. Just as importantly, it was approachable—not only for developers, but for tooling, examples, and increasingly for AI-assisted reasoning.

With OpenMRN in place, the architectural direction was no longer theoretical. There was a real system demonstrating how loosely coupled devices could collaborate through events without hardcoded relationships.

Not long after, the ESP32 entered the picture. At an extremely low price point, it offered processing power, multiple communication paths, wireless connectivity, local storage, and a massive ecosystem of libraries and community knowledge. Espressif continued to evolve the platform, releasing new variants while maintaining compatibility. The ESP32 wasn’t just capable—it was expansive.

At the same time, AI-assisted development began collapsing the distance between ideas and implementation. Firmware behavior, electronics design, protocol interactions—things that once slowed experimentation became faster to reason about and iterate on. The constraint was no longer tooling. It was imagination and architectural discipline.

As these pieces converged, the system architecture matured into a multi-layer, four-tier model. Responsibilities were separated cleanly. Devices could specialize without becoming isolated. Sensors, logic, feedback, and communication infrastructure could evolve independently while still working together as a coherent system.

That convergence mattered.

With an event-driven foundation from LCC, a robust open-source implementation in OpenMRN, a powerful and inexpensive MCU platform, and an architecture designed for extension, the usual limits of automation began to fall away. New interaction ideas didn’t require redesigns. New devices didn’t force rewrites. Capabilities could be added without narrowing what already existed.

In other words, the system could scale in possibility, not just in size.


From Events to Interactions

Once everything in the system spoke the same language—events—the nature of automation changed.

An event does not describe a device. It describes meaning. Something happened. A condition changed. A request was made. By itself, an event carries no assumption about how it was generated or how it will be handled. That neutrality is what makes it powerful.

What became clear over time was that events were not just a communication mechanism—they were the foundation for interaction.

An interaction is what emerges when events are interpreted and acted upon in context. A sensor firing is not the interaction. A user approaching a scene is. A button press is not the interaction. An intent to trigger an action is. Events allow those meanings to exist independently of the hardware used to detect them.
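
To illustrate that distinction, the sketch below turns raw readings into a semantic event. The ApproachDetector class, the distance threshold, and the event name are hypothetical stand-ins rather than anything from LCC or LCC Fusion; the point is that consumers only ever see "a visitor approached the scene," never the raw ultrasonic distances that produced it.

```cpp
#include <cstdint>
#include <iostream>

// Hypothetical semantic event: "a visitor approached the scene" (value is made up).
constexpr uint64_t EVENT_VISITOR_APPROACHED = 0x0501010118230002ULL;

// Interpret raw ultrasonic readings in context: only sustained proximity
// becomes an interaction. Downstream consumers never see raw distances.
class ApproachDetector {
public:
    explicit ApproachDetector(void (*produce)(uint64_t)) : produce_(produce) {}

    void on_distance_cm(int distance) {
        // Require several consecutive close readings before declaring an approach.
        close_count_ = (distance < 80) ? close_count_ + 1 : 0;
        if (close_count_ == 3) produce_(EVENT_VISITOR_APPROACHED);
    }

private:
    void (*produce_)(uint64_t);
    int close_count_ = 0;
};

int main() {
    ApproachDetector detector([](uint64_t) { std::cout << "Interaction: visitor approached\n"; });
    // Fires once, at the third consecutive close reading.
    for (int d : {200, 150, 70, 65, 60, 58}) detector.on_distance_cm(d);
}
```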

This distinction mattered because it prevented the system from becoming device-centric. Sensors could change. Detection methods could evolve. New technologies could be introduced. As long as they produced the same events, the rest of the system remained stable.

The same was true on the output side. Responses did not need to be hardwired to inputs. A single event could result in light, sound, motion, or voice—depending on context, configuration, or user preference. Feedback became expressive rather than binary, and the system could explain itself instead of acting silently.
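
A small sketch of that fan-out, assuming an invented FeedbackConfig struct standing in for whatever per-layout configuration a builder actually uses: one event arrives, and whichever responses are enabled fire, none of them known to the producer.

```cpp
#include <iostream>
#include <string>

// Hypothetical per-layout feedback preferences; the names are illustrative.
struct FeedbackConfig {
    bool lights = true;
    bool sound  = true;
    bool voice  = false;
};

// One consumer interprets a single event and fans it out to whichever
// feedback channels the builder has enabled. The producer never changes.
void on_train_arrived(const FeedbackConfig& cfg, const std::string& station) {
    if (cfg.lights) std::cout << "Platform lights on at " << station << "\n";
    if (cfg.sound)  std::cout << "Play arrival chime\n";
    if (cfg.voice)  std::cout << "Announce: \"Now arriving at " << station << "\"\n";
}

int main() {
    FeedbackConfig cfg{true, false, true};   // this layout prefers lights and voice
    on_train_arrived(cfg, "Riverside");      // same event, expressive response
}
```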

By treating events as the boundary between perception and response, automation stopped being a chain of devices and became a conversation between the system, the environment, and the user.

That shift—from devices reacting to signals, to systems participating in interactions—is what made it possible to keep adding new capabilities without increasing complexity.


Turning Interactions Into Real Systems

An interaction-centric architecture only matters if it holds up under real use.

Over time, LCC Fusion moved well beyond a single class of devices or a narrow automation problem. Detection, signaling, train control, and layout feedback were obvious early goals—but they were only the beginning. As the system matured, new forms of interaction were added, often driven by questions like “What would make this feel more real?” or “How would a person expect the layout to respond here?”

Those additions came from many directions.

Some interactions are physical. Optical sensors—sometimes using fiber optics—detect the presence and position of trains in places where traditional methods struggle. Ultrasonic sensors detect approach or proximity. NFC tags identify specific rolling stock or trigger behavior based on what is actually present, not just where it is.

Some interactions are human-initiated. Buttons, panels, and controls provide direct input. Voice assistants such as Alexa allow spoken commands to produce the same events as physical controls. The system does not care whether an instruction came from a switch, a sensor, or a voice command—only that an interaction occurred.

Other interactions are expressive. Configured text messages are converted to speech and played through speakers, announcing arrivals, departures, or system state in ways that mirror real-world environments. Sound effects and audio cues provide confirmation and feedback, reducing ambiguity and making the system’s behavior understandable without consulting a screen or manual.

What matters is not the individual technologies. Many of these components exist elsewhere in isolation. What matters is that none of them required architectural change.

Each of these capabilities—optical detection, ultrasonic sensing, NFC identification, voice control, audio feedback—was integrated by producing and consuming events. Cards specialize in detection, control, or feedback. Breakout boards adapt those cards to real-world devices. The interaction model remains the same.
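
The sketch below, using the same invented event-bus style as earlier (the card types and event ID are illustrative, not LCC Fusion's actual firmware), shows why that holds: adding a new detection technology amounts to writing another producer of an existing event, while every consumer stays untouched.

```cpp
#include <cstdint>
#include <functional>
#include <iostream>

// Illustrative event ID; LCC event IDs are 64-bit, but this value is made up.
constexpr uint64_t EVENT_TRAIN_AT_STATION = 0x0501010118230003ULL;

// The rest of the system only sees this callback type: "an event was produced".
using EventSink = std::function<void(uint64_t)>;

// Existing detection card: an optical sensor that reports presence.
struct OpticalSensorCard {
    EventSink produce;
    void on_beam_broken() { produce(EVENT_TRAIN_AT_STATION); }
};

// New detection card added later: an NFC reader that identifies rolling stock.
// It produces the same event, so signaling, sound, and logic consumers are untouched.
struct NfcReaderCard {
    EventSink produce;
    void on_tag_read(const char* tag_id) {
        std::cout << "NFC tag " << tag_id << " detected\n";
        produce(EVENT_TRAIN_AT_STATION);
    }
};

int main() {
    auto sink = [](uint64_t id) {
        std::cout << "Event 0x" << std::hex << id << " consumed by existing logic\n";
    };
    OpticalSensorCard optical{sink};
    NfcReaderCard nfc{sink};
    optical.on_beam_broken();
    nfc.on_tag_read("4A:3F:91:02");
}
```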

This is where the layered architecture earns its keep.

By separating detection, logic, and response across dedicated cards and breakout boards, new interaction methods can be added without entangling existing ones. A new sensor does not require rewriting signaling logic. Adding audio feedback does not disturb detection. Voice control does not replace physical controls—it complements them.

The result is a system where interaction richness grows without increasing fragility. Builders can choose which interaction paths make sense for their layout, knowing that each one fits into the same underlying model.

That is the practical payoff of designing around interactions instead of devices.


Carrying the Journey Forward

This journey—from LCC events, to a flexible architecture, to interaction-rich automation—now shapes how I look at every new electronics project.

When I read an article on Hackaday.io, I no longer see a standalone device or a clever circuit in isolation. Instead, I step back and ask a different set of questions. What interaction does this enable? What does it sense, control, or explain? How would this apply to the real-world behaviors and workflows found in model railroading?

More importantly, I ask whether it could fit into a larger system without standing alone.

Could this capability be expressed as events that are produced or consumed?
Could it be implemented as a card and a breakout board that integrate cleanly with what already exists?
Could it participate in the same interaction model rather than introducing a parallel one?

That mental shift is the real outcome of this architecture. It turns experimentation into integration. New ideas are no longer disruptive—they are candidates. If they fit the interaction model, they belong. If they do not, they remain useful, but isolated.

AI-assisted tools now play a role in this process as well—not just writing code or designing circuits, but helping ask how, why, and where a new idea should live within the system, and whether it benefits others beyond a single build.

LCC Fusion is not finished, and it is not meant to be. It is a framework designed so that the next good idea already has a place to belong.

That, ultimately, is the goal: not to predict every feature, but to build a system where integration is intentional, flexibility is preserved, and interaction—not hardware—is the organizing principle.