News
M5Stack News.

Feb. 6, 2026 – M5Stack, a global leader in modular IoT and embedded development platforms, today announced the release of the AI Pyramid series, comprising two models: AI Pyramid and AI Pyramid-Pro.

 

 

AI Pyramid is a pyramid-shaped, high-performance AI PC for local AI inference and edge computing. Powered by the Axera AX8850 SoC with an 8-core Cortex-A55 CPU and 24 TOPS INT8 NPU, it efficiently handles workloads such as real-time computer vision, multimodal interaction, and on-device large model inference, while providing robust video processing and flexible connectivity.

 

Targeting AIPC, edge intelligent terminals, and smart interactive devices, AI Pyramid delivers stable and reliable performance for fully localized AI applications while eliminating reliance on cloud services.

 

Key Features

 

Built for Local AI and Edge Computing
AI Pyramid runs AI inference entirely on-device, ensuring low latency, data privacy, and reliable offline operation. Its flexible architecture supports deployment in smart terminals, interactive devices, and edge environments where consistent performance is essential.

On-Device AI Performance & Multimodal Parallelism
Equipped with a 24 TOPS INT8 NPU, AI Pyramid delivers powerful on-device AI compute for vision, speech, and language tasks in parallel. Its heterogeneous architecture enables developers to deploy and scale mainstream AI models—including Transformers and LLMs—efficiently from prototype to production. 

Hardware-Accelerated AI Video Processing

AI Pyramid’s hardware-accelerated video engine, combined with 4GB LPDDR4x memory (8GB for AI Pyramid-Pro), supports multi-stream video processing. It can decode up to 16 channels of 1080p video in parallel while running object detection, face recognition, or multimodal analytics on each stream, ensuring low latency, reliable local processing, and on-device AI acceleration.


Flexible Connectivity and Expansion

For connectivity and expansion, AI Pyramid features HD multimedia interfaces (1× input + 1× output on AI Pyramid-Pro), dual Gigabit Ethernet ports, and both USB 3.0 and USB Type-C interfaces, offering excellent flexibility for display output, networking, and peripheral expansion.

 

Detailed Specs

 

 

Endless Possibilities

 

With its strong on-device AI computing capability and open, developer-friendly ecosystem, AI Pyramid is well suited for AIPC and edge intelligent terminals. It can be deployed as a smart interactive device, enabling applications such as smart home control (Home Assistant integration), on-device AIGC, voice cloning, and meeting transcription, as well as serving as an AI visual gateway, a local AI photo management platform (Immich), or an AI smart security system (Frigate).

 

By combining flexible hardware design with reliable on-device AI performance, AI Pyramid provides a solid foundation for developers and makers to build scalable, privacy-preserving AI applications across both desktop and edge environments. Its official launch marks the beginning of new community-driven innovations and the dawn of a new edge AI era.

2026-02-06

 

Hardware and Design

 

The Cardputer-Adv is an enhanced iteration of the small-form-factor computer powered by the Espressif ESP32-S3 microcontroller. In essence, the Cardputer-Adv is a slightly redesigned version of the original. Side by side, they differ visually only in color—the new model is white, while the previous one was light gray. The shape, design, and general purpose remain identical. The "brain" of the system is still a Stamp series development board, now upgraded to the Stamp-S3A. Compared to the Stamp-S3 found in the predecessor, the "A" revision features a redesigned 3D antenna for improved connectivity and a "softer," more responsive Reset button. Note that this button is covered by a sticker, making it somewhat awkward to press. Other changes include internal LED wiring and lower power consumption. The core remains the ESP32-S3FN8 microcontroller with 8MB of Flash and 23 GPIO pins. As we have covered the ESP32-S3 extensively in previous articles, we will not repeat those technical details here. The USB-C port is used for programming the Stamp, power delivery, and charging the integrated battery.

 


 

Display, Keyboard, and Audio

 

The Stamp-S3A connects to the motherboard via two header rows and interfaces with the display via an FPC connector. The screen is the same color IPS LCD used previously (ST7789V2, 240×135 resolution, 1.14 inches). A defining feature of this computer is its 4×14 (56 keys) QWERTY keyboard. The keys are significantly improved with a different tactile feel (260gf vs. 160gf actuation force). Many keys serve dual purposes via 'Fn', 'Aa', 'Ctrl', 'Opt', and 'Alt' modifiers. Keyboard scanning is now handled by the TCA8418 integrated circuit.

The audio subsystem has undergone significant changes. The ES8311 codec replaces the previous NS4168 and SPM1423 combination, resulting in superior microphone noise reduction. Combined with the NS4150B amplifier and a 1W speaker (located standardly beneath the Stamp), the output quality is markedly better. Furthermore, the Cardputer-Adv now includes a 3.5mm audio jack on the side for headphone connectivity.

 

 

Power and Connectivity

 

The Cardputer-Adv can be powered via USB-C or the internal battery. This version replaces the two smaller cells of the original with a single, larger 1750mAh battery, managed by the TP4057 charging IC. Like its predecessor, the Cardputer-Adv features a GROVE port (supporting I2C and 5V). A small adjacent switch allows the user to toggle the 5V line direction: the Cardputer can either power an external sensor or be powered by an external source.
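If you want to check what is attached to the GROVE port, a quick I2C bus scan from MicroPython is enough. This is a minimal sketch; the SDA/SCL pin numbers below are placeholders, so look them up in the Cardputer-Adv pin map before running it.

# Hypothetical I2C scan over the GROVE port (MicroPython on the ESP32-S3).
from machine import I2C, Pin

GROVE_SDA = 2   # placeholder - check the Cardputer-Adv pin map
GROVE_SCL = 1   # placeholder - check the Cardputer-Adv pin map

i2c = I2C(0, sda=Pin(GROVE_SDA), scl=Pin(GROVE_SCL), freq=100000)
print("I2C devices found:", [hex(addr) for addr in i2c.scan()])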

While the original Cardputer relied solely on the GROVE port for expansion, the Cardputer-Adv introduces an additional 2×7-pin header (UART, I2C, SPI) on the rear for connecting peripheral devices. M5Stack continues to use the GROVE connector for its extensive ecosystem of "Unit" expansion modules.

 


 

Sensors and Modules

 

New features include the BMI270 six-axis motion sensor (IMU). The device retains the physical power switch, 'Boot' and 'Reset' buttons, an infrared (IR) LED, and a Micro-SD slot. Examining the PCB reveals a layout largely identical to the original; it even retains an unpopulated JST connector for a smaller battery. Interestingly, there is an unconnected FPC connector near the 3.5mm jack for which we found no official documentation. The Cardputer-Adv maintains its Lego-compatible mounting holes (though there is one row fewer on the back) and internal magnets, allowing it to be mounted on metal surfaces like a refrigerator door.

Along with the Cardputer-Adv, we received the CAP LoRa868 expansion module (the updated version is now the CAP LoRa-1262), designed to interface via the 2×7-pin header. The CAP module features a matching plastic enclosure and contains two primary components: an 868MHz LoRa module (based on the SX1262 chip) with an SMA connector for an external antenna, and an AT6668-based GNSS module supporting GPS, BeiDou (BD2/BD3), GLONASS, Galileo, and QZSS.

 


 

Software and Programming

 

The Cardputer-Adv can be programmed using Arduino IDE, ESP-IDF, PlatformIO, or the manufacturer-recommended UiFlow2. UiFlow2 is a block-based visual programming environment, making it an excellent educational tool for introducing children to microcontrollers and electronics. The interface offers "Blocks," "Split," and "Python" views. In "Split" mode, users can see how dragging blocks generates real-time Python code—a bridge that helps beginners transition to text-based programming. To use this online tool, the UiFlow2 firmware must first be flashed onto the device using the M5Burner utility.
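To illustrate what the Python view shows, here is a simplified, hypothetical example of the kind of code UiFlow2 generates for a single Label block; the exact module names and the Widgets.Label() arguments vary with the UiFlow2 firmware version, so treat it as a sketch rather than canonical output.

# Hypothetical UiFlow2-style MicroPython sketch for the Cardputer-Adv.
import M5
from M5 import Widgets

label = None

def setup():
    # Initialize the hardware and draw a label on the 240x135 LCD.
    global label
    M5.begin()
    Widgets.fillScreen(0x222222)
    label = Widgets.Label("Hello, Cardputer-Adv!", 10, 55, 1.0,
                          0xFFFFFF, 0x222222, Widgets.FONTS.DejaVu18)

def loop():
    # Poll buttons/keyboard and keep the framework state up to date.
    M5.update()

if __name__ == "__main__":
    setup()
    while True:
        loop()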

Several pre-configured examples are available via M5Burner, including community-driven projects. One highlight is Meshtastic for Cardputer-Adv, which integrates seamlessly with the Meshtastic mobile app for LoRa-based mesh networking and precise GPS mapping. The firmware provides a comprehensive menu for managing hardware segments like LoRa, GPS, and system time.

 

 

Conclusion

 

Additional examples include M5Launcher, which allows users to execute BIN files directly from the Micro-SD card. The factory demo provides a comprehensive hardware test. For those using the Arduino environment, extensive support is available via M5Stack libraries.

The Cardputer-Adv is exactly what its name suggests: a sophisticated, credit-card-sized computer with meaningful upgrades over the original. The CAP expansion module (e.g., Cap LoRa-1262) is a powerful addition, and the new 2×7-pin header opens endless possibilities for hardware hackers.

 

Source: SK LABS
Author: Dejan Petrovic

2026-02-04

Jan. 23, 2026 – M5Stack, a global leader in modular IoT and embedded development platforms, today announced the launch of StickS3, the latest addition to its signature Stick series.

 

 

Built for developers and makers, StickS3 is a compact, high-performance programmable controller designed for remote control and IoT applications. Powered by the ESP32-S3-PICO-1-N8R8, it delivers enhanced performance and optimized interaction capabilities, making it well suited for smart devices and a wide range of IoT development scenarios.

 

To enable these capabilities, StickS3 introduces a series of meaningful hardware and capability upgrades:

 

Key Features 

Upgraded Core

Powered by ESP32-S3-PICO-1-N8R8, StickS3 ensures higher processing performance, USB-OTG support, Bluetooth LE 5, and improved suitability for AI and edge computing applications, compared with the ESP32-PICO-V3-02 used in StickC-Plus2. 

Enhanced Audio

StickS3 upgrades its audio system with an ES8311 mono audio codec, a MEMS microphone, and an onboard speaker, enabling true audio input and output for voice interaction and audio playback. 

Extended Battery Life

StickS3 is equipped with a 250 mAh internal battery, providing stable power support for extended interactive use cases. 

Expanded Memory

StickS3 comes with 8 MB Flash and 8 MB PSRAM, providing significantly expanded program storage and runtime memory. 

Enhanced Expansion Options

StickS3 features the newly added Hat2-Bus (2.54-16P), giving access to more GPIO and communication resources, along with an integrated IR receiver for infrared control.

  

Detailed Specification

 

 

Development Potential

Beyond the hardware upgrades, the real strength of the Stick series lies in its development ecosystem and community creativity. With its compact form factor and ease of use, the Stick Series has become a signature product line within the M5Stack family, offering rich development potential and a wide range of use cases—especially for beginners.

 

Taking StickC-Plus2 as an example, the community has built a diverse collection of projects and custom firmware around it. Thanks to its portable design, it has been widely adopted for wearable and handheld applications, enabling creators to turn it into anything from a smartwatch to a security-testing tool.

 

Now, with the release of StickS3, a more powerful and highly integrated evolution of the series, it’s time to unlock even greater possibilities—driven by both the enhanced hardware and the passionate community behind Stick.

2026-01-23

Jan. 2026 – M5Stack officially announced its collaboration with SquareLine Vision, a next-generation UI design platform built to simplify and accelerate graphical interface development for embedded systems. As part of this partnership, SquareLine Vision has completed device adaptation for selected M5Stack products, enabling developers to design, preview, and deploy UIs more efficiently.

With this integration, M5Stack Core2 and Tab5 are now fully supported in SquareLine Vision v1.2.1, allowing users to directly select these devices when designing interfaces. This marks an important step toward lowering the barrier to embedded UI development and streamlining the workflow from design to deployment.

 

Experience Effortless UI Development with SquareLine Vision

SquareLine Vision provides a visual, drag-and-drop environment for creating user interfaces quickly and easily. Developers can focus on layout, interaction, and user experience while SquareLine Vision generates production-ready code behind the scenes. This approach helps shorten development cycles and makes UI iteration faster and more intuitive. 

Seamless Hardware Integration for Further Development

The completed integration ensures that UI designs created in SquareLine Vision can be deployed seamlessly on M5Stack devices, with display resolution, touch input, and hardware characteristics already accounted for. This reduces the burden of hardware–software integration and allows developers to focus on further development. 

Supported devices include:

Core2 — a second-generation M5Stack Core controller featuring an ESP32-D0WDQ6-V3 dual-core LX6 CPU (up to 240 MHz), 16 MB Flash, 8 MB PSRAM, Wi-Fi connectivity, and a 2.0-inch capacitive touch screen, with programming supported via USB Type-C. 

Tab5 — a highly expandable, portable smart IoT terminal designed for developers, integrating a dual-core architecture and rich hardware resources. It is built around the ESP32-P4 SoC based on the RISC-V architecture, featuring 16MB Flash and 32MB PSRAM for high-performance application development. 

By combining M5Stack’s modular hardware ecosystem with SquareLine Vision’s effortless UI design workflow, developers can now move from concept to functional interface with greater speed, clarity, and confidence. 

About M5Stack

M5Stack is a leading provider of modular, open-source IoT development solutions. Our stackable hardware and intuitive programming platform empower developers and businesses to accelerate innovation and rapidly prototype solutions for IIoT, home automation, smart retail, and STEM education. 

About SquareLine Vision

SquareLine is a next-generation UI editor for individuals and professionals to design and develop beautiful UIs for embedded devices quickly and easily. It offers a unique cross-platform tool that enables design and implementation in one place.

2026-01-20

Jan. 13, 2026 — M5Stack, a global leader in modular IoT and embedded development platforms, today launched StackChan, the first community-co-created open-source AI desktop robot, built on a proven ESP32 platform and designed to be endlessly hackable by makers worldwide.

 

Unlike closed, concept-driven AI robots, StackChan exposes its hardware, firmware, and interaction logic from day one — turning a playful desktop companion into a real development platform.

 

StackChan is now live on Kickstarter with a $65 Super Early Bird offer available for the first 72 hours.

 

 

From Community to the Globe: How StackChan Was Born

 

Before its official launch by M5Stack, StackChan had already existed as a community-driven project since 2021. Built on the M5Stack standard Core series controllers, it began as a personal open-source project by maker Shinya Ishikawa, sustained and shaped through ongoing community contributions.

 

As more enthusiasts joined the project, contributors like Takao, who helped popularize the DIY kits, and Robo8080, who introduced AI capabilities, played key roles in expanding StackChan beyond its original form. 

 

Inspired by StackChan’s expandability and creative potential, M5Stack officially brought the project to life as its first ready-to-play yet endlessly hackable desktop robot—while keeping its community-driven spirit at the core.

 

What Remains: Core Computing & Interaction Capabilities

 

As with the original version, StackChan continues to use the M5Stack flagship Core Series (CoreS3) as its main controller. CoreS3 is powered by an ESP32-S3 SoC with a 240 MHz dual-core processor, 16 MB Flash, and 8 MB PSRAM, and supports both Wi-Fi and BLE connectivity.

To enable richer interactions, the main unit integrates a 2.0-inch capacitive touch display, a 0.3 MP camera, a proximity sensor, and a 9-axis IMU (accelerometer + gyroscope + magnetometer). It also includes a microSD card slot, a 1W speaker, dual microphones, and power/reset buttons. Together, these hardware components form a solid foundation for StackChan's audio-visual interactive experiences.

For more technical details, please refer to the StackChan documentation: https://docs.m5stack.com/en/StackChan

 

What’s New: Ready-to-Play Functions Powered by Advanced Hardware

 

For the robot body, several advancements have been made to make it easier to get hands-on and to improve the out-of-the-box experience. It features:

Power & connectivity: A USB-C interface for both power and data, paired with a built-in 700 mAh battery.

Movement system: 2 feedback servos supporting 360° continuous rotation on the horizontal axis and 90° vertical tilt—enabling expressive movements with real-time position feedback.

Visual feedback: 2 rows totaling 12 RGB LEDs for expressive system and interaction feedback.

Sensors & interaction: Infrared transmission and reception, a three-zone touch panel, and a full-featured NFC module enabling touch- and identity-based interactions. 

On the software side, StackChan is ready-to-play for starters with no coding required. The pre-installed factory firmware delivers:

Expressive faces and motions: Preloaded with vivid facial expressions and coordinated movements that bring personality and liveliness to StackChan.

Built-in AI agent: Integrates an AI agent for natural voice-based interaction and conversational experiences.


 

App-based remote interaction: Supports official iOS app for video calls, remote avatar control, and real-time interaction with StackChan.

 

Chan-to-Chan Friends Map: Enables discovery of nearby StackChan devices, unlocking playful multi-device and social interaction scenarios.

 

Open for customization: While beginner-friendly by default, the firmware supports further development via Arduino and UiFlow2, making it easy to create custom applications.

 

100% Open-Source: Built to Be Customized and Extended

 

In an era filled with closed, concept-driven “AI robot” products, StackChan stands out with its open-source core. From firmware and hardware interfaces to development tools, every layer is designed to be explored, modified, and extended by users. 

Beyond code, StackChan also encourages physical customization. With 3D printing and creative accessories, users can personalize their StackChan’s appearance and turn it into a unique desktop companion. 

Open-source repository: https://github.com/m5stack/StackChan

Fun with Global Community: Share, Extend, and Evolve Together

 

Since its birth, StackChan has grown into a vibrant global community of makers, developers, and enthusiasts. From sharing projects and source code online to hosting meetups and anniversary events offline, the community continues to expand what StackChan can be. 

Owning a StackChan is not just about building a robot—it’s about being part of an open ecosystem where ideas and creativity evolve together. 

StackChan is not finished at launch. It is built to grow—through open technology, creative experimentation, and a global community that continues to redefine what a desktop robot can be.

Discover your StackChan on Kickstarter now: https://www.kickstarter.com/projects/m5stack/stackchan-the-first-co-created-open-source-ai-desktop-robot?ref=d5iznw&utm_source=PR

2026-01-14

When using M5Stack Modules or Bases, many users run into a common problem:

The same module, when stacked on different controllers (such as Basic, Core2, CoreS3, Tab5, etc.), uses different pin definitions. So, how should you correctly configure the pin numbers in your code?

If you have the same question, then understanding how the M5-Bus and DIP switches work is crucial.

This article will explain in a clear and practical way:

  • What is M5-Bus
  • Fixed Function Pins
  • What is a DIP Switch 

By the end, you should have a much clearer idea of how to set the DIP switches on a module, and how to configure the corresponding pin numbers in your program.

 

 

 

 

1. What is M5-Bus

M5-Bus is a stack expansion bus design adopted by M5Stack stacking series products (Module, Base). The interface uses 2x15P@2.54mm pin headers/sockets. The Core series controllers can quickly stack different modules via the M5-Bus to achieve functional expansion. Its fixed positions define power pins such as GND, 5V, 3V3, and BAT, ensuring compatibility with various devices; other pins vary depending on the controller model, so you need to configure your program according to the actual pin mapping.

 2. Fixed Function Pins 

The pin numbers of M5-Bus are fixed starting from the GND pin at the top left corner, numbered from 1 to 30. This sequence is consistent across all controllers. The pins marked with a red box are fixed-function pins (power and GND, etc.), while other pins may have different functions or GPIO mappings depending on the main controller.


3. What is a DIP Switch

A DIP Switch is a toggle switch. It is used to flexibly change the connection of key module pins to adapt to different controller models. For example, in the case of Module GPS v2.0, there are three switchable pins: TXD, RXD, and PPS. Two onboard DIP switches control which pins these signals are connected to.

DIP Switch1's switches 1–4 control TXD and switches 5–8 control RXD; DIP Switch2 controls PPS.

To avoid pin conflicts, each signal typically only needs to be switched to one pin, based on actual usage requirements. For example, in the following configuration, the 1st and 5th switches on DIP Switch1 are set to ON, the 2nd switch on DIP Switch2 is set to ON, and all other switches are set to OFF.

Based on the PCB silkscreen reference:

  • For Basic, the G17 pin; for Core2, the G14 pin; for CoreS3, the G17 pin will be connected to TXD.
  • For Basic, the G16 pin; for Core2, the G13 pin; for CoreS3, the G18 pin will be connected to RXD.
  • For Basic/Core2, the G35 pin; for CoreS3, the G10 pin will be connected to PPS.

When programming the device, you must modify the corresponding pin configuration according to the actual pin connections.
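To make that concrete, here is a minimal MicroPython sketch (assuming a UiFlow2/MicroPython firmware on the controller) that turns the silkscreen table above into UART pin numbers. The TXD/RXD orientation (labeled from the GPS module's point of view) and the 9600 baud rate are assumptions to verify against the Module GPS v2.0 documentation.

# Hypothetical pin selection for Module GPS v2.0, per the DIP switch setting above.
from machine import UART

# Controller -> (pin wired to module TXD, pin wired to module RXD),
# taken from the silkscreen table above.
GPS_PINS = {
    "Basic":  (17, 16),
    "Core2":  (14, 13),
    "CoreS3": (17, 18),
}

controller = "CoreS3"            # change to match your controller
txd_pin, rxd_pin = GPS_PINS[controller]

# The controller receives on the module's TXD line and transmits on its RXD line.
uart = UART(1, baudrate=9600, rx=txd_pin, tx=rxd_pin)

while True:
    line = uart.readline()       # raw NMEA sentences from the GPS module
    if line:
        print(line)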


The DIP switch's positions and numbering relative to the M5-Bus are fixed (indicated by the blue box).

If the PCB silkscreen's I/O reference table does not include the controller model you are currently using, you can refer to the existing devices' silkscreen PinMap to identify which M5-Bus pins the DIP switch connects to, and then map those to the corresponding pins of your current controller.


 4. Quick Question Time

When using Module LoRa868 with Tab5, with the DIP switches set as shown in the picture, which Tab5 pins are used for NSS, BUSY, RST, and IRQ, respectively?


 Answer

5. Wrapping Up: Treat the DIP Switch as a "Hardware-Level Remapping Tool"

A DIP switch essentially gives you a form of hardware-level pin remapping:
The same module can be used with different Controllers, while routing key signals (TXD, RXD, PPS, etc.) to the most suitable GPIO pins.

In practice, if the module’s silkscreen or documentation already specifies how to set the DIP switches and which pins to use for your controller (for example, Core2, CoreS3), you can simply:

  • Set the DIP switches according to the instructions, and
  • Use the same pin numbers in your code as indicated in the documentation.

If your host controller is not listed, you can follow this simple procedure:

  • Check the module silkscreen/documentation
    Identify which function each DIP switch group corresponds to.
  • Refer to the Controller’s PinMap
    Find out which GPIO pins on your current Controller correspond to those M5-Bus pins.
  • Set the DIP switches
    Ensure that each functional signal is routed to exactly one target pin, avoiding conflicts or duplicate connections.
  • Update the pin definitions in your code
    Make sure the pins used for UART, interrupts, etc., match the GPIO pins you determined in the previous steps.

Once you understand this workflow, you no longer need to memorize which module must be paired with which controller.
Instead, you can flexibly migrate and reuse modules across different controllers according to your actual needs.

2026-01-07

Setting up a voice assistant doesn’t have to be complicated. At M5Stack, we’re proud to bring this capability closer to developers and makers with the M5Stack CoreS3—a powerful ESP32-S3 based controller with integrated display, rich interfaces, and cutting-edge performance.

With M5Stack CoreS3, you can seamlessly integrate advanced voice control into your Home Assistant ecosystem, enjoy real-time responsiveness, and experience true local AI interaction—secure, reliable, and fast.

The following guide walks you step-by-step through the process of setting up the CoreS3 HA Voice Assistant, from environment installation to voice activation.

 

1. Environment Installation

  • In Settings -> Add-ons -> Add-on Store, install the ESPHome addon.
  • After successfully installing the ESPHome addon, enable Show in sidebar on the ESPHome management page to add it to the left navigation bar.
🏷️ Note

Firmware compilation is based on ESPHome 2025.9.0/2025.10.0. If you face issues when compiling the project, consider switching to these versions.

2. Adding a Device

Open the ESPHome addon page and click NEW DEVICE in the lower-right corner to create a new device.


Click CONTINUE.

Select New Device Setup to create a new configuration file.

Give the configuration file a proper name.


Next, when selecting the device, uncheck Use recommended settings, then select ESP32-S3 and locate M5Stack CoreS3 in the list.

Copy the Home Assistant API Encryption Key for later use, then click Skip

3. Configuring the Device

Click EDIT in the lower-left corner of the device to modify the Wi-Fi connection configuration. (The Wi-Fi configuration defaults to the current HA server's Wi-Fi settings, but you can also set it directly in plain text: ssid: "xxxx".)

Add the following packages entry to add voice assistant functionality to the device.

packages:
  m5stack.cores3-voice-assistant: github://m5stack/esphome-yaml/common/cores3-satellite-base.yaml@main

Click SAVE and then INSTALL in the upper-right corner.

Select Manual Download to start compiling the firmware.

🏷️ Firmware Compilation Note:

Compiling firmware through HA can be resource-intensive. The first compilation may take a long time for resource downloads, depending on the device hosting the HA service and network quality.

4. Firmware Flashing

Saving the Firmware

  • After firmware compilation is complete, click the Download button and select the Modern Format firmware to download it to your local machine.
  • Use the ESPHome Web flashing tool to flash the firmware, or use a tool like esptool. The starting address for firmware flashing is 0x00.

🏷️ Note

You can click the Open ESPHome Web link in the download prompt.

Connecting for Flashing

Connect the CoreS3 device to your computer via a USB-C cable and press and hold the reset button until the green light turns on, then release it to enter download mode.

In ESPHome Web, click Connect to connect to the device and select the corresponding device port.

Click INSTALL and upload the *.bin file you compiled previously.

Click INSTALL again to begin flashing.

Wait until flashing completes successfully.

5. Confirming the New Device Configuration

After firmware flashing, the device will automatically connect to Wi-Fi. The Home Assistant service within the same local network will prompt for a new device discovery. In Notifications, select the new device and click Check it out -> CONFIGURE, then follow the pop-up steps to add the device to the specified area to complete the configuration. If you do not receive a new device notification, click Settings -> Devices & services to view device status.

Then, you should be able to configure your Voice Assistant, or you can skip this step and configure it later:

  • Test the wake word

  • Select an area

  • Select the pipeline

  • Finish the configuration


6. Waking Up the Device

After adding the device and completing the preparation steps for Home Assistant Cloud and Assist pipeline, you can now wake up the device using voice commands.

Demo video

With the steps above, your M5Stack CoreS3 has transformed into a fully functional Home Assistant voice terminal. Whether you use it to control lighting, monitor your environment, or communicate with other smart devices, CoreS3 bridges the gap between you and your smart home—bringing natural voice control to your fingertips.

M5Stack continues to empower developers with open, powerful, and beautifully designed hardware.
With CoreS3, you’re not just installing firmware—you’re giving your smart home a voice.

2025-12-03

On November 17, 2025, M5Stack once again hosted its annual Open Day, welcoming global users and enthusiasts. This year’s event reached a new milestone, bringing together more than 80 participants from 11 countries to explore the world of M5Stack innovation—where people build, share, and connect.

Over the course of the afternoon, guests explored our all-in-one facility, exchanged ideas face-to-face, and discovered what’s next in the M5Stack ecosystem.

Factory & Office Tour: From idea to device

M5Stack operates an all-in-one facility where product design, production, packaging, quality control and shipping are all managed under one roof. Open Day offered visitors a rare opportunity to see this entire workflow in action.

Guests were divided into three language groups—Chinese, English, and Japanese. Guided by our team, participants followed the full journey of M5Stack products, from design and production to testing and warehousing, gaining a deeper understanding of how each idea becomes a finished piece of hardware.

Highlights Session: New products, new tools, new recognition

After the tour, the meetup session brought everyone together.

Jimmy opened with a brief look back at M5Stack’s story—how a small modular hardware idea grew into a global developer platform—and shared key milestones achieved over the years.

The session then moved into a series of exciting highlights:

  • The debut introduction of the new Arduino Nesso N1 by Julian Caro Linares from Arduino
  • The latest updates on UiFlow2 and its expanding ecosystem
  • Recaps from Maker Faire Tokyo and Maker Faire Rome
  • M5Stack being honored with the 2025 Good Design Award

User Lightning Talks: Global creators on stage

This year’s Open Day also featured a strong lineup of international speakers. Contributors from China, Japan, Indonesia, and other regions presented projects spanning education, AI, robotics, digital tools, and more. 

The session showcased both award-winning professional works and small yet inspiring experiments from individual creators. Each presentation ignited new ideas, and every round of applause reflected the community’s appreciation for creativity and innovation.

Thank You for Being Part of Open Day

Every attendee played an essential role in making the event a success. Your passion, projects, and ideas are what give the M5Stack community its vibrancy. Thank you for traveling from around the world to join us in Shenzhen—to build, share, and connect together. 

Looking ahead, M5Stack will continue to:

  • Make IoT development more accessible and beginner-friendly
  • Provide powerful, modular hardware and intuitive software tools
  • Support an open, global community of developers, makers, and partners

We look forward to welcoming you again at the next M5Stack Open Day—see you in 2026!

 

2025-11-19

Nov. 12, 2025 – M5Stack, a global leader in modular IoT and embedded development platforms, today announced the Arduino Nesso N1, a compact and powerful development kit co-engineered with Arduino, the world's leading open-source hardware and software company.

Powered by the ESP32-C6, the Nesso N1 delivers multi-protocol connectivity including Wi-Fi® 6, Bluetooth® 5.3, Thread 1.4, Zigbee® 3.0, Matter, and LoRa® (850–960 MHz), all packed into a sleek, pre-assembled enclosure. With its 1.14-inch touch display, programmable buttons, IMU, IR transmitter, RGB LED, buzzer, and rechargeable battery, it offers an intuitive and interactive platform for rapid IoT development.

Highlights of Nesso N1

  • Powered by the ESP32-C6, a single-core 32-bit RISC-V CPU running up to 160 MHz for efficient, reliable performance
  • Equipped with 16 MB Nor Flash and 512 kB SRAM, providing ample memory for IoT applications
  • A 1.14″ touchscreen and programmable buttons for easy interaction
  • Built-in IMU, IR transmitter, RGB LED, and buzzer
  • Powered via rechargeable battery
  • Multiple connectivity options, including Bluetooth® 5.3, Zigbee 3.0, Matter, LoRa® (850-960MHz)
  • Support for Arduino Cloud for remote control and data visualization
  • USB-C, Grove, and Qwiic connectors for easy expansion with Arduino Modulino nodes or third-party modules

A Developer-Friendly, Extensible Ecosystem

Nesso N1 delivers robust hardware expandability, allowing developers to easily extend its capabilities through Grove and Qwiic connectors—compatible with Arduino Modulino nodes, third-party modules, and the full M5Stack sensor-hat ecosystem. It also integrates seamlessly with Arduino Cloud for remote monitoring and data visualization.

Complementing this is an open and flexible development environment. Developers can start coding with Nesso N1 in their preferred workflow, whether using Arduino IDE, MicroPython, or UiFlow—making it a perfect fit for everything from hobby projects to professional applications.

From plug-and-play compatibility to seamless connectivity and powerful programmability, it’s everything you need in a pocket-sized powerhouse. Check out Nesso N1 on the Arduino Store or explore Arduino Docs for documentation, examples, and specs. 

Advancing Innovation Through Collaboration

Arduino Nesso N1 marks a major step forward in the collaborative innovation between M5Stack and Arduino. Designed with the open-source community in mind, this joint effort expands the ecosystem with a more integrated, flexible, and developer-centric product experience. The launch of Nesso N1 is expected to inspire new ideas, accelerate project development, and support the diverse needs of developers worldwide.

2025-11-12

In today’s rapidly advancing world of intelligent applications, image and video management is evolving at an unprecedented pace.  

Imagine capturing breathtaking travel landscapes or precious moments of your child’s growth — and having your photos automatically categorized, tagged, and searchable via natural language. All processing happens locally, with no dependence on cloud servers, ensuring both speed and privacy. With the powerhouse performance of the M5Stack LLM‑8850 Card, bring your vision to life with an intelligent, deeply personalized photo album that’s uniquely yours.

The M5Stack LLM-8850 Card is an M.2 M-Key 2242 AI accelerator card designed for edge devices. It is a powerful yet energy-efficient AI edge computing module, purpose-built for multi-modal large models, on-device inference, and intelligent analysis. It delivers high-performance inference for both language and vision models, and can be deployed effortlessly across diverse devices to enable offline, private AI services.

In this article, we’ll show you how to build an intelligent photo management platform with M5Stack LLM-8850 Card, making the organization of your pictures and videos smarter, faster, and more secure.

To achieve this, we’ll leverage Immich, an open‑source self‑hosted photo and video management platform that supports automatic backup, intelligent search, and cross‑device access.

This post provides an introduction to the app usage. For the latest updates and detailed information, please visit Product Guide for LLM-8850 Card Application - Immich.


1. Manually download the program and upload it to the Raspberry Pi 5, or pull the model repository with the following command.

If git lfs is not installed, first refer to the git lfs installation guide.
git clone https://huggingface.co/AXERA-TECH/immich

File Description:

m5stack@raspberrypi:~/rsp/immich $ ls -lh
total 421M
drwxrwxr-x 2 m5stack m5stack 4.0K Oct 10 09:12 asset
-rw-rw-r-- 1 m5stack m5stack 421M Oct 10 09:20 ax-immich-server-aarch64.tar.gz
-rw-rw-r-- 1 m5stack m5stack    0 Oct 10 09:12 config.json
-rw-rw-r-- 1 m5stack m5stack 7.6K Oct 10 09:12 docker-deploy.zip
-rw-rw-r-- 1 m5stack m5stack 104K Oct 10 09:12 immich_ml-1.129.0-py3-none-any.whl
-rw-rw-r-- 1 m5stack m5stack 9.4K Oct 10 09:12 README.md
-rw-rw-r-- 1 m5stack m5stack  177 Oct 10 09:12 requirements.txt

2. Import the Docker image

If Docker is not installed, please refer to the Raspberry Pi Docker installation guide to install it first.
cd immich
docker load -i ax-immich-server-aarch64.tar.gz
3. Prepare the working directory

unzip docker-deploy.zip 
cp example.env .env

4.    Start the container

docker compose -f docker-compose.yml -f docker-compose.override.yml up -d

If it starts successfully, the output looks like this:

m5stack@raspberrypi:~/rsp/immich $ docker compose -f docker-compose.yml -f docker-compose.override.yml up -d
WARN[0000] /home/m5stack/rsp/immich/docker-compose.override.yml: the attribute `version` is obsolete, it will be ignored, please remove it to avoid potential confusion 
[+] Running 3/3
 ✔ Container immich_postgres  Started                                      1.0s 
 ✔ Container immich_redis     Started                                      0.9s 
 ✔ Container immich_server    Started                                      0.9s 

5.     Create a virtual environment

python -m venv mich

6.     Activate the virtual environment

source mich/bin/activate 

7.     Install dependency packages

pip install https://github.com/AXERA-TECH/pyaxengine/releases/download/0.1.3.rc1/axengine-0.1.3-py3-none-any.whl
pip install -r requirements.txt
pip install immich_ml-1.129.0-py3-none-any.whl # Precompiled package may be upgraded; use the actual file name.

8.     Start the immich_ml service

python -m immich_ml 

After running, you should see:

(mich) m5stack@raspberrypi:~/rsp/immich $ python -m immich_ml
[10/10/25 09:50:12] INFO     Starting gunicorn 23.0.0                           
[10/10/25 09:50:12] INFO     Listening at: http://[::]:3003 (8698)              
[10/10/25 09:50:12] INFO     Using worker: immich_ml.config.CustomUvicornWorker 
[10/10/25 09:50:12] INFO     Booting worker with pid: 8699                      
2025-10-10 09:50:13.589360675 [W:onnxruntime:Default, device_discovery.cc:164 DiscoverDevicesForPlatform] GPU device discovery failed: device_discovery.cc:89 ReadFileContents Failed to open file: "/sys/class/drm/card1/device/vendor"
[INFO] Available providers:  ['AXCLRTExecutionProvider']
/home/m5stack/rsp/immich/mich/lib/python3.11/site-packages/immich_ml/models/clip/cn_vocab.txt
[10/10/25 09:50:16] INFO     Started server process [8699]                      
[10/10/25 09:50:16] INFO     Waiting for application startup.                   
[10/10/25 09:50:16] INFO     Created in-memory cache with unloading after 300s  
                             of inactivity.                                     
[10/10/25 09:50:16] INFO     Initialized request thread pool with 4 threads.    
[10/10/25 09:50:16] INFO     Application startup complete.  

In your browser, enter the Raspberry Pi IP address and port 3003, for example: 192.168.20.27:3003

Note: The first visit requires registering an administrator account; the account and password are saved locally.

Once configured, you can upload images.

The first time, you need to configure the machine learning server. Refer to the diagram below to enter the configuration.

Set the URL to the Raspberry Pi IP address and port 3003, e.g., 192.168.20.27:3003.

If using Chinese search for the CLIP model, set it to ViT-L-14-336-CN__axera; for English search, set it to ViT-L-14-336__axera.

After setup, save the configuration.

The first time, you need to manually go to the Jobs tab and trigger SMART SEARCH.

Enter the description of the image in the search bar to retrieve relevant images.

Through this hands-on project, we’ve not only built a powerful smart photo album platform, but also experienced the exceptional performance of the M5Stack LLM‑8850 Card in edge AI computing. Whether setting up a private photo album on your Raspberry Pi or deploying intelligent image processing in security scenarios, the M5Stack LLM‑8850 Card delivers efficient, stable computing power you can rely on.


It brings AI closer to where your data resides, enabling faster, more secure processing and turning your ideas into reality. If you’re looking for a solution for on-device AI inference, give M5Stack LLM‑8850 Card a try — it might just become the core engine of your next project.

2025-11-03

With its outstanding modular design, excellent user experience, and innovative integration of hardware and software ecosystems, the M5Stack Series was honored with the 2025 Good Design Award!

The Good Design Award, founded in Japan in 1957, celebrates products that combine aesthetic design with social impact. This year, the M5Stack Series stood out among thousands of entries for its modular system, open ecosystem, and design that empowers creativity across all levels.

Designed for Innovation

M5Stack is a modular prototyping tool for turning ideas into real products fast. It combines a durable enclosure, standardized expansion modules, and support for multiple programming languages. Its GUI environment, UIFlow, lowers the barrier for beginners while serving professionals, making hardware development simpler and more efficient.

Since its launch in 2017, M5Stack has built a vibrant global community of developers and educators. The series has become a common sight not only in schools, research institutions, and maker activities but also across industrial and IoT innovation projects. In Japan alone, dozens of introductory books have been published on IoT and robotics development using M5Stack.

Today, the M5Stack Series encompasses more than 400 products, widely used in education, industry, and embedded systems, significantly enhancing rapid development and fostering innovation.

Evaluation Comments from Jury members

The Good Design Award 2025 judges commended M5Stack:

M5Stack is a development platform featuring a modular design that allows users to freely combine a diverse lineup of modules. With its highly refined enclosure and attention to detail, it offers a level of aesthetic appeal and usability unmatched by conventional microcontroller boards—reminding users of the joy of creating tools born from their own imagination.

Supported by an active global community, M5Stack has found widespread use across various fields—from education and hobbyist projects to industrial and embedded product applications. Its standardized expansion system and GUI-based tools make it accessible for both beginners and advanced developers alike, offering a broad entry point into creative technology development.

As programming becomes more approachable in the age of AI, M5Stack is expected to play a key role in unlocking creativity across diverse groups of people, opening up new possibilities for innovation and making.

The official announcement: Good Design Award 2025 — M5Stack Series

About GOOD DESIGN AWARD

GOOD DESIGN AWARD is a social movement to make people's lives, industries, and society more well-off through design. Since its start in 1957, it has gained widespread support along with its logo G Mark. GOOD DESIGN AWARD is for products, architecture, software, systems, and services that are relevant to people. Whether visible or invisible, anything that is constructed for some ideal or purpose is considered as design, and its quality is evaluated and honored. 

The GOOD DESIGN AWARD helps people to discover the possibilities of design and expand the fields where design can be used through screening and a wide range of promotion, and is dedicated to a well-off society where everyone can enjoy creative life.

The M5Stack Series has been officially included in the GOOD DESIGN AWARD Gallery, which archives and showcases all award-winning works by year since its establishment in 1957.

About M5Stack

Founded in 2017, M5Stack creates modular, open-source hardware development tools designed to streamline prototyping and productization. Based in Shenzhen, China, M5Stack has become a globally recognized developer platform for IoT, AIoT, and robotics solutions.

Learn more about M5Stack and explore our products: https://shop.m5stack.com/

2025-10-17

If you've been wanting to use your devices remotely for a while, today you'll learn how to do it by integrating LoRa into Home Assistant.

Index

  • What is LoRa and how does it work?
  • Prerequisites
  • Mounting and configuring the LoRa Gateway
  • LoRa node configuration
  • Check the communication
  • Sending information
  • Customize your Gateway
  • Acknowledgements

What is LoRa and how does it work?

LoRa (an acronym for "Long Range") is a radio frequency communication protocol that enables long-distance data transmission (up to 10 km in open fields) with very low power consumption. Therefore, it offers two major advantages:

  • It has a much greater range than the protocols we commonly use (WiFi, Bluetooth, Zigbee, etc.). This makes it the ideal choice for remote sensors (storage rooms, mailboxes, large gardens, etc.).
  • It doesn't rely on the telephone or electrical network to transmit information, so different LoRa devices can communicate even during a power outage.

However, it also has its limitations, as its data transfer speed is slow. In practice, this means it's perfect for sending simple data (commands to our devices, numbers, or text strings), but it's not a suitable protocol for transmitting photos or videos.

Additionally, you should consider the frequency LoRa uses in your geographic area. For example, in Europe it's 868 MHz, while in the US and most of Latin America it's 915 MHz.

Prerequisites

To integrate LoRa into Home Assistant, you will need the following components:

  • A LoRa gateway that receives information from the various nodes and transmits it to Home Assistant. In this case, we're going to create a gateway with the M5Stack Core S3 SE (yes, the same one we already used to Activate Assist ), combined with the LoRa868 modules and an external battery (this is optional).
  • A LoRa node that collects and sends information (e.g., a connected sensor). In this case, we're going to use a board with the integrated SX1276 chip (which would also be a valid device to act as a LoRa gateway).
  • A USB data cable to power the board (with a charge-only cable, you will not be able to install the software).
  • ESPHome installed in Home Assistant.

Mounting and configuring the LoRa Gateway

We are going to be using the M5Stack Core S3 SE along with the LoRa868 V1.1 module (now the LoRa868 v1.2 module is available). This is a modular device that's very easy to expand by simply assembling the components.

Something important to keep in mind is that the LoRa module has some small switches on the back ('DIP switches') that modify the pin assignment, and logically, it must match what we indicate in the ESPHome code.

To do this, make sure the switches are in the following position (2, 6 and 7 on and the rest off).

From here, the process will be similar for any compatible motherboard (adapting the steps and connection diagram to the specifications of your device). The steps we followed are as follows:

1. In Home Assistant, go to your ESPHome plugin, tap "New device" and "Continue."

2. Give your device a name (for example, "LoRa Gateway") and click "Next."

3. Select "ESP32-S3" as the device type. You'll notice that a new block has been created for your device in the background.

4. Click "Skip" and click "Edit" above your device block.

5. Add the following lines to the end of your code (which come directly from the ESPHome SX127x component, adapted to our device).

captive_portal:

spi:
  clk_pin: GPIO36 
  mosi_pin: GPIO37 
  miso_pin: GPIO35 

sx127x:
  cs_pin: GPIO6 
  rst_pin: GPIO7 
  dio0_pin: GPIO10 
  pa_pin: BOOST
  pa_power: 14
  bandwidth: 125_0kHz
  crc_enable: true
  frequency: 868920000
  modulation: LORA
  rx_start: true
  sync_value: 0x12
  spreading_factor: 7
  coding_rate: CR_4_5
  preamble_size: 8
  on_packet:
    then:
      - lambda: |-
          ESP_LOGD("lambda", "packet %s", format_hex(x).c_str());
          ESP_LOGD("lambda", "rssi %.2f", rssi);
          ESP_LOGD("lambda", "snr %.2f", snr);

button:
  - platform: template
    name: "Transmit Packet"
    on_press:
      then:
        - sx127x.send_packet:
            data: [0xC5, 0x51, 0x78, 0x82, 0xB7, 0xF9, 0x9C, 0x5C]

6. Click "Save" and "Install." Select "Manual download" and wait for the code to compile.

7. When finished, select the "Modern format" option to download the corresponding '.bin' file.

8. Connect the M5Stack Core S3 SE to your computer using the USB-C data cable via the port on the side.

9. Now go to the ESPHome page and click "Connect." In the pop-up window, select your board and click "Connect."

10. Now click on "Install" and select the '.bin' file obtained in step 7. Again, click on "Install."

11. Return to Home Assistant and go to Settings > Devices & Services. Your device should have been discovered and appear at the top, waiting for you to press the "Configure" button. Otherwise, click the "Add integration" button, search for "ESPHome," and enter your board's IP address in the "Host" field. As always, we recommend reserving a static IP address for the board in your router to avoid future issues if it changes.

LoRa node configuration

Now that we have our LoRa gateway, let's configure a node to send information to it. To do this, we'll follow steps very similar to those in the previous section:

1. In Home Assistant, go to your ESPHome plugin, tap "New device" and "Continue."

2. Give your device a name (for example, "LoRa Node") and click "Next."

3. Select "ESP32" as the device type. You'll notice a new block has been created for your device in the background.

4. Click "Skip" and click "Edit" above your device block.

5. Add the following lines to the end of your code (which in this case match the example of the SX127x component of ESPHome).

captive_portal:

spi:
  clk_pin: GPIO5
  mosi_pin: GPIO27
  miso_pin: GPIO19

# Example configuration entry
sx127x:
  cs_pin: GPIO18
  rst_pin: GPIO23
  dio0_pin: GPIO26
  pa_pin: BOOST
  pa_power: 14
  bandwidth: 125_0kHz
  crc_enable: true
  frequency: 868920000
  modulation: LORA
  rx_start: true
  sync_value: 0x12
  spreading_factor: 7
  coding_rate: CR_4_5
  preamble_size: 8
  on_packet:
    then:
      - lambda: |-
          ESP_LOGD("lambda", "packet %s", format_hex(x).c_str());
          ESP_LOGD("lambda", "rssi %.2f", rssi);
          ESP_LOGD("lambda", "snr %.2f", snr);

button:
  - platform: template
    name: "Transmit Packet"
    on_press:
      then:
        - sx127x.send_packet:
            data: [0xC5, 0x51, 0x78, 0x82, 0xB7, 0xF9, 0x9C, 0x5C]

6. Note that the pin assignment is different (because we're using a different device than the one we used for the Gateway), but the rest of the configuration is exactly the same as in the previous section. This is important so they can communicate with each other.

7. Click "Save" and "Install." Select "Manual download" and wait for the code to compile.

8. When finished, select the "Modern format" option to download the corresponding '.bin' file.

9. Connect the board to your computer using the Micro USB data cable through the port on the side.

10. Now go to the ESPHome page and click "Connect." In the pop-up window, select your board and click "Connect."

11. Now click on "Install" and select the '.bin' file obtained in step 8. Again, click on "Install."

12. Return to Home Assistant and go to Settings > Devices & Services. Your device should have been discovered and appear at the top, waiting for you to press the "Configure" button. Otherwise, click the "Add integration" button, search for "ESPHome," and enter your board's IP address in the "Host" field. As always, we recommend reserving a static IP address for the board in your router to avoid future issues if it changes.

Check the communication

Let's do a little test to check that both devices (the Gateway and the node) are communicating correctly.

1.      Make sure both devices are turned on and are online in Home Assistant and ESPHome.

2.    From Home Assistant, go to Settings > Devices & Services > ESPHome and access one of them (for example, the Gateway).

3.    Open a new window (without closing the previous one), enter the ESPHome plugin and access the logs of the other device (in this case, the node).

4.   In the Home Assistant window, click the "Transmit Packet" button. You'll immediately see the logs from the second device recording the incoming packet.

You can perform a reverse test to verify that communication is working both ways. If everything went well, your devices are now communicating.

Sending information

Logically, the point of integrating LoRa into Home Assistant is to send some kind of useful information (such as the value of a sensor connected to the node).

1.      The first step is to add the sensor you're interested in to the node board. For example, we're going to add a PIR sensor to get the long-awaited motion sensor on the mailbox. To do this, we've added the following code snippet:

binary_sensor:
  - platform: gpio
    pin: GPIO34
    name: "PIR Sensor"
    device_class: motion
    id: pir_sensor

2.    If you look at the code above, it's the same one we would use in any "normal" scenario. However, to send the information to our LoRa Gateway, we need to add something extra using the 'Packet Transport' component.

packet_transport:
  platform: sx127x
  update_interval: 5s
  encryption: "password"
  rolling_code_enable: true
  binary_sensors:
    - pir_sensor

3.    Analyze the code above and observe the following:

  • In the 'encryption' attribute, you have to indicate the encryption key for your message (whatever you want), so that it cannot be read by anyone else listening on the same frequency.
  • We've identified the ID of the sensor we want to send (in this case, the binary sensor with the ID "pir_sensor"). You can add any other sensors you're interested in here.

4.   Now we are going to add the following to the code of our LoRa Gateway, so that it receives the information.

packet_transport:
  platform: sx127x
  update_interval: 5s
  providers:
    - name: lora-node
      encryption: "password"

binary_sensor:
  - platform: packet_transport
    id: pir_sensor
    provider: lora-node

  - platform: template
    name: "Buzón"
    device_class: motion
    lambda: return id(pir_sensor).state;

5.    Again, analyze the code above and note the following:

  • We have specified the ESPHome device name of our LoRa node as the data provider.
  • In the 'encryption' attribute, we have indicated exactly the same key as in the node.
  • To transform the information received into a gateway sensor, we used the "packet_transport" platform. We assigned it an "id" (you can choose any) and again indicated the LoRa node name as the provider. This is an internal ESPHome sensor.
  • To display this information in Home Assistant, we created a template sensor of the same type, assigning it the value of the internal sensor created in the previous step.

6.   And that's it! If you now check your gateway device in Home Assistant, you’ll see that it already shows the information from the node.

Customize your Gateway

Since we used the M5Stack Core S3 SE to integrate LoRa into Home Assistant, we can take advantage of its other features to customize it! Below, we're leaving you the full code to create a screen that notifies you when you receive letters in your mailbox!

esphome:
  name: lora-gateway
  friendly_name: LoRa Gateway
  libraries:
    - m5stack/M5GFX@^0.1.11
    - m5stack/M5Unified@^0.1.11

esp32:
  board: esp32-s3-devkitc-1
  framework:
    type: esp-idf

psram:
  mode: octal
  speed: 80MHz

external_components:
  - source:
      type: git
      url: https://github.com/m5stack/M5CoreS3-Esphome
    components: [ m5cores3_display ]
    refresh: 0s

# Enable logging
logger:

# Enable Home Assistant API
api:
  encryption:
    key: "1QrsXUgryxlF6OGsIwLj7eijyy/OMhSobQQHYWPvpb0="

ota:
  - platform: esphome
    password: "4844c4205ab6ab665c2d1a4be82deb57"

wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password

  # Enable fallback hotspot (captive portal) in case wifi connection fails
  ap:
    ssid: "Lora-Gateway Fallback Hotspot"
    password: "6BRaaV17Iebb"

captive_portal:

spi:
  clk_pin: GPIO36 
  mosi_pin: GPIO37 
  miso_pin: GPIO35 

sx127x:
  cs_pin: GPIO6 
  rst_pin: GPIO7 
  dio0_pin: GPIO10 
  pa_pin: BOOST
  pa_power: 14
  bandwidth: 125_0kHz
  crc_enable: true
  frequency: 868920000
  modulation: LORA
  rx_start: true
  sync_value: 0x12
  spreading_factor: 7
  coding_rate: CR_4_5
  preamble_size: 8
  on_packet:
    then:
      - lambda: |-
          ESP_LOGD("lambda", "packet %s", format_hex(x).c_str());
          ESP_LOGD("lambda", "rssi %.2f", rssi);
          ESP_LOGD("lambda", "snr %.2f", snr);

packet_transport:
  platform: sx127x
  update_interval: 5s
  providers:
    - name: lora-node
      encryption: "password"

binary_sensor:
  - platform: packet_transport
    id: pir_sensor
    provider: lora-node

  - platform: template
    name: "Buzón"
    device_class: motion
    lambda: return id(pir_sensor).state;

color:
  - id: green
    hex: 'bfea11'
  - id: red
    hex: 'ff0000'

font:
  - file: "gfonts://Roboto"
    id: font_title
    size: 18
  - file: "gfonts://Roboto"
    id: font_text
    size: 16

image:
  - file: mdi:mailbox
    id: buzon_off
    resize: 100x100
    type: grayscale
    transparency: alpha_channel
  - file: mdi:email-alert
    id: buzon_on
    resize: 100x100
    type: grayscale
    transparency: alpha_channel

display:
  - platform: m5cores3_display
    model: ILI9342
    dc_pin: 15
    update_interval: 1s
    id: m5cores3_lcd
    lambda: |-
      // Get the screen dimensions
      int screen_width = it.get_width();
      int screen_height = it.get_height();
      
      // Title at the top with a 20px margin
      it.print(screen_width/2, 20, id(font_title), id(green), TextAlign::TOP_CENTER, "LoRa Gateway by Aguacatec");
      
      // Get the state of the mailbox sensor
      bool mailbox_open = id(pir_sensor).state;
            
      if (mailbox_open) {
        // Mailbox open - red icon
        it.image(110, 70, id(buzon_on), id(red));
        it.print(screen_width/2, 200, id(font_text), id(red), TextAlign::CENTER, "Carta recibida!!");
      } else {
        // Mailbox closed - green icon
        it.image(110, 70, id(buzon_off), id(green));
        it.print(screen_width/2, 200, id(font_text), id(green), TextAlign::CENTER, "No hay correspondencia");
      }

Acknowledgements

To prepare this post, this video by our friend Miguel Ángel (from La Choza Digital) was extremely helpful! 

Source: AguacaTEC

Author: TitoTB

2025-10-13