Hello! My name is Donghao Hong, a Master’s student in Electrical and Computer Engineering (ECE).
My academic interests include robotics, embedded systems, and autonomous systems.
This website documents my coursework, labs, and projects for ECE 5160 Fast Robots.
The goal is to program the Artemis board and establish reliable bidirectional Bluetooth Low Energy (BLE) communication between the board and a laptop for command execution and data exchange.
I set up the Arduino IDE and installed the SparkFun Apollo3 board support package.
I connected the Artemis board to the computer and selected the RedBoard Artemis Nano and the correct USB serial port in the Arduino IDE.
The onboard LED repeatedly turned on and off at a fixed interval, producing a steady blinking pattern.
Text input from the computer was received by the Artemis board and printed in the Serial Monitor, confirming successful serial communication.
When the Artemis board was held by hand, the reported temperature values gradually increased as heat transferred from the hand to the board. Specifically, the raw temperature reading rose from approximately 33956 to 34182 counts over time, demonstrating that the onboard temperature sensor responded to changes in ambient temperature.
When whistling near the Artemis board, the FFT magnitude at the whistle's frequency rose sharply. Once the whistling stopped, the readings returned to their baseline levels, indicating that the onboard PDM microphone correctly responded to changes in sound input.
For the 5000-level additional task, I extended the PDM + FFT example to detect specific tones by finding the dominant frequency peak and comparing it against three target notes (C5, D5, and E5) within a small tolerance. To generate known test tones, I downloaded an app called Tuner T1, which can play pure tones at user-specified frequencies. When playing the corresponding frequencies, the Artemis code correctly reported C5, D5, and E5 over serial, confirming that the frequency detection logic worked as intended.
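The matching step can be sketched in Python; the note frequencies are standard equal-temperament values, while the ±15 Hz tolerance and the helper name are illustrative, not the exact firmware values.

```python
# Hypothetical sketch of the tone-matching logic described above.
TARGETS = {"C5": 523.25, "D5": 587.33, "E5": 659.25}  # Hz, equal temperament
TOLERANCE_HZ = 15.0  # assumed tolerance band

def classify_tone(peak_hz):
    """Return the note whose frequency is within tolerance of the FFT peak,
    or None if the peak matches no target."""
    for note, freq in TARGETS.items():
        if abs(peak_hz - freq) <= TOLERANCE_HZ:
            return note
    return None
```

For example, a detected peak at 525 Hz would be reported as C5, while a 700 Hz peak (between E5 and F5) would be rejected.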
Before starting the lab, I verified that Python was correctly installed on my system by launching the Python interpreter from the terminal. This confirmed that a suitable Python environment was available for running the host-side BLE scripts required for the experiment.
I then activated the provided Fast Robots BLE virtual environment, which contains the necessary dependencies for Bluetooth communication with the Artemis board.
Finally, I launched Jupyter Notebook and confirmed that the BLE project directory and Python kernel were correctly loaded.
After uploading the ble_arduino.ino sketch to the Artemis Nano board, the device began advertising over BLE and printed its MAC address to the serial monitor. This MAC address was then copied into the connection.yaml file by replacing the artemis_address field, allowing the host computer to identify and connect to the correct Artemis device.
I generated a unique BLE service UUID using a Python command. This UUID was used to define a custom BLE service for communication between the Artemis board and the host computer.
The generated UUID was then inserted into both the ble_arduino.ino firmware and the
connection.yaml configuration file on the host side. Ensuring that the BLE service and
characteristic UUIDs matched on both sides allowed the Python scripts to successfully discover
and communicate with the Artemis board.
On the Artemis side, the ECHO command extracts the incoming string and sends it back over BLE by writing a formatted response to the string characteristic.
From Jupyter, I sent an ECHO command and successfully received the same string in the response, confirming reliable bidirectional BLE communication.
For this task, I implemented a SEND_THREE_FLOATS command to transmit three floating-point values from the host
computer to the Artemis board over BLE. On the Arduino side, the command handler sequentially parses three float values
from the incoming command string using robot_cmd.get_next_value() and prints them to the serial monitor for verification.
From Jupyter, I sent the command with three floats encoded as a single string, and the Artemis board correctly received
and decoded all three values, confirming reliable multi-parameter data transmission over BLE.
On the Arduino side, a GET_TIME_MILLIS command was implemented to return the current value of
millis(), representing the elapsed time since startup. The value was packaged as a string and
sent back over BLE. From Jupyter, the command was issued and the returned timestamp
(e.g., T:285136) was successfully received, verifying correct BLE communication.
On the Python side, a notification handler was registered for the RX_STRING characteristic.
The handler converts incoming byte arrays into strings and parses messages starting with
"T:" as timestamps. Once notifications were enabled, the Artemis board was able
to asynchronously send time data, which was immediately received and printed in Jupyter
without explicitly polling for responses.
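A minimal version of the host-side handler is sketched below; the `(uuid, byte_array)` signature follows the course BLE framework's notification-callback convention and may need adjusting for other setups.

```python
timestamps = []  # parsed millisecond values accumulate here

def notification_handler(uuid, byte_array):
    """Decode an incoming BLE notification and parse messages that
    start with "T:" as millisecond timestamps."""
    msg = byte_array.decode("utf-8") if isinstance(byte_array, (bytes, bytearray)) else str(byte_array)
    if msg.startswith("T:"):
        timestamps.append(int(msg[2:]))
```

Once registered (e.g., via the framework's `start_notify`), the handler runs asynchronously on every notification, so no polling loop is required.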
In this method, the Artemis continuously reads the current timestamp (ms) inside a while loop and
immediately transmits it to the laptop over BLE on every iteration.
On the host side, I used a notification callback to receive and parse the incoming T:<millis> messages in real time.
Pros (real-time streaming): low latency and immediate visibility into the data, with minimal RAM usage since nothing is buffered on the microcontroller.
Cons (real-time streaming): throughput is limited by BLE and per-message overhead, which slows the sampling loop and can cause repeated or dropped messages at high rates.
In my test, the board streamed data for about 3 s and I received 214 timestamps. Each message was about 10 bytes, so the effective rate was 214 / 3 ≈ 71 msg/s, i.e., approximately 71 × 10 ≈ 710 bytes/s. (The timestamp output was long, so I only included the beginning and the end of the stream.)
In this task, the Artemis board first collected timestamps into a local array for a fixed duration
(100 ms) and then transmitted the buffered data to the computer over BLE. On the host side, a
notification handler parsed each T:<millis> message and appended the values to a Python list;
in this run, a total of 2094 timestamps were received.
To prevent buffer overflow on the microcontroller, I added a simple guard:
before writing into the timestamp array, the code checks
if (time_buf_len < size). If the buffer reaches its maximum capacity, no additional samples are
stored, ensuring the firmware never writes past the allocated array bounds.
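The same guard can be sketched in Python for illustration; the class name and capacity below are hypothetical, and the firmware implements this with a plain array and length counter.

```python
class BoundedLog:
    """Fixed-capacity sample buffer that silently drops samples once full,
    mirroring the firmware's `if (time_buf_len < size)` guard."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.samples = []

    def record(self, value):
        # Guard: never write past the allocated capacity.
        if len(self.samples) < self.capacity:
            self.samples.append(value)
            return True
        return False  # buffer full; sample dropped
```

Dropping samples once the buffer is full trades completeness for safety: the firmware can never corrupt adjacent memory, and the host still receives a well-defined, bounded batch.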
For this task, the Artemis collected paired timestamp and temperature readings in arrays for a fixed duration,
then sent the buffered data to the laptop over BLE. On the Python side, a notification handler parsed each message in the format
Time(ms):...,Temp(F):... and appended the values into timestamps and temps lists for later analysis.
This approach separates data acquisition from transmission, allowing the microcontroller to sample the sensor continuously
while the host computer focuses on parsing and processing the received data.
Method 1 (Task 5): real-time streaming. In this approach, the Artemis continuously reads the current timestamp (ms) inside a
while loop and immediately transmits each sample to the laptop over BLE. On the host side, a notification callback receives and
processes the data in real time.
Pros: low latency and near real-time visibility; straightforward implementation that is convenient for debugging and monitoring;
minimal RAM usage since data is not buffered on the microcontroller.
Cons: throughput is limited by BLE and host-side processing; at high sampling rates, repeated timestamps or dropped messages can occur;
BLE transmission and string formatting overhead slows down the loop execution.
Method 2 (Task 6/7): local buffering then batch transfer. In this approach, the Artemis does not communicate over BLE during sampling.
Instead, it stores timestamps (and sensor data) in local arrays, and only after the sampling window ends does it send the buffered data to the laptop
for parsing and analysis.
Pros: faster sampling because the loop avoids BLE overhead; better suited for short bursts of high-frequency data collection; improved
reliability because samples are stored locally before transmission.
Cons: no real-time visibility during sampling (added latency); buffer size is constrained by available RAM; requires extra logic to manage
the buffer and the send-back process.
When to use which: If you need real-time monitoring, debugging, or online control, Method 1 is preferable due to its low latency and
simplicity. If you need high-rate sampling for offline analysis or want to maximize data completeness, Method 2 is preferable because sampling is almost
unaffected by BLE communication overhead (although the upload phase is still limited by BLE throughput).
Sampling speed (Method 2): Over a 0.1 s sampling window, the board recorded 1812 timestamps, which corresponds to
1812 / 0.1 = 18120 samples/s. If we approximate each sample message as ~10 bytes, the implied data rate is
18120 × 10 ≈ 181200 bytes/s.
How much can be buffered without exhausting RAM? The Artemis board has 384 kB RAM:
384 × 1024 = 393,216 bytes. The compiled output shows global variables use 54,152 bytes, leaving
339,064 bytes available.
Case A (Task 6, timestamps only): one unsigned long timestamp is 4 bytes, so approximately
339,064 / 4 ≈ 84,766 timestamps.
Case B (Task 7, timestamp + temperature): timestamp 4 bytes + temperature float 4 bytes = 8 bytes per record, so approximately
339,064 / 8 ≈ 42,383 records.
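The budget arithmetic above can be checked directly (sizes taken from the compiler output quoted earlier):

```python
TOTAL_RAM = 384 * 1024          # Artemis RAM: 393,216 bytes
GLOBALS = 54_152                # global variables, from the compiler output
free_ram = TOTAL_RAM - GLOBALS  # 339,064 bytes left for logging

max_timestamps = free_ram // 4  # Case A: unsigned long timestamp = 4 bytes
max_records = free_ram // 8     # Case B: 4-byte timestamp + 4-byte float
```

In practice the usable headroom is somewhat smaller, since the stack and BLE library also need RAM at runtime.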
In this experiment, the laptop sent a request to the Artemis using REPLY_NB, and the Artemis replied with a string
containing n characters. On the Python side, I recorded the send timestamp (t_send) and the first receive
timestamp from the notification callback, then estimated the round-trip time (RTT). Using the reply payload length (bytes) divided by
RTT (seconds), I computed an approximate effective data rate in bytes/s.
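The host-side rate estimate reduces to a one-line division; the helper name below is illustrative.

```python
def effective_rate(payload_bytes, rtt_s):
    """Approximate effective throughput (bytes/s) from one round trip:
    payload size divided by the measured round-trip time."""
    return payload_bytes / rtt_s
```

For instance, a 5-byte reply with a 91.429 ms RTT gives roughly 54.7 bytes/s, matching the first row of the table below.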
The results show that many small packets introduce significant overhead: RTT stays roughly constant (~90–93 ms) while payload size increases, so the effective throughput rises as the reply becomes larger. This suggests that using a larger reply amortizes the fixed BLE/processing overhead and improves effective data rate.
| Reply length (bytes) | RTT (ms) | Estimated data rate (bytes/s) |
|---|---|---|
| 5 | 91.429 | 54.687 |
| 10 | 91.838 | 108.888 |
| 15 | 89.566 | 167.474 |
| 20 | 91.469 | 218.652 |
| 25 | 90.652 | 275.781 |
| 30 | 91.467 | 327.986 |
| 35 | 91.816 | 381.196 |
| 40 | 91.779 | 435.828 |
| 120 | 92.842 | 1292.520 |
The plot above summarizes the trend: with a roughly constant RTT, larger replies achieve higher effective throughput. Therefore, if the application can tolerate buffering (or sending fewer, larger packets), it can significantly reduce the per-message overhead and improve overall BLE efficiency.
In this task, the Artemis board was commanded to transmit a sequence of 1000 monotonically increasing integers over BLE as quickly as possible. On the microcontroller side, the values were sent in a tight loop using BLE notifications. On the host computer, a Python notification handler parsed each incoming message and stored the received integers in a list.
After the transmission completed, the received values were compared against
the expected range [0, 999]. As shown in the results, the computer successfully
received all 1000 messages, and the set difference check confirmed that
no data was missing.
This experiment indicates that at the tested transmission rate, the BLE link between the Artemis board and the host computer was reliable, and the host was able to process all incoming notifications without packet loss.
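A minimal version of the completeness check (function name illustrative) compares the received integers against the expected range with a set difference:

```python
def check_complete(received, n=1000):
    """Return the set of expected integers [0, n) that were never received;
    an empty set means no messages were lost."""
    return set(range(n)) - set(received)
```

Using a set difference also tolerates out-of-order or duplicated notifications, since only missing values are reported.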
In this lab, I learned how to configure BLE services and characteristics on the Artemis board and how to exchange data reliably with a host computer using commands and notifications. I also observed how BLE communication overhead directly impacts achievable sampling rates.
The purpose of this lab is to add the IMU to your robot, start running the Artemis+sensors from a battery, and record a stunt on your RC robot.
This experiment utilizes the SparkFun 9DoF IMU Breakout (ICM-20948), connected to the Artemis development board via an I²C interface. The Artemis board provides both power supply and communication.
To verify that the hardware connections and software environment were correctly configured,
the example program Example1_Basics from the SparkFun ICM-20948 Arduino library was executed.
The example outputs the following sensor data:
AD0_VAL represents the logic level of the AD0 pin on the ICM-20948 sensor and is used
to select the I2C communication address. Since the ADR jumper on the breakout board
is left unconnected by default, AD0_VAL was set to 1 in this experiment to match
the corresponding I2C address configuration.
The accelerometer measures the combined effect of linear acceleration and gravity. When the breakout board is stationary or slowly rotated, the measured values mainly reflect the gravity components along each axis. For example, when the board is rotated by 180 degrees, an axis that originally measured approximately +1 g will measure approximately −1 g, while the values on the other axes change according to the board orientation. During rapid motion or acceleration, the accelerometer readings exhibit noticeable fluctuations.
The gyroscope measures angular velocity. When the system is stationary, the output is close to zero. During rotation or turning, clear angular velocity changes are observed. After motion stops, the readings return close to zero, with only small amounts of noise and bias remaining.
To verify that the program was running correctly on the Artemis, the onboard LED was used as a visual indicator and configured to blink at a fixed time interval. Normal blinking of the LED indicates that the program is executing successfully.
To evaluate the performance of the accelerometer in static orientation estimation, the IMU was placed in four different static poses: Pitch = +90°, Roll = +90°, Roll = −90°, and Pitch = −90°. For each configuration, the serial monitor output was observed in real time to verify the corresponding accelerometer readings.
Figure 1. Pitch = +90°
Figure 2. Roll = +90°
Figure 3. Roll = −90°
Figure 4. Pitch = −90°
To evaluate the accelerometer accuracy, a two-point calibration was performed on the Z-axis using static measurements at +1g and −1g.
From the experimental data, the measured accelerometer outputs were:
\( m_{+} = 1024.36\,\text{mg}, \quad m_{-} = -988.54\,\text{mg} \)
Ideally, these values should be +1000 mg and −1000 mg, respectively. The deviation indicates the presence of sensor bias and scale error.
The bias b and scale factor s are computed as:
\( b = \dfrac{m_{+} + m_{-}}{2} = \dfrac{1024.36 - 988.54}{2} = 17.91\,\text{mg} \)
\( s = \dfrac{2000}{m_{+} - m_{-}} = \dfrac{2000}{1024.36 - (-988.54)} \approx 0.9936 \)
The calibrated Z-axis acceleration is then given by:
\( a_{z,\text{cal}} = \left( a_{z,\text{raw}} - 17.91 \right) \times 0.9936 \)
The results show that the accelerometer exhibits a small bias and a scale error of less than 1%. After calibration, the measured acceleration closely matches the expected ±1g values, which is sufficient for reliable pitch and roll estimation under static or low-dynamic conditions.
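The two-point calibration can be reproduced numerically from the measured values above:

```python
m_pos, m_neg = 1024.36, -988.54  # measured outputs at +1 g and -1 g (mg)

bias = (m_pos + m_neg) / 2       # offset error, ≈ 17.91 mg
scale = 2000 / (m_pos - m_neg)   # scale factor, ≈ 0.9936

def calibrate_z(a_raw_mg):
    """Apply the two-point calibration to a raw Z-axis reading (mg)."""
    return (a_raw_mg - bias) * scale
```

By construction, the calibrated output maps the two measured endpoints exactly onto +1000 mg and −1000 mg.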
To analyze the noise characteristics of the accelerometer data, time-domain and frequency-domain (FFT) analyses were performed on the pitch and roll angle signals under two conditions: static (clean) and with artificially introduced disturbances (with noise).
Figure 5. Time Domain Pitch (Clean)
Figure 6. Time Domain Pitch (With Noise)
Figure 7. Time Domain Roll (Clean)
Figure 8. Time Domain Roll (With Noise)
From the FFT results, it can be observed that the primary energy of both pitch and roll is concentrated in the low-frequency region (around 3–4 Hz).
\( \alpha = \dfrac{T}{T + RC}, \quad T = \dfrac{1}{f_s}, \quad RC = \dfrac{1}{2\pi f_c} \)
With \( f_s = 283 \,\text{Hz} \) and \( f_c = 3 \,\text{Hz} \),
\( \alpha \approx 0.062 \)
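The α value can be checked numerically from the formulas above:

```python
import math

f_s = 283.0  # sampling rate (Hz)
f_c = 3.0    # low-pass cutoff frequency (Hz)

T = 1 / f_s                   # sample period
RC = 1 / (2 * math.pi * f_c)  # filter time constant
alpha = T / (T + RC)          # ≈ 0.062
```
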
Figure 9. FFT Pitch (Clean)
Figure 10. FFT Pitch (With Noise)
Figure 11. FFT Roll (Clean)
Figure 12. FFT Roll (With Noise)
From the results, it can be observed that after applying the low-pass filter, the high-frequency jitter in both the pitch and roll signals is significantly reduced, resulting in much smoother curves. When noise is introduced, the filtering effect becomes even more pronounced, with spikes and random fluctuations being effectively suppressed. At the same time, the filtered signals are still able to follow the overall trend of the original attitude changes, indicating that the cutoff frequency is not set too low.
Figure 13. Pitch: Raw vs LPF (Clean)
Figure 14. Pitch: Raw vs LPF (With Noise)
Figure 15. Roll: Raw vs LPF (Clean)
Figure 16. Roll: Raw vs LPF (With Noise)
The angle estimation is based on the integration model introduced in class:
\( \theta_g(t) = \theta_g(t - \Delta t) + \omega(t)\cdot \Delta t \)
Using the gyroscope angular velocity measurements, the pitch, roll, and yaw angles are obtained through time integration.
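One Euler-integration step of this model can be sketched in Python (function name illustrative; the firmware applies the same update per IMU sample):

```python
def integrate_gyro(theta_prev, omega, dt):
    """One integration step of theta_g(t) = theta_g(t - dt) + omega * dt,
    where omega is the measured angular velocity (deg/s) and dt is the
    elapsed time since the previous sample (s)."""
    return theta_prev + omega * dt
```

Because each step adds the gyroscope's bias along with the true rate, the integrated angle accumulates drift over time, which is visible in the plots below.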
Figure 17. Pitch: Accelerometer vs Gyroscope (Clean)
Figure 18. Pitch: Accelerometer vs Gyroscope (With Noise)
Figure 19. Roll: Accelerometer vs Gyroscope (Clean)
Figure 20. Roll: Accelerometer vs Gyroscope (With Noise)
The following differences can be observed from the results: the accelerometer-based angles are noisy and spike under vibration but do not drift, whereas the gyroscope-integrated angles are smooth but accumulate drift over time.
In this experiment, the complementary filter provided in the lecture was adopted to fuse the gyroscope-integrated angle with the accelerometer-based attitude angle:
\[ \theta = (1 - \alpha)\,\theta_{\text{gyro}} + \alpha\,\theta_{\text{acc}} \]
When only the accelerometer is used, both pitch and roll angles exhibit a large number of spikes under noisy conditions, indicating high sensitivity to instantaneous acceleration and external vibrations.
When only the gyroscope integration is used, the output is relatively smooth; however, significant drift accumulates over time, causing the estimated angle to gradually deviate from the true value.
By contrast, the complementary filter provides stable and smooth outputs in both clean and noisy scenarios. High-frequency noise from the accelerometer is effectively suppressed, while the long-term drift of the gyroscope is significantly corrected.
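A per-sample update of this fusion can be sketched in Python; the default α of 0.062 follows the value computed earlier, and the function name is illustrative.

```python
def complementary_filter(theta_prev, omega, theta_acc, dt, alpha=0.062):
    """Fuse the gyro-integrated angle with the accelerometer angle:
    theta = (1 - alpha) * theta_gyro + alpha * theta_acc."""
    theta_gyro = theta_prev + omega * dt  # gyro prediction from the last estimate
    return (1 - alpha) * theta_gyro + alpha * theta_acc
```

Because the accelerometer term is blended in at every step, the estimate is continually pulled toward the drift-free accelerometer angle while the gyro term keeps it smooth between samples.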
Figure 21. Pitch: Acc vs Gyro vs CF (Clean)
Figure 22. Pitch: Acc vs Gyro vs CF (With Noise)
Figure 23. Roll: Acc vs Gyro vs CF (Clean)
Figure 24. Roll: Acc vs Gyro vs CF (With Noise)
To maximize the data acquisition rate, the main loop avoids blocking or waiting for the IMU. Instead, each loop iteration checks whether new IMU data are available (data ready). Data are read and stored only when new samples are detected.
All debugging-related delay() and
Serial.print() statements were removed to prevent serial
communication from becoming a performance bottleneck.
From a 5-second sampling window, len(t) = 1608,
corresponding to an average sampling rate of 1608 / 5 ≈ 321.6 Hz.
Since the main loop runs faster than the IMU data update rate, this polling-based approach (“check data ready + log when available”) maximizes effective sampling while avoiding repeated reads of the same data frame.
IMU data were stored in time-stamped arrays. Each record contains eight fields:
t, pitch_a, roll_a, pitch_g, roll_g, yaw_g, pitch_cf, roll_cf
The host computer receives the data via BLE notifications, parses the comma-separated values, and appends them to the corresponding arrays.
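The host-side parsing into parallel arrays can be sketched as follows; the field order matches the eight-field record above, and the handler name is illustrative.

```python
FIELDS = ["t", "pitch_a", "roll_a", "pitch_g", "roll_g", "yaw_g", "pitch_cf", "roll_cf"]
arrays = {name: [] for name in FIELDS}  # one independent array per variable

def handle_record(msg):
    """Split one comma-separated record into the eight parallel arrays:
    the timestamp is stored as int (ms), all angles as float."""
    values = msg.split(",")
    arrays["t"].append(int(values[0]))
    for name, v in zip(FIELDS[1:], values[1:]):
        arrays[name].append(float(v))
```
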
Storage structure: A parallel multi-array design was adopted, where each variable is stored in an independent array. This structure simplifies post-processing in Python, such as individual plotting, FFT analysis, and filter comparison, while avoiding the parsing and indexing complexity associated with a single large structured array.
Data types:
Time stamps are stored as int values (millisecond resolution),
while attitude angles are stored as float. This choice provides
a good balance between numerical precision and memory usage.
Based on the compiler output, the program uses approximately
187,424 bytes (19%) of Flash memory, and global variables
occupy about 117,880 bytes (29%) of RAM. This leaves
approximately 275,336 bytes of free RAM available for data
logging. With a data format consisting of one timestamp (int)
and multiple float values per sample, storing several seconds
of IMU data in memory is feasible.
The host computer sends the START_IMU_LOG command to initiate
data recording. After a 5-second acquisition window, the
DUMP_IMU_LOG command is issued to request data transmission.
A total of 1608 time-stamped IMU samples were successfully received. The resulting plots span the time range from 0 to 5000 ms (Time vs. Sample Data Values), demonstrating that the system can reliably complete the full pipeline of data acquisition, buffering, BLE transmission, host-side parsing, and visualization within a 5-second window.
When the car impacts a wall at high speed, it rebounds and flips over, yet remains functional and can continue to operate.
This experiment utilizes the VL53L1X Time-of-Flight (ToF) distance sensor, communicating with the Artemis via I2C. The default I2C address of the VL53L1X is typically displayed as 0x29 (7-bit). Since both VL53L1X sensors share the same factory default address, connecting them directly to the same I2C bus would cause an address conflict, so the two sensors must be differentiated during initialization.

I employ a method combining XSHUT control with address modification: at startup, one sensor is held disabled by pulling its XSHUT pin low; only the other sensor is initialized, and its I2C address is changed to a new value (e.g., 0x30); XSHUT is then released to enable the second sensor. This allows both sensors to coexist with distinct addresses and perform parallel sampling.

Wiring is shown in the figure: the Artemis connects to the Qwiic MultiPort via QWIIC (I2C), and the IMU shares SDA/SCL, VIN, and GND with both VL53L1X sensors. To enable individual power-up activation and address assignment, I additionally connected one ToF sensor's XSHUT pin to an Artemis GPIO (green wire in the diagram). This allows that sensor to be selectively disabled and enabled during initialization.
After cutting the battery wires, I soldered them to the JST connectors and insulated each solder joint with heat shrink tubing. To prevent short circuits during operation, I cut only one wire at a time, soldering and insulating it before proceeding to the next. This ensured that the positive and negative terminals were never exposed and in contact simultaneously.
Additionally, when verifying polarity, I discovered that the actual positive and negative terminals of the battery wires were reversed from their color markings. Therefore, the final connections were made by connecting the red wire to the black terminal and the black wire to the red terminal. This ensures the battery's positive terminal connects to the Artemis's positive terminal and the negative terminal connects to the Artemis's negative terminal.
As you can see in the video, when Artemis is powered by its battery, it can still communicate with the computer via Bluetooth.
As shown in the diagram above, I connected two ToF sensors to the Artemis board via a QWIIC MultiPort splitter board. To facilitate future installation of the ToF sensors at designated locations on the vehicle body and simplify cable routing, I selected longer QWIIC cables for both ToF sensors. For wire color assignment, I connected the blue wire to SDA and the yellow wire to SCL, while the remaining power and ground wires were connected according to QWIIC standards.
Additionally, to enable independent activation and address assignment during dual ToF initialization, I soldered an extra wire to bring out the XSHUT pin from one ToF sensor and connect it to pin A3 on the Artemis board. This allows for individual shutdown/activation control of that sensor within the program.
When scanning the I2C bus to confirm the sensor address, the datasheet specifies an address of 0x52, but when I used Arduino's Example05_wire_I2C to scan, the address detected was 0x29. These appear different but actually represent the same address: 0x52 is the 8-bit address format including the least significant R/W bit, while scanners typically display the 7-bit address after removing the R/W bit. Therefore, removing the least significant R/W bit from 0x52 (equivalent to shifting right by one bit) yields 0x29, matching the scan result. This confirms the sensor address is as expected.
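The conversion between the two address formats is a one-line bit shift:

```python
ADDR_8BIT = 0x52            # datasheet address, including the R/W bit
addr_7bit = ADDR_8BIT >> 1  # drop the R/W bit -> what an I2C scanner reports
```
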
The VL53L1X offers three ranging modes: Short, Medium, and Long, with maximum ranges of approximately 1.3 m, 3 m, and 4 m respectively. Long mode provides the greatest distance but tends to be less stable in strong ambient light or with low-reflectivity targets, and typically has a lower refresh rate. Medium mode represents a compromise. Short mode, while offering the shortest range, delivers more stable readings at close distances, better interference resistance, and facilitates higher sampling frequencies.
Therefore, I ultimately selected short mode. The primary reasons are its more reliable readings at close range and its suitability for increasing sampling frequency. This enables a more responsive control loop, aligning better with the final robot's application scenario of indoor close-range navigation.
I conducted simple calibration tests in both Short and Long modes: positioning the sensor directly facing the wall, setting the actual distance to four fixed values (30, 100, 200, 300 mm) measured with a ruler, recording the sensor readings at each position, and comparing them with the true values. Note that the sensor's minimum effective range is approximately 40 mm, so at an actual distance of 30 mm both modes read 0 mm. Overall, Short mode exhibited a more stable standard deviation (around 1.1 mm), while Long mode showed slightly greater fluctuation at 200 mm (std ≈ 2.31 mm). Given our primary focus on stable close-range obstacle avoidance, this supports prioritizing Short mode for subsequent use.
| Mode | Sampling Method | Mean Period (mean_dt, ms) |
|---|---|---|
| LONG | Continuous ranging (NO_STOP) | 49.14 |
| LONG | Stop/Start per measurement (WITH_STOP) | 96.00 |
| SHORT | Continuous ranging (NO_STOP) | 49.09 |
| SHORT | Stop/Start per measurement (WITH_STOP) | 49.10 |
To estimate ranging speed and repeatability, I statistically analyzed the
time intervals between consecutive samples. As shown in the table above,
during continuous ranging (without frequent stop/start cycles), both Short
and Long modes maintained an average update interval of approximately
49 ms (around 20 Hz) with minimal jitter (standard deviation around
0.3 ms), indicating a stable sampling rhythm. However, in Long mode, if
stopRanging() is called before each subsequent ranging, the
cycle time slows significantly (around 96 ms). Therefore, for higher update
rates and real-time performance, I favor the continuous ranging approach.
In the final system, I will use Short mode to balance proximity stability
with sampling frequency.
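The mean-period statistics in the table can be computed from the logged timestamps with a small helper (name illustrative):

```python
def dt_stats(t_ms):
    """Mean and standard deviation of the intervals (ms) between
    consecutive timestamps in a log."""
    dts = [b - a for a, b in zip(t_ms, t_ms[1:])]
    mean = sum(dts) / len(dts)
    var = sum((d - mean) ** 2 for d in dts) / len(dts)
    return mean, var ** 0.5
```
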
To connect two ToF sensors simultaneously, I changed the address of one sensor to 0x30 during initialization, as shown in the diagram above. The specific procedure is as follows: First, pull the sensor connected to A3 (XSHUT) low to keep it disabled, allowing only the other sensor to operate on the bus and complete initialization; then, I called setI2CAddress(0x30) to change its address from 0x29 to 0x30. Finally, I pulled XSHUT high to enable the second sensor and executed begin() and startRanging() on both sensors. This allows both ToF sensors to operate in parallel on the same I2C bus for normal ranging.
The video above demonstrates that both ToF sensors can successfully perform distance readings.
As shown in the code above, to prevent the program from blocking while waiting for the ToF sensors to complete distance measurements, I continuously output the Artemis timestamp using millis() within the loop(). Distance data are read and printed only when both the front and right sensors return checkForDataReady() as true.
From the captured serial output, it can be observed that within the 0.404 s interval from 13204 ms to 13608 ms, “Time:” appeared 159 times. Therefore, the loop execution rate is approximately 393.6 Hz. Meanwhile, “Front Distance(mm)…Right Distance(mm)… ” was output 5 times, corresponding to a new ToF data update rate of approximately 12.38 Hz.
Loop iterations per second = 159 / 0.404 = 393.6 Hz (≈ 394 Hz)
Distance updates per second = 5 / 0.404 = 12.38 Hz (≈ 12.4 Hz)
Therefore, the current bottleneck is not the MCU loop execution, but rather the intrinsic ranging update cycle of the ToF sensors (which requires both sensors to be ready), compounded by the overhead of serial printing and I2C communication. Even though the main loop can execute at a high frequency, distance data can only be produced at the rate at which the sensors generate new measurements.
In this experiment, the BLE framework developed in Lab 1 was extended to enable simultaneous acquisition of time-stamped ToF distance data and IMU attitude data within a fixed recording window. The collected data were then transmitted to the host computer in a single BLE transfer for subsequent plotting and analysis.
A new Arduino command, READ_IMU_TOF, was implemented. When triggered, the
system continuously records sensor data for approximately 3 seconds. All sensor
measurements share a common time reference defined as
\( t = \text{millis()} - t_{\text{start}} \), ensuring that the ToF and IMU data are aligned on
the same time axis and temporally synchronized.
The figure above shows the ToF distance curve over time. It can be observed that the front distance remains stable within the range of approximately 85–97 mm, while the right-side distance stabilizes within the range of approximately 103–109 mm. The overall trend is relatively smooth, exhibiting only minor fluctuations.
The figure above shows the pitch and roll of the IMU over time. It can be seen that the pitch stabilizes around 3°, while the roll stabilizes around −0.3°. The overall curve is relatively smooth, exhibiting only minor fluctuations.
In this experiment, I used two dual motor drivers to control the left and right motors respectively. Following the experiment requirements, I connected the two H-bridge channels of each driver in parallel to increase the current available to the motors. For the left motor, its control inputs AIN1/BIN1 are connected to pin 2 of the Artemis, while AIN2/BIN2 are connected to pin 3. For the right motor, its control inputs AIN1/BIN1 are connected to pin 0, while AIN2/BIN2 are connected to pin 1.
Regarding power connections, the motor driver's VIN/GND terminals are powered by an 850 mAh battery to supply power to the motor, while Artemis is powered by a separate battery.
As shown in the figure, we set the power supply voltage to 3.7V, matching the rated voltage of the 850mAh lithium battery to be used later. This ensures that the motor's operating state during debugging aligns with the final battery power supply conditions, preventing test results from deviating from actual operating conditions due to testing at higher or lower voltages.
After completing the hardware connections of the motor driver, I used analogWrite() to generate a PWM signal to regulate the output power of the motor driver. In the program, IN2 was set to a low level to fix the rotation direction, while a PWM signal was applied to IN1 with a duty cycle value of 150 (within the range of 0–255).
To verify whether the PWM signal is output correctly, I measured the waveform at the driver input using an oscilloscope. The oscilloscope probe was connected to the IN1 pin, with the ground wire connected to the system common ground. The waveform displayed a stable square wave signal with a frequency of approximately 183 Hz and a duty cycle of about 59%, corresponding to the set analogWrite(IN1, 150) value (150/255 ≈ 58.8%). When changing the analogWrite() value, the duty cycle was observed to change accordingly on the oscilloscope.
For this section, the external power supply was also set to 3.7 V, and the positive and negative terminals of the circuit were connected to the corresponding terminals of the power supply.
A PWM signal was then applied to the motor driver input. By switching the PWM input configuration to generate forward and reverse rotation, I verified that both motor driver output terminals were able to correctly receive the control signals and drive the motor accordingly.
The following video demonstrates testing of the right wheel. The operation results align with expectations: it first rotates clockwise and then stops, followed by rotating counterclockwise and stopping.
After confirming that the motor operates normally when powered by the external power supply, the system was then powered using the provided 850 mAh lithium battery.
The following video demonstrates the wheel's operational state, showing that it rotates as expected.
The following video shows the right wheel drive in operation.
I started with PWM 30. At this point, the motor only emitted a humming sound, insufficient to drive rotation, indicating the PWM signal was too weak. I then increased the value in steps of 5, from 30 to 35 and then to 40. At PWM 40, the cart moved slightly but quickly stopped again.
I then tested a PWM value of 45, at which point the car operated normally. Therefore, I determined 45 to be the minimum PWM value for straight-line travel. The following video demonstrates the car's movement.
To determine the minimum PWM value required for the wheels to overcome friction during turning, I incremented the PWM value from 40 in 10-step increments. The cart only began rotating when the PWM reached 120. The rotation is demonstrated in the following video.
During previous tests, I observed that the right wheel of the small car rotated faster than the left wheel when receiving identical PWM signals. Therefore, I introduced a calibration factor in the code to scale the PWM of the right motor, thereby balancing the rotational speeds of both wheels. Through iterative adjustments, I determined that 0.63 is the optimal calibration factor when the PWM is set to 80.
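The calibration can be sketched as a simple scaling of the right motor's command (the function name is mine):

```python
def calibrated_pwm(base_pwm: int, right_scale: float = 0.63) -> tuple:
    """Scale down the faster right motor so both wheels turn at matched speed.

    right_scale = 0.63 is the factor tuned experimentally at base_pwm = 80.
    """
    left = base_pwm
    right = round(base_pwm * right_scale)
    return left, right
```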
The following video demonstrates the calibrated vehicle traveling in a straight line along the tape.
I predefined a sequence of actions in the program, enabling the robot to execute movements in the order of straight → right turn → left turn → stop without relying on any sensor feedback.
As seen in the video, the robot first moves straight ahead for a distance, then turns right, followed by a turn to the left, and finally comes to a stop.
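The open-loop schedule can be sketched as a timed action table; the durations below are placeholders for illustration, not the values tuned on the robot:

```python
# Hypothetical durations (seconds); the real values were tuned by trial.
SEQUENCE = [
    ("straight", 1.5),
    ("turn_right", 0.6),
    ("turn_left", 0.6),
    ("stop", 0.0),
]

def action_at(elapsed_s: float) -> str:
    """Return the open-loop action scheduled at a given elapsed time."""
    t = 0.0
    for action, duration in SEQUENCE[:-1]:
        t += duration
        if elapsed_s < t:
            return action
    return SEQUENCE[-1][0]  # after the sequence ends, stay stopped
```

Keeping the schedule as data makes it easy to re-order or re-time the moves without touching the drive code.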
Oscilloscope measurements reveal that the PWM signal generated by analogWrite() on the Artemis board has a frequency of approximately 183 Hz and a duty cycle of about 58.8%, consistent with the program's analogWrite(IN1, 150) setting.
For DC motors, this frequency typically suffices for basic control requirements. However, lower PWM frequencies may cause significant motor current fluctuations and occasionally produce audible motor noise.
Manually configuring the microcontroller's timer to increase the PWM frequency may offer several advantages. For instance, a higher PWM frequency can reduce motor current fluctuations, resulting in smoother output torque. It also helps minimize audible noise, as higher frequencies fall outside the human hearing range. Additionally, higher-frequency PWM enables more stable motor drive, thereby enhancing overall operational performance.
From the experimental video, it can be observed that starting from the previously measured minimum PWM value of 45, the robot was still able to continue moving forward for a distance when the PWM was reduced to 40. However, after running for a period of time, the robot gradually stopped advancing, with only a faint humming sound emanating from the motor. This indicates that at this PWM value, the torque output by the motor had approached the limit of the system's frictional resistance, rendering it unable to sustain the vehicle's motion.
Therefore, the experiment suggests that PWM = 40 sits at the threshold of sustained motion: once the robot is already moving, it can briefly continue at this value, but the motor torque only barely balances friction. At lower PWM values, the motor cannot generate sufficient driving force to overcome friction and maintain stable movement.
I would like to thank Professor Helbling and teaching assistant Julie for their assistance.
In this lab, Bluetooth Low Energy (BLE) communication was used to control the robot and retrieve experimental data. A Python script running on the computer sends commands to the Artemis board through a BLE characteristic. The Arduino program parses the received command and performs the corresponding operation, such as setting PID gains or starting a PID control experiment.
During a PID test, the robot continuously records data including timestamp, raw ToF distance, extrapolated distance, control error, and motor PWM values. These values are stored in arrays on the Artemis board. After the experiment finishes, the recorded data is transmitted back to the computer through BLE notifications, where Python collects the data and generates plots for analysis of the controller performance.
The robot position control uses a standard PID controller.
\( u(t) = K_p e(t) + K_i \int e(t)\,dt + K_d \frac{de(t)}{dt} \)
The error is defined as
\( e(t) = d(t) - d_{target} \)
The target distance is set to 304 mm. To compensate for differences in motor performance, a proportional scaling factor is applied to the left and right wheels:
\( PWM_L = 1.5\,PWM_{base}, \quad PWM_R = 1.0\,PWM_{base} \)
The left wheel rotates more slowly, so additional compensation is applied.
To ensure that the robot approaches the wall quickly when it is far away, the controller is designed so that the robot reaches near-maximum speed at approximately 2000 mm. Since the Arduino PWM limit is 255, the controller’s base PWM is limited to 188, ensuring that the compensated wheel PWM values do not exceed the maximum PWM allowed by the motor driver.
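The cap-and-scale logic might look like the following sketch. Note that the final clip of each wheel to 255 is my own safeguard, since 1.5 × 188 would otherwise exceed the 8-bit range:

```python
PWM_MAX = 255        # Arduino analogWrite limit
BASE_PWM_LIMIT = 188 # controller base-PWM cap from the text
LEFT_SCALE = 1.5     # left-wheel compensation factor

def wheel_pwms(u: float) -> tuple:
    """Map the controller output u to (left, right) PWM magnitudes.

    The base command is capped, then each wheel is scaled and clipped
    to the driver's 8-bit range (the per-wheel clip is my assumption).
    """
    base = min(abs(u), BASE_PWM_LIMIT)
    left = min(round(LEFT_SCALE * base), PWM_MAX)
    right = min(round(1.0 * base), PWM_MAX)
    return left, right
```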
Since the robot’s initial distance from the wall can reach approximately 2000 mm, the long distance mode of the ToF sensor is used to obtain a larger measurement range. Experimental measurements show that the average time interval between two consecutive ToF readings is approximately 39.2 ms, corresponding to a sampling frequency of about 25.5 Hz.
In the proportional control experiment, I first estimated a theoretical initial value for \(K_p\) based on the target conditions. During the experiment, the robot is expected to move at near maximum speed when it is approximately 2000 mm away from the wall, while the target distance is 304 mm. Therefore, the initial error is approximately
\( e = 2000 - 304 = 1696 \ \text{mm}. \)
The controller output satisfies
\( u = K_p e. \)
Since the goal is for the controller to produce a near-maximum base PWM output of approximately \(u \approx 188\) at this error, we can estimate
\( K_p \approx \frac{188}{1696} \approx 0.11. \)
Therefore, \(K_p = 0.11\) was selected as the theoretical initial value for testing.
As shown in the figure above, when \(K_p = 0.11\), the robot is able to approach the target position quickly, but the overshoot is very significant and the robot even collides with the wall. To reduce the overshoot and avoid collision, I then gradually decreased the proportional gain and finally selected \(K_p = 0.04\).
However, overshoot was still observed in the experiment. This is likely related to the deadband, the minimum effective PWM, the relatively low ToF sampling frequency, and the inertia of the robot itself. Therefore, a derivative term needs to be introduced to suppress this behavior.
After introducing the integral term, I gradually increased \(K_i\) to reduce the steady-state error, and found \(K_i = 0.0015\) to be an acceptable upper limit. Although a larger integral gain would eliminate the error more aggressively, it also caused the robot to collide with the wall.
Since the update frequency of the ToF sensor is about 25.5 Hz, while the PID control loop operates at a higher frequency, waiting for every new ToF measurement would limit the controller performance and cause the distance estimate to change in a step-like manner, which is not suitable for smooth control. Therefore, a linear extrapolation method is introduced.
When no new ToF measurement is available, the two most recent measurements \((d_{k-1}, t_{k-1})\) and \((d_k, t_k)\) are used to estimate the velocity:
\( v = \frac{d_k - d_{k-1}}{t_k - t_{k-1}}. \)
The distance at the current time \(t\) can then be estimated as
\( \hat{d}(t) = d_k + v(t - t_k). \)
This approach generates a continuous distance estimate between two consecutive measurements, allowing the controller to run at a higher frequency while maintaining smoother input to the control system.
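A minimal Python version of this extrapolation:

```python
def extrapolate_distance(d_prev: float, t_prev: float,
                         d_k: float, t_k: float, t_now: float) -> float:
    """Estimate the current distance from the two most recent ToF samples.

    Velocity is a finite difference of the last two measurements; the
    latest sample is then projected forward to the query time t_now.
    """
    v = (d_k - d_prev) / (t_k - t_prev)  # mm per second
    return d_k + v * (t_now - t_k)
```

Between ToF updates the controller calls this with the current time; once a new measurement arrives, the sample pair is shifted and extrapolation restarts from it.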
The black curve represents the original ToF measurements, while the red curve represents the extrapolated results. It can be observed that the extrapolated curve closely follows the overall trend of the original measurements while effectively reducing the discontinuities caused by the step-like sampling behavior.
The final selected parameters are \(K_p = 0.04\), \(K_i = 0.0015\), and \(K_d = 0.03\). From the figures, it can be observed that the ToF distance curve decreases smoothly overall and gradually converges as the robot approaches the target position. Correspondingly, the motor PWM also gradually decreases as the control error becomes smaller.
The following three videos demonstrate the robot automatically stopping at the target distance.
The following video demonstrates the robot's ability to resist external disturbances.
I adopted the simplest integral-limiting method: the accumulated integral error, stored in the variable integral_error, is clamped between fixed upper and lower bounds to prevent unlimited accumulation. The code uses
\[ I = K_i \int e(t)\,dt \]
and applies the constraint
\[ \text{integral\_error} \in [-I_{max}, I_{max}] \]
In this experiment, \( I_{max} = 3000 \).
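In sketch form, the clamped accumulation looks like this (the function name is mine; on the robot it is inline Arduino code):

```python
I_MAX = 3000.0  # experimentally chosen bound on the accumulated error

def accumulate_error(integral_error: float, error: float, dt: float) -> float:
    """Integrate the error over one step, clamped to [-I_MAX, I_MAX]."""
    integral_error += error * dt
    return max(-I_MAX, min(I_MAX, integral_error))
```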
The figure on the right shows the results after removing the wind-up protection. The robot exhibits larger overshoot as it approaches the target position. After applying the integral limit, the system converges more smoothly and stabilizes more easily near the target distance.
This experiment referenced the course website created by Katherine Hsu.
For the orientation control experiment in this lab, I implemented a command called PID_ORIENTATION. This command is used to set the control duration, define the target angle (setpoint), configure the controller parameters (Kp, Ki, Kd), and start the orientation control along with data logging.
To support dynamic control, I also implemented a separate SET_ANGULAR_SETPOINT command to update the target angle during runtime.
During the orientation control process, the robot records one data sample at each control cycle and stores it in an array, including
Commands are sent to the robot over BLE from the computer.
This command specifies a control duration of 3 seconds, a target angle of 50°, and controller parameters of \(K_p = 0.8\), \(K_i = 0.001\), and \(K_d = 0.5\).
To realize in-place orientation control, I apply differential drive by letting left and right wheels rotate in opposite directions.
The angular error is defined as:
\( e(t) = \theta_{\text{setpoint}} - \theta_{\text{current}} \)
A PID controller is used:
\( u(t) = K_P e(t) + K_I \int e(t)\,dt + K_D \frac{de(t)}{dt} \)
In implementation, the control output is mapped to motor PWM:
\( \text{PWM} = |u(t)| \)
The turning direction is determined by the sign of the error:
\( e(t) > 0 \Rightarrow \text{turn right} \)
\( e(t) < 0 \Rightarrow \text{turn left} \)
This corresponds to a differential drive in which the two wheels receive equal-magnitude PWM signals in opposite directions.
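A Python sketch of the sign-based mapping, folding in the minimum-PWM deadband mentioned later in this section; the exact forward/backward wheel convention here is an assumption:

```python
def differential_drive(u: float, min_pwm: int = 125, max_pwm: int = 255) -> tuple:
    """Map the orientation-controller output u to signed (left, right) PWM.

    The sign of u selects the turn direction; the magnitude is clamped
    between the motors' deadband threshold (~125) and the 8-bit maximum.
    """
    if u == 0:
        return 0, 0
    pwm = max(min_pwm, min(max_pwm, round(abs(u))))
    if u > 0:   # error > 0 -> turn right: left forward, right backward (assumed)
        return pwm, -pwm
    return -pwm, pwm  # error < 0 -> turn left
```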
For orientation control, it is necessary to estimate the robot's current yaw angle. The most straightforward approach is to integrate the gyroscope angular velocity. However, this method suffers from drift over time. Even when the robot is stationary, gyroscope bias leads to accumulated error in the estimated angle.
To mitigate this drift, the IMU's built-in DMP is used to directly output orientation data. The DMP can directly output orientation in the form of quaternions, which significantly reduces yaw drift and makes it more suitable as the input signal for the PID controller.
In implementation, the DMP is first enabled by uncommenting the following line in the library file ICM_20948_C.h:
#define ICM_20948_USE_DMP
Then, the DMP is initialized following Example7_DMP_Quat6_EulerAngles.ino; this initialization runs after the IMU is set up in setup(). In the main loop, DMP data is read from the FIFO and the yaw angle is calculated from the quaternion output.
During the control loop, this function is continuously called, and current_yaw_deg is used as the input to the PID controller.
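For reference, the quaternion-to-yaw conversion used in that example can be sketched in Python. The DMP's Quat6 packet carries only q1 to q3, with q0 recovered from the unit-norm constraint:

```python
import math

def yaw_from_quat6(q1: float, q2: float, q3: float) -> float:
    """Yaw angle in degrees from a DMP Quat6 quaternion (q1..q3 only)."""
    # Recover q0 from the unit-norm constraint, guarding against
    # tiny negative values caused by floating-point rounding.
    q0 = math.sqrt(max(0.0, 1.0 - (q1 * q1 + q2 * q2 + q3 * q3)))
    # Standard quaternion-to-Euler yaw conversion
    t3 = 2.0 * (q0 * q3 + q1 * q2)
    t4 = 1.0 - 2.0 * (q2 * q2 + q3 * q3)
    return math.degrees(math.atan2(t3, t4))
```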
In terms of sensor limitations, the IMU gyroscope has a maximum range of ±2000 deg/s, which is sufficient for the in-place rotation required in this experiment and does not require additional configuration.
In addition, when the PWM is below approximately 125, the motors do not rotate. Therefore, a minimum PWM threshold is applied.
In this experiment, the orientation control error is defined as the difference between the current yaw angle and the target yaw angle. Since this error is obtained from discretely sampled angle measurements, it is reasonable to compute its derivative.
The derivative term reflects the rate of change of the error. When the robot approaches the target angle rapidly, the derivative term provides a damping effect that counteracts the motion, thereby reducing overshoot and improving convergence speed.
When the setpoint changes during operation, the error experiences an instantaneous jump, which leads to a spike in the derivative term (known as derivative kick), causing a sudden change in the control output.
Since the derivative term is sensitive to noise, a low-pass filter is required in practical implementation to smooth the output. After applying the filter in this experiment, the control becomes more stable, making it a necessary improvement.
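The filtered derivative can be sketched as a first-order low-pass blend of the raw finite difference with the previous filtered value; alpha here is illustrative, not the tuned coefficient:

```python
def filtered_derivative(d_prev: float, error: float, prev_error: float,
                        dt: float, alpha: float = 0.1) -> float:
    """Low-pass-filtered error derivative for the D term.

    alpha in (0, 1] controls smoothing: smaller alpha means heavier
    filtering and a slower but less noisy derivative estimate.
    """
    raw = (error - prev_error) / dt          # raw finite difference
    return alpha * raw + (1.0 - alpha) * d_prev
```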
This experiment adopts a non-blocking control structure to implement orientation PID control. When the PID_ORIENTATION command is received, only the control parameters are updated and a control flag is set, while the main loop continues to perform orientation updates and control computations.
To verify this functionality, I set the target angle sequentially to \(+50^\circ \rightarrow 0^\circ \rightarrow -50^\circ \rightarrow 0^\circ\) during a single run. The results are shown below:
It can be observed that the yaw angle follows the changing setpoint and gradually converges to the target angle.
The corresponding PWM curves show that the left and right wheels alternately reverse direction, enabling orientation adjustment.
The following video demonstrates the robot’s actual turning behavior.
In future navigation or advanced applications, real-time setpoint updates are essential, as the robot needs to continuously adjust its target heading during operation rather than restarting the controller each time.
Regarding whether orientation can still be controlled while the robot is moving forward or backward, this functionality was not explicitly implemented in this experiment. However, the current program has already modularized the orientation control logic. By combining linear velocity control and angular velocity control into the left and right wheel PWM signals, heading correction can be incorporated during motion.
The final tuned gains are \(K_p = 0.8\), \(K_i = 0.001\), and \(K_d = 0.5\). To validate the reliability of the results, the experiment was repeated three times.
The following video demonstrates the disturbance rejection test. It can be observed that when I manually change the robot’s orientation using my foot, it is able to automatically correct itself and return to the target heading.
For the integral term, wind-up protection is introduced by limiting it within the range \([-100, 100]\).
This is necessary because when the error is large or the motor output is already saturated, the integral term may continue to accumulate, leading to excessive control output and causing significant overshoot and oscillations. By constraining the integral term, over-accumulation is prevented, resulting in more stable behavior under different surface conditions and faster convergence.
This project referenced the course websites of Katherine Hsu and Evan Leong.
First, I added commands related to step response for drag and momentum estimation in Part 1. The START_STEP_RESPONSE command is used to initiate a linear test with a fixed PWM, while the SEND_STEP_RESPONSE command is used to send the recorded timestamps, ToF, and PWM data back to the computer.
To support the Kalman Filter experiment on the robot side in Part 4, I have added new commands related to KF-PID logging. The START_KF_PID_LOG command is used to initiate Kalman Filter-based distance estimation and PID control, while the SEND_KF_LOG command is used to transmit recorded raw ToF data, KF-estimated distance, estimated velocity, and scaled control inputs.
On the computer, I can launch step-response or KF-PID experiments directly from a Jupyter notebook and receive log strings asynchronously, which I then parse into arrays for plotting and saving.
In this experiment, I had the robot move in a straight line toward the wall at a fixed motor input, while simultaneously recording the ToF distance, timestamps, and motor input data. Since the output of the left and right wheels was not exactly the same, I set the base PWM to 130 and applied compensation to the left wheel. To prevent the robot from colliding with the wall, it automatically stopped when the ToF reading approached 1000 mm.
The figure below shows the motor input curve. As can be seen, the input remains constant throughout the entire step response, which satisfies the basic conditions for first-order system response analysis.
Next is the ToF distance curve, which shows the robot gradually approaching the wall from a distance of approximately 3900 mm and stopping at around 1000 mm.
To estimate the system speed from the step response, I differentiated the ToF data to obtain a velocity curve.
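That differentiation is a simple finite difference of distance over time; a NumPy sketch:

```python
import numpy as np

def velocity_from_tof(times_s, dist_mm):
    """Finite-difference velocity (mm/s) from ToF distance samples.

    The robot drives toward the wall, so distance decreases over time;
    the speed toward the wall is the negated derivative.
    """
    t = np.asarray(times_s, dtype=float)
    d = np.asarray(dist_mm, dtype=float)
    return -np.gradient(d, t)
```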
Based on this data, the steady-state speed was determined to be \(v_{ss} \approx 2280.8\) mm/s (about 2.281 m/s). The 90% level is \(0.9\,v_{ss} \approx 2053\) mm/s, reached at a rise time of \(t_{0.9} \approx 1.355\) s, at which point the measured velocity was approximately 2086 mm/s.
Let the normalized input be \( u = 1 \).
According to the steady-state relationship
\[ d = \frac{u}{\dot{x}_{ss}} \]
we obtain
\[ d = \frac{1}{2280.8} \approx 4.38 \times 10^{-4} \]
Next, the 90% rise time is used to estimate the momentum term,
\[ m = \frac{-d\,t_{0.9}}{\ln(0.1)} \]
Substituting \( d \approx 4.3845 \times 10^{-4} \) and \( t_{0.9} \approx 1.3546 \,\text{s} \) gives
\[ m \approx 2.58 \times 10^{-4} \]
Therefore, in the subsequent Kalman Filter initialization, the system parameters I used are
\[ d \approx 4.38 \times 10^{-4}, \qquad m \approx 2.58 \times 10^{-4} \]
After obtaining the estimated drag term d and momentum term m from Part 1, I first constructed the system state-space model in Python. The system state is defined as distance and velocity, so the state vector is written as
\[ \mathbf{x} = \begin{bmatrix} x \\ \dot{x} \end{bmatrix} \]
According to the first-order dynamic model,
\[ \ddot{x} = -\frac{d}{m}\dot{x} + \frac{1}{m}u \]
it can be written in state-space form as
\[ \dot{\mathbf{x}} = A\mathbf{x} + Bu \]
where the continuous-time matrices are
\[ A = \begin{bmatrix} 0 & 1 \\ 0 & -\dfrac{d}{m} \end{bmatrix}, \qquad B = \begin{bmatrix} 0 \\ \dfrac{1}{m} \end{bmatrix} \]
Substituting the values obtained in Part 1,
\[ d \approx 4.38 \times 10^{-4}, \qquad m \approx 2.58 \times 10^{-4} \]
gives
\[ A = \begin{bmatrix} 0 & 1 \\ 0 & -1.6998 \end{bmatrix}, \qquad B = \begin{bmatrix} 0 \\ 3876.90 \end{bmatrix} \]
Based on the recorded timestamps, the average sampling time in this experiment is
\[ \Delta T = 0.03912 \,\text{s} \]
Then, following the first-order discretization method in the lab manual,
\[ A_d = I + \Delta T A, \qquad B_d = \Delta T B \]
the discrete-time matrices are obtained as
\[ A_d = \begin{bmatrix} 1 & 0.03912 \\ 0 & 0.93351 \end{bmatrix}, \qquad B_d = \begin{bmatrix} 0 \\ 151.66 \end{bmatrix} \]
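A quick NumPy check of this discretization (using the rounded d and m, so the last digits differ slightly from the values quoted above):

```python
import numpy as np

# Drag and momentum estimated in Part 1 (rounded values)
d, m = 4.38e-4, 2.58e-4
dt = 0.03912  # average ToF sampling time in seconds

# Continuous-time model matrices
A = np.array([[0.0, 1.0], [0.0, -d / m]])
B = np.array([[0.0], [1.0 / m]])

# First-order (Euler) discretization from the lab manual
Ad = np.eye(2) + dt * A
Bd = dt * B
```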
Since the measurement is the positive distance relative to the wall, while the state uses the negative distance form, the measurement matrix is set as
\[ C = \begin{bmatrix} -1 & 0 \end{bmatrix} \]
The initial state vector is set to the first-frame ToF distance and zero initial velocity:
\[ x_0 = \begin{bmatrix} -\mathrm{TOF}[0] \\ 0 \end{bmatrix} \]
Finally, I initialized the covariance matrices required for the Kalman Filter. The initial state covariance is set as
\[ \Sigma = \begin{bmatrix} 10000 & 0 \\ 0 & 250000 \end{bmatrix} \]
the process noise covariance is set as
\[ \Sigma_u = \begin{bmatrix} 2500 & 0 \\ 0 & 90000 \end{bmatrix} \]
and the measurement noise covariance is set as
\[ \Sigma_z = \begin{bmatrix} 1600 \end{bmatrix} \]
I first implemented the Kalman Filter function based on the prediction-update structure described in the lecture.
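A Python sketch of that prediction-update function, wired up with the matrices from Part 2 (the helper name kf_step is mine):

```python
import numpy as np

def kf_step(mu, Sigma, u, y, Ad, Bd, C, Sig_u, Sig_z):
    """One Kalman Filter iteration: predict with the motion model, then
    correct with a ToF measurement y (positive distance, per C = [-1, 0])."""
    # Prediction step
    mu_p = Ad @ mu + Bd * u
    Sigma_p = Ad @ Sigma @ Ad.T + Sig_u
    # Update step
    S = C @ Sigma_p @ C.T + Sig_z
    K = Sigma_p @ C.T @ np.linalg.inv(S)
    mu_new = mu_p + K @ (y - C @ mu_p)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_p
    return mu_new, Sigma_new

# Matrices from Part 2
Ad = np.array([[1.0, 0.03912], [0.0, 0.93351]])
Bd = np.array([[0.0], [151.66]])
C = np.array([[-1.0, 0.0]])
Sig_u = np.diag([2500.0, 90000.0])
Sig_z = np.array([[1600.0]])

# First ToF frame initializes the state (negative-distance convention)
mu = np.array([[-3900.0], [0.0]])
Sigma = np.diag([10000.0, 250000.0])

# One iteration with zero input and a 3890 mm measurement
mu, Sigma = kf_step(mu, Sigma, u=0.0, y=np.array([[3890.0]]),
                    Ad=Ad, Bd=Bd, C=C, Sig_u=Sig_u, Sig_z=Sig_z)
```

After the update, the estimated positive distance \(-\mu_0\) lies between the prediction (3900 mm) and the new measurement (3890 mm), weighted by the Kalman gain.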
Then, I used the initialized \( \mathbf{x} \) and \( \Sigma \) from Part 2 as the initial values of the filter and ran the Kalman Filter point by point on the step response data. Since the step input in this experiment was set to 130, the motor input was scaled as \( u/130 \) at each step. Because the position in the state is defined as negative distance, it was converted back to a positive distance when saving the results so that it could be directly compared with the raw ToF data.
As can be seen in the figure, the KF output closely follows the overall trend of the ToF distance curve and largely coincides with the raw measurements for most of the time.
In this section, I integrated a Kalman filter into the linear PID control program from Lab 5 to replace the original linear extrapolation. This way, even when the ToF sensor updates slowly or there is a temporary lack of new data, the controller can still continue closed-loop control using the distance estimates provided by the filter.
First, I added the BasicLinearAlgebra library to the Arduino side and defined the state, covariance, and system matrices required for the Kalman filter as global variables.
Subsequently, I implemented the same Kalman Filter prediction-update logic on the board as on the Python side.
In the main loop, I implemented the flag_kf_pid mode. In this mode, the first frame of ToF data is used to initialize the filter; after that, a Kalman filter update is performed whenever a new measurement is available, and only a prediction is made otherwise. The distance signal actually used by the PID is therefore no longer the result of linear extrapolation, but kf_est_dist_mm.
As can be seen in the figure, the KF output generally tracks the ToF curve well and remains stable as it approaches the target distance.
In addition, I recorded a video of the robot operating under KF-PID control.
I referenced Katherine Hsu's website.