The Best and Worst Microcontroller SDKs (memfault.com)
101 points by ducktective on Nov 14, 2020 | hide | past | favorite | 78 comments


I'm surprised by this checklist, because it covers almost none of what I would be looking for as a professional embedded developer.

What I would like to see:

How easy is it to push the firmware from the computer to the microcontroller? Preferably you should be able to push the firmware over USB or something like that in a few seconds. I don't mind whether that's a command line tool or a button in the IDE, but there has to be something and it MUST work.

Is the tool friendly to source control? I've had very bad experiences with embedded tools automatically creating hundreds of files all over the project and editing them on every build, causing a massive headache when storing the project in source control. Note that software developers expect their development tools to support source control natively, but I have no illusion of that happening in embedded IDEs.

How easy is it to write a simple program, and how is the documentation? For reference, the "hello world" of EE is blinking an LED; the microcontroller vendor should provide an API with documentation and examples to do that (and much more). It's shocking how many MCUs have zero docs, and what little there is doesn't work.

Personal note: it's funny the author wants to compile a hello world, because that's not a thing in embedded; there's no screen on a microcontroller, so printf("hello world") does nothing.

Last, I'd like to know about debug capabilities. Can you connect the computer to the MCU and debug directly on the device? Ideally there should be a full debugger, including breakpoint support, step-by-step execution, and display of all local variables. The debugger should also be able to show debug output from the microcontroller (output from printf or equivalent).


> there's no screen on a microcontroller so printf("hello world") does nothing

Unless printf() is ultimately hooked up to some text-capable interface, like a UART with a terminal on the other end. :) Although I will agree that in general this isn't a typical use case for embedded microcontrollers in the real world.


I'd say a lot of UARTs are running debug consoles, but you typically don't want that for a hello world, because a UART can be pretty complex to initialize when the whole point of a hello world is to get something happening with the minimum of setup.

The UART will have pin config, the clock tree, and the UART itself that all need to be initialized, and these can be finicky when set up incorrectly (particularly the clock tree).

Blinking an LED with a busy wait loop square wave is way simpler and gives you externally visible code execution with less that can go wrong than printing to the UART.


>Blinking an LED with a busy wait loop square wave is way simpler and gives you externally visible code execution with less that can go wrong than printing to the UART.

Yeah, but everyone doing professional embedded work uses a proper debugging tool to load that LED code. "Externally visible code execution" is just when the code hits the reset breakpoint and you start stepping through the code.


You'd be surprised how often there's a difference between the debug load tooling and booting from regular flash. That leaves you with code that loads just fine over JTAG, but can't boot from power-on. Stuff like your code running out of SRAM, or the clock tree looking totally different depending on how the code was loaded.

Once you get much past that point you want your team to use an OTA update mechanism as much as possible during normal development so that it has a ton of cycles on it by the time it ships.


>I'd say a lot of UARTs are running debug consoles

You are already assuming RTOS/Linux with this.

UART is easy to initialise on about all platforms. On an AVR you don't even have to think about configs or clock trees.

UART is one of the best ways to debug and vastly superior to just blinking a led.


> You are already assuming RTOS/Linux with this.

> UART is easy to initialise on about all platforms. On an AVR you don't even have to think about configs or clock trees.

I'm assuming fairly standard microcontrollers, but yes, from my experience as a lead for a proprietary RTOS that I ported to dozens of boards.

I'd say that AVR is the outlier here. Nearly every more complicated core has clock configuration that's finicky and needs to happen before the UART comes up. Blinking an LED just needs a couple of writes to the PINCONFIG and GPIO blocks and a busy wait loop that isn't optimized out.

> UART is one of the best ways to debug and vastly superior to just blinking a led.

It is once you've validated that code execution is actually occurring.


> UART is one of the best ways to debug and vastly superior to just blinking a led.

A proper debugger, and if the platform allows, a tracer are the best ways to debug/profile code on an MCU.


It's been a long time since I was in embedded. At the time, logic analyzers were expensive and we only had a few, permanently housed on the greybeards' desks. I used to read UART traffic off an O-scope for debugging. Learned a lot in that job.


>as a professional embedded developer.

>there's no screen on a microcontroller so printf("hello world") does nothing.

I'm going to take a very straight jab at you here and say that even if you are working professionally, it's in a very specific and not general way and you have been doing the same thing for quite some time.

As another commenter already mentioned, it is the most basic of things to have printf go to UART and to your monitoring terminal, and actually it tells us a lot that you are assuming that printf goes to a display on the device of all places! And that there can't be a screen on a device.


>>there's no screen on a microcontroller so printf("hello world") does nothing.

The article is specifically about ARM Cortex-M4s, which support semihosting. So most of the tools listed in the article will just automatically spit printf out to the semihosting console without any effort.

Even if there isn't semihosting on a platform, the first thing I almost always do with board bring-up is pipe printf to some debug port (uart, ethernet, usb, etc).


I think there's been one project I worked on that had a UART hooked up to printf, usually the pin+code overhead didn't make sense.

LED or Logic Analyzer are usually where I lean before reaching for printf.


A robust console-like system to capture when "interesting things" happen is an incredibly valuable tool to have for any complex embedded project. When you have hundreds or thousands of something in the wild, you can't rely on always having a logic analyzer connected when something anomalous happens.

Oftentimes, a UART is just one of many sinks for console-like data. You might direct messages to nonvolatile memory for logging purposes, over a network link for remote diagnostics, or to a display if you have one.

If you're running Linux, then you get that for free more or less. If you're running bare metal, then you usually have to build it yourself. It's always worth it.


Look, I'm not knocking printf, for larger systems it's great.

However, you're just not gonna spin up a UART debug console for an ATtiny; I've seen printf take ~2k of flash, which would be a quarter of your total storage gone.

If you're reaching for Linux you're already on the edge of the domain space that most microcontrollers occupy.


I used to spend so many off-work hours on my custom printf functions to try so desperately to find optimizations to save on space. I got it down to, IIRC, ~1300 bytes on a PIC8 and ~1500 on an MSP430. I left embedded because the pay:work ratio was shit (compared to backend / web development) but damn do I ever miss that work.


> pay:work ratio was shit (compared to backend / web development)

This is my current dilemma. Does this fact still hold? And will it hold in near future?


In terms of pay/work ratio, it might always hold true that it trends lower for embedded work. For individuals, though, I think the variation is a lot higher.

In my personal experience, there's a huge demand for experienced embedded software engineers in the bay area, and companies are willing to pay competitively. The team at my current company has grown from just me to a dozen embedded engineers, and we have several open positions for more. When I worked at Google, I'm pretty sure the embedded software folk had above average total compensation compared to general software engineers.

I think a big part of it, though, is that embedded engineers in silicon valley tend to get pulled into longer term high capital expense projects. The compensation packages tend to be more equity than salary, and so the salary component is below average. You need to be willing to take the risk to get the upside.

From what I hear from friends in the midwest, the compensation is significantly lower than the difference in cost of living, which is unfortunate. Salaries may be above average, but without much if any equity.

Another part of it, I think, is the sheer amount of interdisciplinary and domain specific knowledge required to be a truly strong embedded software engineer. The demand curve is very heavily biased toward senior level engineers. The companies with the money to spend want the best, and with such high capital expenditure projects, they can't really afford not to.


I don't think it ever held as a fact, it's really dependent on location.

If you compare web developer at FAANG to a random embedded developer, of course embedded is a joke. (A better comparison might be workers at Intel/AMD and I don't think they are terribly treated).

If you compare jobs in another city/country, maybe a city that has a bit of defense or aerospace industry, embedded is not necessarily that bad compared to other tech jobs available.


Yes, but this is true for many fields. Work that is 'more fun' tends to be paid worse, since people actually like to do it.


> just not gonna spin up a UART debug console for an ATTiny

That's really an argument to design a dev system with more resources than the final target and do most development on that.


Well yeah, if you're using such a simple MCU that you're basically writing assembly whether or not you have a C compiler, you aren't going to end up with something complex enough to need console-like logging. At that scale, you can more or less exhaustively test every code path and input to prove correctness.

You don't have to go too far from there to benefit from a debugging console, though. There's tons of design space between 8-bit 8-pin MCUs and full featured processors with MMUs and megabytes of memory. You don't even have to leave 8-bit to benefit from a "printf" mechanism.

Tangentially, you don't necessarily need to implement "printf" within an embedded firmware to benefit from it. It's entirely possible to offload the fmt string expansion, instead capturing the raw fmt string and arguments as data and expanding it at point of consumption.


I don't believe I said at any point that printf isn't useful, there's just reasons why you don't use it.

Heck, even on the PSP, which had a fair amount of horsepower, console logging was a blocking call, and we avoided it unless necessary because it would throw off all sorts of timings (and drive the frame rate into the toilet).

In the microcontroller space, 2/3rds of what I'm debugging is cycle-timing sensitive: I2C or SPI sequencing, and there a logic analyzer that I can cross-correlate with actual external signals is so much more useful. They've gotten cheap enough these days that it is sometimes just easier to connect one to a spare pin and toggle that pin based on the internal state I want to track.


I don't think the author explicitly said it until the end of the article, but this comparison only looks at the setup process. I thought it was a well-written article, so hopefully he continues with the series and hits some of the points you mentioned.


I've been using Zephyr with Nordic chips for all of my personal projects lately, and it's been fantastic. (Zephyr's the upstream for the mentioned "equally excellent nRF-Connect-SDK")

Unlike most of the other options, Zephyr's run as an actual open source project and not just as a closed box you can peer into every once in a while, and Nordic seem to have gone all in on it.


Hi, what type of personal projects are you working on? I started learning C recently and I want to learn embedded development, but I don't know where to start or how.


Not OP, but one thing I found useful when starting out with embedded development was to just get a simple "hello world" sort of program running on bare metal and printing over serial, and to try to understand what was happening in various places that made my program work (code execution on the microcontroller, sending of data over the serial lines, and so on). Even a straightforward Arduino-based environment [0] is usually enough to get started and begin poking around to see what else is possible!

For myself, after getting something simple like the above working, it was a gateway to much more experimentation of further possibilities, and ultimately I ended up making an interactive shell [1][2] with some built-in utilities and text games I also wrote that runs on a variety of AVR microcontrollers (those very same Arduino boards I had when starting out).

[0] https://www.arduino.cc/

[1] https://community.atmel.com/projects/avrsysh-systemshell-som...

[2] https://github.com/ls4096/avrsysh


> I started learning c recently and I want to learn embedded development, but I don't know where to start or how.

After 38 years of programming in C, I'd instead recommend exploring CircuitPython.org from Limor ("Lady Ada") Fried based on the work of Damien George. Also recommend starting out using one of her kits https://www.adafruit.com/category/116 and avoiding rechargeable batteries (unless they're NiCads) while keeping your USB power adapter on an easy-to-reach power strip for a while.

Even back in the day, I would have recommended new embedded systems developers start with FORTH or BASIC (PEEK and POKE yoh) ..or if it was a serial/UART-controllable device, Phillippe's insanely great Turbo Pascal --that is unless you had already written a lot of applications in C and maybe a few operating system hacks or device driver mods too.


My main one right now is a low latency game controller firmware: https://github.com/passinglink/passinglink

I think the most important thing to do is figure out what you actually want to make. Any microcontroller SDK is going to be at least usable for most things.


Zephyr = winning!

(except for the dumbass name, but could be worse, like Deb-Ian)


Until you have to go hunt a bug in Zephyr. Or you need a particular subsystem to have immediate priority NOW.

It's quite irritating that everybody wants to complicate something which is basically "1) call this init function" "2) call that function every n ms on a timer".

The problem in programming (and this is true even for desktop stuff) is that everybody wants to own the event loop.

No. No no no no no no no. Bad programmer. No biscuit.

Two systems that want to own the event loop can't compose.

Maybe I'm running a CANOpen and a BLE stack. Both want to own everything. Well, sometimes that CAN system emits an event that I MUST not ignore (human proximity alert on a robot--for example). And I have to spend a not insignificant amount of time getting the bloody BLE stack out of the way so that I can focus solely on the CAN system for the moment.


> everybody wants to own the event loop

Ah yes, one of my fav songs from the New Wave era


By "Tears for Engineers"


+1, been using Zephyr on STM32L4 and I've never been as happy. Also checks a lot of the checkboxes on integration with debug & tooling and source control.


It does tick a lot of boxes, but I can't say I'm happy with the absurd overcomplexity of the configuration and build system (west + kconfig + cmake). It is extremely opinionated, and while I can see why they did things this way, I'm not personally comfortable with it, particularly when you have to do extensive validation of the build. Simplicity is a virtue.


Then maybe try out RIOT-OS. It tries to achieve the same as Zephyr but tends to be a lot simpler. All you need to do to flash an application is

    make BOARD=your_board flash
It has pretty good networking support (6LoWPAN) and is mostly used for sensor network applications.


Are you using the actual upstream or NCS?


I'm using upstream directly. I started using Zephyr with STM32, but even if I hadn't, I'd probably have started with upstream just to make it easier to send patches back upstream.


It seems like Nordic recommends their fork (NCS) which is ahead of upstream and contains more examples and support. I feel weird managing multiple zephyr and west instances...


> Cypress PSoC Creator, Maxim DARWIN 0/10

Ha.

I inherited a project that used a PSoC module and had to set up and keep a Windows box around for a year until I could get that damned thing designed out of the next revision (replaced with a Nordic).

Everything has to be done in a buggy, bloated, custom GUI tool, even configuring peripherals, even flashing. Nothing is scriptable. Their chip might have been perfectly fine, but working with their software was torture. They treat their customers like idiots -- you have to pull teeth to get something as basic as a memory map so that you can fix things when they inevitably don't work.

Completely agree with the 0/10 assessment. I'd sooner quit my job than use one of these chips again.

Edit: Another comment suggests that Cypress has a new, less insane software tool now, that's cross platform and can work with Makefiles. shrug


The PSoC hardware isn't fully documented, either. Some critical parts of the UDB architecture (which is arguably the most interesting part of the product!) aren't explained in the register documentation -- for example, the registers which control routing are described as "RAM configuration bytes for channel", with no further explanation.

If you're using Cypress's IDE, it can generate the required configuration data for you. But if you want to take off on your own, you're out of luck.


I keep my toolchains for embedded development on Linux VMs. Basically: one project == one VM. I know I can bring it up after a couple of years and be able to fix a bug or add a feature.


The article has good points, but for some reason chooses to ignore the alternative, open SDKs many µC families have available.

The likes of avr-libc, energia, stm32-base, stm32duino or libopencm3.


I used Nordic SDK's for years and didn't like them. There were many problems:

- They released new versions often, and they were tied to new versions of the hardware. So you HAD to upgrade your SDK to use the new hardware. And they would drop support for the old hardware so if you wanted to be able to buy chips you had to upgrade.

- They changed the API's often which broke all my software. Not little changes either but major ones.

- They have lots of examples programs but the examples can't be combined. Want to use the interrupt handling example with the timer example? It doesn't work and you have to reverse engineer their code to find out why.

- The example programs are not updated until long after the SDK changes. Because the changes in API's are large the examples often don't work.

- Some features didn't work. We specifically chose the nRF51822 because it was supposed to be able to do simultaneous send and receive of BLE on multiple channels. Turns out that was marketing hype, the feature was experimental, and they didn't deliver a complete working example for years.

- The debugger was useless. It worked for simple programs but this chip is a real time system and turning on the debugger broke the real time operations. Often it didn't even work at all, returning random values for variable contents.

It's difficult to find out things like this when all you have is the spec sheets and forums to rely on. It's only when you get deep into the system you find them out. That's why developers often stick with one microcontroller forever; they know it inside out and all the tricks to making it work. I have a suspicion that some of the problems were intentional to get people to buy their development services.

Microcontrollers themselves are still stuck in the dark ages. Atmel introduced some positive changes, but most MCUs are still really primitive and difficult to use. A 5000-page manual? You've got to be kidding! And critical changes to the info in the manuals are published in errata sheets? Unusable!

If I turn on a UART, why doesn't it default to a standard configuration instead of making me set bits in registers? Perhaps it would increase cost slightly, but there should be much more standardized automation in MCUs, and more should be done in hardware rather than requiring software. There should be autonegotiation of communication protocols. MCU design hasn't changed much in decades and is ripe for disruption.


I recently started using the ESP32 and its IDF SDK, and find it hits all the author's desiderata.


I agree, this is a nice writeup, but I was surprised some other big players were not covered: ESP32, Microchip XC32, TI CCS, etc.


I do wish Arduino would increase the scope of their supported features. As a platform abstraction layer, the arduino.h C++ header provides an excellent embedded development experience, and there is a large number of compatible libraries online. But the scope is very, very small and limited; for features like parallel IO you often have to drop down to inline assembly or invoke the platform SDK.


I have been using PlatformIO as of late, which has the option of selecting frameworks when the project is created. While that doesn't really solve the problem of having a universal abstraction layer, it does ease the burden of installing frameworks since it handles that automatically and it provides a more consistent development environment. As for that universal abstraction layer, I doubt that it is practical. There is too much variation across features offered by MCUs. Ultimately some features would end up unsupported on some products unless every vendor decided to put their weight behind it (and agree upon how everything should be implemented).


One problem I have with Arduino and such is that they sell their products as educational toys, and thus unsuitable to incorporate into any serious product.


Arduinos are just dev boards. You can always take the underlying microcontroller and design your own PCB.


You don’t even need to use it with Arduino hardware. At this point I think there are more people using the Arduino SDK with ESP devices.


Agreed, the ESP32 ecosystem is a stellar example of what MCU vendors could offer.


When I used it for a hobby project, it was pretty gnarly. All I wanted to do was sample an ADC channel at 8kHz. The software-triggered ADC proved to be fundamentally broken (it felt like a race condition in their lower-level coprocessor), and I had to switch to I2S so that the ADC would be accessed by DMA. The documentation was basically non-existent and the most relevant code examples were an incoherent mess. I eventually got it to work barely well enough, after probably spending 16 hours on it.

I'm still impressed with the wifi functionality. It's a cool chip, and I appreciate the company's willingness to support hobbyists. I wouldn't use their SDK as the reference for a particularly good vendor support package, though. On the other hand, maybe there are no good vendor support packages and the ESP32's is one of the least worst?


I've had a similar experience with the ESP-IDF, with racy bugs in certain parts, broken I2C, etc. Though it turns out most of those were due to their v4.1 branch being listed as stable (AFAICT) when it's actually pretty broken. Switched to v4.0 and everything works. Still, it's open source and the devs will respond to GitHub issues.

Overall I'd say it's not too bad, or not much worse than other chips. It seems every piece of embedded hardware will have quirks and the SDKs will have pain points. From what I gather, and in my limited experience, it's more a matter of how many and how severe. I prefer at least an open source SDK, where it's possible to share patches with the broader community. From what I gather here and elsewhere, the only other open source SDKs are for the Nordic chips.

P.S. the esp32's ADC supposedly has terrible calibration issues.


I used the ESP32 to build a two lead ECG SD logger with real time viewing over wifi. I randomly had to get a pacemaker this year, and it's been very useful to self-diagnose device issues (such as why my heart rate was jumping back and forth between 50bpm and 100bpm!) The pacemaker generates 350uS 2V "capture pulses", and they're very easy to detect coming out of an analog ECG amplifier front end. The ESP32 would completely miss maybe 1 out of every 5 capture pulses until I switched to I2S. Now it reliably samples every single pulse. The data coming out is good enough to see what I need to see; qualitatively how my heart is beating.

When using software-triggered ADC conversions via an 8kHz timer interrupt, the racy issue would occur periodically for several milliseconds at a time. The ADC would act as if its input was disconnected and floating. With a changing input signal, the samples would read as a nearly constant voltage and completely miss blips that were well within the Nyquist frequency.

Switching to I2S/DMA seemed to work around that issue. Even with the ADC working reliably, the samples suffer from what I can only guess is horrendous differential non-linearity. The waveform really looks quite ugly, despite the ADC being driven by a very low impedance amplifier output. At least I can clean up the ECG plots with a low pass filter to make them pretty. The pacemaker pulse is a 350uS square wave, but the heart's natural impulses are significantly lower frequency.


Fantastic that you're able to track down your health issues. One day I really want to get into making health tracking sensors.

Oh, I've read about the ESP32 ADC being pretty non-linear, but the racy issues are new to me. Do you have example code or a repo showing how you use I2S to read the ADC? I'd like to add it to my ESP32 wrapper library (see `elcritch/Nesper` on GitHub). It mostly started as Nim wrappers for the ESP-IDF APIs, but it's letting me wrap certain error conditions in the ESP-IDF and configure workarounds. Mostly small stuff, but handy.


Due to all the medical scans I've had done, I can confidently say that my heart is in perfect health other than the nerves being flaky. The pacemaker is a nearly flawless replacement, but it has its technical limitations. It's been very handy to see what's going on when my heart is beating funny.

Here's the relevant section. I was getting pretty fed up with battling the ESP32, so it's probably not the prettiest.

http://sgtnoodle.com/files/adc_snippet.txt

The function runs as an RTOS task, woken up once every millisecond by a timer. I also pinned it to one of the two CPUs because I was originally doing software-triggered sampling at 8kHz. The timer and pinning are rather overkill, since I2S clocks the sampling and has tons of buffer.

In an effort to get better data, I have the I2S sample at 40kHz. The function copies the samples out of the I2S buffers and down-samples to 8kHz. The 8kHz samples are pushed into a circular buffer that uses a std::atomic head pointer to make it thread safe.


Thanks! I'll take a look at it and try and port it over. Hopefully it'll help some other poor souls (possibly myself) who have to deal with the same issues.


Are you using the IDF directly or via something like PlatformIO? I've been using PIO a lot, but I wonder if there's advantages to just using the IDF on its own


I’m invoking idf.py directly. I edit my code in vim.


The author's article only covers ARM Cortex-M4 SDKs; its title is a little over-broad.


Microchip's MPLAB X is maybe not the worst I have used, but what irritates me about it is that you have to purchase a license for the C compiler to get it to produce optimized code, even though it is a GCC-based compiler. You would think a hardware company like them would give away the development tools to encourage people to use their hardware. The funny thing is that it's relatively simple to bypass their license check.


> The funny thing is that it's relatively simple to bypass their license check

That's a business decision. You don't need secure DRM for products that are intended for other companies, and it's a waste of money to make one. No sane company will pirate it anyway.


Can personally vouch for the Nordic series chips. Their SDK has allowed for some of the most pleasant coding I have ever done.


The best SDK for me is no SDK. I have been playing around with these new Sipeed Longan Nano things that have an LCD display attached and you just stick it in the USB port. I have been doing mostly low level work writing my own LCD and interrupt handler routines and it's been good so far. There's of course a small amount of work to be done but I don't have to deal with the things mentioned in the article.


Perhaps we could make a similar list for FPGAs. Or for more complicated systems like Nvidia Jetson.


It would be a short article.

Either they work with the open stack, or they don't.

In practice, excluding not-ready-for-general-use work, this is down to three families: iCE40, ECP5 and EOS S3.

iCE40 is a good one to get started with, thanks to dirt cheap development boards such as the iCESugar.


> EOS S3

Interesting. I've been in the industry forever and never heard of Quicklogic. It's not like they're new either.

What's the deal with these guys?

Although I wish somebody would produce a 5V compatible FPGA part that doesn't have a zillion pins. I generally only need a bit of logic, but I have to use a bunch of old-school level translators and an FPGA because the FPGA can't deal directly with 5V.


>What's the deal with these guys?

They did something clever, which is that they added support for their chips into the open stack, themselves.

This has automatically done wonders for their visibility. No, I hadn't heard about them either, until this happened.


If I did that for Xilinx (Vivado) this wouldn't be a comparison but a long rant ^^

There are only two manufacturers of FPGAs, Xilinx and Altera; I don't imagine the other one is any better.

More than the tooling, there's a very steep learning curve; it's a professional tool for professional EE/SW work, and I don't think it's something you can jump into easily like you could jump into an Arduino example or a Python shell. Literally none of the things mentioned in this article would be applicable, because HDL (hardware design) is not software development.


It's a little out of date ('13), but fpga4fun has a comparison[1].

[1] https://www.fpga4fun.com/FPGAsoftware1.html


Sadly the article is obsolete. It's talking about Xilinx ISE, the previous IDE from Xilinx that was retired in 2013. It was already end of life when the article was written.


Arguably, Xilinx and Altera are no longer independent manufacturers, having been bought by AMD and Intel. Whilst those are the major players, there are independent FPGA manufacturers other than those two, Lattice (notable for having an open source toolset) and the PolarFire family from Microsemi, plus Gowin from China.


I'd mention that Xilinx family 7 has project x-ray, and gowin has project apicula.

Not as ready for general use as project icestorm (iCE40) and project trellis (ECP5), but promising.


I really like the NXP SDKs. What wasn't rated here is how well written the SDKs and peripheral drivers are; my experience with NXP was really good. The ST SDK is good as well, but it has these two different levels to it that add confusion to creating software.


What about the mbed sdk? I’m curious how it compares to these.


I found it easier to work with mbed than STM32Cube for USB and ethernet. I found it really hard to understand the autogenerated STM32Cube with USER CODE BEGIN lines and couldn't make sense of non-working examples for Ethernet and USB for my target board.

Mbed was more of an abstraction layer - I saw a lot of similarities to what STM32Cube does underneath.

The process of adding target platforms has a bit of a steep learning curve. I copied a target with the same MCU and worked from there. But once the target platform is implemented writing code is pretty enjoyable.

See https://github.com/ARMmbed/mbed-os/pull/11648/files for an example of adding a target.

See https://rohfle.gitlab.io/posts/2019-10-19-olimex-stm32e407-t... for example ethernet code.


No mention of ESP8266/ESP32?


Brilliant write-up. I really hope this paves the way for more developers writing about their development experience and their quality of life with their tools, discussing what factors affect them. Especially embedded, but cross-environment comparisons elsewhere could be neat too.



