Most DBMSs don't let you at the files directly - you speak to the DB over a socket. Because SQLite runs in-process, a bug anywhere in that process that writes through a stray file descriptor can scribble over the SQLite files.
(This is more ammunition for the idea that the real software isolation boundary on a desktop computer should not be "user", but "software author"!)
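To make that concrete, here's a sketch of the failure mode - the path and table are invented, and the "buggy" write stands in for any code in the process that ends up holding a descriptor on the database file:

```python
import os
import sqlite3
import tempfile

# Invented example: an in-process bug writes through a descriptor
# that happens to point at the SQLite database file.
path = os.path.join(tempfile.mkdtemp(), "app.db")
db = sqlite3.connect(path)
db.execute("CREATE TABLE t (x INTEGER)")
db.execute("INSERT INTO t VALUES (1)")
db.commit()
db.close()

# Nothing stops other code in the same process from doing this:
fd = os.open(path, os.O_WRONLY)
os.write(fd, b"GARBAGE!")      # tramples the "SQLite format 3" header
os.close(fd)

try:
    sqlite3.connect(path).execute("SELECT * FROM t").fetchall()
    corrupted = False
except sqlite3.DatabaseError:  # "file is not a database"
    corrupted = True
```

With a server-based DBMS the application process never holds a descriptor on the data files at all, so this class of bug can't reach them.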
Most of them would be possible in some variation, although typically with different results.
For instance, client/server databases are typically accessed through a socket, and sockets draw from the same pool of file descriptors as open(). So it's entirely possible for a database connection to end up as fd 2, at which point a stray write(2, ...) sends its garbage to the database instead of stderr.
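A self-contained sketch of that descriptor reuse, using a socketpair as a stand-in for the DB connection (stderr is saved and restored around the demonstration):

```python
import os
import socket

# POSIX hands out the lowest free descriptor number, so if stderr
# has been closed, the next thing opened - here a stand-in "DB
# connection" - lands on fd 2 and receives anything written there.
saved = os.dup(2)            # keep stderr so it can be restored
os.close(2)                  # fd 2 is now the lowest free slot
conn, server = socket.socketpair()
assert conn.fileno() == 2    # the "connection" reused stderr's number

os.write(2, b"oops\n")       # meant as error logging...
msg = server.recv(16)        # ...arrives at the "database" instead

os.dup2(saved, 2)            # put stderr back
os.close(saved)
```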
Although in this case it probably won't do much unless you're very unlucky: the garbage will almost certainly not be a valid protocol message, so the DB will either ignore it, return an error, or drop the connection - without corruption. Still a nasty thing to debug, but not quite as critical.
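A hypothetical illustration of why: wire protocols are framed, so stray bytes that don't parse get rejected rather than interpreted. The magic and layout here are invented, not any real DB's protocol:

```python
import struct

MAGIC = b"DBPR"  # invented protocol magic, for illustration only

def parse_message(buf: bytes) -> bytes:
    """Return the payload, or raise if the bytes don't frame correctly."""
    if len(buf) < 8 or buf[:4] != MAGIC:
        raise ValueError("malformed message")  # server errors out or drops
    (length,) = struct.unpack(">I", buf[4:8])
    if len(buf) - 8 != length:
        raise ValueError("bad length")
    return buf[8:8 + length]

# A well-formed message parses fine...
assert parse_message(MAGIC + struct.pack(">I", 2) + b"hi") == b"hi"

# ...but a stray log line written down the socket fails framing.
rejected = False
try:
    parse_message(b"Segmentation fault\n")
except ValueError:
    rejected = True
```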
More generally there's typically much better isolation between database and application process in a traditional client/server setup, which makes it difficult for a misbehaving program to mess directly with the DB. And having a server process handle the transactions means you don't have to rely so heavily on things like filesystem locking to achieve atomicity. That being said, buggy or limited filesystems can be a problem for any database, for instance when guaranteeing that a transaction has really been committed to disk in case of a power outage. See this for instance:
SQLite is serverless; server-based DBMSs are free to use locking mechanisms other than filesystem primitives to achieve concurrency.
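That reliance is easy to picture: an embedded engine has to coordinate writers and commits through the filesystem itself. A minimal sketch using POSIX advisory locks and fsync() - the file layout is invented, not SQLite's actual journal format:

```python
import fcntl
import os
import tempfile

# Sketch: with no server to serialize transactions, an embedded DB
# leans on filesystem primitives - advisory locks for mutual
# exclusion, fsync() for durability across power loss.
path = os.path.join(tempfile.mkdtemp(), "journal")
fd = os.open(path, os.O_CREAT | os.O_RDWR)

fcntl.lockf(fd, fcntl.LOCK_EX)   # exclusive lock: one writer at a time
os.write(fd, b"page data")
os.fsync(fd)                     # the "commit": force bytes to stable storage
fcntl.lockf(fd, fcntl.LOCK_UN)
os.close(fd)
```

If the filesystem's locking or fsync() semantics are broken (some network filesystems are notorious for this), every step above becomes unreliable - which is exactly the weakness a server process avoids by keeping locking in its own memory.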