When you skip the error, replication skips the entire transaction, not just the failing statement. If a transaction contains multiple statements and only one of them fails, all of them are skipped on the slave, even the ones that would have succeeded.
This has been accurately described by Jervin Real in this blog post:
https://www.percona.com/blog/2013/07/23/another-reason-why-sql_slave_skip_counter-is-bad-in-mysql/
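As a concrete illustration (the table names here are invented for the example): suppose the master runs a transaction with two inserts, and only the second one fails on the slave:

START TRANSACTION;
INSERT INTO orders VALUES (100, 'new order');       -- would apply cleanly on the slave
INSERT INTO order_items VALUES (55, 100, 'widget'); -- fails on the slave, e.g. duplicate key
COMMIT;

Skipping the error skips the whole event group, so the orders row never makes it to the slave either, and the slave silently drifts away from the master.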
Something to note is that skipping errors on Aurora works differently than on a normal MySQL server.
On a normal MySQL slave you would run this:
STOP SLAVE;
SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
START SLAVE;
SHOW SLAVE STATUS;
If you have thousands of errors and want to skip them all, you can set SQL_SLAVE_SKIP_COUNTER to a large number. That said, if you have that many errors, you probably have larger issues and might want to consider rebuilding the slave to prevent data inconsistency problems.
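For example (the 10,000 here is an arbitrary number I picked for illustration):

STOP SLAVE;
SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 10000;
START SLAVE;

Keep in mind the counter skips the next N events whether or not they would have failed, and a transaction caught in the middle of the count is skipped in full rather than partially.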
However, on Aurora you can only skip one error at a time:
CALL mysql.rds_skip_repl_error;
SHOW SLAVE STATUS;
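Because each call skips only a single error, skipping a pile of them means issuing the CALL over and over. One way to automate that is a small helper procedure that loops. This is only a sketch: the procedure name skip_n_errors, the counter, and the one-second sleep are my own choices, and you should verify that calling mysql.rds_skip_repl_error from inside a user procedure behaves on your Aurora version before relying on it:

DELIMITER //
CREATE PROCEDURE skip_n_errors(IN n INT)
BEGIN
  DECLARE i INT DEFAULT 0;
  WHILE i < n DO
    CALL mysql.rds_skip_repl_error;
    DO SLEEP(1); -- give the SQL thread a moment to hit the next error
    SET i = i + 1;
  END WHILE;
END //
DELIMITER ;

CALL skip_n_errors(100);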
I have a client that uses Aurora as a disaster recovery "site". The client has thousands of databases replicating from their local data center into Aurora; if the data center were to go down, all of their data would still be available in Aurora. Periodically, though, some statement breaks replication and I have to skip the error if it is safe to do so.
One of the semi-frustrating things I have to deal with is my client's widespread use of the BLACKHOLE storage engine in Aurora. When replicating from the local data center to Aurora, the client does not always want to replicate every single database. There might be hundreds of databases on a single server, and the client only wants to replicate one of them into Aurora. What I do is export and import all the tables for all the databases into Aurora, then change every table in the databases I don't want replicated to BLACKHOLE. Eventually, though, people create new tables and modify existing ones on the master server in the local data center, and those DDL changes replicate to the databases where I have set all the tables to BLACKHOLE. Sooner or later this breaks replication. Because I don't care about the data in the BLACKHOLE tables, I can skip all the errors, but sometimes I have to skip hundreds of them.
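As far as I know there is no single statement that flips a whole database to BLACKHOLE, but you can generate the ALTERs from information_schema. A sketch, with some_db standing in for a database you don't want replicated; run the generated statements on the Aurora side only, so the master keeps its real tables:

SELECT CONCAT('ALTER TABLE `', table_schema, '`.`', table_name, '` ENGINE=BLACKHOLE;')
FROM information_schema.tables
WHERE table_schema = 'some_db'
  AND table_type = 'BASE TABLE'; -- skip views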