This is an interesting read on database performance (not specific to MySQL) after applying the Meltdown patches to secure data on multi-tenant systems.
https://blog.appoptics.com/visualizing-meltdown-aws/
Copied from the above article:
Applications that make frequent systems calls to read/write data either over network sockets or from disk systems will need to be better tuned for batching. Incurring small I/O operations is now more costly, and engineers will need to optimize their code to reduce the frequency of such calls. Finding the sweet spot between larger batch sizes and latency is difficult and will require software that adapts for multiple variables simultaneously. It was promising to see that the Kafka consumer libraries were able to optimize for this dynamically as network call latency increased.
Wednesday, January 17, 2018
Tuesday, January 2, 2018
Using triggers to audit database changes
I cannot count on people on my team to inform me of changes they make to our databases, so I've created some very basic monitoring that checks the status of certain system variables at a set interval and saves the results into a few tables. I've added triggers to these tables to audit changes and deletions.
This is an old but useful (and free) way of keeping historical information on changes in a MySQL database. The triggers copy the old and new values to a generic change log table (sometimes called an audit log). In this manner, multiple tables can use the same change log table.
In this example I have two tables, one called MyCluster and one called Tag. The Tag table uses key/value pairs to store data, while MyCluster stores specific attributes in columns for each cluster. Here is an example table structure for the two tables (the actual tables have many more columns):
CREATE TABLE `MyCluster` (
`Profile` varchar(100) NOT NULL COMMENT 'Account where data came from',
`DBClusterIdentifier` varchar(255) NOT NULL DEFAULT '',
`Endpoint` varchar(255) DEFAULT NULL COMMENT 'Cluster Writer End Point Address',
`EndpointIPAddress` varchar(50) DEFAULT NULL COMMENT 'Cluster Writer End Point IP Address',
`ReaderEndpoint` varchar(255) DEFAULT NULL COMMENT 'Cluster Reader End Point Address',
`ReaderEndpointIPAddress` varchar(50) DEFAULT NULL COMMENT 'Cluster Reader End Point IP Address',
`ClusterCreateTime` timestamp NULL DEFAULT NULL COMMENT 'UTC time',
`CreateTime` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`UpdateTime` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00' ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`Profile`,`DBClusterIdentifier`),
KEY `ix_Endpoint` (`Endpoint`),
KEY `ix_EndpointIPAddress` (`EndpointIPAddress`),
KEY `ix_ClusterCreateTime` (`ClusterCreateTime`),
KEY `ix_ReaderEndpoint` (`ReaderEndpoint`),
KEY `ix_ReaderEndpointIPAddress` (`ReaderEndpointIPAddress`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='Cluster Specific info';
CREATE TABLE `Tag` (
`Profile` varchar(100) NOT NULL COMMENT 'Account where data came from',
`Type` varchar(255) NOT NULL DEFAULT '',
`Identifier` varchar(255) NOT NULL DEFAULT '',
`Key` varchar(100) NOT NULL DEFAULT '',
`Value` varchar(1000) DEFAULT NULL,
`ResourceARN` varchar(255) DEFAULT NULL,
`CreateTime` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`UpdateTime` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00' ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`Profile`,`Type`,`Identifier`,`Key`),
KEY `ix_Identifier` (`Identifier`),
KEY `ix_Type` (`Type`),
KEY `ix_Key` (`Key`),
KEY `ix_Value` (`Value`(255))
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='Instance Tag info.';
-- This is the audit table or log table:
CREATE TABLE `Audit` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`Profile` varchar(100) DEFAULT NULL,
`Identifier` varchar(100) NOT NULL DEFAULT '',
`TableName` varchar(100) DEFAULT NULL,
`FieldName` varchar(100) DEFAULT NULL,
`OldValue` varchar(100) DEFAULT NULL,
`NewValue` varchar(100) DEFAULT NULL,
`Type` varchar(100) DEFAULT NULL,
`timestamp` timestamp NULL DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='Data comes from triggers on other tables';
-- Here are the example triggers...
-- Example of saving information from a change to column called EndpointIPAddress:
DROP TRIGGER IF EXISTS MyCluster_BU;
DELIMITER $$
CREATE DEFINER=`root`@`%` TRIGGER MyCluster_BU BEFORE UPDATE ON MyCluster
FOR EACH ROW BEGIN
IF (OLD.EndpointIPAddress <> NEW.EndpointIPAddress AND OLD.EndpointIPAddress <> '' AND NEW.EndpointIPAddress <> '') THEN
INSERT INTO Audit (
`Profile`,
`Identifier`,
`TableName`,
`FieldName`,
`OldValue`,
`NewValue`,
`Type`,
`timestamp`
)
VALUES (
OLD.Profile,
OLD.DBClusterIdentifier,
'MyCluster',
'EndpointIPAddress',
OLD.EndpointIPAddress,
NEW.EndpointIPAddress,
'change',
NOW()
);
END IF;
END$$
DELIMITER ;
-- Example of saving information after a delete occurs:
DROP TRIGGER IF EXISTS MyCluster_AD;
DELIMITER $$
CREATE DEFINER=`root`@`%` TRIGGER MyCluster_AD
AFTER DELETE
ON MyCluster FOR EACH ROW
BEGIN
INSERT INTO Audit (
`Profile`,
`Identifier`,
`TableName`,
`FieldName`,
`OldValue`,
`NewValue`,
`Type`,
`timestamp`
)
VALUES (
OLD.Profile,
OLD.DBClusterIdentifier,
'MyCluster',
'Endpoint',
OLD.Endpoint,
'',
'delete',
NOW()
);
END$$
DELIMITER ;
-- Example of saving information after a delete occurs on a key value table. The key is StackName and the value can be anything:
DROP TRIGGER IF EXISTS Tag_AD;
DELIMITER $$
CREATE DEFINER=`root`@`%` TRIGGER Tag_AD
AFTER DELETE
ON Tag FOR EACH ROW
BEGIN
IF (OLD.Key = 'StackName') THEN
INSERT INTO Audit (
`Profile`,
`Identifier`,
`TableName`,
`FieldName`,
`OldValue`,
`NewValue`,
`Type`,
`timestamp`
)
VALUES (
OLD.Profile,
OLD.Identifier,
'Tag',
'StackName',
OLD.Value,
'',
'delete',
NOW()
);
END IF;
END$$
DELIMITER ;
-- Example of saving information from a change to key value. The key is StackName and the Value can be anything.
DROP TRIGGER IF EXISTS Tag_BU;
DELIMITER $$
CREATE DEFINER=`root`@`%` TRIGGER Tag_BU BEFORE UPDATE ON Tag
FOR EACH ROW BEGIN
-- NULL-safe comparison so changes to or from NULL are also audited
IF (OLD.Key = 'StackName' AND NOT (OLD.Value <=> NEW.Value)) THEN
INSERT INTO Audit (
`Profile`,
`Identifier`,
`TableName`,
`FieldName`,
`OldValue`,
`NewValue`,
`Type`,
`timestamp`
)
VALUES (
OLD.Profile,
OLD.Identifier,
'Tag',
'StackName',
OLD.Value,
NEW.Value,
'change',
NOW()
);
END IF;
END$$
DELIMITER ;
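The examples above cover updates and deletes; inserts can be audited with the same pattern. Here is a minimal sketch (the MyCluster_AI trigger is my own addition, not part of the original setup):

```sql
DROP TRIGGER IF EXISTS MyCluster_AI;
DELIMITER $$
-- Hypothetical companion trigger: records each newly inserted row in the same Audit table.
CREATE DEFINER=`root`@`%` TRIGGER MyCluster_AI AFTER INSERT ON MyCluster
FOR EACH ROW BEGIN
    INSERT INTO Audit (
        `Profile`, `Identifier`, `TableName`, `FieldName`,
        `OldValue`, `NewValue`, `Type`, `timestamp`
    )
    VALUES (
        NEW.Profile, NEW.DBClusterIdentifier, 'MyCluster', 'EndpointIPAddress',
        '', NEW.EndpointIPAddress, 'insert', NOW()
    );
END$$
DELIMITER ;
```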
-- Now add some data into the tables, make changes and delete some rows.
INSERT INTO `MyCluster` (`Profile`, `DBClusterIdentifier`, `Endpoint`, `EndpointIPAddress`, `ReaderEndpoint`, `ReaderEndpointIPAddress`, `ClusterCreateTime`, `CreateTime`, `UpdateTime`) VALUES ('test', 'test', 'test', '123', NULL, NULL, NULL, CURRENT_TIMESTAMP, '0000-00-00 00:00:00');
UPDATE `MyCluster` SET `EndpointIPAddress` = '456' WHERE `Profile` = 'test' AND `DBClusterIdentifier` = 'test';
DELETE FROM `MyCluster` WHERE (`Profile` = 'test' AND `DBClusterIdentifier` = 'test');
INSERT INTO `Tag` (`Profile`, `Type`, `Identifier`, `Key`, `Value`, `ResourceARN`, `CreateTime`, `UpdateTime`) VALUES ('test', 'Cluster', '123', 'StackName', 'MyTest', NULL, CURRENT_TIMESTAMP, '0000-00-00 00:00:00');
UPDATE `Tag` SET `Value` = 'MyTestIsDone' WHERE `Profile` = 'test' AND `Type` = 'Cluster' AND `Identifier` = '123' AND `Key` = 'StackName';
DELETE FROM `Tag` WHERE (`Profile` = 'test' AND `Type` = 'Cluster' AND `Identifier` = '123' AND `Key` = 'StackName');
Values that were changed or deleted in the columns that have triggers set up will now appear in the Audit table.
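To verify that the triggers fired, the Audit table can be queried directly, for example:

```sql
-- Show the most recent audited events, newest first
SELECT id, Profile, TableName, FieldName, OldValue, NewValue, Type, `timestamp`
FROM Audit
ORDER BY id DESC
LIMIT 10;
```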
PROS:
1. Easy to implement.
2. The triggers are very simple.
3. Only one table is needed to keep history for any number of tables.
CONS:
1. The data type for old and new values is very generic: no matter what type the data originally was, it is stored as a string (VARCHAR(100) here), so longer values can be truncated or rejected.
2. There are no foreign key constraints between the tables. The columns in the change log table can refer to anything, so without constraints there is nothing to stop accidental or intentional manipulation of the logged values into ones that don't exist in the source table.
3. Triggers add overhead to the system, which can slow performance.
4. Writing queries to revert data is not simple.
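On the last point, a revert usually starts by digging the prior value out of the Audit table. A sketch of looking up the last audited value for one column of one cluster (the actual restore statement depends on the target table):

```sql
-- Last value EndpointIPAddress held before its most recent change or delete
SELECT OldValue
FROM Audit
WHERE TableName = 'MyCluster'
  AND FieldName = 'EndpointIPAddress'
  AND Profile   = 'test'
  AND Identifier = 'test'
ORDER BY id DESC
LIMIT 1;
```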