MySQL InnoDB Redo Log Archiving
Feed: Planet MySQL; Author: Frederic Descamps. When performing a physical backup on systems that are heavily used, it can happen that the backup speed cannot keep up with redo log generation. This can happen when the backup storage is slower than the redo log storage media, and it can lead to inconsistency in the generated backup. MySQL Enterprise Backup (aka MEB), and probably Percona XtraBackup, benefit from the ability to sequentially write redo log records to an archive file in addition to the redo log files. This feature was introduced in MySQL 8.0.17. How do you enable it? To enable ... Read More
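The excerpt stops before the enablement steps; based on the MySQL 8.0 reference documentation, a minimal sketch looks like this (the label, directory, and subdirectory names are placeholders, not values from the post):

```sql
-- Point the server at one or more labeled archive directories
-- (the directory must exist and must not be world-accessible).
SET GLOBAL innodb_redo_log_archive_dirs = 'bkup:/var/lib/mysql-redo-archive';

-- The backup tool (or a session with the INNODB_REDO_LOG_ARCHIVE
-- privilege) starts archiving into a subdirectory of the labeled path...
DO innodb_redo_log_archive_start('bkup', 'job001');

-- ...and stops it once the data files have been copied.
DO innodb_redo_log_archive_stop();
```

In practice MEB invokes these functions itself; calling them manually mainly matters for other backup tools.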
Amazon RDS increases concurrent copy limit to 20 snapshots per destination region
Feed: Recent Announcements. Amazon RDS now allows you to have up to 20 concurrent snapshot copy requests per destination region per account, an increase from the former limit of five concurrent copies per destination region per account. The new limit applies to snapshots of the Microsoft SQL Server, Oracle, Amazon RDS for PostgreSQL, Amazon RDS for MySQL, and Amazon RDS for MariaDB engines in AWS Regions where Amazon RDS is available. This feature has been enabled on your account, and no further action is needed from you. For more information on copying RDS snapshots, please refer to the documentation guide ... Read More
Ryan Lambert: H3 indexes for performance with PostGIS data
Feed: Planet PostgreSQL. By Ryan Lambert, published June 24, 2022. I recently started using the H3 hex grid extension in Postgres with the goal of making some not-so-fast queries faster. My previous post, Using Uber's H3 hex grid in PostGIS, has an introduction to the H3 extension. The focus of that post, admittedly, is a PostGIS-focused view rather than an H3-focused view. This post takes a closer look at using the H3 extension to enhance the performance of spatial searches. The two common spatial query patterns considered in this post are: nearest-neighbor-style searches, and regional analysis. Setup and ... Read More
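As a hypothetical sketch of the pattern the post describes, one can precompute an H3 cell per row and index it, so regional queries become cheap index lookups instead of spatial scans. All table and column names here are illustrative, and the function name is an assumption (h3-pg v4 uses h3_lat_lng_to_cell; earlier releases used h3_geo_to_h3):

```sql
-- Precompute and index the H3 cell at resolution 8 for each row.
-- lon/lat columns, table, and resolution are illustrative choices.
ALTER TABLE places
  ADD COLUMN h3_ix h3index
  GENERATED ALWAYS AS (h3_lat_lng_to_cell(point(lon, lat), 8)) STORED;

CREATE INDEX places_h3_ix ON places (h3_ix);

-- Regional analysis then reduces to a GROUP BY on the indexed cell.
SELECT h3_ix, COUNT(*) AS n
FROM places
GROUP BY h3_ix
ORDER BY n DESC;
```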
Stream change data to Amazon Kinesis Data Streams with AWS DMS

Feed: AWS Big Data Blog. In this post, we discuss how to use AWS Database Migration Service (AWS DMS) native change data capture (CDC) capabilities to stream changes into Amazon Kinesis Data Streams. AWS DMS is a cloud service that makes it easy to migrate relational databases, data warehouses, NoSQL databases, and other types of data stores. You can use AWS DMS to migrate your data into the AWS Cloud or between combinations of cloud and on-premises setups. AWS DMS also helps you replicate ongoing changes to keep sources and targets in sync. CDC refers to the process of identifying ... Read More
MySQL JSON Tricks
Feed: Planet MySQL; Author: Michael McLaughlin. Are they really tricks, or simply basic techniques combined to create a solution? Before writing these mechanics for using native MySQL to create a compound JSON object, let me point out that the easiest way to get one is to use the MySQL Node.js library, as shown recently in my “Is SQL Programming” blog post. Moving data from a relational model output to a JSON structure isn’t as simple as a delimited list of columns in a SQL query. Let’s look at it in stages based on the MySQL Server 12.18.2 Functions that create ... Read More
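The staged walk-through is truncated here. As an illustrative sketch (table and column names are hypothetical, not the author's), MySQL's native JSON-creation functions can compose a compound object directly in SQL:

```sql
-- Build one JSON document per customer, nesting an array of orders.
-- JSON_OBJECT builds key/value pairs; JSON_ARRAYAGG aggregates rows.
SELECT JSON_OBJECT(
         'customer', c.name,
         'orders',   JSON_ARRAYAGG(
                       JSON_OBJECT('id', o.id, 'total', o.total))) AS doc
FROM customer c
JOIN orders o ON o.customer_id = c.id
GROUP BY c.id, c.name;
```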
Amazon RDS Custom is now available in 2 additional AWS Regions
Feed: Recent Announcements. Amazon RDS Custom is a managed database service for legacy, custom, and packaged applications that require access to the underlying OS and DB environment. Amazon RDS Custom is available for the Oracle and SQL Server database engines. Amazon RDS Custom automates setup, operation, and scaling of databases in the cloud while granting access to the database and underlying operating system to configure settings, install drivers, and enable native features to meet the dependent application's requirements ... Read More
Denish Patel: PSQL Helper: Managing Connections and Simplifying Queries
Feed: Planet PostgreSQL. By Michael Vitale, 22nd June 2022. Categories: Database, postgres, postgresql. For folks using psql to connect to PG databases, it can be a headache to manage a lot of different DB profile connections. PG makes it a bit easier by organizing DB profiles in a file called .pgpass, which contains one line for each DB profile, like this: localhost:5432:mydb:myuser:mypassword. The file must reside in the user’s home directory and must not have global permissions (cd ~; touch .pgpass; chmod 600 .pgpass). But it only simplifies having to remember passwords. You still have to use a tedious psql command like ... Read More
Adam Johnson: How to Find and Stop Running Queries on PostgreSQL
Feed: Planet PostgreSQL. Here’s the basic process to find and stop a query. Note you’ll need to connect as a user with adequate permissions to do so, such as an admin account. 1. Find the pid. PostgreSQL creates one process per connection, and it identifies each process with its operating system process ID, or pid. In order to cancel a query, you need to know the pid of the connection it’s running on. One way to find this out is with the pg_stat_activity view, which provides information about the live queries. For example, try this query: SELECT pid, state, backend_start, substr(query, 0, 100) q ... Read More
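The excerpt's query is cut off; a completed version, plus the follow-up cancellation step (the pid value is a placeholder), would look roughly like:

```sql
-- List live queries with the pid of the connection running each one.
SELECT pid, state, backend_start, substr(query, 0, 100) AS q
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY backend_start;

-- Cancel the running query on a given pid (12345 is a placeholder)...
SELECT pg_cancel_backend(12345);

-- ...or terminate the whole connection if cancelling is not enough.
SELECT pg_terminate_backend(12345);
```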
MySQL Performance : Benchmark kit (BMK-kit)

Feed: Planet MySQL; Author: Dimitri Kravtchuk. The following is a short HOWTO about the deployment and use of the Benchmark kit (BMK-kit). The main idea of this kit is to simplify your life in running various MySQL benchmark workloads with less blood and minimal potential errors. Generally it is as simple as the following:

```bash
$ bash /BMK/sb_exec/sb11-Prepare_50M_8tab-InnoDB.sh 32   # prepare data

# run OLTP_RW for 5min at each load level..
$ for users in 1 2 4 8 16 32 64 128 256 512 1024
do
  bash /BMK/sb_exec/sb11-OLTP_RW_50M_8tab-uniform-ps-trx.sh $users 300
  sleep 15
done
```

Preface: I'm seeing the new (Lua-based) Sysbench since v1.0 as a ... Read More
A graph a day, keeps the doctor away! – Full Table Scans
Feed: Planet MySQL; Author: Frederic Descamps. Full table scans can be problematic for performance, certainly if the scanned tables are large. The worst case is when full table scans are involved in joins, and particularly when the scanned table is not the first one (this was dramatic before MySQL 8.0, when Block Nested Loop was used)! A full table scan means that MySQL was not able to use an index (there is no index, or no filter uses one). Effects: when full table scans happen (depending on the size, of course), a lot of data gets pulled into the Buffer Pool ... Read More
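A quick way to see whether a query will do a full table scan is EXPLAIN. As a small sketch (table and column names are illustrative, not from the post):

```sql
-- type = ALL in the EXPLAIN output indicates a full table scan:
-- no index was usable for the WHERE filter.
EXPLAIN SELECT * FROM orders WHERE customer_name = 'Alice';

-- Adding an index on the filtered column lets MySQL avoid the scan.
CREATE INDEX idx_orders_customer_name ON orders (customer_name);
```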