DBLense Data Studio — User Guide
The all-in-one data management toolkit. Parallel backup & restore, streaming server-to-server migration, cross-RDBMS transfers between SQL Server, MySQL, PostgreSQL, and Oracle, plus multi-format export/import to CSV, JSON, XML, SQL, Excel, TXT, and HTML. Single portable .exe — no installation required.
Getting Started
Double-click DBLenseDataStudio.exe — no installation, no prerequisites. The application opens with four tabs representing its four operating modes:
| Tab | Purpose |
|---|---|
| ⬆ BACKUP | Parallel multi-threaded database export with snapshot consistency and GZip compression |
| ⬇ RESTORE | Parallel multi-threaded database restore with drop/recreate, identity insert, and constraint management |
| ⇄ MIGRATE | Streaming producer-consumer pipeline for server-to-server migration (export and import run concurrently) |
| 📦 EXPORT / IMPORT | Cross-RDBMS data transfer and multi-format file export/import with automatic column mapping |
Connecting to a Server
Click the Connect button on any tab to open the connection dialog. Data Studio supports four authentication modes:
| Mode | Description | Use Case |
|---|---|---|
| Windows Authentication | Uses your current Windows domain credentials | On-premises Active Directory environments |
| SQL Server Authentication | Username and password stored in SQL Server | Mixed-mode servers, cross-domain access |
| Azure Entra ID (Password) | Azure AD username + password | Azure SQL MI with non-MFA accounts |
| Azure Entra ID (MFA) | Interactive Azure AD with multi-factor | Azure SQL MI with MFA enabled |
⬆ Backup (Parallel Export)
The Backup tab exports database data using a multi-threaded parallel engine. Each table is exported on its own thread for massive throughput gains over traditional single-threaded tools.
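The one-thread-per-table pattern can be sketched in a few lines. The snippet below is illustrative only (Data Studio itself is a .NET application); the table names and the export_table function are placeholders for the real per-table export:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def export_table(table: str) -> str:
    # Placeholder for the real per-table export (e.g., BCP out + optional GZip).
    return f"{table}.bcp"

tables = ["dbo.Orders", "dbo.Customers", "dbo.Products"]
max_threads = 4  # mirrors the Max Threads option (2-16, default 4)

# Each table is submitted as an independent unit of work; the pool caps
# concurrency at max_threads, just like the Max Threads setting.
with ThreadPoolExecutor(max_workers=max_threads) as pool:
    futures = {pool.submit(export_table, t): t for t in tables}
    results = [f.result() for f in as_completed(futures)]

print(sorted(results))
```

Because tables are independent, throughput scales with thread count until the server's I/O becomes the bottleneck — hence the 2–16 range on the setting.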
Quick Start
- Connect to your SQL Server instance
- Select one or more databases (or use Select All)
- Choose an output folder
- Configure options (threads, compression, snapshots)
- Click Start Export
Configuration Options
| Option | Description | Default |
|---|---|---|
| Output Folder | Directory where exported .bcp files are saved | Desktop\DBLenseExport |
| Max Threads | Number of parallel export threads (2–16) | 4 |
| Format | Output format: native BCP or SQL scripts | Native |
| Compression | GZip compress output files (.gz) | Off |
| Include Schema | Export DDL (CREATE TABLE) alongside data | On |
| Database Snapshot | Create a snapshot for point-in-time consistent export | On |
Database Selection
After connecting, all databases are listed with checkboxes. Filter options:
- User databases only (default)
- System databases (master, model, msdb)
- All databases
You can also expand a database and select individual tables for a targeted backup.
Snapshot Consistency
When the Database Snapshot option is enabled, Data Studio creates a temporary database snapshot before exporting. This ensures:
- Point-in-time consistency — All tables are exported from the same logical moment
- No locks on production — The snapshot is a read-only copy; production continues normally
- Automatic cleanup — The snapshot is dropped after export completes
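Under the hood this corresponds to SQL Server's standard database snapshot DDL. A hedged sketch of the kind of statement involved — the _snap suffix, the sparse-file names, and the snapshot directory are assumptions for illustration, not Data Studio's documented naming:

```python
def snapshot_ddl(db: str, data_files: list[str], snapshot_dir: str) -> str:
    # Build a CREATE DATABASE ... AS SNAPSHOT statement: one sparse
    # snapshot file per data file of the source database.
    clauses = ", ".join(
        f"(NAME = {name}, FILENAME = '{snapshot_dir}\\{name}.ss')"
        for name in data_files
    )
    return f"CREATE DATABASE {db}_snap ON {clauses} AS SNAPSHOT OF {db};"

ddl = snapshot_ddl("Sales", ["Sales_Data"], r"C:\Snapshots")
print(ddl)
```

The export then reads from the snapshot database, and a matching DROP DATABASE removes it when the export finishes.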
Compression
Enable GZip compression to reduce output file sizes by 60–80%. Especially useful for:
- Large databases where storage space is limited
- Exports that will be transferred over the network
- Archival copies uploaded to Azure Blob Storage
Compressed files have a .gz extension and can be restored by Data Studio or any standard GZip tool.
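Since the format is plain GZip, the compress step is easy to reproduce or verify with any language's standard library. A small Python sketch (note the 60–80% figure depends on your data; the highly repetitive sample below compresses far better than typical rows):

```python
import gzip
import tempfile
from pathlib import Path

workdir = Path(tempfile.mkdtemp())
src = workdir / "orders.bcp"
src.write_bytes(b"row-data " * 10_000)  # stand-in for an exported table file

# Compress to a .gz sibling, as Data Studio does when Compression is On.
dst = src.parent / (src.name + ".gz")
dst.write_bytes(gzip.compress(src.read_bytes()))

print(f"saved {1 - dst.stat().st_size / src.stat().st_size:.0%}")
```

Decompressing with `gzip.decompress` (or `gunzip` on the command line) restores the original .bcp file byte for byte.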
⬇ Restore (Parallel Import)
The Restore tab imports previously exported data using multi-threaded parallel processing — mirroring the speed gains of the Backup tab.
Quick Start
- Connect to the target SQL Server
- Browse to the export folder (Data Studio auto-detects databases and tables from the folder structure)
- Select what to restore
- Configure restore options
- Click Start Restore
Restore Options
| Option | Description |
|---|---|
| Truncate before restore | Clear existing data before importing each table |
| Drop & recreate tables | Re-create table structure from exported schema DDL |
| Identity insert | Preserve original identity column values (SET IDENTITY_INSERT ON) |
| Disable constraints | Temporarily disable foreign keys and check constraints during restore |
| Max Threads | Number of parallel restore threads (2–16) |
⇄ Migrate (Streaming Pipeline)
The Migrate tab provides server-to-server migration using a streaming producer-consumer pipeline. Unlike traditional backup-then-restore workflows, tables begin importing on the target as soon as they finish exporting from the source.
How It Works
- Connect to both source and target SQL Server instances
- Select source databases to migrate
- Optionally override the target database name
- Configure thread count and options
- Click Start Migration
Internally, Data Studio uses Channel<T> (a .NET bounded channel) to coordinate producers (exporters) and consumers (importers). This pipeline architecture means:
- No temp files on disk — data streams directly from source to target in memory (writing intermediate files to disk is optional)
- Lower total migration time — export and import overlap, reducing wall-clock time by 40–60%
- Real-time progress — you can see both export and import status per table
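The same producer-consumer shape can be modeled with a bounded queue in Python. This is an illustration of the pattern, not the app's implementation (which uses .NET's Channel&lt;T&gt; with multiple threads on each side):

```python
import queue
import threading

# Bounded buffer: when full, the producer blocks, applying backpressure
# so export never runs unboundedly ahead of import.
channel: "queue.Queue[str | None]" = queue.Queue(maxsize=8)
imported: list[str] = []

def producer(tables: list[str]) -> None:
    # Exporter: pushes each finished table into the channel, then signals completion.
    for t in tables:
        channel.put(t)
    channel.put(None)  # sentinel: no more work

def consumer() -> None:
    # Importer: starts importing as soon as a table arrives,
    # without waiting for the rest of the export.
    while (t := channel.get()) is not None:
        imported.append(t)

tables = [f"table_{i}" for i in range(20)]
threads = [threading.Thread(target=producer, args=(tables,)),
           threading.Thread(target=consumer)]
for th in threads:
    th.start()
for th in threads:
    th.join()
print(len(imported))
```

The bounded capacity is the key design choice: it lets export and import overlap while keeping memory usage flat regardless of database size.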
Migration Options
| Option | Description | Default |
|---|---|---|
| Source Server | The SQL Server instance to export from | — |
| Target Server | The SQL Server instance to import into | — |
| Target Database Override | Rename the database on the target (e.g., Production → Staging) | Same as source |
| Max Threads | Parallel threads for both export and import | 4 |
| Use Snapshots | Create a snapshot on the source for consistency | On |
| Drop & Recreate | Drop and recreate tables on the target | Off |
📦 Export / Import (Cross-RDBMS & File Formats)
The Export/Import tab is the most versatile mode. It supports three transfer directions:
| Direction | Description |
|---|---|
| Database → Database | Transfer tables between any supported database platforms (e.g., MySQL → SQL Server) |
| Database → File | Export tables to CSV, JSON, XML, SQL, XLSX, TXT, or HTML files |
| File → Database | Import data from CSV, JSON, XML, SQL, or XLSX files into any supported database |
Quick Start — Database to Database
- Select direction: Database → Database
- Configure Source: choose provider (SQL Server, MySQL, PostgreSQL, or Oracle), enter connection details, click Test Connection
- Configure Target: choose provider and connection details
- Click Discover Tables to load the source table list
- Select tables to transfer and review column mappings
- Click Start Transfer
Quick Start — Database to File
- Select direction: Database → File
- Configure source database connection
- Choose output folder and file format
- Configure format-specific options (delimiter, encoding, etc.)
- Select tables to export
- Click Start Transfer
Quick Start — File to Database
- Select direction: File → Database
- Browse to the source file (CSV, JSON, XML, SQL, or XLSX)
- Configure target database connection
- Data Studio auto-detects columns and types from the file
- Review column mappings and target table name
- Click Start Transfer
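Auto-detection from a file amounts to sampling each column's values and picking the narrowest type that fits. A simplified Python sketch of the idea — the real detector handles more types (dates, GUIDs, etc.), and the NVARCHAR(255) fallback width here is an assumption:

```python
import csv
import io

def infer_type(values: list[str]) -> str:
    # Minimal type sniffer: try INT, then FLOAT, else fall back to text.
    def all_parse(cast) -> bool:
        try:
            for v in values:
                cast(v)
            return True
        except ValueError:
            return False
    if all_parse(int):
        return "INT"
    if all_parse(float):
        return "FLOAT"
    return "NVARCHAR(255)"  # assumed fallback width for illustration

sample = "id,name,price\n1,Widget,9.99\n2,Gadget,12.50\n"
rows = list(csv.reader(io.StringIO(sample)))
header, data = rows[0], rows[1:]
schema = {col: infer_type([r[i] for r in data]) for i, col in enumerate(header)}
print(schema)
```

The inferred schema is what populates the column mapping grid, where every name and type remains editable before the transfer starts.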
Supported Database Platforms
Data Studio can transfer data between any combination of these four platforms:
| Platform | Versions | Driver | Bulk Insert Method |
|---|---|---|---|
| SQL Server | 2016+ / Azure MI | Microsoft.Data.SqlClient | SqlBulkCopy (maximum throughput) |
| MySQL | 5.7+ / MariaDB 10+ | MySqlConnector | Batched INSERT statements (500 rows/batch) |
| PostgreSQL | 12+ | Npgsql | COPY BINARY (high-throughput binary import) |
| Oracle | 12c+ | Oracle.ManagedDataAccess.Core | INSERT ALL batched writes |
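For MySQL targets, the batched multi-row INSERT pattern looks roughly like this. Values are inlined here purely for readability — a real implementation would use parameterized statements and proper escaping:

```python
from itertools import islice

def batched_inserts(table, columns, rows, batch_size=500):
    # Yield one multi-row INSERT per batch of rows (500 rows/batch,
    # matching the MySQL bulk-insert figure in the table above).
    it = iter(rows)
    while chunk := list(islice(it, batch_size)):
        values = ", ".join("(" + ", ".join(map(str, r)) + ")" for r in chunk)
        yield f"INSERT INTO {table} ({', '.join(columns)}) VALUES {values};"

stmts = list(batched_inserts("orders", ["id", "qty"], [(i, i * 2) for i in range(1200)]))
print(len(stmts))  # 1200 rows / 500 per batch -> 3 statements
```

Multi-row INSERTs amortize round-trip and parse cost over hundreds of rows, which is why each platform gets its fastest native bulk path rather than row-by-row inserts.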
File Formats
Data Studio supports 7 file formats for export and 5 for import:
CSV (Comma-Separated Values)
| Option | Description | Default |
|---|---|---|
| Delimiter | Field separator character | , (comma) |
| Text Qualifier | Character to wrap string values | " (double quote) |
| Include Header | Write column names as the first row | On |
| Encoding | File encoding (UTF-8, UTF-16, ASCII) | UTF-8 |
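Python's csv module exposes the same knobs, which makes it easy to preview how a given delimiter/qualifier combination will render. Note that QUOTE_NONNUMERIC below is one possible quoting policy, assumed for illustration, and encoding is applied when the output file is opened:

```python
import csv
import io

buf = io.StringIO()
# Delimiter and Text Qualifier map directly to delimiter= and quotechar=.
writer = csv.writer(buf, delimiter=",", quotechar='"', quoting=csv.QUOTE_NONNUMERIC)
writer.writerow(["id", "name"])           # Include Header = On
writer.writerow([1, 'Widget, "large"'])   # embedded delimiter/quote get qualified
print(buf.getvalue())
```

Values containing the delimiter or the qualifier are wrapped in the qualifier, with embedded qualifiers doubled — the standard CSV escaping rule.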
TXT (Tab-Delimited)
Same options as CSV but defaults to tab (\t) as delimiter. Ideal for pasting into spreadsheets.
JSON
| Option | Description | Default |
|---|---|---|
| Pretty Print | Indented, human-readable output | On |
| Wrapper | Array ([ ]) or object with metadata | Array |
XML
| Option | Description | Default |
|---|---|---|
| Root Element | Name of the root XML element | DataSet |
| Row Element | Name of each row element | Row |
| Include Schema | Embed XSD schema in the output | Off |
Excel (XLSX)
Exports styled Excel workbooks with:
- Bold header row with background color
- Auto-fit column widths
- Freeze panes on the header row
- One worksheet per table (named after the table)
Import reads .xlsx files, using the first row as column headers, with automatic type detection from cell values.
SQL Script
| Option | Description | Default |
|---|---|---|
| Include CREATE TABLE | Generate CREATE TABLE DDL before INSERT statements | Off |
| Include DROP TABLE | Generate DROP TABLE IF EXISTS before CREATE | Off |
| Batch Size | Number of INSERT rows per GO batch | 1000 |
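The three options combine into a script like the one sketched below. The column list and DDL are placeholders — the real generator emits each table's actual schema:

```python
def sql_script(table, rows, batch_size=1000, include_drop=False, include_create=False):
    # Assemble a deployment script: optional DROP/CREATE DDL first,
    # then INSERTs with a GO separator every batch_size rows.
    lines = []
    if include_drop:
        lines.append(f"DROP TABLE IF EXISTS {table};")
    if include_create:
        lines.append(f"CREATE TABLE {table} (id INT);")  # placeholder DDL
    for i, row in enumerate(rows, 1):
        lines.append(f"INSERT INTO {table} (id) VALUES ({row});")
        if i % batch_size == 0:
            lines.append("GO")
    return "\n".join(lines)

script = sql_script("dbo.Orders", range(2500), include_drop=True, include_create=True)
print(script.splitlines().count("GO"))  # 2500 rows -> GO after row 1000 and 2000
```

The GO separators keep each batch small enough for SSMS and sqlcmd to execute without hitting memory or transaction-size limits on large tables.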
HTML
Generates a styled HTML document with a dark-themed table, ready for reports, presentations, or embedding in web pages. Export-only (not importable).
Automatic Column Mapping
When transferring between databases, Data Studio automatically:
- Discovers source columns — reads column names, data types, nullability, and constraints from the source
- Maps to target types — converts source types to equivalent target types (e.g., SQL Server NVARCHAR(MAX) → PostgreSQL TEXT)
- Generates target DDL — creates the target table if it doesn’t exist, using the mapped types
You can review and override any mapping before starting the transfer. The column mapping grid shows:
- Source column name and type
- Target column name (editable)
- Target data type (editable)
- Include/exclude toggle per column
Common Type Mappings
| SQL Server | MySQL | PostgreSQL | Oracle |
|---|---|---|---|
| INT | INT | INTEGER | NUMBER(10) |
| BIGINT | BIGINT | BIGINT | NUMBER(19) |
| VARCHAR(n) | VARCHAR(n) | VARCHAR(n) | VARCHAR2(n) |
| NVARCHAR(MAX) | LONGTEXT | TEXT | CLOB |
| DATETIME2 | DATETIME(6) | TIMESTAMP | TIMESTAMP |
| BIT | TINYINT(1) | BOOLEAN | NUMBER(1) |
| DECIMAL(p,s) | DECIMAL(p,s) | NUMERIC(p,s) | NUMBER(p,s) |
| VARBINARY(MAX) | LONGBLOB | BYTEA | BLOB |
| UNIQUEIDENTIFIER | CHAR(36) | UUID | RAW(16) |
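In code, the mapping table reduces to a lookup keyed by source type and target platform. A sketch covering a few rows from the table above — the pass-through fallback for unlisted types is an assumption, not documented behavior:

```python
# Subset of the type-mapping table, keyed by SQL Server type (illustrative
# sketch, not the app's full mapping matrix).
TYPE_MAP = {
    "INT":              {"mysql": "INT",         "postgresql": "INTEGER",  "oracle": "NUMBER(10)"},
    "BIGINT":           {"mysql": "BIGINT",      "postgresql": "BIGINT",   "oracle": "NUMBER(19)"},
    "NVARCHAR(MAX)":    {"mysql": "LONGTEXT",    "postgresql": "TEXT",     "oracle": "CLOB"},
    "DATETIME2":        {"mysql": "DATETIME(6)", "postgresql": "TIMESTAMP","oracle": "TIMESTAMP"},
    "BIT":              {"mysql": "TINYINT(1)",  "postgresql": "BOOLEAN",  "oracle": "NUMBER(1)"},
    "UNIQUEIDENTIFIER": {"mysql": "CHAR(36)",    "postgresql": "UUID",     "oracle": "RAW(16)"},
}

def map_type(sql_server_type: str, target: str) -> str:
    # Assumed fallback: pass the source type through unchanged when unlisted.
    return TYPE_MAP.get(sql_server_type.upper(), {}).get(target, sql_server_type)

print(map_type("nvarchar(max)", "postgresql"))
```

Since the grid is editable, any lookup result is only a default — you can override the target type per column before the transfer runs.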
Error Handling
Data Studio provides three error handling modes for transfers:
| Mode | Behavior |
|---|---|
| Prompt | Pause on each error and ask whether to skip or abort |
| Ignore All | Log the error and continue with remaining tables |
| Abort on First | Stop the entire transfer immediately on the first error |
All errors are captured in the results summary with table name, error message, and row count at time of failure. You can optionally save an error log file for later analysis.
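The three modes boil down to a policy applied inside the per-table loop. In the sketch below, Prompt — interactive in the app — is reduced to "record and continue" for brevity:

```python
from enum import Enum

class ErrorMode(Enum):
    PROMPT = "prompt"          # interactive in the app; simplified to skip here
    IGNORE_ALL = "ignore"      # log the error, continue with remaining tables
    ABORT_ON_FIRST = "abort"   # stop the entire transfer immediately

def run_transfer(tables, do_transfer, mode: ErrorMode):
    # Returns (completed tables, captured errors) per the chosen policy.
    done, errors = [], []
    for t in tables:
        try:
            do_transfer(t)
            done.append(t)
        except Exception as e:
            errors.append((t, str(e)))  # feeds the results summary / error log
            if mode is ErrorMode.ABORT_ON_FIRST:
                break
    return done, errors

def flaky(table):
    # Stand-in transfer that fails on one table.
    if table == "bad":
        raise ValueError("FK constraint violation")

done, errors = run_transfer(["a", "bad", "b"], flaky, ErrorMode.IGNORE_ALL)
print(done, errors)
```

The captured (table, message) pairs are exactly what the results summary and optional error log file are built from.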
Azure & Cloud Integration
Azure SQL Managed Instance
Data Studio fully supports Azure SQL MI with auto-detection:
- Database snapshots use MI-compatible syntax
- Azure Entra ID authentication (Password or MFA)
- Optimized for cloud network latency
Azure Blob Storage
Upload and download backups directly to/from Azure Blob Storage:
- Enter your Blob container SAS URL
- After backup completes, click Upload to Azure
- For restore, click Download from Azure to pull backup files from the container
Progress Tracking
All four modes display live progress information:
- Per-table progress bars — current row count vs. total rows
- Overall progress — percentage across all selected tables/databases
- Active threads — which tables are currently being processed
- Throughput — rows per second
- Elapsed time and ETA
- Status indicators — ✓ Complete, ▶ In Progress, ✗ Failed
The progress DataGrid can be sorted by status, table name, or row count. After completion, a summary shows total rows transferred, errors encountered, and elapsed time.
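The throughput and ETA figures follow directly from the counters: rows/sec is rows done over elapsed time, and ETA is remaining rows divided by that rate. A sketch:

```python
def progress_stats(rows_done: int, rows_total: int, elapsed_s: float):
    # Compute the metrics shown in the progress grid: percent, rows/sec, ETA.
    pct = rows_done / rows_total * 100
    throughput = rows_done / elapsed_s if elapsed_s > 0 else 0.0
    eta_s = (rows_total - rows_done) / throughput if throughput > 0 else float("inf")
    return pct, throughput, eta_s

pct, rps, eta = progress_stats(rows_done=250_000, rows_total=1_000_000, elapsed_s=50.0)
print(f"{pct:.0f}% at {rps:,.0f} rows/s, ETA {eta:.0f}s")
```

At a quarter million rows in 50 seconds, that works out to 25% done at 5,000 rows/s with 150 seconds remaining.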
Typical Workflows
1. Nightly Backup with Cloud Upload
- Select all user databases on the Backup tab
- Enable snapshot consistency and GZip compression
- Set Max Threads to 4–8 (balance speed vs. server load)
- Export to a local folder
- Upload to Azure Blob Storage for offsite archival
2. Environment Refresh (Dev/Staging from Production)
- Use the Migrate tab for streaming server-to-server transfer
- Connect to Production as source, Dev/Staging as target
- Override target database name if needed
- Enable “Drop & Recreate” for a clean refresh
- Tables are available on the target in seconds, not hours
3. Cross-Platform Migration (MySQL → SQL Server)
- Open the Export/Import tab, direction: Database → Database
- Source: MySQL provider, enter host, port, username, password
- Target: SQL Server provider, enter connection details
- Click Discover Tables
- Select tables — Data Studio maps INT → INT, VARCHAR → NVARCHAR, DATETIME → DATETIME2, etc.
- Click Start Transfer
4. Export Data for Stakeholders
- Direction: Database → File
- Connect to the source database
- Select tables to export
- Choose Excel (XLSX) for business users, JSON for APIs, SQL for deployment scripts
- Click Start Transfer — one file per table in the output folder
5. Bulk Import from CSV/JSON
- Direction: File → Database
- Browse to the source file
- Configure target database connection
- Review auto-detected columns and types
- Click Start Transfer — the target table is auto-created if it doesn’t exist
6. Generate Deployment SQL Scripts
- Direction: Database → File, format: SQL Script
- Enable Include CREATE TABLE and Include DROP TABLE
- Export — each table produces a .sql file with DROP, CREATE, and batched INSERT statements
- Run the scripts on the target server using Script Runner or SSMS
System Requirements
| Component | Requirement |
|---|---|
| OS | Windows 10/11 or Windows Server 2016+ |
| Runtime | .NET 8.0 (or self-contained build) |
| SQL Server | 2016+ / Azure SQL MI |
| MySQL | 5.7+ / MariaDB 10+ |
| PostgreSQL | 12+ |
| Oracle | 12c+ |
| Disk Space | Sufficient for exported data (compressed or uncompressed) |
| Installation | None — single portable .exe |
| Permissions | Read access (export), db_owner or equivalent (restore/migrate/import) |