
Import Data

Importing a CSV file

CSV import/export functionality is built into ScimoreDB Manager. Just right-click on the table and choose the Import/Export sub-menu.


Type the column delimiter character and choose the CSV input/output file. If the first row of the CSV file contains column names, tick the checkbox to skip the first row on import, or to generate a column header row on export.
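For example, a comma-delimited file with a header row (the column names here are illustrative) might look like this:

```
postid,threadid,title
1,2,"First post"
2,2,"A reply"
```

With the header checkbox ticked, the first line is skipped on import instead of being inserted as data.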


Import SQL batch

Execute a list of INSERT SQL commands separated by ";":
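For instance, a small batch (table and column names are illustrative) might look like this:

```sql
insert into customers (id, name) values (1, 'Alice');
insert into customers (id, name) values (2, 'Bob');
insert into orders (id, customerid, total) values (100, 1, 9.99);
```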


Import MS SQL Server

ScimoreDB Manager supports importing tables and data from SQL Server. Let's take an example:

  1. Create the embedded database "myimport". Open the manager and right-click the "Embedded" node, then choose "Create" and select a folder in which to create the database:


  2. Create the "NorthWind" database.


  3. Next, right-click the "NorthWind" database and choose "Import SQL Server".


  4. Fill in the MS SQL Server connection parameters, choose the database to import, and click Next.


  5. The grid that appears lists the tables to import.


  • Import: when checked, the table is selected for import. To check or uncheck all tables at once, click the Select All button, then click any selected Import cell; the operation checks or unchecks all the selected tables.
  • TableName: shows the table name as [schema name].[table name]. Note: ScimoreDB does not support multiple schemas; therefore, table names must be unique.
  • Indexes: if checked, indexes are imported. Note: ScimoreDB requires every table to have a primary key.
  • ForeignKeys: if checked, foreign key constraints are imported.
  • Choose Partition Fields: selects the column the table will be partitioned by.

    • For an embedded database, leave it empty.
    • For a distributed database, you can either leave it empty (the data will be replicated on all nodes) or choose a partition column to distribute the table's data among the cluster nodes.
  • ImportData: when checked, the table's rows are imported.

  • View SQL: opens a query window with the generated SQL/DQL statement that imports the table. This is useful if you want to change the table structure when importing.
  • Status: initially shows the approximate number of rows to import (taken from SQL Server statistics); while importing, it shows either an error or the number of rows imported. (Note: the number of rows will be higher than the actual number of imported rows, as it is the sum of imported rows and system catalog changes.)

  • Finally, press the Import.. button to start importing. While importing, you can stop the process by clicking the Cancel button.
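As a rough idea of what View SQL shows, the generated statement follows the same open_oledb/insert_bulk pattern described in the OLEDB/ODBC section below. The SQLOLEDB connection string and the Customers columns here are illustrative assumptions, not the tool's exact output:

```sql
execute dql
open_oledb 'Provider=SQLOLEDB;Data Source=myserver;Initial Catalog=NorthWind;Integrated Security=SSPI',
'select CustomerID, CompanyName from Customers'
to @T1
project by CustomerID, CompanyName

insert_bulk @T1, 'NorthWind.Customers'
set CustomerID = @T1.CustomerID,
CompanyName = @T1.CompanyName
```

Editing such a statement before running it is how you change the table structure during import.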


Importing data using OLEDB/ODBC with DQL

The DQL command open_oledb can open OLEDB or ODBC data sources. The command accepts two parameters: a connection string and a SELECT statement. The rows returned from the external database can then be inserted into a table using the DQL insert_bulk command. For example, importing from MySQL:

Reading data from MySQL:

execute dql
open_oledb 'Driver={MySQL ODBC 3.51 Driver}; option=1048576; Server=mysql; Database=debat; Uid=myuser; Pwd=Mypassword',

'select postid, threadid, dateline, title, pagetext from post'

to @Results
project by postid, threadid, title, pagetext


The option=1048576 flag declares a forward-only cursor; without it, the MySQL ODBC driver will run out of memory if the dataset is big.

Create schema and insert rows from MySQL:

begin tran
-- create table and full-text index
create database debate
use debate
create table debate (
  postid int not null primary key,
  threadid int,
  dateline varchar,
  title varchar,
  pagetext text
)

-- insert and index rows from MySQL
execute dql
open_oledb 'Driver={MySQL ODBC 3.51 Driver}; option=1048576; Server=mysql; Database=debat; Uid=user; Pwd=password',

'select postid,threadid, dateline,title,pagetext from post'

to @T1
project by postid, threadid, dateline, title, pagetext

insert_bulk @T1,'debate.debate'
set postid = @T1.postid,
threadid = @T1.threadid,
dateline = @T1.dateline,
title = @T1.title,
pagetext = @T1.pagetext

commit
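After the import completes, a quick sanity check (a hypothetical query, assuming the schema created above) is to count the rows in the target table:

```sql
use debate
select count(*) from debate
```

The result should match the number of rows in the MySQL post table.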