Scimore News Blog

Distributed database - Installation

2010-02-04 - Marius Slyzius


My name is Marius Slyzius and I will be posting a series of blog posts about the distributed database:

  • Installing the cluster.
  • Creating schemas: replicated/partitioned tables, executing distributed queries, DQL query plans.
  • Designing a "true" shared-nothing schema: pros and cons.
  • Setting up failover/failback.

Today's post is about how to install the cluster.

ScimoreDB Distributed is a database where data is partitioned or replicated among N nodes. A node is a database server that has been joined to a cluster with the SQL command "ALTER CLUSTER". The number of nodes must be 2^n-1 = {3,7,15,31,63,...}. This count is required to form a binary tree, in which a node can send requests to its 2 children, and each child to its own 2 children. For example, say we have a cluster of 7 nodes and the client sends a query to node 4. Node 4 will forward the query to its children, 3 and 5, and each child will forward it to its own children: 3 -> 1,2 and 5 -> 6,7. Everything happens in parallel. It is similar to map/reduce, except that node 4 does not need to send the request to all nodes, only to its 2 children. The advantage is that the reduce step can run in parallel as well.

For example, a client sends "select count(*) from mytable" to node 4. Node 4 starts counting its rows and forwards the SQL to nodes 3 and 5. Node 3 starts counting its rows and passes the query on to nodes 1 and 2. Nodes 1 and 2 count their rows but forward the SQL no further, since they have no children; the same happens under node 5. When the nodes finish counting, the aggregation phase starts: in parallel, nodes 3 and 5 receive the counts from their children and add them to their own local counts. So every node counted its local rows in parallel, and the aggregation of the results from nodes 1, 2, 6 and 7 happened on nodes 3 and 5 in parallel too. Finally, node 4 adds its own count to the aggregates received from nodes 3 and 5.
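The counting example above can be sketched as a toy simulation. This is a minimal model, not ScimoreDB code: it assumes node ids form a contiguous range and that each subtree root is the middle of its range (the real cluster may assign children differently), and it runs the "parallel" branches sequentially.

```python
# Toy model of the binary-tree fan-out for "select count(*) from mytable".
# Assumption: the subtree root is the middle node of its id range; real
# cluster numbering may differ, but the shape of the computation is the same.

def tree_count(rows_per_node, lo, hi):
    """Count rows held by nodes lo..hi, aggregating up a binary tree."""
    if lo > hi:
        return 0                                   # no nodes in this range
    mid = (lo + hi) // 2                           # this node counts locally...
    local = rows_per_node[mid]
    left = tree_count(rows_per_node, lo, mid - 1)  # ...while its two children
    right = tree_count(rows_per_node, mid + 1, hi) # do the same (in parallel
    return local + left + right                    # in the real cluster)

# 7 nodes (ids 1..7), each holding some rows of "mytable"
rows = {n: 100 * n for n in range(1, 8)}
total = tree_count(rows, 1, 7)   # the entry node is 4, the middle of 1..7
print(total)                     # -> 2800, same as sum(rows.values())
```

Each node touches only its local data plus two child results, which is why both the map and the reduce phases scale with the tree depth rather than the node count.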

ScimoreDB uses the DQL language to define how a distributed query is executed. SQL is always converted to DQL. Before a DQL plan is executed on the distributed DB, it is optimized for distributed execution: for example, selecting rows from a single node instead of all nodes, adding merge aggregators, stream splitters, etc. To see the DQL for a SQL statement, press the "show query plan" button in the query window of the manager.

So, how do you create a distributed DB? First, you need to install the DB servers. I have created 4 batch files (to install 3 servers, uninstall 3 servers, start 3, and stop 3). All DB servers will be managed on a single PC; however, they could run on 3 different machines, which is preferable. The batch files are located in the "c:\db" folder together with scimoredb.exe (install ScimoreDB version 3 on the machine and copy the exe from there).

Install.cmd: create data folders and install 3 databases on the local PC.
mkdir node1
mkdir node2
mkdir node3
scimoredb.exe -startup/instance=0 -startup/mode=1 -startup/console=1 -net/endpoint="localhost:999"    -db/cachepages=5000 -db/systables="c:\db\node1" -db/syslog="c:\db\node1"  -db/data="c:\db\node1"
scimoredb.exe -startup/instance=1 -startup/mode=1 -startup/console=1 -net/endpoint="localhost:1000"  -db/cachepages=5000 -db/systables="c:\db\node2" -db/syslog="c:\db\node2"  -db/data="c:\db\node2"
scimoredb.exe -startup/instance=2 -startup/mode=1 -startup/console=1 -net/endpoint="localhost:1001"  -db/cachepages=5000 -db/systables="c:\db\node3" -db/syslog="c:\db\node3"  -db/data="c:\db\node3" 


-startup/instance={0,1,2} - the instance id is the registry key that separates the startup parameters of each DB on the same PC: HKEY_LOCAL_MACHINE\Software\Scimore\Nodes[0] ... [2]. The instance id is also appended to the DB service name: {scimoredb-0, scimoredb-1, scimoredb-2}. Note that instance ids have nothing to do with the nodes in the cluster. If you install the databases on distinct PCs, set instance=0 for all of them.
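As a quick illustration of the mapping described above, a small helper can derive the per-instance registry key and service name from the instance id (the key path and naming scheme are taken from the text; verify them on your own machine):

```python
# Derive per-instance settings from -startup/instance=N.
# Assumption: key layout and service naming exactly as stated in the text.

def instance_paths(instance_id):
    reg_key = rf"HKEY_LOCAL_MACHINE\Software\Scimore\Nodes[{instance_id}]"
    service = f"scimoredb-{instance_id}"   # instance id appended to service name
    return reg_key, service

for i in range(3):
    print(instance_paths(i))
```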

-net/endpoint="localhost:999-1001" - for a single DB server, only the port matters: it is the port the database listens on. The "localhost" part has no meaning yet; the database listens on all IP addresses. However, when you join databases into a cluster, each database becomes a node and connects to the others using the net/endpoint host:port value. In my case all 3 run on the same machine, so "localhost" is fine. But if the nodes are on different machines, you need to specify a correct IP:port or host:port. Use "host:port" if the IP might change. Using "IP:port", you can specify a different IP for node-to-node communication than for client-to-node communication (e.g. InfiniBand over a socket network), making traffic between nodes faster than traffic from clients to nodes. If a node can no longer be reached at its endpoint, you need to update the [endpoint] value for this node on all nodes. This information is stored in the "system.sysinstances" table (nothing to do with -startup/instance). But if at least one node is inaccessible, you will not be able to connect and update the table with new values. To resolve this, kill/stop each node first, then start it from cmd with the following parameters:

scimoredb.exe -startup/instance=0 -startup/console=1 -net/mode=1 -net/endpoint="localhost:999" 

When the node starts, it will ignore the cluster setup because of the net/mode=1 parameter and run as a single server listening on port 999. Then connect with the manager and update the sysinstances table in the system catalog with the correct IP/host/port values: update system.sysinstances set [endpoint] = 'newIP:port' where instanceid=x. Here "instanceid" is not the instance id we used at install time; it is the node ID. Repeat this for each node. When done, restart the services and you should be able to connect again.

-db/cachepages=5000 - the number of data pages to cache. Each page is 8 KB; the more you add, the better. The value can be changed in the registry, followed by a service restart.
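To estimate how much memory a given -db/cachepages value uses, multiply by the page size. A small sketch (the 8 KB page size is the only figure carried over from the text):

```python
# Rough cache-memory estimate for -db/cachepages.
PAGE_SIZE = 8 * 1024          # bytes per data page, per the text above

def cache_bytes(pages):
    return pages * PAGE_SIZE

mb = cache_bytes(5000) / (1024 * 1024)
print(f"{mb:.0f} MB")         # 5000 pages -> about 39 MB of cache
```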

Uninstall.cmd: uninstall 3 databases from the local PC.
scimoredb.exe -startup/instance=0 -startup/mode=2 -startup/console=1
scimoredb.exe -startup/instance=1 -startup/mode=2 -startup/console=1
scimoredb.exe -startup/instance=2 -startup/mode=2 -startup/console=1 

-startup/mode=2 - tells the server to uninstall itself and delete the data files. startup/mode=20 will uninstall but leave the data files. Note: the systables will still contain the cluster info, so if you later install a new instance on top without overwriting the data, you won't be able to connect to the DB, because it will "think" it is in the cluster and try to reach the other nodes listed in the sysinstances table.

Start.cmd: start 3 databases on the local PC.

net start scimoredb-0
net start scimoredb-1
net start scimoredb-2 

Stop.cmd: stop 3 databases on the local PC.
net stop scimoredb-0
net stop scimoredb-1
net stop scimoredb-2 

Let's make a cluster now:

1. Stop the scimoredb-0 service and run uninstall.cmd. Check that no scimoredb.exe process is running; if one is, kill it and run the script again. (I had installed ScimoreDB with the MSI, which installed scimoredb-0 (instance 0), so I uninstall it first. I could instead have created new instances {1,2,3} and left 0 running.)

2. Run install.cmd. The script will create 3 databases, and each database will place its system tables under the node1/2/3 folder. When the install finishes, verify that each nodeX folder contains files and that the services scimoredb-{0,1,2} were created.

3. Run start.cmd. Check that 3 scimoredb.exe processes are running. Start the manager and verify the connection to each database.

4. With the manager, connect to localhost, port 999 (database 0 on my PC), and in the query window execute the SQL command: alter cluster ( add endpoint 'localhost:1000', add endpoint 'localhost:1001' ) NOTE: make sure the endpoint 'localhost:1000' matches the endpoint value used when the databases were installed (-net/endpoint="localhost:999...).

Restart the manager and connect to any database. Under the "Cluster" tree node in the manager you will see 3 servers, one for each database/node. Your 3 databases are now in a cluster, and you are ready to create databases and tables.

If you have trouble creating the cluster, or have ideas/suggestions, please don't hesitate to contact me: marius@scimore.com