This chapter contains a number of simple tutorials which demonstrate basic DataStore concepts.
A DataStore file can contain two basic types of data streams: table streams and file streams. File streams can hold either serialized Java objects or the contents of arbitrary files.
One key to understanding the possible uses of DataStore in applications is that all of these kinds of streams may be stored in the same DataStore file.
Each stream is identified by a case-sensitive name, referred to as storeName in the API, which can be up to 192 bytes long. The name is stored along with other information about the stream in the DataStore's internal directory. The forward slash ("/") is used as a directory separator in the name, to provide a hierarchical directory organization. This structure is used by the DataStore Explorer to display the contents of a DataStore in a tree.
This chapter covers DataStore fundamentals, including the internal directory and the use of file streams. For table streams, see "DataStore as an embedded database" and "Persisting data in a DataStore."
One of the DataStore's features is that it is a component you can program visually. But visual programming sometimes presents too many choices at once, making things seem more complicated than they really are. A set of simple exercises can better demonstrate the DataStore's basic essence.
The classic first exercise for a new language is how to display "Hello, World!" The spirit of that tradition will be carried on here. (You will be spared from performing the classic second exercise, a Fahrenheit to Celsius converter.)
First, create a new project for the dsbasic package, which will be used throughout this chapter.
Important: Add the DataStore 3.1 library to the project so that you can access the DataStore classes.
Add a new file to the project, Hello.java, and start with the following:
// Hello.java
package dsbasic;

import com.borland.datastore.*;

public class Hello {
  public static void main( String[] args ) {
    DataStore store = new DataStore();
    try {
      store.setFileName( "Basic.jds" );
      if ( !new java.io.File( store.getFileName() ).exists() ) {
        store.create();
      } else {
        store.open();
      }
      store.close();
    }
    catch ( com.borland.dx.dataset.DataSetException dse ) {
      dse.printStackTrace();
    }
  }
}
After declaring its package, this class imports all the classes in the com.borland.datastore package. That package contains most of the public DataStore classes. (The rest of the public DataStore classes are in the com.borland.datastore.jdbc package, which is needed only for JDBC access. It contains the JDBC driver class, and classes used to implement a DataStore JDBC server. These classes will be covered in "DataStore as an embedded database" and "Multi-user and remote access to DataStores.") DataStore can also be accessed by DataExpress components (packages under com.borland.dx), but those classes will be referenced explicitly so that you can see where each class comes from.
In the main method, a new DataStore object is created. This object represents a physical DataStore file, and contains properties and methods that represent its structure and configuration.
Next, the name "Basic.jds" is assigned to the DataStore object's fileName property. The name includes the default file extension ".jds", in lowercase. If the file name does not end with the default extension, the extension is appended to the file name when the property is set.
You cannot create the DataStore if a file with that name already exists. Because the fileName property may have been altered when it was set (by having the extension appended), it's safer to read the property back and check for the actual file name.
If the file does not exist, the create method will create it. If the method fails for any reason (for example, there's no room on the disk, or someone just created the file in the nanoseconds between this statement and the last) it will throw an exception. Otherwise, you will have an open connection to a new DataStore file.
DSX: See "Creating a new DataStore file". When creating the file, you can also specify options like block size and whether the DataStore will be transactional.
If the file does exist, then a connection is opened through the open method. The open method is actually a method of the DataStore class' superclass, DataStoreConnection, which, as its name implies, contains properties and methods for accessing the contents of a DataStore. (The fileName property is also a property of DataStoreConnection, which means that you can, and often do, access a DataStore without a DataStore object, as you will see shortly.) Because DataStore is a subclass of DataStoreConnection, it has its own "built-in" connection, which is suitable for simple applications like this. (Note that DataStore can create a new DataStore file, but DataStoreConnection cannot.)
But the excitement is short-lived. Immediately after opening a connection to the DataStore file (creating the file in the process if necessary), that connection is closed with the close method, which is also inherited from DataStoreConnection. Because there was only that one built-in connection, and all connections to the DataStore are now closed, the DataStore file itself shuts down.
It is vital that you close any connections that you open before you exit your application (or call the DataStore.shutdown method, which closes all connections). Opening a connection starts a background thread that keeps running until the connection is closed; if you do not close your connections, your application will hang on exit.
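One way to guarantee this, and the idiom used by the later examples in this chapter, is to close the connection in a finally block. This is a minimal sketch, reusing only calls shown in this chapter:

DataStoreConnection store = new DataStoreConnection();
try {
  store.setFileName( "Basic.jds" );
  store.open();
  // ... work with the DataStore ...
}
catch ( com.borland.dx.dataset.DataSetException dse ) {
  dse.printStackTrace();
}
finally {
  try {
    // close always runs, so no connection is left open on exit
    store.close();
  }
  catch ( com.borland.dx.dataset.DataSetException dse ) {
    dse.printStackTrace();
  }
}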
Most of the methods in the DataStore classes may throw a DataSetException, or more specifically one of its subclasses, DataStoreException. Most of these exceptions are of the fatal "should never happen" or "don't do that" variety. For example, you can't set the fileName property if the connection is already open. You can't create the DataStore file if one already exists. You can't open a connection if the named file isn't really a DataStore file. You might also get an IO exception when data is written as a connection is closed.
Consequently, almost all DataStore code is inside a try block. In this case, if an exception is thrown, a stack trace is printed.
If you run the application now, it won't do much; it just creates the file Basic.jds. If you then run it a second time, it will do even less; it just opens and closes a connection. Before proceeding, you should delete the file.
There is no special function for deleting a DataStore file. You can use the java.io.File.delete method or any other means. For example, if you always want to create a new DataStore file, you could use something like this code fragment:
// store is DataStore with fileName property set
java.io.File storeFile = new java.io.File( store.getFileName() );
if ( storeFile.exists() ) {
  storeFile.delete();
}
store.create();
If the DataStore file is transactional, it is accompanied by transaction log files, which must also be deleted. For more information on transaction log files, see "Transaction log files".
DSX: See "Deleting the DataStore file". The DataStore Explorer will automatically delete any associated transaction log files.
Add the highlighted statements to the if block in the main method:
if ( !new java.io.File( store.getFileName() ).exists() ) {
  store.create();
  try {
    store.writeObject( "hello",
                       "Hello, DataStore! It's " + new java.util.Date() );
  }
  catch ( java.io.IOException ioe ) {
    ioe.printStackTrace();
  }
} else {
The writeObject method attempts to store a Java object as a file stream in the DataStore using Java serialization. (Note that you can also store objects in a table.) The object to be stored must implement the java.io.Serializable interface. A java.io.IOException (more specifically, a java.io.NotSerializableException) will be thrown if it doesn't. Another reason for the exception would be if the write failed (for example, you ran out of disk space).
The first parameter specifies the storeName, the name that identifies the object in the DataStore. The name is case-sensitive. The second parameter is the object to store. In this case, it is a string with a greeting and the current date and time. The java.lang.String class implements java.io.Serializable, so the string can be stored with writeObject.
Add the highlighted statements to the else block in the main method:
} else {
  store.open();
  try {
    String s = (String) store.readObject( "hello" );
    System.out.println( s );
  }
  catch ( com.borland.dx.dataset.DataSetException dse ) {
    dse.printStackTrace();
  }
  catch ( java.lang.ClassNotFoundException cnfe ) {
    cnfe.printStackTrace();
  }
  catch ( java.io.IOException ioe ) {
    ioe.printStackTrace();
  }
}
The readObject method attempts to retrieve the named object from the DataStore. Like writeObject, it may throw an IOException for mundane reasons like disk failure. It also cannot reconstitute the stored object without the object's class. If that class is not in the classpath, a java.lang.ClassNotFoundException is thrown.
If the named object cannot be found, a DataStoreException with the error code STORE_NOT_FOUND is thrown. It's important to catch that exception (a subclass of DataSetException) here, even though there's another catch at the bottom of the method, because jumping there would bypass the call to close the DataStore connection. (The code is structured in this somewhat awkward way for pedagogical reasons.)
Because readObject is defined to return a java.lang.Object, you almost always cast the return value to the expected data type. (If the object is not actually of that expected type, you will get a java.lang.ClassCastException.) Here, it is more of a formality, because the System.out.println method can take a generic Object reference.
You can now run Hello.java. The first time it runs, it will create the DataStore file and store the greeting string. When you run it again (and again...) the greeting, with the date and time it was created, will be displayed in the console.
For the simple persistent storage of objects, the DataStore has a number of advantages over using the JDK classes in the java.io package:
Of course, an internal directory system would be practically useless without a way to get the contents of the directory.
The DataStoreConnection.openDirectory method returns the contents of the DataStore in an appropriately searchable structure. More on that in a moment. First, add the following program, AddObjects.java, to the project and run it to add a few more objects to the DataStore:
// AddObjects.java
package dsbasic;

import com.borland.datastore.*;

public class AddObjects {
  public static void main( String[] args ) {
    DataStoreConnection store = new DataStoreConnection();
    int[] intArray = { 5, 7, 9 };
    java.util.Date date = new java.util.Date();
    java.util.Properties properties = new java.util.Properties();
    properties.setProperty( "a property", "a value" );
    try {
      store.setFileName( "Basic.jds" );
      store.open();
      store.writeObject( "add/create-time", date );
      store.writeObject( "add/values", properties );
      store.writeObject( "add/array of ints", intArray );
    }
    catch ( com.borland.dx.dataset.DataSetException dse ) {
      dse.printStackTrace();
    }
    catch ( java.io.IOException ioe ) {
      ioe.printStackTrace();
    }
    finally {
      try {
        store.close();
      }
      catch ( com.borland.dx.dataset.DataSetException dse ) {
        dse.printStackTrace();
      }
    }
  }
}
The program does things slightly differently from Hello.java. First, it uses a DataStoreConnection object instead of a DataStore to access the DataStore file, but it's used in the same way: you set the fileName property, open the connection, use the writeObject method to store objects, and close the connection.
The location of the close method call is another difference. Because you always want to call close, no matter what happens in the main body of the method, it's placed after the catch blocks, inside a finally block. This way, the connection will always be closed, even if there is an unhandled error. The close method is safe to call even if the connection never opened; in that case, it does nothing.
This time, three objects are written to the DataStore: a Date object (as opposed to a Date converted into a string, as in Hello.java), a Properties object (a kind of hashtable), and an array of integers. They are named so that they will be in a directory named "add"; the forward slash (also called a solidus or virgule: "/") is the directory separator character. One of the names contains spaces, which is perfectly valid.
Add another file to the project, Dir.java:
// Dir.java
package dsbasic;

import com.borland.datastore.*;

public class Dir {
  public static void print( String storeFileName ) {
    DataStoreConnection store = new DataStoreConnection();
    com.borland.dx.dataset.StorageDataSet storeDir;
    try {
      store.setFileName( storeFileName );
      store.open();
      storeDir = store.openDirectory();
      while ( storeDir.inBounds() ) {
        System.out.println( storeDir.getString( DataStore.DIR_STORE_NAME ) );
        storeDir.next();
      }
      store.closeDirectory();
    }
    catch ( com.borland.dx.dataset.DataSetException dse ) {
      dse.printStackTrace();
    }
    finally {
      try {
        store.close();
      }
      catch ( com.borland.dx.dataset.DataSetException dse ) {
        dse.printStackTrace();
      }
    }
  }

  public static void main( String[] args ) {
    if ( args.length > 0 ) {
      print( args[0] );
    }
  }
}
This class needs a command-line argument, the name of a DataStore file, which is passed to its print method. The print method accesses that DataStore, using code similar to what you've seen before.
What's significant here is that in addition to defining a DataStoreConnection to access the DataStore, a StorageDataSet is declared. After opening a connection to the DataStore, the openDirectory method of the DataStoreConnection is called to get the contents of the DataStore's directory. The directory of a DataStore is represented by a table.
DSX: See "Viewing DataStore file information".
The DataStore directory table has nine columns--nine pieces of information about each stream in the DataStore, as shown in this table:
The columns may be referenced by name or number. There are constants defined as DataStore class variables for each of the column names. These constants are the preferred way of referencing a column; they provide compile-time checking to ensure that you are referencing a valid column. There are also constants with names that end with _STATE for the different values for the State column, and constants for the different values and bit masks for the Type column with names that end with _STREAM.
Times in the DataStore directory are UTC (a compromise between the French [TUC] and English [CUT] acronyms for Coordinated Universal Time), suitable for creating dates with java.util.Date(long).
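As an illustration only, here is a sketch of a directory loop that uses these constants. The DIR_STATE, DIR_STORE_NAME, and ACTIVE_STATE constants all appear in code later in this chapter; DIR_MODIFY_TIME and the use of getLong for the time column are assumptions, so substitute the constant for whichever column you actually need:

while ( storeDir.inBounds() ) {
  // Only report active streams, skipping deleted ones
  if ( storeDir.getShort( DataStore.DIR_STATE ) == DataStore.ACTIVE_STATE ) {
    // Directory times are UTC milliseconds, so they can be passed
    // directly to java.util.Date(long). DIR_MODIFY_TIME is assumed here.
    java.util.Date modified =
      new java.util.Date( storeDir.getLong( DataStore.DIR_MODIFY_TIME ) );
    System.out.println( storeDir.getString( DataStore.DIR_STORE_NAME )
                        + "  " + modified );
  }
  storeDir.next();
}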
As with many file systems, when you delete something in a DataStore, the space it occupied is marked as available, but the contents and the directory entry that points to it are not wiped clean. This leaves the possibility of undeleting something. For more details, see "Deleting and undeleting streams".
The Type column indicates whether a stream is a file or table stream, but there are also many internal table stream subtypes (for things like indexes and aggregates). These internal streams are marked with the HIDDEN_STREAM bit to indicate that they should not be displayed. Of course, when you're reading the directory, you get to decide.
These internal streams have the same StoreName as the table stream with which they are associated. This means that the StoreName alone does not always uniquely identify each stream when interacting with the DataStore at a low level. Some of the internal stream types often have multiple instances. Therefore, the ID for each stream is required to guarantee uniqueness at a low level. But the StoreName is unique enough for the storeName parameter used at the API level. For example, when you delete a table stream, all the streams with that StoreName are deleted.
The directory table is sorted by the first five columns. Due to the values stored in the State column, this means that all active streams are listed first, in alphabetical order by name; they are followed by all deleted streams, ordered by their delete time, oldest to most recent. (You cannot use a DataSetView to apply a different sort order.)
You manipulate the DataStore directory table as you would any table with the DataExpress API. Use the next and inBounds methods to navigate through each entry in the directory, and the appropriate get... method to read the desired information for each stream.
You may not write to the DataStore directory; it is read-only.
To run Dir.java, set the runtime parameters in the Project Properties dialog box to the DataStore file to check; in this case, Basic.jds. When it runs, a loop goes through the directory, listing the name of every stream, something like:
add/array of ints
add/create-time
add/values
hello
You can include a lot more information in the directory listing. The most difficult part is making the formatting decisions for the various bits of information available in all the columns of the DataStore directory. As a simple example, to display whether the stream is a table or file stream, add the highlighted statements to the beginning of the loop:
while ( storeDir.inBounds() ) {
  short dirVal = storeDir.getShort( DataStore.DIR_TYPE );
  if ( (dirVal & DataStore.TABLE_STREAM) != 0 ) {
    System.out.print( "T" );
  } else if ( (dirVal & DataStore.FILE_STREAM) != 0 ) {
    System.out.print( "F" );
  } else {
    System.out.print( "?" );
  }
  System.out.print( " " );
  System.out.println( storeDir.getString( DataStore.DIR_STORE_NAME ) );
  storeDir.next();
}
That addition would change the output to:
F add/array of ints
F add/create-time
F add/values
F hello
indicating that all the serialized objects are indeed file streams.
When you're not using the DataStore directory, you should close it by calling the DataStoreConnection.closeDirectory method. Most DataStore operations modify the directory in some way; if the directory is open, it must be notified of each change, which slows down your application.
If you try to access the directory StorageDataSet when the directory is closed, you will get a DataSetException with the error code DATASET_NOT_OPEN.
Although you could search the DataStore directory manually, the DataStoreConnection provides two methods for checking if a stream exists, without having to open the directory. The tableExists method checks for table streams, and the fileExists method checks for file streams. Both methods take a storeName parameter, and ignore streams that are deleted. They return true if there is an active stream of the corresponding type with that name in the DataStore, or false otherwise. Remember that stream names are case-sensitive, and you cannot have a table stream and a file stream with the same name.
For example, if you ran the following code fragment against Basic.jds as it is at this point in the tutorial:
store.tableExists( "hello" )
it would return false, because although there is a stream named "hello", it's a file stream, not a table stream. You would get the same result from:
store.fileExists( "Hello" )
this time because the name does not match case. When the name and type match:
store.fileExists( "hello" )
the result is true.
In addition to serializing discrete objects as file streams, you can store and retrieve data streams in a DataStore through a com.borland.datastore.FileStream object. Although FileStream is a subclass of java.io.InputStream, it has a method for writing to the stream as well, so the same object can be used for both read and write access. It also provides random access via a seek method. Being a subclass of InputStream makes it easy to use streams stored in the DataStore in generic situations that expect an input stream; you will probably read a stream more often than you write one.
DSX: See "Importing files".
Suppose you have an application that uses boilerplate documents that are modified for individual customers. There is a field in the customer table that contains their personalized copy, but you need to store the original somewhere as well, to make fresh copies for new customers. You could store the original as a file stream in the DataStore. The following utility program, ImportFile.java, will do this for you; add it to the project.
// ImportFile.java
package dsbasic;

import com.borland.datastore.*;

public class ImportFile {
  private static final String DATA = "/data";
  private static final String LAST_MOD = "/modified";

  public static void read( String storeFileName, String fileToImport ) {
    read( storeFileName, fileToImport, fileToImport );
  }

  public static void read( String storeFileName, String fileToImport,
                           String streamName ) {
    DataStoreConnection store = new DataStoreConnection();
    try {
      store.setFileName( storeFileName );
      store.open();
      FileStream fs = store.createFileStream( streamName + DATA );
      byte[] buffer = new byte[ 4 * store.getDataStore().getBlockSize() * 1024 ];
      java.io.File file = new java.io.File( fileToImport );
      java.io.FileInputStream fis = new java.io.FileInputStream( file );
      int bytesRead;
      while ( (bytesRead = fis.read( buffer )) != -1 ) {
        fs.write( buffer, 0, bytesRead );
      }
      fs.close();
      fis.close();
      store.writeObject( streamName + LAST_MOD,
                         new Long( file.lastModified() ) );
    }
    catch ( com.borland.dx.dataset.DataSetException dse ) {
      dse.printStackTrace();
    }
    catch ( java.io.FileNotFoundException fnfe ) {
      fnfe.printStackTrace();
    }
    catch ( java.io.IOException ioe ) {
      ioe.printStackTrace();
    }
    finally {
      try {
        store.close();
      }
      catch ( com.borland.dx.dataset.DataSetException dse ) {
        dse.printStackTrace();
      }
    }
  }

  public static void main( String[] args ) {
    if ( args.length == 2 ) {
      read( args[0], args[1] );
    } else if ( args.length >= 3 ) {
      read( args[0], args[1], args[2] );
    }
  }
}
The program takes the name of a DataStore file, the name of the file to import, and an optional stream name as parameters. If no stream name is specified, the file name is used. The main method calls the appropriate form of the read method; the two-argument read method calls the three-argument read method.
When importing the file, the date it was last modified is recorded with it. The "/data" suffix is appended to the stream name for the stream that holds the file's contents, while the "/modified" suffix is appended to the stream name for the last-modified date. These suffixes are defined as class variables.
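For example, importing a hypothetical file named Readme.txt with no explicit stream name creates two streams: Readme.txt/data, which holds the file's contents, and Readme.txt/modified, which holds its last-modified date.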
The read method then begins with some now-familiar preliminaries: it opens a connection to the DataStore file with a DataStoreConnection object.
As with most file stream APIs, there are separate methods for creating new file streams and accessing existing file streams. The method to create a new file stream is createFileStream, and its only parameter is the storeName of the stream to create.
If there is already a file stream with that name, even if it's actually a serialized object, it will be lost without warning; you may want to check if such a file stream exists with the fileExists method first (ImportFile.java does not). If there is a table stream with that name, createFileStream will throw a DataStoreException with the error code DATASET_EXISTS, because you can't have a table stream and a file stream with the same name.
When createFileStream is successful, it returns a FileStream object that represents the new, empty file stream.
A simple copy operation like this uses a loop to read and write the file in chunks; the question is how big those chunks should be. Making them too small is an obvious problem, but making them very large may cause performance problems as well. As a conservative start, you can make the chunk a small multiple of the DataStore's block size.
The DataStore's block size is stored in the DataStore object's blockSize property. Whenever you use a DataStoreConnection to access a DataStore, it automatically creates an instance of DataStore. Other DataStoreConnection objects in the same process that connect to the same DataStore share that DataStore object. (Access to a DataStore file is exclusive to a single process; multi-user access is provided through a single server process.) The DataStoreConnection has a read-only property named dataStore that contains a reference to the connected DataStore object.
The FileStream object writes an array of bytes. The array is declared in this statement:
byte[] buffer = new byte[ 4 * store.getDataStore().getBlockSize() * 1024 ];
The getDataStore method gets the reference to the DataStore object, and from that the getBlockSize method gets the blockSize property. This property is in kilobytes, so it is multiplied by 1024, and the resulting block size is multiplied by four, the arbitrarily-chosen number of blocks to read in each chunk.
The FileStream object's write method takes an array of bytes, just like a java.io.OutputStream, although the only form of the method is the one that also specifies the starting offset and length.
The java.io.FileInputStream object reads from a file into an array of bytes. Its read method returns the number of bytes read, or -1 if the end of the file has been reached. In the loop, the number of bytes read is checked for the end-of-file value. If it's not the end of the file, that many bytes are written to the FileStream, starting with the first byte in the array. For every iteration of the loop except the last, the entire array will be filled by the read and written into the FileStream; the last iteration will most likely not fill the entire array.
Once you're done with a file stream, you should close it. The FileStream object uses the close method (as does the FileInputStream).
After closing the file stream, the last-modified date is written using a java.lang.Long object to encapsulate the primitive long value. (You cannot save primitives with serialization.)
To test ImportFile.java, you could import some source code files into Basic.jds.
Use the openFileStream method to open an existing file stream by name. Like createFileStream, it returns a FileStream object at the beginning of the stream. You can then go to any position in the stream with the seek method, write to the stream, and read from it with the read method. FileStream also supports InputStream marking with the mark and reset methods.
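As a brief sketch of marking (inside the usual try/catch blocks, and with a hypothetical stream name), you can mark a position, read ahead, and then return to the mark:

FileStream fs = store.openFileStream( "some/stream" );  // hypothetical name
fs.mark( 64 );             // remember this position; 64 is the read-ahead limit
int firstByte = fs.read();
fs.reset();                // return to the marked position
// fs.read() here returns the same byte as firstByte
fs.close();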
The following program, PrintFile.java, demonstrates opening, seeking, and reading. Add it to the project:
// PrintFile.java
package dsbasic;

import com.borland.datastore.*;

public class PrintFile {
  private static final String DATA = "/data";
  private static final String LAST_MOD = "/modified";

  public static void printBackwards( String storeFileName, String streamName ) {
    DataStoreConnection store = new DataStoreConnection();
    try {
      store.setFileName( storeFileName );
      store.open();
      FileStream fs = store.openFileStream( streamName + DATA );
      int streamPos = fs.available();
      while ( --streamPos >= 0 ) {
        fs.seek( streamPos );
        System.out.print( (char) fs.read() );
      }
      fs.close();
      System.out.println( "Last modified: " + new java.util.Date(
        ((Long) store.readObject( streamName + LAST_MOD )).longValue() ) );
    }
    catch ( com.borland.dx.dataset.DataSetException dse ) {
      dse.printStackTrace();
    }
    catch ( java.io.IOException ioe ) {
      ioe.printStackTrace();
    }
    catch ( java.lang.ClassNotFoundException cnfe ) {
      cnfe.printStackTrace();
    }
    finally {
      try {
        store.close();
      }
      catch ( com.borland.dx.dataset.DataSetException dse ) {
        dse.printStackTrace();
      }
    }
  }

  public static void main( String[] args ) {
    if ( args.length == 2 ) {
      printBackwards( args[0], args[1] );
    }
  }
}
To demonstrate random access with the seek method (and to make things slightly more interesting), this program prints a file stream backwards. The length of the file stream is determined by calling the FileStream's available method and used as a file pointer. When reading from the file, the file pointer is moved forward, so the position must be decremented and set for each byte read in the loop. There are two forms of the read method: one that reads into a byte array (the same form of the method used by the FileInputStream in ImportFile.java), and one that returns a single byte. The single-byte form is used here; each byte is cast into a character to be printed.
The DataStoreConnection class' copyStreams method makes a new copy of one or more streams in the same DataStore, or copies the streams to a different DataStore. If an error is encountered in an original stream, an attempt will be made to correct that error in the copy. copyStreams is also the way to upgrade an older DataStore file into the current format.
The copyStreams method takes six parameters, as listed in the following table:
Each of the options reverses the default behavior of copyStreams, which is to:
If copyStreams stops because either of the last two conditions occurs, it throws a DataSetException. Status messages for each stream that is copied are written to the designated PrintStream.
DSX: The DataStore Explorer provides a UI for copying streams to a new DataStore file with these parameters. See "Copying DataStore streams".
As mentioned earlier, forward slashes in stream names are used to simulate a hierarchical directory structure--the key word being simulate. copyStreams is oblivious to a directory structure. It simply treats names as strings; you must use the forward slash when necessary to impose structure.
The first two parameters, sourcePrefix and sourcePattern, determine which streams get copied. sourcePrefix is used in combination with the destPrefix parameter to rename a stream when it is copied; that is, to change the prefix (the beginning) of the storeName of the resulting copy of the stream.
If you specify a sourcePrefix, the stream name must start with that string. It's usually used to specify the name of a directory, ending with a forward slash. The destPrefix is then set to a different directory name, also ending with a forward slash. The sourcePrefix will be stripped from the name, and the destPrefix will be prepended to the name of the copy. For example, suppose you have the stream named "add/create-time", and you want to create a copy named "tested/create-time", in effect making a copy in a different directory. You would set sourcePrefix to "add/", and destPrefix to "tested/".
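A minimal sketch of that copy follows, assuming store is an open DataStoreConnection to Basic.jds, that the same connection can be passed as the destination when copying within one DataStore, and that passing 0 for the options parameter selects the default behavior:

// Copy every stream under "add/" to "tested/" in the same DataStore,
// stripping the "add/" prefix and prepending "tested/" to each copy's name.
store.copyStreams( "add/",      // sourcePrefix
                   "*",         // sourcePattern: everything under add/
                   store,       // destination: the same DataStore
                   "tested/",   // destPrefix
                   0,           // options: none (assumed default)
                   System.out );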
Although the prefix parameters are usually used for directories, you can rename streams in other ways. For example, you can rename "hello" to "jello" by specifying "h" and "j" for the sourcePrefix and destPrefix respectively; or "three/levels/deep" to "not-a-peep" by specifying "three/levels/d" and "not-a-p", in effect moving a stream up to the root directory of the DataStore. You can also do the reverse, making the destPrefix longer (with more directory levels) than the sourcePrefix. For example, by leaving the sourcePrefix blank but specifying a destPrefix that ends with a forward slash, all the streams from the original DataStore file will be placed under a directory in the destination DataStore.
If you're not renaming the copy of the stream, there's no reason to use either prefix parameter, so you should set both of them to an empty string or null. Note that if you're making a copy of a stream in the same DataStore file, you must rename the copy.
The sourcePattern parameter is matched against everything after the sourcePrefix, using the standard wildcard characters "*" (for zero or more characters) and "?" (for a single character). If the sourcePrefix is empty, that means that the pattern is matched against the entire string. If you want to copy all the streams in a directory, you can put the directory name in the sourcePattern, followed by a forward slash, and leave the sourcePrefix empty. For example, if you want to copy everything in the "add" directory, that translates to copying everything that starts with "add/", so the sourcePattern would be "add/*". That would include everything in subdirectories, because the sourcePattern matches the entire rest of the string. (There is no direct way to prevent the copying of streams in subdirectories.)
The sourcePattern is matched against names of active streams only; copyStreams does not copy deleted streams.
You may use the following program, Dup.java, to make a backup copy of a DataStore file or upgrade an older file:
// Dup.java
package dsbasic;

import com.borland.datastore.*;

public class Dup {
  public static void copy( String sourceFile, String destFile ) {
    DataStoreConnection store1 = new DataStoreConnection();
    DataStore store2 = new DataStore();
    try {
      store1.setFileName( sourceFile );
      store2.setFileName( destFile );
      if ( !new java.io.File( store2.getFileName() ).exists() ) {
        store2.create();
      } else {
        store2.open();
      }
      store1.open();
      store1.copyStreams( "",     // From root directory
                          "*",    // Every stream
                          store2,
                          "",     // To root directory
                          DataStore.COPY_IGNORE_ERRORS,
                          System.out );
    }
    catch ( com.borland.dx.dataset.DataSetException dse ) {
      dse.printStackTrace();
    }
    finally {
      try {
        store1.close();
        store2.close();
      }
      catch ( com.borland.dx.dataset.DataSetException dse ) {
        dse.printStackTrace();
      }
    }
  }

  public static void main( String[] args ) {
    if ( args.length == 2 ) {
      copy( args[0], args[1] );
    }
  }
}
This program copies the contents of one store into another. A DataStoreConnection object is used to open the source DataStore. A DataStore object is used for the destination, so that the DataStore file can be created if necessary.
For the copyStreams method, the sourcePrefix and destPrefix are empty strings, and the sourcePattern is just "*", which copies everything, without renaming. Unrecoverable errors will be ignored, and status messages are displayed in the console.
With this program, you can combine the contents of more than one DataStore file into a single file, as long as the stream names are different (COPY_OVERWRITE is not specified as an option).
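For example (the file names here are only placeholders), running java dsbasic.Dup First.jds Combined.jds and then java dsbasic.Dup Second.jds Combined.jds would merge the streams of both source files into Combined.jds, provided no stream names collide.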
Deleting streams is easy and certain; undeleting them is not guaranteed to work and requires a bit more effort. Streams are deleted by name. Understanding what happens when you delete or try to undelete a file stream, whether it's an arbitrary file or serialized object, is simpler because there's only one stream with that name. Table streams often have additional internal support streams with the same name, as explained under "Stream details"; they're a little more complicated.
The DataStoreConnection.deleteStream method takes the name of the stream to delete. For a file stream, the individual stream is deleted; for a table stream, the main stream and all its support streams are deleted.
Deleting a stream does not actually overwrite or clear the stream contents. As in most file systems, the space used by the stream is marked as available, and the directory entry that points to that space is marked as deleted. The time the stream was deleted is recorded. Over time, new stream contents may overwrite the space that was formerly occupied by the deleted stream, making the content of the deleted stream unrecoverable.
DSX: See "Deleting streams".
Blocks in the DataStore file formerly occupied by deleted streams are reclaimed according to the following rules:
Because table streams have multiple streams with the same name, the stream name alone is not sufficient for attempting to undelete a stream. You must use a row from the DataStore directory. It contains enough information to uniquely identify a particular stream.
The DataStoreConnection.undeleteStream method takes such a row as a parameter. You can pass the directory dataset itself; the current row in the directory dataset will be used as the row to undelete.
Note that you can create a new stream with the name of a deleted stream. You cannot undelete that stream while its name is being used by an active stream.
DSX: See "Undeleting streams".
The following program, DeleteTest.java, demonstrates both deletion and undeletion in a pedagogical (and therefore somewhat atypical) way:
// DeleteTest.java
package dsbasic;

import com.borland.datastore.*;

public class DeleteTest {
  public static void main( String[] args ) {
    DataStoreConnection store = new DataStoreConnection();
    com.borland.dx.dataset.StorageDataSet storeDir;
    com.borland.dx.dataset.DataRow locateRow, dirEntry;
    String storeFileName = "Basic.jds";
    String fileToDelete = "add/create-time";
    try {
      store.setFileName( storeFileName );
      store.open();
      storeDir = store.openDirectory();
      locateRow = new com.borland.dx.dataset.DataRow( storeDir,
        new String[] { DataStore.DIR_STATE, DataStore.DIR_STORE_NAME } );
      locateRow.setShort( DataStore.DIR_STATE, DataStore.ACTIVE_STATE );
      locateRow.setString( DataStore.DIR_STORE_NAME, fileToDelete );
      if ( storeDir.locate( locateRow, com.borland.dx.dataset.Locate.FIRST ) ) {
        System.out.println( "Deleting " + fileToDelete );
        dirEntry = new com.borland.dx.dataset.DataRow( storeDir );
        storeDir.copyTo( dirEntry );
        store.closeDirectory();
        System.out.println( "Before delete, fileExists: "
                            + store.fileExists( fileToDelete ) );
        store.deleteStream( fileToDelete );
        System.out.println( "After delete, fileExists: "
                            + store.fileExists( fileToDelete ) );
        store.undeleteStream( dirEntry );
        System.out.println( "After undelete, fileExists: "
                            + store.fileExists( fileToDelete ) );
      } else {
        System.out.println( fileToDelete + " not found or already deleted" );
        store.closeDirectory();
      }
    }
    catch ( com.borland.dx.dataset.DataSetException dse ) {
      dse.printStackTrace();
    }
    finally {
      try {
        store.close();
      }
      catch ( com.borland.dx.dataset.DataSetException dse ) {
        dse.printStackTrace();
      }
    }
  }
}
In this program, the name of the DataStore file and the stream to be deleted are hard-coded. The stream is "add/create-time", which was added to Basic.jds by the demonstration program AddObjects.java. A file stream was chosen primarily so that the fileExists method can be used to check whether the deletion and undeletion worked.
The program begins by opening a connection to the DataStore and opening its directory. Next, it locates the directory entry for the stream that is about to be deleted.
Note: In normal usage, you would probably locate the directory entry for the stream after it has been deleted, and use the directory dataset to undelete the stream; it's done differently here to demonstrate individual directory rows, to be explained shortly.
To locate the row, a new com.borland.dx.dataset.DataRow is instantiated from the directory dataset, specifying the two columns that will be used in the search: the State and StoreName. The program then attempts to locate the directory entry for the specified stream, which must be active. Finding the row not only positions the directory at the desired entry, but it also indicates that the stream exists and is active, so that the program can proceed to the next step.
When you pass a directory dataset to a method like undeleteStream, the current row is used. But because of the way the DataStore directory is sorted (as explained in "Directory sort order") when a stream is deleted, its directory entry will probably "fly away" to its new position at the bottom of the directory as the most recently deleted stream; the current row will then be referencing something else (probably the next stream alphabetically). To undelete the same stream, you could either attempt to relocate the directory entry for the now-deleted stream, or you can copy the directory data for the stream into a separate directory row before you delete.
Using an individual directory row has a few advantages. Unlike the live DataStore directory dataset, an individual row is a static copy. It's smaller, and after making the copy, you can close the directory dataset to make operations faster. (For this simple demonstration, the overhead for creating the individual row probably outweighs any performance benefit.) You can make static copies of as many directory entries as you want, and manage them any way you want.
To create the individual directory row, another DataRow is instantiated from the directory dataset (so that it has the same structure), and the copyTo method copies the data from the current row. And just to prove that it really works, the DataStore directory is closed.
The file stream is then deleted by name, using the plain name string defined at the beginning of the method. (You could use the name from the directory entry, which should be the same, but that's a little too convoluted.) Finally, the stream is undeleted, using the individual directory entry.
The only way to shrink a DataStore file--removing unused blocks and directory entries for deleted streams--is to copy the streams to a new DataStore file using copyStreams. Only active streams are copied, resulting in a packed version of the file.
DSX: See "Packing the DataStore file".