TClientDataSet, InternalCalc, AutoIncrement, and Key Violations

Embarcadero’s C++ Builder/Delphi/RAD Studio product includes TClientDataSet, which gives the programmer an in-memory data table.  It is part of a larger framework for data transfer and storage.  For example, you can easily persist the dataset in a binary or XML format and save it in a file or dataset field.  You can also load it with data from a persistent database, allow the user to edit the data, and then post the changes back to the database.  It keeps track not only of the current state of the data, but also of which records have been added, deleted, or changed, and, within those records, what the field values were, and it can use that information to automatically create the SQL that will update the database.  When Borland introduced it, a developer license cost $5,000, but the licensing was slowly liberalized as other components (e.g. ADO Recordsets and then ADO.NET) developed the same capability.
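
For example, persisting and reloading the data is just a couple of calls (a minimal sketch, assuming a populated TClientDataSet* named cds):

cds->SaveToFile("data.xml", dfXML);  // or dfBinary for the binary format
cds->LoadFromFile("data.xml");       // reload it later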

However, it has some strange behavior and interactions with other components that can trip you up in strange ways.  How it handles AutoInc fields is one of those.

TClientDataSet (CDS) AutoInc fields can operate in one of two ways, depending (generally) on whether you have used a TDataSetProvider to load data into the CDS.  If you have not, then when a new record is posted to the CDS, a new value, starting at 1, will be put into the AutoInc field.  This happens even if AutoGenerateValue is arNone, and regardless of the AutoInc field’s ReadOnly flag.  Even if you put your own value into the AutoInc field, your value will be overwritten with the new, auto-incremented value.  If there is a way to stop this behavior, other than loading data from a TDataSetProvider (cds->Data = prov->Data), I haven’t found it.
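
A minimal sketch of that behavior (the cds variable and “ID” field name are hypothetical, and the AutoInc field’s ReadOnly flag is assumed to be false):

cds->CreateDataSet();                     // no TDataSetProvider involved
cds->Append();
cds->FieldByName("ID")->AsInteger = 500;  // our own value...
cds->Post();                              // ...is overwritten: ID becomes 1
cds->Append();
cds->Post();                              // ID becomes 2, and so on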

However, when you go to ApplyUpdates(), the generated SQL will not have those AutoInc values, which is good, because the persisted database will assign its own AutoInc values.

So, what happens if you first load data from the database, using the TDataSetProvider?  Well, something completely different, which is good, because the AutoInc fields will already have values from the persisted database.  Now, if you create a new record in the CDS with Append(), the AutoInc field will be Null.  That record can be posted to the CDS.  However, when you then Append() a second record and attempt to Post() it, your Post() will fail with a “Key Violation” exception, because otherwise there would have been two records with the same AutoInc value in the CDS (i.e., Null).
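
Sketched with the same hypothetical names:

cds->Data = prov->Data;  // load from the TDataSetProvider
cds->Append();
cds->Post();             // OK: the new record's ID is Null
cds->Append();
cds->Post();             // "Key Violation": a second record with a Null ID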

The workaround for this problem (other than using GUIDs rather than AutoInc fields for your primary key, which might be a great choice if it is an option) is to assign a unique value to the AutoInc field in your AfterInsert handler for the CDS.  Something like:

static int AutoIncValue = -1;
DataSet->FieldByName("ID")->AsInteger = AutoIncValue--;

will work (don’t forget to turn off the default ‘ReadOnly’ flag for the AutoInc field in the CDS), and this will generate a series of negative numbers for the AutoInc field in the CDS.  (No need to call Edit(), since the DataSet will already be in State dsInsert when the AfterInsert handler is called.)  That way, if the AutoInc field is not generating its own values (again, generally after you have loaded some records using the TDataSetProvider from the persistent database), AutoInc will get a series of progressively negative values which will not clash with any of your records loaded from the database.  Just be aware that, if you have NOT loaded any records from your persistent data store, including the case where you loaded a data packet that did not happen to have any records, your AutoInc values will be ignored EVEN IN THE CDS once you call Post() to post the record to the CDS.  Thus, your first record, to which you assigned an AutoInc of -1 in AfterInsert, will become 1 after the Post() call.  (Of course, it will likely become something else in the persistent data store, unless it is the first record there as well.)

This strange behavior makes the CDS harder to use, because you cannot use the AutoInc field to link tables in your briefcase, and have to use another field that won’t be changed by the CDS underneath you.  Unfortunately, while the InternalCalc field would seem to be ideal for that purpose, it won’t work, for two reasons.

The first reason, which makes absolutely no sense to me, is that THE PRESENCE OF AN INTERNAL CALC FIELD IN THE CDS CAUSES THE AUTOINC FIELD TO ASSIGN ITS OWN INCREASING VALUES, EVEN WHEN YOU HAVE LOADED DATA FROM A DATA PACKET, AND EVEN IF YOU HAVE ALREADY ASSIGNED A DIFFERENT VALUE TO THAT AUTOINC FIELD!  That means that, if the data you loaded happens to have an AutoInc value less than the number of records you are adding to the CDS, you will get a “Key Violation” when you call Post() on the record that matches.  For example, if you load a record with “1” in the AutoInc field, then Append() a record, assign -1 to AutoInc in your AfterInsert handler, and then call Post(), your -1 gets replaced with 1, and the Post() fails, because otherwise there would have been two records with 1 in the AutoInc field.  If you load a record with “2” in AutoInc, the first new record in the CDS will get the 1, and the second record will cause the “Key Violation”.
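
A sketch of that failure case, assuming the CDS holds some fkInternalCalc field and was loaded with one record whose AutoInc value is 1:

cds->Append();
cds->FieldByName("ID")->AsInteger = -1;  // e.g., in your AfterInsert handler
cds->Post();                             // -1 is replaced with 1 -> "Key Violation"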

The second problem with InternalCalc fields when using them in a briefcase is that they do not get included in the Delta DataSet passed to BeforeUpdateRecord, where you could use them to update any linked tables you have in your briefcase.

Thus, my workaround for these problems:

1) Create an AfterInsert handler for the CDS.  Use it to assign a progressively negative number to the AutoInc field.  Get the progressively negative number from a centralized routine, so it won’t clash with other progressively negative numbers in other CDSs in your briefcase (see the sketch after this list).  Do NOT use the AutoInc field for anything else, and certainly not for linking tables, because, should you load a data packet that happens not to have any records, your AutoInc values will be overwritten with positive numbers which (probably) match the AutoInc values of records in your persistent data store which you did not load.

2) Create a second field, called “LinkingID”, in your CDS.  Make it an fkData field so that it will be passed in your DeltaDS to the BeforeUpdateRecord handler, and so it does not make AutoInc assign progressively positive numbers (which could clash with the AutoInc of your loaded records), as an fkInternalCalc field would.  You will also need LinkingID in the DataSet you are loading the data packet from through the TDataSetProvider, but it should NOT be part of your persistent data store.  Otherwise, you will get a “Field ‘LinkingID’ not found” Exception when you try to assign the data packet from the provider.  An fkCalculated field is ideal for this LinkingID field in the source dataset; use the OnCalcFields handler of the source dataset to set its value to that of the AutoInc field in the source.  If you are loading from pure SQL, you can include something like “ID AS [LinkingID]” in your SELECT clause.  Note that this will make the “LinkingID” field act as ReadOnly in the CDS for records that have been loaded from the data store, even though ReadOnly is false for that field, and even though you can edit “LinkingID” in the CDS for newly-inserted records.

3) In your CDS’s AfterInsert, along with assigning your progressively decreasing negative number to the AutoInc field (where it may be destroyed by Post()), also assign it to your LinkingID field, where it will NOT be destroyed by Post().  Although you cannot edit LinkingID for records loaded from the data store, you CAN edit it for new records.  Note that you should not call Edit() and Post() in AfterInsert; you can just assign the new value to LinkingID.

4) Now, you can use LinkingID to link data sets in your briefcase.  You can create new records in linked tables, and assign the LinkingID value to foreign keys in those tables.  However, remember that your negative LinkingID values for new records in the CDS will NOT wind up in your persistent data store, so the foreign keys will need to be updated when the data is persisted.

5) You can do that in the BeforeUpdateRecord handler of the TDataSetProvider.  You should ApplyUpdates() for your master table first.  In BeforeUpdateRecord, you will have UpdateKind of ukInsert when you get the inserted record.  You can then get DeltaDS->FieldByName("LinkingID")->AsInteger, which will be the LinkingID, and which will be negative.  The trick is that you have to post the inserted record yourself, using a second DataSet or whatever method you choose, and get the new AutoInc value from the persistent data store, all within the BeforeUpdateRecord call.  Now, save both the negative, temporary LinkingID and the new, permanent AutoInc value returned by the persistent data store.  If you use a single, centralized (within your app) source of those temporary negative ‘AutoInc’s, you can use a single table or array to store the corresponding permanent AutoInc’s for all of your tables.  Don’t forget to set Applied to true in BeforeUpdateRecord to tell the provider that you have inserted the new record in the permanent data store.

6) For the detail tables, call ApplyUpdates() after the master’s ApplyUpdates().  In their BeforeUpdateRecord, for either ukInsert or ukModify, check the foreign keys for references to your master table.  If a foreign key is negative, that means it points to a temporary LinkingID: just look up the negative value and replace it with the corresponding permanent AutoInc you got back from the data store in step 5.  (This is why you can’t use the AutoInc field directly instead of the LinkingID: if the CDS changes your negative AutoInc values to positive values, and you had used those positive values for your foreign keys, then when you are saving the detail table records you won’t know whether a positive foreign key references the primary key value of your new record or the primary key of some other record in the data store.)
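
Here is a minimal sketch of the scheme in code.  Every name in it (ClientDataSet1, DataSetProvider1, the “ID” and “LinkingID” fields, the detail table’s “MasterID” foreign key, and the GetNewPermanentID() helper) is hypothetical:

#include <map>

static int NextTempID = -1;              // centralized source of temporary IDs
static std::map<int, int> PermanentIDs;  // temporary (negative) ID -> permanent AutoInc

// Hypothetical helper: INSERTs the delta record into the database via a second
// query and returns the AutoInc value the database assigned.
int GetNewPermanentID(TCustomClientDataSet *DeltaDS);

// Steps 1 and 3: give every new record a unique negative ID, in both fields.
void __fastcall TForm1::ClientDataSet1AfterInsert(TDataSet *DataSet)
{
    int TempID = NextTempID--;
    // No Edit()/Post() here: the dataset is already in dsInsert.
    DataSet->FieldByName("ID")->AsInteger = TempID;         // may be destroyed by Post()
    DataSet->FieldByName("LinkingID")->AsInteger = TempID;  // survives Post()
}

// Step 5: in the master's provider, do the insert ourselves and record the mapping.
void __fastcall TForm1::DataSetProvider1BeforeUpdateRecord(TObject *Sender,
    TDataSet *SourceDS, TCustomClientDataSet *DeltaDS, TUpdateKind UpdateKind,
    bool &Applied)
{
    if (UpdateKind == ukInsert)
    {
        int TempID = DeltaDS->FieldByName("LinkingID")->AsInteger;  // negative
        int PermID = GetNewPermanentID(DeltaDS);
        PermanentIDs[TempID] = PermID;
        Applied = true;  // tell the provider we have posted the record ourselves
    }
}

// Step 6: in each detail table's provider, fix up negative foreign keys.
void __fastcall TForm1::DetailProviderBeforeUpdateRecord(TObject *Sender,
    TDataSet *SourceDS, TCustomClientDataSet *DeltaDS, TUpdateKind UpdateKind,
    bool &Applied)
{
    if (UpdateKind == ukInsert || UpdateKind == ukModify)
    {
        TField *FK = DeltaDS->FieldByName("MasterID");
        if (!FK->IsNull && FK->AsInteger < 0)  // negative -> temporary LinkingID
            FK->NewValue = PermanentIDs[FK->AsInteger];
    }
}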

Or, you can just use GUIDs to assign your primary keys and forget AutoInc fields altogether!

(BTW, another way in which InternalCalc fields and TClientDataSet don’t get along is that, if you have a CDS with an InternalCalc field, you can only call CreateDataSet() once.  If you try to call it again, even after setting the CDS->Active = false, you get a “Name not unique in this context” exception.  Don’t ask me why that error message makes sense.  If there is no InternalCalc field, then no problem calling CreateDataSet() and setting Active false as many times as you want.  As noted on Quality Central, Embarcadero doesn’t consider this behavior (or the non-helpful error message) a bug).
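
A quick repro sketch of that quirk:

// cds has an fkInternalCalc field defined:
cds->CreateDataSet();  // works the first time
cds->Active = false;
cds->CreateDataSet();  // raises "Name not unique in this context"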

Installing Gnostice eDocEngine in C++ Builder with TRichView

I have been using the TRichView RTF editor in C++ Builder XE. I needed PDF creation for my project, and had been planning to use the Gnostice eDocEngine product, which has a component for exporting from TRichView. However, the Gnostice eDocEngine installation failed for both the TRichView and THtmlViewer components.

I considered other PDF creation libraries mentioned on the TRichView website. However, the llionsoft product has not worked with any version of C++ Builder since 2006 (according to their website), and the wPDF license prohibits sending PDF files created by it over the Internet.  Since one of the important functions of my program is emailing the .pdf’s it creates, the wPDF license was unacceptable. Gnostice has a much better license.

In reviewing the error message, it appeared that the Gnostice component was not finding the TRichView component, because TRichView was installed into C++ Builder rather than into Delphi.

The solution was to install TRichView into Delphi, so that Gnostice could find it, but so that C++ Builder could also use it. Sergey Tkachenko (the author of TRichView) helpfully provided this info to do that:

Well, there is a way for installing a Delphi package both for Delphi and C++Builder.

How to do it
1) Uninstall all RichView-related C++Builder packages. Delete all RichView-related .obj, .hpp, and .dcu files.
2) Open RVPkgDXE.dproj, right click in the Project Manager, choose “Options”.
In the Options dialog, choose Build configuration (combobox)=”Base”. On the page Delphi Compiler | Output – C/C++, choose C/C++ Output file generation = Generate all C++Builder files. OK to close the dialog. Save the package and install.
Repeat for all RichView packages.
3) In all your projects, change references from RVPkgCBXE to RVPkgDXE and so on.

Differences from the old approach:
– HPP files are not placed in the same directory as the pas-files; they are placed in $(BDSCOMMONDIR)\hpp (such as \Documents and Settings\All Users\Rad Studio\8.0\hpp)
– OBJ files are not created. Instead, they are assembled into a LIB file placed in $(BDSCOMMONDIR)\Dcp (such as RVPkgDXE.lib)

One final point: once you do this, you will have to add the TRichView .lib files into the project manually, since C++ Builder will no longer do that.  That’s inconvenient, but not a deal-killer.

The Gnostice eDocEngine installation for TRichView only works when TRichView is installed in Delphi. Thus, I had to uninstall and reinstall the entire TRichView stack.

I had to remove all of the .bpl, .hpp, etc. files so they wouldn’t be found, and uninstall everything that had been installed into C++ Builder using Component/Install Packages.

Then, reinstall the entire stack into Delphi, being sure to set ‘Create All C++ Files’ for each component. That creates the .hpp files, etc., in the /Users/Public/Documents/RadStudio/8.0 (for XE) folders.

Most components will require the previously-installed components to be added to the Requires portion of the project. You will know that is needed because, on installation (or sometimes use), you will get an error that a component cannot be installed because it contains a unit that is also used by another component. When you get that error, go back and add the other component (the one that was compiled first) into the Requires section of the new, later component.

Ultimately, it was possible to install the TRichView eDocEngine connector by installing into Delphi first, and then into C++ Builder (it didn’t work when installing into Delphi and C++ Builder at the same time).

The FastReport connectors installed without problem. I could not get the THtmlViewer connector to install using the automatic installation program, but it did install using the same technique: install into Delphi, creating the C++ Builder files. The installation program produces a log file (whose name it reports when it fails).

The THtmlViewer component installation program failed. The manual installation went as follows:

Build the gtHtmlVwExpD15.dproj project first. It does not install, and the context menu in Delphi does not offer an Install option. Then, build and install DCLgtHtmlVwExpD15.dproj. With the C++ Builder files created, the THtmlViewer connector for eDocEngine worked.

Obviously, this will only work if you have RAD Studio rather than just C++ Builder.

The same technique worked for the DevExpress ExpressPrinting component. The automated install failed because it requires (in the literal sense) the Delphi-only version of the DevExpress libraries. However, I was able to get a manual install to work by first loading the gtXPressExpD15.dproj project (from the Source folder of the eDocEngine installation) into RAD Studio. I activated the Release build configuration and set the project to create all C++ files. However, the build failed because of the requirement for the Delphi-only library, so I removed the reference to that library and added a reference to dxPSCoreRS15.dcp from the DevExpress Library folder. The build then succeeded. Then, I loaded the DCLgtXPressExpD15.dproj project, activated the Release build, changed the project options to create the C++ files (no need to change the Requires), and the Build and then Install succeeded. I was able to use the component in my Delphi and C++ Builder projects.

For the PDFToolkit (starting with 3), the installation went OK, but compiling a program with a TgtPDFViewer component fails with a slew of link errors, starting with “Unresolved external ‘GdipCloneMatrix’ referenced from c: . . . GTPDF32DXE.LIB”. Googling those functions reveals that they are part of the GDI+ library. The solution was to add the gdiplus.lib library to the project. That file is in the C:\Program Files (x86)\Embarcadero\RAD Studio\8.0\lib\win32\release\psdk folder. Right-click on the project, select Add…, and pick that file to add to the project. Then it will compile and run.

As of this writing, the PDFToolkit version 4 installation program does not work. It includes the gtPDFViewer.hpp file, which tries to include files such as System.SysUtils.hpp, System.Classes.hpp, and Vcl.Controls.hpp. None of those files exist. Of course, there are files such as SysUtils.hpp, Classes.hpp, Controls.hpp, and Forms.hpp, and includes for those files would work. However, PDFToolkit version 3 does install correctly.

Update Dec 8, 2011:  PDFToolkit version 4 does the same thing when installed in both C++ Builder XE and C++ Builder XE2, because the XE2 installation causes an extra include of the XE2 files EVEN IN XE PROJECTS.  The workaround is to install only the XE version, not the XE2 version.  Then, the file compiles, but the link still fails. According to an email from Gnostice:

Please add the following lib’s into your project before building the Project

(PDF toolkit installation path)\PDFtoolkit VCL\Lib\RADXE\gtPDFkitDXEProP.lib
(PDF toolkit installation path)\Shared\Lib\RADXE\cbcrypt32.lib
(PDF toolkit installation path)\Shared\Lib\RADXE\cbgdiplus.lib
(PDF toolkit installation path)\Shared\Lib\RADXE\freetype2.lib
(PDF toolkit installation path)\Shared\Lib\RADXE\gtPDF32DXE.lib
(PDF toolkit installation path)\Shared\Lib\RADXE\gtusp.lib

With those changes, a project using the PDF Toolkit 4 compiles and links. Hopefully they will come up with a fix for the install problem before my other libraries are ready for use with XE2.

Compiling THtmlViewer to use in C++ Builder XE

THtmlViewer is a component that displays HTML in a Delphi/C++ Builder form.  It is also used by the outstanding TRichView component for importing HTML.  It is available under the MIT license, so it can be used in commercial projects.  It is hosted on Google Code and can be downloaded using Subversion; as noted on the project page,

# Non-members may check out a read-only working copy anonymously over HTTP.
svn checkout thtmlviewer-read-only

The author of TRichView recommends that you NOT use the trunk version, but rather branch 11:

svn checkout thtmlviewer-read-only

The 2010 Delphi project imports into Delphi XE and compiles, and you can install the resulting package, but the components only show up when running Delphi!

To create the components for C++ projects, you need to create a C++ package, but NOT using File/New/Package (C++ Builder).

Instead, use Component/Install Component, choose “Install into new package”, and select the same .pas files used in the Delphi package (basically, all of them in the source folder).
You then need to name the package and choose its save location (for the package folder).  You can give it a description, which will later show up in the Component/Install Packages… dialog.

Make sure to specify that you want a C++ Package, not a Delphi package.  Select Finish, and your package will be created, although linking will fail with an error.

You have more work to do before it will work properly.  You need to pass the -LUDesignIDE option to the Delphi compiler: in Project Options/Delphi Compiler/Compiling/Other options/Additional options to pass to the compiler, include “-LUDesignIDE” (without the quotes).  Be sure to use the correct build configuration at the top of the dialog; you will want a Release build, so select Base or Release.

Also, Delphi needs to know to make the .hpp files, etc.  In Project Options/Delphi Compiler/Compiling/Output – C/C++, set C/C++ Output file generation to “Generate all C++ Builder files (including package libs)” so you get the header files as well as the package lib to install.

Finally, when you try to install the package, you will get an error that it conflicts with a file included in the vclimg150 package.  The solution is to include vclimg150.bpl in the Requires list for the package.  Right-click on Requires and add vclimg150.bpl (just type the name; you don’t need to browse to the file, and when it shows up in the Requires list, it will be vclimg.bpi, even though you typed vclimg150.bpl).

Now, pick the Release build, and build it (In the Project Manager, open up Build Configurations, right click on Release, and select Build)

Then, you need to install it, using Component/Install Packages .

First, save and close your THTMLViewer C++ Project.  Then, WITH NO PROJECTS OPEN, Select Component/Install Packages…  THTMLViewer should not be listed in the design packages check list box.

Click Add…, and go to the directory where your library was placed (this is set under Project/Options for the project that made the THtmlViewer component; by default under Windows 7 and RAD Studio XE it is C:\Users\Public\Documents\RAD Studio\8.0\Bpl).  Select the .bpl package library that you just made, and click OK.

Then, you can create a new C++ project, and you should be able to select the THtmlViewer component and drop it on the form.  Make a FormCreate handler containing the line HtmlViewer1->LoadFromString(WideString("Hello"));.  Compile the project, and it may complain about missing .h files.  Just browse to the source directory, where the .hpp files should be.  You can select the .hpp file even though RAD Studio is looking for the .h file.  If it compiles and you see “Hello” in the window, you know you are done!
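
Spelled out, that handler is just (component names as dropped on the form):

void __fastcall TForm1::FormCreate(TObject *Sender)
{
    HtmlViewer1->LoadFromString(WideString("Hello"));
}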

Incidentally, creating the component project under Delphi is easier than under C++ Builder: Delphi automatically recognizes and fixes the vclimg150 problem, presenting it in a dialog box with “OK” to add the reference and rebuild the component.  Also, Delphi automatically installs the component.  However, the component does not install under both C++ Builder and Delphi at the same time (I could not figure out how to do that), and since I don’t really need it under Delphi, I did not pursue it.

UniDAC in C++ Builder 2010 to access SQL Server Compact Edition

DevArt makes a number of data access products for Delphi/C++ Builder as well as .NET.  I downloaded a trial of the UniDAC Universal Data Access Components for VCL.  Unfortunately, the documentation is sparse, to say the least, and C++ Builder choked on compiling even a very simple application.  Here are a few notes on getting this working to access SQL Server Compact Edition (SQL CE).

Also unfortunately, Microsoft seems to have left a glaring (and actually hard to believe) defect in its product line by not including any ability to transfer data between SQL Server (or any other database) and SQL Server Compact Edition.  Thus, I wrote a small utility to transfer my data into SQL CE.

The only code I could find on DevArt’s website for accessing SQL CE was for Delphi rather than for C++ Builder. However, the following works to access SQL CE and read a list of tables:

UniConnection1->SpecificOptions->Values["OLEDBProvider"] = "prCompact";
UniConnection1->Database = "C:\\work\\VS2010Tests\\CreatedDB01.sdf";  // note the escaped backslashes
UniConnection1->Connected = true;  // open the connection before reading metadata
TStrings* list = new TStringList();
UniConnection1->GetTableNames(list, true);
ListBox1->Items->Assign(list);  // display them (a ListBox1 is assumed)
delete list;

You add a TUniConnection to the form, and then you must set ProviderName in the TUniConnection to ‘SQL Server’ in the property combo box to avoid the EDatabaseError ‘Provider is not defined’.

However, C++ Builder will still fail to link the project, with the error “[ILINK32 Error] Fatal: Unable to open file ‘SQLSERVERUNIPROVIDER.OBJ’”.  Apparently the fix for that is to manually edit your .cbproj project file (!), find the <AllPackageLibs> element, and add msprovider140.lib.

Now, your project will compile and fill the listbox with the list of tables!

Konica Minolta Twain Driver Not Recognized

I recently had a problem getting the Konica Minolta Twain Driver for the C253 scanner (among others) to be recognized by the twain device manager, and thus it was not listed as one of the twain devices available, either in PhotoShop or in Atalasoft DotTwain.  The nice people at our local Hughes Calihan Konica Minolta here in Phoenix helped me figure this out, along with Lou Franco of Atalasoft (see his comments below), and I wanted to post the solution for anyone having a similar problem.

Ultimately, the problem was that another software package (I believe it was Business Objects’ Crystal Reports XI Release 2) installed a copy of LIBEAY32.dll into the C:\Windows\System32 directory.  LIBEAY32.dll is part of the open source OpenSSL suite, and I have 18 (!) different versions on my system.  They mostly live in harmony, but when the Konica Minolta twain driver tried to load, it would get the version of LIBEAY32.dll that Crystal Reports had put into System32 (since that is very early in the Dynamic Link Library Search Order), and when the LIBEAY32.dll that was loaded did not have the proper ordinal entry point, the Konica Minolta twain driver would not be loaded by the twain device manager.

When PhotoShop loaded, it would emit an error message about the missing ordinal in LIBEAY32.dll; when File/Import was pulled up in the menu, the Konica Minolta twain device would simply be missing, with no error.

Compounding the problem was that my test application using the Twain source manager via the Atalasoft DotTwain ShowSelectSource() function did NOT issue any error.

However, a test application I made with Visual C++ loading the Twain device source library for the Konica Minolta scanner did produce the error.

It turns out that the only difference between my test application and Photoshop was the SetErrorMode() function, which sets the process’s error mode. You can call GetErrorMode() and SetErrorMode() after declaring these imports (in C#, via P/Invoke from kernel32):
[DllImport("kernel32.dll")]  // requires: using System.Runtime.InteropServices;
private extern static uint SetErrorMode(uint mode);

[DllImport("kernel32.dll")]
private extern static uint GetErrorMode();

If you then call SetErrorMode(0) before the Atalasoft ShowSelectSource() function, the user DOES see the error messages from the operating system. However, the Twain Source Manager twain_32.dll does not return any error code to ShowSelectSource(), so obviously ShowSelectSource() cannot return any error code either. As noted below, the only way for a calling program to get an indication that a source did not load is to call the twain source DLL directly rather than through the Twain Source Manager, and observe that the LoadLibrary call returns NULL.

Having figured out that the problem was that the Business Objects LIBEAY32.dll was in the System32 directory, the solution was a little difficult.  The Konica Minolta Twain Driver worked once the LIBEAY32.dll in System32 was removed (or renamed), but Crystal Reports XI Release 2 tries to repair its installation if it finds that file missing.

However, by placing the LIBEAY32.dll from the Konica Minolta twain driver directory (in a subdirectory of C:\windows\twain_32, where the twain device files live) into System32, both the Konica Minolta twain driver and Crystal Reports seem to be happy.  For good measure, I put a copy of the LIBEAY32.dll that Crystal had put into System32 into Crystal’s own directory (since that has higher priority in the .dll search order) so that Crystal should load its own LIBEAY32.dll.

For reference, I tracked down the problem by making a test Visual Studio C++ app and trying to load the Konica Minolta twain device driver directly:

#include <windows.h>

Then, in the click handler:

HINSTANCE hDLL; // Handle to DLL
UINT uErrorMode = SetErrorMode(0); // so you get an error message from the OS
LPCWSTR str = L"C:\\Windows\\twain_32\\KONICA MINOLTA\\RTM_V3\\kmtw3R.ds"; // escaped backslashes
hDLL = LoadLibrary(str);

The LoadLibrary call produces a MessageBox (see the SetErrorMode() docs) with the error, and returns NULL, if there is a problem loading the twain source driver library. Note that the twain device driver will need other files in that directory, and you will get those errors first; you can fix that by adding the driver directory to the PATH for testing. System32 will still be ahead of the PATH (but not ahead of the application .exe directory), so you will get the error message you are looking for. Also note that the twain device driver library, in actual use, will NOT need the PATH to be set; the twain device manager appears to take care of that.

Another approach that works is to change the current working directory to the directory containing the twain source driver before calling LoadLibrary on the driver, as this will more closely approximate the DLL Search Order used by the twain source manager. Again, the problem is that, although the source driver does install the files it needs into its own directory, the LIBEAY32.dll that Crystal installs into System32 is still AHEAD of the LIBEAY32.dll installed into the source driver’s directory! (see Lou Franco’s comments below) DLL Search Order is a fairly complex topic, and can vary depending on a number of factors; google “Dynamic Link Library Search Order” for info. Note that, unless SafeDllSearchMode is disabled, changing the current working directory does not change the DLL search order.
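
A sketch of that approach (same driver path as above):

SetCurrentDirectory(L"C:\\Windows\\twain_32\\KONICA MINOLTA\\RTM_V3");
HINSTANCE hDLL = LoadLibrary(L"kmtw3R.ds");
// As noted above, with SafeDllSearchMode enabled the current directory is
// searched late, so the System32 copy of LIBEAY32.dll can still win.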

Also, when I tried this with a 64 bit version of Vista, Crystal installed LIBEAY32.dll into /Windows/SysWOW64, which is the directory that takes the place of /Windows/System32 for 32-bit processes.

Regrettably, when LoadLibrary fails, FormatMessage produces only a message that the operating system could not run the file. The only detailed info available seems to be the message box provided directly to the user by the OS, and only when SetErrorMode(0) is in effect.

See also: for a similar problem.

— Edited 12/31/08 10PM to add info about using SetErrorMode() to show the error message box, that the lack of error reporting to the application occurs at the Twain Source Manager level, to reinforce the info about DLL Search Order, and to take Lou Franco’s comments into account; Edited 12/3/09 to add info re 64 bit Vista – JMN

Ruby on Rails 2.3 and PostgreSQL on Ubuntu Hardy 8.04 LTS and 10.04 LTS Server

Update: A few changes for 10.04 LTS, using PostgreSQL 8.4

When running rails (other than rails --version), I got the error “No such file to load: net/https”. That was fixed by installing libopenssl-ruby, as in:
aptitude install libopenssl-ruby
This should probably be done before installing rails, although installing it after rails was installed fixed the problem.

Running gem update --system produces a message that “gem update --system is disabled on Debian. RubyGems can be updated using the official Debian repositories by aptitude or apt-get”.

Instead, I found the following suggestion:

sudo gem install rubygems-update
sudo update_rubygems    # note: this will clean out your gems!

Note: I had to reinstall rails after update_rubygems, which I ran after I had installed rails. I would probably do this before installing rails.

The “-y” flag is now the default, and if you use it you get a message to that effect.

irb and apache2 were already installed by the time I got to those steps. There is an apache2 metapackage that I would probably use instead of the apache 2.2 packages noted below if I still needed to install apache2.

Before you can run the programs you have installed with gem (e.g. Rails), you will need to add:
export PATH=/var/lib/gems/1.8/bin:$PATH

When I ran the Passenger installation, I got a message to install three more packages:

aptitude install apache2-prefork-dev
aptitude install libapr1-dev
aptitude install libaprutil1-dev

However, only the first one of those actually did anything. Since the passenger installation gives good diagnostics, it is reasonable to let that tell you what still needs to be installed.

Following the instructions on the Passenger install for configuring Apache, the sample configuration included some inline comments with ‘#’ — these caused an error in Apache2 and had to be moved to a separate line.

Passenger may need a file named .htaccess to be installed in the /public directory of your rails app, with the following two lines:

PassengerEnabled on
PassengerAppRoot /full/path/to/the/root/of/your/rails/app

The PassengerAppRoot should NOT be your rails app’s public directory, but the .htaccess file needs to be in that public directory. The Passenger docs incorrectly state that the PassengerAppRoot is assumed to be the parent of the public directory, but that is only true if the public directory is named in DocumentRoot, and not if you are using an alias.

Also, if you are using an alias and the Rails app is not in the root of the website, you may need config.action_controller.relative_url_root = "/test" in your config/environment.rb file

Also note that, except where noted, the installation commands need to be run as root (sudo su -) or with sudo.

There has been much confusion and consternation about setting up Ruby on Rails with PostgreSQL.

There seems to be a lot of support for running this on a Mac, but less so for running it on modern Ubuntu. There are several moving parts here, so once I had figured them out, I wanted to record my notes to save others some of the same aggravation.

Note that there are some other issues and differences between MySQL and PostgreSQL.

In particular, one difference noted there between PostgreSQL and other SQL dialects is that PostgreSQL is stricter about the difference between single and double quotes.  Double quotes are for “delimited identifiers”, such as table and column names, and prevent them from being mistaken for keywords.  For example, “SELECT” could be the name of a table or column or variable, whereas SELECT is an SQL keyword.  Single quotes are for string constants; use two adjacent single quotes for a literal single quote, as in 'Dianne''s horse'.  Where this will get you is if you use double quotes in :conditions=>"" and :joins=>"" strings, which will work in MySQL but not PostgreSQL.  Another difference is that “like” may need to be changed to “ilike” in PostgreSQL if you want case-insensitive queries.

This post doesn’t attempt to address all of those issues, just to get a system from a base Ubuntu Hardy (8.04 LTS) install to a working Ruby on Rails 2.2/PostgreSQL 8.3 system.  It will also install working sqlite3 and postgresql drivers, and will test the installation as we proceed.

It also doesn’t attempt to address migration of data; do a web search on “mysql postgresql yml” to see several alternatives here.

(Some of these installation instructions are modified from Agile Web Development with Rails, third edition beta, which I assume you already have)

apt-get update
apt-get upgrade
aptitude install build-essential

if aptitude is not installed, that will cause an error.  Install with:

apt-get install aptitude


aptitude install ruby rubygems ruby1.8-dev libsqlite3-dev
gem update --system

At the end of a lot of output was this notice:

RubyGems installed the following executables:

If 'gem' was installed by a previous RubyGems installation, you may need to
remove it by hand

In my case, I did have to remove the old ‘gem’ file by hand:

mv /usr/bin/gem /usr/bin/gem.old
mv /usr/bin/gem1.8 /usr/bin/gem

If you get an error about the uninitialized constant Gem::GemRunner (NameError), this is your problem.


gem install -y rails

if you get an error that “could not find rails (>0) in any repository”, simply try again

gem install -y rails

To use irb, you need:

aptitude install irb

if you want git:

aptitude install git-core git-doc

if you want apache:

aptitude install apache2.2-common

For passenger:

gem install passenger

You may get some instructions about additional software to install for the passenger apache2 module to be compiled.  You will also get some instructions for configuring passenger to work under apache2.  Be aware that, with Ubuntu, you are encouraged NOT to edit the apache2.conf file, which may need updating with a new version of Ubuntu, but rather to edit other files included by apache2.conf, such as httpd.conf and the sites-available files (linked into sites-enabled when you want them to be enabled).

To use sqlite3 (e.g., for initial testing)

gem install sqlite3-ruby

For PostgreSQL:

aptitude install postgresql postgresql-client

Now, in order to access PostgreSQL, you need to have a PostgreSQL user defined, as well as a PostgreSQL database defined.

The PostgreSQL installation creates the ‘postgres’ Linux user, the ‘postgres’ PostgreSQL user, and the ‘postgres’ database, so to get into the database, you can just (from root):

su postgres

and poke around (psql has pretty good help: use \l to list databases, \du to list users, \? for help, and \q to quit).

Exit psql with ‘\q’

To create a PostgreSQL user so you can test rails with PostgreSQL (in my case, I created user ‘nachbar’, since that is my Linux username) FROM THE SHELL (not from psql):

su postgres
createuser nachbar

(answer ‘y’ to the question about being a superuser)

If you get an error that, for example, ‘Ident authentication failed for user “xxxx”’, that means you forgot the ‘su postgres’.  Ident authentication means that PostgreSQL will let Linux user ‘postgres’ in because there is also a PostgreSQL user ‘postgres’.

Once you have created your user (in my case, ‘nachbar’), AS THAT USER, try:

psql postgres

Here, ‘postgres’ is the DATABASE name to which you are connecting.  If you don’t specify a database name, psql will try to connect to a database with the same name as your username, which does not exist.  (try just ‘psql’ here to see that error)

Once you have psql working and your user set up in PostgreSQL, create a test rails application and test sqlite3 — as your own user (i.e., not root):

rails test
cd test
script/generate model product title:string
rake db:create
rake db:migrate
script/console
Product.find(:first)
(that should return nil, since there are no products saved yet)
p = Product.new
p.title="My Title"
p.save
Product.find(:first).title

The last command should read you back “My Title” from your saved Product.

Now, exit the console, and switch your app to PostgreSQL


edit config/database.yml:

under development:, change adapter to ‘postgresql’ and database to ‘test_development’.  No need to set a username, password, or anything else.

Install the postgresql adaptor (as root)

First, install the PostgreSQL header files:

aptitude install libpq-dev
gem install postgres

Then, test it (e.g., in irb):

require 'rubygems'
require 'postgres'

Now, (back as your own user, not root, and in the rails test project directory): create the PostgreSQL database:

rake db:create
rake db:migrate

Test that these were created in PostgreSQL:

psql test_development
\l  (to list databases)
\dt  (to list tables - should include the products table)
\q  (to exit psql)

Run the same “script/console” test above, which should give the same results as it did with sqlite3.

Check the PostgreSQL database:

psql test_development
select * from products;
(don’t forget the semicolon.  It should show your “My Title” product, now in PostgreSQL)

Rails is running with PostgreSQL!

Note that we did not set a user or password in database.yml, because we had created the ‘nachbar’ user as a PostgreSQL superuser, and that was the user that script/console and rake were running as.  We used ‘Ident’ authentication in this case.  There are several choices here, including creating another PostgreSQL user under which Rails will run.  Since ‘nachbar’ is now a PostgreSQL superuser, you can run the createuser command as ‘nachbar’ or ‘postgres’, but not as root!  In PostgreSQL, if the password is null, password authentication will always fail.

Other miscellaneous notes

PostgreSQL configuration notes:

PostgreSQL is set up to allow multiple “clusters”.  Installation creates a single cluster, “main”, which will probably be all you need.  In the following, “main” could refer to multiple directories if you have multiple clusters.  Also “8.3” is my PostgreSQL version number.  Other versions will, of course, have different directory names.

PostgreSQL configuration goes into /etc/postgresql/8.3/main and /etc/postgresql-common

PostgreSQL bin is in /usr/lib/postgresql/8.3/bin.  That directory is NOT added to the PATH, but appropriate links for psql, createuser, etc. are placed into /usr/bin.  Other commands, such as pg_ctl, may not be in the path.  The base path for the Ubuntu bash shell is set in the /etc/login.defs file, in the ENV_SUPATH and ENV_PATH vars.

The data directory is /var/lib/postgresql/8.3/main — see /var/lib/postgresql/8.3/main/postmaster.opts

According to /etc/init.d/postgresql-8.3, environment vars are set in /etc/postgresql/8.3/<cluster>/environment

possible options to /etc/init.d/postgresql-8.3 are:

start, stop, restart, reload, force-reload, status, autovac-start, autovac-stop, autovac-restart

(the functions are sourced from /usr/share/postgresql-common/init.d-functions)

On init, the init.d script looks for directories in /etc/postgresql/<version> (by default, ‘main’ exists there); then, in those directories, it looks for postgresql.conf, which is the file that sets the data directory (/var/lib/postgresql/8.3/main), the hba_file and ident_file (in /etc/postgresql/8.3/main), the port, etc., as well as all sorts of configuration FOR THE SERVER

start.conf determines whether the specific server gets started on bootup

to backup:

pg_dumpall > outputfile

to stop the server:

pg_ctl stop



Rake and Rails data on PostgreSQL

The ‘postgresql’ database driver supports rake commands such as

rake db:drop
rake db:create
rake db:schema:load RAILS_ENV=production

Be aware that PostgreSQL does not use autoincrement fields, but rather implements a more structured system using PostgreSQL sequences. Rails will create these for you, and will tell you about it. Thus, the rake db:schema:load will produce messages like:

-- create_table("appts", {:force=>true})
NOTICE: CREATE TABLE will create implicit sequence "appts_id_seq" for serial column "appts.id"
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "appts_pkey" for table "appts"

Those notices tell you that this mechanism is working properly.

Rails, Passenger, and PostgreSQL users

As indicated in the Passenger docs, passenger will run as the user that owns config/environment.rb, but that can be changed as indicated in the User Switching section of the Passenger docs, and can be modified by the PassengerUserSwitching and PassengerDefaultUser options in your httpd.conf or apache2 site files. Whichever user Passenger runs as must have a PostgreSQL user with the appropriate rights. Options include making that user a PostgreSQL superuser, or instituting better access controls with SQL GRANT commands.

In addition, there are login mechanisms for PostgreSQL other than the “Ident” mechanism we have discussed above. See the PostgreSQL website for details.

As one example, you can create the group and user “passenger” for Passenger to run as, including the PostgreSQL user:

adduser --system --group passenger

Change the group for the railsapp directory files by cd-ing to the railsapp directory and issuing:

chgrp -R passenger *

Change the mode for the log files and directory, so that the group (now ‘passenger’) can write to them:

cd log
chmod g+w .
chmod g+w *

Create the PostgreSQL user passenger:

su postgres
createuser passenger

(answer ‘n’ to all three questions: superuser, create databases, create roles)

Grant access to the passenger PostgreSQL user:

su postgres
psql
\c myrailsapp_production
grant all on audits, sessions, users, mymodel1s, mymodel2s to passenger;
grant all on sequence audits_id_seq, sessions_id_seq, users_id_seq, mymodel1s_id_seq, mymodel2s_id_seq to passenger;

Either change the owner of config/environment.rb to passenger, or set PassengerDefaultUser to passenger

Now Passenger will run as the ‘passenger’ user, and will also set the effective group to the default group of the ‘passenger’ user (also passenger, in this setup). It will access PostgreSQL as the PostgreSQL passenger user, as well, using ident authentication.  Of course, ident authentication works only within a single machine.  To access PostgreSQL from another machine, set the hostname, username, password, and port in Rails.

Run ‘touch tmp/restart.txt’ to restart Passenger on the next request.

Setting timezone on Ubuntu (different than setting it for your Rails app)

ln -sf /usr/share/zoneinfo/America/Phoenix /etc/localtime

Setting up the mail server on Ubuntu so Action Mailer Works:

Mail: exim4 was already running, but would not deliver except locally.  Make changes to /etc/exim4/update-exim4.conf.conf; especially, change configtype to ‘internet’ (so mail can go out to the internet) but leave local_interfaces at ‘’ so mail will be accepted only from the local system.  Also change readhost to ‘’ so headers show that as the origin, and set hide_mailname so readhost works.  Also, change /etc/mailname to indicate the domain of the user sending the mail.

The Virtual Server

To reproduce what I have done, I actually implemented the above on the 1&1 VPS I Linux package imaged to Ubuntu 8.04 LTS (64 bit).

Happy Hacking!

Flash Player Bug with RoR 2: HTTPService fires fault on HTTP status code 201

Regarding Flexible Rails: Flex 3 on Rails 2, by Peter Armstrong, and its forum:

This relates to a previous thread, but the solution is buried deep within the thread. There is a bug in Flash Player, which has been reported to Adobe.

Adobe considers this bug report “closed” with the “resolution” of “cannot fix”. Basically, Flash Player’s HTTPService incorrectly fires a fault on status code 201, which indicates “successful creation”. The Rails 2 scaffolding code returns status code 201 on successful creation, triggering the fault event from HTTPService and preventing the code on page 318 (for example) from working.

Since Adobe has given up on fixing this error, a workaround is required. One workaround would be to intercept the fault event, locate the status code 201, and treat it as “success”. However, I cannot find the status code in the fault event (!). You could also just treat the fault as a “success”, but then you wouldn’t know whether the create was successful.

The best workaround seems to be to change the status code returned from 201 to 200. This can be done in the rails controller. In this case, using iteration 8 code, in pomodo/app/controllers/locations_controller.rb, line 55, change “:created” to “:ok”, and it will work again.

James Nachbar

Flex-Rails: protect_from_forgery problem with Rails 2.1 produces ioError 2032

Update for Rails 2.2: According to the release notes: “Request forgery protection has been tightened up to apply to HTML-formatted content requests only” in Rails 2.2 — I have not tested this, but it should obviate the problem addressed in this post for Rails 2.2 and newer.

Regarding Flexible Rails: Flex 3 on Rails 2, by Peter Armstrong:

The book talks about commenting out protect_from_forgery, and then uncommenting it in iteration 5 without mentioning what had changed to allow protect_from_forgery to be used.

In reviewing old vs. new rails code (particularly vendor/rails/actionpack/lib/action_controller/request_forgery_protection.rb), it appears that older versions of rails did not run the forgery protection check for .xml requests, but the newer versions do. Thus, unless you are manually adding the appropriate parameters (see that file for the current test used to decide whether a request is forged), you will fail the forgery test unless you prevent the test from running.

At a minimum, you will need:
skip_before_filter :verify_authenticity_token
in your sessions_controller.rb to avoid the ioError 2032.

You can track this error down by adding a fault event handler to the HTTPService (e.g. in LoginBox.mxml on page 153). You can also look at the output from the server (the “ruby script/server” command), which will show status code 422 instead of 200 for the “session.xml” request.

For a more detailed look, go to the rails log at log/development.log and look at the end for the most recent error. It will show that ActionController::InvalidAuthenticityToken was thrown by /vendor/rails/actionpack/lib/action_controller/request_forgery_protection.rb:86:in `verify_authenticity_token’.

CSRF attacks are not so relevant for applications running within Flash Player (as opposed to, for example, applications running within a browser), since Flash Player won’t go from one site to another.

If you want to continue to use forgery protection for the .html requests, the best solution is to

1) uncomment protect_from_forgery (so the protection token is generated),

2) skip_before_filter :verify_authenticity_token in the controllers that need to allow .xml to be served without the forgery protection, and then

3) call “verify_authenticity_token” (the same call used by request_forgery_protection.rb) within the .html generation code that you want to protect. verify_authenticity_token will throw the InvalidAuthenticityToken exception if the token is not correct.

If you want to protect your .xml calls too, the check within verify_authenticity_token is:
form_authenticity_token == params[request_forgery_protection_token]
so you would need to get your rails app to send the form_authenticity_token to the Flex client when the session is created, and then your subsequent calls will need to set the “request_forgery_protection_token” param.

James Nachbar

Flex-Rails: Non-Debug Flash Player caches, so fails to update list – status code 304

Regarding Flex/Ruby on Rails programming:

And then, just when everything was working in the debug Flash Player, I decided to fire up IE and run the application in the non-debug Flash Player, and it stopped working: after creating an item, the list blanked out rather than being updated.

Ultimately, the problem was that, in non-debug mode, using IE (but apparently not Firefox), Flash issued a “conditional get”, and was getting a 304 “not modified” response instead of the updated data. In debug mode, Flash was issuing a regular GET, and thus got the correct info. Thus, the application worked in debug mode, but not in non-debug mode.

I have seen that RoR 2.1 included some new caching functionality, although I don’t know if this is the kind of caching they are talking about, or why rails was reporting “not modified” even after the database upon which the response was based had been modified.

That Rails was returning status code 304 could be seen in the server window (“ruby script/server”).

For some reason, even though I am creating a new HTTPService object for each call, the return from the POST (i.e., the one object being created) was still being returned in the result event when I sent a GET to obtain the entire list. I could determine that by sending the result event info from the list command to the debug window:

var x:XMLList = XMLList(event.result.children());

Even though this was the result of the GET call, I was still getting the result of the POST.

My fix (actually more of a workaround) was to add a time-generated string ("?" + Number(new Date())) to the end of the request URI, thus avoiding the caching problem. A better solution might be to send a “no-cache” header from the RoR portion, although I have not tested that.

More evil IE caching, I guess!

James Nachbar