How do I commit data set 1 w/o clearing locks on data set 2

PColbert

Sharing one TOracleSession is proving to be a pain.

In one of my applications, the user can open multiple edit windows. Each window corresponds to a database row, and the window's data set locks the row immediately, to prevent other users from editing the same data at the same time. Each window also has its own Save and Cancel buttons. Save implies Commit, and Cancel implies Rollback.

Unfortunately, Commit and Rollback release _all_ locks held by the session, so after closing just one window, all the other windows' rows become unprotected. Any other user can then edit those rows.

I have just tried CachedUpdates, in an effort to limit the unlocking to just the data sets involved. Unfortunately, CommitUpdates and CancelUpdates also appear to clear _all_ locks owned by the session.

Is there any way around this? Or do I have to use a different TOracleSession for each window?

If I must do the latter, then I will have to raise the SESSIONS parameter in our customers' initialization files. I have been reluctant to do so since I don't know what effect it will have on Oracle's memory usage and performance.
 
I have calculated the server's memory cost for each additional SESSION at about 290KB. That's 290KB per window per user. If 5 users each need an additional 20 windows open, that's about 29MB of RAM on the server! (Calculations based on observation of an 8.1.6 server, Std. Edition.)

Please tell me there's another solution!
 
Actually, I was hoping there was a way to avoid losing the locks in the first place. Then I wouldn't have to use one session per window.
 
The only "solution" is a more optimistic locking approach. Set the LockingMode to lmCheckImmediate. This way the records won't be locked while the user is editing the data, but only during ApplyUpdates.
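The idea behind a check-on-update mode like lmCheckImmediate can be sketched outside of Delphi and Oracle. The snippet below is a hedged illustration in Python using the in-memory sqlite3 module as a stand-in database; the table, column names, and the `optimistic_update` helper are all hypothetical, but the conflict check in the UPDATE's WHERE clause mirrors what an optimistic locking mode does at ApplyUpdates time.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, name TEXT, salary INTEGER)")
conn.execute("INSERT INTO emp VALUES (1, 'Smith', 1000)")
conn.commit()

def optimistic_update(conn, emp_id, old_name, old_salary, new_salary):
    # Update only if the row still holds the values we originally read.
    # No lock is held while the user edits; the conflict check happens
    # at save time, in the WHERE clause of the UPDATE.
    cur = conn.execute(
        "UPDATE emp SET salary = ? WHERE id = ? AND name = ? AND salary = ?",
        (new_salary, emp_id, old_name, old_salary))
    conn.commit()
    return cur.rowcount == 1  # False means someone else changed the row first

# User A reads the row (name='Smith', salary=1000); then user B
# changes it behind A's back:
conn.execute("UPDATE emp SET salary = 1200 WHERE id = 1")
conn.commit()

print(optimistic_update(conn, 1, "Smith", 1000, 1100))  # False: conflict detected
```

The trade-off is exactly the one discussed below: no row is ever locked during editing, but a conflicting change is only discovered when the user tries to save.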

------------------
Marco Kalter
Allround Automations
 
Originally posted by mkalter:
The only "solution" is a more optimistic locking approach. Set the LockingMode to lmCheckImmediate...

In other words, don't hold the locks in the first place. We are understandably reluctant to take that approach. Some of these locked records are the door to as much as 20-30 minutes of data entry. We would be poor hosts indeed if, _after_ that much effort, we told the user he couldn't save any of it, because someone else changed it first without his knowledge!

This is particularly annoying because many users choose "Save" even when they are just looking, i.e., they haven't actually changed anything. This sort of misbehavior makes conflicts like the above several times more likely than they should be.

I think, for now, we will live with our Session budget. As it turns out, most of our installations have fewer than three users, so it's unlikely that they'll run into a problem. Should users decide to scale up, I'll have to look at the multi-threaded solution.
 
Very long transactions are usually considered bad practice; anything more than a few seconds is normally considered long.

One solution that I've used in the past is to write your own locking mechanism. This can then be as simple or complex as you like; for example, allow a user to 'check out' a record for updating using a status field or a check-out table.

One benefit of this is that you can write a method for a 'supervisor' to override locks, which can even be part of the application. In your example, a user going to lunch (or on leave!) leaving a record locked would be quite serious. If this were a database lock, you would need a DBA to kill the session.
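A check-out table of this kind might look like the following. This is a minimal sketch in Python with the in-memory sqlite3 module standing in for the central database; the table layout and the `check_out`/`check_in`/`supervisor_override` helpers are hypothetical names, not part of any library. The primary key on (table_name, row_id) is what makes the check-out atomic: a second INSERT for the same row fails.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE checkout (
    table_name TEXT,
    row_id     INTEGER,
    locked_by  TEXT,
    locked_at  REAL,
    PRIMARY KEY (table_name, row_id))""")

def check_out(conn, table, row_id, user):
    # The primary key makes this atomic: only one user can hold the row.
    try:
        conn.execute("INSERT INTO checkout VALUES (?, ?, ?, ?)",
                     (table, row_id, user, time.time()))
        conn.commit()
        return True
    except sqlite3.IntegrityError:
        return False  # already checked out by someone else

def check_in(conn, table, row_id, user):
    # A user can only release his own check-out.
    conn.execute(
        "DELETE FROM checkout WHERE table_name=? AND row_id=? AND locked_by=?",
        (table, row_id, user))
    conn.commit()

def supervisor_override(conn, table, row_id):
    # No locked_by filter: a supervisor can clear anyone's lock,
    # e.g. when a user goes to lunch with a record checked out.
    conn.execute("DELETE FROM checkout WHERE table_name=? AND row_id=?",
                 (table, row_id))
    conn.commit()

print(check_out(conn, "emp", 1, "alice"))  # True
print(check_out(conn, "emp", 1, "bob"))    # False: alice holds it
supervisor_override(conn, "emp", 1)
print(check_out(conn, "emp", 1, "bob"))    # True: lock was overridden
```

Because these "locks" are ordinary committed rows, they survive commits on other data sets and can be listed, aged out, or cleared from within the application itself.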

In more complex examples you can use a briefcase model where data is checked out of the central database and copied down to local tables. The user can manipulate the local data running various queries against the changes before checking it back in to the central database...
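The briefcase flow can be sketched in the same style. Again this is a hedged illustration using two in-memory sqlite3 databases as stand-ins for the central server and the local copy; the `orders` table is hypothetical.

```python
import sqlite3

# Central database with the master data.
central = sqlite3.connect(":memory:")
central.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, qty INTEGER)")
central.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10), (2, 20)])
central.commit()

# Local "briefcase" database on the user's machine.
local = sqlite3.connect(":memory:")
local.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, qty INTEGER)")

# Check out: copy the rows of interest down to the local database.
rows = central.execute("SELECT id, qty FROM orders WHERE id = 1").fetchall()
local.executemany("INSERT INTO orders VALUES (?, ?)", rows)

# Work offline: queries and edits run against the local copy only,
# holding no locks on the central database.
local.execute("UPDATE orders SET qty = 15 WHERE id = 1")

# Check in: push the local changes back in a single central transaction.
for row_id, qty in local.execute("SELECT id, qty FROM orders"):
    central.execute("UPDATE orders SET qty = ? WHERE id = ?", (qty, row_id))
central.commit()

print(central.execute("SELECT qty FROM orders WHERE id = 1").fetchone()[0])  # 15
```

In a real briefcase setup the check-in step would be combined with a check-out table or an optimistic conflict test so that concurrent central changes are not silently overwritten.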

[This message has been edited by jpickup (edited 20 March 2001).]
 