
Transactions?



Posted

Hi, I know I asked this back in the 3.x days and was told no, but is there anything in 4.x similar to transactions, so that in the rare case of the server having an issue in the middle of updating multiple related tables, the data wouldn't be left in a nightmare state? Is it a limitation of PHP or MySQL that it isn't done? If so, how does IPS handle important situations, such as ones involving payments in Commerce, etc.?

I have a situation where I store various totals in multiple rows and add them together to store a grand total in a different table. What I'm currently doing is selecting only one row from the "many" side of that 1:many relationship, and when the totals for that one row are updated, I update the grand total by taking the difference between the new individual number and the old one and adding that difference to the total.

Well, this is a prime example of a situation where, if the server goes down in the middle, the numbers would be ruined, and simply resubmitting the form wouldn't fix it because the "difference" could no longer be trusted as accurate. So if there is nothing similar to transactions, I'm going to have to select all the rows that need to be summed and sum them in my code with the new values instead of using the "difference". That way, if the server goes down in the middle, resubmitting the form will fix the totals just fine.
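
In other words, the fix would end up looking something like this (just a rough sketch with made-up table and column names):

        // Write the new value for the individual row first...
        \IPS\Db::i()->update( 'detail_table', [ 'points' => $newPoints ], [ 'id=?', $rowId ] );

        // ...then recompute the grand total from all of the detail rows instead of applying a
        // difference. If the request dies part-way through, re-running this block always lands
        // on the correct total again, so a resubmit is safe.
        $sum = \IPS\Db::i()->select( 'SUM(points)', 'detail_table', [ 'parent_id=?', $parentId ] )->first();

        \IPS\Db::i()->update( 'totals_table', [ 'total' => (int) $sum ], [ 'parent_id=?', $parentId ] );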

Posted

The customer isn't concerned about it and thinks I should leave it as it is rather than spend time on such a rare scenario. I'm still curious about this, though, because I typically like being prepared for worst-case scenarios.

Posted

I see MySQL does have transactions, commits, etc. (I had only used MySQL via IPB, so I'd never looked into this), but when searching through the IPB files I don't see any use of them. Is there some drawback, or maybe a complication because different sites use something other than MySQL? I assume just about all of them have something similar to transactions and commits, though, since they're commonly used elsewhere.

Posted

The DB class in IPS extends the mysqli class in PHP.

You'll need to make sure the table you want transactions to run on is InnoDB (afaik, MyISAM doesn't support transactions). Here is some example code from one of my apps where I use transactions:

        // \IPS\Db extends mysqli, so the usual mysqli transaction methods are available
        $db = \IPS\Db::i();
        $db->autocommit( false );      // nothing is written until we explicitly commit
        $db->begin_transaction();

        try {
            $names = [ ['name' => 'bar'], ['name' => 'foo'], ['name' => 'deez'], ['name' => 'froyo'] ];
            foreach ( $names as $name ) {
                $db->insert( 'some_table', $name );
            }

            // every insert succeeded, so make it all permanent
            $db->commit();
        }
        catch ( \Exception $e ) {
            // log the failure and undo everything done inside this transaction
            \IPS\Log::debug( $e );
            $db->rollback();
        }

So the DB class does support transactions, but it doesn't actually sound like you need them. I use them in this app because it imports up to 1 million rows at a time from a csv file, and since there are like 30 databases involved, an error partway through could cause a lot of issues with the relations. So if the importer hits an error, it logs it and then rolls back the changes, preserving the relations. It was either this or spend 3 days rebuilding the relation data :).

Posted
5 hours ago, CodingJungle said:


Thanks. As far as "needing" them goes, that depends on the definition. Since most of my apps are related to contests of some sort, stats matter, and many places could get messed up if an issue ever hit right in the middle of work being done. Some of my other situations aren't so easily fixed, although none would be that difficult to fix either, so if a customer ever had an issue I could come up with something to repair their data for them.

I would have thought that for something like Commerce, though, they'd need something like that, because what if a payment is received, then an issue happens and it never gets logged as received? Do you have any other ideas, since transactions being InnoDB-only would create headaches given that probably all the tables are MyISAM at present? Is it a problem for the data to switch a table over?

Like you said, though, I probably don't "need" to go through all of that for situations I could technically fix after the fact, but not every situation is as easily fixable as this example, either.

Posted

Just from a little reading, it appears that when you use begin_transaction it automatically disables autocommit, so you wouldn't have to use the autocommit line, would you? Do you even need to call rollback when it fails? Wouldn't it simply never commit if it errors out anyway? Although, then again, the fact that rollback even exists must mean it's needed sometimes...
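
So, if I'm reading the docs right, the shorter version would be something like this (just a sketch; I'm assuming begin_transaction suspends autocommit on its own for the duration of the transaction, and that an explicit rollback mainly releases any locks right away instead of waiting for the connection to close):

        $db = \IPS\Db::i();
        $db->begin_transaction();   // starting a transaction suspends autocommit until commit/rollback

        try {
            $db->insert( 'some_table', [ 'name' => 'bar' ] );
            $db->insert( 'some_table', [ 'name' => 'foo' ] );

            $db->commit();
        }
        catch ( \Exception $e ) {
            // Not strictly required for correctness (an uncommitted transaction is discarded when
            // the connection closes), but rolling back explicitly frees any locks immediately and
            // leaves the connection clean for whatever runs next in the same request.
            $db->rollback();
            throw $e;   // rethrow so the caller still sees the failure
        }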

I still doubt I will use them, but they look so nice and easy to do, judging by that code... Are there any drawbacks? Anything related to performance?

Off topic, but I think I'd be driven crazy if I used that array syntax. I'm so used to typing out array(). But I'm oddly tempted to try it and see if I start liking the feel of doing it that way. lol.

Posted

I have some real headaches due to not being able to do transactions (probably MyISAM tables, for one thing).

In one situation I need to select some rows, get the max of a column among them, then insert a row with that column set to max + 1. This is not a case where auto_increment would be appropriate.

In another case, I need to know the current auto_increment value, change it, and know the new value.

In each of these cases, the problem is: what if someone else is about to insert at the same time and the same number gets used twice?
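
The closest workaround I've come up with so far for the max + 1 case is to compute the max inside the INSERT itself, so the read and the write are a single statement (a sketch with made-up names, and assuming a raw query through the inherited mysqli layer is acceptable), though I'm not sure how bulletproof it is, hence the locking question below:

        /* Sketch only: 'entries', 'contest_id', 'position' and the variables are hypothetical.
           On MyISAM the whole statement runs under one table lock, so two concurrent requests
           can't both grab the same max; on InnoDB it at least shrinks the window dramatically. */
        $db        = \IPS\Db::i();
        $contestId = (int) $contestId;
        $memberId  = (int) $memberId;

        $db->query(
            "INSERT INTO {$db->prefix}entries ( contest_id, member_id, position )
             SELECT {$contestId}, {$memberId}, COALESCE( MAX( position ), 0 ) + 1
             FROM {$db->prefix}entries WHERE contest_id={$contestId}"
        );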

Does IPB already have something built in for locking a table? And even then... locking writes still doesn't 100% solve this, and if reading were disabled it would cause all kinds of other issues.

There doesn't seem to be any perfect way, and even though it's never happened before, I don't like the possibility of bad data being in there.

Posted

I guess I could do a sort of flood control, but globally, where only one member can go through this method at a time: it would add a "flood" row at the beginning, delete it at the end, and no other member could go through the method in between, as a kind of workaround lock. Better than real locking, really, because it doesn't prevent the table from being read or written in ways that don't affect this function's goal.

The only downside is the extra select, insert, and delete. But with those three involving a count, one row, and one row respectively, it isn't so bad...
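
Roughly what I have in mind, sketched out (the lock table and its columns are made-up names, and I'm assuming a UNIQUE index on the lock name so the "acquire" insert itself can't race; with that index in place the up-front count/select isn't even needed):

        $lockName = 'contest_totals';

        try {
            // lock_name has a UNIQUE index, so if another member is already inside this method,
            // their marker row makes this insert fail instead of both of us slipping through
            \IPS\Db::i()->insert( 'my_locks', [ 'lock_name' => $lockName, 'lock_time' => time() ] );
        }
        catch ( \IPS\Db\Exception $e ) {
            throw new \RuntimeException( 'Busy recalculating, please try again.' );
        }

        try {
            // ... the check + update work that must not be interleaved goes here ...
        }
        finally {
            // Always release the "lock", even if the work above throws
            \IPS\Db::i()->delete( 'my_locks', [ 'lock_name=?', $lockName ] );
        }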

Posted

If you are encountering these issues, you've probably engineered/planned your application wrong, I'm afraid.

For instance, there's generally no reason you ever need to get the current auto-increment value before inserting. Instead, you should use the return value of \IPS\Db::i()->insert(), which will be the newly inserted ID. That does not run into any issue where something may have been inserted between an initial select and the subsequent insert.
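
In other words, something along these lines (the table and column are just placeholders):

        // insert() returns the auto-increment ID of the row that was just created, so there is
        // no separate "read the counter first" step for another request to race against.
        $newId = \IPS\Db::i()->insert( 'my_table', [ 'name' => 'example' ] );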

Posted
7 hours ago, bfarber said:


No, you're misunderstanding the situation. There are several totally different situations I am talking about.

1. I do that when doing individual inserts and needing insert IDs. I'm talking about doing a bulk insert and needing the IDs for relational rows. You've got to have a way to identify the rows and select them back to get their IDs. Or, in one situation, I can deduce them from the insert ID, as mentioned in the other topic you responded to me in.

2. In a whole separate situation, I need to know the max of a column within a set of rows. It is not the auto-increment column; it's one that needs to act similar to auto-incrementing at the start, but only within x rows of the table.

3. The one about someone doing something in between would be like having clubs with a maximum number of members who can join. If your limit is 10 and 9 have joined, when someone else attempts to join you've of course got to check whether the limit has been met, and the check will pass. But before you update the count to show that the 10th member joined, another member could run the same check, still see 9, and an 11th could get in when the limit is 10.

When I was searching for solutions, I found others who had run into these same issues, and it wasn't bad engineering; there are simply very few solutions.

(Also, another in-between case exists because I am not talking about auto-incrementing: if you're setting a column to max + 1, it's a problem if two people try to do it at once and both get the same max value before it's been updated.)

If I just needed the insert ID from an auto-increment column, that would of course be beyond simple. But that wasn't what I was talking about.

Posted

So, more simply put... when adding a new row, I wasn't saying I need to know what the insert ID will be ahead of time and oops, someone else inserted, so now it won't be right. I was saying it's a permission thing: it checks a column (not an auto-increment one, mind you) to see if someone has permission, and between the time of the check and the update, someone else could run the check against the old number. Nothing to do with insert IDs at all.

The auto-increment comments were about an entirely different situation where I needed the insert IDs from bulk inserts and was trying to get them. And I didn't need them before the insert, either. That was yet a third situation. I'm confusing people because I was talking about several situations in the same area of code, but they are separate issues.

So the only remaining issue is the one where a second person could pass a check because they checked between another person's check and update.
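
The only clean way I can see to close that gap would be to fold the check into the update itself, something like this (a sketch; the table, columns and the limit of 10 come from the club example, and I'm assuming a raw query through the inherited mysqli layer so the affected-row count can be read back):

        /* Sketch only. Putting the limit check in the WHERE clause makes the check and the
           update one atomic statement, so two members can't both see "9 of 10" and slip through. */
        $db     = \IPS\Db::i();
        $clubId = (int) $clubId;

        $db->query( "UPDATE {$db->prefix}clubs SET member_count=member_count+1 WHERE id={$clubId} AND member_count<10" );

        if ( $db->affected_rows < 1 ) {
            // Nothing was updated, so the club was already full; this member lost the race
            throw new \DomainException( 'club_full' );
        }

        // The spot has been claimed, so the membership row can be added safely now
        \IPS\Db::i()->insert( 'club_members', [ 'club_id' => $clubId, 'member_id' => (int) $memberId ] );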

Posted

Someone gave me an idea for the auto-increment sort of situation. But I still have the situation similar to my "club example" above. In THAT situation, if the table isn't locked and I don't "lock" the method, the only other thing I can think of is to run the same select again at the end of the function to be sure nobody else snuck through. (Again, this one isn't related to auto-increment.)

Posted

I think you should focus your efforts and energy on more important bits than the extremely remote possibility that an insert snuck in within the microsecond time differences you are talking about here. The same theoretical issue could happen with Gallery, since you can limit how many images can be stored in an album, and in all my years working here, it never has.

 

Posted
5 hours ago, bfarber said:


Ok, thanks for the input. To my knowledge it hasn't ever happened over the years in my apps either, and this client said it has never happened in his years of heavily using this. And if it were to happen, I'd just manually fix it for him.

Posted

Man, the time we have to edit posts when in the members group on here is mighty low. lol.

Well, if I want to be extra safe here, I do know a couple of easy fixes for my situations.

1. The case of multiple people getting the same max: that is all about ordering, and when it comes time for the ordering to matter, I already have the rows in an array ordered by that column ascending. All I'd have to do to make duplicate order values harmless is loop through with an incrementing count starting at 1 and manually set their values (see the sketch at the end of this post). That would be so negligible time-wise that I probably should just do it. It solves the issue, given my usage.

(In fact, they don't even NEED the value until that later time anyway. I really don't even need to fool with the max and could just reorder later... but I may keep it as it is for a certain reason I have.)

2. Someone sneaking in when a limit was reached: it already lowers a count of openings, so I can later have it check whether the count has gone below zero. If so, they will be required to manually remove the final row (already a feature... rows can already be removed manually, or I could even have it happen automatically).

So if I do want to add safeguards, those are two very easy, simple, lightweight ways of handling it. These are really simple solutions; I might as well do them.
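
For completeness, the renumbering in fix 1 is about this much code (a sketch; $rows and the table/column names are placeholders):

        /* $rows is the array I already have, ordered ascending by the position column.
           Reassigning 1..n in that order makes any duplicate max + 1 values harmless, because
           the stored numbers are rewritten from the ordering that actually matters. */
        $position = 1;
        foreach ( $rows as $row ) {
            \IPS\Db::i()->update( 'entries', [ 'position' => $position ], [ 'id=?', $row['id'] ] );
            $position++;
        }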
