Search Results

Search found 10177 results on 408 pages for 'thumbs db'.


  • The CHOICE : Firebird or H2

    - by blow
    Hi, I have to choose a database to use in server mode for a Java desktop application. I think both are great databases for Java. In my opinion (I'm NOT well informed): H2 pros: it is Java-based; the developers say it is very, very fast; it is easy to install, configure and use from a Java application. H2 cons: it is a young project, and I have doubts about its reliability for commercial purposes. Firebird pros: it is a rock-solid, well-documented project; it should be fast and well optimized for large data; it has a Java driver. Firebird cons: it is not Java-based ... ? So, I can't choose between these two great databases. Can I have a suggestion? Thanks.
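    Since the question is specifically about H2's server mode from a Java desktop application, here is a minimal sketch of what that looks like; the port, database name and credentials are placeholders, not anything from the question:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import org.h2.tools.Server;

        public class H2ServerModeDemo {
            public static void main(String[] args) throws Exception {
                // Start an H2 TCP server in-process; it can also run as a standalone service.
                Server server = Server.createTcpServer("-tcpPort", "9092").start();
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:h2:tcp://localhost:9092/~/demo", "sa", "")) {
                    // Other JVMs connect with the same URL (add -tcpAllowOthers to accept remote machines).
                    conn.createStatement().execute(
                        "CREATE TABLE IF NOT EXISTS t(id INT PRIMARY KEY, name VARCHAR(50))");
                }
                server.stop();
            }
        }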

    Read the article

  • Deploying a Rails App to Multiple Servers using Capistrano - Best Practices

    - by Louise
    I have a Rails application that I need to deploy to 3 servers - machine1.com, machine2.com and machine3.com. I want to be able to deploy it to all machines at once and to each machine individually. Can someone help me out with a skeleton Capistrano config file / recipe? Should it all be in deploy.rb or should I break it out into machine1.rb, etc.? I thought I was on the right track getting Capistrano to take in command line arguments, but it choked when I tried to set the roles within the namespaces. I'd pass in 'hosts=1,2,3' as an argument and set the app/web/db roles to "machine#{host}.com" after splitting on the comma and going into an each do |host| {}... Anyway, other than creating 4 different deploy.rb files and renaming one before running cap deploy each time, I'm stumped. I'd like to be able to do the following: cap deploy:machine1:latest_version_from_svn cap deploy:all_machines:latest_version_from_svn Just don't know if it should all be in deploy.rb, split up with namespaces, or if it should be broken into multiple deploy*.rb files.

    Read the article

  • Recommendations for supporting both Oracle and MSSQL in the same ASP.NET app with NHibernate

    - by Hugo Zapata
    Our client wants to support both SQL Server and Oracle in the next project. Our experience comes from the .NET/SQL Server platform. We will hire an Oracle developer, but our concern is with the data access code. Will NHibernate make the DB engine transparent for us? I don't think so, but I would like to hear from developers who have faced similar situations. I know this question is a little vague, because I don't have Oracle experience, so I don't know what issues we will find.

    Read the article

  • MySQL - Calculating a value from two columns in each row, using the result in a WHERE or HAVING clause

    - by taber
    I have a MySQL table with columns id, flags, views, done - for example (1, 2, 20, 0), (2, 66, 100, 0), (3, 25, 40, 0), (4, 30, 60, 0), ... thousands of rows ... I want to update all of the rows whose flags / views ratio is at least 0.2. First, as a test, I want to try a SELECT to see how many rows would actually get updated. I tried: SELECT flags/views AS poncho FROM mytable HAVING poncho >= 0.2 But this only returns about 2 rows, and I know there should be a lot more than that. It seems like it's calculating the poncho value on all rows or something odd. What am I doing wrong? Thanks!
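    As a sketch of the counting step the question describes, the ratio can be computed directly in a WHERE clause, with a guard against views being zero; the connection URL, table name and the 0.2 threshold are taken as assumptions for illustration:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class RatioCount {
            public static void main(String[] args) throws Exception {
                // WHERE evaluates the ratio row by row; views > 0 guards against
                // division by zero (flags / 0 evaluates to NULL in a MySQL SELECT).
                String sql = "SELECT COUNT(*) FROM mytable "
                           + "WHERE views > 0 AND flags / views >= 0.2";
                try (Connection conn = DriverManager.getConnection(
                         "jdbc:mysql://localhost/test", "user", "pass");
                     Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery(sql)) {
                    if (rs.next()) {
                        System.out.println("rows that would be updated: " + rs.getLong(1));
                    }
                }
            }
        }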

    Read the article

  • Optimize SQL with Interbase

    - by Roland Bengtsson
    I was inspired by the good answers to my previous question about SQL. Now this SQL is run on a DB with Interbase 2009. It is about 21 GB in size. SELECT DistanceAsMeters, Bold_Id, Created, AddressFrom.CityName_CO as FromCity, AddressTo.CityName_CO as ToCity FROM AddrDistance LEFT JOIN Address AddressFrom ON AddrDistance.FromAddress = AddressFrom.Bold_Id LEFT JOIN Address AddressTo ON AddrDistance.ToAddress = AddressTo.Bold_Id Where DistanceAsMeters = 0 and PseudoDistanceAsCostKm = 0 and not AddrDistance.bold_id in (select bold_id from DistanceQueryTask) Order By Created Desc There are 840,000 rows in AddrDistance, 190,000 rows in Address and 4 in DistanceQueryTask. The question is: can this be done faster? I guess the subquery, select bold_id from DistanceQueryTask, is run many times. Note that I'm not interested in stored procedures, just plain SQL :)
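    One common rewrite of the NOT IN subquery is an anti-join (LEFT JOIN ... IS NULL), shown here as a plain SQL string held in a Java constant; whether it is actually faster on Interbase 2009 depends on its optimizer, so treat it as something to test rather than a guaranteed win:

        public final class AddrDistanceQueries {
            // Anti-join rewrite of the NOT IN subquery; dqt.bold_id IS NULL keeps only
            // the AddrDistance rows that have no matching DistanceQueryTask row.
            public static final String UNPROCESSED_ZERO_DISTANCE =
                "SELECT ad.DistanceAsMeters, ad.Bold_Id, ad.Created, " +
                "       AddressFrom.CityName_CO AS FromCity, AddressTo.CityName_CO AS ToCity " +
                "FROM AddrDistance ad " +
                "LEFT JOIN Address AddressFrom ON ad.FromAddress = AddressFrom.Bold_Id " +
                "LEFT JOIN Address AddressTo ON ad.ToAddress = AddressTo.Bold_Id " +
                "LEFT JOIN DistanceQueryTask dqt ON dqt.bold_id = ad.Bold_Id " +
                "WHERE ad.DistanceAsMeters = 0 " +
                "  AND ad.PseudoDistanceAsCostKm = 0 " +
                "  AND dqt.bold_id IS NULL " +
                "ORDER BY ad.Created DESC";
        }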

    Read the article

  • LINQ-to-SQL Query Timing Out

    - by kevinw
    I'm running this query in LINQ: var unalloc = db.slot_sp_getUnallocatedJobs("Repair", RadComboBox1.SelectedValue, 20); It runs when I first open the page, but when I go back to it and try to run the same query with a different value, "Con", being passed through, the LINQ to SQL designer.cs tells me that I've got a timeout error. Any ideas? Edit: This is what's in the designer: [Function(Name="dbo.slot_sp_getUnallocatedJobs")] public ISingleResult<slot_sp_getUnallocatedJobsResult> slot_sp_getUnallocatedJobs([Parameter(Name="JobType", DbType="VarChar(20)")] string jobType, [Parameter(Name="Contract", DbType="VarChar(10)")] string contract, [Parameter(Name="Num", DbType="Int")] System.Nullable<int> num) { IExecuteResult result = this.ExecuteMethodCall(this, ((MethodInfo)(MethodInfo.GetCurrentMethod())), jobType, contract, num); return ((ISingleResult<slot_sp_getUnallocatedJobsResult>)(result.ReturnValue)); } This is the error: SqlException was unhandled by user code: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

    Read the article

  • What is the RSA SecurID packet format?

    - by bmatthews68
    I am testing a client application that authenticates using RSA SecurID hardware tokens. The authentication is failing and I am not finding any useful information in the log files. I am using Authentication Manager 8.0 and the Java SDK. I have a traffic capture which I would like to analyze with Wireshark to and from port 5500 on the authentication agent. But I can't find the packet format searching the internet or on the the RSA SecurCare knowledge base. Can anybody direct me to the packet format? Here is an extract from the rsa_api_debug.log file which dumps the UDP payload of the request and the response: [2013-11-06 15:11:08,602] main - b.a():? - Sending 508 bytes to 192.168.10.121; contents: 5c 5 0 3 3 5 0 0 2 0 0 0 0 0 1 ea 71 ee 50 6e 45 83 95 8 39 4 72 e 55 cf cc 62 6d d5 a4 10 79 89 13 d5 23 6a c1 ab 33 8 c3 a1 91 92 93 4f 1e 4 8d 2a 22 2c d0 c3 7 fc 96 5f ba bf 0 80 60 60 9d 1d 9c b9 f3 58 4b 43 18 5f e0 6d 5e f5 f4 5d df bf 41 b9 9 ae 46 a0 a9 66 2d c7 6 f6 d7 66 f1 4 f8 ad 8a 9f 4d 7e e5 9c 45 67 16 15 33 70 f0 1 d5 c0 38 39 f5 fd 5e 15 4f e3 fe ea 70 fa 30 c9 e0 18 ab 64 a9 fe 2c 89 78 a2 96 b6 76 3e 2e a2 ae 2e e0 69 80 8d 51 9 56 80 f4 1a 73 9a 70 f3 e7 c1 49 49 c3 41 3 c6 ce 3e a8 68 71 3f 2 b2 9b 27 8e 63 ce 59 38 64 d1 75 b7 b7 1f 62 eb 4d 1d de c7 21 e0 67 85 b e6 c3 80 0 60 54 47 e ef 3 f9 33 7b 78 e2 3e db e4 8e 76 73 45 3 38 34 1e dd 43 3e 72 a7 37 72 5 34 8e f4 ba 9d 71 6c e 45 49 fa 92 a f6 b bf 5 b 4f dc bd 19 0 7e d2 ef 94 d 3b 78 17 37 d9 ae 19 3a 7e 46 7d ea e4 3a 8c e1 e5 9 50 a2 eb df f2 57 97 bc f2 c3 a7 6f 19 7f 2c 1a 3f 94 25 19 4b b2 37 ed ce 97 f ae f ec c9 f5 be f0 8f 72 1c 34 84 1b 11 25 dd 44 8b 99 75 a4 77 3d e1 1d 26 41 58 55 5f d5 27 82 c d3 2a f8 4 aa 8d 5e e4 79 0 49 43 59 27 5e 15 87 a f4 c4 57 b6 e1 f8 79 3b d3 20 69 5e d0 80 6a 6b 9f 43 79 84 94 d0 77 b6 fc f 3 22 ca b9 35 c0 e8 7b e9 25 26 7f c9 fb e4 a7 fc bb b7 75 ac 7b bc f4 bb 4f a8 80 9b 73 da 3 94 da 87 e7 94 4c 80 b3 f1 2e 5b d8 2 65 25 bb 92 f4 92 e3 de 8 ee 2 30 df 84 a4 69 a6 a1 d0 9c e7 8e f 8 71 4b d0 1c 14 ac 7c c6 e3 2a 2e 2a c2 32 bc 21 c4 2f 4d df 9a f3 10 3e e5 c5 7f ad e4 fb ae 99 bf 58 0 20 0 0 0 0 0 0 0 0 0 0 [2013-11-06 15:11:08,602] main - b.b():? - Enterring getResponse [2013-11-06 15:11:08,618] main - b.a():? - Enterring getTimeoutValue(AceRequest AceAuthV4Request[AbstractAceRequest[ hdr=AcePacketHeader[Type=92 Ver=5 AppID=3 Enc=ENCRYPT Hi-Proto=5 Opt=0 CirID=0] created=1383750668571 trailer=AcePackeTrailer[nonce=39e7a607b517c4dd crc=722833884]] user=bmatthews node-sec-req=0 wpcodes=null resp-mac=0 m-resp-mac=0 client=192.168.10.3 passcode==ZTmY|? sec-sgmt=AceSecondarySegments[ cnt=3] response=none]) [2013-11-06 15:11:08,618] main - b.a():? - acm base timeout: 5 [2013-11-06 15:11:08,618] main - b.b():? - Timeout is 5000 [2013-11-06 15:11:08,618] main - b.b():? - Current retries: 0 [2013-11-06 15:11:10,618] main - b.b():? 
- Received 508 bytes from 192.168.10.121; contents: 6c 5 0 3 3 6 0 0 0 0 0 1 4d 18 55 ca 18 df 84 49 70 ee 24 4a a5 c3 1c 4e 36 d8 51 ad c7 ef 49 89 6e 2e 23 b4 7e 49 73 4 15 d f4 d5 c0 bf fc 72 5b be d1 62 be e0 de 23 56 bf 26 36 7f b f0 ba 42 61 9b 6f 4b 96 88 9c e9 86 df c6 82 e5 4c 36 ee dc 1e d8 a1 0 71 65 89 dc ca ee 87 ae d6 60 c 86 1c e8 ef 9f d9 b9 4c ed 7 55 77 f3 fc 92 61 f9 32 70 6f 32 67 4d fc 17 4e 7b eb c3 c7 8c 64 3f d0 d0 c7 86 ad 4e 21 41 a2 80 dd 35 ba 31 51 e2 a0 ef df 82 52 d0 a8 43 cb 7c 51 c 85 4 c5 b2 ec 8f db e1 21 90 f5 d7 1b d7 14 ca c0 40 c5 41 4e 92 ee 3 ec 57 7 10 45 f3 54 d7 e4 e6 6e 79 89 9a 21 70 7a 3f 20 ab af 68 34 21 b7 1b 25 e1 ab d 9f cd 25 58 5a 59 b1 b8 98 58 2f 79 aa 8a 69 b9 4c c1 7d 36 28 a3 23 f5 cc 2b ab 9e f a1 79 ab 90 fd 5f 76 9f d9 86 d1 fc 4c 7a 4 24 6d de 64 f1 53 22 b0 b7 91 9a 7c a2 67 2a 35 68 83 74 6a 21 ac eb f8 a2 29 53 21 2f 5a 42 d6 26 b8 f6 7f 79 96 5 3b c2 15 3a b d0 46 42 b7 74 4e 1f 6a ad f5 73 70 46 d3 f8 e a3 83 a3 15 29 6e 68 2 df 56 5c 88 8d 6c 2f ab 11 f1 5 73 58 ec 4 5f 80 e3 ca 56 ce 8 b9 73 7c 79 fc 3 ff f1 40 97 bb e3 fb 35 d1 8d ba 23 fc 2d 27 5b f7 be 15 de 72 30 b e d6 5c 98 e8 44 bd ed a4 3d 87 b8 9b 35 e9 64 80 9a 2a 3c a2 cf 3e 39 cb f6 a2 f4 46 c7 92 99 bc f7 4a de 7e 79 9d 9b d9 34 7f df 27 62 4f 5b ef 3a 4c 8d 2e 66 11 f7 8 c3 84 6e 57 ba 2a 76 59 58 78 41 18 66 76 fd 9d cb a2 14 49 e1 59 4a 6e f5 c3 94 ae 1a ba 51 fc 29 54 ba 6c 95 57 6b 20 87 cc b8 dc 5f 48 72 9c c0 2c dd 60 56 4e 4c 6c 1d 40 bd 4 a1 10 4e a4 b1 87 83 dd 1c f2 df 4c [2013-11-06 15:11:10,618] main - a.a():? - Response status is: 1 [2013-11-06 15:11:10,618] main - a.a():? - Authenticaton failed for bmatthews ! [2013-11-06 15:11:10,618] main - AuthSessionFactory.shutdown():? - RSA Authentication API shutdown invoked [2013-11-06 15:11:10,618] main - AuthSessionFactory.shutdown():? - RSA Authentication API shutdown successful

    Read the article

  • First ASM program

    - by Tal
    Hello, I'm trying to run my first 8086 assembly program with MASM on Windows Vista 64-bit. I put this program into my MASM editor: .model small .stack .data message db "Hello world, I'm learning Assembly !!!", "$" .code main proc mov ax,seg message mov ds,ax mov ah,09 lea dx,message int 21h mov ax,4c00h int 21h main endp end main and the MASM editor gives me this output, and I have no idea what's wrong with the program: Assembling: D:\masm32\First.asm D:\masm32\First.asm(9) : error A2004: symbol type conflict D:\masm32\First.asm(19) : warning A4023: with /coff switch, leading underscore required for start address : main _ Assembly Error Where is the problem with this code? This is my first ASM program, please remember. Thank you :)

    Read the article

  • How to include associative table information and still retain strong typing

    - by mwright
    I am using LINQ to SQL to create strongly typed objects in my project. Let's say I have an object that is represented by a database table, and this object has a "current state" that is kept in an associative table. I would like to make a single DB call where I pull back the two tables joined, but I am unsure how I should populate that information into some sort of object so that strong typing is preserved within my model and the view using the information can just consume it from the objects. I looked into creating a view model for this but it doesn't seem to quite fit. Am I thinking about this in the wrong way? What information can I include to help clarify my problem? Other details that may or may not be important: it's an MVC project.

    Read the article

  • Database Replication OOD Pattern

    - by MrOnigiri
    Greetings fellow overflowers, after reading on MSDN about correct strategies for performing database replication, and understanding their suggestion of Master-Subordinate Incremental Replication, I am left wondering what OOD pattern I should use for this. The main elements of this strategy are the Acquirer, the Manipulator and the Writer. The first fetches data from the source database and passes it on to the second, which might perform simple transformations on the data before handing it to the final element, the Writer, which writes the desired data to the destination database. I thought about using the Chain of Responsibility pattern, but the Acquirer, Manipulator and Writer don't share a common role among them, so it makes no sense. Should these elements be written as separate classes, or as methods inside my service? Of course I'll be creating a DB helper class as well, but that doesn't constitute a problem. Wondering what your opinions on this are! Thanks for your replies.
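    A hedged sketch of one way to shape this: give each role its own small interface and let a coordinator compose them, which is closer to Pipes and Filters than to Chain of Responsibility since the stages play different roles rather than handling the same request. All names here are illustrative, not from the MSDN article:

        import java.util.List;

        interface Acquirer<T>    { List<T> acquire(); }
        interface Manipulator<T> { List<T> manipulate(List<T> rows); }
        interface Writer<T>      { void write(List<T> rows); }

        final class ReplicationJob<T> {
            private final Acquirer<T> acquirer;
            private final Manipulator<T> manipulator;
            private final Writer<T> writer;

            ReplicationJob(Acquirer<T> a, Manipulator<T> m, Writer<T> w) {
                this.acquirer = a;
                this.manipulator = m;
                this.writer = w;
            }

            // One incremental replication pass: fetch, transform, write.
            void run() {
                writer.write(manipulator.manipulate(acquirer.acquire()));
            }
        }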

    Read the article

  • PHP: form action on same page, still show same until refresh

    - by Karem
    I have a little edit-profile page, index.php?mode=profile. Let's take the username field in the edit-profile form as an example. The username is already in the field. So I change it from "Peter" to "Tom" and press save. The action is ?mode=profile&edit=true. Now, when I have pressed save, it has updated the column in the DB from Peter to Tom. But the field keeps showing the value "Peter" until I press refresh (or F5); only then does "Tom" appear. It's as if nothing had been updated in the database, although it did update - it still shows Peter until the next refresh, like it's caching, but it shouldn't cache anything. Any help on this? Is it because the form action is on the same "page" / file? What can I do?

    Read the article

  • Server authorization with MD5 and SQL.

    - by Charles
    I currently have a SQL database of passwords stored as MD5 hashes. The server needs to generate a unique key and send it to the client. The client will use the key as a salt, hash it together with the password, and send the result back to the server. The only problem is that the SQL DB has the passwords in MD5 already. Therefore, for this to work, I would have to MD5 the password client side, then MD5 that again with the salt. Am I doing this wrong? It doesn't seem like a proper solution. Any information is appreciated.
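    A minimal sketch of the double-hash challenge/response the question describes, assuming the server sends a per-login nonce and the DB already holds md5(password); the names and values are illustrative:

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;

        public final class ChallengeResponse {
            private static String md5Hex(String input) throws Exception {
                MessageDigest md = MessageDigest.getInstance("MD5");
                byte[] digest = md.digest(input.getBytes(StandardCharsets.UTF_8));
                StringBuilder sb = new StringBuilder();
                for (byte b : digest) sb.append(String.format("%02x", b & 0xff));
                return sb.toString();
            }

            public static void main(String[] args) throws Exception {
                String password = "secret";
                String nonce = "a-random-per-login-nonce";   // the server-generated key

                // Client side: hash the password first, then hash again with the nonce.
                String clientProof = md5Hex(md5Hex(password) + nonce);

                // Server side: it only stores md5(password), so it repeats the second step.
                String storedHash = md5Hex(password);        // what the DB already holds
                String serverProof = md5Hex(storedHash + nonce);

                System.out.println(clientProof.equals(serverProof)); // true
            }
        }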

    Read the article

  • mysql partitioning

    - by Yang
    I just want to verify that database partitioning is implemented only at the database level, and that when we query a partitioned table we still write our normal query - nothing special in the query itself - and the optimization is performed automatically when the query is parsed. Is that correct? E.g. we have a table called 'address' with columns 'country_code' and 'city'. So if I want to get all the addresses in New York, US, normally I would do something like this: select * from address where country_code = 'US' and city = 'New York' If the table is now partitioned by 'country_code', I know that the query will only be executed on the partition which contains country_code = US. My question is: do I need to explicitly specify the partition to query in my SQL statement, or do I still use the previous statement and the DB server will optimize it automatically? Thanks in advance!

    Read the article

  • KO 2.3.4 - Accessing validation array from callbacks in models

    - by kenny99
    Hi, apologies if this is an oversight or sheer stupidity on my part, but I can't quite figure out how to access the validation array from a callback in a model (using ORM and KO 2.3.4). I want to be able to add specific error messages to the validation array if a callback returns false, e.g. in this register method: public function register(array & $array, $save = FALSE) { // Initialise the validation library and setup some rules $array = Validation::factory($array) ->pre_filter('trim') ->add_rules('email', 'required', 'valid::email', array($this, 'email_available')) ->add_rules('confirm_email', 'matches[email]') ->add_rules('password', 'required', 'length[5,42]') ->add_rules('confirm_password', 'matches[password]'); return ORM::validate($array, $save); } Callback: public function email_available($value) { return ! (bool) $this->db ->where('email', $value) ->count_records($this->table_name); } I can obviously access the current model from the callback, but I was wondering what the best way to add a custom error from the callback would be?

    Read the article

  • Doctrine2 Multiple DBs without Symfony2?

    - by ehime
    Hey guys, I know that using the DoctrineBundle in Symfony2 it is possible to instantiate multiple DB connections under Doctrine... $connectionFactory = $this->container->get('doctrine.dbal.connection_factory'); $connection = $connectionFactory->createConnection(array( 'driver' => 'pdo_mysql', 'user' => 'foo_user', 'password' => 'foo_pass', 'host' => 'foo_host', 'dbname' => 'foo_db', )); I'm curious whether this is also the case if you are using PURELY Doctrine, though? I've set up Doctrine via Composer like so... { "config": { "vendor-dir": "lib/" }, "require": { "doctrine/orm": "2.3.4", "doctrine/dbal": "2.3.4" } } and have been looking for my ConnectionFactory class but am not seeing it anywhere. Am I required to use Symfony2 to do this? Thanks!

    Read the article

  • Select from multiple tables, remove duplicates

    - by staze
    I have two tables in a SQLite DB, and both have the following fields: idnumber, firstname, middlename, lastname, email, login. One table has all of these populated; the other doesn't have the idnumber or middlename populated. I'd LIKE to be able to do something like: select idnumber, firstname, middlename, lastname, email, login from users1,users2 group by login; But I get an "ambiguous" error. Doing something like: select idnumber, firstname, middlename, lastname, email, login from users1 union select idnumber, firstname, middlename, lastname, email, login from users2; LOOKS like it works, but I see duplicates. My understanding is that union shouldn't allow duplicates, but maybe they're not real duplicates, since the second user table doesn't have all the fields populated (e.g. "20, bob, alan, smith, [email protected], bob" is not the same as "NULL, bob, NULL, smith, [email protected], bob"). Any ideas? What am I missing? All I want to do is dedupe based on "login". Thanks!
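    One way to express "dedupe on login" is to take every row from the fully populated table and add only the logins missing from it; this sketch assumes users1 is the complete table, which is how the question reads, and holds the SQL as a Java constant for illustration:

        public final class UserDedup {
            // Take every row from users1, then add only those users2 rows whose login
            // does not already appear in users1. Column lists are written out so the
            // two SELECTs stay union-compatible.
            public static final String DEDUPED_BY_LOGIN =
                "SELECT idnumber, firstname, middlename, lastname, email, login FROM users1 " +
                "UNION ALL " +
                "SELECT idnumber, firstname, middlename, lastname, email, login FROM users2 " +
                "WHERE login NOT IN (SELECT login FROM users1)";
        }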

    Read the article

  • pass form builder in remote_function in rails ?

    - by richard moss
    Hi all, I have a select box where, on change, I need to grab the value and, via remote_function, get some field names from the DB and then generate those fields further down the form, depending on which option from the select box is chosen. The problem is that the fields are inside an f.form_for, so they use the form builder f that the select box is in. So when I render the partial via AJAX in the controller, I get an error, as I don't have a reference to the local form builder f. Does anyone know how, or whether, I can get a reference to the form builder, or pass it in a remote_function call and then pass it into my locals in the partial? Thanks a lot, any help will be great, as I've been stuck on this for a long time! Cheers, Rick

    Read the article

  • only 1 record is being inserted

    - by bobobobo
    I'm running an insert statement using OLE DB and an ICommandWithParameters. In the ICommandText, I made sure to set: params.cParamSets = n ; Then cmdTxt->Execute( NULL, IID_NULL, &params, &rowsAffected, NULL ) ; where n > 1, but in my database, all I see is 1 insert happening. The docs say multiple parameter sets (cParamSets greater than one) can be specified only if DBPROP_MULTIPLEPARAMSETS is VARIANT_TRUE and the command does not return any rowsets. But I set DBPROP_MULTIPLEPARAMSETS in my DBPROPs, and it's an INSERT statement, so it should not return any rowsets.

    Read the article

  • How to SET ARITHABORT ON for connections in Linq To SQL

    - by Laurence
    By default, the SQL connection option ARITHABORT is OFF for OLE DB connections, which I assume LINQ to SQL is using. However, I need it to be ON. The reason is that my DB contains some indexed views, and any insert/update/delete operations against tables that are part of an indexed view fail if the connection does not have ARITHABORT ON. Even selects against the indexed view itself fail if the WITH(NOEXPAND) hint is used (which you have to use in SQL Server Standard Edition to get the performance benefit of the indexed view). Is there somewhere in the data context where I can specify that I want this option ON? Or somewhere in code where I can do it? I have managed a clumsy workaround, but I don't like it: I have to create a stored procedure for every select/insert/update/delete operation, and in this proc first run SET ARITHABORT ON, then exec another proc which contains the actual select/insert/update/delete. In other words, the first proc is just a wrapper for the second. It doesn't work to just put SET ARITHABORT ON above the select/insert/update/delete code.

    Read the article

  • arbitrary typed data in django model

    - by Dmitry Shevchenko
    I have a model, say, Item. I want to store an arbitrary number of attributes on it, like title, description, release_date. And I want them to be not just strings but to have a Python type, so string, boolean, datetime, etc. What are my options here? The EAV pattern with a separate name-value table won't work because of the single DB type across all values. JSONField can probably help, but it doesn't know about datetime, for example. I was also looking at PickleField; it fits perfectly, but I'm a bit concerned about performance.

    Read the article

  • Git access on Heroku deployment and others: connection refused

    - by Toby Hede
    I have suddenly run into an issue using git. I created a new app, went to push to Heroku, and now see: ssh: connect to host heroku.com port 22: Connection refused My other, previously working Heroku apps no longer work either; they receive the same error. Other Heroku commands work (create, info, db:push). I also see the error when accessing Git on my Unfuddle accounts. I can SSH to other services, so it doesn't look like it's my machine. Any ideas?

    Read the article

  • Alright to truncate database tables when also using Hibernate?

    - by Marcus
    Is it OK to truncate tables while at the same time using Hibernate to insert data? We parse a big XML file with many relationships into Hibernate POJOs and persist them to the DB. We are now planning on purging existing data at certain points in time by truncating the tables. Is this OK? It seems to work fine. We don't use Hibernate's second-level cache. One thing I did notice, which is fine, is that when inserting, we generate primary keys using Hibernate's @GeneratedValue, where Hibernate just uses a key value one greater than the highest value in the table - and even though we are truncating the tables, Hibernate remembers the prior value and uses prior value + 1 as opposed to starting over at 1. This is fine, just unexpected. Note that the reason we truncate, as opposed to calling delete() on the Hibernate POJOs, is speed. We have gazillions of rows of data, and truncate is just so much faster.
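    As a sketch of one way to keep the purge inside Hibernate itself rather than on a separate JDBC connection, the truncate can be issued as a native SQL statement through the session (Hibernate 3.x API); the table name here is illustrative:

        import org.hibernate.Session;
        import org.hibernate.Transaction;

        public final class PurgeJob {
            // Runs the truncate through the same SessionFactory the inserts use,
            // so it sees the same connection settings. Table name is illustrative.
            public static void purge(Session session) {
                Transaction tx = session.beginTransaction();
                session.createSQLQuery("TRUNCATE TABLE item_data").executeUpdate();
                tx.commit();
                // Clear the first-level cache so no managed POJOs still point at purged rows.
                session.clear();
            }
        }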

    Read the article

  • Handling Special char such as ^ÛY, ^ÛR in java

    - by RJ
    Hi, has anybody encountered special characters such as ^ÛY, ^ÛR? Q1. How do I FTP files containing these characters? The characters are not visible once I do an FTP transfer on AIX (bin or ascii mode), and hence I am unable to see whether my program, which replaces them, is working. Q2. My Java program doesn't seem to recognise or replace these characters if I search for them explicitly (^ÛY, ^ÛR) in the file; however, a replace using a regular expression seems to work (I could only see the difference in the length of the string). My program is executed on AIX. Any insights into why Java cannot recognise these? Q3. Does the Oracle database recognise these characters? An update is failing: my program indicates the string is shorter and free of these characters, but the DB complains "value too large for column", as the string to be updated does contain these characters and hence is longer. Thanks in advance, RJ
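    For Q2, a hedged sketch of stripping such characters by class rather than by literal value, assuming they are non-printable control bytes (ASCII control range or the C1 range) rather than printable text:

        import java.util.regex.Pattern;

        public final class ControlCharCleaner {
            // Matches ASCII control characters (0x00-0x1F, 0x7F) plus the C1 control
            // range (0x80-0x9F), which is where characters like these often land when
            // a single-byte encoding is decoded as text.
            private static final Pattern CONTROL_CHARS =
                Pattern.compile("[\\p{Cntrl}\\x{80}-\\x{9F}]");

            public static String strip(String input) {
                return CONTROL_CHARS.matcher(input).replaceAll("");
            }

            public static void main(String[] args) {
                String raw = "value\u009BYwith\u009BRcontrol bytes";
                System.out.println(strip(raw)); // prints: valueYwithRcontrol bytes
            }
        }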

    Read the article

  • versioning fails for onetomany collection holder

    - by Alexander Vasiljev
    Given the parent entity @Entity public class Expenditure implements Serializable { ... @OneToMany(mappedBy = "expenditure", cascade = CascadeType.ALL, orphanRemoval = true) @OrderBy() private List<ExpenditurePeriod> periods = new ArrayList<ExpenditurePeriod>(); @Version private Integer version = 0; ... } and the child entity @Entity public class ExpenditurePeriod implements Serializable { ... @ManyToOne @JoinColumn(name="expenditure_id", nullable = false) private Expenditure expenditure; ... } while updating both parent and child in one transaction, org.hibernate.StaleObjectStateException is thrown: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect). Indeed, Hibernate issues two SQL updates: one changing parent properties and another changing child properties. Do you know a way to get rid of the parent update caused by changing the child? The update results both in inefficiency and in false positives for the optimistic lock. Note that both child and parent save their state to the DB correctly. The Hibernate version is 3.5.1-Final.
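    One commonly suggested approach is to exclude the collection from optimistic locking, so that changes to periods no longer bump or check Expenditure.version on their own. @OptimisticLock is a Hibernate-specific annotation and its availability and behaviour can vary by version, so treat this as a sketch to verify against 3.5.1; ExpenditurePeriod is as defined in the question:

        import java.util.ArrayList;
        import java.util.List;
        import javax.persistence.*;
        import org.hibernate.annotations.OptimisticLock;

        @Entity
        public class Expenditure implements java.io.Serializable {
            @Id @GeneratedValue
            private Long id;

            @Version
            private Integer version = 0;

            // Excluded from optimistic locking: adding, removing or updating periods
            // no longer increments (or checks) Expenditure.version by itself.
            @OptimisticLock(excluded = true)
            @OneToMany(mappedBy = "expenditure", cascade = CascadeType.ALL, orphanRemoval = true)
            @OrderBy
            private List<ExpenditurePeriod> periods = new ArrayList<ExpenditurePeriod>();
        }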

    Read the article

  • mysql query performance help

    - by Stefano
    Hi, I have a quite large table storing the words contained in email messages. mysql> explain t_message_words;
    +----------------+---------+------+-----+---------+----------------+
    | Field          | Type    | Null | Key | Default | Extra          |
    +----------------+---------+------+-----+---------+----------------+
    | mwr_key        | int(11) | NO   | PRI | NULL    | auto_increment |
    | mwr_message_id | int(11) | NO   | MUL | NULL    |                |
    | mwr_word_id    | int(11) | NO   | MUL | NULL    |                |
    | mwr_count      | int(11) | NO   |     | 0       |                |
    +----------------+---------+------+-----+---------+----------------+
    The table contains about 100M rows. mwr_message_id is a FK to the messages table, mwr_word_id is a FK to the words table, and mwr_count is the number of occurrences of word mwr_word_id in message mwr_message_id. To calculate the most used words, I use the following query: SELECT SUM(mwr_count) AS word_count, mwr_word_id FROM t_message_words GROUP BY mwr_word_id ORDER BY word_count DESC LIMIT 100; It runs almost forever (more than half an hour on the test server), and show processlist reports the query sitting in the state "Copying to tmp table" after 1955 seconds. Is there anything I can do to "speed up" the query (apart from adding more RAM, more CPU, faster disks)? Thank you in advance, Stefano
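    One thing worth testing before buying hardware is a composite index that covers the aggregation, so MySQL can read mwr_word_id and mwr_count from the index instead of scanning the 100M-row table; the index name is arbitrary, and this is a sketch to benchmark rather than a guaranteed fix:

        public final class MessageWordIndexes {
            // Covering index for the word-count aggregation: GROUP BY mwr_word_id can
            // walk the index in order, and SUM(mwr_count) is satisfied from the index
            // too. The final ORDER BY word_count DESC still sorts, but only one row
            // per distinct word rather than 100M rows.
            public static final String WORD_COUNT_COVERING_INDEX =
                "CREATE INDEX idx_word_count ON t_message_words (mwr_word_id, mwr_count)";
        }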

    Read the article
