Search Results

Search found 27723 results on 1109 pages for 'sql puzzle'.


  • handling long running large transactions with perl dbi

    - by 1stdayonthejob
    I've got a large transaction comprising getting lots of data from database A, doing some manipulation on that data, then inserting the manipulated data into database B. I only have permission to select in database A, but I can create tables, insert, update, etc. in database B. The manipulation and insertion part is written in Perl and already in use for loading data into database B from other data sources, so all that's required is to get the necessary data from database A and use it to initialize the Perl classes.

    How can I go about doing this so that I can easily track back and pick up from where an error happened, if one occurs during the manipulation or insertion procedures (database disconnection, problems with class initialization because of invalid values, hard disk failure, etc.)? Doing the transaction in one go doesn't seem like a good option, because the amount of data from database A means it would take at least a day or two for the manipulation and insertion into database B. The data from database A can be grouped into around 1000 groups using unique keys, with each key containing thousands of rows.

    One way I thought of is to write a script that commits per group, which means I have to track which groups have already been inserted into database B. The only ways I can think of to track which groups have been processed are a log file or a table in database B. A second way that could work is to dump all the fields needed for loading the classes into a flat file, then read the file to initialize the classes and insert into database B. This also means some logging, but it should narrow any error down to the exact row in the flat file. The script will look something like this:

        use strict;
        use warnings;
        use DBI;

        # connect to database A
        my $dbh = DBI->connect('dbi:oracle:my_db', $user, $password,
            { RaiseError => 1, AutoCommit => 0 });

        # statement to get data based on the group's unique key
        my $sth = $dbh->prepare($my_sql);

        my @groups;    # I have a list of these already

        open my $fh, '>>', 'my_logfile' or die "can't open logfile $!";

        eval {
            foreach my $g (@groups) {
                # subroutine to check whether the group has already been
                # processed, either from the log file or a database table
                next if is_processed($g);

                $sth->execute($g);
                my $data = $sth->fetchall_arrayref;

                # manipulate $data, then use it to load Perl classes
                # for insertion into database B
                # ...

                print $fh "$g\n";    # log the group as processed
            }
        };
        if ($@) {
            $dbh->rollback;
            die "something wrong...rollback";
        }

    So if any errors do occur, I can just run this script again and it should skip the groups or rows that have already been processed and continue. Both methods are just variations on the same theme: go back to where I've been tracking progress (table or file), skip what's already been committed to database B, and process the remaining data. I'm sure there's a better way of doing this but am struggling to think of other solutions. Is there another way of handling large transactions between databases that require data manipulation between getting data out of one and inserting into the other? The process doesn't need to be all in Perl, as long as I can reuse the Perl classes for manipulating and inserting the data into the database.
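
    A restartable per-group commit usually hinges on a small progress table in database B. A minimal sketch, assuming database B is also Oracle; the table and column names here are hypothetical:

        -- Hypothetical progress table in database B
        CREATE TABLE load_progress (
            group_key    VARCHAR2(100) PRIMARY KEY,   -- unique key of a processed group
            processed_at DATE DEFAULT SYSDATE
        );

        -- record a group in the same transaction as its data, just before COMMIT
        INSERT INTO load_progress (group_key) VALUES (:group_key);

        -- is_processed($g) then reduces to an existence check
        SELECT COUNT(*) FROM load_progress WHERE group_key = :group_key;

    Because the marker row commits together with the group's data, the progress record can never disagree with what actually reached database B, unlike a separate log file.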

    Read the article

  • complex MySQL Order by not working

    - by Les Reynolds
    Here is the SELECT statement I'm using. The problem is with the sorting. As written below, it only sorts by t2.userdb_user_first_name; it doesn't matter whether I put that first or second in the ORDER BY. When I remove it, the query sorts fine by the displayorder field/value pair, so I know that part works, but somehow the combination of the two lets first_name override it. What I want is for the records to be sorted by displayorder first, and then by first_name within that.

        SELECT t1.userdb_id
        FROM default_en_userdbelements AS t1
        INNER JOIN default_en_userdb AS t2
                ON t1.userdb_id = t2.userdb_id
        WHERE t1.userdbelements_field_name = 'newproject'
          AND t1.userdbelements_field_value = 'no'
          AND t2.userdb_user_first_name != 'Default'
        ORDER BY (t1.userdbelements_field_name = 'displayorder'
                  AND t1.userdbelements_field_value),
                 t2.userdb_user_first_name;

    Edit: here is what I want to accomplish. I want to list the users (that are not new projects) from the userdb table, along with the details about each user stored in userdbelements. I want that list sorted first by userdbelements.displayorder, then by userdb.first_name. I hope that makes sense? Thanks for the really quick help!

    Edit: Sorry for disappearing; here is some sample data.

        userdbelements
        userdbelements_id | userdbelements_field_name | userdbelements_field_value | userdb_id
        647               | heat                      |                            | 1
        648               | displayorder              | 1 - Sponsored              | 1
        645               | condofees                 |                            | 1

        userdb
        userdb_id | userdb_user_name | userdb_emailaddress | userdb_user_first_name | userdb_user_last_name
        10        | harbourlights    | [email protected]    | Harbourlights          | 1237 Northshore Blvd, Burlington
        11        | harbourview      | [email protected]    | Harbourview            | 415 Locust Street, Burlington
        12        | thebalmoral      | [email protected]    | The Balmoral           | 2075 & 2085 Amherst Heights Drive, Burlington
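
    Note that the WHERE clause restricts t1 to the 'newproject' rows, so the ORDER BY expression never sees a 'displayorder' row and contributes nothing to the sort. One possible restructuring, sketched with the question's table names plus a hypothetical extra alias t3, joins the displayorder element separately so it can be sorted on directly:

        SELECT t1.userdb_id
        FROM default_en_userdbelements AS t1
        INNER JOIN default_en_userdb AS t2
                ON t1.userdb_id = t2.userdb_id
        LEFT JOIN default_en_userdbelements AS t3
               ON t3.userdb_id = t1.userdb_id
              AND t3.userdbelements_field_name = 'displayorder'
        WHERE t1.userdbelements_field_name = 'newproject'
          AND t1.userdbelements_field_value = 'no'
          AND t2.userdb_user_first_name != 'Default'
        ORDER BY t3.userdbelements_field_value,
                 t2.userdb_user_first_name;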

    Read the article

  • Newbie database index question

    - by RenderIn
    I have a table with multiple indexes, several of which repeat the same columns:

        Index 1 columns: X, B, C, D
        Index 2 columns: Y, B, C, D
        Index 3 columns: Z, B, C, D

    I'm not very knowledgeable about indexing in practice, so I'm wondering if somebody can explain why X, Y and Z were each paired with these same columns. B is an effective date, C is a semi-unique key ID for this table for a specific effective date B, and D is a sequence that identifies the priority of this record for the identifier C. Why not just create 6 indexes, one for each of X, Y, Z, B, C, D? I want to add an index on another column T, but in some contexts I'll only be querying on T alone, while in others I'll also be specifying the B, C and D columns... so should I create just one index like the ones above, or should I create one for T and one for (T, B, C, D)? I've not had as much luck as expected when googling for comprehensive coverage of indexing. Any resources where I can get a thorough explanation and lots of examples of B-tree indexing?
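
    For the T case, a single composite index with T as the leading column can usually serve both query shapes, because a B-tree index is usable for any leading prefix of its column list. A sketch with hypothetical table and index names:

        -- Queries filtering on T alone use the leading prefix (T);
        -- queries filtering on T, B, C and D use the full index.
        CREATE INDEX idx_t_b_c_d ON my_table (T, B, C, D);

    The existing trio follows the same logic: each of X, Y, Z is a different access path, and appending B, C, D lets each index both locate rows and order them by date, key and priority without extra lookups.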

    Read the article

  • Filter entities that match all pairs

    - by Jon
    I have an entity (let's say Person) with a set of arbitrary attributes with a known subset of values. I need to search for all of these entities that match all my filter conditions. For example, my table structures look like this:

        Person:
        id | name
        1  | John Doe
        2  | Jane Roe
        3  | John Smith

        Attribute:
        id | attr_name
        1  | Sex
        2  | Eye Color

        ValidValue:
        id | attr_id | value_name
        1  | 1       | Male
        2  | 1       | Female
        3  | 2       | Blue
        4  | 2       | Green
        5  | 2       | Brown

        PersonAttributes:
        id | person_id | attr_id | value_id
        1  | 1         | 1       | 1
        2  | 1         | 2       | 3
        3  | 2         | 1       | 2
        4  | 2         | 2       | 4
        5  | 3         | 1       | 1
        6  | 3         | 2       | 4

    In JPA, I have entities built for all of these tables. What I'd like to do is perform a search for all entities matching a given set of attribute-value pairs. For instance, I'd like to be able to find all males (John Doe and John Smith), all people with green eyes (Jane Roe or John Smith), or all females with green eyes (Jane Roe). I see that I can already take advantage of the fact that I only really need to match on value_id, since that's already unique and tied to the attr_id. But where can I go from there?
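
    This is the classic relational-division shape: group by person and require that the number of matching pairs equals the size of the filter set. A sketch of the underlying SQL (a JPQL query can follow the same pattern); since value_id already implies attr_id, the filter can be a plain IN list:

        -- females with green eyes: value_ids 2 (Female) and 4 (Green)
        SELECT p.id, p.name
        FROM Person p
        JOIN PersonAttributes pa ON pa.person_id = p.id
        WHERE pa.value_id IN (2, 4)
        GROUP BY p.id, p.name
        HAVING COUNT(DISTINCT pa.value_id) = 2;   -- number of pairs requested

    COUNT(DISTINCT ...) guards against a value being recorded twice for one person; with one row per (person, attribute) a plain COUNT(*) would do.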

    Read the article

  • [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified - works

    - by Matt
    Hello, I am developing a Java app (with the JDBC-ODBC bridge - forgive me - the only Paradox driver I have been able to obtain is the Microsoft ODBC driver) which works fine in Eclipse (and NetBeans), connecting to and obtaining data from an ancient Paradox 5.x database. As long as it is run from inside my IDE, it compiles and runs flawlessly. When I export it to a runnable jar, suddenly I get:

        [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified

    The jar is being run on the same box as my IDE, so I am confused about the cause. It is being run via console from a user account, as with the IDE. My connection string is:

        jdbc:odbc:Driver={Microsoft Paradox Driver (*.db )};DriverID=538; Fil=Paradox 5.X; DefaultDir=C:\paradox\database\location\

    obtained from connectionstrings.com - and, as mentioned, it works fine when run from the IDE. This string seems to 'magically' create its own connection, avoiding the setup of a DSN - I am unsure quite how, but it works. The only other thing I can think of that might be pertinent is that my PC runs a 64-bit OS (Windows Server 2008). Please help; any suggestions or comments will be greatly appreciated. Thanks, Matt

    Read the article

  • Validate Linq2Sql before SubmitChanges()

    - by Nick Gotch
    Can anyone tell me if/how you can validate the changes in a data context in Linq2Sql before calling SubmitChanges()? The situation is that I create a context, perform multiple operations, and add many inserts alongside other processing tasks, then roll back if the submit fails. What I'd prefer is to make some kind of "Validate()" call after certain tasks are done, so that I can handle problems before submitting the entire job.

    Read the article

  • sqlite3 JOIN, GROUP_CONCAT using distinct with custom separator

    - by aiwilliams
    Given a table of "events", where each event may be associated with zero or more "speakers" and zero or more "terms" through join tables, I need to produce a table of all events with columns in each row holding the list of speaker names and term names associated with that event. However, when I run my query I get duplication in the speaker_names and term_names values, since the join tables produce one row per association for each of the speakers and terms of an event:

        1|Soccer|Bobby|Ball
        2|Baseball|Bobby - Bobby - Bobby|Ball - Bat - Helmets
        3|Football|Bobby - Jane - Bobby - Jane|Ball - Ball - Helmets - Helmets

    The group_concat aggregate function can take 'distinct', which removes the duplication, but sadly it does not support that alongside a custom separator, which I really need. I am left with these results:

        1|Soccer|Bobby|Ball
        2|Baseball|Bobby|Ball,Bat,Helmets
        3|Football|Bobby,Jane|Ball,Helmets

    My question is this: is there a way I can form the query or change the data structures to get my desired results? Keep in mind this needs to be a sqlite3 query, and I cannot add custom C aggregate functions, as this is for an Android deployment. I have created a gist which makes it easy to test a possible solution: https://gist.github.com/4072840
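
    One workaround is to deduplicate in a subquery before aggregating, so group_concat never sees repeats and its two-argument form (with the custom separator) can be used. A sketch with hypothetical table and column names:

        -- distinct (event, speaker) pairs first, then concatenate
        SELECT event_id, group_concat(speaker_name, ' - ') AS speaker_names
        FROM (SELECT DISTINCT e.id AS event_id, s.name AS speaker_name
              FROM events e
              JOIN event_speakers es ON es.event_id = e.id
              JOIN speakers s        ON s.id = es.speaker_id)
        GROUP BY event_id;

    The terms column can be produced the same way and joined back on event_id.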

    Read the article

  • How to avoid Cartesian product in an INNER JOIN query?

    - by flhe
    I have 6 tables; let's call them a, b, c, d, e, f. I want to search all the columns (except the ID columns) of all tables for a certain word, say 'Joe'. What I did was INNER JOIN all the tables together and then use case-insensitive pattern matching (~*) to search the columns:

        INNER JOIN ... ON
        INNER JOIN ... ON ... etc.
        WHERE a.firstname ~* 'Joe'
           OR a.lastname ~* 'Joe'
           OR b.favorite_food ~* 'Joe'
           OR c.job ~* 'Joe' ... etc.

    The results are correct - I get all the columns I was looking for - but I also get a kind of Cartesian product: 2 or more rows with almost the same results. How can I avoid this? I want each line only once, since the results will appear in a web search.
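
    The duplicates appear because every matching row combination across the joined tables yields its own output row. One common fix is to search the secondary tables with EXISTS, so they can never multiply rows of the driving table; a sketch, with hypothetical foreign-key columns:

        SELECT a.id, a.firstname, a.lastname
        FROM a
        WHERE a.firstname ~* 'Joe'
           OR a.lastname  ~* 'Joe'
           OR EXISTS (SELECT 1 FROM b WHERE b.a_id = a.id AND b.favorite_food ~* 'Joe')
           OR EXISTS (SELECT 1 FROM c WHERE c.a_id = a.id AND c.job ~* 'Joe');

    If the joined form must be kept, SELECT DISTINCT over the displayed columns also collapses the repeats, but EXISTS states the intent (a row of a either matches somewhere or it doesn't) more directly.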

    Read the article

  • What is a columnar database?

    - by Raj More
    I have been working with warehousing for a while now, and I am intrigued by columnar databases and the speed they offer for data retrieval. I have a multi-part question: How do columnar databases work? How do they differ from row-oriented relational databases? Is there a trial version of a columnar database I can install to play around with? (I am on Windows 7.)

    Read the article

  • Persistence classes in Qt

    - by zarzych
    Hi, I'm porting a medium-sized CRUD application from .NET to Qt and I'm looking for a pattern for creating persistence classes. In .NET I usually created an abstract persistence class with basic methods (insert, update, delete, select), for example:

        public class DAOBase<T>
        {
            public T GetByPrimaryKey(object primaryKey) {...}
            public void DeleteByPrimaryKey(object primaryKey) {...}
            public List<T> GetByField(string fieldName, object value) {...}
            public void Insert(T dto) {...}
            public void Update(T dto) {...}
        }

    Then I subclassed it for specific tables/DTOs and added attributes for the DB table layout:

        [DBTable("note", "note_id", NpgsqlTypes.NpgsqlDbType.Integer)]
        [DbField("note_id", NpgsqlTypes.NpgsqlDbType.Integer, "NoteId")]
        [DbField("client_id", NpgsqlTypes.NpgsqlDbType.Integer, "ClientId")]
        [DbField("title", NpgsqlTypes.NpgsqlDbType.Text, "Title", "")]
        [DbField("body", NpgsqlTypes.NpgsqlDbType.Text, "Body", "")]
        [DbField("date_added", NpgsqlTypes.NpgsqlDbType.Date, "DateAdded")]
        class NoteDAO : DAOBase<NoteDTO>
        {
        }

    Thanks to the .NET reflection system I was able to achieve heavy code reuse and easy creation of new ORMs. The simplest way to do this kind of thing in Qt seems to be the model classes from the QtSql module. Unfortunately, in my case they provide too abstract an interface: I need at least transaction support and control over individual commits, which QSqlTableModel doesn't provide. Could you give me some hints about solving this problem with Qt, or point me to some reference materials?

    Update: Based on Harald's clues I've implemented a solution quite similar to the .NET classes above. Now I have two classes. UniversalDAO inherits QObject and deals with QObject DTOs using the metatype system:

        class UniversalDAO : public QObject
        {
            Q_OBJECT
        public:
            UniversalDAO(QSqlDatabase dataBase, QObject *parent = 0);
            virtual ~UniversalDAO();

            void insert(const QObject &dto);
            void update(const QObject &dto);
            void remove(const QObject &dto);
            void getByPrimaryKey(QObject &dto, const QVariant &key);
        };

    And a generic SpecializedDAO that casts data obtained from UniversalDAO to the appropriate type:

        template<class DTO>
        class SpecializedDAO
        {
        public:
            SpecializedDAO(UniversalDAO *universalDao);
            virtual ~SpecializedDAO() {}

            DTO defaultDto() const { return DTO(); }
            void insert(DTO dto) { dao->insert(dto); }
            void update(DTO dto) { dao->update(dto); }
            void remove(DTO dto) { dao->remove(dto); }
            DTO getByPrimaryKey(const QVariant &key);
        };

    Using the above, I declare a concrete DAO class as follows:

        class ClientDAO : public QObject, public SpecializedDAO<ClientDTO>
        {
            Q_OBJECT
        public:
            ClientDAO(UniversalDAO *dao, QObject *parent = 0)
                : QObject(parent), SpecializedDAO<ClientDTO>(dao) {}
        };

    From within ClientDAO I have to set some database information for UniversalDAO. That's where my implementation gets ugly, because I do it like this:

        QMap<QString, QString> fieldMapper;
        fieldMapper["client_id"] = "clientId";
        fieldMapper["name"] = "firstName";
        /* ...all column <-> field pairs in here... */
        dao->setFieldMapper(fieldMapper);
        dao->setTable("client");
        dao->setPrimaryKey("client_id");

    I do it in the constructor, so it's not visible at first glance to someone browsing the header. In the .NET version it was easy to spot and understand. Do you have some ideas how I could make it better?

    Read the article

  • ER Diagram flaws

    - by spacker_lechuck
    I have the following ER diagram for a bank database: customers may have several accounts, accounts may be held jointly by several customers, each customer is associated with an account set, and accounts are members of one or more account sets. What design rules are violated? What modifications should be made, and why? So far, the flaws I'm fairly sure about are: 1) a redundant owner-address attribute in the AcctSets entity; 2) this ER does not allow accounts with multiple owners who have different addresses. My question is: how would I go about fixing these flaws, and/or other flaws that I may be missing from my analysis? Thanks!
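
    Both listed flaws stem from attaching owner data to the account set instead of to the customer. A sketch of one possible repair in DDL form (all names hypothetical): keep the address on the customer and express joint ownership as a plain many-to-many link, which removes the redundant AcctSets address by construction and lets co-owners have distinct addresses:

        CREATE TABLE Customer (
            cust_id INT PRIMARY KEY,
            name    VARCHAR(100),
            address VARCHAR(200)           -- one address per customer
        );
        CREATE TABLE Account (
            acct_no INT PRIMARY KEY,
            balance DECIMAL(12, 2)
        );
        CREATE TABLE Owns (                -- joint ownership: many-to-many
            cust_id INT REFERENCES Customer,
            acct_no INT REFERENCES Account,
            PRIMARY KEY (cust_id, acct_no)
        );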

    Read the article

  • Override delete behaviour in NHibernate

    - by David
    Hi all. In my application users cannot truly delete records. Rather, the record's Deleted field gets set to 1, which hides it from selects. I need to maintain this behaviour, and I'm looking into whether NHibernate is appropriate for my app. Can I override NHibernate's delete behaviour so that instead of issuing DELETE statements it issues UPDATEs, as described above? I would obviously also need to override its SELECT behaviour to include an 'AND Deleted = 0' clause. Or read from a view instead. I dunno. TIA for your advice. David
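
    The view idea mentioned at the end works independently of whatever the ORM supports: map reads to a filtered view and keep the soft-delete UPDATE as the write path. A sketch with hypothetical object names:

        -- reads see only live rows
        CREATE VIEW ActiveOrders AS
            SELECT * FROM Orders WHERE Deleted = 0;

        -- a 'delete' is just this update
        UPDATE Orders SET Deleted = 1 WHERE OrderId = @id;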

    Read the article

  • Websql to google maps markers

    - by Roy van Neden
    I am busy with my web application for a school project. It has two pages. The first page uploads the location (latitude and longitude), price, date and kind of fuel. It works, and I saved it with Web SQL (see screenshot). Now I want to get everything out of the web database and put it as markers on my Google Maps map; I already have my own location. But I don't know how to get everything from the database onto the map as markers. I'm using jQuery Mobile/HTML5/CSS/JavaScript only. Code to put it in an array or something else that will work:

        db.transaction(function (tx) {
            tx.executeSql(
                'SELECT brandstofsoort, literprijs, datum, latitude, longitude FROM brandstofstatus',
                [],
                function (tx, results) {
                    var locations = [];
                    for (var i = 0; i < results.rows.length; i++) {
                        var row = results.rows.item(i);
                        locations.push([row.brandstofsoort, row.literprijs,
                                        row.datum, row.latitude, row.longitude]);
                    }
                    // each entry in locations now holds one row's values,
                    // ready to become a google.maps.Marker
                });
        });

    Thanks in advance!

    Read the article

  • SqlCeResultSet re-use

    - by pdiddy
    Can I reuse an existing result set to filter? Let's say I have a result set that contains all the data, and then I want to filter it. Can I just execute the SQL command against the same result set? I tried, and it didn't seem to work. Does someone know the reason? The only way I could make it work was to create a new result set and execute the command on that. So this way I'm always creating a new result set; could that cause any issues?

    Read the article

  • Django ORM QuerySet intersection by a field

    - by Sri Raghavan
    These are the (pseudo) models I've got:

        Blog: name, etc.
        Article: name, blog, creator, etc.
        User: (as per django.contrib.auth)

    So my problem is this: I've got two users, and I want to get all of the articles that the two users published on the same blog (no matter which blog). I can't simply filter the Article model by both users, because that would yield the set of articles created by both users, which is obviously not what I want. But can I filter somehow to get all of the articles where a field of the object matches between the two querysets?
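
    In SQL terms the query is a self-join (or EXISTS) on the blog column, and seeing it spelled out can help when translating into ORM calls. A sketch using Django's default table naming, with the app label and user IDs as placeholders:

        -- user 1's articles on any blog where user 2 has also published
        SELECT a1.*
        FROM app_article a1
        WHERE a1.creator_id = 1
          AND EXISTS (SELECT 1
                      FROM app_article a2
                      WHERE a2.blog_id = a1.blog_id
                        AND a2.creator_id = 2);

    Union the symmetric query (with 1 and 2 swapped) to get both users' articles on the shared blogs.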

    Read the article

  • sorting two tables (full join)

    - by Ruslan
    I'm joining tables like this:

        select *
        from tableA a
        full join tableB b on a.id = b.id

    But the output should be ordered as: first rows without null fields, then rows with null fields from tableB, then rows with null fields from tableA. Like:

        a.id | a.name | b.id | b.name
        5    | Peter  | 5    | Jones
        2    | Steven | 2    | Pareker
        6    | Paul   | null | null
        4    | Ivan   | null | null
        null | null   | 1    | Smith
        null | null   | 3    | Parker
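
    A CASE expression in the ORDER BY can rank the three null patterns directly; a sketch of one way to do it:

        select *
        from tableA a
        full join tableB b on a.id = b.id
        order by case
                   when a.id is not null and b.id is not null then 1  -- matched
                   when a.id is not null then 2                       -- missing in tableB
                   else 3                                             -- missing in tableA
                 end;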

    Read the article

  • Help with Linked Server Error

    - by Randy Minder
    In SSMS 2008, I am trying to execute a stored procedure in a database on another server. The call looks something like the following:

        EXEC [RemoteServer].Database.Schema.StoredProcedureName @param1, @param2

    The linked server is set up correctly and has both RPC and RPC OUT set to true. Security on the linked server is set to 'Be made using the login's current security context'. When I attempt to execute the stored procedure, I get the following error:

        Msg 18483, Level 14, State 1, Line 1
        Could not connect to server 'RemoteServer' because '' is not defined as a remote login at the server.
        Verify that you have specified the correct login name.

    I am connected to the local server using Windows Authentication. Does anyone know why I would be getting this error?
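
    With Windows Authentication and 'current security context', the remote hop can fail when the Windows credentials cannot be forwarded to the second server (the classic double-hop problem). One possible workaround is to add an explicit login mapping on the linked server definition; the server and login names below are placeholders:

        EXEC sp_addlinkedsrvlogin
             @rmtsrvname  = N'RemoteServer',
             @useself     = N'FALSE',
             @locallogin  = N'DOMAIN\MyUser',
             @rmtuser     = N'remote_sql_login',
             @rmtpassword = N'remote_password';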

    Read the article

  • ASP.NET MVC 2, re-use of SQL connection string

    - by cc0
    Hi. I'm very far from an expert on MVC or ASP.NET; I just want to make a few simple controllers in C# at the moment, so I have the following question. Right now the connection string used by each controller lives inside the controller itself, which is silly when multiple controllers use the same string. I'd like to be able to change the connection string in just one place and have it affect all controllers. Not knowing a lot about ASP.NET or the 'M' and 'V' parts of MVC, what would be the best (and simplest) way of accomplishing this? I'd appreciate any input on this; examples would be great too.

    Read the article

  • SQLServer Binary Data with ActiveRecord and JDBC

    - by John Duff
    I'm using activerecord-jdbc-adapter with ActiveRecord to access a SQL Server database from a Rails application running under JRuby, and am having trouble inserting binary data. The exception I am getting is below; note the binary column just contains a blurb from the fixtures that worked fine on MySQL:

        ActiveRecord::StatementInvalid: ActiveRecord::ActiveRecordError:
        Operand type clash: nvarchar is incompatible with image:
        INSERT INTO blobstorage_datachunks ([id], [datafile_id], [chunk_number], [data])
        VALUES (369397133, 663419003, 0, N'GIF89a@')

    When I created the tables, the migration declared the column as binary, and SQL Server used image for it. We're using Rails 2.3.5 and SQL Server Express 2008. What I'm looking for is a way to get binary data into SQL Server with ActiveRecord. Thanks in advance for the help.
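
    The error itself shows the adapter quoting the blob as a Unicode string literal (N'GIF89a@'), which SQL Server refuses to implicitly convert to image. At the SQL level the value has to arrive as binary, e.g. a hex literal or a bound binary parameter; a sketch (the hex bytes spell "GIF89a@" and are illustrative only):

        INSERT INTO blobstorage_datachunks ([id], [datafile_id], [chunk_number], [data])
        VALUES (369397133, 663419003, 0, 0x47494638396140);

    So the fix generally lies in how the adapter quotes binary columns rather than in the schema, though moving from image to varbinary(max) is also commonly advised on SQL Server 2008.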

    Read the article

  • Row as XML in SP

    - by user171523
    From an SP I need to get a row as XML, including all fields. Is there any way to get it like below?

        Declare @xmlMsg varchar(4000)

        select * into #tempTable from dbo.order for xml raw

        select @xmlMsg = 1 from #tempTable

        print '@xmlMsg' + @xmlMsg

    I would like to get the row as XML output.
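
    The temp-table detour isn't needed: on SQL Server 2005 or later, FOR XML can run as a scalar subquery and be assigned straight to the variable. A sketch (dbo.[order] is bracketed since ORDER is a reserved word):

        DECLARE @xmlMsg varchar(4000);

        SET @xmlMsg = CONVERT(varchar(4000),
                              (SELECT * FROM dbo.[order] FOR XML RAW));
        -- add a WHERE clause to pick a single row

        PRINT '@xmlMsg: ' + @xmlMsg;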

    Read the article
