Search Results

Search found 15456 results on 619 pages for 'global temporary tables'.

Page 326/619 | < Previous Page | 322 323 324 325 326 327 328 329 330 331 332 333  | Next Page >

  • Database Schema Versioning Strategies

    - by Jack Ryan
    I work on a project that uses a reasonably large database, the live version weighing in at somewhere around 60-80 GB. The live database is the only real definitive source of our schema, and because of its size, duplicating it is too slow to be done often. This means we have ended up developing our database schema in a pretty ad hoc way, using SQL Compare to migrate changes from dev DBs to the live system and only wiping our dev DBs every month or two. I am hoping to get some pointers on how to improve our database development workflow so that we have a little more control. Some things to think about:
    - Currently nobody is really in charge of the database schema; all developers can change it if they need to, though generally these decisions are discussed before they are made.
    - There are stored procedures, functions, and views in the database. These should probably be dumped to files so they can be reloaded on every build.
    - Schema changes should probably be checked in as scripts. We have started to do this recently. However, all our scripts must then be numbered (because there may be dependencies between them) and must be re-runnable (because our build script currently runs them all in order). This makes them hard to read, because they are full of conditionals that check whether tables or columns already exist, and it is a step that developers often forget (a sketch of such a guarded script follows below).
    - Getting a new database should be quick and easy. This is currently a big problem: it takes several hours to get a copy of last night's backup and restore it onto a dev machine.
    - Some mechanism needs to be in place to allow developers to update static data. We have tables that contain data that is never updated through the application, but does potentially need to be changed when we do a new release (often this drives dropdowns).
    - The whole thing needs to be runnable as part of a build script.
    Are there any tools that can be used to help do this? Eventually I would like to be at a point where a new DB can be built from scratch without copying any data from the live system. I don't mind writing some scripts to glue all the steps together, but each part should be easily editable so that we continue to use it rather than make changes directly on DBs.
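
    A minimal sketch of the kind of numbered, re-runnable migration script described above, assuming SQL Server (SQL Compare suggests that platform); the file, table, and column names are hypothetical:

        -- 0042_add_discount_column.sql (hypothetical numbered migration)
        -- Guarded so the build can re-run every script in order without failing.
        IF NOT EXISTS (SELECT 1
                       FROM sys.columns
                       WHERE object_id = OBJECT_ID('dbo.Orders')
                         AND name = 'DiscountPercent')
        BEGIN
            ALTER TABLE dbo.Orders
                ADD DiscountPercent DECIMAL(5,2) NOT NULL DEFAULT 0;
        END
        GO

    Recording each applied script number in a small version table is one way to let the build skip scripts that have already run, rather than relying on every script being idempotent.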

    Read the article

  • Hibernate's version values in Grails app

    - by xain
    I'm looking at a database dump file, and I see many records in various tables with their version number set to values other than 0 (even 94 in one case). I understand it has to do with Hibernate's locking strategy, but my concern is that today is Sunday and the site has almost no visitors. So: is this normal? Or is there a known Hibernate bug, or even some programming malpractice, producing this?
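
    For context, Hibernate's optimistic locking increments the version column on every successful update of a row, so the value is simply a count of how many times that row has been saved over its lifetime, not something tied to current traffic. A minimal sketch of the SQL a versioned update amounts to (table and column names are hypothetical):

        -- Each update bumps version by 1; if another transaction has already
        -- changed the row, the WHERE clause matches zero rows and Hibernate
        -- reports an optimistic locking failure instead.
        UPDATE book
        SET    title   = 'New title',
               version = version + 1
        WHERE  id = 42
          AND  version = 93;   -- row is now at version 94

    So a version of 94 only means the row has been updated 94 times since it was created; it says nothing about when those updates happened.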

    Read the article

  • Opening and snooping DLLs

    - by Russel
    Is there a way to open and snoop around inside DLLs? For example, to see what functions are inside, etc.? Is there some sort of header in them with tables of functions, information embedded by the maker of the DLL, and so on? Also, how do you find/view that information? Thanks! Russel

    Read the article

  • primary key datatype in sql server database

    - by ooo
    I see that after installing the ASP.NET membership tables, they use the "uniqueidentifier" data type for all of the primary key fields. I have been using the "int" data type and incrementing by one on inserts. Are there any particular benefits to using the uniqueidentifier data type compared to my current model of using int and auto-increment on new inserts?
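
    For comparison, a minimal SQL Server sketch of the two approaches; the Customers tables are hypothetical:

        -- Sequential integer surrogate key: narrow, cache- and index-friendly,
        -- but values are guessable and only unique within this table.
        CREATE TABLE CustomersInt (
            CustomerId INT IDENTITY(1,1) PRIMARY KEY,
            Name       NVARCHAR(100) NOT NULL
        );

        -- GUID key, as the membership tables use: globally unique and can be
        -- generated on any tier before the INSERT, but 16 bytes wide, and
        -- random NEWID() values fragment a clustered index.
        CREATE TABLE CustomersGuid (
            CustomerId UNIQUEIDENTIFIER PRIMARY KEY DEFAULT NEWID(),
            Name       NVARCHAR(100) NOT NULL
        );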

    Read the article

  • Best database (mysql) structure for this case:

    - by robert
    We have three types of data (tables):
    - Book (id, name, author, ...) - about 3 million rows
    - Category (id, name) - about 2,000 rows
    - Location (id, name) - about 10,000 rows
    A Book must have at least one Category (up to 3), and a Book must have exactly one Location. I need to relate this data so that these queries run faster:
    - Select Books where Category = 'cat_id' AND Location = 'loc_id'
    - Select Books where match(name) against ('name of book') AND Location = 'loc_id'
    Please, I need some help. Thanks
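
    One possible layout, sketched on the assumption that the one-to-three categories per book go into a junction table and that a storage engine with FULLTEXT support is available (MyISAM, or InnoDB on MySQL 5.6+); table names, index names, and example ids are illustrative:

        CREATE TABLE book (
            id          INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            name        VARCHAR(255) NOT NULL,
            author      VARCHAR(255),
            location_id INT UNSIGNED NOT NULL,   -- exactly one location per book
            KEY idx_location (location_id),
            FULLTEXT KEY ft_name (name)          -- for MATCH ... AGAINST
        );

        CREATE TABLE book_category (             -- 1 to 3 rows per book
            book_id     INT UNSIGNED NOT NULL,
            category_id INT UNSIGNED NOT NULL,
            PRIMARY KEY (book_id, category_id),
            KEY idx_category_book (category_id, book_id)
        );

        -- Books in a given category at a given location
        SELECT b.*
        FROM   book_category bc
        JOIN   book b ON b.id = bc.book_id
        WHERE  bc.category_id = 7
          AND  b.location_id = 3;

        -- Full-text search restricted to one location
        SELECT b.*
        FROM   book b
        WHERE  MATCH(b.name) AGAINST ('name of book')
          AND  b.location_id = 3;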

    Read the article

  • Where does Drupal store NODE data?

    - by RD
    This is a follow-up to my previous question: http://stackoverflow.com/questions/1284476/where-does-drupal-store-node-body-content Now, I tried adding values into node and node-revision, but the node data is still not showing. So obviously more data is stored somewhere else. Basically, I want to know: which tables are affected when you create a new node?

    Read the article

  • IF/ELSE makes stored procedure not return a result set

    - by Brendan Long
    I have a stored procedure that needs to return something from one of two databases:

        IF @x = 1
            SELECT @y FROM Table_A
        ELSE IF @x = 2
            SELECT @y FROM Table_B

    Either SELECT alone will return what I want, but adding the IF/ELSE makes it stop returning anything. I tried:

        IF @x = 1
            RETURN SELECT @y FROM Table_A
        ELSE IF @x = 2
            RETURN SELECT @y FROM Table_B

    But that causes a syntax error. The two options I see are both horrible:
    1. Do a UNION and make sure that only one side has any results:
       SELECT @y FROM Table_A WHERE @x = 1 UNION SELECT @y FROM Table_B WHERE @x = 2
    2. Create a temporary table to store one row in, and create and delete it every time I run this procedure (which is often).
    Neither solution is elegant, and I assume they would both be horrible for performance (unless MS SQL is smart enough not to search the tables when the WHERE clause is always false). Is there anything else I can do? Is option 1 not as bad as I think?
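
    For reference, RETURN in T-SQL only accepts an integer status code, which is why RETURN SELECT is a syntax error; a plain IF/ELSE around two SELECTs does hand a result set back to the caller. A minimal sketch with hypothetical table and column names (note it selects a column, not the @y variable):

        CREATE PROCEDURE dbo.GetThing
            @x INT
        AS
        BEGIN
            SET NOCOUNT ON;
            IF @x = 1
                SELECT SomeColumn FROM dbo.Table_A;   -- result set goes to the caller
            ELSE IF @x = 2
                SELECT SomeColumn FROM dbo.Table_B;
        END

    One hedged observation: if the intent is to return a column whose name is held in @y, then SELECT @y FROM Table_A only returns the variable's value once per row, and dynamic SQL would be needed instead.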

    Read the article

  • Outlook Mobile Service Configuration Issue

    - by cbeckner
    I am working on writing an OMS implementation. I have verified that the service is compliant with the service and schema definitions. When trying to set up the account in Outlook 2007 to test the service, it allows me to use an https address, but not an http address. According to the documentation (http://msdn.microsoft.com/en-us/library/bb277363.aspx), "The URL of the OMS Web service can be either http or https, but it is https if not otherwise specified". I have not been able to find any documentation that would explain why Outlook will not even let me try to do anything in the wizard if the service URL does not start with https. The error it returns when an http address is entered is: "The web service address is incorrect or corrupted. Check the web service address or contact your administrator". I have also tried creating a temporary cert on my local machine to test the service, but Outlook is rejecting the cert because it is not valid. Is there any way to test the service or run it over http?

    Read the article

  • elastix cdr stop working

    - by dreddko
    CDR was working before 19 March. Unfortunately I don't remember what changes I made to the configuration, but definitely no changes to the CDR config. Elastix 2.4.0, Asterisk 11.7.0, MySQL 5.0.95.

        elastix*CLI> cdr show status
        Call Detail Record (CDR) settings
        ----------------------------------
          Logging:  Disabled
          Mode:     Simple

    /etc/asterisk/cdr.conf:

        [general]
        enable=yes
        unanswered = yes

    /etc/asterisk/cdr_mysql.conf:

        [global]
        hostname = localhost
        dbname=asteriskcdrdb
        password = *MYPASSWORD*
        user = asteriskcdruser
        userfield=1
        ;port=3306
        ;sock=/tmp/mysql.sock
        loguniqueid=yes

    MySQL grants:

        mysql> SHOW GRANTS FOR 'asteriskcdruser'@'localhost';
        +------------------------------------------------------------------------------------------+
        | Grants for asteriskcdruser@localhost                                                      |
        +------------------------------------------------------------------------------------------+
        | GRANT USAGE ON *.* TO 'asteriskcdruser'@'localhost' IDENTIFIED BY PASSWORD 'HASHHERE'     |
        | GRANT ALL PRIVILEGES ON `asteriskcdrdb`.* TO 'asteriskcdruser'@'localhost'                |
        +------------------------------------------------------------------------------------------+
        2 rows in set (0.00 sec)

    Read the article

  • Create web application with Ajax from the beginning or add Ajax later?

    - by klew
    I'm working on my first Ruby on Rails application and it's quite big (at least for me ;) - the database has about 25 tables). I'm still learning Ruby and Rails, and I have never written anything in JavaScript or Ajax. Should I add Ajax to my application from the beginning? Or maybe it will be better to add it later? Or, in other words: is it (relatively) easy to add Ajax to an existing web application?

    Read the article

  • DataContext to DB

    - by JD
    Hi all, I have designed my DB using the ORM in VS 2008. What is the best way to export this to an SQL server so it will create the tables and relations on SQL Server? Thanks, JD

    Read the article

  • how to bind association in RoR

    - by ashok
    I have two tables, AppTemplate and AppTemplateMeta. The AppTemplate table has columns id, MetaID, name, etc. I have associated the two models like this:

        class AppTemplate < ActiveRecord::Base
          set_table_name 'AppTemplate'
          belongs_to :app_template_meta,
                     :class_name  => "AppTemplateMeta",
                     :foreign_key => 'MetaID'
        end

    If I fetch data using AppTemplate.all, I want the associated meta details as well. But currently it's not returning the associated meta details; it just returns the AppTemplate details. Can anyone help me with this?

    Read the article

  • MySQL search for user and their roles

    - by Jenkz
    I am rewriting the SQL which lets a user search for any other user on our site and also shows their roles. As an example, roles can be "Writer", "Editor", "Publisher". Each role links a User to a Publication. Users can take multiple roles within multiple publications. Example table setup:
    - "users": user_id, firstname, lastname
    - "publications": publication_id, name
    - "link_writers": user_id, publication_id
    - "link_editors": user_id, publication_id
    Current pseudo SQL:

        SELECT *
        FROM (
            (SELECT user_id FROM users WHERE firstname LIKE '%Jenkz%')
            UNION
            (SELECT user_id FROM users WHERE lastname LIKE '%Jenkz%')
        ) AS dt
        JOIN (ROLES STATEMENT) AS roles ON roles.user_id = dt.user_id

    At the moment my roles statement is:

        SELECT dt2.user_id, dt2.publication_id, dt2.role
        FROM (
            (SELECT 'writer' AS role, link_writers.user_id, link_writers.publication_id FROM link_writers)
            UNION
            (SELECT 'editor' AS role, link_editors.user_id, link_editors.publication_id FROM link_editors)
        ) AS dt2

    The reason for wrapping the roles statement in UNION clauses is that some roles are more complex and require a table join to find the publication_id and user_id. As an example, "publishers" might be linked across two tables:
    - "link_publishers": user_id, publisher_group_id
    - "link_publisher_groups": publisher_group_id, publication_id
    So in that instance, the query forming part of my UNION would be:

        SELECT 'publisher' AS role, link_publishers.user_id, link_publisher_groups.publication_id
        FROM link_publishers
        JOIN link_publisher_groups ON lpg.group_id = lp.group_id

    I'm pretty confident that my table setup is good (I was warned off the one-table-for-all system when researching the layout). My problem is that there are now 100,000 rows in the users table and up to 70,000 rows in each of the link tables. The initial lookup in the users table is fast, but the joining really slows things down. How can I join on only the relevant roles?

    -------------------------- EDIT ----------------------------------

    The EXPLAIN output was attached as a screenshot (not reproduced in this listing). The bottom bit in red is the "WHERE firstname LIKE '%Jenkz%'" part; the third row searches WHERE CONCAT(firstname, ' ', lastname) LIKE '%Jenkz%', hence the large row count, but I think this is unavoidable - unless there is a way to put an index across concatenated fields? The green bit at the top just shows the total rows scanned from the ROLES STATEMENT. You can then see each individual UNION clause (#6 - #12), which all show a large number of rows. Some of the indexes are normal, some are unique. It seems that MySQL isn't optimizing to use dt.user_id as a comparison for the internals of the UNION statements. Is there any way to force this behaviour? Please note that my real setup is not publications and writers but "webmasters", "players", "teams" etc.
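
    One hedged direction to try, on the assumption that MySQL cannot push the outer dt.user_id restriction down into a UNION derived table: repeat the user filter inside each arm of the roles UNION, so each link table is probed through its (user_id, publication_id) index only for matching users instead of being materialised in full. Names follow the simplified example above:

        SELECT u.user_id, u.firstname, u.lastname, r.role, r.publication_id
        FROM   users u
        JOIN (
                SELECT 'writer' AS role, lw.user_id, lw.publication_id
                FROM   link_writers lw
                JOIN   users m ON m.user_id = lw.user_id
                WHERE  m.firstname LIKE '%Jenkz%' OR m.lastname LIKE '%Jenkz%'
              UNION ALL
                SELECT 'editor' AS role, le.user_id, le.publication_id
                FROM   link_editors le
                JOIN   users m ON m.user_id = le.user_id
                WHERE  m.firstname LIKE '%Jenkz%' OR m.lastname LIKE '%Jenkz%'
             ) AS r ON r.user_id = u.user_id;

    Whether this actually wins depends on the MySQL version and the available indexes, so it is worth comparing the EXPLAIN output of both forms; UNION ALL is used because the arms cannot produce duplicates of each other.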

    Read the article

  • Can I optimize this at all?

    - by Moshe
    I'm working on an iOS app and I'm using the following code for one of my tables to return the number of rows in a particular section: return [[kSettings arrayForKey:@"views"] count]; Is there any other way to write that line of code so that it is more memory efficient? EDIT: kSettings = NSUserDefaults standardUserDefaults. Is there any way to rewrite my line of code so that whatever memory it occupies is released sooner than it is released now?

    Read the article

  • Is there a fast way to change all the collation to utf8_unicode?

    - by Mark
    I have realised, after making about 20 tables, that I need to use utf8_unicode as opposed to utf8_general. What is the fastest way to change it using phpMyAdmin? I had one idea: export the database as SQL, then run a find-and-replace on it in Notepad, and then reimport it... but it sounds like a bit of a headache. Is there a better way?
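
    A hedged alternative to the export/find-and-replace route, assuming MySQL and a hypothetical database name mydb: ALTER DATABASE only changes the default for tables created later, while ALTER TABLE ... CONVERT TO converts the existing columns and data, so both are usually needed:

        -- New default for tables created from now on
        ALTER DATABASE mydb CHARACTER SET utf8 COLLATE utf8_unicode_ci;

        -- Convert an existing table (repeat per table)
        ALTER TABLE mytable CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;

        -- Or generate one ALTER statement per table from information_schema
        SELECT CONCAT('ALTER TABLE `', table_name,
                      '` CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;')
        FROM   information_schema.tables
        WHERE  table_schema = 'mydb';

    The generated statements can be pasted into phpMyAdmin's SQL tab and run in one go.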

    Read the article

  • Why is Postgres doing a Hash in this query?

    - by Claudiu
    I have two tables: A and P. I want to get information out of all rows in A whose id is in a temporary table I created, tmp_ids. However, there is additional information about A in the P table, foo, and I want to get this info as well. I have the following query:

        SELECT A.H_id AS hid, A.id AS aid, P.foo, A.pos, A.size
        FROM   tmp_ids, P, A
        WHERE  tmp_ids.id = A.H_id
          AND  P.id = A.P_id

    I noticed it going slowly, and when I asked Postgres to explain, I saw that it combines tmp_ids with an index on A I created for H_id using a nested loop. However, it hashes all of P before doing a hash join with the result of the first merge. P is quite large and I think this is what's taking all the time. Why would it create a hash there? P.id is P's primary key, and A.P_id has an index of its own.
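
    One hedged thing to check: PostgreSQL keeps no statistics for a freshly filled temporary table, so the planner may overestimate how many A rows will match and decide that hashing all of P is cheaper than repeated index lookups. Analysing the temp table after populating it, and re-checking the plan, is inexpensive:

        ANALYZE tmp_ids;     -- give the planner row estimates for the temp table

        EXPLAIN ANALYZE
        SELECT A.H_id AS hid, A.id AS aid, P.foo, A.pos, A.size
        FROM   tmp_ids
        JOIN   A ON A.H_id = tmp_ids.id
        JOIN   P ON P.id   = A.P_id;

    If the estimates turn out to be accurate, the hash join may simply be the cheaper plan for the number of rows involved.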

    Read the article

  • Using nsIZipWriter or other to compress a string as a string?

    - by Daniel
    I need to be able to take a JavaScript string, compress it using any fast and available means, and get back a binary string/blob. Background: the extension I'm developing needs to send various large pieces of content to my server. It does this conveniently by dynamically creating a form, adding fields to the form and posting it. Some of these fields are just too big, bandwidth-wise, for repeated use. I'd like to be able to compress them before adding them, and then maybe base64 them if the characters cause a problem in the message. Any ideas? I could use nsIZipWriter with temporary files on disk, but that is quite ugly and probably sluggish.

    Read the article

  • Database related Questions

    - by alokpatil
    I am planning to build a railway reservation project. I am maintaining the following tables:
    - trainTable (trainId, trainName, trainFrom, trainTo, trainDate, trainNoOfBoogies) - PK (trainId)
    - Boogie (trainId, boogieId, boogieName, boogieNoOfSeats) - composite key (trainId, boogieId)
    - Seats (trainId, boogieId, seatId, seatStatus, seatType) - composite key (trainId, boogieId, seatId)
    - user (userId, name, ... personal details)
    - userBooking (userId, trainId, boogieId, seatId)
    Is this a good design? Please reply.
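
    A minimal DDL sketch of that layout, assuming MySQL/InnoDB; names follow the question and the foreign keys are illustrative rather than prescriptive:

        CREATE TABLE trainTable (
            trainId          INT PRIMARY KEY,
            trainName        VARCHAR(100),
            trainFrom        VARCHAR(100),
            trainTo          VARCHAR(100),
            trainDate        DATE,
            trainNoOfBoogies INT
        );

        CREATE TABLE Boogie (
            trainId         INT,
            boogieId        INT,
            boogieName      VARCHAR(50),
            boogieNoOfSeats INT,
            PRIMARY KEY (trainId, boogieId),
            FOREIGN KEY (trainId) REFERENCES trainTable (trainId)
        );

        CREATE TABLE Seats (
            trainId    INT,
            boogieId   INT,
            seatId     INT,
            seatStatus VARCHAR(20),
            seatType   VARCHAR(20),
            PRIMARY KEY (trainId, boogieId, seatId),
            FOREIGN KEY (trainId, boogieId) REFERENCES Boogie (trainId, boogieId)
        );

        CREATE TABLE user (
            userId INT PRIMARY KEY,
            name   VARCHAR(100)
            -- other personal details
        );

        CREATE TABLE userBooking (
            userId   INT,
            trainId  INT,
            boogieId INT,
            seatId   INT,
            PRIMARY KEY (userId, trainId, boogieId, seatId),
            FOREIGN KEY (userId) REFERENCES user (userId),
            FOREIGN KEY (trainId, boogieId, seatId)
                REFERENCES Seats (trainId, boogieId, seatId)
        );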

    Read the article

  • need an sql query

    - by CKeven
    I currently have two tables:
    1. car (plate_number, brand, cid)
    2. borrow (StartDate, endDate, brand, id)
    I want to write a query to get every available brand and the count of available cars for each brand.
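
    A hedged sketch, since the question does not say how the two tables are linked; this assumes borrow.id references car.cid and that a car is unavailable while the current date falls inside an active borrow period:

        SELECT c.brand,
               COUNT(*) AS available_cars
        FROM   car c
        WHERE  c.cid NOT IN (
                   SELECT b.id
                   FROM   borrow b
                   WHERE  CURDATE() BETWEEN b.StartDate AND b.endDate
               )
        GROUP BY c.brand;

    If borrow.id can be NULL, the subquery should filter those rows out (or be rewritten with NOT EXISTS), otherwise NOT IN returns no rows at all.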

    Read the article

  • Can I concatenate multiple MySQL rows into one field?

    - by Dean
    Using MySQL, I can do something like:

        SELECT hobbies FROM peoples_hobbies WHERE person_id = 5;

    and get:

        shopping
        fishing
        coding

    but instead I just want one row, one column:

        shopping, fishing, coding

    The reason is that I'm selecting multiple values from multiple tables, and after all the joins I've got a lot more rows than I'd like. I've looked for a function in the MySQL docs, and it doesn't look like the CONCAT or CONCAT_WS functions accept result sets, so does anyone here know how to do this?
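
    MySQL has an aggregate function for exactly this, GROUP_CONCAT; a minimal sketch against the table in the question:

        -- One row, one column: 'shopping, fishing, coding'
        SELECT GROUP_CONCAT(hobbies SEPARATOR ', ') AS hobby_list
        FROM   peoples_hobbies
        WHERE  person_id = 5;

        -- The same idea per person when other tables are joined in
        SELECT person_id,
               GROUP_CONCAT(hobbies SEPARATOR ', ') AS hobby_list
        FROM   peoples_hobbies
        GROUP  BY person_id;

    Note that the result is truncated at the group_concat_max_len session variable, so that limit may need raising for long lists.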

    Read the article

  • How to analyse Dalvik GC behaviour?

    - by HRJ
    I am developing an application on Android. It is a long-running application that continuously processes sensor data. While running the application I see a lot of GC messages in logcat, about one every second. This is most probably because of objects being created and immediately de-referenced in a loop. How do I find which objects are being created and released immediately? All the Java heap analysis tools that I have tried(*) are concerned with the counts and sizes of objects on the heap. While they are useful, I am more interested in finding the sites where temporary short-lived objects get created the most. (*) I tried jcat and Eclipse MAT. I couldn't get hat to work on the Android heap dumps; it complained of an unsupported dump file version.

    Read the article

  • ODBC: when is the best time to create my database?

    - by mawg
    I have a Windows program which generates PHP forms which will be filled in later. Those PHP forms will populate a database. It looks very much like MySQL, but I can't be certain, so let's call it ODBC. And, yes, it does have to be a Windows program. There will also be PHP forms which query the database - examine which tables and fields it contains - and then generate forms which can be used to search the database (e.g., it finds a table with fields like "employee_name", etc., and generates a form which lets you search based on employee name). Let's call those design time and run time. At design time, some manager or IT guy or similar gets to define the nature of the database, and at run time (1) a worker fills in the form daily and (2) management can extract reports. Here's my question: given that the database is defined at design time (and populated at run time), where and how is it best to do so?
    1. I could use an ODBC interface from the Windows program, but I am having difficulty finding something good to work with in Delphi. Things like ADO and Firebird tend to expect you to already have a database and allow you to manipulate it, but I can find no code example of how to create a database and some tables, so...
    2. I could use DOS commands from Delphi in my Windows program. I just tried and got a response to mysql --version, but am not sure whether MySQL etc. are more interactive. That is, can I use a script file, or a very long stacked command with semicolons and returns separating the statements? E.g. 'CREATE DATABASE db; CREATE TABLE t1;'
    3. Since the best way to work with databases seems to be PHP, perhaps my Windows program could spit out a PHP page which would, when run in a browser, create the database.
    I have tried to make this as uncomplicated as I can, but please feel free to ask questions. It may be that there are several valid ways, but there is probably one 'better' solution in terms of ease of implementation or maintenance. Better scratch option 3: what if the user later wants to come back and have the Windows program change the input form? It needs to update the database too.
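
    On option 2: the mysql command-line client can run a script file or a semicolon-separated batch non-interactively, which a Windows program can invoke without any interactive session. A minimal sketch with hypothetical names and credentials:

        -- schema.sql: run with  mysql -u root -p < schema.sql
        CREATE DATABASE IF NOT EXISTS timesheets;
        USE timesheets;

        CREATE TABLE IF NOT EXISTS employee (
            employee_id   INT AUTO_INCREMENT PRIMARY KEY,
            employee_name VARCHAR(100) NOT NULL
        );

        CREATE TABLE IF NOT EXISTS entry (
            entry_id    INT AUTO_INCREMENT PRIMARY KEY,
            employee_id INT NOT NULL,
            entry_date  DATE NOT NULL,
            FOREIGN KEY (employee_id) REFERENCES employee (employee_id)
        );

    The IF NOT EXISTS guards also make the script safe to re-run when the form definition changes later, which bears on the closing worry about updating the database after the fact.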

    Read the article
