Search Results

Search found 16059 results on 643 pages for 'global temp tables'.

  • Opening and snooping DLLs

    - by Russel
    Is there a way to open and snoop around inside DLLs? Like, see what functions are inside, etc.? Is there some sort of header in them with tables of functions, information embedded by the maker of the DLL, etc.? Also, how do you find/view that information? Thanks! Russel
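
    On Windows, the export table in a DLL's PE header lists the functions it exposes. Below is a minimal sketch of reading it with the third-party pefile package (an assumption; a tool such as dumpbin /exports shows the same information). The path example.dll is a placeholder.

        import pefile  # third-party PE parser: pip install pefile

        # Parse the PE headers of the DLL (path is a placeholder).
        pe = pefile.PE("example.dll")

        # Walk the export directory and print each exported symbol.
        for exp in pe.DIRECTORY_ENTRY_EXPORT.symbols:
            name = exp.name.decode() if exp.name else "<exported by ordinal only>"
            print(exp.ordinal, name)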

  • Dynamically Insert Variables into DB Table using PreparedStatement

    - by gran_profaci
    I was working with PreparedStatement today and noticed that it uses setString(), setTimestamp(), etc. to insert variables into the DB. I basically have 20 tables, each with at least 15 columns, and it would not be feasible for me to manually write out all the setters. Given that I have an ArrayList "Vals" which contains all the variables to be inserted, in String format (obtained by getString() using PreparedStatement itself), is there any way I can do an insert without expressly using the setters? That would save me a lot of time.
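
    One standard way around per-type setters is PreparedStatement.setObject(), which leaves type mapping to the JDBC driver. A minimal sketch, assuming the values in the list are ordered to match the table's columns (the table name my_table is a placeholder):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.util.List;

        public class DynamicInsert {
            // Inserts one row, binding every value with setObject()
            // instead of a per-type setter call.
            static void insertRow(Connection conn, List<String> vals) throws Exception {
                // Build "INSERT INTO my_table VALUES (?, ?, ..., ?)"
                // with one placeholder per value.
                StringBuilder sql = new StringBuilder("INSERT INTO my_table VALUES (");
                for (int i = 0; i < vals.size(); i++) {
                    sql.append(i == 0 ? "?" : ", ?");
                }
                sql.append(")");

                try (PreparedStatement ps = conn.prepareStatement(sql.toString())) {
                    // JDBC parameter indexes are 1-based.
                    for (int i = 0; i < vals.size(); i++) {
                        ps.setObject(i + 1, vals.get(i));
                    }
                    ps.executeUpdate();
                }
            }
        }

    Since everything arrives as a String, the driver may need to coerce values to the target column types; if that fails, the setObject(index, value, sqlType) overload accepts an explicit java.sql.Types constant.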

  • Can I concatenate multiple MySQL rows into one field?

    - by Dean
    Using MySQL, I can do something like SELECT hobbies FROM peoples_hobbies WHERE person_id = 5; and get three rows: shopping, fishing, coding. But instead I just want 1 row, 1 col: shopping, fishing, coding. The reason is that I'm selecting multiple values from multiple tables, and after all the joins I've got a lot more rows than I'd like. I've looked for a function in the MySQL docs, and it doesn't look like the CONCAT or CONCAT_WS functions accept result sets, so does anyone here know how to do this?
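
    MySQL's GROUP_CONCAT() aggregate does exactly this: it collapses the rows of a group into a single delimited string. A sketch against the table named in the question:

        SELECT GROUP_CONCAT(hobbies SEPARATOR ', ') AS hobbies
        FROM peoples_hobbies
        WHERE person_id = 5;

    One caveat: the result is truncated at the group_concat_max_len system variable (1024 bytes by default), which can be raised per session with SET SESSION group_concat_max_len = ... if the lists get long.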

  • need an sql query

    - by CKeven
    I currently have two tables: 1. car(plate_number, brand, cid) 2. borrow(StartDate, endDate, brand, id) I want to write a query that returns every brand with at least one available car, along with the count of available cars for that brand.
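
    The schema above does not say how borrow references car, so the following is only a sketch under two assumptions: borrow.id refers to car.cid, and a car is unavailable while today's date falls inside a borrow period.

        SELECT c.brand, COUNT(*) AS available_cars
        FROM car c
        WHERE c.cid NOT IN (
            SELECT b.id
            FROM borrow b
            WHERE CURDATE() BETWEEN b.StartDate AND b.endDate
        )
        GROUP BY c.brand;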

  • Where does Drupal store NODE data?

    - by RD
    This is a follow-up to my previous question: http://stackoverflow.com/questions/1284476/where-does-drupal-store-node-body-content Now, I tried adding values into node and node-revision, but the node data is still not showing, so obviously more data is stored somewhere else. Basically, I want to know: which tables are affected when you create a new node?

  • JavaScript function called automatically on page onload instead of on onclick

    - by user357134
    I am facing a very strange problem. I have a JavaScript function in my aspx page that I call on the onclick event, but on one particular machine it is always called automatically on body onload, in an infinite loop. I have already removed all caching and deleted the history and temp files. This seems very strange to me: it works fine on every machine except one. That machine has Windows 7 and IE version 8.0.7600.16385. Can someone please help me? Thanks in advance.
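
    The original markup is not shown, so this is only a guess at one common cause of "runs on load instead of on click": the handler is invoked while being attached, rather than passed as a reference.

        // Buggy: doWork() executes immediately during page load, and its
        // return value (usually undefined) becomes the click handler.
        document.getElementById("myButton").onclick = doWork();

        // Fixed: assign the function reference; it now runs only on click.
        document.getElementById("myButton").onclick = doWork;

    doWork and myButton are placeholder names for whatever the page actually wires up.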

  • Employee Clocking in & Out System database

    - by user164577
    What would be the best database design for employee clock-ins and clock-outs? Right now I have two tables. Employee base table: employee id, relevant information like name and address, and a clocked-in column. Employee clocked-in table: employee id, clock-in date, clock-in time, clocked-out time. Is this a good way to track clock-ins and clock-outs? I appreciate any help.
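
    For comparison, here is a sketch of one common alternative (all names are placeholders, MySQL syntax assumed): keep a single event table and derive "currently clocked in" from rows whose clock-out is still NULL, instead of maintaining a flag on the employee table that can drift out of sync.

        CREATE TABLE clock_events (
            event_id     INT AUTO_INCREMENT PRIMARY KEY,
            employee_id  INT NOT NULL,          -- FK to the employee table
            clock_in     DATETIME NOT NULL,     -- date and time in one column
            clock_out    DATETIME NULL          -- NULL = still clocked in
        );

        -- Who is clocked in right now?
        SELECT employee_id FROM clock_events WHERE clock_out IS NULL;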

  • Database related Questions

    - by alokpatil
    I am planning to build a railway reservation project. I am maintaining the following tables: trainTable (trainId, trainName, trainFrom, trainTo, trainDate, trainNoOfBoogies)... PK(trainId); Boogie (trainId, boogieId, boogieName, boogieNoOfseats)... composite key (trainId, boogieId); Seats (trainId, boogieId, seatId, seatStatus, seatType)... composite key (trainId, boogieId, seatId); user (userId, name... personal details); userBooking (userId, trainId, boogieId, seatId). Is this a good design? Please reply.

  • Queries stuck in "copying to tmp table"

    - by Parik
    I am running some sample tests against MySQL and finding that a bunch of queries get stuck in "copying to tmp table". They remain stuck in that state. They are usually aggregate queries, and I can kill them, but how can I find out what is causing them to get stuck? I am using MySQL 5.1.42 with the InnoDB plugin.
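
    A few standard diagnostics, sketched below; they are generic, not specific to this workload. EXPLAIN shows whether a statement materializes a temporary table, and the two sizing variables control when an in-memory temp table spills to disk (a spill shows up as the separate state "Copying to tmp table on disk", which is usually where things get slow).

        -- Which statements are stuck, and for how long?
        SHOW FULL PROCESSLIST;

        -- "Using temporary" in the Extra column confirms a temp table is built.
        EXPLAIN SELECT ...;   -- substitute one of the stuck aggregate queries

        -- An in-memory temp table spills to disk past the smaller of these.
        SHOW VARIABLES LIKE 'tmp_table_size';
        SHOW VARIABLES LIKE 'max_heap_table_size';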

  • FPS lags with new acer aspire 5755G

    - by Calvin
    The title is kind of self-explanatory, as my new laptop lags and has FPS drops. For example, my FPS in Starcraft 2 hovers around 20 and constantly drops to 1 on low settings, when I know it should run smoothly on high settings. I've updated my Nvidia driver and set the preferred global setting to the 'High-performance Nvidia processor'. Here are some screen shots. Screen Shot One - Screen Shot Two - Screen Shot Three I'm not sure how to fix this problem; any feedback would be nice!

  • SQL. Sorting by a field

    - by strakastroukas
    I have created a simple view over 3 tables in SQL Server. By right-clicking the view in Object Explorer and selecting Design, I modified my custom view: I just added an ascending sort on one field. The problem is that the change is not reflected in the output of the view. After saving the view and selecting Open View, the sort is not displayed in the output. So what is going on here?
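
    Assuming this is SQL Server (the Object Explorer designer suggests so), the behaviour is expected: an ORDER BY inside a view definition does not guarantee the order of rows returned when the view is queried. The reliable fix is to sort at query time, as in this sketch with placeholder names:

        SELECT *
        FROM MyView
        ORDER BY SomeField ASC;   -- MyView and SomeField are placeholders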

  • created/modified fields (default behaviour) not working in CakePHP 1.2

    - by Jpsworld
    I created an application, and all the DB tables have 'created' and 'modified' fields that are filled automatically by CakePHP's default functionality. The field types are created datetime NULL and modified datetime NULL. But it doesn't work: the data shows as 0000-00-00 00:00:00. The CakePHP version is 1.2; I set the datetime NULL option, and I also removed the temp/cache files for the models. I need the correct date and time saved for those two fields. Could there be any problem with the XAMPP version? (I use the latest version of XAMPP, 1.7.7, with PHP 5.3.8 and MySQL 5.5.16.) I hope this identifies my issue. Please help me with the correct solution. Thanks & Regards, Jpsworld.

  • MySQL search for user and their roles

    - by Jenkz
    I am re-writing the SQL which lets a user search for any other user on our site and also shows their roles. As an example, roles can be "Writer", "Editor", "Publisher". Each role links a user to a publication, and users can take multiple roles within multiple publications. Example table setup:

        "users":        user_id, firstname, lastname
        "publications": publication_id, name
        "link_writers": user_id, publication_id
        "link_editors": user_id, publication_id

    Current pseudo-SQL:

        SELECT * FROM (
            (SELECT user_id FROM users WHERE firstname LIKE '%Jenkz%')
            UNION
            (SELECT user_id FROM users WHERE lastname LIKE '%Jenkz%')
        ) AS dt
        JOIN (ROLES STATEMENT) AS roles ON roles.user_id = dt.user_id

    At the moment my roles statement is:

        SELECT dt2.user_id, dt2.publication_id, dt2.role FROM (
            (SELECT 'writer' AS role, link_writers.user_id, link_writers.publication_id FROM link_writers)
            UNION
            (SELECT 'editor' AS role, link_editors.user_id, link_editors.publication_id FROM link_editors)
        ) AS dt2

    The reason for wrapping the roles statement in UNION clauses is that some roles are more complex and require a table join to find the publication_id and user_id. As an example, "publishers" might be linked across two tables:

        "link_publishers":       user_id, publisher_group_id
        "link_publisher_groups": publisher_group_id, publication_id

    So in that instance, the query forming part of my UNION would be:

        SELECT 'publisher' AS role, lp.user_id, lpg.publication_id
        FROM link_publishers lp
        JOIN link_publisher_groups lpg ON lpg.publisher_group_id = lp.publisher_group_id

    I'm pretty confident that my table setup is good (I was warned off the one-table-for-all system when researching the layout). My problem is that there are now 100,000 rows in the users table and up to 70,000 rows in each of the link tables. The initial lookup in the users table is fast, but the joining really slows things down. How can I only join on the relevant roles?

    -------------------------- EDIT ----------------------------------

    EXPLAIN output above (a screenshot; open in a new window to see full resolution). The bottom bit in red is the WHERE firstname LIKE '%Jenkz%' lookup; the third row searches WHERE CONCAT(firstname, ' ', lastname) LIKE '%Jenkz%', hence the large row count, but I think this is unavoidable, unless there is a way to put an index across concatenated fields? The green bit at the top just shows the total rows scanned from the ROLES STATEMENT. You can then see each individual UNION clause (#6 - #12), which all show a large number of rows. Some of the indexes are normal, some are unique. It seems that MySQL isn't optimizing to use dt.user_id as a comparison for the internals of the UNION statements. Is there any way to force this behaviour? Please note that my real setup is not publications and writers but "webmasters", "players", "teams", etc.
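
    MySQL of this vintage does not push the outer join condition (roles.user_id = dt.user_id) down into a UNION derived table, so every branch typically scans its whole link table. One workaround, sketched here with the question's own names, is to repeat the user filter inside each branch so only the matched users' rows are read:

        SELECT 'writer' AS role, lw.user_id, lw.publication_id
        FROM users u
        JOIN link_writers lw ON lw.user_id = u.user_id
        WHERE u.firstname LIKE '%Jenkz%' OR u.lastname LIKE '%Jenkz%'
        UNION ALL
        SELECT 'editor' AS role, le.user_id, le.publication_id
        FROM users u
        JOIN link_editors le ON le.user_id = u.user_id
        WHERE u.firstname LIKE '%Jenkz%' OR u.lastname LIKE '%Jenkz%'

    With an index on each link table's user_id, each branch becomes a handful of index lookups. UNION ALL also skips the duplicate-elimination pass that plain UNION performs, which is safe here because the branches emit distinct role labels.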

  • Updating a module-level shared dictionary

    - by Vishal
    Hi, a module-level dictionary 'd' is accessed by different threads/requests in a Django web application. I need to update 'd' every minute with new data, and the process takes about 5 seconds. What would be the best solution, given that I want users to get either the old value or the new value of d and nothing in between? I can think of a solution where a temp dictionary is constructed with the new data and then assigned to 'd', but I am not sure how this works! Appreciate your ideas. Thanks
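
    The temp-dict idea does work in CPython, because rebinding a module-level name is atomic: a reader sees either the old dict object or the fully built new one, never a half-updated mix (mutating the shared dict in place would not have this property). A minimal sketch; build_new_data stands in for the real 5-second loading step:

        import threading
        import time

        d = {}  # module-level dict read by request threads

        def build_new_data():
            # Placeholder for the real ~5 s loading step.
            return {"some_key": time.time()}

        def refresh_loop():
            global d
            while True:
                new_d = build_new_data()  # build fully off to the side
                d = new_d                 # atomic rebind: old or new, never a mix
                time.sleep(60)

        def handle_request():
            # Take a local reference once, then read only through it, so a
            # concurrent swap cannot change the dict mid-request.
            snapshot = d
            return snapshot.get("some_key")  # placeholder key

        threading.Thread(target=refresh_loop, daemon=True).start()

    One caveat for Django in production: each worker process has its own copy of d, so the refresh must run in every process (or the data should live in a shared store instead).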

  • Form creating sites with output

    - by Alex
    Sites like faary.com, wufoo.com, and theformsite.com help you build forms, but the output tables that get created are password-protected as far as I know. Are there sites/scripts like the above which can make the output visible to all? Something like a "guest book" script/form where you can edit the fields and the output shows immediately? Thank you.

  • fastest way to upload an xls file into a database

    - by shmichael
    I have an xls file with ~60 sheets of data. I would like to move them into a database (Postgres) such that each sheet's data is stored in a different table. What is the fastest way of creating these tables? I don't care about naming or proper typing of the columns; they could all be strings for that matter. I don't want to run 60 different CSV uploads.
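
    One low-effort route is a short script rather than 60 uploads. A sketch assuming the pandas and SQLAlchemy packages (plus an Excel reader such as xlrd or openpyxl); the file name and connection string are placeholders:

        import pandas as pd
        from sqlalchemy import create_engine

        # Placeholder connection string for the target Postgres database.
        engine = create_engine("postgresql://user:password@localhost/mydb")

        # sheet_name=None reads *every* sheet into a {sheet_name: DataFrame} dict;
        # dtype=str keeps all columns as plain strings, per the question.
        sheets = pd.read_excel("data.xls", sheet_name=None, dtype=str)

        for name, frame in sheets.items():
            # One table per sheet, named after the sheet.
            frame.to_sql(name, engine, if_exists="replace", index=False)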

  • mySQL Inconsistent Performance

    - by Jon Hatfield
    Hi, I'm running a MySQL query that joins various tables of 500,000+ rows. Sometimes it takes a second; other times, around 15 seconds! This is on my local machine. I have experienced similarly varied times before on other intensive queries. Does anyone know why this is? Thanks

  • Does the number of table columns increase select statement execution time

    - by paokg4
    I have 2 tables with the same rows and the same data, but the first has more columns (fields) than the second. For example: I select the same 3 fields from both of them (SELECT a,b,c FROM mytable1 and then SELECT a,b,c FROM mytable2). I've tried running those queries on 100,000 records (for each table), but in the end I got the same execution time (0.0006 sec). Do you know whether the number of columns (and therefore the larger size of one table compared to the other) has anything to do with query execution time?

  • How to use LINQ for CRUD with a simple SQL table?

    - by Rob Ferno
    Every LINQ blog I found seemed to be around 2 years old. I understand the syntax but need more direction on creating the SQL mapping and context classes. I just need to use LINQ for the 2 SQL tables I have, nothing complicated. Do folks write the SQL mapping classes by hand for such cases, or is there a decent tool for this? Can someone point me in the right direction?

  • How can I improve my select query for storing large versioned data sets?

    - by Jason Francis
    At work, we build large multi-page web applications, consisting mostly of radio and check boxes. The primary purpose of each application is to gather data, but as users return to a page they have previously visited, we report back to them their previous responses. Worst-case scenario, we might have up to 900 distinct variables and around 1.5 million users.

    For several reasons, it makes sense to use an insert-only approach to storing the data (as opposed to update-in-place) so that we can capture historical data about repeated interactions with variables. The net result is that we might have several responses per user per variable. Our table to collect the responses looks something like this:

        CREATE TABLE [dbo].[results](
            [id] [bigint] IDENTITY(1,1) NOT NULL,
            [userid] [int] NULL,
            [variable] [varchar](8) NULL,
            [value] [tinyint] NULL,
            [submitted] [smalldatetime] NULL)

    where id serves as the primary key. Virtually every request results in a series of insert statements (one per variable submitted), and then we run a select to produce previous responses for the next page, something like this:

        SELECT t.id, t.variable, t.value
        FROM results t WITH (NOLOCK)
        WHERE t.userid = '2111846'
          AND (t.variable='internat' OR t.variable='veteran' OR t.variable='athlete')
          AND t.id IN (SELECT MAX(id) AS id
                       FROM results WITH (NOLOCK)
                       WHERE userid = '2111846'
                         AND (t.variable='internat' OR t.variable='veteran' OR t.variable='athlete')
                       GROUP BY variable)

    which, in this case, would return the most recent responses for the variables "internat", "veteran", and "athlete" for user 2111846.

    We have followed the advice of the database tuning tools in indexing the tables, and against our data, this is the best-performing version of the select query that we have been able to come up with. Even so, there seems to be significant performance degradation as the table approaches 1 million records (and we might have about 150x that). We have a fairly elegant solution in place for sharding the data across multiple tables, which has been working quite well, but I am open to any advice about how I might construct a better version of the select query. We use this structure frequently for storing lots of independent data points, and we like the benefits it provides.

    So the question is: how can I improve the performance of the select query? I assume the nested select statement is a bad idea, but I have yet to find an alternative that performs as well. Thanks in advance.

    NB: Since we emphasize creating over reading in this case, and since we never update in place, there doesn't seem to be any penalty (and some advantage) in using the NOLOCK directive here.
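
    One commonly suggested rewrite for "latest row per group" on SQL Server 2005 and later is ROW_NUMBER() backed by a covering index, so each (userid, variable) group resolves to a short index seek instead of a scan plus nested MAX(). This is a sketch, not a drop-in for the production schema; the index name is a placeholder:

        -- Covering index: seek on (userid, variable), newest id first,
        -- with value included so the query never touches the base table.
        CREATE INDEX ix_results_user_variable
            ON results (userid, variable, id DESC)
            INCLUDE (value);

        -- Latest response per variable, without the nested MAX() subquery.
        SELECT id, variable, value
        FROM (
            SELECT id, variable, value,
                   ROW_NUMBER() OVER (PARTITION BY variable
                                      ORDER BY id DESC) AS rn
            FROM results WITH (NOLOCK)
            WHERE userid = 2111846
              AND variable IN ('internat', 'veteran', 'athlete')
        ) AS latest
        WHERE rn = 1;

    Whether this beats the MAX() form depends on the data distribution, so comparing actual execution plans at realistic row counts is the only reliable test.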
