Search Results

Search found 32538 results on 1302 pages for 'restore database'.


  • Why is MySQL with InnoDB doing a table scan when the key exists and choosing to examine 70 times more rows?

    - by andysk
    Hello, I'm troubleshooting a query performance problem. Here's an expected query plan from explain:

        mysql> explain select * from table1 where tdcol between '2010-04-13:00:00' and '2010-04-14 03:16';
        +----+-------------+--------+-------+---------------+-------+---------+------+---------+-------------+
        | id | select_type | table  | type  | possible_keys | key   | key_len | ref  | rows    | Extra       |
        +----+-------------+--------+-------+---------------+-------+---------+------+---------+-------------+
        |  1 | SIMPLE      | table1 | range | tdcol         | tdcol | 8       | NULL | 5437848 | Using where |
        +----+-------------+--------+-------+---------------+-------+---------+------+---------+-------------+
        1 row in set (0.00 sec)

    That makes sense, since the index named tdcol (KEY tdcol (tdcol)) is used, and about 5M rows should be selected by this query. However, if I query for just one more minute of data, we get this query plan:

        mysql> explain select * from table1 where tdcol between '2010-04-13 00:00' and '2010-04-14 03:17';
        +----+-------------+--------+------+---------------+------+---------+------+-----------+-------------+
        | id | select_type | table  | type | possible_keys | key  | key_len | ref  | rows      | Extra       |
        +----+-------------+--------+------+---------------+------+---------+------+-----------+-------------+
        |  1 | SIMPLE      | table1 | ALL  | tdcol         | NULL | NULL    | NULL | 381601300 | Using where |
        +----+-------------+--------+------+---------------+------+---------+------+-----------+-------------+
        1 row in set (0.00 sec)

    The optimizer believes that the scan will be better, but it's over 70x more rows to examine, so I have a hard time believing that the table scan is better. Also, the 'USE KEY tdcol' syntax does not change the query plan. Thanks in advance for any help; I'm more than happy to provide more info and answer questions.
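
    For illustration, two things worth trying against a misestimating optimizer — a hedged sketch, not from the original post (table and index names as above):

        -- FORCE INDEX makes a table scan look prohibitively expensive to the
        -- optimizer, unlike the weaker USE INDEX/USE KEY hint:
        select * from table1 force index (tdcol)
        where tdcol between '2010-04-13 00:00' and '2010-04-14 03:17';

        -- Stale statistics often produce row estimates like 381M; refresh them:
        analyze table table1;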

  • How to convert EER to SQL tables?

    - by Khajavi
    I have no problem converting an ER model to SQL tables, but I don't know how to convert an EER model to SQL tables. As you know, EER has the "is a" specification and inheritance, but I don't know how relational databases can represent an inheritance specification.
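
    For illustration, a minimal sketch of one common mapping, "class table inheritance": the supertype gets its own table, and each subtype table shares (and references) the supertype's primary key. All names here are hypothetical:

        -- Supertype: attributes common to every person.
        CREATE TABLE person (
            person_id INT PRIMARY KEY,
            name      VARCHAR(100) NOT NULL
        );

        -- Subtype: "student IS A person"; the shared PK doubles as a FK.
        CREATE TABLE student (
            person_id   INT PRIMARY KEY REFERENCES person(person_id),
            enrolled_on DATE
        );

        -- Subtype: "employee IS A person".
        CREATE TABLE employee (
            person_id INT PRIMARY KEY REFERENCES person(person_id),
            salary    DECIMAL(10,2)
        );

    The usual alternatives are single-table inheritance (one wide table with a type discriminator and NULLable subtype columns) and one table per concrete subtype with the common columns repeated.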

  • How to record different authentication types (username / password vs token based) in audit log

    - by RM
    I have two types of users for my system: normal human users with a username/password, and delegation-authorized accounts through OAuth (i.e. using a token identifier). The information that is stored for each is quite different, and they are managed by different subsystems. They do, however, interact with the same tables/data within the system, so I need to maintain the audit trail regardless of whether a human user or a token-based user modified the data. My solution at the moment is to have a table called something like AuditableIdentity, and then have the two types inherit from that table (either in the single table, or as two separate tables with a 1-to-1 PK relationship to AuditableIdentity). All operations would use the common AuditableIdentity PK for the CreatedBy, ModifiedBy, etc. columns. There isn't any FK constraint on the audit columns, so any text can go in there, but I want an easy way to determine whether it was a human or the system that made the change, and joining to the one AuditableIdentity table seems like a clean way to do that. Is there a best practice for this scenario? Is this an appropriate way of approaching the problem, or would you not bother with the common table and just rely on joins (to the two separate, unrelated user/token tables) later to determine which user type matches which audit records?
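
    For illustration, a hedged sketch of the supertype approach described above, with a discriminator column so audits can tell human from system with a single join (all names hypothetical):

        CREATE TABLE AuditableIdentity (
            identity_id   INT PRIMARY KEY,
            identity_type CHAR(1) NOT NULL  -- 'H' = human, 'T' = token-based
        );

        CREATE TABLE HumanUser (
            identity_id INT PRIMARY KEY REFERENCES AuditableIdentity(identity_id),
            username    VARCHAR(50) NOT NULL
        );

        CREATE TABLE TokenAccount (
            identity_id INT PRIMARY KEY REFERENCES AuditableIdentity(identity_id),
            token_hash  VARCHAR(128) NOT NULL
        );

        -- Audited tables can then carry a real FK instead of free text:
        --   CreatedBy  INT REFERENCES AuditableIdentity(identity_id),
        --   ModifiedBy INT REFERENCES AuditableIdentity(identity_id)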

  • Left Join 3 Tables and Show True or False on Empty Cells

    - by Jason Shultz
    OK, this may be confusing, so I hope I explain it correctly. I have 4 tables: business, photos, video, and category. I want to display "featured" businesses on the home page (or elsewhere), and I want to show a Yes or No in the table row based upon whether or not there are photos or videos for that business. Example: Joe's Crab Shack has no videos or photos but is featured. So when his row is echoed out, it will show the business name and business owner, but there will be no data in the photo or video cells, and thus the video and photos columns will say No. If the opposite were true, they would say Yes. Here's my pastebin
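
    For illustration, a hedged sketch of the Yes/No query; every table and column name below is a guess from the description:

        SELECT b.business_name,
               b.business_owner,
               CASE WHEN COUNT(p.id) > 0 THEN 'Yes' ELSE 'No' END AS has_photos,
               CASE WHEN COUNT(v.id) > 0 THEN 'Yes' ELSE 'No' END AS has_videos
        FROM business b
        LEFT JOIN photos p ON p.business_id = b.id
        LEFT JOIN video  v ON v.business_id = b.id
        WHERE b.featured = 1
        GROUP BY b.id, b.business_name, b.business_owner;

    The two LEFT JOINs can multiply row counts against each other, but that only inflates the counts; the "greater than zero" test still yields the right Yes/No.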

  • What are the benefits of left outer join vs nested aggregate selects to find the newest rows in a table?

    - by RenderIn
    I'm doing:

        select * from mytable y
        where y.year = (select max(yi.year) from mytable yi
                        where yi.person = y.person)

    Is that better or worse, from a performance aspect, than:

        select y.* from mytable y
        left outer join mytable y2
          on y.year < y2.year and y.person = y2.person
        where y2.year is null

    The explain plan/anecdotal evidence is inconclusive, so I am wondering if in general one is better than the other.
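
    A third equivalent form worth benchmarking alongside the two above, joining to an aggregate derived table (same tables as the question):

        select y.*
        from mytable y
        join (select person, max(year) as max_year
              from mytable
              group by person) m
          on m.person = y.person
         and m.max_year = y.year;

    On many engines this "groupwise max" shape optimizes differently from both the correlated subquery and the anti-join, so the explain plans are worth comparing across all three.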

  • Rails show latest entry

    - by Danny McClelland
    Hi everyone, I have a working Rails application on version 2.3.5. I am using many-to-many model relations and have got almost everything working. What I would like to do is show the most recent kase job ref at the top of my new kase page. So, for example, if I created a new kase with a job ref of "001" and then went to create another new kase, it would show at the top "Your previous kase's reference was 001". I have the jobref field in the new kase form, so I am trying to work out what I need to do to output only the last jobref. If that makes sense! Thanks, Danny
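
    For illustration, the lookup boils down to a one-row query; a hedged SQL sketch, assuming the conventional Rails table name and the standard created_at timestamp column:

        -- Job reference of the most recently created kase:
        SELECT jobref FROM kases ORDER BY created_at DESC LIMIT 1;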

  • Filling data in FormView from different tables

    - by wacky_coder
    Hi, I am using a FormView in an online quiz page for displaying the questions and RadioButtons (for the answers): http://stackoverflow.com/questions/2438219/online-quiz-using-asp-dot-net Now I need to pick the questions according to a TestID and a particular Set of that test. The TestID and the set number are passed using Session variables. I have 3 TestIDs and 3 Sets per TestID, and am therefore using 9 tables to store the questions, the options and the correct answers. I need help on how to set up the FormView so that it extracts the questions from a particular table only. Do I need to use a stored procedure? If yes, how? PS: If I use only one table to store all the questions from each Set and for each TestID, I can do that, but I'd prefer using separate tables.
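
    For illustration, a hedged sketch of the single-table design mentioned in the PS, which replaces the nine near-identical tables with one parameterized query (all names hypothetical):

        CREATE TABLE Questions (
            QuestionID    INT PRIMARY KEY,
            TestID        INT NOT NULL,
            SetNumber     INT NOT NULL,
            QuestionText  VARCHAR(500) NOT NULL,
            OptionA       VARCHAR(200),
            OptionB       VARCHAR(200),
            OptionC       VARCHAR(200),
            OptionD       VARCHAR(200),
            CorrectOption CHAR(1)
        );

        -- The FormView's data source then needs only one query, fed by the
        -- two Session variables:
        SELECT QuestionID, QuestionText, OptionA, OptionB, OptionC, OptionD
        FROM Questions
        WHERE TestID = @TestID AND SetNumber = @SetNumber;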

  • Mapping an ER model to Rails

    - by Thiago
    Hi there, I want to map the following ER schema to Rails: I have an entity called "user" which has a self-relationship called "has friend", which has an attribute called "status". In code, I would like to run User.friends and have it return an array of users containing the users that I'm friends with. It shouldn't matter which side of the relationship I am on (i.e. whether I'm the friender or the friendee). Any thoughts?
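
    For illustration, the schema such a self-relationship typically maps onto, sketched in SQL with hypothetical names (in Rails the join model would conventionally be called something like Friendship):

        CREATE TABLE users (
            id   INT PRIMARY KEY,
            name VARCHAR(100)
        );

        -- Self-referencing many-to-many, carrying the "status" attribute:
        CREATE TABLE friendships (
            user_id   INT NOT NULL REFERENCES users(id),
            friend_id INT NOT NULL REFERENCES users(id),
            status    VARCHAR(20),
            PRIMARY KEY (user_id, friend_id)
        );

        -- Friends of user 42, regardless of which side initiated:
        SELECT u.*
        FROM users u
        JOIN friendships f
          ON (f.user_id = 42 AND u.id = f.friend_id)
          OR (f.friend_id = 42 AND u.id = f.user_id);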

  • Maintaining integrity of Core Data Entities with many incoming one-to-many relationships

    - by Henry Cooke
    Hi all. I have a Core Data store which contains a number of MediaItem entities that describe, well, media items. I also have NewsItems, which have one-to-many relationships to a number of MediaItems. So far so good. However, I also have PlayerItems and GalleryItems which also have one-to-many relationships to MediaItems. So MediaItems are shared across entities. Given that many entities may have one-to-many relationships, how can I set up reciprocal relationships from a MediaItem to all (1 or more) of the entities which have relationships to it and, furthermore, how can I implement rules to delete MediaItems when the number of those reciprocal relationships drops to 0?

  • Oracle .dmp file

    - by Grasper
    I have an Oracle .dmp file and no access to an Oracle install. Is there any way I can read the data, or open it in another program to see what data is in this file?

  • How can I round money values to the nearest $5.00 interval?

    - by Frank Developer
    I have an Informix-SQL based pawnshop app which calculates an estimate of how much money should be loaned to a customer, based on the weight and purity of gold. The minimum the pawnshop lends is $5.00. Pawnshop employees typically lend amounts that end with a 5 or 0, for example: 10, 15, 20, 100, 110, 125, etc. They do this so as not to run into shortage problems with $1.00 bills. So if, for example, my system calculates that the loan should be $12.49, it should round to $10; $12.50 should round to $15.00, $13.00 to $15.00, $17.50 to $20.00, and so on. The employee can always override the rounded amount if necessary. Is it possible to accomplish this within the instructions section of a PERFORM screen, or would I have to write a cfunc and call it from within PERFORM? Are there any C library functions which perform interval rounding of money values? On another note, I think the U.S. Government should discontinue the use of pennies so that businesses can round amounts to the nearest nickel; it would save so much time and weight in our pockets!
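
    For illustration, the arithmetic is divide, round half-up, multiply; a hedged Informix SQL sketch (the loans table and column are hypothetical, and the $5.00 minimum still has to be enforced separately):

        -- 12.49 -> 10, 12.50 -> 15, 13.00 -> 15, 17.50 -> 20:
        SELECT ROUND(loan_estimate / 5, 0) * 5
        FROM loans;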

  • Problem with Sybase autogenerated ids

    - by daedlus
    Hi, I have a table in which the PK column Id is of type bigint, and it is populated automatically in increasing order: 1, 2, 3, and so on. I notice that sometimes, all of a sudden, the generated ids jump to a very big value. For example, the ids run 1, 2, 3, 4, 5, 500000000000001, 500000000000002: there is a huge jump after 5, and ids 6 and 7 were not used at all. I do perform delete operations on this table, but I am absolutely sure that the missing ids were never used. Why does this occur, and how can I fix it? Many thanks for looking into this. My environment: Sybase ASE 15.0.3, Linux.
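
    For what it's worth, this pattern matches Sybase ASE's documented "identity gap": the server pre-allocates blocks of identity values in memory, and an abnormal shutdown burns the whole unused block. A hedged sketch of the usual mitigation (verify against your 15.0.3 docs; the table name is hypothetical):

        -- Cap the pre-allocated block so a crash wastes at most 100 values:
        sp_chgattribute 'mytable', 'identity_gap', 100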

  • What's the best way to store sort order in SQL?

    - by Duracell
    The guys at the top want the sort order to be customizable in our app, so I have a table that effectively defines the data type. What is the best way to store the sort order? If I just created a new column called 'Order' or something, then every time I updated the order of one row, I imagine I would have to update the order of every row to keep the sequence intact. Is there a better way to do it?
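
    For illustration, a common compromise is a sparse sort key: hand out values with gaps (10, 20, 30, ...) so that a single move usually touches one row, and renumber only when a gap closes. A hedged sketch with hypothetical names:

        -- Move item 7 between the rows holding sort keys 20 and 30
        -- without updating any other row:
        UPDATE items SET sort_order = 25 WHERE id = 7;

    Only when two neighbors end up adjacent (say 24 and 25) does a one-off renumbering pass over the list become necessary.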

  • Tools to work with App Engine data dumps

    - by Thilo
    Using the bulkloader.py utility you can download all data from your application's Datastore. It is not obvious how the data is stored, however. From the looks of it, you get a SQLite file with all data in binary format in a single table:

        sqlite> .tables
        bulkloader_database_signature  result
        sqlite> .schema result
        CREATE TABLE result (
            id BLOB primary key,
            value BLOB not null,
            sort_key BLOB);

    Are there any tools to work with this data?

  • ASP.NET configure data source is not returning anything?

    - by Greg McNulty
    I'm selecting table data for the current user:

        SELECT [ConfidenceLevel], [LoveLevel], [HappinessLevel]
        FROM [UserData]
        WHERE ([UserProfileID] = @UserProfileID)

    I have bound a control to the unique user ID, and it is getting the correct value. HTML:

        <asp:Label ID="userID" runat="server" Text="Labeluser"></asp:Label>

    C#:

        userID.Text = Membership.GetUser().ProviderUserKey.ToString();

    I then use it in the WHERE clause via the Configure Data Source window: unique ID = control, then ControlID = userID (it fills in .Text for me). It compiles and runs, but nothing shows up where the table should be. Any suggestions? Here is the code it has created:

        <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False"
            DataSourceID="SqlDataSource1">
            <Columns>
                <asp:BoundField DataField="ConfidenceLevel" HeaderText="ConfidenceLevel"
                    SortExpression="ConfidenceLevel" />
                <asp:BoundField DataField="LoveLevel" HeaderText="LoveLevel"
                    SortExpression="LoveLevel" />
                <asp:BoundField DataField="HappinessLevel" HeaderText="HappinessLevel"
                    SortExpression="HappinessLevel" />
            </Columns>
        </asp:GridView>
        <asp:SqlDataSource ID="SqlDataSource1" runat="server"
            ConnectionString="<%$ ConnectionStrings:ConnectionStringToDB %>"
            SelectCommand="SELECT [ConfidenceLevel], [LoveLevel], [HappinessLevel]
                           FROM [UserData] WHERE ([UserProfileID] = @UserProfileID)">
            <SelectParameters>
                <asp:ControlParameter ControlID="userID" Name="UserProfileID"
                    PropertyName="Text" Type="Object" />
            </SelectParameters>
        </asp:SqlDataSource>

  • IntegrityError: foreign key violation upon delete

    - by Lukasz Korzybski
    I have Order and Shipment models. Shipment has a foreign key to Order:

        class Order(...):
            ...

        class Shipment(...):
            order = m.ForeignKey('Order')
            ...

    Now in one of my views I want to delete an order object along with all related objects, so I invoke order.delete(). I have Django 1.0.4 and PostgreSQL 8.4, and I use the transaction middleware, so the whole request is enclosed in a single transaction. The problem is that upon order.delete() I get:

        ...
        File "/usr/local/lib/python2.6/dist-packages/django/db/backends/__init__.py", line 28, in _commit
            return self.connection.commit()
        IntegrityError: update or delete on table "main_order" violates foreign key
        constraint "main_shipment_order_id_fkey" on table "main_shipment"
        DETAIL: Key (id)=(45) is still referenced from table "main_shipment".

    I checked in connection.queries that the proper queries are executed in the proper order. First the shipment is deleted, then Django executes a delete on the order row:

        {'time': '0.000', 'sql': 'DELETE FROM "main_shipment" WHERE "id" IN (17)'},
        {'time': '0.000', 'sql': 'DELETE FROM "main_order" WHERE "id" IN (45)'}

    The foreign key has ON DELETE NO ACTION (the default) and is initially deferred. I don't know why I get the foreign key constraint violation. I also tried registering a pre_delete signal and manually deleting the shipment objects before the delete on the order is called, but it resulted in the same error. I can change the ON DELETE behaviour for this key in Postgres, but that would be just a hack, so I wonder if anyone has a better idea of what's going on here. There is also a small detail: my Order model inherits from a Cart model, so it actually doesn't have an id field but a cart_ptr_id, and after the DELETE on order is executed there is also a DELETE on cart, but that seems unrelated to the shipment-order problem, so I simplified it in the example.
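
    One diagnostic worth running directly in PostgreSQL, hedged since it only inspects rather than fixes: confirm what the constraint actually looks like, since a genuinely deferred constraint should not fire before commit. The catalog columns used here are standard pg_constraint fields:

        -- confdeltype 'a' = NO ACTION; condeferrable/condeferred show whether
        -- the constraint really is INITIALLY DEFERRED in this database:
        SELECT conname, confdeltype, condeferrable, condeferred
        FROM pg_constraint
        WHERE conname = 'main_shipment_order_id_fkey';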

  • Star-schema: Separate dimensions for clients and non-clients or shared dimension for attendants?

    - by celopes
    I'm new to modeling star schemas, fresh from reading The Data Warehouse Toolkit. I have a business process that has clients and non-clients calling into conference calls with some of our employees. My fact table, call it "Audience", will contain a measure of how long an attending person was connected to the call, and the cost per minute of that person's connection to the call. The grain is "individual connection to the conference call". Should I use my conformed Client dimension and create a separate non-client dimension for the callers that are not yet clients (omitting dimensions that are not part of this question)? Or would it be OK/better to have a non-conformed Attending dimension related to the conformed Client dimension? Or is there a better/standard mechanism to model business processes like this one?
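
    For illustration, a hedged sketch of one standard shape: a single conformed Attendee dimension with a client flag, so the fact table carries one FK and client attendees can still be tied back to the conformed Client dimension (all names hypothetical):

        CREATE TABLE DimAttendee (
            attendee_key INT PRIMARY KEY,
            is_client    CHAR(1) NOT NULL,   -- 'Y' or 'N'
            client_key   INT NULL,           -- to conformed DimClient when is_client = 'Y'
            name         VARCHAR(100) NOT NULL
        );

        CREATE TABLE FactAudience (
            call_key        INT NOT NULL,
            attendee_key    INT NOT NULL REFERENCES DimAttendee(attendee_key),
            connected_mins  DECIMAL(8,2) NOT NULL,
            cost_per_min    DECIMAL(8,4) NOT NULL
        );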

  • Dynamic Typed Table/Model in J2EE?

    - by Viele
    Hi, usually with J2EE, when we create a Model, we define the fields and the types of the fields through XML or annotations before compilation. Is there a way to change those at runtime? Or better, is it possible to create a new Model based on the user's input at runtime, such that the number of columns and the types of the fields are dynamic (determined at runtime)? Help is much appreciated. Thank you.

  • Are parameters really enough to prevent SQL injections?

    - by Rune Grimstad
    I've been preaching both to my colleagues and here on SO about the goodness of using parameters in SQL queries, especially in .NET applications. I've even gone so far as to promise that they give immunity against SQL injection attacks. But I'm starting to wonder if this really is true. Are there any known SQL injection attacks that will be successful against a parameterized query? Can you, for example, send a string that causes a buffer overflow on the server? There are of course other considerations to make to ensure that a web application is safe (like sanitizing user input and all that stuff), but right now I am thinking of SQL injections. I'm especially interested in attacks against MSSQL 2005 and 2008, since they are my primary databases, but all databases are interesting. Edit: To clarify what I mean by parameters and parameterized queries: by using parameters I mean using "variables" instead of building the SQL query in a string. So instead of doing this:

        SELECT * FROM Table WHERE Name = 'a name'

    We do this:

        SELECT * FROM Table WHERE Name = @Name

    and then set the value of the @Name parameter on the query/command object.
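
    For illustration, the reason parameters hold up: the statement is parsed first and the value is bound afterwards, so hostile input is compared as data rather than executed. A T-SQL sketch (the table name is a placeholder):

        -- The classic payload arrives as a literal string, not as SQL:
        EXEC sp_executesql
             N'SELECT * FROM [Table] WHERE Name = @Name',
             N'@Name nvarchar(100)',
             @Name = N''' OR 1=1 --';
        -- Matches only rows whose Name column literally is:  ' OR 1=1 --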

  • SQL query performance optimization (TimesTen)

    - by Sergey Mikhanov
    Hi community, I need some help with TimesTen DB query optimization. I made some measurements with a Java profiler and found the code section that takes most of the time (this code section executes the SQL query). What is strange is that this query becomes expensive only for some specific input data. Here's the example. We have two tables that we are querying: one represents the objects we want to fetch (T_PROFILEGROUP), the other represents a many-to-many link from some other table (T_PROFILECONTEXT_PROFILEGROUPS). We are not querying the linked table. These are the queries that I executed with the DB profiler running (they are the same except for the ID):

        Command> select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
        < 1169655247309537280 >
        < 1169655249792565248 >
        < 1464837997699399681 >
        3 rows found.
        Command> select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928;
        < 1169655247309537280 >
        1 row found.

    This is what I have in the profiler:

        12:14:31.147 1 SQL 2L 6C 10825P Preparing: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272
        12:14:31.147 2 SQL 4L 6C 10825P sbSqlCmdCompile ()(E): (Found already compiled version: refCount:01, bucket:47) cmdType:100, cmdNum:1146695.
        12:14:31.147 3 SQL 4L 6C 10825P Opening: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
        12:14:31.147 4 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
        12:14:31.148 5 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
        12:14:31.148 6 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
        12:14:31.228 7 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
        12:14:31.228 8 SQL 4L 6C 10825P Closing: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
        12:14:35.243 9 SQL 2L 6C 10825P Preparing: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928
        12:14:35.243 10 SQL 4L 6C 10825P sbSqlCmdCompile ()(E): (Found already compiled version: refCount:01, bucket:44) cmdType:100, cmdNum:1146697.
        12:14:35.243 11 SQL 4L 6C 10825P Opening: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928;
        12:14:35.243 12 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928;
        12:14:35.243 13 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928;
        12:14:35.243 14 SQL 4L 6C 10825P Closing: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928;

    It's clear that the first query took almost 100 ms, while the second was executed instantly. It's not about query precompilation (the first one is precompiled too, as the same queries happened earlier). We have DB indexes for all columns used here: T_PROFILEGROUP.M_ID, T_PROFILECONTEXT_PROFILEGROUPS.M_ID_OID and T_PROFILECONTEXT_PROFILEGROUPS.M_ID_EID. My questions are: Why does querying the same set of tables yield such different performance for different parameters? Which indexes are involved here? Is there any way to improve this simple query and/or the DB to make it faster? UPDATE: to give a feeling of the sizes:

        Command> select count(*) from T_PROFILEGROUP;
        < 183840 >
        1 row found.
        Command> select count(*) from T_PROFILECONTEXT_PROFILEGROUPS;
        < 2279104 >
        1 row found.
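
    Two things worth trying, hedged since the optimizer's statistics aren't visible from here: a composite index so the link table can be probed by OID and scanned by EID in one structure, and a statistics refresh so row estimates track the skewed data (ttOptUpdateStats is the standard TimesTen built-in for this):

        -- Cover the filter column and the join column together:
        CREATE INDEX IX_PCPG_OID_EID
            ON T_PROFILECONTEXT_PROFILEGROUPS (M_ID_OID, M_ID_EID);

        -- Refresh optimizer statistics and invalidate cached plans:
        CALL ttOptUpdateStats('T_PROFILECONTEXT_PROFILEGROUPS', 1);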
