Search Results

Search found 27468 results on 1099 pages for 'sql quiz'.

  • Jasper error: Caused by SQLServerException: Transaction (Process ID 58) was deadlocked on thread | c

    - by Saky
    I got the above error in my Jasper report mail. The query used in the report is quite complicated (for me). Reading different posts, I concluded that to solve this I have to change the query to:

        SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
        GO
        BEGIN TRANSACTION
        ... my query ...
        COMMIT TRANSACTION

    Is this the correct way to solve the error, and does it have any side effects? Has this happened to anyone else with Jasper reports? Does anyone know of a better solution to the problem? (I have not yet tested the solution above, so any insight would be helpful.)
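    For a read-only report, one commonly used alternative to REPEATABLE READ (which tends to hold more locks rather than fewer) is snapshot isolation. A minimal T-SQL sketch only, assuming the database can be altered and the report query only reads; the database and table names below are placeholders, not taken from the question:

        -- One-time setup: allow snapshot isolation in the reporting database.
        ALTER DATABASE ReportsDb SET ALLOW_SNAPSHOT_ISOLATION ON;
        GO

        -- Wrap the report query so it reads a consistent snapshot
        -- and is neither blocked by nor blocking the writers.
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
        BEGIN TRANSACTION;
            -- ... the report query goes here ...
            SELECT COUNT(*) FROM dbo.SomeReportTable;   -- placeholder
        COMMIT TRANSACTION;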

  • Need some serious help with a self-join issue

    - by kralco626
    As you may know, you cannot index a view that contains a self join (or even two joins of the same table, which is not technically a self join). A couple of people from Microsoft came up with a workaround, but it is so complicated that I don't understand it! The solution to the problem is here: http://jmkehayias.blogspot.com/2008/12/creating-indexed-view-with-self-join.html

    The view I want to apply this workaround to is:

        create VIEW vw_lookup_test WITH SCHEMABINDING
        AS
        select count_big(*) as [count_all],
               awc_txt, city_nm, str_nm, stru_no,
               o.circt_cstdn_nm [owner], t.circt_cstdn_nm [tech],
               dvc.circt_nm, data_orgtn_yr
        from ((dbo.dvc
               join dbo.circt on dvc.circt_nm = circt.circt_nm)
               join dbo.circt_cstdn o on circt.circt_cstdn_user_id = o.circt_cstdn_user_id)
               join dbo.circt_cstdn t on dvc.circt_cstdn_user_id = t.circt_cstdn_user_id
        group by awc_txt, city_nm, str_nm, stru_no,
                 o.circt_cstdn_nm, t.circt_cstdn_nm, dvc.circt_nm, data_orgtn_yr
        go

    Any help would be greatly appreciated! Thanks so much in advance!
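    One workaround pattern for this restriction, sketched in general terms only (it may differ in detail from the linked post, and the copy table's column types are assumed): maintain a synchronized copy of the table that appears twice and reference the copy for the second join.

        -- Hypothetical synchronized copy of dbo.circt_cstdn; keep it in step with
        -- the original via triggers or the load process.
        CREATE TABLE dbo.circt_cstdn_copy (
            circt_cstdn_user_id int          NOT NULL PRIMARY KEY,
            circt_cstdn_nm      varchar(100) NOT NULL
        );
        GO

        -- Reference the original once and the copy once, so the view no longer
        -- joins the same table twice and becomes eligible for indexing.
        CREATE VIEW dbo.vw_lookup_test WITH SCHEMABINDING AS
        SELECT COUNT_BIG(*) AS [count_all],
               awc_txt, city_nm, str_nm, stru_no,
               o.circt_cstdn_nm AS [owner], t.circt_cstdn_nm AS [tech],
               dvc.circt_nm, data_orgtn_yr
        FROM dbo.dvc
        JOIN dbo.circt              ON dvc.circt_nm = circt.circt_nm
        JOIN dbo.circt_cstdn o      ON circt.circt_cstdn_user_id = o.circt_cstdn_user_id
        JOIN dbo.circt_cstdn_copy t ON dvc.circt_cstdn_user_id = t.circt_cstdn_user_id
        GROUP BY awc_txt, city_nm, str_nm, stru_no,
                 o.circt_cstdn_nm, t.circt_cstdn_nm, dvc.circt_nm, data_orgtn_yr;
        GO

        -- A unique clustered index on the GROUP BY columns can then be created
        -- on dbo.vw_lookup_test in the usual way, subject to key size limits.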

  • How do you compare dates in a LINQ query?

    - by Gina
    I am trying to compare a date from an ASP calendar control to a date in the table. Here's what I have; it doesn't like the ==:

        var query = from details in db.TD_TravelCalendar_Details
                    where details.StartDate == calStartDate.SelectedDate
                       && details.EndDate == calEndDate.SelectedDate
                    select details;

  • Get list of duplicate rows in MySql

    - by user347033
    Hi, I have a table like this:

        ID  nachname  vorname
        1   john      doe
        2   john      doe
        3   jim       doe
        4   Michael   Knight

    I need a query that will return all the fields (select *) from the records that have the same nachname and vorname (in this case, records 1 and 2). Can anyone help me with this? Thanks.
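    A minimal sketch of one common approach, assuming the table is named person (the real table name is not given in the question): join the table to a derived table of (nachname, vorname) pairs that occur more than once.

        SELECT p.*
        FROM person p
        JOIN (
            SELECT nachname, vorname
            FROM person
            GROUP BY nachname, vorname
            HAVING COUNT(*) > 1
        ) dup ON dup.nachname = p.nachname
             AND dup.vorname  = p.vorname
        ORDER BY p.nachname, p.vorname, p.ID;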

  • How to monitor traffic between IIS and MSSQL

    - by kockiren
    Hello all, I am trying to check how much traffic flows between the MSSQL server and the IIS server in different locations. There is one IPCop in every location. I download the tcpdump file from one firewall and search for DST=ipmssql and SRC=ipIIS, but I cannot find the IP of the database server, even though there is traffic between the two. Any suggestions as to why I cannot find the IP address of the MSSQL server? Is this a configuration failure in IPCop, or is the traffic between IIS and MSSQL really that strange? :-) Regards, Rene

  • query optimization

    - by Gaurav
    I have a query of the form:

        SELECT uid1, uid2 FROM friend
        WHERE uid1 IN (SELECT uid2 FROM friend WHERE uid1='.$user_id.')
          AND uid2 IN (SELECT uid2 FROM friend WHERE uid1='.$user_id.')

    The problem is that the nested query SELECT uid2 FROM friend WHERE uid1='.$user_id.' returns a very large number of ids (approx. 5000). The friend table has the structure uid1 (int), uid2 (int) and is used to determine whether two users are linked together as friends. Any workaround? Can I write the query in a different way, or is there some other way to solve this issue? I'm sure I am not the first person to face such a problem. Any help would be greatly appreciated.
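    A sketch of one common rewrite, assuming (uid1, uid2) pairs are unique and indexed; the ? placeholders stand for the $user_id value that the PHP code interpolates. The IN subqueries become joins back to the friend table, so both membership tests are index probes.

        SELECT f.uid1, f.uid2
        FROM friend f
        JOIN friend a ON a.uid1 = ?            -- ? = $user_id
                     AND a.uid2 = f.uid1       -- f.uid1 is a friend of the user
        JOIN friend b ON b.uid1 = ?            -- ? = $user_id
                     AND b.uid2 = f.uid2;      -- f.uid2 is also a friend of the user

        -- A composite index supports both the outer scan and the join probes:
        CREATE INDEX idx_friend_uid1_uid2 ON friend (uid1, uid2);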

  • date comparisons in Rails

    - by aressidi
    Hi there, I'm having trouble with a date comparison in a named scope. I'm trying to determine whether an event is current based on its start and end date. Here's the named scope I'm using; it kind of works, though not for events that have the same start and end date:

        named_scope :date_current,
          :conditions => ["Date(start_date) <= ? AND Date(end_date) >= ?", Time.now, Time.now]

    This returns the following record, though it should return two records, not one:

        >> Event.date_current
        => [#<Event id: 2161, start_date: "2010-02-15 00:00:00", end_date: "2010-02-21 00:00:00", ...]

    What it is not returning is this:

        >> Event.find(:last)
        => #<Event id: 2671, start_date: "2010-02-16 00:00:00", end_date: "2010-02-16 00:00:00", ...>

    The server time seems to be in UTC, and I presume the entries are stored in the DB in UTC. Any ideas as to what I'm doing wrong or what to try? Thanks!
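    A minimal SQL sketch of the likely cause, assuming a MySQL backend and the usual events table for this model: the bound Time.now value still carries a time of day, so an event whose end_date falls on the same calendar day can fail the >= test. Comparing dates on both sides keeps same-day events current.

        -- Both sides reduced to calendar dates, so an event ending today still matches.
        SELECT *
        FROM events
        WHERE DATE(start_date) <= CURDATE()
          AND DATE(end_date)   >= CURDATE();

    In the named scope, that corresponds to binding a date value (for example Date.today) rather than Time.now.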

  • access: print report question

    - by I__
    Here's the design view of my report. How do I force it to print only one set of these controls per page? Currently it prints more than one set on a page; I want it to print only one set of these controls per page.

  • MS-ACCESS query to match first few characters in string comparison

    - by neobee
    What query is suitable to compare the two tables specified below, where only part of the string in Location (table1) matches Location (table2)?

        Location (table1)   Location (table2)
        india- north        USxcs
        India-west          Indiaasd
        India- east         Indiaavvds
        India- south        Africassdcasv
        US- north           Africavasvdsa
        us-west             UKsacvavsdv
        uk- east            Indiacascsa
        uk- south           UScssca
        Africa-middle       Indiacsasca
        Africa-south        Africaccc
        Africa-east         UKcac

    Only the first two characters of Location (table1) and the first two characters of Location (table2) should match. Please help. (Or: only any four characters of Location (table1) and any two characters of Location (table2) should match.)
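    A minimal Access SQL sketch of the two-character prefix match, assuming the tables are simply named Table1 and Table2 (the real names are not given); changing the Left() lengths covers the four-character variant. Access string comparisons are case-insensitive by default, so 'in' and 'In' match.

        SELECT t1.Location AS Location1, t2.Location AS Location2
        FROM Table1 AS t1, Table2 AS t2
        WHERE Left(t1.Location, 2) = Left(t2.Location, 2);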

  • Date format error in VB.NET?

    - by ahmed
    I get the error "Incorrect syntax near 12" when I run the application. On debugging, I found that this error is caused by the # along with the date:

        Dim backdate As DateTime
        backdate = DateTime.Now.AddDays(-1)

    On binding the data to the grid to filter the backdate records, the error "Incorrect syntax near 12" is raised:

        myqry = " select SRNO,SUBJECT,ID where datesend =" & backdate

    Now I am trying to extract only the date. Should I split the date into day, month and year with DATEPART and put them into a variable, or convert the date, or what should I do? Please help.

  • How to update tables' structures while keeping current data

    - by Leon
    I have a C# application that uses tables from a SQL Server 2008 database (it runs on a standalone PC with a local SQL Server). Initially I install the database on this PC with some initial data (there are some tables that the application uses and the user doesn't touch). The question is: how can I upgrade this database after the user has created some new data, without harming that data? (I continue developing and may add new tables or stored procedures, or add columns to existing tables.) Thanks in advance!
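    One common pattern is to ship idempotent upgrade scripts that only add what is missing, so existing rows stay untouched. A T-SQL sketch; the table and column names below are hypothetical examples, not from the question:

        -- Add a new column only if it does not exist yet.
        IF COL_LENGTH('dbo.Customer', 'LoyaltyPoints') IS NULL
            ALTER TABLE dbo.Customer ADD LoyaltyPoints int NOT NULL DEFAULT 0;

        -- Add a new table only if it does not exist yet.
        IF OBJECT_ID('dbo.AuditLog', 'U') IS NULL
            CREATE TABLE dbo.AuditLog (
                Id       int IDENTITY(1,1) PRIMARY KEY,
                LoggedAt datetime NOT NULL DEFAULT GETDATE(),
                Message  nvarchar(400) NOT NULL
            );

    Keeping a small schema-version table and running the scripts in order at application startup is a common way to decide which upgrades still need to be applied.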

  • Speeding up inner-joins and subqueries while restricting row size and table membership

    - by hiffy
    I'm developing an RSS feed reader that uses a Bayesian filter to filter out boring blog posts. The Stream table is meant to act as a FIFO buffer from which the webapp will consume 'entries'. I use it to store the temporary relationship between entries, users and Bayesian filter classifications. After a user marks an entry as read, it will be added to the metadata table (so that a user isn't presented with material they have already read) and deleted from the Stream table. Every three minutes, a background process repopulates the Stream table with new entries (i.e. whenever the daemon adds new entries after it checks the RSS feeds for updates).

    Problem: the query I came up with is hella slow. More importantly, the Stream table only needs to hold one hundred unread entries at a time; that would reduce duplication, make processing faster and give me some flexibility in how I display the entries.

    The query (takes about 9 seconds on 3600 items with no indexes):

        insert into stream(entry_id, user_id)
        select entries.id, subscriptions_users.user_id
        from entries
        inner join subscriptions_users
            on subscriptions_users.subscription_id = entries.subscription_id
        where subscriptions_users.user_id = 1
          and entries.id not in (select entry_id from metadata where metadata.user_id = 1)
          and entries.id not in (select entry_id from stream where user_id = 1);

    The query explained: insert into stream all of the entries from a user's subscription list (subscriptions_users) that the user has not read (i.e. that do not exist in metadata) and that do not already exist in the stream.

    Attempted solution: adding limit 100 to the end speeds up the query considerably, but upon repeated executions it keeps adding a different set of 100 entries that do not already exist in the table (with each successive query taking longer and longer). This is close, but not quite what I wanted to do. Does anyone have any advice (nosql?) or know a more efficient way of composing the query?
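    A sketch of one common rewrite, assuming MySQL (the same shape works in most engines): replace the NOT IN subqueries with anti-joins, add supporting indexes, and keep the LIMIT so only the newest 100 unread entries are queued per run. Index names are illustrative.

        -- Supporting indexes:
        CREATE INDEX idx_su_user_sub         ON subscriptions_users (user_id, subscription_id);
        CREATE INDEX idx_metadata_user_entry ON metadata (user_id, entry_id);
        CREATE INDEX idx_stream_user_entry   ON stream (user_id, entry_id);

        INSERT INTO stream (entry_id, user_id)
        SELECT e.id, su.user_id
        FROM entries e
        INNER JOIN subscriptions_users su
                ON su.subscription_id = e.subscription_id
               AND su.user_id = 1
        LEFT JOIN metadata m
                ON m.user_id = 1 AND m.entry_id = e.id
        LEFT JOIN stream s
                ON s.user_id = 1 AND s.entry_id = e.id
        WHERE m.entry_id IS NULL        -- not already read
          AND s.entry_id IS NULL        -- not already queued
        ORDER BY e.id DESC              -- newest first (assumes ids increase over time)
        LIMIT 100;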

  • How to reuse results with a schema for end of day stock-data

    - by Vishalrix
    I am creating a database schema to be used for technical analysis like top volume gainers, top price gainers, etc. I have checked answers to questions here, like the design question. Having taken the hint from boe100's answer there, I have a schema modeled pretty much on it:

        Symbol - char(6)          // primary
        Date   - date             // primary
        Open   - decimal(18, 4)
        High   - decimal(18, 4)
        Low    - decimal(18, 4)
        Close  - decimal(18, 4)
        Volume - int

    Right now this table, containing end-of-day (EOD) data, will hold about 3 million rows for 3 years. Later, when I get or need more data, it could be 20 million rows.

    The front end will issue requests like "give me the top price gainers on date X over Y days". That request is one of the simpler ones and, I assume, not too costly time-wise. But a request like "give me the top volume gainers for the last 10 days, with the previous 100 days acting as baseline" could prove 10-100 times costlier. The result of such a request would be a float which signifies how many times the volume has grown, etc.

    One option I have is adding a column for each such result. And if the user asks for volume gain in 10 days over 20 days, that would require another table. The total number of such tables could easily cross 100, especially if I start using other results as tables, like MACD-10 and MACD-100, each of which will require its own column. Is this a feasible solution?

    Another option is keeping the results in cached HTML files and presenting them to the user. I don't have much experience in web development, so to me it looks messy; but I could be wrong (of course!). Is that an option too?

    Let me add that I am/will be using mod_perl to present the response to the user, with much of the work on the MySQL database being done using Perl. I would like a response time of 1-2 seconds.
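    A minimal MySQL sketch of computing the 10-day-versus-100-day volume ratio on the fly, rather than materializing a column per combination; the table name eod and the anchor date are illustrative. With (Symbol, Date) as the primary key, each subquery only scans the relevant date range per symbol.

        -- Average daily volume over the last 10 days divided by the average over
        -- the 100 days before that, per symbol, highest ratio first.
        SELECT r.Symbol,
               r.recent_avg / NULLIF(b.baseline_avg, 0) AS volume_ratio
        FROM (
            SELECT Symbol, AVG(Volume) AS recent_avg
            FROM eod
            WHERE Date >  DATE_SUB('2010-03-01', INTERVAL 10 DAY)
              AND Date <= '2010-03-01'
            GROUP BY Symbol
        ) AS r
        JOIN (
            SELECT Symbol, AVG(Volume) AS baseline_avg
            FROM eod
            WHERE Date >  DATE_SUB('2010-03-01', INTERVAL 110 DAY)
              AND Date <= DATE_SUB('2010-03-01', INTERVAL 10 DAY)
            GROUP BY Symbol
        ) AS b ON b.Symbol = r.Symbol
        ORDER BY volume_ratio DESC
        LIMIT 20;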

  • What is the most efficient/elegant way to parse a flat table into a tree?

    - by Tomalak
    Assume you have a flat table that stores an ordered tree hierarchy:

        Id  Name          ParentId  Order
        1   'Node 1'      0         10
        2   'Node 1.1'    1         10
        3   'Node 2'      0         20
        4   'Node 1.1.1'  2         10
        5   'Node 2.1'    3         10
        6   'Node 1.2'    1         20

    What minimalistic approach would you use to output that to HTML (or text, for that matter) as a correctly ordered, correctly indented tree? Assume further that you only have basic data structures (arrays and hashmaps), no fancy objects with parent/children references, no ORM, no framework, just your two hands. The table is represented as a result set, which can be accessed randomly. Pseudo code or plain English is okay; this is purely a conceptual question.

    Bonus question: Is there a fundamentally better way to store a tree structure like this in an RDBMS?

    EDITS AND ADDITIONS

    To answer one commenter's (Mark Bessey's) question: A root node is not necessary, because it is never going to be displayed anyway. ParentId = 0 is the convention to express "these are top level". The Order column defines how nodes with the same parent are going to be sorted. The "result set" I spoke of can be pictured as an array of hashmaps (to stay in that terminology). For my example it was meant to be already there; some answers go the extra mile and construct it first, but that's okay. The tree can be arbitrarily deep, and each node can have N children. I did not exactly have a "millions of entries" tree in mind, though. Don't mistake my choice of node naming ('Node 1.1.1') for something to rely on. The nodes could equally well be called 'Frank' or 'Bob'; no naming structure is implied, this was merely to make it readable. I have posted my own solution so you guys can pull it to pieces.
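    Touching on the bonus question, here is a sketch of how the depth-first ordering can be pushed into the database itself on an engine with recursive CTEs; the example assumes MySQL 8 and a hypothetical table named nodes with the columns shown above. A sort path is built from zero-padded Order values so a single ORDER BY yields the tree in display order.

        WITH RECURSIVE tree AS (
            SELECT Id, Name, ParentId, 0 AS depth,
                   -- widen the anchor column so recursive concatenation is not truncated
                   CAST(LPAD(`Order`, 10, '0') AS CHAR(1000)) AS sort_path
            FROM nodes
            WHERE ParentId = 0
            UNION ALL
            SELECT n.Id, n.Name, n.ParentId, t.depth + 1,
                   CONCAT(t.sort_path, '/', LPAD(n.`Order`, 10, '0'))
            FROM nodes n
            JOIN tree t ON n.ParentId = t.Id
        )
        SELECT CONCAT(REPEAT('    ', depth), Name) AS rendered
        FROM tree
        ORDER BY sort_path;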

  • make multiple, composite query in oracle

    - by Meloun
    How can I run multiple statements as one composite query in Oracle? For example, these several statements in one step:

        -- 1
        CREATE TABLE test (id NUMBER PRIMARY KEY, name VARCHAR2(30));
        -- 2
        CREATE SEQUENCE test_sequence START WITH 1 INCREMENT BY 1;
        -- 3
        CREATE OR REPLACE TRIGGER test_trigger
        BEFORE INSERT ON test
        REFERENCING NEW AS NEW
        FOR EACH ROW
        BEGIN
            SELECT test_sequence.nextval INTO :NEW.ID FROM dual;
        END;
        -- 4
        INSERT INTO test (name) VALUES ('Jon');
        -- 5
        INSERT INTO test (name) VALUES ('Meloun');
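    A sketch of one way to send all of this in a single call, assuming the client can submit an anonymous PL/SQL block: run the DDL through EXECUTE IMMEDIATE. The inserts also have to be dynamic here, because the table does not yet exist when the block is compiled.

        BEGIN
            EXECUTE IMMEDIATE 'CREATE TABLE test (id NUMBER PRIMARY KEY, name VARCHAR2(30))';
            EXECUTE IMMEDIATE 'CREATE SEQUENCE test_sequence START WITH 1 INCREMENT BY 1';
            EXECUTE IMMEDIATE q'[
                CREATE OR REPLACE TRIGGER test_trigger
                BEFORE INSERT ON test
                REFERENCING NEW AS NEW
                FOR EACH ROW
                BEGIN
                    SELECT test_sequence.nextval INTO :NEW.ID FROM dual;
                END;
            ]';
            -- Static SQL against test would fail to compile, so keep these dynamic too.
            EXECUTE IMMEDIATE q'[INSERT INTO test (name) VALUES ('Jon')]';
            EXECUTE IMMEDIATE q'[INSERT INTO test (name) VALUES ('Meloun')]';
            COMMIT;
        END;
        /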

  • Optimum size of transaction in Postgres?

    - by Joe
    I'm running a process that does a lot of updates (100,000) to a table. I have the choice between putting all the updates in a single transaction or committing transactions every 1000 or so. Ignore for the moment the case where a transaction fails and is aborted. I'm interested in the best transaction size for memory and speed efficiency.

  • mysql - speedup regex

    - by Uwe
    I have a table:

        +--------+------------------+------+-----+---------+----------------+
        | Field  | Type             | Null | Key | Default | Extra          |
        +--------+------------------+------+-----+---------+----------------+
        | idurl  | int(11)          | NO   | PRI | NULL    | auto_increment |
        | idsite | int(10) unsigned | NO   | MUL | NULL    |                |
        | url    | varchar(2048)    | NO   |     | NULL    |                |
        +--------+------------------+------+-----+---------+----------------+

    The select statement is:

        SELECT idurl, url
        FROM URL
        WHERE idsite = 34
          AND url REGEXP '^https\\://www\\.domain\\.com/checkout/step_one\\.php.*'

    The query needs 5 seconds on a table with 1000000 rows. Can I achieve a speedup with indexes or something else?
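    A sketch of one common speedup, assuming the pattern is always anchored at the start of the URL: replace the regular expression with a leading-anchored LIKE, which can use an index, and give MySQL a composite index with a URL prefix to walk. The prefix length is illustrative; it only needs to cover the constant part of the pattern.

        ALTER TABLE URL ADD INDEX idx_site_url (idsite, url(191));

        SELECT idurl, url
        FROM URL
        WHERE idsite = 34
          AND url LIKE 'https://www.domain.com/checkout/step_one.php%';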

  • GROUP BY as a way to pick the first row from a group of similar rows, is this correct, is there any

    - by FipS
    I have a table which stores test results like this:

        user | score | time
        -----+-------+------
        aaa  | 90%   | 10:30
        bbb  | 50%   | 9:15   ***
        aaa  | 85%   | 10:15
        aaa  | 90%   | 11:00  ***
        ...

    What I need is to get the top 10 users:

        user | score | time
        -----+-------+------
        aaa  | 90%   | 11:00
        bbb  | 50%   | 9:15
        ...

    I've come up with the following SELECT:

        SELECT *
        FROM (SELECT user, score, time
              FROM tests_score
              ORDER BY user, score DESC, time DESC) t1
        GROUP BY user
        ORDER BY score DESC, time
        LIMIT 10

    It works fine but I'm not quite sure if my use of ORDER BY is the right way to pick the first row of each group of sorted records. Is there any better practice to achieve the same result? (I use MySQL 5)
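    Relying on GROUP BY to pick a particular row out of an ordered derived table is not guaranteed behaviour in MySQL. A sketch of a deterministic alternative that stays within MySQL 5 (no window functions): join each user to their best score and break score ties with the latest time.

        SELECT s.user, s.score, MAX(s.time) AS time
        FROM tests_score s
        JOIN (
            SELECT user, MAX(score) AS best_score
            FROM tests_score
            GROUP BY user
        ) best ON best.user = s.user
              AND best.best_score = s.score
        GROUP BY s.user, s.score
        ORDER BY s.score DESC, time
        LIMIT 10;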

  • Why am I getting "Enter Parameter Value" when running my MS Access query?

    - by DanM
    In my query, I use the IIF function to assign either "Before" or "After" to a field named BeforeOrAfter using AS. When I run this query, however, the "Enter Parameter Value" dialog appears, requesting a value for BeforeOrAfter. If I remove BeforeOrAfter DESC from the ORDER BY clause, I don't get the dialog. Here is the offending query:

        SELECT d.Scenario, e.Event,
               IIF(d.LogTime < e.Time, 'Before', 'After') AS BeforeOrAfter,
               d.HeartRate
        FROM Data d INNER JOIN Events e ON d.Scenario = e.Scenario
        WHERE e.Include = Yes
        ORDER BY d.Scenario, e.Id, BeforeOrAfter DESC

    Question: Why is my AS BeforeOrAfter not being recognized by the ORDER BY clause? Why does it ask me to enter a parameter value for "BeforeOrAfter" when I run this query? Note: I tried using brackets, single quotes, double quotes, etc., but none of that made any difference.
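    When Access cannot resolve an identifier it prompts for it as a parameter, and here the alias in ORDER BY appears to be the unresolved name. A sketch of the usual workaround: repeat the expression in the ORDER BY (or refer to the column by ordinal position, e.g. ORDER BY d.Scenario, e.Id, 3 DESC) instead of using the alias.

        SELECT d.Scenario, e.Event,
               IIF(d.LogTime < e.Time, 'Before', 'After') AS BeforeOrAfter,
               d.HeartRate
        FROM Data d INNER JOIN Events e ON d.Scenario = e.Scenario
        WHERE e.Include = Yes
        ORDER BY d.Scenario, e.Id, IIF(d.LogTime < e.Time, 'Before', 'After') DESC;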

  • How to make Web Storage persistent in Cordova using JS?

    - by ett
    I have a small quiz app, a cross-platform mobile app that I plan to run on Android, iOS, and WP8. I want to store a local high score that keeps track of how many points the user had; if he/she does better than the stored high score, the current high score should be updated. I want the high score to be persistent: every time the user opens the app, the last high score should be there to compare with the new quiz score, and it should not be deleted each time the app is closed. I also want the high score to be set to 0 the first time the app is run, so the first time the user finishes the quiz they obviously get a new high score. I have these files so far:

    scoredb.js

        // Wait for device API libraries to load
        document.addEventListener("deviceready", initScoreDB, false);

        // Device APIs are available
        function initScoreDB() {
            window.localStorage.setItem("score", 0);
            highscore = window.localStorage.getItem("score");
        }

    main.js

        var correct = 0;
        var highscore;

        // In between I have some code that keeps incrementing
        // correct variable for each correct answer.

        if (correct > highscore) {
            window.localStorage.setItem("score", correct);
            highscore = correct;
        }

    It does seem to work okay once the app is started: I did the quiz three times in the simulator, and it keeps the score as it should. However, each time I open the app the high score is reset to 0. I guess that is because I call initScoreDB when the device is ready, initialize the score to 0 there, and give that value to highscore. Can someone help me initialize the score to 0 only the first time the app is run, and on every other run keep the latest high score as the current high score and compare it with the score achieved when the quiz is finished? If someone can help me, I would be glad.

  • How to select only the first rows for each unique value of a column

    - by nuit9
    Let's say I have a table of customer addresses:

        CName      | AddressLine
        -----------+----------------------
        John Smith | 123 Nowheresville
        Jane Doe   | 456 Evergreen Terrace
        John Smith | 999 Somewhereelse
        Joe Bloggs | 1 Second Ave

    In the table, one customer like John Smith can have multiple addresses. I need the select query for this table to return only the first row found where there are duplicates in 'CName'. For this table it should return all rows except the 3rd (or the 1st; either of those two addresses is okay, but only one can be returned). Is there a keyword I can add to the SELECT query to filter based on whether the server has already seen the column value before?
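    A sketch of two common approaches, assuming the table is named customer_addresses (the real name is not given): an aggregate that keeps one address per name, and, on engines with window functions, ROW_NUMBER to keep exactly one whole row per CName.

        -- Portable: one (arbitrary but deterministic) address per customer.
        SELECT CName, MIN(AddressLine) AS AddressLine
        FROM customer_addresses
        GROUP BY CName;

        -- With window functions (SQL Server, PostgreSQL, etc.): keep the whole row.
        SELECT CName, AddressLine
        FROM (
            SELECT CName, AddressLine,
                   ROW_NUMBER() OVER (PARTITION BY CName ORDER BY AddressLine) AS rn
            FROM customer_addresses
        ) t
        WHERE rn = 1;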

  • Range partition skip check

    - by user289429
    We have a large amount of data range-partitioned on a year value in Oracle, and each partition contains data for only one year. When we write a query targeting a specific year, Oracle fetches the information from that partition but still checks that the year is the one we specified. Since the year column is not part of the index, it fetches the year from the table and compares it. We have seen that any time the query has to fetch table data it gets too slow. Can we somehow avoid Oracle comparing the year values, since we know for certain that the partition contains information for only one year?
