Search Results

Search found 7311 results on 293 pages for 'rows'.


  • Does the order of conditions in a WHERE clause affect MySQL performance?

    - by Greg
    Say that I have a long, expensive query, packed with conditions, searching a large number of rows. I also have one particular condition, like a company id, that will limit the number of rows that need to be searched considerably, narrowing it down to dozens from hundreds of thousands. Does it make any difference to MySQL performance whether I do this:
        SELECT * FROM clients WHERE (firstname LIKE :foo OR lastname LIKE :foo OR phone LIKE :foo) AND (firstname LIKE :bar OR lastname LIKE :bar OR phone LIKE :bar) AND company = :ugh
    or this:
        SELECT * FROM clients WHERE company = :ugh AND (firstname LIKE :foo OR lastname LIKE :foo OR phone LIKE :foo) AND (firstname LIKE :bar OR lastname LIKE :bar OR phone LIKE :bar)
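    One way to check directly is to compare the optimizer's plans for the two orderings. A minimal sketch, assuming a mysql-connector-python connection with hypothetical credentials; the queries are a shortened form of the question's, rewritten with this driver's %s placeholders:

        # Compare EXPLAIN output for the two WHERE orderings; identical plans mean
        # the optimizer does not care about the textual order of the conditions.
        import mysql.connector

        conn = mysql.connector.connect(user="user", password="pass", database="crm")  # hypothetical
        cur = conn.cursor()

        q_company_last = ("EXPLAIN SELECT * FROM clients "
                          "WHERE (firstname LIKE %s OR lastname LIKE %s OR phone LIKE %s) "
                          "AND company = %s")
        q_company_first = ("EXPLAIN SELECT * FROM clients "
                           "WHERE company = %s "
                           "AND (firstname LIKE %s OR lastname LIKE %s OR phone LIKE %s)")

        cur.execute(q_company_last, ("%foo%", "%foo%", "%foo%", 42))
        plan_last = cur.fetchall()
        cur.execute(q_company_first, (42, "%foo%", "%foo%", "%foo%"))
        plan_first = cur.fetchall()

        print(plan_last == plan_first)
        conn.close()

    If the plans come back identical, the textual order of the ANDed conditions is not what decides performance; the optimizer typically picks the indexed company lookup first regardless.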

    Read the article

  • How to delete a Dictionary row that is a Double by using an Int?

    - by Richard Reddy
    Hi, I have a Dictionary object keyed by double values. It looks like this: Dictionary<double, ClassName> VariableName = new Dictionary<double, ClassName>(); For my project I have to have the key as a double because I require values like 1.1, 1.2, 2.1, 2.2, etc. in my system. Everything in my system works great except when I want to delete all the keys in a group, e.g. all the 1 values: 1.1, 1.2, etc. I can delete rows if I know the full value of the key, e.g. 1.1, but in my system I will only know the whole number. I tried to do the following but get an error: DictionaryVariable.Remove(j => Convert.ToInt16(j.Key) == rowToEdit).OrderByDescending(j => j.Key); Is there any way to remove all rows per int value by converting the key? Thanks, Rich
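    For comparison, the "remove every key in a whole-number group" idea is easy to express in Python; a minimal sketch with made-up dictionary contents (note int() truncates toward zero, whereas Convert.ToInt16 rounds):

        # Delete every entry whose key has a given integer part.
        prices = {1.1: "a", 1.2: "b", 2.1: "c", 2.2: "d"}
        row_to_edit = 1

        # Collect first, then delete: mutating a dict while iterating over it is an error.
        doomed = [key for key in prices if int(key) == row_to_edit]
        for key in doomed:
            del prices[key]

        print(prices)  # {2.1: 'c', 2.2: 'd'}

    The C# shape is the same: Remove only accepts a single key, not a predicate, so the matching keys have to be collected (e.g. with a LINQ Where over the Keys, materialized to a list) and removed one by one.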

    Read the article

  • C# logic order and compiler behavior

    - by Terrapin
    In C# (and feel free to answer for other languages), in what order does the runtime evaluate a logic statement? Example:
        DataTable myDt = new DataTable();
        if (myDt != null && myDt.Rows.Count > 0)
        {
            //do some stuff with myDt
        }
    Which expression does the runtime evaluate first: myDt != null or myDt.Rows.Count > 0? Is there a time when the compiler would ever evaluate the statement backwards? Perhaps when an "OR" operator is involved?
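    C# guarantees left-to-right, short-circuit evaluation for && and || (only the single-character & and | forms evaluate both operands). The same guard pattern exists in other languages; a small runnable Python sketch, where the DataTable class below is just a stand-in:

        class DataTable:
            """Stand-in for the question's DataTable."""
            def __init__(self, rows):
                self.rows = rows

        def has_rows(table):
            # `and` evaluates left to right and short-circuits: when `table` is None,
            # the right-hand operand never runs, so .rows is never dereferenced.
            return table is not None and len(table.rows) > 0

        print(has_rows(None))               # False, and no AttributeError
        print(has_rows(DataTable([1, 2])))  # True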

    Read the article

  • Oracle: TABLE ACCESS FULL with Primary key?

    - by tim
    There is a table:
        CREATE TABLE temp
        (
          IDR decimal(9) NOT NULL,
          IDS decimal(9) NOT NULL,
          DT date NOT NULL,
          VAL decimal(10) NOT NULL,
          AFFID decimal(9),
          CONSTRAINT PKtemp PRIMARY KEY (IDR,IDS,DT)
        );

        SQL> explain plan for select * from temp;
        Explained.
        SQL> select plan_table_output from table(dbms_xplan.display('plan_table',null,'serial'));

        PLAN_TABLE_OUTPUT
        ---------------------------------------------------------------
        | Id | Operation         | Name | Rows | Bytes | Cost (%CPU)|
        ---------------------------------------------------------------
        |  0 | SELECT STATEMENT  |      |    1 |    61 |     2   (0)|
        |  1 | TABLE ACCESS FULL | TEMP |    1 |    61 |     2   (0)|
        ---------------------------------------------------------------
        Note
        -----
           - 'PLAN_TABLE' is old version
        11 rows selected.
    In the same situation, SQL Server 2008 shows a clustered index scan. What is the reason?

    Read the article

  • T-SQL UPDATE trigger help

    - by Tan
    Hi, I am trying to make an UPDATE trigger in my database, but I get this error every time the trigger fires. Error message: The row value(s) updated or deleted either do not make the row unique or they alter multiple rows (3 rows). And here's my trigger:
        ALTER TRIGGER [dbo].[x1pk_qp_update] ON [dbo].[x1pk]
        FOR UPDATE
        AS
        BEGIN TRY
            DECLARE @UserId int, @PackareKod int, @PersSign varchar(10)
            SELECT @PackareKod = q_packarekod, @PersSign = q_perssign FROM INSERTED
            IF @PersSign IS NOT NULL
            BEGIN
                IF EXISTS (SELECT * FROM [QPMardskog].[dbo].[UserAccount] WHERE [Account] = @PackareKod)
                BEGIN
                    SET @UserId = (SELECT [UserId] FROM [QPMardskog].[dbo].[UserAccount] WHERE [Account] = @PackareKod)
                    UPDATE [QPMardskog].[dbo].[UserAccount] SET [Active] = 1 WHERE [Account] = @PackareKod
                    UPDATE [QPMardskog].[dbo].[User] SET [Active] = 1 WHERE [Id] = @UserId
                END
            END
        END TRY
    But I only update one row in the table, so how can it say 3 rows? Please advise.

    Read the article

  • How big can a SQL Server row be before it's a problem?

    - by John Leidegren
    Occasionally I run into this limitation using SQL Server 2000 that a row size cannot exceed 8K bytes. SQL Server 2000 isn't really state of the art, but it's still in production code, and because some tables are denormalized that's a problem. However, this seems to be a non-issue with SQL Server 2005. At least, it won't complain that row sizes are bigger than 8K, but what happens instead, and why was this a problem in SQL Server 2000? Do I need to care about my rows growing? Should I try to avoid large rows? Are varchar(max) and varbinary(max) a solution, or are they expensive in terms of size in the database and/or CPU time? Why do I care at all about specifying the length of a particular column, when it seems like it's just a matter of time before someone's going to hit that upper limit?

    Read the article

  • insert DropDownList and TextField in Gridview

    - by userk
    Hi, I need to create a GridView whose DataSource is an object. Depending on the object I may need some columns with DropDownLists or TextFields (but not in all rows). As I don't know the number of columns, they have to be dynamic. I found this solution:
        TemplateField t = new TemplateField();
        t.HeaderTemplate = new GridViewTemplate("header", "title");
        t.ItemTemplate = new GridViewTemplate("combobox", "val");
        GridView1.Columns.Add(t);
        GridView1.DataSource = ds;
        GridView1.DataBind();
    where GridViewTemplate implements ITemplate. This didn't work for me: it fills all rows of a column, and I had no way to control which DropDownLists and TextFields need to be created (object info). All the DropDownLists also need to have an ID known only to the object. Is there some way I can do this?

    Read the article

  • Why is this loop over mysql resultset slow? (1.4ms per cycle)

    - by pawpro
    $res contains around 488k rows and the whole loop takes 61s. That's over 1.25ms per cycle! What is taking all that time?
        while ($row = $res->fetch_assoc()) {
            $clist[$row['upload_id']][$row['dialcode_id']][$row['carrier_id']]['std'] = $row['cost_std'];
            $clist[$row['upload_id']][$row['dialcode_id']][$row['carrier_id']]['ecn'] = $row['cost_ecn'];
            $clist[$row['upload_id']][$row['dialcode_id']][$row['carrier_id']]['wnd'] = $row['cost_wnd'];
            $dialcode_destination[$row['upload_id']][$row['carrier_id']][$row['dialcode_id']]['other_destination'] = $row['destination_id'];
            $dialcode_destination[$row['upload_id']][$row['carrier_id']][$row['dialcode_id']]['carrier_destination'] = $row['carrier_destination_id'];
        }
    With a result set of 10 rows and smaller arrays, performance is 30 times higher (0.041ms): still not the fastest, but better.
        while ($row = $res->fetch_assoc()) {
            $customer[$row['id']]['name'] = $row['name'];
            $customer[$row['id']]['code'] = $row['customer'];
        }
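    Part of the cost is re-resolving the same three-level array path five times per row. A rough Python analogue of the same shape, with one made-up row, shows the idea of resolving the nested container once per row and then assigning into it:

        rows = [
            {"upload_id": 1, "dialcode_id": 44, "carrier_id": 7,
             "cost_std": 0.01, "cost_ecn": 0.02, "cost_wnd": 0.03},
        ]

        clist = {}
        for row in rows:
            # Walk the nesting once per row and keep a reference to the innermost dict.
            entry = (clist.setdefault(row["upload_id"], {})
                          .setdefault(row["dialcode_id"], {})
                          .setdefault(row["carrier_id"], {}))
            entry["std"] = row["cost_std"]
            entry["ecn"] = row["cost_ecn"]
            entry["wnd"] = row["cost_wnd"]

        print(clist)

    The PHP version of the same trick is to assign the inner array in one statement, or to take a reference to it with =&, instead of spelling out the full path for every element.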

    Read the article

  • Scala : reference is ambiguous (imported twice)

    - by tk
    I want to use a method as a parameter of another method of the same class. I have a class and an object which are companions:
        class mM(var elem: Matrix) {
          // apply a function on a dimension: rows (1) or cols (2)
          def app(func: Iterable[Double] => Double)(dim: Int): Matrix = { ... }
          // utility function
          def logsumexp(): Double = { ... }
        }
        object mM {
          def apply(elem: Matrix): mM = { new mM(elem) }
          def logsumexp(elem: Iterable[Double]): Double = {
            this.apply(elem.asInstanceOf[Matrix]).logsumexp()
          }
        }
    Normally I use logsumexp like this: mM(matrix).logsumexp. But if I want to apply it to the rows I can't use mM(matrix).app(mM.logsumexp)(1); I get the error: error: reference to mM is ambiguous; it is imported twice in the same scope by import mM and import mM What is the most elegant solution? Should I move logsumexp() to another class? Thanks :)

    Read the article

  • PHP+MYSQL Server Config

    - by Matias
    Hi guys, I am parsing an XML file with PHP and inserting the rows into a MySQL database. I am using PHP's simplexml_load_file to load the XML and a foreach to loop through the array and insert the rows into my database. It works perfectly fine with the small files I am testing, but in reality I need to parse a large 500MB XML file, and then nothing happens. I was wondering what the right php.ini config is for this case. I have a CentOS Linux VPS with 256MB of dedicated memory and MySQL 5.0.5. I have also set php memory_limit = 256M (the maximum for my server). Any suggestions or similar experiences will be greatly appreciated. Thanks
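    Raising memory_limit only goes so far on a 256MB VPS, because simplexml_load_file builds the whole document in memory. The usual answer is a streaming parser that handles one record at a time; here is a sketch of that idea in Python with xml.etree.ElementTree.iterparse, where the file name feed.xml and the <row> element name are made up:

        import xml.etree.ElementTree as ET

        def stream_rows(path):
            # Emit one record at a time instead of materialising the whole tree.
            for event, elem in ET.iterparse(path, events=("end",)):
                if elem.tag == "row":
                    yield {child.tag: child.text for child in elem}
                    elem.clear()  # release the finished element so memory stays flat

        for record in stream_rows("feed.xml"):
            pass  # insert into MySQL here, ideally batched inside a transaction

    If the script has to stay in PHP, XMLReader plays the same streaming role.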

    Read the article

  • RegEx - character not before match

    - by danneth
    I understand the concepts of RegEx, but this is more or less the first time I've actually been trying to write some myself. As part of a project, I'm attempting to parse out strings which match a certain domain (actually an array of domains, but let's keep it simple). At first I started out with this: url.match('www.example.com') But I noticed I was also getting input like this: http://www.someothersite.com/page?ref=http://www.example.com These rows will of course match www.example.com, but I wish to exclude them. So I was thinking along these lines: only match rows that contain www.example.com, but not after a ? character. This is what I came up with: var reg = new RegExp("[^\\?]*" + url + "(\\.*)", "gi"); This does not seem to work, however; any suggestions would be greatly appreciated, as I fear I've used up what little knowledge I possess in the matter.
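    The "no ? before the domain" idea works once the pattern is anchored to the start of the string and the dots in the domain are escaped. A minimal sketch in Python with made-up test URLs; the same pattern string should carry over to a JavaScript RegExp:

        import re

        domain = re.escape("www.example.com")   # escape the dots in the domain
        pattern = re.compile(r"^[^?]*" + domain, re.IGNORECASE)

        print(bool(pattern.search("http://www.example.com/page")))   # True
        print(bool(pattern.search(
            "http://www.someothersite.com/page?ref=http://www.example.com")))  # False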

    Read the article

  • get n records at a time from a temporary table

    - by Claudiu
    I have a temporary table with about 1 million entries. The temporary table stores the result of a larger query. I want to process these records 1000 at a time, for example. What's the best way to set up queries such that I get the first 1000 rows, then the next 1000, etc.? They are not inherently ordered, but the temporary table just has one column with an ID, so I can order it if necessary. I was thinking of creating an extra column in the temporary table to number all the rows, something like:
        CREATE TEMP TABLE tmptmp AS
        SELECT ##autonumber somehow##, id
        FROM ....  --complicated query
    Then I can do:
        SELECT * FROM tmptmp WHERE autonumber >= 0 AND autonumber < 1000
    etc. How would I actually accomplish this? Or is there a better way? I'm using Python and PostgreSQL.
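    Since the question mentions Python and PostgreSQL, one sketch of the batching, assuming psycopg2 and a hypothetical connection string: a named (server-side) cursor streams the temp table in fixed-size chunks without needing an autonumber column at all.

        import psycopg2

        conn = psycopg2.connect("dbname=mydb user=me")   # hypothetical DSN
        with conn.cursor(name="tmptmp_batches") as cur:  # named => server-side cursor
            cur.execute("SELECT id FROM tmptmp ORDER BY id")
            while True:
                batch = cur.fetchmany(1000)  # next 1000 rows from the server
                if not batch:
                    break
                # process this batch of ids here
        conn.close()

    If the autonumber column is still preferred, row_number() OVER () inside the CREATE TEMP TABLE ... AS SELECT gives it directly on PostgreSQL 8.4 or later, and the WHERE autonumber >= 0 AND autonumber < 1000 ranges from the question then work as written.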

    Read the article

  • MSSQL Sum query

    - by ldb
    Today my problem is this: I have 2 columns and I want to check that their sum isn't higher than a value (485, for example) and, if so, run a query. I thought to do SELECT * FROM table WHERE ColumnA+ColumnB < 485 but it isn't working. I've already tried SELECT Sum(ColumnA)+Sum(ColumnB) AS Total FROM table but that gives me one column with the sum of all rows; I instead want a row for every sum. So how can I do it? I hope you understood; if not, just ask and I'll try to explain it better. Thanks in advance to whoever wants to help me! EDIT: I found it out. The problem was that the columns were smallint and the result for one or more rows was more than 32k, so it wasn't working. Thanks, all!

    Read the article

  • Macro to search for a variable, date or value

    - by John
    To whom it may concern, good day. I have an Excel workbook with 10 sheets. In that workbook, rows 1 to 5 are headers. I would like to search for a value, variable or date as required. If it is found, all matching rows should be copied to a new workbook. I need a button to run the macro. The program should ask what I need to search for; if I enter a date, the macro should search the whole workbook and, if anything is found, copy all results to a new workbook. Can anyone give me a solution for this?

    Read the article

  • Where's the rest of the space used in this table?

    - by Eric H.
    I'm using SQL Server 2005. I have a table whose row size should be 124 bytes. It's all ints or floats, no NULL columns (so everything is fixed width). There is only one index, clustered. The fill factor is 0. After inserting a ton of data, sp_spaceused returns the following:
        name          rows        reserved     data         index_size  unused
        OHLC_Bar_Trl  117076054   29807664 KB  29711624 KB  92344 KB    3696 KB
    which shows a row size of approx. (29807664*1024)/117076054 = 260 bytes/row. Where's the rest of the space? Is there some DBCC command I need to run to tighten up this table (I could not insert the rows in the correct index order, so maybe it's just internal fragmentation)?

    Read the article

  • "Othello" game needs some clarification

    - by pappu
    I am trying to see if my understanding of the "Othello" game is correct or not. According to the rules, we flip the dark/light discs if we get some sequence like X000X = XXXXX. The question I have is: in the process of flipping 0 to X or X to 0, do we also need to consider the rows/columns/diagonals of the newly flipped elements? E.g. consider the board state as shown in the above image (a new element X is placed at 2,3). When we update the board, we mark the elements from 2,3 to 6,3 as Xs, but in this process elements like the horizontal 4,3 to 4,5 and the diagonal 2,3 to 4,5 also look eligible for update. So do we update those elements as well, or just the elements whose line starts at 2,3 (i.e. update only the rows/columns/diagonals whose starting point is the element we are dealing with, in our case 2,3)? Please help me understand it.
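    For reference, the standard rule is that only the lines radiating out from the newly placed disc are scanned; discs flipped as a side effect do not start new scans of their own rows, columns or diagonals. A small Python sketch of that rule, assuming an 8x8 board of 'X', '0' and '.' characters:

        DIRECTIONS = [(-1, -1), (-1, 0), (-1, 1),
                      ( 0, -1),          ( 0, 1),
                      ( 1, -1), ( 1, 0), ( 1, 1)]

        def flips_for_move(board, r, c, me="X", opp="0"):
            """Discs flipped by placing `me` at (r, c); scans only lines from (r, c)."""
            flipped = []
            for dr, dc in DIRECTIONS:
                run = []
                rr, cc = r + dr, c + dc
                while 0 <= rr < 8 and 0 <= cc < 8 and board[rr][cc] == opp:
                    run.append((rr, cc))
                    rr, cc = rr + dr, cc + dc
                # a run of opponent discs flips only if it is capped by one of our own discs
                if run and 0 <= rr < 8 and 0 <= cc < 8 and board[rr][cc] == me:
                    flipped.extend(run)
            return flipped

    Under that rule, runs such as 4,3 to 4,5 are not re-evaluated merely because a disc inside them was just flipped.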

    Read the article

  • problem reading a csv file in python

    - by Hossein
    Hi, I am trying to read a very simple but somewhat large (800MB) CSV file using the csv library in Python. The delimiter is a single tab and each line consists of some numbers. Each line is a record, and I have 20681 rows in my file. I had some problems during my calculations using this file; it always stops at a certain row. I got suspicious about the number of rows in the file, so I used the code below to count the rows:
        tfdf_Reader = csv.reader(open('v2-host_tfdf_en.txt'), delimiter=' ')
        c = 0
        for row in tfdf_Reader:
            c = c + 1
        print c
    To my surprise, c is printed with the value of 61722! Why is this happening? What am I doing wrong?
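    Note that the delimiter only affects how each record splits into columns, not how many records there are, so the first things to check are line endings and quote characters in the data (the csv module's docs ask for binary mode in Python 2 or newline="" in Python 3 precisely so that stray carriage returns and quoted newlines are handled correctly). A sketch of the count in current Python 3 form, with the tab spelled out explicitly:

        import csv

        # Count records with the delimiter given explicitly as a tab; newline=""
        # hands line-ending handling to the csv module, as its docs recommend.
        with open("v2-host_tfdf_en.txt", newline="") as f:
            reader = csv.reader(f, delimiter="\t")
            row_count = sum(1 for _ in reader)

        print(row_count)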

    Read the article

  • Would a vector of vectors be contiguous?

    - by user1150989
    I need to allocate a vector of rows, where each row is itself a vector. I know that a single vector would be contiguous. I wanted to know whether a vector of vectors would also be contiguous. Example code is given below:
        vector<long> firstRow;
        firstRow.push_back(0);
        firstRow.push_back(1);

        vector<long> secondRow;
        secondRow.push_back(0);
        secondRow.push_back(1);

        vector< vector<long> > data;
        data.push_back(firstRow);
        data.push_back(secondRow);
    Would the sequence in memory be 0 1 0 1?

    Read the article

  • How to make a section header with a non-rectangular shape without ugly underflow?

    - by mystify
    I made a custom UITableView. Then I made a custom header for its sections. It has round corners. But unfortunately, the rows of the section are visible in those round corners when the header floats over them. I could just give it a background color so the corners are not transparent, but that is not a solution, since my whole table has a background image and the section header can move. Is there any way to move the clipping region for the rows a little bit further down? I mean: they should not appear under that section header.

    Read the article

  • Which of the two ways should I use to insert tags into mysql?

    - by ggfan
    For each ad, I allow users to choose up to 5 tags. Right now, in my database, I have it like...
        Posting_id  TagID
        5           1
        5           2
        5           3
        6           5
        6           1
    But I was wondering if I should make it like...
        Posting_id  TagID
        5           1 2 3
        6           5 1
    The first option makes it much easier to insert and retrieve data. But if I have 100 posts with 3 tags each, that's 300 rows... so a lot more rows. The second option requires using explode(), implode(), etc., but it is much cleaner. Which option should I use, and why? Thanks!
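    As a self-contained illustration of why the one-row-per-tag layout stays easy to query, a sketch that keeps the SQL as plain strings; sqlite3 is used here only so the demo runs anywhere, and the table name and values mirror the question:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE posting_tags (posting_id INTEGER, tag_id INTEGER);
            INSERT INTO posting_tags VALUES (5,1),(5,2),(5,3),(6,5),(6,1);
        """)

        # every posting carrying tag 1: a plain, indexable lookup, no string splitting
        print(conn.execute(
            "SELECT posting_id FROM posting_tags WHERE tag_id = ?", (1,)).fetchall())

        # every tag on posting 5
        print(conn.execute(
            "SELECT tag_id FROM posting_tags WHERE posting_id = ?", (5,)).fetchall())

    The extra rows are not a problem: with indexes on tag_id and posting_id this is exactly the shape databases handle well, whereas a comma-separated column cannot use an index to answer "which postings have tag 1".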

    Read the article

  • How to remember the prior page before accessing subsequent pages across frames

    - by Ricky
    Hi guys: I have two frames, say A and B. Clicking a link in A will trigger the page in B to change from URL_A to URL_B. How do I remember URL_A, so that when users click the cancel button in URL_B, they can go back to URL_A? How do I get mainFrame's URL in fraTopMenu?
        <frameset rows="60,*" cols="*" frameborder="no" border="0" framespacing="0">
          <frame src="/Common/Manager/TopMenu.aspx" name="fraTopMenu" scrolling="no" noresize="noresize" id="fraTopMenu" title="" />
          <frameset rows="*" cols="185,*" framespacing="0" frameborder="no" border="0">
            <frame src="/Common/Manager/LeftMenu.aspx" name="leftFrame" id="leftFrame" title="" />
            <frame src="<%= MainUrl %>" name="mainFrame" id="mainFrame" />
          </frameset>
        </frameset>

    Read the article

  • Not getting a return value

    - by scottO
    I am trying to get a return value and it keeps giving me an error. I am trying to grab the "roleid" after the username has been validated by sending it the username. I can't figure out what I am doing wrong.
        public string ValidateRole(string sUsername)
        {
            string matchstring = "SELECT roleid FROM tblUserRoles WHERE UserName='" + sUsername + "'";
            SqlCommand cmd = new SqlCommand(matchstring);
            cmd.Connection = new SqlConnection("Data Source=(local);Initial Catalog="mydatabase";Integrated Security=True");
            cmd.Connection.Open();
            cmd.CommandType = CommandType.Text;
            SqlDataAdapter sda = new SqlDataAdapter();
            DataTable dt = new DataTable();
            sda.SelectCommand = cmd;
            sda.Fill(dt);
            string match;
            if (dt.Rows.Count > 0)
            {
                foreach (DataRow row in dt.Rows)
                {
                    match = row["roleid"].ToString();
                    return match;
                }
            }
            else
            {
                match = "fail";
                return match;
            }
        }

    Read the article

  • hadoop - large database query

    - by Mastergeek
    Situation: I have a Postgres DB that contains a table with several million rows, and I'm trying to query all of those rows for a MapReduce job. From the research I've done on DBInputFormat, Hadoop might try to use the same query again for a new mapper, and since these queries take a considerable amount of time I'd like to prevent this in one of two ways that I've thought up: 1) Limit the job to only run 1 mapper that queries the whole table and call it good, or 2) Somehow incorporate an offset in the query so that if Hadoop does try to use a new mapper it won't grab the same stuff. Option (1) seems more promising, but I don't know if such a configuration is possible. Option (2) sounds nice in theory, but I have no idea how I would keep track of the mappers being made, or whether it is at all possible to detect that and reconfigure. Help is appreciated; I'm mainly looking for a way to pull all of the DB table data and not have several copies of the same query running, because that would be a waste of time.

    Read the article

  • How do I select from a stored procedure in Sybase?

    - by Nick Fortescue
    My DBA has created a stored procedure for me in a Sybase database, for which I don't have the definition. If I run it, it returns a result set with a set of columns and values. I would like to SELECT further to reduce the rows in the result set. Is this possible? From this question it seems like I could insert the results into a temporary table, but I'm not sure I've got permission to do that. Is there any way I can SELECT certain rows, or if not, can someone give me example code for simulating this with a temporary table?

    Read the article
