Search Results

Search found 33538 results on 1342 pages for 'select query'.


  • Select all items of union but not the last one!

    - by 100r
    I have a few textboxes, dropdown lists, etc. They have their own CSS classes. I want to select all elements of the specific class(es), but NOT the last element from the group of all those classes.

        <asp:TextBox ID="TextBox1" runat="server" CssClass="class1"></asp:TextBox>
        <asp:TextBox ID="TextBox2" runat="server" CssClass="class2"></asp:TextBox>
        <asp:TextBox ID="TextBox3" runat="server" CssClass="class2"></asp:TextBox>

    I want to select only TextBox1 and TextBox2, not TextBox3! The selector should be something like $("(.class1,.class1):not(:last)") or something like $(".class1,.class1").filter(":not(:last)"), but of course none of it is working :) Any suggestions? Thanks in advance!
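    Note that both attempts list .class1 twice, so the class2 textboxes are never matched at all. With both classes in the selector group, dropping the final element of the combined matched set should work (a sketch, not tested against the rendered markup; the border styling is just to make the selection visible):

        // Match both classes in document order, then exclude the last element of the set
        $(".class1, .class2").not(":last").css("border", "1px solid red");

        // Equivalent: slice() keeps everything except the final matched element
        $(".class1, .class2").slice(0, -1).css("border", "1px solid red");

    ASP.NET rewrites the client IDs of these TextBoxes, but the CssClass value comes through unchanged on the rendered <input> elements, so class selectors remain reliable here.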

    Read the article

  • How do you increase Internet Explorer 7's "Select as you type" timeout for comboboxes?

    - by Iain Fraser
    In Internet Explorer 7, you can select options from comboboxes by typing the first few letters of the value you're looking for. However, some people in our organisation are a bit slow and can't type their selection quickly enough, with the result that the timeout is triggered and the "select as you type" process starts all over again. Example: if I type A-R-M-A (looking for Armadale), then wait half a second and type D, I'll get selections beginning with the letter D. What I want to do is increase this timeout to allow for slow typists. (We're in a corporate environment, so rolling out these changes to all machines won't be a problem.)

    Read the article

  • Insert into ... Select *, how to ignore identity?

    - by Haoest
    I have a temp table with the exact structure of a concrete table T. It was created like this:

        select top 0 * into #tmp from T

    After processing and filling in content into #tmp, I want to copy the content back to T like this:

        insert into T select * from #tmp

    This is okay as long as T doesn't have an identity column, but in my case it does. Is there any way I can ignore the auto-increment identity column from #tmp when I copy to T? My motivation is to avoid having to spell out every column name in the Insert Into list. EDIT: toggling identity_insert wouldn't work, because the pkeys in #tmp may collide with those in T if rows were inserted into T outside of my script; that is, if #tmp has auto-incremented the pkey to sync with T's in the first place.
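    One workaround (a sketch, assuming SQL Server 2005 or later; dbo.T and #tmp are the names from the question): build the non-identity column list from sys.columns and run the INSERT dynamically, so no column names are hard-coded.

        DECLARE @cols nvarchar(max), @sql nvarchar(max);

        -- Collect every column of T except the identity column, in ordinal order
        SELECT @cols = COALESCE(@cols + N', ', N'') + QUOTENAME(name)
        FROM sys.columns
        WHERE object_id = OBJECT_ID(N'dbo.T') AND is_identity = 0
        ORDER BY column_id;

        -- Insert the same column list from #tmp back into T
        SET @sql = N'INSERT INTO dbo.T (' + @cols + N') SELECT ' + @cols + N' FROM #tmp;';
        EXEC (@sql);

    The identity column is simply omitted from both lists, so T assigns fresh values and no collisions with existing rows can occur.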

    Read the article

  • How to select the parent object of a hyperlink whose href matches the requested page/file name using jQuery

    - by ARS
    How to select the parent object of a hyperlink whose href matches the requested page/file name using jQuery? I have the following code:

        <div>
            <div class="menu-head">
                <a href="empdet.aspx">employees</a>
                <a href="custdet.aspx">customers</a>
            </div>
            <div class="menu-head">
                <a href="depdet.aspx">departments</a>
            </div>
        </div>

    I want jQuery to change the color of the parent div corresponding to a hyperlink. If the user is browsing custdet.aspx, the respective parent div background should be changed to red. Edit: I have a method to retrieve the file name. I just need the right selector to select the parent.
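    A possible selector (a sketch; `page` stands in for whatever the asker's file-name method returns): use an attribute-equals selector on the anchors, then walk up to the containing div with .closest().

        // page is assumed to hold the current file name, e.g. "custdet.aspx"
        var page = window.location.pathname.split("/").pop();

        $('.menu-head a[href="' + page + '"]')
            .closest('.menu-head')              // the parent div of the matching link
            .css('background-color', 'red');

    .closest() (jQuery 1.3+) is safer than .parent() here, since it keeps working even if the anchors are later wrapped in another element.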

    Read the article

  • User Interface. Multiple select with priority.

    - by Andrew Florko
    I'm designing a user interface and want to ask your advice on how to make it more user-friendly. Please share any suggestions, and if you have ever seen an implementation of something similar, please share the link. The setting is a university: there are 40+ specialities grouped into 5 faculties. The user chooses several he is interested in and then orders them by priority. For example, I am interested in "programming microcontrollers", "system analysis" and "experimental physics". I must find them quickly in the "programming faculty", select them, and then order them: what I prefer most, and what I prefer less than the others I select. Any ideas welcome :)

    Read the article

  • How to select a specific number of child entities instead of all of them in Entity Framework 3.5?

    - by Sasha
    Hi all, I am wondering how I can select a specific number of child objects instead of taking them all with Include. Let's say I have a 'Group' object and I need to select the last ten students that joined the group. When I use .Include("Students"), EF includes all students. I was trying to use Take(10), but I am pretty new to EF and programming as well, so I couldn't figure it out. Any suggestions? UPDATE: OK, I have the Group object already retrieved from the db like this:

        Group group = db.Groups.FirstOrDefault(x => x.GroupId == id);

    I know that I can add an Include("Students") statement, but that would bring ALL students, and their number could be quite big, whereas I need only the freshest 10 students. Can I do something like this:

        var groupWithStudents = group.Students.OrderByDescending(//...).Take(10);

    The problem with this is that Take no longer appears in IntelliSense. Is this clear enough? Thanks for responses.
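    One way that should work (a sketch; db.Students and the JoinDate property are hypothetical names for the entity set and the column that records when a student joined): query the children through the context rather than through the loaded Group, so Take(10) is translated into the SQL sent to the server. The missing IntelliSense entry usually just means `using System.Linq;` is absent from the file.

        using System.Linq;  // Take/OrderByDescending are LINQ extension methods

        // Fetch only the ten most recent students of the group from the database
        var lastTen = db.Students
                        .Where(s => s.Group.GroupId == id)
                        .OrderByDescending(s => s.JoinDate)   // hypothetical column
                        .Take(10)
                        .ToList();

    Calling Take(10) on group.Students would also compile once System.Linq is imported, but it runs in memory after all students have been loaded, which is exactly what the asker wants to avoid.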

    Read the article

  • How do I add line-height to a <select> using jQuery?

    - by Rohan
    I have an element:

        <select name="dropdown" id="dropdown">
            <option>Option 1</option>
            <option>Option 2</option>
            <option>Option 3</option>
            <option>Option 4</option>
        </select>

    Now if I add a CSS line-height property to the dropdown, it doesn't work. How would I use jQuery to style this? I prefer not to use plugins, because this is the only styling I wish to apply.
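    For what it's worth, setting the property from jQuery runs into the same wall: most browsers ignore line-height on native <select> elements, so a height/padding combination is the usual stand-in (a sketch; the pixel values are arbitrary):

        // line-height is typically ignored on native selects; approximate it instead
        $("#dropdown").css({
            "height": "32px",
            "padding": "6px 4px",
            "font-size": "14px"
        });

    If the exact line spacing matters, the only reliable route is replacing the native control with a styled list, which is what the plugins the asker wants to avoid do under the hood.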

    Read the article

  • INSERT INTO ... SELECT ... vs dumping/loading a file in MySQL

    - by Daniel Huckstep
    What are the implications of using an INSERT INTO foo ... SELECT FROM bar JOIN baz ... style insert statement, versus using the same SELECT statement to dump (bar, baz) to a file and then inserting into foo by loading the file? In my messing around, I haven't seen a huge difference. I would assume the former would use more memory, but the machine this runs on has 8GB of RAM, and I never even see it go past half used. Are there any huge (or long-term) performance implications that I'm not seeing? Advantages/disadvantages of either?
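    For reference, the dump/load route looks like this in MySQL (a sketch; the join column bar.baz_id is a hypothetical name, and the server needs the FILE privilege plus write access to the path):

        -- Dump the joined rows to a flat file on the server
        SELECT bar.*, baz.*
        INTO OUTFILE '/tmp/foo_dump.csv'
        FIELDS TERMINATED BY ',' ENCLOSED BY '"'
        FROM bar JOIN baz ON bar.baz_id = baz.id;

        -- Bulk-load the file into foo
        LOAD DATA INFILE '/tmp/foo_dump.csv'
        INTO TABLE foo
        FIELDS TERMINATED BY ',' ENCLOSED BY '"';

    LOAD DATA INFILE can be faster for very large row counts because it skips per-row SQL parsing, but it adds a round trip through the file system and gives up the single-statement behaviour of INSERT ... SELECT.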

    Read the article

  • PHP, MySQL - would results-array shuffle be quicker than "select... order by rand()"?

    - by sombe
    I've been reading a lot about the disadvantages of using "order by rand", so I don't need an update on that. I was thinking: since I only need a limited number of rows retrieved from the db to be randomized, maybe I should do:

        $r = $db->query("select * from table limit 500");
        for ($i = 0; $i < 500; $i++)
            $arr[$i] = mysqli_fetch_assoc($r);
        shuffle($arr);

    (I know this only randomizes the first 500 rows, be it.) Would that be faster than:

        $r = $db->query("select * from table order by rand() limit 500");

    Let me just mention, say the db tables were packed with more than... 10,000 rows. "Why don't you do it yourself?!" - well, I have, but I'm looking for your experienced opinion. Thanks!
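    A quick way to settle it empirically (a sketch, assuming $db is a mysqli connection and mysqlnd is available for fetch_all):

        $t0 = microtime(true);
        $rows = $db->query("SELECT * FROM `table` LIMIT 500")->fetch_all(MYSQLI_ASSOC);
        shuffle($rows);                       // randomize in PHP
        $t1 = microtime(true);

        $db->query("SELECT * FROM `table` ORDER BY RAND() LIMIT 500")->fetch_all(MYSQLI_ASSOC);
        $t2 = microtime(true);

        printf("shuffle: %.4fs, ORDER BY RAND(): %.4fs\n", $t1 - $t0, $t2 - $t1);

    The gap grows with table size: ORDER BY RAND() must generate a random value for every row in the table and sort them all, while LIMIT 500 without an ORDER BY just streams the first 500 rows the engine finds (which, as noted, is not a uniform sample of the whole table).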

    Read the article

  • Why does this MSDN example for Func<> delegate have a superfluous Select() call?

    - by Dan
    The MSDN gives this code example in the article on the Func Generic Delegate:

        Func<String, int, bool> predicate = (str, index) => str.Length == index;
        String[] words = { "orange", "apple", "Article", "elephant", "star", "and" };
        IEnumerable<String> aWords = words.Where(predicate).Select(str => str);
        foreach (String word in aWords)
            Console.WriteLine(word);

    I understand what all this is doing. What I don't understand is the Select(str => str) bit. Surely that's not needed? If you leave it out and just have

        IEnumerable<String> aWords = words.Where(predicate);

    then you still get an IEnumerable back that contains the same results, and the code prints the same thing. Am I missing something, or is the example misleading?
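    An easy sanity check (a sketch): an identity Select projects each element to itself, so the two pipelines yield identical sequences.

        Func<string, int, bool> predicate = (str, index) => str.Length == index;
        string[] words = { "orange", "apple", "Article", "elephant", "star", "and" };

        var withSelect    = words.Where(predicate).Select(str => str); // identity projection
        var withoutSelect = words.Where(predicate);

        Console.WriteLine(withSelect.SequenceEqual(withoutSelect));    // True

    The identity Select adds one extra iterator layer at runtime but changes neither the element type nor the results, so in this example it is indeed superfluous.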

    Read the article

  • Implementing "select distinct ... from ..." over a list of Python dictionaries

    - by daveslab
    Hi folks, here is my problem: I have a list of Python dictionaries of identical form that are meant to represent the rows of a table in a database, something like this:

        [{'ID': 1, 'NAME': 'Joe', 'CLASS': '8th', ...},
         {'ID': 1, 'NAME': 'Joe', 'CLASS': '11th', ...},
         ...]

    I have already written a function to get the unique values for a particular field in this list of dictionaries, which was trivial. That function implements something like:

        select distinct NAME from ...

    However, I want to be able to get the list of multiple unique fields, similar to:

        select distinct NAME, CLASS from ...

    which I am finding to be non-trivial. Is there an algorithm or a Python built-in function to help me with this quandary? Before you suggest loading the CSV files into a SQLite table or something similar: that is not an option for the environment I'm in, and trust me, that was my first thought.
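    A dict row is unhashable, but a tuple of the selected fields is not, so a set of tuples gives the distinct combinations (a sketch):

        def distinct(rows, *fields):
            # Emulate SELECT DISTINCT field1, field2, ... over a list of dicts
            seen = set()
            result = []
            for row in rows:
                key = tuple(row[f] for f in fields)
                if key not in seen:          # first time this combination appears
                    seen.add(key)
                    result.append(key)
            return result

        rows = [{'ID': 1, 'NAME': 'Joe', 'CLASS': '8th'},
                {'ID': 1, 'NAME': 'Joe', 'CLASS': '11th'}]
        print(distinct(rows, 'NAME', 'CLASS'))   # [('Joe', '8th'), ('Joe', '11th')]

    Order of first appearance is preserved; if order doesn't matter, drop the result list and just return the set.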

    Read the article

  • How can I update a column in a table with the result of a select statement that uses the row being updated?

    - by Sailing Judo
    This SQL statement example is very close to what I think I need...

        update table1
        set value1 = x.value1
        from (select value1, code from table2 where code = something) as x

    However, what I need to do is change the "something" in the above example to a value from the row that is being updated. For example, I tried this, but it didn't work:

        update table1 A
        set value1 = x.value1
        from (select value1, code from table2 where code = A.something) as x

    This is a one-time operation to update an existing table, and I'm not really looking for a high-performance way to do this. Any solution that gets the task done is good enough.
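    The usual fix (a sketch in T-SQL; `something` is the correlation column from the question): skip the derived table and join table2 directly, so its rows can reference the row being updated.

        UPDATE A
        SET    A.value1 = x.value1
        FROM   table1 AS A
        JOIN   table2 AS x
            ON x.code = A.something;

    A derived table can't see the outer alias, which is why the second attempt failed; a plain JOIN in the FROM clause of the UPDATE can.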

    Read the article

  • Codeigniter: how do I select count when `$query->num_rows()` doesn't work for me?

    - by mOrloff
    I have a query which is returning a sum, so naturally it returns one row. I need to count the number of records in the DB which made that sum. Here's a sample of the type of query I am talking about (MySQL):

        SELECT i.id, i.vendor_quote_id, i.product_id_requested,
               SUM(i.quantity_on_hand) AS qty,
               COUNT(i.quantity_on_hand) AS count
        FROM vendor_quote_item AS i
        JOIN vendor_quote_container AS c ON i.vendor_quote_id = c.id
        LEFT JOIN company_types ON company_types.company_id = c.company_id
        WHERE company_types.company_type = 'f'
          AND i.product_id_requested = 12345678

    I have found and am now using the select_min(), select_max(), and select_sum() functions, but my COUNT() is still hard-coded in. The main problem is that I am having to specify the table name in a tightly coupled manner with something like

        $this->db->select('COUNT(myDbPrefix_vendor_quote_item.quantity_on_hand) AS count');

    which kills portability and makes switching environments a PIA. How can/should I get the count values I am after with CI in an uncoupled way?
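    One decoupled option (a sketch against CodeIgniter's Active Record): reference the table through an alias declared in from(), which is where CI applies the dbprefix, and pass FALSE as select()'s second argument so the raw COUNT() expression isn't escaped.

        $this->db->select('i.id, i.vendor_quote_id, i.product_id_requested');
        $this->db->select_sum('i.quantity_on_hand', 'qty');
        $this->db->select('COUNT(i.quantity_on_hand) AS count', FALSE); // FALSE = no escaping
        $this->db->from('vendor_quote_item AS i');   // dbprefix is prepended here
        $this->db->join('vendor_quote_container AS c', 'i.vendor_quote_id = c.id');
        $this->db->join('company_types', 'company_types.company_id = c.company_id', 'left');
        $this->db->where('company_types.company_type', 'f');
        $this->db->where('i.product_id_requested', 12345678);
        $query = $this->db->get();

    Because only from() and join() name real tables, the prefix never has to appear in the select list at all.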

    Read the article

  • When is a SQL function not a function?

    - by Rob Farley
    Should SQL Server even have functions? (Oh yeah – this is a T-SQL Tuesday post, hosted this month by Brad Schulz)

    Functions serve an important part of programming, in almost any language. A function is a piece of code that is designed to return something, as opposed to a piece of code which isn't designed to return anything (which is known as a procedure). SQL Server is no different. You can call stored procedures, even from within other stored procedures, and you can call functions and use these in other queries. Stored procedures might query something, and therefore 'return data', but a function in SQL is considered to have the type of the thing returned, and can be used accordingly in queries. Consider the internal GETDATE() function.

        SELECT GETDATE(), SomeDatetimeColumn FROM dbo.SomeTable;

    There's no logical difference between the field that is being returned by the function and the field that's being returned by the table column. Both are the datetime field – if you didn't have inside knowledge, you wouldn't necessarily be able to tell which was which. And so as developers, we find ourselves wanting to create functions that return all kinds of things – functions which look up values based on codes, functions which do string manipulation, and so on.

    But it's rubbish. Ok, it's not all rubbish, but it mostly is. And this isn't even considering the SARGability impact. It's far more significant than that. (When I say the SARGability aspect, I mean "because you're unlikely to have an index on the result of some function that's applied to a column, so try to invert the function and query the column in an unchanged manner.")

    I'm going to consider the three main types of user-defined functions in SQL Server:

        - Scalar
        - Inline Table-Valued
        - Multi-statement Table-Valued

    I could also look at user-defined CLR functions, including aggregate functions, but not today. I figure that most people don't tend to get around to doing CLR functions, and I'm going to focus on the T-SQL-based user-defined functions.

    Most people split these types of function up into two types. So do I. Except that most people pick them based on 'scalar or table-valued'. I'd rather go with 'inline or not'. If it's not inline, it's rubbish. It really is.

    Let's start by considering the two kinds of table-valued function, and compare them. These functions are going to return the sales for a particular salesperson in a particular year, from the AdventureWorks database.
        CREATE FUNCTION dbo.FetchSales_inline(@salespersonid int, @orderyear int)
        RETURNS TABLE
        AS
        RETURN (
            SELECT e.LoginID as EmployeeLogin, o.OrderDate, o.SalesOrderID
            FROM Sales.SalesOrderHeader AS o
            LEFT JOIN HumanResources.Employee AS e
                ON e.EmployeeID = o.SalesPersonID
            WHERE o.SalesPersonID = @salespersonid
            AND o.OrderDate >= DATEADD(year, @orderyear-2000, '20000101')
            AND o.OrderDate < DATEADD(year, @orderyear-2000+1, '20000101')
        );
        GO

        CREATE FUNCTION dbo.FetchSales_multi(@salespersonid int, @orderyear int)
        RETURNS @results TABLE (
            EmployeeLogin nvarchar(512),
            OrderDate datetime,
            SalesOrderID int
        )
        AS
        BEGIN
            INSERT @results (EmployeeLogin, OrderDate, SalesOrderID)
            SELECT e.LoginID, o.OrderDate, o.SalesOrderID
            FROM Sales.SalesOrderHeader AS o
            LEFT JOIN HumanResources.Employee AS e
                ON e.EmployeeID = o.SalesPersonID
            WHERE o.SalesPersonID = @salespersonid
            AND o.OrderDate >= DATEADD(year, @orderyear-2000, '20000101')
            AND o.OrderDate < DATEADD(year, @orderyear-2000+1, '20000101')
            ;
            RETURN
        END;
        GO

    You'll notice that I'm being nice and responsible with the use of the DATEADD function, so that I have SARGability on the OrderDate filter.

    Regular readers will be hoping I'll show what's going on in the execution plans here. Here I've run two SELECT * queries with the "Show Actual Execution Plan" option turned on. Notice that the 'Query cost' of the multi-statement version is just 2% of the 'Batch cost'. But also notice there's trickery going on. And it's nothing to do with that extra index that I have on the OrderDate column. Trickery.

    Look at it – clearly, the first plan is showing us what's going on inside the function, but the second one isn't. The second one is blindly running the function, and then scanning the results. There's a Sequence operator which is calling the TVF operator, and then calling a Table Scan to get the results of that function for the SELECT operator. But surely it still has to do all the work that the first one is doing...

    To see what's actually going on, let's look at the Estimated plan. Now, we see the same plans (almost) that we saw in the Actuals, but we have an extra one – the one that was used for the TVF. Here's where we see the inner workings of it. You'll probably recognise the right-hand side of the TVF's plan as looking very similar to the first plan – but it's now being called by a stack of other operators, including an INSERT statement to be able to populate the table variable that the multi-statement TVF requires. And the cost of the TVF is 57% of the batch!

    But it gets worse. Let's consider what happens if we don't need all the columns. We'll leave out the EmployeeLogin column. Here, we see that the inline function call has been simplified down. It doesn't need the Employee table. The join is redundant and has been eliminated from the plan, making it even cheaper. But the multi-statement plan runs the whole thing as before, only removing the extra column when the Table Scan is performed.

    A multi-statement function is a lot more powerful than an inline one. An inline function can only be the result of a single sub-query. It's essentially the same as a parameterised view, because views demonstrate this same behaviour of extracting the definition of the view and using it in the outer query. A multi-statement function is clearly more powerful because it can contain far more complex logic. But a multi-statement function isn't really a function at all. It's a stored procedure.
    It's wrapped up like a function, but behaves like a stored procedure. It would be completely unreasonable to expect that a stored procedure could be simplified down to recognise that not all the columns might be needed, but yet this is part of the pain associated with this procedural function situation. The biggest clue that a multi-statement function is more like a stored procedure than a function is the "BEGIN" and "END" statements that surround the code. If you try to create a multi-statement function without these statements, you'll get an error – they are very much required. When I used to present on this kind of thing, I even used to call it "The Dangers of BEGIN and END", and yes, I've written about this type of thing before in a similarly-named post over at my old blog.

    Now how about scalar functions... Suppose we wanted a scalar function to return the count of these.

        CREATE FUNCTION dbo.FetchSales_scalar(@salespersonid int, @orderyear int)
        RETURNS int
        AS
        BEGIN
            RETURN (
                SELECT COUNT(*)
                FROM Sales.SalesOrderHeader AS o
                LEFT JOIN HumanResources.Employee AS e
                    ON e.EmployeeID = o.SalesPersonID
                WHERE o.SalesPersonID = @salespersonid
                AND o.OrderDate >= DATEADD(year, @orderyear-2000, '20000101')
                AND o.OrderDate < DATEADD(year, @orderyear-2000+1, '20000101')
            );
        END;
        GO

    Notice the evil words? They're required. Try to remove them, you just get an error. That's right – any scalar function is procedural, despite the fact that you wrap up a sub-query inside that RETURN statement. It's as ugly as anything. Hopefully this will change in future versions.

    Let's have a look at how this is reflected in an execution plan. Here's a query, its Actual plan, and its Estimated plan:

        SELECT e.LoginID, y.year, dbo.FetchSales_scalar(p.SalesPersonID, y.year) AS NumSales
        FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year)
        CROSS JOIN Sales.SalesPerson AS p
        LEFT JOIN HumanResources.Employee AS e
            ON e.EmployeeID = p.SalesPersonID;

    We see here that the cost of the scalar function is about twice that of the outer query. Nicely, the query optimizer has worked out that it doesn't need the Employee table, but that's a bit of a red herring here. There's actually something way more significant going on.

    If I look at the properties of that UDF operator, it tells me that the Estimated Subtree Cost is 0.337999. If I just run the query

        SELECT dbo.FetchSales_scalar(281, 2003);

    we see that the UDF cost is still unchanged. You see, this 0.337999 is the cost of running the scalar function ONCE. But when we ran that query with the CROSS JOIN in it, we returned quite a few rows. 68 in fact. Could've been a lot more, if we'd had more salespeople or more years.

    And so we come to the biggest problem. This procedure (I don't want to call it a function) is getting called 68 times – each one about twice as expensive as the outer query. And because it's calling it in a separate context, there is even more overhead that I haven't considered here. The cheek of it, to say that the Compute Scalar operator here costs 0%! I know a number of IT projects that could've used that kind of costing method, but that's another story that I'm not going to go into here.

    Let's look at a better way. Suppose our scalar function had been implemented as an inline one. Then it could have been expanded out like a sub-query.
    It could've run something like this:

        SELECT e.LoginID, y.year,
            (SELECT COUNT(*)
             FROM Sales.SalesOrderHeader AS o
             LEFT JOIN HumanResources.Employee AS e
                 ON e.EmployeeID = o.SalesPersonID
             WHERE o.SalesPersonID = p.SalesPersonID
             AND o.OrderDate >= DATEADD(year, y.year-2000, '20000101')
             AND o.OrderDate < DATEADD(year, y.year-2000+1, '20000101')
            ) AS NumSales
        FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year)
        CROSS JOIN Sales.SalesPerson AS p
        LEFT JOIN HumanResources.Employee AS e
            ON e.EmployeeID = p.SalesPersonID;

    Don't worry too much about the Scan of the SalesOrderHeader underneath a Nested Loop. If you remember from plenty of other posts on the matter, execution plans don't push the data through. That Scan only runs once. The Index Spool sucks the data out of it and populates a structure that is used to feed the Stream Aggregate. The Index Spool operator gets called 68 times, but the Scan only once (the Number of Executions property demonstrates this). Here, the Query Optimizer has a full picture of what's being asked, and can make the appropriate decision about how it accesses the data. It can simplify it down properly.

    To get this kind of behaviour from a function, we need it to be inline. But without inline scalar functions, we need to make our function be table-valued. Luckily, that's ok.

        CREATE FUNCTION dbo.FetchSales_inline2(@salespersonid int, @orderyear int)
        RETURNS table
        AS
        RETURN (
            SELECT COUNT(*) as NumSales
            FROM Sales.SalesOrderHeader AS o
            LEFT JOIN HumanResources.Employee AS e
                ON e.EmployeeID = o.SalesPersonID
            WHERE o.SalesPersonID = @salespersonid
            AND o.OrderDate >= DATEADD(year, @orderyear-2000, '20000101')
            AND o.OrderDate < DATEADD(year, @orderyear-2000+1, '20000101')
        );
        GO

    But we can't use this as a scalar. Instead, we need to use it with the APPLY operator.

        SELECT e.LoginID, y.year, n.NumSales
        FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year)
        CROSS JOIN Sales.SalesPerson AS p
        LEFT JOIN HumanResources.Employee AS e
            ON e.EmployeeID = p.SalesPersonID
        OUTER APPLY dbo.FetchSales_inline2(p.SalesPersonID, y.year) AS n;

    And now, we get the plan that we want for this query. All we've done is tell the function that it's returning a table instead of a single value, and removed the BEGIN and END statements. We've had to name the column being returned, but what we've gained is an actual inline simplifiable function. And if we wanted it to return multiple columns, it could do that too. I really consider this function to be superior to the scalar function in every way. It does need to be handled differently in the outer query, but in many ways it's a more elegant method there too. The function calls can be put amongst the FROM clause, where they can then be used in the WHERE or GROUP BY clauses without fear of calling the function multiple times (another horrible side effect of functions).

    So please. If you see BEGIN and END in a function, remember it's not really a function, it's a procedure. And then fix it.

    @rob_farley

    Read the article

  • How to best design a date/geographic proximity query on GAE?

    - by Dane
    Hi all, I'm building a directory for finding athletic tournaments on GAE with web2py and a Flex front end. The user selects a location, a radius, and a maximum date from a set of choices. I have a basic version of this query implemented, but it's inefficient and slow. One way I know I can improve it is by condensing the many individual queries I'm using to assemble the objects into bulk queries. I just learned that was possible. But I'm also thinking about a more extensive redesign that utilizes memcache.

    The main problem is that I can't query the datastore by location, because GAE won't allow multiple numerical comparison statements (<, <=, >=, >) in one query. I'm already using one for date, and I'd need TWO to check both latitude and longitude, so it's a no-go. Currently, my algorithm looks like this:

        1.) Query by date and select
        2.) Use the destination function from geopy's distance module to find the max and min latitudes and longitudes for the supplied distance
        3.) Loop through results and remove all with lat/lng outside max/min
        4.) Loop through again and use the distance function to check exact distance, because step 2 will include some areas outside the radius. Remove results outside the supplied distance (is this 2/3/4 combination inefficient?)
        5.) Assemble many-to-many lists and attach to objects (this is where I need to switch to bulk operations)
        6.) Return to client

    Here's my plan for using memcache... let me know if I'm way out in left field on this, as I have no prior experience with memcache or server caching in general.

        - Keep a list in the cache filled with "geo objects" that represent all my data. These have five properties: latitude, longitude, event_id, event_type (in anticipation of expanding beyond tournaments), and start_date. This list will be sorted by date.
        - Also keep a dict of pointers in the cache which represent the start and end indices in the cache for all the date ranges my app uses (next week, 2 weeks, month, 3 months, 6 months, year, 2 years).
        - Have a scheduled task that updates the pointers daily at 12am.
        - Add new inserts to the cache as well as the datastore; update pointers.

    Using this design, the algorithm would now look like:

        1.) Use pointers to slice off the appropriate chunk of the list based on the supplied date.
        2-4.) Same as the above algorithm, except with geo objects
        5.) Use a bulk operation to select full tournaments using the remaining geo objects' event_ids
        6.) Assemble many-to-manys
        7.) Return to client

    Thoughts on this approach? Many thanks for reading and any advice you can give. -Dane
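    For what it's worth, steps 2-4 can be compact (a sketch, assuming geopy is available and each event exposes latitude/longitude attributes; within_radius is a hypothetical helper name):

        from geopy.distance import distance

        def within_radius(events, center, radius_km):
            # Step 2: derive the lat/lng bounding box from the radius
            d = distance(kilometers=radius_km)
            north = d.destination(center, 0).latitude
            south = d.destination(center, 180).latitude
            east  = d.destination(center, 90).longitude
            west  = d.destination(center, 270).longitude
            # Step 3: cheap bounding-box filter
            boxed = [e for e in events
                     if south <= e.latitude <= north and west <= e.longitude <= east]
            # Step 4: exact distance check on the survivors only
            return [e for e in boxed
                    if distance(center, (e.latitude, e.longitude)).kilometers <= radius_km]

    The 2/3/4 combination is a standard coarse-filter/fine-filter pattern, so it isn't inherently inefficient: the cheap box test discards most candidates before the comparatively expensive exact-distance computation runs.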

    Read the article

  • ActiveRecord + CodeIgniter - Return single value from query, not in array form.

    - by txmail
    Say you construct an ActiveRecord query that will always return just a single value: how do you address that single value directly instead of getting an array in return? For instance, I am using an ActiveRecord query to return the SUM of a single column, so it will only return this one SUM. Instead of having to parse the array, is there a way to have the function return that value directly?
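    One way (a sketch; `amount` and `transactions` are hypothetical column and table names): fetch the first result row as an object and return the aliased field from it.

        function column_total()
        {
            $this->db->select_sum('amount', 'total');   // SELECT SUM(amount) AS total
            $query = $this->db->get('transactions');
            return $query->row()->total;                // row() returns the first row as an object
        }

    row_array() is the associative-array equivalent if object access isn't wanted.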

    Read the article

  • How can I fill a SQL Server table from Excel using only a SQL query?

    - by Phsika
    How can I do that with Microsoft.ACE.OLEDB.12.0?

        CREATE TABLE [dbo].[Addresses_Temp] (
            [FirstName] VARCHAR(20),
            [LastName]  VARCHAR(20),
            [Address]   VARCHAR(50),
            [City]      VARCHAR(30),
            [State]     VARCHAR(2),
            [ZIP]       VARCHAR(10)
        )
        GO

        INSERT INTO [dbo].[Addresses_Temp] (
            [FirstName], [LastName], [Address], [City], [State], [ZIP]
        )
        SELECT [FirstName], [LastName], [Address], [City], [State], [ZIP]
        FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                        'Excel 12.0;Database=C:\Source\Addresses.xlsx;IMEX=1',
                        'SELECT * FROM [Sayfa1$]')

    How can I do that?
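    If the statement itself fails with an access error, the usual missing piece is that ad hoc distributed queries are disabled by default (a sketch; running it requires sysadmin rights):

        -- Allow OPENROWSET/OPENDATASOURCE ad hoc connections
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
        RECONFIGURE;

    The ACE provider must also be installed on the SQL Server machine itself (not the client), and its 32/64-bit build has to match the SQL Server instance.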

    Read the article

  • Why is Oracle using a skip scan for this query?

    - by Jason Baker
    Here's the tkprof output for a query that's running extremely slowly (WARNING: it's long :-) ):

        SELECT mbr_comment_idn, mbr_crt_dt, mbr_data_source, mbr_dol_bl_rmo_ind,
               mbr_dxcg_ctl_member, mbr_employment_start_dt, mbr_employment_term_dt,
               mbr_entity_active, mbr_ethnicity_idn, mbr_general_health_status_code,
               mbr_hand_dominant_code, mbr_hgt_feet, mbr_hgt_inches, mbr_highest_edu_level,
               mbr_insd_addr_idn, mbr_insd_alt_id, mbr_insd_name, mbr_insd_ssn_tin,
               mbr_is_smoker, mbr_is_vip, mbr_lmbr_first_name, mbr_lmbr_last_name,
               mbr_marital_status_cd, mbr_mbr_birth_dt, mbr_mbr_death_dt, mbr_mbr_expired,
               mbr_mbr_first_name, mbr_mbr_gender_cd, mbr_mbr_idn, mbr_mbr_ins_type,
               mbr_mbr_isreadonly, mbr_mbr_last_name, mbr_mbr_middle_name, mbr_mbr_name,
               mbr_mbr_status_idn, mbr_mpi_id, mbr_preferred_am_pm, mbr_preferred_time,
               mbr_prv_innetwork, mbr_rep_addr_idn, mbr_rep_name, mbr_rp_mbr_id,
               mbr_same_mbr_ins, mbr_special_needs_cd, mbr_timezone, mbr_upd_dt,
               mbr_user_idn, mbr_wgt, mbr_work_status_idn
        FROM (
            SELECT /*+ FIRST_ROWS(1) */
                   mbr_comment_idn, mbr_crt_dt, mbr_data_source, mbr_dol_bl_rmo_ind,
                   mbr_dxcg_ctl_member, mbr_employment_start_dt, mbr_employment_term_dt,
                   mbr_entity_active, mbr_ethnicity_idn, mbr_general_health_status_code,
                   mbr_hand_dominant_code, mbr_hgt_feet, mbr_hgt_inches, mbr_highest_edu_level,
                   mbr_insd_addr_idn, mbr_insd_alt_id, mbr_insd_name, mbr_insd_ssn_tin,
                   mbr_is_smoker, mbr_is_vip, mbr_lmbr_first_name, mbr_lmbr_last_name,
                   mbr_marital_status_cd, mbr_mbr_birth_dt, mbr_mbr_death_dt, mbr_mbr_expired,
                   mbr_mbr_first_name, mbr_mbr_gender_cd, mbr_mbr_idn, mbr_mbr_ins_type,
                   mbr_mbr_isreadonly, mbr_mbr_last_name, mbr_mbr_middle_name, mbr_mbr_name,
                   mbr_mbr_status_idn, mbr_mpi_id, mbr_preferred_am_pm, mbr_preferred_time,
                   mbr_prv_innetwork, mbr_rep_addr_idn, mbr_rep_name, mbr_rp_mbr_id,
                   mbr_same_mbr_ins, mbr_special_needs_cd, mbr_timezone, mbr_upd_dt,
                   mbr_user_idn, mbr_wgt, mbr_work_status_idn,
                   ROWNUM AS ora_rn
            FROM (
                SELECT mbr.comment_idn AS mbr_comment_idn, mbr.crt_dt AS mbr_crt_dt,
                       mbr.data_source AS mbr_data_source, mbr.dol_bl_rmo_ind AS mbr_dol_bl_rmo_ind,
                       mbr.dxcg_ctl_member AS mbr_dxcg_ctl_member,
                       mbr.employment_start_dt AS mbr_employment_start_dt,
                       mbr.employment_term_dt AS mbr_employment_term_dt,
                       mbr.entity_active AS mbr_entity_active, mbr.ethnicity_idn AS mbr_ethnicity_idn,
                       mbr.general_health_status_code AS mbr_general_health_status_code,
                       mbr.hand_dominant_code AS mbr_hand_dominant_code,
                       mbr.hgt_feet AS mbr_hgt_feet, mbr.hgt_inches AS mbr_hgt_inches,
                       mbr.highest_edu_level AS mbr_highest_edu_level,
                       mbr.insd_addr_idn AS mbr_insd_addr_idn, mbr.insd_alt_id AS mbr_insd_alt_id,
                       mbr.insd_name AS mbr_insd_name, mbr.insd_ssn_tin AS mbr_insd_ssn_tin,
                       mbr.is_smoker AS mbr_is_smoker, mbr.is_vip AS mbr_is_vip,
                       mbr.lmbr_first_name AS mbr_lmbr_first_name,
                       mbr.lmbr_last_name AS mbr_lmbr_last_name,
                       mbr.marital_status_cd AS mbr_marital_status_cd,
                       mbr.mbr_birth_dt AS mbr_mbr_birth_dt, mbr.mbr_death_dt AS mbr_mbr_death_dt,
                       mbr.mbr_expired AS mbr_mbr_expired, mbr.mbr_first_name AS mbr_mbr_first_name,
                       mbr.mbr_gender_cd AS mbr_mbr_gender_cd, mbr.mbr_idn AS mbr_mbr_idn,
                       mbr.mbr_ins_type AS mbr_mbr_ins_type, mbr.mbr_isreadonly AS mbr_mbr_isreadonly,
                       mbr.mbr_last_name AS mbr_mbr_last_name,
                       mbr.mbr_middle_name AS mbr_mbr_middle_name, mbr.mbr_name AS mbr_mbr_name,
                       mbr.mbr_status_idn AS mbr_mbr_status_idn, mbr.mpi_id AS mbr_mpi_id,
                       mbr.preferred_am_pm AS mbr_preferred_am_pm,
                       mbr.preferred_time AS mbr_preferred_time,
                       mbr.prv_innetwork AS mbr_prv_innetwork,
                       mbr.rep_addr_idn AS mbr_rep_addr_idn, mbr.rep_name AS mbr_rep_name,
                       mbr.rp_mbr_id AS mbr_rp_mbr_id, mbr.same_mbr_ins AS mbr_same_mbr_ins,
                       mbr.special_needs_cd AS mbr_special_needs_cd, mbr.timezone AS mbr_timezone,
                       mbr.upd_dt AS mbr_upd_dt, mbr.user_idn AS mbr_user_idn, mbr.wgt AS mbr_wgt,
                       mbr.work_status_idn AS mbr_work_status_idn
                FROM mbr
                JOIN mbr_identfn ON mbr.mbr_idn = mbr_identfn.mbr_idn
                WHERE mbr_identfn.mbr_idn = mbr.mbr_idn
                  AND mbr_identfn.identfd_type = :identfd_type_1
                  AND mbr_identfn.identfd_number = :identfd_number_1
                  AND mbr_identfn.entity_active = :entity_active_1
            )
            WHERE ROWNUM <= :ROWNUM_1
        )
        WHERE ora_rn > :ora_rn_1

        call     count      cpu    elapsed  disk      query  current  rows
        -------  -----  -------  ---------  ----  ---------  -------  ----
        Parse     9936     0.46       0.49     0          0        0     0
        Execute   9936     0.60       0.59     0          0        0     0
        Fetch     9936   329.87     404.00     0  136966922        0     0
        -------  -----  -------  ---------  ----  ---------  -------  ----
        total    29808   330.94     405.09     0  136966922        0     0

        Misses in library cache during parse: 0
        Optimizer mode: FIRST_ROWS
        Parsing user id: 36 (JIVA_DEV)

        Rows  Row Source Operation
        ----  ---------------------------------------------------
           0  VIEW (cr=102 pr=0 pw=0 time=2180 us)
           0    COUNT STOPKEY (cr=102 pr=0 pw=0 time=2163 us)
           0      NESTED LOOPS (cr=102 pr=0 pw=0 time=2152 us)
           0        INDEX SKIP SCAN IDX_MBR_IDENTFN (cr=102 pr=0 pw=0 time=2140 us) (object id 341053)
           0        TABLE ACCESS BY INDEX ROWID MBR (cr=0 pr=0 pw=0 time=0 us)
           0          INDEX UNIQUE SCAN PK_CLAIMANT (cr=0 pr=0 pw=0 time=0 us) (object id 334044)

        Rows  Execution Plan
        ----  ---------------------------------------------------
           0  SELECT STATEMENT MODE: HINT: FIRST_ROWS
           0    VIEW
           0      COUNT (STOPKEY)
           0        NESTED LOOPS
           0          INDEX MODE: ANALYZED (SKIP SCAN) OF 'IDX_MBR_IDENTFN' (INDEX (UNIQUE))
           0          TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'MBR' (TABLE)
           0            INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'PK_CLAIMANT' (INDEX (UNIQUE))

    Based on my reading of Oracle's documentation of skip scans, a skip scan is most useful when the first column of an index has a low number of unique values. The thing is that the first column of this index is a unique primary key. So am I correct in assuming that a skip scan is the wrong thing to do here? Also, what kind of scan should it be doing? Should I do some more hinting for this query? EDIT: I should also point out that the query's where clause uses the columns in IDX_MBR_IDENTFN and no columns other than what's in that index. So as far as I can tell, I'm not skipping any columns.
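    A possible direction (a sketch; whether it helps depends on the actual column order of IDX_MBR_IDENTFN): the skip scan suggests the leading column of that index isn't constrained by the WHERE clause, so an index that leads with the filtered columns would let Oracle use a plain range scan instead.

        -- Hypothetical covering index for the lookup predicates in this query
        CREATE INDEX idx_mbr_identfn_lookup
            ON mbr_identfn (identfd_type, identfd_number, entity_active, mbr_idn);

    With such an index in place, an INDEX RANGE SCAN should replace the INDEX SKIP SCAN without any hinting; skip-scanning an index whose leading column is a unique key effectively probes every distinct key value, which is consistent with the 136 million buffer gets in the Fetch line.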

    Read the article

  • Java Google App Engine Datastore: 'IN' operator available on JDO query filters, as with Python?

    - by Jim Blackler
    This page describes an 'IN' operator that can be used in the GAE Datastore to compare a field against a list of possible matches, not just a single value. However, that is for Python. In Java (App Engine 1.2.5), trying

        query.setFilter("someField IN param");

    on my javax.jdo.Query fires a JDOUserException: 'Portion of expression could not be parsed: IN param'. Is there a way this can be done?
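    The JDO spelling of IN appears to be contains() on a collection parameter, which App Engine's datastore layer translates to the Python-style IN, at least with a sufficiently recent SDK (a sketch; pm is assumed to be an open PersistenceManager and MyEntity a persistent class with a someField property):

        import java.util.Arrays;
        import java.util.List;
        import javax.jdo.Query;

        Query query = pm.newQuery(MyEntity.class);
        // :matchValues is an implicit parameter; contains() is JDOQL's IN
        query.setFilter(":matchValues.contains(someField)");

        @SuppressWarnings("unchecked")
        List<MyEntity> results =
                (List<MyEntity>) query.execute(Arrays.asList("a", "b", "c"));

    Under the hood the datastore runs one query per list element and merges the results, so short lists behave best.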

    Read the article

  • Adding custom query to paginator (KNP Paginator) in Symfony2?

    - by Unni
    I have the following query; how do I write it in Symfony2?

        SELECT Inventory_Stock.id, Inventory_Stock.quantity,
               SUM(InventoryUsage.quantity),
               Inventory_Stock.quantity - SUM(InventoryUsage.quantity) AS Stock
        FROM Inventory_Stock
        LEFT JOIN InventoryUsage ON Inventory_Stock.id = InventoryUsage.InventoryStock_id
        WHERE Inventory_Stock.id = 26 OR Inventory_Stock.id = 27
        GROUP BY Inventory_Stock.id
        ORDER BY Stock DESC

    I need to paginate this as well. It would be great if anyone could help me out, as I have been pulling my hair out over this for a couple of days.
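    One route that may work (a sketch; the service names are the usual Symfony2/KnpPaginator defaults, not taken from the question): run it as native SQL through the Doctrine connection and hand the result array to KnpPaginator, which can paginate plain arrays.

        // Inside a controller action
        $conn = $this->getDoctrine()->getConnection();
        $sql = 'SELECT s.id, s.quantity, SUM(u.quantity), '
             . '       s.quantity - SUM(u.quantity) AS Stock '
             . 'FROM Inventory_Stock s '
             . 'LEFT JOIN InventoryUsage u ON s.id = u.InventoryStock_id '
             . 'WHERE s.id IN (26, 27) '
             . 'GROUP BY s.id '
             . 'ORDER BY Stock DESC';
        $rows = $conn->fetchAll($sql);

        $paginator  = $this->get('knp_paginator');
        $pagination = $paginator->paginate(
            $rows,                                        // KnpPaginator accepts arrays
            $this->get('request')->query->get('page', 1), // current page number
            10                                            // items per page
        );

    Paginating the array means the whole result set is fetched on every request; for large tables, a paginate-able DQL or QueryBuilder query would scale better.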

    Read the article

  • Is there a way to add or remove a line of a LINQ to DB query based on whether the value being checked is null or empty?

    - by RandomBen
    If I have a query like this:

        String Category = HttpContext.Current.Request.QueryString["Product"].ToString();

        IQueryable<ItemFile> pressReleases =
            from file in connection.ItemFile
            where file.Type_ID == 8 && file.Category == Category
            select file;

    is there a way to write this LINQ query so that I do not use the line file.Category == Category if Category is null or empty?
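    The standard approach (a sketch over the same names): compose the query in two steps. A LINQ query isn't executed until it is enumerated, so appending a Where clause conditionally costs nothing.

        IQueryable<ItemFile> pressReleases =
            connection.ItemFile.Where(file => file.Type_ID == 8);

        if (!String.IsNullOrEmpty(Category))
        {
            // Only add the Category filter when a value was actually supplied
            pressReleases = pressReleases.Where(file => file.Category == Category);
        }

    Both predicates end up in the single SQL statement that runs when pressReleases is finally enumerated.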

    Read the article

  • How can I replace/speed up a text search query which uses LIKE ?

    - by Jules
    I'm trying to speed up my query...

        select PadID from Pads
        WHERE (keywords like '%$search%'
            or ProgramName like '%$search%'
            or English45 like '%$search%')
        AND RemovemeDate = '2001-01-01 00:00:00'
        ORDER BY VersionAddDate DESC

    I've done some work already: I have a keywords table, so I can add...

        PadID IN (SELECT PadID FROM Keywords WHERE word = '$search')

    However, it's going to be a nightmare to split up the words from English45 and ProgramName into a word table. Any ideas? EDIT: (also provided actual table names)
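    If this is MySQL, a FULLTEXT index sidesteps the word-table work entirely (a sketch; FULLTEXT requires MyISAM on older MySQL versions, or InnoDB from 5.6 onward):

        ALTER TABLE Pads
            ADD FULLTEXT INDEX ft_pads_search (keywords, ProgramName, English45);

        SELECT PadID
        FROM Pads
        WHERE MATCH(keywords, ProgramName, English45) AGAINST ('$search')
          AND RemovemeDate = '2001-01-01 00:00:00'
        ORDER BY VersionAddDate DESC;

    Unlike a leading-wildcard LIKE, which forces a full scan, MATCH ... AGAINST uses the full-text index; the trade-off is word-based matching (no substring hits) and a default minimum word length.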

    Read the article

  • Can a database function be called in the predicate of a llblgen query?

    - by Dan Appleyard
    I want to use a table-valued database function in the WHERE clause of a query I am building using LLBLGen Pro 2.6 (self-servicing).

        SELECT * FROM [dbo].[Users]
        WHERE [dbo].[Users].[UserID] IN (
            SELECT UserID FROM [dbo].[GetScopedUsers] (@ScopedUserID)
        )

    I am looking into the FieldCompareSetPredicate class, but can't for the life of me figure out what the exact signature would be. Any help would be greatly appreciated.

    Read the article
