Search Results

Search found 7311 results on 293 pages for 'rows'.

Page 202/293

  • Multiple LIKE in SQL

    - by ninumedia
    I want to search through multiple rows and pull the row that contains a particular item. The MySQL table is set up so each id has a comma-delimited list of values per row, for example:

      id | order
      1  | 1,3,8,19,34,2,38
      2  | 4,7,2,190,38

    If I wanted to pull the row that contains just the number 19, how would I go about doing this? The patterns I could come up with for a LIKE condition are "19,", ",19" and ",19,". I tried the following and cannot get any results. Thank you for your help!

      SELECT * FROM categories WHERE order LIKE '19,%' OR '%,19%' OR '%,19%' LIMIT 0 , 30
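
    One way to match a single value inside a comma-delimited column (a sketch, not from the original post) is MySQL's FIND_IN_SET(), or wrapping the list in commas so a LIKE for ',19,' cannot accidentally match 190. Either way, order is a reserved word and needs backticks:

      -- assumes no spaces after the commas, as in the sample data
      SELECT * FROM categories WHERE FIND_IN_SET('19', `order`) > 0 LIMIT 0, 30;

      -- equivalent LIKE form
      SELECT * FROM categories WHERE CONCAT(',', `order`, ',') LIKE '%,19,%' LIMIT 0, 30;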

    Read the article

  • Fulltext and composite indexes and how they affect the query

    - by Brett
    Say I had a query like the one below:

      SELECT name, category, address, city, state
      FROM table
      WHERE MATCH(name, subcategory, category, tag1) AGAINST('education')
        AND city='Oakland' AND state='CA'
      LIMIT 0, 10;

    I have a fulltext index on name, subcategory, category, tag1 and a composite index on city, state. Is that good enough for this query? I'm just wondering whether anything extra is needed when mixing additional ANDs with the MATCH/AGAINST fulltext condition. Edit: What I am trying to understand is what happens with the columns that are in the query but not in the chosen index (the fulltext index), in this example city and state. How does MySQL find the matching rows for these, since it can't use two indexes (or can it?). Basically, I'm trying to understand how MySQL goes about finding the data optimally for the columns NOT in the chosen fulltext index, and whether there is anything I can or should do to optimize the query.
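
    For what it's worth (an editorial note, not part of the original question): MySQL will normally pick a single index here, the FULLTEXT one for the MATCH clause, and then evaluate city and state against each matched row as a filter rather than through a second index. EXPLAIN shows which index was actually chosen:

      EXPLAIN SELECT name, category, address, city, state
      FROM `table`
      WHERE MATCH(name, subcategory, category, tag1) AGAINST('education')
        AND city='Oakland' AND state='CA'
      LIMIT 0, 10;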

    Read the article

  • Large tables of static data with DBGhost

    - by Paulo Manuel Santos
    We are thinking of restructuring our database development and deployment processes by using DBGhost. We want to move away from the central development database and bring the database under source control. One of the problems we have is a big table of static data (translated language strings) with close to 200K rows. I know our best solution is to move these strings into resource files, but until we implement that, will DBGhost be able to maintain all this static data and generate our development and deployment databases in a short time? And if not, is there a good alternative for filling this table whenever we need to?

    Read the article

  • How to create reusable WPF grid layout

    - by zendar
    I have a window with a tab control and a number of pages (tab items). Each tab item has the same grid layout: 6 rows and 4 columns. At the moment each tab item contains a grid with its own row and column definitions, so almost half of the XAML is grid definitions. How can I define this grid in one place and reuse that definition in my application? A template? A user control? Besides 6x4, I have only two more grid dimensions that repeat: 8x4 and 6x6.
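
    One possible way to avoid repeating the definitions (a sketch, not from the original question; FormGrid is a made-up name) is a small Grid subclass that builds its own rows and columns, so each tab item only needs something like <local:FormGrid Rows="6" Columns="4">:

      // Sketch only: a Grid that generates its RowDefinitions/ColumnDefinitions
      // from two integer properties settable in XAML.
      public class FormGrid : System.Windows.Controls.Grid
      {
          private int rows, columns;

          public int Rows
          {
              get { return rows; }
              set
              {
                  rows = value;
                  RowDefinitions.Clear();
                  for (int i = 0; i < value; i++)
                      RowDefinitions.Add(new System.Windows.Controls.RowDefinition());
              }
          }

          public int Columns
          {
              get { return columns; }
              set
              {
                  columns = value;
                  ColumnDefinitions.Clear();
                  for (int i = 0; i < value; i++)
                      ColumnDefinitions.Add(new System.Windows.Controls.ColumnDefinition());
              }
          }
      }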

    Read the article

  • Trying to get JQuery Autocomplete working on Asp.Net page.

    - by JasonMHirst
    Can someone shed some light on this problem please? I have the following on my ASP.NET page:

      $(document).ready(function () {
          $("#txtFirstContact").autocomplete({ url: 'http://localhost:7970/Home/FindSurname' });
      });

    The HTTP request goes to a function on an MVC controller, which is here:

      Function FindSurname(ByVal surname As String, ByVal count As Integer)
          Dim sqlConnection As New SqlClient.SqlConnection
          sqlConnection.ConnectionString = My.Settings.sqlConnection
          Dim sqlCommand As New SqlClient.SqlCommand
          sqlCommand.CommandText = "SELECT ConSName FROM tblContact WHERE ConSName LIKE '" & surname & "%'"
          sqlCommand.Connection = sqlConnection
          Dim ds As New DataSet
          Dim da As New SqlClient.SqlDataAdapter(sqlCommand)
          da.Fill(ds, "Contact")
          sqlConnection.Close()
          Dim contactsArray As New List(Of String)
          For Each dr As DataRow In ds.Tables("Contact").Rows
              contactsArray.Add(dr.Item("ConSName"))
          Next
          Return Json(contactsArray, JsonRequestBehavior.AllowGet)
      End Function

    As far as I'm aware the controller is returning JSON data, but I don't know whether the function parameters are correct, or whether the returned format is interpretable by the autocomplete plugin. If anyone can assist in the matter I'd really appreciate it.
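
    If this is the classic jQuery Autocomplete plugin (the url option suggests it is), then as far as I recall it sends the typed text as a parameter named q and expects one suggestion per line of plain text rather than JSON. A sketch along those lines, with the query parameterised since the original concatenates user input straight into the SQL (this is an assumption about the plugin's conventions, not the poster's code):

      ' Sketch only - assumes the old jQuery Autocomplete plugin conventions.
      Function FindSurname(ByVal q As String) As ActionResult
          Dim names As New List(Of String)
          Using cn As New SqlClient.SqlConnection(My.Settings.sqlConnection)
              Dim cmd As New SqlClient.SqlCommand( _
                  "SELECT ConSName FROM tblContact WHERE ConSName LIKE @s + '%'", cn)
              cmd.Parameters.AddWithValue("@s", q)   ' avoids SQL injection
              cn.Open()
              Using rdr = cmd.ExecuteReader()
                  While rdr.Read()
                      names.Add(rdr.GetString(0))
                  End While
              End Using
          End Using
          ' one name per line, which is what the old plugin parses
          Return Content(String.Join(vbLf, names.ToArray()), "text/plain")
      End Function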

    Read the article

  • Filter by virtual column?

    - by user329957
    I have the following database structure:

      [Order]:   OrderId, Total
      [Payment]: OrderId, Amount

    Every order can have X payment rows. I want to get only the orders where the sum of all payments is less than the order Total. I have the following SQL, but it returns all the orders, paid and unpaid:

      SELECT o.OrderId, o.UserId, o.Total, o.DateCreated, COALESCE(SUM(p.Amount),0) AS Paid
      FROM [Order] o
      LEFT JOIN Payment p ON p.OrderId = o.OrderId
      GROUP BY o.OrderId, o.Total, o.UserId, o.DateCreated

    I have tried adding WHERE (Paid < o.Total) but it does not work. Any ideas? BTW, I'm using SQL CE 3.5.
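
    A column alias such as Paid can't be referenced in a WHERE clause, since WHERE is evaluated before the select list. The usual fix (a sketch; I have not verified it against SQL CE 3.5 specifically) is to repeat the aggregate in a HAVING clause:

      SELECT o.OrderId, o.UserId, o.Total, o.DateCreated,
             COALESCE(SUM(p.Amount), 0) AS Paid
      FROM [Order] o
      LEFT JOIN Payment p ON p.OrderId = o.OrderId
      GROUP BY o.OrderId, o.Total, o.UserId, o.DateCreated
      HAVING COALESCE(SUM(p.Amount), 0) < o.Total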

    Read the article

  • jQuery - OnClick, change background color for table cells

    - by andrew
    Hi all, let me show you a demo: here it works only for rows, not for cells. I want to change cells' (tds') background colors with mouse clicks. For example, I have a table with four tds and the table's background color is white. If I click td A, A should turn red; if I then click td B, B should turn red and A should go back to white; if I then click C, C should turn red and B should turn white again.

      A - B
      C - D

    Can anyone help me?
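
    A minimal sketch of one way to do it (the table id and class name here are made up): keep a single "selected" class, clear it from every cell on each click, and re-apply it to the clicked one.

      /* CSS */
      #demo td.selected { background-color: red; }

      /* jQuery */
      $('#demo td').click(function () {
          $('#demo td').removeClass('selected');
          $(this).addClass('selected');
      });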

    Read the article

  • Validating Column Data Stored as CSV Against Another Table

    - by Jakkwylde
    I wanted to see what approaches people would suggest for validating a field stored as CSV against a table containing the appropriate values. Although it would be desirable, it is NOT an option to split the CSV list into another related table. In the example data below I would be trying to capture the code 99, which does not exist in WidgetCodes.

      Table: Widgets
        WidgetName  WidgetCodeList
        A           1, 2, 3
        B           1
        C           2, 3
        D           99

      Table: WidgetCodes
        WidgetCode
        1
        2
        3

    An earlier approach was to query the CSV column as rows using various string manipulations and CONNECT_BY_LEVEL, but the performance was not acceptable.

    Read the article

  • Killing the mysqld process

    - by Josh K
    I have a table with ~800k rows. I ran:

      UPDATE users SET hash = SHA1(CONCAT({about eight fields})) WHERE 1;

    Now I have a hung Sequel Pro process and I'm not sure about the mysqld process. Two questions: 1) What harm could come from killing these processes? I'm working on a separate database, so no damage should come to the other databases on the system, right? 2) Assuming you had to update a table like this, what would be a quicker / more reliable way of doing it without writing a separate script? I just checked with phpMyAdmin and it appears the query is complete, but Sequel Pro is still using 100% of both my cores.
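
    On the second question, one common pattern (a sketch; the id column and col1/col2 names are placeholders for the real ones) is to update in primary-key batches so each statement finishes and releases its locks quickly, instead of rewriting all ~800k rows in one statement:

      -- col1, col2, ... stand in for the eight fields from the original statement
      UPDATE users
      SET hash = SHA1(CONCAT(col1, col2 /* , ... */))
      WHERE id BETWEEN 1 AND 50000;
      -- then 50001-100000, and so on, until the whole table is covered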

    Read the article

  • Approach for altering Primary Key from GUID to BigInt in SQL Server related tables

    - by Tom
    I have two tables with 10-20 million rows that have GUID primary keys, and at least 12 tables related via foreign key. The base tables have 10-20 indexes each. We are moving from GUID to BigInt primary keys, and I'm wondering if anyone has suggestions on an approach. Right now this is the approach I'm pondering:

      1. Drop all indexes and foreign keys on all the tables involved.
      2. Add a 'NewPrimaryKey' column to each table.
      3. Make the new key an identity on the two base tables.
      4. Script the data change: update table x, set NewPrimaryKey = y where OldPrimaryKey = z.
      5. Rename the original primary key to 'OldPrimaryKey'.
      6. Rename the 'NewPrimaryKey' column to 'PrimaryKey'.
      7. Script back all the indexes and foreign keys.

    Does this seem like a good approach? Does anyone know of a tool or script that would help with this? Edited per additional information: see this blog post, which addresses an approach when the GUID is the primary key: http://www.sqlmag.com/blogs/sql-server-questions-answered/sql-server-questions-answered/tabid/1977/entryid/12749/Default.aspx
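
    For steps 2-4, a rough T-SQL sketch (the table and column names here are hypothetical): SQL Server won't turn an existing column into an identity, so the new key is added as a fresh IDENTITY column on the base tables and the mapping is copied into the child tables through the old GUID:

      ALTER TABLE BaseTable ADD NewPrimaryKey BIGINT IDENTITY(1,1) NOT NULL;

      ALTER TABLE ChildTable ADD NewParentKey BIGINT NULL;

      UPDATE c
      SET    c.NewParentKey = b.NewPrimaryKey
      FROM   ChildTable c
      JOIN   BaseTable  b ON b.OldGuidKey = c.OldGuidForeignKey;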

    Read the article

  • R webscraping: interrogating for date and importance

    - by adam.888
    I am able to webscrape a table of news from a webpage:

      library(XML)
      webpage <- "http://www.tradingeconomics.com/calendar"
      tables <- readHTMLTable(webpage)
      n.rows <- unlist(lapply(tables, function(t) dim(t)[1]))
      dfcal <- as.data.frame(tables$calendar)

    However, I do not know how to query by date or by importance. For example, how could I scrape news from Jan 2014? I am able to do this on the webpage by altering button settings, but how can I do it from within R? I was also not able to collect the importance column data. Also, are there better ways of collecting economic news from within R? I have looked on http://www.rseek.org/ but could not find anything. Thank you for your help.

    Read the article

  • ASP.NET MVC: Is it possible to set a global variable?

    - by Sergio
    Hello, I have a process within my MVC 2 application that takes a long time and alters many rows in the database along the way. There is a chance that two or more users could attempt to perform this action at the same time, which would lead to undesirable effects. Is there a way to set a global flag somewhere within ASP.NET that I can check on every request to see whether the action in question is currently being executed (a bit that I flip before running the query and flip back on completion)? Or is there a better way of handling this situation? Thanks
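
    A sketch of one simple in-process approach (the class and member names are made up; note it won't survive an app-pool recycle or work across a web farm, so a database-level lock is safer if that matters): a static field toggled with Interlocked so concurrent requests can't both start the job.

      public static class LongJobGate
      {
          private static int running;   // 0 = idle, 1 = running

          // returns true if this request acquired the flag
          public static bool TryEnter()
          {
              return System.Threading.Interlocked.CompareExchange(ref running, 1, 0) == 0;
          }

          public static void Exit()
          {
              System.Threading.Interlocked.Exchange(ref running, 0);
          }
      }

      // in the controller action:
      //   if (!LongJobGate.TryEnter()) return a "please try again later" result;
      //   try { RunLongProcess(); } finally { LongJobGate.Exit(); }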

    Read the article

  • Efficient persistent storage for simple id to table of values map for java

    - by wds
    I need to store some data that follows a simple pattern: an "id" maps to a full table (with multiple rows) of several columns (i.e. some integer values [u, v, w]). The size of one of these tables would be a couple of KB. Basically what I need is a persistent cache of some intermediate results. This could quite easily be implemented as simple SQL, but there are a couple of problems: I need to compress the size of this structure on disk as much as possible (because of the amount of values I'm storing), and it's not transactional. I just need to write once and then read the contents of the entire table, so a relational DB isn't actually a very good fit. I was wondering if anyone had any good suggestions? For some reason I can't seem to come up with something decent at the moment. Something with a Java API would be especially nice.
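
    One dependency-free sketch that fits the "write once, read the whole table" pattern (file naming and error handling left to the caller): the int rows are streamed through a Deflater, so a few-KB table compresses well, with one file per id.

      import java.io.*;
      import java.util.zip.*;

      public class TableCache {
          // writes all rows of one id's table, compressed
          public static void write(File f, int[][] rows) throws IOException {
              try (DataOutputStream out = new DataOutputStream(
                      new DeflaterOutputStream(new BufferedOutputStream(new FileOutputStream(f))))) {
                  out.writeInt(rows.length);
                  for (int[] row : rows) {
                      out.writeInt(row.length);
                      for (int v : row) out.writeInt(v);
                  }
              }
          }

          // reads the whole table back in one go
          public static int[][] read(File f) throws IOException {
              try (DataInputStream in = new DataInputStream(
                      new InflaterInputStream(new BufferedInputStream(new FileInputStream(f))))) {
                  int[][] rows = new int[in.readInt()][];
                  for (int i = 0; i < rows.length; i++) {
                      rows[i] = new int[in.readInt()];
                      for (int j = 0; j < rows[i].length; j++) rows[i][j] = in.readInt();
                  }
                  return rows;
              }
          }
      }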

    Read the article

  • tablerows hover with table even/odd styled via CSS

    - by root
    Even/odd striping is set on the table rows, but hovering over the rows doesn't work. CSS:

      #table_even tr:hover { background-color:#fffbae!important; }        /* hovering */
      #table_even tr:nth-child(odd) td  { background-color:#fbfbfb }      /* odd */
      #table_even tr:nth-child(even) td { background-color:#e8ecee }      /* even */

    HTML:

      <table id="table_even" style="width: 100%">
        <tr><td>##</td><td>##</td></tr>
        <tr><td>##</td><td>##</td></tr>
        <tr><td>##</td><td>##</td></tr>
      </table>

    How can this be solved?
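
    A sketch of the likely fix: the odd/even colors are painted on the td elements, which sit on top of the tr background, so the hover color has to be applied to the cells as well.

      #table_even tr:hover td { background-color:#fffbae!important; }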

    Read the article

  • searching a mysql database

    - by Bill Parson
    Currently I have a database of music in MySQL, and I am writing a PHP front end for it that lists everything in a table. It works, but if I search for "the beatles" it gives me 453 results (correct), while searching for just "beatles" returns 0 rows. How would I go about making it able to handle a search like that? Here is my current line:

      $query2 = "SELECT * From `songs` WHERE `Artist` like '".$_REQUEST['q']."' OR `Album` like '".$_REQUEST['q']."' OR `Genre` like '".$_REQUEST['q']."' OR `Title` like '".$_REQUEST['q']."';";
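
    A sketch of one fix (same query shape as above): without % wildcards, LIKE behaves like an exact match, so "beatles" never matches "The Beatles"; wrapping the term in % and escaping it also keeps quotes in the input from breaking the query.

      $q = '%' . mysql_real_escape_string($_REQUEST['q']) . '%';
      $query2 = "SELECT * FROM `songs`
                 WHERE `Artist` LIKE '$q' OR `Album` LIKE '$q'
                    OR `Genre`  LIKE '$q' OR `Title` LIKE '$q';";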

    Read the article

  • php - regex - catch string inside multiple tags

    - by aSeptik
    Hi all! Still on regex ;-) Assume we have an HTML file with a lot of <tr> rows sharing the same structure as the one below, where (.*?) marks the content I need to extract:

      <tr align= # ><th width= # ><a OnClick="(.*?)"href= # >(.*?)</a><td width= # >(.*?)<td width= # align= # >(.*?)</td></tr>

    Maybe with a nice preg_match_all()? Thanks for your time! Luca Filosofi
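
    A rough sketch of the preg_match_all() call being asked about, assuming the rows really are that uniform (for anything messier, an HTML parser such as DOMDocument is the safer route); each row's four captures land in one $matches entry:

      preg_match_all(
          '#<a OnClick="(.*?)"href=[^>]*>(.*?)</a><td[^>]*>(.*?)<td[^>]*>(.*?)</td></tr>#s',
          $html,
          $matches,
          PREG_SET_ORDER
      );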

    Read the article

  • Changing ylim (axis limits) drops data falling outside range. How can this be prevented?

    - by Alex Holcombe
      df <- data.frame(age=c(10,10,20,20,25,25,25),veg=c(0,1,0,1,1,0,1))
      g=ggplot(data=df,aes(x=age,y=veg))
      g=g+stat_summary(fun.y=mean,geom="point")

    The points reflect the mean of veg at each age, which is what I expected and want to preserve after changing the axis limits with the command below:

      g=g+ylim(0.2,1)

    Changing the axis limits with the above command unfortunately causes the veg==0 subset to be dropped from the data, yielding "Warning message: Removed 4 rows containing missing values (stat_summary)". This is bad because the plotted means from stat_summary now omit the veg==0 points. How can this be prevented? I simply want to avoid showing the empty part of the plot (the ordinate from 0 to 0.2) without dropping the associated data from the stat_summary calculation.
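
    The usual answer for this in ggplot2 (a sketch using the same data): ylim() filters the data before the statistics are computed, while coord_cartesian() only zooms the drawing region, so the means are still computed from every row.

      g <- ggplot(data=df, aes(x=age, y=veg)) +
        stat_summary(fun.y=mean, geom="point") +
        coord_cartesian(ylim=c(0.2, 1))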

    Read the article

  • Problem with layout of control when dynamically creating winform and controls for the Form

    - by Ashwani Roy
    I get a response like this from a web service call:

      <Response>
        <Control1 type="DropdownList" value="USA,UK,Sweden,UAE"/>
        <Control2 type="Textbox" value="Contries"/>
        <Control3 type="Button" value="None"/>
      </Response>

    Based on this I deserialize it into a List<Controls>. Now I need to be able to dynamically create a WinForm based on these controls. My problem is the layout: I want to create them nicely separated (if possible vertically aligned) in batches of, let's say, 5. So if I need 15 controls I have 3 columns and 5 rows. What would be the best way to achieve this? I know I can use the built-in positioning properties like Top, Width, etc., but maybe someone out there has done something similar in a better way.
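
    One layout-friendly sketch (the controls list and form variable are placeholders for whatever holds the deserialized controls): a TableLayoutPanel does the positioning, filling five rows per column, so no manual Top/Width arithmetic is needed.

      var panel = new TableLayoutPanel { Dock = DockStyle.Fill, AutoSize = true };
      const int rowsPerColumn = 5;
      panel.RowCount = rowsPerColumn;
      panel.ColumnCount = (controls.Count + rowsPerColumn - 1) / rowsPerColumn;

      for (int i = 0; i < controls.Count; i++)
      {
          // Add(control, column, row)
          panel.Controls.Add(controls[i], i / rowsPerColumn, i % rowsPerColumn);
      }

      form.Controls.Add(panel);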

    Read the article

  • editButtonItem set but no minus buttons?

    - by QAD
    My edit button is set in viewDidLoad:

      self.navigationItem.rightBarButtonItem = self.editButtonItem;

    It shows up correctly on the nav bar, and tapping it does change it to Done. However, no minus buttons show up in my table rows. Swiping a row and then tapping Delete works, though. Any ideas? EDIT 1: Here's how I'm creating the table view:

      - (void)loadView {
          tableView = [[UITableView alloc] initWithFrame:[[UIScreen mainScreen] applicationFrame]];
          tableView.delegate = self;
          tableView.dataSource = self;
          tableView.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
          self.view = tableView;
      }

    EDIT 2: My observation is that the edit and minus buttons display fine when the table view is created in IB (RootViewController). The other two (or three) table views are created by the code above, so that might be the problem. I guess I'll have to dig into isEditing, editing and whatnot.
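
    A sketch of what is usually missing when the table view is built in loadView rather than via UITableViewController: nothing forwards the controller's editing state to the table view, so override setEditing:animated: and pass it on.

      - (void)setEditing:(BOOL)editing animated:(BOOL)animated {
          [super setEditing:editing animated:animated];
          [tableView setEditing:editing animated:animated];
      }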

    Read the article

  • PHP scandir strange behaviour

    - by kmunky
    OK, so what I'm trying to do is scan the "previous" folder:

      $scanned = scandir("..");
      foreach ($scanned as $file) {
          if (is_file($file)) {
              print $file."<br />";
          }
      }

    Even though my ".." directory has 20+ files, I get just three rows:

      index.jpg
      index.php
      template.dtd

    I noticed that if I don't use the is_file() clause it returns all the file names, but I really need that check.
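
    A sketch of the likely fix: scandir() returns bare file names, but is_file() resolves them against the current directory, so the "../" prefix has to be added back before testing.

      $scanned = scandir("..");
      foreach ($scanned as $file) {
          if (is_file("../" . $file)) {
              print $file."<br />";
          }
      }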

    Read the article

  • SQL statement HAVING MAX(some+thing)=some+thing

    - by Andreas
    I'm having trouble with Microsoft Access 2003; it's complaining about this statement:

      SELECT cardnr FROM change
      WHERE year(date) < 2009
      GROUP BY cardnr
      HAVING max(time+date) = (time+date) AND cardto='VIP'

    What I want to do is, for every distinct cardnr in the table change, find the row with the latest (time+date) that is before year 2009, and then keep just the rows with cardto='VIP'. This validator says it's OK; Access says it's not OK. This is the message I get: "you tried to execute a query that does not include the specified expression 'max(time+date)=time+date and cardto='VIP' and cardnr=' as part of an aggregate function." Could someone please explain what I'm doing wrong and the right way to do it? Thanks
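
    One way to phrase it that Access should accept (a sketch using the names from the post; date, time and change are bracketed because they clash with reserved words): find each card's latest pre-2009 row with a correlated subquery, then keep the VIP rows.

      SELECT c.cardnr
      FROM [change] AS c
      WHERE c.cardto = 'VIP'
        AND Year(c.[date]) < 2009
        AND c.[time] + c.[date] = (SELECT MAX(c2.[time] + c2.[date])
                                   FROM [change] AS c2
                                   WHERE c2.cardnr = c.cardnr
                                     AND Year(c2.[date]) < 2009);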

    Read the article

  • why is there extra using where in execution plan of query

    - by user366534
    I'm looking at the plan for this query:

      EXPLAIN SELECT * FROM `subscribers` WHERE state = 4 AND date_added < '2010-12-23 11:47:45'

    It shows:

      id  select_type  table        type   possible_keys     key               key_len  ref   rows  Extra
      1   SIMPLE       subscribers  range  state_date_added  state_date_added  9        NULL  8     Using where

    Here are the indexes of the table:

      Table        Non_unique  Key_name          Seq_in_index  Column_name    Collation  Cardinality  Sub_part  Packed  Null  Index_type  Comment
      subscribers  0           PRIMARY           1             subscriber_id  A          382039       NULL      NULL          BTREE
      subscribers  0           email_list_id     1             email_address  A          191019       NULL      NULL          BTREE
      subscribers  0           email_list_id     2             list_id        A          382039       NULL      NULL          BTREE
      subscribers  1           FK_list_id        1             list_id        A          10           NULL      NULL          BTREE
      subscribers  1           state_date_added  1             state          A          12           NULL      NULL          BTREE
      subscribers  1           state_date_added  2             date_added     A          8128         NULL      NULL          BTREE

    The last two lines describe the index that is supposed to serve the query. So why does the Extra column say "Using where"? Even if I fetch only the state and date_added columns, the Extra column says "Using where; Using index". I understand why it says "Using index", but I don't understand the "Using where" here.

    Read the article

  • how to aggregate this data in R

    - by stevejb
    Hello, I have a data frame in R with the following structure:

      > testData
                  date exch.code comm.code     oi
      1     1997-12-30       CBT         1 468710
      2     1997-12-23       CBT         1 457165
      3     1997-12-19       CBT         1 461520
      4     1997-12-16       CBT         1 444190
      5     1997-12-09       CBT         1 446190
      6     1997-12-02       CBT         1 443085
      ....
      77827 2004-10-26      NYME       967  10038
      77828 2004-10-19      NYME       967   9910
      77829 2004-10-12      NYME       967  10195
      77830 2004-09-28      NYME       967   9970
      77831 2004-08-31      NYME       967   9155
      77832 2004-08-24      NYME       967   8655

    What I want to do is produce a table that shows, for a given date and commodity, the total oi across every exchange code. So the rows would be made up of unique(testData$date), the columns would be unique(testData$comm.code), and each cell would be the total oi over all exch.codes on a given day. Thanks.
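
    Two base-R sketches of that reshaping: xtabs() produces the wide date-by-commodity table directly (summing oi over exchange codes), while aggregate() keeps the same totals in long form.

      # wide: rows = dates, columns = comm.code, cells = total oi over all exch.codes
      wide <- xtabs(oi ~ date + comm.code, data = testData)

      # long form of the same totals
      long <- aggregate(oi ~ date + comm.code, data = testData, FUN = sum)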

    Read the article

  • How to resize a silverlight control to a full webpage.

    - by ForeverDebugging
    I'm currently using a datagrid within a user control. The datagrid resizes based on the number of rows, and therefore goes off the webpage. I've tried wrapping the control around a ScrollViewer, but in order to have the scroll bar visible and working, I need to set the MaxHeight of the datagrid. The problem is that I don't know what the MaxHeight of the datagrid should be because it differs based on the size of the browser window appearing on the screen. Any suggestions on how to determine the correct size of the control?

    Read the article

  • VB.NET LINQ Result Set Manipulation.

    - by davemackey
    I have a table that looks like this:

      ID / Description / textValue / dateValue               / timeValue / Type
      1  / FRRSD       / NULL      / 2010-04-16 00:00:00.000 / NULL      / classdates

    Now I've got a LINQ command to pull only the rows from this table where the Type is classdates:

      Dim dbGetRegisterDates As New dcConfigDataContext
      Dim getDates = (From p In dbGetRegisterDates.webConfigOptions _
                      Where p.Type = "classdates" _
                      Select p)

    I now want to display the data in five different labels, like so:

      lblClass1.Text = "Your class is from " & getDates.Description("FRRSD").dateValue & "to " & getDates.Description("FRRCD").dateValue

    Basically, I want to pull a row based on the Description column value and then return the dateValue column value from that same row.
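
    A sketch of one way to read the two dates back out of the already-filtered result (it assumes each Description appears once, and that the entity's property names follow the column names):

      Dim startDate = (From p In getDates Where p.Description = "FRRSD" Select p.dateValue).FirstOrDefault()
      Dim endDate = (From p In getDates Where p.Description = "FRRCD" Select p.dateValue).FirstOrDefault()
      lblClass1.Text = "Your class is from " & startDate & " to " & endDate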

    Read the article
