Search Results

Search found 27905 results on 1117 pages for 'sql authority'.


  • MySQL Query: how to group and count in one row?

    - by Akarun
    Hi All, To simplify, I have three tables: products, products-vs-orders, orders.

        products fields: 'ProductID', 'Name', 'isGratis', ...
        products-vs-orders fields: 'ProductID', 'OrderID'
        orders fields: 'OrderID', 'Title', ...

    Currently, I have a query like this:

        SELECT orders.OrderID, orders.Title,
               COUNT(`products`.`isGratis`) AS "Quantity",
               `products`.`isGratis`
        FROM `orders`, `products-vs-orders`, `products`
        WHERE `orders`.`OrderID` = `products-vs-orders`.`OrderID`
          AND `products-vs-orders`.`ProductID` = `products`.`ProductID`
        GROUP BY `products`.`PackID`, `products`.`isGratis`

    This query works and returns this sort of result:

        OrderID  Title     Quantity  isGratis
        1        My Order  20        0
        1        My Order  3         1
        2        An other  8         0
        2        An other  1         1

    How can I retrieve the counts of 'gratis' and 'paid' products in two separate columns?

        OrderID  Title     Qt Paid  Qt Gratis
        1        My Order  20       3
        2        An other  8        1

    Thanks for your help
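
    One way to pivot the two counts into columns is conditional aggregation. A sketch, assuming isGratis is 0/1 and that grouping should be per order rather than per PackID:

        SELECT o.OrderID, o.Title,
               SUM(IF(p.isGratis = 0, 1, 0)) AS `Qt Paid`,
               SUM(IF(p.isGratis = 1, 1, 0)) AS `Qt Gratis`
        FROM `orders` o
        JOIN `products-vs-orders` pvo ON pvo.OrderID = o.OrderID
        JOIN `products` p ON p.ProductID = pvo.ProductID
        GROUP BY o.OrderID, o.Title;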

    Read the article

  • Best method to search hierarchical data

    - by WDuffy
    I'm looking at building a facility which allows querying for data with hierarchical filtering. I have a few ideas about how I'm going to go about it, but was wondering if there are any recommendations or suggestions that might be more efficient. As an example, imagine that a user is searching for a job. The job areas would be as follows:

        1: Scotland
        2: --- West Central
        3: ------ Glasgow
        4: ------ Etc
        5: --- North East
        6: ------ Ayrshire
        7: ------ Etc

    A user can search somewhere specific (e.g. Glasgow) or in a larger area (e.g. Scotland). The two approaches I am considering are: keep a note of children in the database for each record (i.e. category 1 would have 2, 3, 4 in its children field) and query against that record with a SELECT * FROM Jobs WHERE Category IN Areas.childrenField; or use a recursive function to find all results which have a relation to the selected area. The problems I see with both are: holding this data in the db will mean having to keep track of all changes to the structure, and recursion is slow and inefficient. Any ideas, suggestions or recommendations on the best approach? I'm using C# ASP.NET with an MSSQL 2005 DB.
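
    Since the DB is SQL Server 2005, a third option is a recursive common table expression, which avoids both the denormalized children field and app-side recursion. A sketch, assuming an Areas table with AreaId and ParentAreaId columns (names are illustrative):

        WITH AreaTree AS (
            -- anchor: the area the user picked
            SELECT AreaId
            FROM Areas
            WHERE AreaId = @selectedArea
            UNION ALL
            -- recurse: every descendant area
            SELECT a.AreaId
            FROM Areas a
            JOIN AreaTree t ON a.ParentAreaId = t.AreaId
        )
        SELECT j.*
        FROM Jobs j
        JOIN AreaTree t ON j.Category = t.AreaId;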

    Read the article

  • C#: finding matching words in a table column using Linq2Sql

    - by David Liddle
    I am trying to use Linq2Sql to return all rows that contain values from a list of strings. The Linq2Sql class object has a string property that contains words separated by spaces.

        public class MyObject
        {
            public string MyProperty { get; set; }
        }

    Example MyProperty values are:

        MyObject1.MyProperty = "text1 text2 text3 text4"
        MyObject2.MyProperty = "text2"

    For example, I pass the below list of strings:

        var list = new List<string>() { "text2", "text4" };

    This would return both items in my example above, as they both contain the "text2" value. I attempted the following code; however, because of my extension method, the Linq2Sql query cannot be evaluated:

        public static IQueryable<MyObject> WithProperty(this IQueryable<MyObject> qry,
            IList<string> p)
        {
            return from t in qry
                   where t.MyProperty.Contains(p, ' ')
                   select t;
        }

    I also wrote an extension method:

        public static bool Contains(this string str, IList<string> list, char separator)
        {
            if (String.IsNullOrEmpty(str) || list == null)
                return false;
            var splitStr = str.Split(new char[] { separator },
                StringSplitOptions.RemoveEmptyEntries);
            foreach (string s in splitStr)
                foreach (string l in list)
                    if (String.Compare(s, l, true) == 0)
                        return true;
            return false;
        }

    Any help or ideas on how I could achieve this?
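
    Linq2Sql can only translate constructs it knows how to turn into SQL, so a custom extension method never reaches the server. For reference, the SQL-side pattern such a whole-word match has to compile down to pads the column with spaces and applies one LIKE per search term (the table name MyObjects is an assumption):

        SELECT *
        FROM MyObjects
        WHERE ' ' + MyProperty + ' ' LIKE '% text2 %'
           OR ' ' + MyProperty + ' ' LIKE '% text4 %';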

    Read the article

  • Does introducing foreign keys to MySQL reduce performance

    - by Tam
    I'm building a Ruby on Rails 2.3.5 app. By default, Ruby on Rails doesn't provide foreign key constraints, so I have to do it manually. I was wondering if introducing foreign keys reduces query performance on the database side enough to make it not worth doing. Performance in this case is my first priority, as I can check for data consistency with code. What is your recommendation in general? Do you recommend using foreign keys? And how do you suggest I should measure this?
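
    One way to measure the overhead directly is to time identical insert batches into two otherwise identical InnoDB tables, one with the constraint and one without. A sketch, with illustrative names:

        CREATE TABLE orders_fk (
            id INT AUTO_INCREMENT PRIMARY KEY,
            user_id INT NOT NULL,
            FOREIGN KEY (user_id) REFERENCES users (id)
        ) ENGINE=InnoDB;

        CREATE TABLE orders_nofk (
            id INT AUTO_INCREMENT PRIMARY KEY,
            user_id INT NOT NULL
        ) ENGINE=InnoDB;

        -- load the same rows into each and compare the timings
        INSERT INTO orders_fk (user_id)   SELECT id FROM users;
        INSERT INTO orders_nofk (user_id) SELECT id FROM users;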

    Read the article

  • Can MySQL reasonably perform queries on billions of rows?

    - by haxney
    I am planning on storing scans from a mass spectrometer in a MySQL database and would like to know whether storing and analyzing this amount of data is remotely feasible. I know performance varies wildly depending on the environment, but I'm looking for the rough order of magnitude: will queries take 5 days or 5 milliseconds?

    Input format

    Each input file contains a single run of the spectrometer; each run is comprised of a set of scans, and each scan has an ordered array of datapoints. There is a bit of metadata, but the majority of the file is comprised of arrays of 32- or 64-bit ints or floats.

    Host system

        |----------------+-------------------------------|
        | OS             | Windows 2008 64-bit           |
        | MySQL version  | 5.5.24 (x86_64)               |
        | CPU            | 2x Xeon E5420 (8 cores total) |
        | RAM            | 8GB                           |
        | SSD filesystem | 500 GiB                       |
        | HDD RAID       | 12 TiB                        |
        |----------------+-------------------------------|

    There are some other services running on the server using negligible processor time.

    File statistics

        |------------------+--------------|
        | number of files  | ~16,000      |
        | total size       | 1.3 TiB      |
        | min size         | 0 bytes      |
        | max size         | 12 GiB       |
        | mean             | 800 MiB      |
        | median           | 500 MiB      |
        | total datapoints | ~200 billion |
        |------------------+--------------|

    The total number of datapoints is a very rough estimate.

    Proposed schema

    I'm planning on doing things "right" (i.e. normalizing the data like crazy) and so would have a runs table, a spectra table with a foreign key to runs, and a datapoints table with a foreign key to spectra.

    The 200 billion datapoint question

    I am going to be analyzing across multiple spectra and possibly even multiple runs, resulting in queries which could touch millions of rows. Assuming I index everything properly (which is a topic for another question) and am not trying to shuffle hundreds of MiB across the network, is it remotely plausible for MySQL to handle this?

    UPDATE: additional info

    The scan data will be coming from files in the XML-based mzML format. The meat of this format is in the <binaryDataArrayList> elements where the data is stored. Each scan produces >= 2 <binaryDataArray> elements which, taken together, form a 2-dimensional (or more) array of the form [[123.456, 234.567, ...], ...]. These data are write-once, so update performance and transaction safety are not concerns.

    My naïve plan for a database schema is:

    runs table

        | column name | type        |
        |-------------+-------------|
        | id          | PRIMARY KEY |
        | start_time  | TIMESTAMP   |
        | name        | VARCHAR     |
        |-------------+-------------|

    spectra table

        | column name    | type        |
        |----------------+-------------|
        | id             | PRIMARY KEY |
        | name           | VARCHAR     |
        | index          | INT         |
        | spectrum_type  | INT         |
        | representation | INT         |
        | run_id         | FOREIGN KEY |
        |----------------+-------------|

    datapoints table

        | column name | type        |
        |-------------+-------------|
        | id          | PRIMARY KEY |
        | spectrum_id | FOREIGN KEY |
        | mz          | DOUBLE      |
        | num_counts  | DOUBLE      |
        | index       | INT         |
        |-------------+-------------|

    Is this reasonable?
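
    As DDL, the proposed schema might look like the sketch below; the column types are assumptions. Two details worth noting at this scale: with ~200 billion datapoints the surrogate keys need BIGINT, and index is a reserved word in MySQL, so it has to be quoted:

        CREATE TABLE runs (
            id         BIGINT AUTO_INCREMENT PRIMARY KEY,
            start_time TIMESTAMP,
            name       VARCHAR(255)
        ) ENGINE=InnoDB;

        CREATE TABLE spectra (
            id             BIGINT AUTO_INCREMENT PRIMARY KEY,
            name           VARCHAR(255),
            `index`        INT,
            spectrum_type  INT,
            representation INT,
            run_id         BIGINT,
            FOREIGN KEY (run_id) REFERENCES runs (id)
        ) ENGINE=InnoDB;

        CREATE TABLE datapoints (
            id          BIGINT AUTO_INCREMENT PRIMARY KEY,
            spectrum_id BIGINT,
            mz          DOUBLE,
            num_counts  DOUBLE,
            `index`     INT,
            FOREIGN KEY (spectrum_id) REFERENCES spectra (id)
        ) ENGINE=InnoDB;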

    Read the article

  • MySQL: get all rows into 1 column

    - by andufo
    Hi, I have 3 tables:

        post (id_post, title)
        tag (id_tag, name)
        post_tag (id_post_tag, id_post, id_tag)

    Let's suppose that id_post 3 has 4 linked tags: 1, 2, 3, 4 (soccer, basket, tennis and golf). Is there a way to return something like this in ONE row?

        col 1: id_post = 3
        col 2: tags = soccer basket tennis golf

    Thanks
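
    MySQL's GROUP_CONCAT does exactly this. A sketch against the tables above:

        SELECT p.id_post,
               GROUP_CONCAT(t.name SEPARATOR ' ') AS tags
        FROM post p
        JOIN post_tag pt ON pt.id_post = p.id_post
        JOIN tag t       ON t.id_tag   = pt.id_tag
        WHERE p.id_post = 3
        GROUP BY p.id_post;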

    Read the article

  • Custom configuration file custom_config.js not working?

    - by nanoquetz9l
    Hello gals/guys, I am trying to use a custom configuration file instead of the default config.js file in the CKEditor root. I have placed a renamed copy of the config.js file in my webroot and call it with customConfig. It is not working for me, though. Is my syntax creating any issues? I used the dev docs site as a ref: http://docs.cksource.com/CKEditor_3.x/D ... igurations Any ideas or comments will really help me out. I'm stuck. Thanks!! nano

        <p></p>
        CKEDITOR.replace( 'ticket_text' );
        CKEDITOR.replace( 'ticket_text1', {
            customConfig : '/ckeditor/custom_config.js'
        });

    Read the article

  • HQL equivalent of SQL query

    - by kash
        String SQL_QUERY = "SELECT count(*) FROM (SELECT * FROM Url as U where U.pageType="
                + 1 + " group by U.pageId having count(U.pageId) = 1)";
        query = session.createQuery(SQL_QUERY);

    I am getting an error:

        org.hibernate.hql.ast.QuerySyntaxException: unexpected token: ( near line 1, column 23
        [SELECT count(*) FROM (SELECT * FROM Url as U where U.pageType = 2 group by U.pageId having count(U.pageId) = 1)]
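
    HQL does not support subqueries in the FROM clause, which is what the parser is rejecting here. One workaround is to run the statement as a native query via session.createSQLQuery instead of createQuery; note that the derived table then also needs an alias, which the original string lacks:

        SELECT COUNT(*)
        FROM (
            SELECT U.pageId
            FROM Url AS U
            WHERE U.pageType = 1
            GROUP BY U.pageId
            HAVING COUNT(U.pageId) = 1
        ) AS singles;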

    Read the article

  • CakePHP repeats same queries

    - by Rytis
    I have a model structure: Category hasMany Product hasMany Stockitem belongsTo Warehouse, Manufacturer. I fetch data with this code, using containable to be able to filter deeper in the associated models:

        $this->Category->find('all', array(
            'conditions' => array('Category.id' => $category_id),
            'contain' => array(
                'Product' => array(
                    'Stockitem' => array(
                        'conditions' => array('Stockitem.warehouse_id' => $warehouse_id),
                        'Warehouse',
                        'Manufacturer',
                    )
                )
            ),
        ));

    The data structure is returned just fine; however, I get multiple repeating queries like this, sometimes hundreds of such queries in a row, depending on the dataset:

        SELECT `Warehouse`.`id`, `Warehouse`.`title`
        FROM `beta_warehouses` AS `Warehouse`
        WHERE `Warehouse`.`id` = 2

    Basically, when building the data structure Cake is fetching data from MySQL over and over again, for each row. We have datasets of several thousand rows, and I have a feeling that it's going to impact performance. Is it possible to make it cache results and not repeat the same queries?

    Read the article

  • PostgreSQL: How to index all foreign keys?

    - by biggusjimmus
    I am working with a large PostgreSQL database, and I am trying to tune it to get more performance. Our queries and updates seem to be doing a lot of lookups using foreign keys. What I would like is a relatively simple way to add indexes to all of our foreign keys without having to go through every table (~140) and doing it manually. In researching this, I've come to find that there is no way to have Postgres do this for you automatically (like MySQL does), but I would be happy to hear otherwise there, too.
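
    The system catalogs make it possible to generate the DDL instead of hand-writing it. A sketch that emits CREATE INDEX statements for every single-column foreign key (PostgreSQL 9.0+ syntax, where the index name can be omitted; it does not check whether a suitable index already exists):

        SELECT 'CREATE INDEX ON ' || c.conrelid::regclass
               || ' (' || a.attname || ');' AS ddl
        FROM pg_constraint c
        JOIN pg_attribute a
          ON a.attrelid = c.conrelid
         AND a.attnum   = c.conkey[1]
        WHERE c.contype = 'f'                      -- foreign key constraints only
          AND array_length(c.conkey, 1) = 1;       -- single-column keys only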

    Read the article

  • Stored procedure with output parameters vs. table-valued function?

    - by abatishchev
    Which approach is better to use if I need a member (sp or func) returning 2 parameters?

        CREATE PROCEDURE Test
            @in INT,
            @outID INT OUT,
            @amount DECIMAL OUT
        AS
        BEGIN
            ...
        END

    or

        CREATE FUNCTION Test (@in INT)
        RETURNS @ret TABLE (outID INT, amount DECIMAL)
        AS
        BEGIN
            ...
        END

    What are the pros and cons of each approach, considering that the result will be passed to another stored procedure?

        EXEC Foobar @outID, @outAmount
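
    For comparison, a sketch of how each version composes at the call site (note that procedure output parameters require the OUTPUT keyword when consumed):

        -- table-valued function: the result set can be queried,
        -- joined, or inserted directly
        SELECT outID, amount FROM dbo.Test(@in);

        -- stored procedure: the values must be captured in
        -- variables first, then passed along by hand
        DECLARE @outID INT, @amount DECIMAL;
        EXEC Test @in, @outID OUTPUT, @amount OUTPUT;
        EXEC Foobar @outID, @amount;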

    Read the article

  • PHP - SQL query to get update time from table status

    - by Tribalcomm
    This is my PHP code (I already have a connection to the db):

        $array = mysql_query("SHOW TABLE STATUS FROM mytable;");
        while ($array = mysql_fetch_array($result)) {
            $updatetime = $array['Update_time'];
        }
        echo $updatetime;

    I get:

        Warning: mysql_fetch_array(): supplied argument is not a valid MySQL result resource.

    I am running MySQL 5.0.89 and PHP5. I do not want to add a new field to the table... I want to use the table status... Any help? Thanks!
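
    Two things stand out: the mysql_query result is stored in $array while mysql_fetch_array reads $result, which is what triggers the warning; and SHOW TABLE STATUS expects a database name after FROM, with the table selected via LIKE. The corrected statement, with the database name assumed:

        SHOW TABLE STATUS FROM mydatabase LIKE 'mytable';

        -- on MySQL 5.0+ the same value is also available from information_schema:
        SELECT UPDATE_TIME
        FROM information_schema.TABLES
        WHERE TABLE_SCHEMA = 'mydatabase'
          AND TABLE_NAME = 'mytable';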

    Read the article

  • Databinding expression for retrieving value of related collection using LINQ

    - by joshb
    I have a GridView that is bound to a LinqDataSource control that is returning a collection of customers. Within my DataGrid I need to display the home phone number of a customer, if they have one. The phone numbers of a customer are stored in a separate table with a foreign key pointing to the customer table. The following binding expression gets me the first phone number for a customer:

        <asp:TemplateField HeaderText="LastName" SortExpression="LastName">
            <ItemTemplate>
                <asp:Label ID="PhoneLabel" runat="server"
                    Text='<%# Eval("Phones[0].PhoneNumber") %>'></asp:Label>
            </ItemTemplate>
        </asp:TemplateField>

    I need to figure out how to get the home phone number specifically (filter based on phone type) and handle the scenario where the customer does not have a home phone in the database. Right now it's throwing an out-of-range exception if the customer does not have any phone numbers. I've tried using the Where operator with a lambda expression to filter the phone type, but it doesn't work:

        <%# Eval("Phones.Where(p => p.PhoneTypeId == 2).PhoneNumber") %>

    Solutions or links to any good articles on the subject would be much appreciated.

    Read the article

  • How to use database to generate multiple folder content page?

    - by VenomVipes
    Scenario: I am trying to build a mobile entertainment portal. It will enable users to download music & movies to their cell phones...

    Problem example: Suppose I upload 100 folders of songs, each folder being one album. I want a way to generate a page with all the folder names (album names) on it. If a user clicks an album, they should be taken to a page with a list of all the songs in that album. Clicking on any song name will let them download it. Can it be done any way, or will I have to manually design each of the 3 pages for each album? If I do that, it's time consuming and it will also be difficult to change anything like the footer, header...
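
    This is exactly what a small schema plus templated pages avoids hand-building. A sketch with illustrative names: one albums table, one songs table keyed to it, and each page generated by a query instead of designed by hand:

        CREATE TABLE albums (
            album_id INT AUTO_INCREMENT PRIMARY KEY,
            title    VARCHAR(255)
        );

        CREATE TABLE songs (
            song_id   INT AUTO_INCREMENT PRIMARY KEY,
            album_id  INT,
            title     VARCHAR(255),
            file_path VARCHAR(512),
            FOREIGN KEY (album_id) REFERENCES albums (album_id)
        );

        -- album listing page
        SELECT album_id, title FROM albums ORDER BY title;

        -- song listing page for the album the user clicked
        SELECT song_id, title FROM songs WHERE album_id = ?;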

    Read the article

  • How to do INSERT into a table records extracted from another table

    - by Martin
    I'm trying to write a query that extracts and transforms data from a table and then inserts that data into another table. Yes, this is a data warehousing query, and I'm doing it in MS Access. So basically I want some query like this:

        INSERT INTO Table2 (LongIntColumn2, CurrencyColumn2)
        VALUES (SELECT LongIntColumn1, Avg(CurrencyColumn) AS CurrencyColumn1
                FROM Table1
                GROUP BY LongIntColumn1);

    I tried, but I get a syntax error message. What would you do if you wanted to do this?
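
    The likely culprit is the VALUES keyword, which only takes literal row values; when the rows come from a SELECT, it is dropped. A sketch of the usual INSERT ... SELECT form, which Access also accepts:

        INSERT INTO Table2 (LongIntColumn2, CurrencyColumn2)
        SELECT LongIntColumn1, Avg(CurrencyColumn) AS CurrencyColumn1
        FROM Table1
        GROUP BY LongIntColumn1;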

    Read the article

  • How to retrieve the last primary Id from an mdb table?

    - by William
    I have a table with these columns: Id, Name, Age, Class. I am trying to insert a new row into the db like this:

        INSERT INTO MyTable (Name, Age, Class)
        VALUES (@name, @age, @class)

    And I get an exception: "Index or primary key cannot contain a Null value." The question is how to add a new row without knowing the next primary Id, or maybe there is a way to get this Id from the table with the help of another query?
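
    The error suggests Id is not an AutoNumber column, so Jet has nothing to assign. Two sketches: make the column AutoNumber, in which case the INSERT above works as-is and the assigned value can be read back, or compute the next value by hand (simple, but race-prone under concurrent inserts):

        -- after an INSERT on a table with an AutoNumber Id,
        -- on the same connection:
        SELECT @@IDENTITY;

        -- manual alternative:
        SELECT MAX(Id) + 1 AS NextId FROM MyTable;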

    Read the article

  • Service Broker not working after database restore

    - by roryok
    We have a working Service Broker set up on a server, and we're in the process of moving to a new server, but I can't seem to get Service Broker set up on the new box. I have done the obvious (to me) things like enabling the broker on the DB, dropping the route, services, contract, queues and even message types and re-adding them, and setting ALTER QUEUE with STATUS ON.

        SELECT * FROM sys.service_queues

    gives me a list of the queues, including my own two, which show as activation_enabled, receive_enabled, etc. Needless to say, the queues aren't working. When I drop messages into them, nothing goes in and nothing comes out. Any ideas? I'm sure there's something really obvious I've missed...
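
    One thing that commonly bites after a restore: the database keeps its old service broker GUID but message delivery stays disabled, and conversations started before the move still reference the old broker instance. A sketch of the usual checks and fixes (the database name is illustrative):

        -- check whether the broker is actually enabled
        SELECT name, is_broker_enabled, service_broker_guid
        FROM sys.databases
        WHERE name = 'MyDB';

        -- re-enable delivery, keeping the same GUID
        ALTER DATABASE MyDB SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE;

        -- or, if stale conversations are suspected, start a fresh
        -- broker instance (this ends all existing conversations)
        ALTER DATABASE MyDB SET NEW_BROKER WITH ROLLBACK IMMEDIATE;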

    Read the article

  • Error using IIF in MS Access query

    - by naveen
    I am trying to run this query in MS Access:

        SELECT file_number,
               IIF(invoice_type = 'Spent on Coding', SUM(CINT(invoice_amount)), 0) AS CodingExpense
        FROM invoice
        GROUP BY file_number

    I am getting this error:

        Error in list of function arguments: '=' not recognized.
        Unable to parse query text.

    I tried replacing IIF with SWITCH to no avail. What's wrong with my query, and how do I correct it?
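
    Since invoice_type is neither aggregated nor in the GROUP BY, the IIF cannot be evaluated per group. The usual rearrangement is to put the IIF inside the SUM, so each row contributes either its amount or 0:

        SELECT file_number,
               SUM(IIF(invoice_type = 'Spent on Coding', CINT(invoice_amount), 0)) AS CodingExpense
        FROM invoice
        GROUP BY file_number;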

    Read the article

  • Why is this PostgreSQL query so slow?

    - by user315975
    I'm no database expert, but I have enough knowledge to get myself into trouble, as is the case here. This query is extremely slow:

        SELECT DISTINCT p.*
        FROM points p, areas a, contacts c
        WHERE (p.latitude > 43.6511659465 AND p.latitude < 43.6711659465
               AND p.longitude > -79.4677941889 AND p.longitude < -79.4477941889)
          AND p.resource_type = 'Contact'
          AND c.user_id = 6

    The points table has fewer than 2000 records, but it takes about 8 seconds to execute. There are indexes on the latitude and longitude columns. Removing the clauses concerning the resource_type and user_id makes no difference. The latitude and longitude fields are both formatted as number(15,10); I need the precision for some calculations. There are many, many other queries in this project where points are compared, but no execution-time problems. What's going on?
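
    Nothing in the WHERE clause relates points to areas or contacts, so the planner has to build a cartesian product of the three tables before applying DISTINCT, which explains the slowdown even on a small points table. If the contacts filter is meant to constrain points, the tables need an explicit join condition; a sketch, with the joining column assumed:

        SELECT DISTINCT p.*
        FROM points p
        JOIN contacts c ON c.id = p.contact_id   -- assumed relationship
        WHERE p.latitude > 43.6511659465 AND p.latitude < 43.6711659465
          AND p.longitude > -79.4677941889 AND p.longitude < -79.4477941889
          AND p.resource_type = 'Contact'
          AND c.user_id = 6;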

    Read the article
