Search Results

Search found 4815 results on 193 pages for 'parameterized queries'.

Page 128/193

  • Use LINQ to count the number of combinations existing in two lists

    - by Ben McCormack
    I'm trying to create a LINQ query (or queries) that counts the total number of occurrences of a combination of items from one list that exist in a different list. For example, take the following lists:

        CartItems   DiscountItems
        =========   =============
        AAA         AAA
        AAA         BBB
        AAA
        BBB
        BBB
        CCC
        CCC
        DDD

    The result of the query operation should be 2, since I can find two combinations of AAA and BBB (from DiscountItems) within the contents of CartItems. My thinking in approaching the query is to join the lists together to shorten CartItems to only the items that also appear in DiscountItems. The solution would then be to find the CartItem in the resulting query that occurs the fewest times, which indicates how many combinations of items exist in CartItems. How can this be done? Here's the query I already have, but it's not working: query results in an enumeration with 100 items, far more than I expected.

        Dim query = From cartItem In Cart.CartItems
                    Group Join discountItem In DiscountGroup.DiscountItems
                        On cartItem.SKU Equals discountItem.SKU Into Group
                    Select SKU = cartItem.SKU, CartItems = Group

        Return query.Min(Function(x) x.CartItems.Sum(Function(y) y.Quantity))
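
    One possible approach, as a minimal sketch only (SKU and Quantity come from the question; the counting strategy is an assumption and the code is untested): for each SKU the discount requires, work out how many complete sets the cart can supply, then take the smallest of those counts.

        ' For each required SKU, integer-divide the quantity in the cart by the
        ' quantity the discount needs; the minimum across SKUs is the number of
        ' complete AAA+BBB combinations the cart can form.
        Dim combinations =
            DiscountGroup.DiscountItems.
                GroupBy(Function(d) d.SKU).
                Select(Function(g) Cart.CartItems.
                           Where(Function(c) c.SKU = g.Key).
                           Sum(Function(c) c.Quantity) \ g.Count()).
                DefaultIfEmpty(0).
                Min()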

    Read the article

  • C# mysqlreader on same connection error

    - by dominiquel
    Hi, I must find a way to do this in C#, if possible... I must loop over my folder list (a MySQL table), and for each folder I instantiate I must run another query, but when I do this it says: "There is already an open DataReader associated with this Connection", and I am already inside a MySqlDataReader loop. Note that I have oversimplified the code just to show you; the point is that I must run queries inside a MySqlDataReader loop, and it looks to be impossible as long as they are on the same connection?

        MySqlConnection cnx = new MySqlConnection(connexionString);
        cnx.Open();
        MySqlCommand command = new MySqlCommand("SELECT * FROM folder WHERE folder_id = " + id, cnx);
        MySqlDataReader reader = command.ExecuteReader();
        while (reader.Read())
        {
            this.folderList[this.folderList.Length] = new CFolder(reader.GetInt32("folder_id"), cnx);
        }
        reader.Close();
        cnx.Close();
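
    A minimal sketch of one common workaround (untested; CFolder, connexionString, id and the schema come from the question): buffer the first result set before issuing further commands, and bind the id as a parameter instead of concatenating it.

        // Read all folder ids first, close the reader, and only then run the
        // per-folder work, so the connection never serves two readers at once.
        var folderIds = new List<int>();
        using (var cnx = new MySqlConnection(connexionString))
        {
            cnx.Open();
            using (var command = new MySqlCommand(
                "SELECT folder_id FROM folder WHERE folder_id = @id", cnx))
            {
                command.Parameters.AddWithValue("@id", id);   // parameterized query
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                        folderIds.Add(reader.GetInt32(reader.GetOrdinal("folder_id")));
                }   // reader closed here, the connection is free again
            }

            foreach (int folderId in folderIds)
            {
                var folder = new CFolder(folderId, cnx);      // safe: no open reader now
            }
        }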

    Read the article

  • Understanding Plesk Watchdog statistics

    - by weotch
    We have Plesk 8.3 installed. I've started using their Watchdog module to track server usage. Our server routinely has trouble with the amount of traffic we get, and I think our MySQL queries need to be smarter. Anyway, looking at the stats from Watchdog, it seems like MySQL usage is low compared to something else that makes up the "overall" usage. See this: [screenshot not included in this excerpt]. I was hoping someone with a lot of Plesk experience could help me understand what I'm seeing here. Can I not trust Watchdog's reports, or am I missing something?

    Read the article

  • Database Design Question

    - by deniz
    Hi, I am designing a database for a project. I have a table that has 10 columns, most of which are used whenever the table is accessed, and I need to add 3 more columns:

        View Count
        Thumbs Up (count)
        Thumbs Down (count)

    which will be used in about 90% of the queries against the table. So, my question is whether it is better to break the table up and create a new table which will hold these 3 columns + a foreign key, or just make it 13 columns and use no joins? Since these columns will be used frequently, I guess adding 3 more columns is better, but if I later need to add 10 more columns which will be used 90% of the time, should I add them as well, or create a new table and use joins? I am not sure when to break the table up if the columns are used very frequently. Do you have any suggestions? Thanks in advance,
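
    For reference, a hedged sketch of the two layouts being weighed (table and column names are illustrative only):

        -- Option A: keep everything in one 13-column table (no join needed)
        CREATE TABLE item (
            id          INT PRIMARY KEY,
            -- ... the existing 10 columns ...
            view_count  INT NOT NULL DEFAULT 0,
            thumbs_up   INT NOT NULL DEFAULT 0,
            thumbs_down INT NOT NULL DEFAULT 0
        );

        -- Option B: split the counters into a 1:1 table keyed by the parent id
        CREATE TABLE item_stats (
            item_id     INT PRIMARY KEY REFERENCES item (id),
            view_count  INT NOT NULL DEFAULT 0,
            thumbs_up   INT NOT NULL DEFAULT 0,
            thumbs_down INT NOT NULL DEFAULT 0
        );

        -- Option B then pays for a join on ~90% of reads:
        -- SELECT ... FROM item JOIN item_stats ON item_stats.item_id = item.id;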

    Read the article

  • Can I expect a performance gain from removing this JOIN?

    - by makeee
    I have a "items" table with 1 million rows and a "users" table with 20,000 rows. When I select from the "items" table I do a join on the "users" table (items.user_id = user.id), so that I can grab the "username" from the users table. I'm considering adding a username column to the items table and removing the join. Can I expect a decent performance increase from this? It's already quite fast, but it would be nice to decrease my load (which is pretty high). The downside is that if the user changes their username, items will still reflect their old username, but this is okay with me if I can expect a decent performance increase. I'm asking stackoverflow because benchmarks aren't telling me too much. Both queries finish very quickly. Regardless, I'm wondering if removing the join would lighten load on the database to any significant degree.

    Read the article

  • Valid Email Addresses - XSS and SQL Injection

    - by PAAMAYIM_NEKUDOTAYIM
    Since there are so many valid characters for email addresses, are there any valid email addresses that can in themselves be XSS attacks or SQL injections? I couldn't find any information on this on the web. The local part of the e-mail address may use any of these ASCII characters:

        - Uppercase and lowercase English letters (a–z, A–Z)
        - Digits 0 to 9
        - Characters ! # $ % & ' * + - / = ? ^ _ ` { | } ~
        - Character . (dot, period, full stop), provided that it is not the last character and that it does not appear two or more times consecutively (e.g. [email protected])

    http://en.wikipedia.org/wiki/E-mail_address#RFC_specification

    I'm not asking how to prevent these attacks (I'm already using parameterized queries and HTML Purifier); this is more a proof of concept. The first thing that came to mind was 'OR [email protected], except that spaces are not allowed. Do all SQL injections require spaces?
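
    As a proof of concept of why bound parameters make the injection side moot, a minimal PDO sketch (the DSN, table and sample address are illustrative assumptions, and the address is not necessarily RFC-valid):

        <?php
        // Even a hostile-looking local part is inert when it is bound as a
        // parameter rather than concatenated into the SQL text.
        $email = "'OR'1'='1'@example.com";   // illustrative value only

        $pdo  = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
        $stmt = $pdo->prepare('SELECT * FROM users WHERE email = :email');
        $stmt->execute(array(':email' => $email));   // the value never becomes SQL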

    Read the article

  • Custom SQL Server driver

    - by hoodoos
    I had a crazy thought about writing my own SQL Server driver to make it work something like a non-blocking HTTP client, so it won't be so thread-hungry and could handle lots of DB queries within one thread. I tried to look around Google for some guidelines about implementing the SQL Server client protocol, but found none really. Where do those guys get information about it when they write their own implementations for PHP or Python? I need the really low level to be documented so I can implement all phases of working with a connection through sockets. And it would be really nice to have an example in the C# language. :)

    Read the article

  • Python - Using "Google AJAX Search" API's Local Search Objects

    - by user330739
    Hi! I've just started using Google's search API to find addresses and the distances between those addresses. I used geopy for this, but I often had the problem of not getting the correct addresses for my queries. I decided, therefore, to experiment with Google's "Local Search" (http://code.google.com/apis/ajaxsearch/local.html). Anyway, I wanted to ask if I can use the "Local Search" objects provided by the API from within Python. Something tells me that I can't and that I have to use JSON. Does anyone know if there is a workaround? PS: I'm trying to make something like this: http://www.google.com/uds/samples/random/lead.html ... except a matrix-type deal where the cells will be filled with distances between the addresses. Thanks for reading!

    Read the article

  • Help Forming An SQL Query That Selects The Max Difference Of Two Fields

    - by Frank
    I'm trying to select the record with the most effective votes. Each record has an id, the number of upvotes (int), and the number of downvotes (int) in a MySQL database. I know basic UPDATE, SELECT, and INSERT queries, but I'm unsure of how to form a query that looks something like:

        SELECT * FROM topics WHERE MAX(topic.upvotes - topic.downvotes)

    Please excuse my made-up SQL. The tutorials on SQL I find on the internet cover very basic stuff. Does anyone recommend a good book on this subject?
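
    A minimal sketch of one way to express this in MySQL (column names taken from the question; ties are broken arbitrarily):

        -- Order by the vote difference and keep only the top row.
        SELECT *, (upvotes - downvotes) AS effective_votes
        FROM topics
        ORDER BY effective_votes DESC
        LIMIT 1;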

    Read the article

  • Storing search result for paging and sorting

    - by Mattias
    I've been implementing MS Search Server 2010 and so far it's really good. I'm doing the search queries via their web service, but due to the inconsistent results, I'm thinking about caching the results instead. The site is a small intranet (500 employees), so it shouldn't be any problem, but I'm curious what approach you would take if it were a bigger site. I've googled a bit, but haven't really come across anything specific. So, a few questions:

        - What other approaches are there? And why are they better?
        - How much does it cost to store a dataview of 400-500 rows? What sizes are feasible?
        - Other points you should take into consideration.

    Any input is welcome :)

    Read the article

  • HQL updates and domain objects

    - by CaptainAwesomePants
    I have what may be a pretty elementary Hibernate question. Do HQL (and/or Criteria) update queries cause updates to live domain objects? And do they automatically flush now-invalid domain objects from the first-level cache? Example:

        Player playerReference1 = session.get(Player.class, 1);
        session.createQuery("update players set gold = 100").executeUpdate();
        // Question #1 -- does playerReference1.getGold() now return 100?

        Player playerReference2 = session.get(Player.class, 1);
        // Question #2 -- does playerReference2.getGold() return 100, or is it the same exact object?

    Should I make a practice of evicting all objects that are affected by an HQL update if there's a chance some code will need them later?
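
    A hedged sketch of the usual precaution (assumes the entity is mapped as Player; whether you refresh one instance or clear the whole session depends on how much state you want to keep): bulk HQL updates go straight to the database and do not touch objects already sitting in the first-level cache.

        Player player = (Player) session.get(Player.class, 1);

        session.createQuery("update Player set gold = 100").executeUpdate();

        // Option 1: re-read just this instance from the database.
        session.refresh(player);

        // Option 2: detach everything so later get()/load() calls hit the database.
        // session.clear();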

    Read the article

  • is there a tool to see the difference between two database tables in mssql?

    - by reinier
    What is a good tool to see the differences between 2 tables (or even better, between the datasets returned by 2 queries)? EDIT: I'm not interested in schema changes; just assume that the schemas are the same. Background as to why: I'm porting some legacy code which can fill a database with some pre-calculated data. The easiest way to see if I got everything right is to check the output of the old program against the new one. I was thinking that if there is some kind of 'diff' tool for databases, this might be great.
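
    If a plain query is acceptable instead of a tool, a minimal T-SQL sketch (assumes the two tables, or the two query results, have compatible column lists; names are illustrative):

        -- Rows present in the old output but missing from the new one ...
        SELECT * FROM old_output
        EXCEPT
        SELECT * FROM new_output;

        -- ... and rows the new output has that the old one does not.
        SELECT * FROM new_output
        EXCEPT
        SELECT * FROM old_output;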

    Read the article

  • Reuse select query in a procedure in Oracle

    - by Jer
    How would I store the result of a select statement so I can reuse the results in an IN clause for other queries? Here's some pseudo code:

        declare
          ids <type?>;
        begin
          ids := select id from table_with_ids;

          select * from table1 where id in (ids);
          select * from table2 where id in (ids);
        end;

    ... or will the optimizer do this for me if I simply put the sub-query in both select statements?
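
    A minimal PL/SQL sketch of one way to do it explicitly (assumes a schema-level collection type and numeric ids; in many cases simply repeating the sub-query and letting the optimizer handle it is just as good):

        CREATE TYPE id_list AS TABLE OF NUMBER;
        /
        DECLARE
          ids id_list;
        BEGIN
          SELECT id BULK COLLECT INTO ids FROM table_with_ids;

          FOR r IN (SELECT * FROM table1
                    WHERE id IN (SELECT column_value FROM TABLE(ids))) LOOP
            NULL;  -- process table1 rows here
          END LOOP;

          FOR r IN (SELECT * FROM table2
                    WHERE id IN (SELECT column_value FROM TABLE(ids))) LOOP
            NULL;  -- process table2 rows here
          END LOOP;
        END;
        /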

    Read the article

  • Proper abstraction of the database tier in a 3 tier system?

    - by Earlz
    Hello, I am creating a 3-tier application. Basically it goes: Client - (through an optional server, to allow a thin client) - Business Logic - Database Layer, and I'm making it so that there is never any skipping around. As such, I want all of the SQL queries and such to live in the database layer. Well, now I'm a bit confused. I made a few static classes to start off the database tier, but what should I do for the database connections? Should I just create a new database connection any time I enter the database layer, or would that be wasteful? Does Connection.Open() take time even when you have a connection pool? To me, it just feels wrong for the business tier to have to pass an IDbConnection object to the database tier. It seems like the database tier should handle all of that DB-specific code. What do you think? How can I do it the proper way while staying practical?
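
    A minimal sketch of the "connection per call" pattern the question hints at (assumes ADO.NET with a pooled provider; the repository name and query are illustrative):

        // The database tier owns the connection string and the connection lifetime.
        // With pooling enabled (the default), Open() usually just borrows a pooled
        // connection, so open-late/close-early per call is cheap.
        public static class CustomerRepository
        {
            private const string ConnectionString = "...";   // kept out of the business tier

            public static int CountCustomers()
            {
                using (var connection = new SqlConnection(ConnectionString))
                using (var command = new SqlCommand("SELECT COUNT(*) FROM Customers", connection))
                {
                    connection.Open();
                    return (int)command.ExecuteScalar();
                }
            }
        }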

    Read the article

  • HQL multiple updates. Is there a better way?

    - by folone
    I have a Map that I want to persist. The domain object is something like this:

        public class Settings {
            private String key;
            private String value;

            public String getKey() { ... }
            public String getValue() { ... }
            public void setKey(String key) { ... }
            public void setValue(String value) { ... }
        }

    The standard approach is to generate a Setting for each pair and saveOrUpdate() it. But that generates way too many queries, because I need to save lots of settings at a time, and it really affects performance. Is there a way to do this using one update query?
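
    Probably not as literally one query through plain Hibernate, but a hedged sketch of the usual mitigation is JDBC batching plus periodic flush/clear (sessionFactory and settingsMap are assumed to exist; the batch size is illustrative and is also set as hibernate.jdbc.batch_size in the configuration):

        Session session = sessionFactory.openSession();
        Transaction tx = session.beginTransaction();

        int i = 0;
        for (Map.Entry<String, String> entry : settingsMap.entrySet()) {
            Settings s = new Settings();
            s.setKey(entry.getKey());
            s.setValue(entry.getValue());
            session.saveOrUpdate(s);

            if (++i % 50 == 0) {      // same value as the configured batch size
                session.flush();      // push the batched statements to the database
                session.clear();      // keep the first-level cache from growing
            }
        }

        tx.commit();
        session.close();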

    Read the article

  • Update query with conditional?

    - by dmontain
    I'm not sure if this is possible. If not, let me know. I have a PDO MySQL query that updates 3 fields:

        $update = $mypdo->prepare("UPDATE tablename SET field1=:field1, field2=:field2, field3=:field3 WHERE key=:key");

    But I want field3 to be updated only when $update3 == true (meaning that the update of field3 is controlled by a conditional statement). Is this possible to accomplish with a single query? I could do it with 2 queries, where I update field1 and field2, then check the boolean and update field3 if needed in a separate query:

        //run this query to update only fields 1 and 2
        $update_part1 = $mypdo->prepare("UPDATE tablename SET field1=:field1, field2=:field2 WHERE key=:key");

        //if field3 should be updated, run a separate query to update it separately
        if ($update3){
            $update_part2 = $mypdo->prepare("UPDATE tablename SET field3=:field3 WHERE key=:key");
        }

    But hopefully there is a way to accomplish this in 1 query?
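
    A minimal sketch of one single-query approach, building the SET list before preparing (names come from the question; untested):

        <?php
        $sql    = "UPDATE tablename SET field1 = :field1, field2 = :field2";
        $params = array(':field1' => $field1, ':field2' => $field2, ':key' => $key);

        if ($update3) {                      // only touch field3 when asked to
            $sql .= ", field3 = :field3";
            $params[':field3'] = $field3;
        }
        $sql .= " WHERE `key` = :key";       // `key` is a reserved word in MySQL

        $update = $mypdo->prepare($sql);
        $update->execute($params);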

    Read the article

  • PHP - Too many mysql_query("SELECT .. ") .. ?

    - by Mike
    Hey, I'm making an e-shop, and to display the tree of categories and all the products with their multiple variations of prices I make more than 150 mysql_query("SELECT ...") calls on one page (if I count the "while" loops). Is that too many, and if so, can it have any negative effect? (Of course it takes longer to load the data...) Also, can I somehow achieve the effect of this code without doing it this way?

        $result2 = mysql_query("SELECT * FROM ceny WHERE produkt_id='$id' ORDER BY gramaz");
        $result3 = mysql_query("SELECT * FROM ceny WHERE produkt_id='$id' ORDER BY gramaz");
        $result4 = mysql_query("SELECT * FROM ceny WHERE produkt_id='$id' ORDER BY gramaz");
        $result5 = mysql_query("SELECT * FROM ceny WHERE produkt_id='$id' ORDER BY gramaz");

        while( $row2 = mysql_fetch_array( $result2 )) { }
        while( $row3 = mysql_fetch_array( $result2 )) { }
        while( $row4 = mysql_fetch_array( $result2 )) { }
        while( $row5 = mysql_fetch_array( $result2 )) { }

    Thanks, Mike.
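
    A minimal sketch of the usual fix: run the query once, buffer the rows, and loop over the array as many times as needed (the mysql_* functions match the question's era but are deprecated in current PHP):

        <?php
        $result = mysql_query(
            "SELECT * FROM ceny WHERE produkt_id='" . mysql_real_escape_string($id) . "' ORDER BY gramaz"
        );

        $ceny = array();
        while ($row = mysql_fetch_array($result)) {
            $ceny[] = $row;                  // buffer every price row once
        }

        foreach ($ceny as $row) { /* first pass  */ }
        foreach ($ceny as $row) { /* second pass */ }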

    Read the article

  • ASP.NET Thread Safety in aspx.cs code behind file

    - by Tim Michalski
    I am thinking of adding a DataContext as a member variable to my aspx.cs code-behind class for executing LinqToSql queries. Is this thread safe? I am not sure if a new instance of this code-behind class is created for each HTTP request, or if the instance is shared amongst all request threads? My fear is that I will get 10 simultaneous concurrent http requests that will be using the same database session.

        public partial class MyPage : System.Web.UI.Page
        {
            private DataContext myDB = new DataContext();

            protected void MyAction_Click(object sender, EventArgs e)
            {
                myDB.DoWork();
            }
        }
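
    For what it's worth, a hedged sketch of the more defensive pattern: scope the context to the unit of work instead of the page (DataContext and DoWork are the names from the question).

        public partial class MyPage : System.Web.UI.Page
        {
            protected void MyAction_Click(object sender, EventArgs e)
            {
                // One short-lived context per operation: nothing is shared between
                // requests, and the context is disposed deterministically.
                using (var db = new DataContext())
                {
                    db.DoWork();
                }
            }
        }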

    Read the article

  • Parsing HTML with XPath and PHP

    - by Peter
    Is there a way (using XPath and PHP) to do the following (WITHOUT external XSLT files)?

        - Remove all tables and their contents
        - Remove everything after the first h1 tag
        - Keep only paragraphs (INCLUDING their inner HTML (links, lists, etc))

    I received an XSLT answer here, but I'm looking for XPath queries that don't require external files. Currently, I've got the HTML in question loaded into a SimpleXmlElement via:

        $doc = @DOMDocument::loadHTML($xml);
        $data = simplexml_import_dom($doc);

    Now I need help with:

        $data = $data->xpath('??????');

    Been working on this one for several days to no avail. I really appreciate the help. Edit: I don't particularly care what's inside the paragraphs, as I can use strip_tags to eliminate what I don't want. All I need to do is isolate the paragraphs from the rest of the source. I suppose a more specific, accurate requirement would be this: return only paragraphs (and their HTML contents) that aren't contained in tables, and only before the first h1 tag.
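
    A minimal sketch of one XPath expression that appears to match the final requirement (untested against the real markup): paragraphs with no table ancestor and no h1 anywhere before them in document order.

        <?php
        $doc = new DOMDocument();
        @$doc->loadHTML($html);              // $html holds the source markup
        $xpath = new DOMXPath($doc);

        // <p> elements that are not inside a table and occur before the first <h1>
        $paragraphs = $xpath->query('//p[not(ancestor::table) and not(preceding::h1)]');

        foreach ($paragraphs as $p) {
            echo $doc->saveXML($p);          // keeps the paragraph's inner HTML
        }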

    Read the article

  • Best data store for billions of rows

    - by Jody Powlette
    I need to be able to store small bits of data (approximately 50-75 bytes) for billions of records (~3 billion/month for a year). The only requirements are fast inserts, fast lookups for all records with the same GUID, and the ability to access the data store from .NET. I'm a SQL Server guy and I think SQL Server can do this, but with all the talk about BigTable, CouchDB, and other NoSQL solutions, it's sounding more and more like an alternative to a traditional RDBMS may be best, due to optimizations for distributed queries and scaling. I tried Cassandra, and the .NET libraries don't currently compile or are all subject to change (along with Cassandra itself). I've looked into many NoSQL data stores available, but can't find one that meets my needs as a robust production-ready platform. If you had to store 36 billion small, flat records so that they're accessible from .NET, what would you choose and why?

    Read the article

  • Aggregate functions in ANSI SQL

    - by morpheous
    I want to use multiple aggregate functions in a query. All the examples I have seen of aggregate functions, however, are trivial. Typically, they are of the form:

        SELECT field1, agg_func1, agg_func2
        GROUP BY SOME_COLUMNS
        HAVING agg_func1 OP SOME_SCALAR

    Where:

        OP: a boolean operator (e.g. <, = etc.)
        SOME_SCALAR: a scalar (i.e. a constant number)

    What I want to know is whether it is possible to write (in ANSI SQL) queries like:

        SELECT field1, agg_func1, agg_func2, agg_func3
        GROUP BY SOME_COLUMNS
        HAVING (agg_func1 OP1 agg_func2) OP2 (agg_func2 OP3 agg_func3)

    where OP[N] are boolean operators or ANSI SQL clause operators like 'BETWEEN', 'LIKE', 'IN' etc. Also, assuming this is possible (I have not seen any documentation saying otherwise), are there any efficiency/performance considerations (i.e. penalties) when the HAVING clause consists of a boolean expression combining the output of the aggregate functions, instead of the normal comparison of an aggregate's output with a constant number (e.g. min('salary') > 100) that is used in the most banal examples involving aggregate functions?
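
    For what it's worth, HAVING can hold an arbitrary boolean condition over aggregates; a minimal concrete sketch (illustrative table and columns):

        SELECT department, MIN(salary), AVG(salary), MAX(salary)
        FROM employees
        GROUP BY department
        HAVING MIN(salary) < AVG(salary) / 2                         -- aggregate vs. aggregate
           AND MAX(salary) BETWEEN AVG(salary) AND AVG(salary) * 3;  -- aggregate inside BETWEEN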

    Read the article

  • Custom keys for Google App Engine models (Python)

    - by Cameron
    First off, I'm relatively new to Google App Engine, so I'm probably doing something silly. Say I've got a model Foo:

        class Foo(db.Model):
            name = db.StringProperty()

    I want to use name as a unique key for every Foo object. How is this done? When I want to get a specific Foo object, I currently query the datastore for all Foo objects with the target unique name, but queries are slow (plus it's a pain to ensure that name is unique when each new Foo is created). There's got to be a better way to do this! Thanks.
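
    A minimal sketch of the usual pattern on the (old) db API: store the name as the entity's key_name and fetch by key instead of querying; get_or_insert also guards against creating duplicates. The literal name used here is illustrative.

        from google.appengine.ext import db

        class Foo(db.Model):
            name = db.StringProperty()

        # Create with the name as the key_name (a key lookup later, not a query).
        foo = Foo.get_or_insert(key_name='some-unique-name', name='some-unique-name')

        # Fetch by key -- much cheaper than filtering on the name property.
        same_foo = Foo.get_by_key_name('some-unique-name')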

    Read the article

  • PHP e-commerce site talking to internal database for stock / ordering?

    - by CitrusTree
    Hi. I'm working on an e-commerce site (either bespoke with PHP, or using Drupal/Ubercart), and I'd like to investigate the site interacting with an internal (FileMaker) database we use to manage stock and orders. Currently we manually transfer orders from the web site to our own database, and the site does not check or record changes in stock. My plan to allow the two to interact is as follows:

        - Make the internal database available externally on a machine with a fixed IP
        - Allow external access from the site only
        - Connect to the internal database using ODBC (or similar)
        - Use simple queries to check stock / record stock changes / record order details

    Am I missing something here, as this sounds quite straightforward? Is there another solution I should be taking a look at? Thanks in advance for any help or comments.

    Read the article

  • INET_ATON() and INET_NTOA() in PHP?

    - by blerh
    I want to store IP addresses in my database, but I also need to use them throughout my application. I read about using INET_ATON() and INET_NTOA() in my MySQL queries to get a 32-bit unsigned integer out of an IP address, which is exactly what I want, as it will make searching through the database faster than using char(15). The thing is, I can't find a function that does the same sort of thing in PHP. The only thing I came across is http://php.net/manual/en/function.ip2long.php, so I tested it:

        $ip = $_SERVER['REMOTE_ADDR'];
        echo ip2long($ip);

    And it outputs nothing. In the example they gave it seems to work, but then again I'm not exactly sure if ip2long() does the same thing as INET_ATON(). Does someone know a PHP function that will do this? Or even a completely different solution for storing an IP address in a database? Thanks.
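
    A minimal sketch of the usual equivalence (the %u trick matters on 32-bit builds, where ip2long() can return a negative signed integer; the address is a documentation-range example):

        <?php
        $ip = '192.0.2.1';

        $signed   = ip2long($ip);              // may be negative on 32-bit PHP
        $unsigned = sprintf('%u', $signed);    // now matches MySQL's INET_ATON('192.0.2.1')
        echo $unsigned;                        // 3221225985

        echo long2ip((int) $signed);           // back to '192.0.2.1', like INET_NTOA()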

    Read the article

  • which is better, creating a materialized view or a new table?

    - by Carson
    I have some demanding MySQL queries that need to grab the same up-to-date datasets from 5-7 MySQL tables. I am thinking of creating a table or materialized view to gather all the demanding columns from the other tables, so as to increase performance. If I create that table, I may need to do extra insert / update / delete operations each time the other tables are updated. If I create a materialized view, I am worried about whether performance can really be improved, because the data in the other tables changes very frequently; most likely, the view would need to be rebuilt every time before selecting from it. Any ideas? e.g. how to cache? Other extra measures I can take?
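
    Worth noting that MySQL has no built-in materialized views, so a "materialized view" there usually means a summary table you refresh yourself; a hedged sketch with illustrative names and an arbitrary refresh interval (CREATE EVENT needs the event scheduler enabled, and the BEGIN...END body needs an alternative DELIMITER in the mysql client):

        -- Summary table built once from the demanding join.
        CREATE TABLE report_summary AS
        SELECT o.id, o.total, c.name        -- the demanding columns
        FROM orders o
        JOIN customers c ON c.id = o.customer_id;

        -- Periodic refresh via the event scheduler.
        CREATE EVENT refresh_report_summary
        ON SCHEDULE EVERY 5 MINUTE
        DO
          BEGIN
            TRUNCATE TABLE report_summary;
            INSERT INTO report_summary
            SELECT o.id, o.total, c.name
            FROM orders o
            JOIN customers c ON c.id = o.customer_id;
          END;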

    Read the article
