Search Results

Search found 4815 results on 193 pages for 'parameterized queries'.

  • Is it OK to allow users to query an OLTP SQL Server database with Excel?

    - by user169867
    I have a SQL Server 2005 database used by several applications. Some users wish to query the database directly from Excel. I can understand this, because it is a useful tool for ad hoc queries, and the results are in a format that's easily transmitted and manipulated by other users. My question is: does Excel (say 2003/2007) do its querying in a way that won't cause concurrency issues? Or would a separate data warehouse database need to be made to handle this scenario? Thanks for any advice.

  • Help Forming An SQL Query That Selects The Max Difference Of Two Fields

    - by Frank
    I'm trying to select the record with the most effective votes. Each record has an id, the number of upvotes (int), and the number of downvotes (int) in a MySQL database. I know basic UPDATE, SELECT, and INSERT queries, but I'm unsure how to form a query that looks something like:

        SELECT * FROM topics WHERE MAX(topic.upvotes - topic.downvotes)

    Please excuse my made-up SQL. The tutorials on SQL I find on the internet cover very basic material. Does anyone recommend a good book on this subject?
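
    A working form of this query, as a minimal sketch: MySQL has no WHERE MAX(...) construct, but ordering by the computed difference and taking the top row achieves the same thing. Table and column names are taken from the question.

        -- Select the topic whose upvotes minus downvotes is largest
        SELECT *
        FROM topics
        ORDER BY (upvotes - downvotes) DESC
        LIMIT 1;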

  • Zend without Database

    - by dbemerlin
    Hi, I've googled for an hour now, but maybe my Google-Fu is just too weak; I couldn't find a solution. I want to create an application that queries a service via JSON requests (all data and backend/business logic are stored in the service). With plain PHP it's simple enough, since I just make a cURL request, json_decode the result, and get what I need. This already works quite well. A request might look like this:

        Call http://service-host/userlist with body: {"logintoken": "123456-1234-5678-901234"}
        Result:
        {
          "status": "Ok",
          "userlist": [
            {"name": "foo", "id": 1},
            {"name": "bar", "id": 2}
          ]
        }

    Now we want to move that into the Zend Framework, since it's a hobby project and we want to learn about Zend. The problem is that all the information I could find uses a database. Is there even a way to create a Zend project that does not use a database? And how can I write a model that represents the actions instead of objects and object relations?

  • C# MySqlDataReader on same connection error

    - by dominiquel
    Hi, I must find a way to do this in C#, if possible... I must loop over my folder list (MySQL table), and for each folder I instantiate I must run another query, but when I do this it says: "There is already an open DataReader associated with this Connection" — and I am inside a MySqlDataReader loop already. Note that I have oversimplified the code just to show you; the point is that I must run queries inside a MySqlDataReader loop, and it looks to be impossible when they are on the same connection?

        MySqlConnection cnx = new MySqlConnection(connexionString);
        cnx.Open();
        MySqlCommand command = new MySqlCommand("SELECT * FROM folder WHERE folder_id = " + id, cnx);
        MySqlDataReader reader = command.ExecuteReader();
        while (reader.Read())
        {
            this.folderList[this.folderList.Length] = new CFolder(reader.GetInt32("folder_id"), cnx);
        }
        reader.Close();
        cnx.Close();

  • Use LINQ to count the number of combinations existing in two lists

    - by Ben McCormack
    I'm trying to create a LINQ query (or queries) that counts the total number of occurrences of a combination of items in one list that exist in a different list. For example, take the following lists:

        CartItems   DiscountItems
        =========   =============
        AAA         AAA
        AAA         BBB
        AAA
        BBB
        BBB
        CCC
        CCC
        DDD

    The result of the query operation should be 2, since I can find two combinations of AAA and BBB (from DiscountItems) within the contents of CartItems. My thinking in approaching the query is to join the lists together to shorten CartItems to only include items from DiscountItems. The solution would be to find the CartItem in the resulting query that occurs the least number of times, thus indicating how many combinations of items exist in CartItems. How can this be done? Here's the query I already have, but it's not working; query results in an enumeration with 100 items, far more than I expected.

        Dim query = From cartItem In Cart.CartItems
                    Group Join discountItem In DiscountGroup.DiscountItems
                    On cartItem.SKU Equals discountItem.SKU Into Group
                    Select SKU = cartItem.SKU, CartItems = Group

        Return query.Min(Function(x) x.CartItems.Sum(Function(y) y.Quantity))

  • Regex for finding valid Sphinx fields

    - by mlissner
    I'm trying to validate that the fields given to Sphinx are valid, but I'm having difficulty. Imagine that the valid fields are cat, mouse, dog, puppy. Valid searches would then be:

        @cat search terms
        @(cat) search terms
        @(cat, dog) search term
        @cat searchterm1 @dog searchterm2
        @(cat, dog) searchterm1 @mouse searchterm2

    So, I want to use a regular expression to find terms such as cat, dog, and mouse in the above examples, and check them against a list of valid terms. Thus, a query such as:

        @(goat)

    would produce an error, because goat is not a valid term. I've gotten to the point where I can find simple queries such as @cat with this regex:

        (?:@)([^( ]*)

    But I can't figure out how to find the rest. I'm using Python & Django, for what that's worth.

  • Is there a tool to see the difference between two database tables in MSSQL?

    - by reinier
    What is a good tool to see the differences between two tables (or even better, the datasets returned by two queries)? EDIT: I'm not interested in schema changes; just assume that the schemas are the same. Background as to why: I'm porting some legacy code which can fill a database with some pre-calculated data. The easiest way to see if I got everything right is to check the output of the old program against the new one. I was thinking that if there is some kind of 'diff' tool for databases, this might be great.
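
    For the data-only comparison, a minimal sketch using the EXCEPT operator (available in SQL Server 2005 and later); the table names are placeholders:

        -- Rows produced by the old program that the new one is missing
        SELECT * FROM old_output
        EXCEPT
        SELECT * FROM new_output;

        -- Rows produced by the new program that the old one never had
        SELECT * FROM new_output
        EXCEPT
        SELECT * FROM old_output;

    If both queries return no rows, the two tables hold identical data (assuming identical schemas, as stated above).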

  • Using indexes on/through a MySQL view

    - by Peeja
    We've got a MySQL table in which rows are never updated, but instead new rows are added and the old ones marked obsolete. Think Rails' acts_as_paranoid, but for every update. To make working with Rails sane, we've got a view which selects only the rows which are "current". That makes a much better "table" for our ActiveRecord model. The snag: our indexes aren't being used anymore. Queries on the view don't use the underlying tables' indexes. You can't add an index to a view. Without indexes, the app is unbearably slow. The only solution we've come up with is to build a materialized view, but that's a pain in MySQL because they're not natively supported. Is there a better way to do this?

  • MySQL - multiple values and BETWEEN

    - by realshadow
    Hey, I need to select 10 products and display them. Each product has 3 different prices. The select to get the prices looks like this:

        SELECT * FROM products_loans
        WHERE CODE IN ('10X15/12', '10X15/Q10-10', '10X15/Q20-10')
        AND 550 BETWEEN PRICE_FROM AND PRICE_TO;

    where 550 is the base price. Now this select returns 3 rows, but I want to modify it so it will return 30 results. I don't like the idea of executing 10 queries at once. I know I can easily achieve this with "OR", but I would like to ask if there is some other, more elegant way to do it. The select "should" look like this:

        SELECT * FROM products_loans
        WHERE KOD IN ('10X15/12', '10X15/Q10-10', '10X15/Q20-10')
        AND (550, 325, 780) BETWEEN CENA_OD AND CENA_DO;

    Note that there is no "price" column or anything in the table which I could use to do a JOIN, and I can't modify the table.
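
    Two sketches of how the intended query could actually be written in MySQL, using the column names from the first query. The OR form spells out one BETWEEN test per price:

        SELECT * FROM products_loans
        WHERE CODE IN ('10X15/12', '10X15/Q10-10', '10X15/Q20-10')
          AND (550 BETWEEN PRICE_FROM AND PRICE_TO
            OR 325 BETWEEN PRICE_FROM AND PRICE_TO
            OR 780 BETWEEN PRICE_FROM AND PRICE_TO);

    A more compact alternative joins against an inline list of prices, returning each product row once per price band it matches:

        SELECT p.*, v.price
        FROM products_loans p
        JOIN (SELECT 550 AS price
              UNION ALL SELECT 325
              UNION ALL SELECT 780) v
          ON v.price BETWEEN p.PRICE_FROM AND p.PRICE_TO
        WHERE p.CODE IN ('10X15/12', '10X15/Q10-10', '10X15/Q20-10');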

  • Database Design Question

    - by deniz
    Hi, I am designing a database for a project. I have a table that has 10 columns, most of which are used whenever the table is accessed, and I need to add 3 more columns:

        View Count
        Thumbs Up (count)
        Thumbs Down (count)

    which will be used in 90% of the queries when the table is accessed. So, my question is whether it is better to break the table up and create a new table which will have these 3 columns + a foreign ID, or just make it 13 columns and use no joins? Since these columns will be used frequently, I guess adding 3 more columns is better, but if I need to create 10 more columns which will be used 90% of the time, should I add them as well, or create a new table and use joins? I am not sure when to break the table up if the columns are used very frequently. Do you have any suggestions? Thanks in advance.
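
    A minimal sketch of the two layouts under discussion; the table and column names here are made up for illustration:

        -- Option A: widen the existing table
        ALTER TABLE items
          ADD COLUMN view_count  INT NOT NULL DEFAULT 0,
          ADD COLUMN thumbs_up   INT NOT NULL DEFAULT 0,
          ADD COLUMN thumbs_down INT NOT NULL DEFAULT 0;

        -- Option B: split the counters into a side table keyed by foreign ID
        CREATE TABLE item_stats (
          item_id     INT PRIMARY KEY,
          view_count  INT NOT NULL DEFAULT 0,
          thumbs_up   INT NOT NULL DEFAULT 0,
          thumbs_down INT NOT NULL DEFAULT 0,
          FOREIGN KEY (item_id) REFERENCES items (id)
        );

    With Option B, the 90%-of-queries case pays for a join on every read, which is the usual argument for Option A when the extra columns are accessed this often.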

  • Storing search result for paging and sorting

    - by Mattias
    I've been implementing MS Search Server 2010 and so far it's really good. I'm doing the search queries via their web service, but due to the inconsistent results, I'm thinking about caching the result instead. The site is a small intranet (500 employees), so it shouldn't be any problem, but I'm curious what approach you would take if it was a bigger site. I've googled a bit, but haven't really come across anything specific. So, a few questions:

        What other approaches are there? And why are they better?
        How much does it cost to store a dataview of 400-500 rows? What sizes are feasible?
        Other points you should take into consideration.

    Any input is welcome :)

  • Best data store for billions of rows

    - by Jody Powlette
    I need to be able to store small bits of data (approximately 50-75 bytes) for billions of records (~3 billion/month for a year). The only requirements are fast inserts and fast lookups for all records with the same GUID, and the ability to access the data store from .NET. I'm a SQL Server guy and I think SQL Server can do this, but with all the talk about BigTable, CouchDB, and other NoSQL solutions, it's sounding more and more like an alternative to a traditional RDBMS may be best, due to optimizations for distributed queries and scaling. I tried Cassandra and the .NET libraries don't currently compile or are all subject to change (along with Cassandra itself). I've looked into many NoSQL data stores available, but can't find one that meets my needs as a robust production-ready platform. If you had to store 36 billion small, flat records so that they're accessible from .NET, what would you choose, and why?
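
    For scale, a sketch of how the SQL Server option could be laid out; the names and the payload type are illustrative, not from the question:

        -- ~50-75 bytes per row, clustered on the GUID so that all rows
        -- sharing a GUID are stored and fetched together
        CREATE TABLE records (
          record_guid UNIQUEIDENTIFIER NOT NULL,
          created_at  DATETIME         NOT NULL,
          payload     VARBINARY(64)    NOT NULL
        );

        CREATE CLUSTERED INDEX cx_records_guid
          ON records (record_guid, created_at);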

  • HQL multiple updates. Is there a better way?

    - by folone
    I have a Map that I want to persist. The domain object is something like this:

        public class Settings {
            private String key;
            private String value;

            public String getKey() { ... }
            public String getValue() { ... }
            public void setKey(String key) { ... }
            public void setValue(String value) { ... }
        }

    The standard approach is to generate a Setting for each pair and saveOrUpdate() it. But that generates way too many queries, because I need to save lots of settings at a time, and it really affects performance. Is there a way to do this using one update query?

  • Multiple unrelated subreports in JasperReports

    - by Laren Mortensen
    I am using iReport with JasperReports. I want to include multiple subreports that have unrelated SQL queries, and I would like to be able to put these all on one report. The problem I am facing is that when I leave the master report's SQL query empty, none of my subreports have any data. There isn't really anything that the master report sends to the subreports, since they are unrelated. Basically, how do you throw multiple unrelated reports together into one report?
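
    One commonly suggested workaround (an assumption here, not verified against this particular setup) is to give the master report a dummy query that returns exactly one row, so the master's bands render once and each subreport then runs its own unrelated query:

        -- Dummy master query: one row, no real data
        SELECT 1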

  • Best performance approach to history mechanism?

    - by Royi Namir
    We are going to create a history mechanism for changes in our DB (DART in the picture) via triggers. We have 600 tables. For each record that is changed, the trigger will insert the deleted version into XXX. Regarding the XXX:

        Option 1: clone each table in the DART DB, so each table will have a "sister table", e.g. Table1 will have Table1_History. Problems: we will have 1,200 tables, and a programmer can make mistakes by working on the wrong tables...
        Option 2: make a new DB (DART_2005 in the picture) and keep the history tables there.
        Option 3: use a linked server which stores the DB that will contain the history tables.

    Questions: 1) Which option gives the best performance? (I guess 3 does not, but is it 1 or 2, or the same?) 2) Does option 2 act like a "linked server" (in queries we will need to select from both DBs...)? 3) What is the best-practice approach?
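
    A minimal sketch of the trigger pattern being described, for one table under option 1; T-SQL, with illustrative column names:

        -- History table mirrors the source table, plus an audit column
        CREATE TABLE Table1_History (
          id          INT           NOT NULL,  -- same columns as Table1
          payload     NVARCHAR(100) NULL,
          archived_at DATETIME      NOT NULL DEFAULT GETDATE()
        );
        GO

        CREATE TRIGGER trg_Table1_History
        ON Table1
        AFTER UPDATE, DELETE
        AS
        BEGIN
          -- The "deleted" pseudo-table holds the pre-change rows
          -- for both UPDATE and DELETE statements
          INSERT INTO Table1_History (id, payload)
          SELECT id, payload FROM deleted;
        END;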

  • INET_ATON() and INET_NTOA() in PHP?

    - by blerh
    I want to store IP addresses in my database, but I also need to use them throughout my application. I read about using INET_ATON() and INET_NTOA() in my MySQL queries to get a 32-bit unsigned integer out of an IP address, which is exactly what I want, as it will make searching through the database faster than using char(15). The thing is, I can't find a function that does the same sort of thing in PHP. The only thing I came across is: http://php.net/manual/en/function.ip2long.php

    So I tested it:

        $ip = $_SERVER['REMOTE_ADDR'];
        echo ip2long($ip);

    And it outputs nothing. In the example they gave it seems to work, but then again I'm not exactly sure if ip2long() does the same thing as INET_ATON(). Does someone know a PHP function that will do this? Or even a completely new solution to storing an IP address in a database? Thanks.
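
    For reference, a sketch of the MySQL side of this scheme: INET_ATON() and INET_NTOA() convert between dotted-quad strings and 32-bit unsigned integers, so the column can be declared INT UNSIGNED (the table here is illustrative):

        CREATE TABLE visits (
          ip INT UNSIGNED NOT NULL
        );

        INSERT INTO visits (ip) VALUES (INET_ATON('192.168.0.1'));  -- stores 3232235521
        SELECT INET_NTOA(ip) FROM visits;                           -- returns '192.168.0.1'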

  • PHP - Too many mysql_query("SELECT .. ") .. ?

    - by Mike
    Hey, I'm making an e-shop, and to display the tree of categories and all the products with their multiple variations of prices, I make more than 150 mysql_query("SELECT ...") queries on one page (if I count the "while" loops). Is it too many, and if so, can it have any negative effect? (Of course it takes longer to load the data...) Also, can I somehow achieve the effect of this code without doing it this way?

        $result2 = mysql_query("SELECT * FROM ceny WHERE produkt_id='$id' ORDER BY gramaz");
        $result3 = mysql_query("SELECT * FROM ceny WHERE produkt_id='$id' ORDER BY gramaz");
        $result4 = mysql_query("SELECT * FROM ceny WHERE produkt_id='$id' ORDER BY gramaz");
        $result5 = mysql_query("SELECT * FROM ceny WHERE produkt_id='$id' ORDER BY gramaz");

        while ($row2 = mysql_fetch_array($result2)) { }
        while ($row3 = mysql_fetch_array($result3)) { }
        while ($row4 = mysql_fetch_array($result4)) { }
        while ($row5 = mysql_fetch_array($result5)) { }

    Thanks, Mike.

  • Ruby implementation of conversion between Latitude/Longitude and OS National Grid Reference point?

    - by Harry Wood
    For converting between latitude/longitude and the UK's Ordnance Survey National Grid eastings and northings, this seems to be the most popular explanation and reference implementation, in JavaScript: http://www.movable-type.co.uk/scripts/latlong-gridref.html The web is littered with other implementations in different languages, and making the conversion via PostGIS queries is another alternative. But did anyone implement this maths in Ruby? OSGridToLatLong is the direction I'm looking for just at this moment, but I would have thought a library for converting in both directions must surely be available in a gem somewhere. I'm just not searching for the right thing.

  • PHP e-commerce site talking to internal database for stock / ordering?

    - by CitrusTree
    Hi. I'm working on an e-commerce site (either bespoke with PHP, or using Drupal/Ubercart), and I'd like to investigate the site interacting with an internal (FileMaker) database we use to manage stock and orders. Currently we manually transfer orders from the web site to our own database, and the site does not check or record changes in stock. My plan to allow the two to interact is as follows:

        1. Make the internal database available externally on a machine with a fixed IP
        2. Allow external access from the site only
        3. Connect to the internal database using ODBC (or similar)
        4. Use simple queries to check stock / record stock changes / record order details

    Am I missing something here, as this sounds quite straightforward? Is there another solution I should be taking a look at? Thanks in advance for any help or comments.

  • MySQL: Get only count of result set.

    - by Varun
    I am using MVC with PHP/MySQL. Suppose I am using 10 functions with different queries for fetching details from the database, but at other times I may want to get only the count of the result that would be returned by the query. What is the standard way to handle such a situation? Should I:

        1. Write 10 more functions which duplicate almost the whole query and return only the count, or
        2. Always return the count along with the result set, or
        3. Pass a flag to indicate that the function should return the count only, and then based on the flag dynamically generate the (SELECT part of the) query, or
        4. Is there a better way?
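
    One pattern worth noting, as a sketch: any SELECT can be wrapped in a derived table and counted, so the same query string can be reused unchanged for the count-only case. The detail query below is hypothetical:

        -- Hypothetical detail query
        SELECT id, title FROM articles WHERE status = 'published';

        -- Count-only variant: wrap the identical query in a derived table
        SELECT COUNT(*) AS result_count
        FROM (SELECT id, title FROM articles WHERE status = 'published') AS q;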

  • Proper abstraction of the database tier in a 3-tier system?

    - by Earlz
    Hello, I am creating a 3-tier application. Basically it goes:

        Client - (through optional server to be a thin client) - Business Logic - Database Layer

    and I'm making it so that there is never any skipping around. As such, I want all of the SQL queries and such to live in the database layer. Well, now I'm a bit confused. I made a few static classes to start off the database tier, but what should I do for the database connections? Should I just create a new database connection any time I enter the database layer, or would that be wasteful? Does Connection.Open() take time when you have a connection pool? To me, it just feels wrong for the business tier to have to pass an IDbConnection object to the database tier; it seems like the database tier should handle all of that DB-specific code. What do you think? How can I do it the proper way while staying practical?

  • Which is better, creating a materialized view or a new table?

    - by Carson
    I have some demanding MySQL queries that need to grab the same up-to-date datasets from 5-7 MySQL tables. I am thinking of creating a table or materialized view to gather all the demanded columns from the other tables, so as to increase performance. If I create that table, I may need to do extra insert / update / delete operations each time the other tables are updated. If I create a materialized view, I am worried whether the performance can really be improved, because data in the other tables changes very frequently; most likely, the view would need to be recreated every time before selecting from it. Any ideas? E.g. how to cache? What other extra measures can I take?
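
    MySQL has no native materialized views, so the "new table" option is usually implemented as a summary table that is rebuilt (or incrementally maintained by triggers) when the source data changes; a minimal sketch with illustrative names:

        -- Build the summary table once from the demanding query
        CREATE TABLE report_summary AS
          SELECT o.id, c.name, SUM(i.amount) AS total
          FROM orders o
          JOIN customers c   ON c.id = o.customer_id
          JOIN order_items i ON i.order_id = o.id
          GROUP BY o.id, c.name;

        -- Refresh on demand: cheap to read, but stale between refreshes
        TRUNCATE report_summary;
        INSERT INTO report_summary
          SELECT o.id, c.name, SUM(i.amount)
          FROM orders o
          JOIN customers c   ON c.id = o.customer_id
          JOIN order_items i ON i.order_id = o.id
          GROUP BY o.id, c.name;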

  • ASP.NET Thread Safety in aspx.cs code behind file

    - by Tim Michalski
    I am thinking of adding a DataContext as a member variable to my aspx.cs code-behind class for executing LINQ to SQL queries. Is this thread safe? I am not sure if a new instance of this code-behind class is created for each HTTP request, or if the instance is shared amongst all request threads. My fear is that I will get 10 simultaneous concurrent HTTP requests that will be using the same database session.

        public partial class MyPage : System.Web.UI.Page
        {
            private DataContext myDB = new DataContext();

            protected void MyAction_Click(object sender, EventArgs e)
            {
                myDB.DoWork();
            }
        }

  • Getting the final value for this MySQL query...

    - by Jack W-H
    I've got my database set up with three tables - code, tags, and code_tags - for tagging posts. These are the SQL queries processed when a post is submitted; each tag is sliced out by PHP and individually inserted using these queries:

        INSERT IGNORE INTO tags (tag) VALUES ('$tags[1]');

        SELECT tags.id FROM tags WHERE tag = '$tags[1]' ORDER BY id DESC LIMIT 1;

        INSERT INTO code_tags (code_id, tag_id) VALUES ($codeid, WHAT_GOES_HERE?)

    The WHAT_GOES_HERE? value at the end is what I need to know. It needs to be the ID of the tag that the second query fetched. How can I put that ID into the third query? I hope I explained that correctly. I'll rephrase if necessary.
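
    A sketch of one common MySQL answer: LAST_INSERT_ID() returns the AUTO_INCREMENT id generated by the previous INSERT on the same connection, so when the INSERT IGNORE actually inserts a new tag, the intermediate SELECT can be skipped (the literal 42 below stands in for $codeid; note that if the INSERT was ignored as a duplicate, no new id is generated and the SELECT fallback is still needed):

        INSERT IGNORE INTO tags (tag) VALUES ('example');

        INSERT INTO code_tags (code_id, tag_id)
        VALUES (42, LAST_INSERT_ID());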

  • How to estimate the memory needed by XPathDocument for a specific XML file

    - by bill seacham
    Is there any way to estimate the memory requirement for creating an XPathDocument instance based on the size of the XML file?

        XPathDocument xdoc = new XPathDocument(xmlfile);

    Is there any way to programmatically stop the process of creating the XPathDocument if memory drops to a very low level? Since it loads the entire XML into memory, it would be nice to know ahead of time if the XML is too big. What I have found is that when I create a new XPathDocument from a big XML file, an OutOfMemoryException is never fired, but the process slows to a crawl, only 5 MB of memory remains available, and Task Manager reports it is not responding. This happened with a 266 MB XML file when there was 584 MB of RAM. I was able to load a 150 MB file with no problems in 18. After loading the XML, I want to do XPath queries using an XPathNavigator and an XPathNodeIterator. I am using .NET 2.0, XP SP3.
