Search Results

Search found 23005 results on 921 pages for 'query cache'.

Page 120 of 921

  • Database EAV model, record listing as per search

    - by Shyam Sunder Verma
    I am building a dynamic application. I have three tables (EAV model style): 1: Items (ItemId, ItemName), 2: Fields (FieldId, FieldName), 3: Field Values (ItemID, FieldId, Value). Can you tell me how to write a SINGLE query to get the first 20 records from ALL items where the value for FieldId=4 is TRUE? Expected result: Columns = ItemID | Name | Field1 | Field2 | Field3, Each row = ItemId | ItemName | Value1 | Value2 | Value3. Important concerns: 1: The number of fields per item is not known. 2: It needs to be ONE query. 3: The query will run on 100K records, so performance is a concern. 4: I am using MySQL 5.0, so I need a solution for MySQL. Should I denormalize the tables if the above query is not possible at all? Any advice?
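    One way to express this in a single MySQL 5.0 statement is conditional aggregation (a pivot). This is only a sketch using the names from the question, assuming the third table is called FieldValues and that the field columns to display (Field1..Field3) are known when the query is built, since a static SQL statement cannot produce a truly dynamic column list:

        SELECT i.ItemId,
               i.ItemName,
               MAX(CASE WHEN fv.FieldId = 1 THEN fv.Value END) AS Field1,
               MAX(CASE WHEN fv.FieldId = 2 THEN fv.Value END) AS Field2,
               MAX(CASE WHEN fv.FieldId = 3 THEN fv.Value END) AS Field3
        FROM Items i
        JOIN FieldValues fv ON fv.ItemID = i.ItemId
        WHERE EXISTS (SELECT 1
                      FROM FieldValues f
                      WHERE f.ItemID = i.ItemId
                        AND f.FieldId = 4
                        AND f.Value = 'TRUE')   -- the "FieldId=4 is TRUE" filter from the question
        GROUP BY i.ItemId, i.ItemName
        ORDER BY i.ItemId
        LIMIT 20;

    At 100K rows, composite indexes on FieldValues (ItemID, FieldId) and (FieldId, Value, ItemID) should keep both the pivot and the EXISTS filter reasonable; if the field list must be fully dynamic, the query string has to be generated by application code.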

    Read the article

  • OData Query Option top Forces Data To Be Sorted By Primary Key

    This post shows a simple WCF Data Service (formerly known as ADO.NET Data Services) project that retrieves data using the Reflection Provider. It goes on to show that using $top...

    Read the article

  • Zend_Cache_Backend_Sqlite vs Zend_Cache_Backend_File

    - by Alekc
    Hi, currently I'm using Zend_Cache_Backend_File for caching my project (especially responses from external web services). I was wondering if I could find some benefit in migrating the structure to Zend_Cache_Backend_Sqlite. Possible advantages: the file system stays well-ordered (only one file in the cache folder), and removing expired entries should be quicker (my assumption, since Zend wouldn't need to scan each cache entry's internal metadata for its expiry date). Possible disadvantage: finding a record to read may be slower (with files, Zend just checks whether a file exists based on the filename, which should be a bit quicker). I've searched a bit on the internet but it seems there is not a lot of discussion about the matter. What do you think about it? Thanks in advance.

    Read the article

  • MySQL: Query multiple identical dynamic tables

    - by JYelton
    I have a database with 500+ tables, each with identical structure, that contain historical data from sensors. I am trying to come up with a query that will locate, for example, all instances where sensor n exceeds x. The problem is that the tables are dynamic, so the query must be able to dynamically obtain the list of tables. I can query information_schema.tables to get a list of the tables, like so: SELECT table_name FROM information_schema.tables WHERE table_schema = 'database_name'; I can use this to create a loop in the program and then query the database repeatedly, but it seems like there should be a way to have MySQL do the multi-table search itself. I have not been able to make a stored procedure that works, and the examples I can find are generally for searching for a string in any column; I want to find data in a specific column that exists in all tables. I admit I do not understand how to properly use stored procedures, nor whether they are the appropriate solution to this problem. An example query inside the loop would be: SELECT device_name, sensor_value FROM device_table WHERE sensor_value > 10; Trying the following does not work: SELECT device_name, sensor_value FROM ( SELECT table_name FROM information_schema.tables WHERE table_schema = 'database_name' ) WHERE sensor_value > 10; It results in the error "Every derived table must have its own alias." The goal is to have a list of all devices that have had a given sensor value occur anywhere in their log (table). Ultimately, should I just loop in my program once I've obtained the list of tables, or is there a query structure that would be more efficient?
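    One way to keep the looping inside MySQL is to generate a UNION ALL over every table from information_schema and run it as a prepared statement. This is only a sketch using the names from the question (database_name, device_name, sensor_value) and assumes every table in the schema has those two columns:

        -- The generated statement is long: raise the GROUP_CONCAT limit first (default is 1024)
        SET SESSION group_concat_max_len = 1000000;

        -- Build "SELECT ... FROM `t1` ... UNION ALL SELECT ... FROM `t2` ..." into @sql
        SELECT GROUP_CONCAT(
                 CONCAT('SELECT ''', table_name, ''' AS source_table, device_name, sensor_value ',
                        'FROM `', table_name, '` WHERE sensor_value > 10')
                 SEPARATOR ' UNION ALL ')
          INTO @sql
          FROM information_schema.tables
         WHERE table_schema = 'database_name';

        PREPARE stmt FROM @sql;
        EXECUTE stmt;              -- one result set covering all tables
        DEALLOCATE PREPARE stmt;

    This still scans every table, just as the application-side loop would, so an index on sensor_value in each table matters more than where the loop lives; longer term, consolidating the sensor logs into one table keyed by device avoids the problem entirely.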

    Read the article

  • XmlDocument caching memory usage

    - by mdsharpe
    We are seeing very high memory usage in .NET web applications which use XmlDocument. A small (~5MB) XML document is loaded into an XmlDocument object and stored in HttpContext.Cache for easy querying and XSLT transformation on each page load. The XML is modified on disk periodically, so the cache has a dependency on the file. Such an application appears to be using hundreds of megabytes of RAM. I have experimented with requesting garbage collection at the start of each request, and this keeps the RAM usage far lower, but I cannot imagine this is good practice. Does anyone have any suggestions as to how we can achieve the same goal with lower RAM usage?

    Read the article

  • DNS hierarchy not working, please help

    - by nikhilelite
    Sub-internal network: DNS1, WWW1, Gateway1. Internal network: DNS0, WWW0, Gateway0.
    DNS1: 192.168.250.3/24
    WWW1: 192.168.250.4/24
    Gateway1: 192.168.250.1/24 (internal) :: 192.168.0.150 to 192.168.0.175 (external)
    DNS0: 192.168.0.197/24
    WWW0: 192.168.0.197/24
    Gateway0: 192.168.0.1 (internal) :: 69.94.x.x (external, dynamic, ISP control)
    Expected behavior: when using dig from internal (192.168.250.0/24) hosts to query a domain for which the 192.168.0.197 nameserver is authoritative, it should return the IP address.
    What's happening: after dig, the answer section is empty and the query tries to reach an a.root server instead of 192.168.0.197, even though I have defined 192.168.0.197 as the DNS server in Gateway1's resolv.conf. Why? I need this working ASAP, can anyone here help?

    Read the article

  • Undocumented Query Plans: The ANY Aggregate

    - by Paul White
    As usual, here’s a sample table: With some sample data: And an index that will be useful shortly: There’s a complete script to create the table and add the data at the end of this post.  There’s nothing special about the table or the data (except that I wanted to have some fun with values and data types). The Task We are asked to return distinct values of col1 and col2 , together with any one value from the thing column (it doesn’t matter which) per group.  One possible result set is shown...(read more)
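    For the task in the excerpt (return any one value of the thing column per distinct col1, col2 pair), the documented way to ask is an ordinary MIN or MAX; the undocumented ANY aggregate is what the optimizer substitutes internally when it can prove that any value will do. A rough sketch, with the table name assumed since the sample script is only in the full post:

        -- "Any one value per group" expressed with a documented aggregate
        SELECT col1,
               col2,
               MIN(thing) AS any_thing
        FROM dbo.Example
        GROUP BY col1, col2;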

    Read the article

  • Fetching database query through function

    - by Shubham Maurya
    I am sick of connecting to the database in each script; I need a more OOP approach to fetching database results, e.g. the way WordPress uses the wpdb class to fetch results. This is what WordPress does to get data: <?php $posts = $wpdb->get_results("SELECT ID, post_title FROM $wpdb->posts WHERE post_status = 'publish' AND post_type='post' ORDER BY comment_count DESC LIMIT 0,4") ?> How can I create the same feature using a class or function and use it in my scripts? Thank you

    Read the article

  • A scheme for expiring downloaded content?

    - by Chad Johnson
    I am going to offer a web API service that allows users to download and "rent" content for a monthly subscription fee. The API will either be open to everyone or possibly just select parties (not sure yet). Each developer must agree to a license, and they receive a personal developer key. Each software application will have its own key as well. End-users will then download the software, which will interact with my service's API. Each user will have a key for each application as well (probably using OAuth). Content will be cached on first download and accessible offline via just the third-party application that cached the content. If a user cancels their subscription, I plan on doing the following: Deactivate the user's OAuth key for all applications. Do not allow the user's account to download new content via the API (and subsequently any software that uses the API). Now, the big question is: how do I make content expire if they cancel their subscription? If they cancel, they should not have access to content anymore. Here are ideas I've thought of (some of these are half-solutions, not yet fully fleshed out): Require that applications encrypt downloaded content using the user's OAuth key, making it available to only the application. This will prevent most users from going to the cache directory and just copying and keeping files. Update the user's key once a month, forcing content to re-cache on a monthly basis. Users could then access content for a month after they cancel their subscription. Require applications to "phone home" [to the service] periodically and check whether the user's subscription has terminated. If so, require in the API developer license that applications expire the cache. If it is found that applications do not comply, their keys (and possibly keys for all developers) are permanently deactivated as a consequence. One major worry is that some applications may blatantly ignore the constraints of the license. Is it generally acceptable to rely on applications abiding by the licensing constraints? Bad idea? Any other ideas? Maybe a way to make content auto-expire after x days? Something else? I'm open to out-of-the-box ideas.

    Read the article

  • Separate tables or single table with queries?

    - by Joe
    I'm making an employee information database. I need to handle separated employees. Should I (a) set up a query with a macro to move separated employees to a separate table, or (b) just add a flag to the single table denoting separation? I understand that it's best practice to take choice (b), and the one reason I can think of for this is that any structural changes I make to the table later would have to be done in both places. But it also seems like setting up a flag forces me to filter on that flag in basically every useful query I'm going to write in the future.
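    If option (b) wins, the usual way to avoid repeating that filter is to centralise it in one saved query (a view) and point every other query at it. A minimal sketch with assumed table and column names; in Access the same thing is a stored query rather than CREATE VIEW:

        -- A nullable separation date doubles as the flag
        ALTER TABLE Employees ADD COLUMN SeparationDate DATE NULL;

        -- Everything that only cares about current staff reads from the view
        CREATE VIEW ActiveEmployees AS
        SELECT *
        FROM Employees
        WHERE SeparationDate IS NULL;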

    Read the article

  • Querying an external Oracle DB in a Rails application

    - by railscoder
    I have a website which uses a MySQL database for its whole operation. But for a new requirement I need to query an external Oracle database (used by another component), compile a list of items, and display them on a page of the website. How is it possible to connect to an external database just for rendering a single page? And is it possible to cache the queried result for, say, 1 month before invalidating the cache and getting the updated list of items? I don't want to query the external Oracle DB on each request.

    Read the article

  • How do I do MongoDB console-style queries in PHP?

    - by Zoe Boles
    I'm trying to get a MongoDB query from the JavaScript console into my PHP app. What I'm trying to avoid is having to translate the query into the PHP "native driver"'s format... I don't want to hand-build arrays and hand-chain functions any more than I want to manually build an array of MySQL's internal query structure just to get data. I already have a string producing the exact content I want in the Mongo console: db.intake.find({"processed": {"$exists": "false"}}).sort({"insert_date": "1"}).limit(10); The question is, is there a way for me to hand this string, as is, to MongoDB and have it return a cursor with the dataset I request? Right now I'm at the "write your own parser to kinda turn a subset of valid Mongo queries into the format the PHP native driver wants, because it's not valid JSON" stage, which isn't very fun. I don't want an ORM or a massive wrapper library; I just want to give a function my query string as it exists in the console and get an Iterator back that I can work with. I know there are a couple of PHP-based Mongo manager applications that apparently take console-style queries and handle them, but from initial browsing through their code, I'm not sure how they handle the translation. I absolutely love working with Mongo in the console, but I'm rapidly starting to loathe the thought of converting every query into the format the native driver wants...

    Read the article

  • PHP APC - Why is loading cached array op codes slow?

    - by Aaron Kreider
    I'm using APC to reduce the loading time for my PHP files. My files load very fast, except for one file where I define more than 100 arrays. This 270 KB file takes 200 ms to load. The rest of the files are full of objects, methods, and functions. I'm wondering: does opcode caching not work as well for arrays? My APC cache should be big enough to handle all of my classes; currently 40% of my cache is free and my hit rate is 99%. Settings: apc.shm_size = 32M, apc.max_file_size = 1M, apc.shm_segments = 1, APC 3.1.6. I'm using PHP 5.2, Apache 2, and Windows Vista.

    Read the article

  • Odd 'UNION' behavior in an Oracle SQL query

    - by RenderIn
    Here's my query: SELECT my_view.* FROM my_view WHERE my_view.trial in (select 2 as trial_id from dual union select 3 from dual union select 4 from dual) and my_view.location like ('123-%') When I execute this query it returns results which do not conform to the my_view.location like ('123-%') condition. It's as if that condition is being ignored completely. I can even change it to my_view.location IS NULL and it returns the same results, despite that field being non-nullable. I know this query seems ridiculous with the selects from dual, but I've structured it this way to replicate a problem I have when I use a 'WITH' clause (the results of that query are where the selects from dual inline view are). I can modify the query like so and it returns the expected results: SELECT my_view.* FROM my_view WHERE my_view.trial in (2, 3, 4) and my_view.location like ('123-%') Unfortunately I do not know the trial values up front (they are queried for in a 'WITH' clause), so I cannot structure my query this way. What am I doing wrong? I will say that the my_view view is composed of 3 other views whose results are combined with UNION ALL, each of which retrieves some data over a DB link. Not that I believe that should matter, but in case it does.

    Read the article

  • Can I use more than one column in a MySQL GROUP BY?

    - by Am1rr3zA
    Hi, I want to write this SQL query: CREATE VIEW `uniaverage` AS select `averagegrade`.`mjr`, `averagegrade`.`lev`, avg(`averagegrade`.`average`) AS `uniAVG` from `averagegrade` group by `averagegrade`.`lev`, `averagegrade`.`mjr`; But MySQL Query Browser gives this error: "Operand should contain 1 column(s)". I read somewhere that you can use GROUP BY on more than one column. How can I solve this error, or how can I change the query to get the same result?
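    Grouping by several columns is standard SQL, so the GROUP BY list itself is not the problem; "Operand should contain 1 column(s)" usually means some operand (often a subquery or a parenthesised column list) returns more than one column where a single column is expected. As a sketch, the statement from the question written out plainly, which MySQL should accept as-is:

        CREATE VIEW uniaverage AS
        SELECT ag.mjr,
               ag.lev,
               AVG(ag.average) AS uniAVG   -- one average per (lev, mjr) pair
        FROM averagegrade AS ag
        GROUP BY ag.lev, ag.mjr;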

    Read the article

  • Create a nested list

    - by sico87
    How would I create a nested list? I currently have this:

        public function getNav($cat,$subcat){
            //gets all sub categories for a specific category
            if(!$this->checkValue($cat)) return false; //checks data
            $query = false;
            if($cat=='NULL'){
                $sql = "SELECT itemID, title, parent, url, description, image
                        FROM p_cat WHERE deleted = 0 AND parent is NULL ORDER BY position;";
                $query = $this->db->query($sql) or die($this->db->error);
            }else{
                //die($cat);
                $sql = "SET @parent = (SELECT c.itemID FROM p_cat c
                                       WHERE url = '".$this->sql($cat)."' AND deleted = 0);
                        SELECT c1.itemID, c1.title, c1.parent, c1.url, c1.description, c1.image,
                               (SELECT c2.url FROM p_cat c2 WHERE c2.itemID = c1.parent LIMIT 1) as parentUrl
                        FROM p_cat c1
                        WHERE c1.deleted = 0 AND c1.parent = @parent
                        ORDER BY c1.position;";
                $query = $this->db->multi_query($sql) or die($this->db->error);
                $this->db->store_result();
                $this->db->next_result();
                $query = $this->db->store_result();
            }
            return $query;
        }

        public function getNav($cat=false, $subcat=false){
            //gets a list of all categories from this level; if $cat is false it returns the top-level nav
            if($cat==false || strtolower($cat)=='all-products') $cat='NULL';
            $ds = $this->data->getNav($cat, $subcat);
            $nav = $ds ? $ds : false;
            $html = ''; //create html
            if($nav){
                $html = '<ul>';
                //var_dump($nav->fetch_assoc());
                while($row = $nav->fetch_assoc()){
                    $url = isset($row['parentUrl']) ? $row['parentUrl'].'/'.$row['url'] : $row['url'];
                    $current = $subcat==$row['url'] ? ' class="current"' : '';
                    $html .= '<li'.$current.'><a href="/'.$url.'/">'.$row['title'].'</a></li>';
                }
                $html .='</ul>';
            }
            return $html;
        }

    The SQL returns parents and children; for each parent I need the children nested in a sub-list.

    Read the article

  • Two radically different queries against 4 million records execute in the same amount of time; one uses brute force

    - by IanC
    I'm using SQL Server 2008. I have a table with over 3 million records, which is related to another table with a million records. I have spent a few days experimenting with different ways of querying these tables. I have it down to two radically different queries, both of which take 6 s to execute on my laptop. The first query uses a brute-force method of evaluating possibly likely matches, and removes incorrect matches via aggregate summation calculations. The second gets all possibly likely matches, then removes incorrect matches via an EXCEPT query that uses two dedicated indexes to find the low and high mismatches. Logically, one would expect the brute-force approach to be slow and the indexed one to be fast. Not so. And I have experimented heavily with indexes until I got the best speed. Further, the brute-force query doesn't require as many indexes, which means that technically it would yield better overall system performance. Below are the two execution plans (brute-force query and index-based exception query). If you can't see them, please let me know and I'll re-post them in landscape orientation / mail them to you. My question is: based on the execution plans, which one looks more efficient? I realize that things may change as my data grows.

    Read the article

  • The query parameter '$format' begins with a system-reserved '$' character but is not recognized

    Tuesday morning I was ranting on Twitter, well really whining, about how WCF Data Services does not support JSON format out of the box. Fortunately I was shown the answer in replies to my rant. So I want to share this with you. First, I made the mistake...(read more)

    Read the article

  • LINQ/LAMBDA filter query by date [on hold]

    - by inquisitive_one
    I'm trying to use LINQ to SQL to retrieve earnings data for a particular date range. Currently the table is set up as follows:

        Comp  Eps  Year  Quarter
        IBM   .5   2012  2
        IBM   .65  2012  3
        IBM   .60  2012  4
        IBM   .5   2011  2
        IBM   .7   2013  1
        IBM   .8   2013  2

    Except for Eps, all fields have a data type of string or char; Eps has a data type of double. Here's my code:

        var myData = myTable
            .Where(t => t.Comp.Equals("IBM")
                && Convert.Int32(string.Format("{0}{1}", t.Year, t.Quarter)) <= 20131);

    I get the following error when I try that code: Method 'System.String Format(System.String, System.Object, System.Object)' has no supported translation to SQL. How can I select all Eps values with a year and quarter less than "20132" using a lambda expression?

    Read the article

  • Passing a parameter in a Report's Open Event to a parameter query (Access 2007)

    - by JPM
    Hi there, I would like to know if there is a way to set the parameters in an Access 2007 query using VBA. I am new to using VBA in Access, and I have been tasked with adding a little piece of functionality to an existing app. The issue I am having is that the same report can be called from two different places in the application: the first is a command button on a data entry form, the other a switchboard button. The report itself is based on a parameter query that requires the user to enter a Supplier ID. The user would like not to have to enter the Supplier ID on the data entry form (since the form displays the Supplier ID already), but from the switchboard they would like to be prompted to enter a Supplier ID. Where I am stuck is how to call the report's query (in the report's open event) and pass the SupplierID from the form as the parameter. I have been trying for a while, and I can't get anything to work correctly. Here is my code so far, but I am obviously stumped.

        Private Sub Report_Open(Cancel As Integer)
            Dim intSupplierCode As Integer
            'Check to see if the data entry form is open
            If CurrentProject.AllForms("frmExample").IsLoaded = True Then
                'Retrieve the SupplierID from the data entry form
                intSupplierCode = Forms![frmExample]![SupplierID]
                'Call the parameter query passing the SupplierID????
                DoCmd.OpenQuery "qryParams"
            Else
                'Execute the parameter query as normal
                DoCmd.OpenQuery "qryParams"?????
            End If
        End Sub

    I've tried Me.SupplierID = intSupplierCode, and although it compiles, it bombs when I run it. And here is my SQL code for the parameter query:

        PARAMETERS [Enter Supplier] Long;
        SELECT Suppliers.SupplierID, Suppliers.CompanyName, Suppliers.ContactName, Suppliers.ContactTitle
        FROM Suppliers
        WHERE (((Suppliers.SupplierID)=[Enter Supplier]));

    I know there are ways around this problem (and probably an easy way as well) but like I said, my lack of experience using Access and VBA makes things difficult. If any of you could help, that would be great!

    Read the article

  • Using Queries with Coherence Read-Through Caches

    - by jpurdy
    Applications that rely on partial caches of databases, and use read-through to maintain those caches, have some trade-offs if queries are required. Coherence does not support push-down queries, so queries will apply only to data that currently exists in the cache. This is technically consistent with "read committed" semantics, but the potential absence of data may make the results so unintuitive as to be useless for most use cases (depending on how much of the database is held in cache). Alternatively, the application itself may manually "push down" queries to the database, either retrieving results equivalent to querying the cache directly, or querying the database for a key set and reading the values from the cache (relying on read-through to handle any missing values). Obviously, if the result set is too large, reading through the cache may cause significant thrashing. It's also worth pointing out that if the cache is asynchronously synchronized with the database (perhaps via a database change listener), an application may commit a transaction to the database, then generate a key set from the database via a query, then read cache entries through the cache, possibly resulting in a race condition where the application sees older data than it had previously committed. In theory this is not problematic, but in practice it is very unintuitive. For this reason it often makes sense to invalidate the cache when updating the database, forcing the next read-through to update the cache.

    Read the article

  • SharePoint 2010 custom search from Layouts page

    - by Faiz
    Hi, I am querying SharePoint 2010 search using FullTextSqlQuery. The query returns results as long as I run it from the web part. However, for some reason, I need to run the same query from a custom aspx page deployed under layouts. There the query throws a WCF exception. Has anyone tried running custom queries from pages deployed to the layouts folder under the 14 hive? Thanks, Faiz

    Read the article
