Search Results

Search found 7116 results on 285 pages for 'nested queries'.

Page 194/285 | < Previous Page | 190 191 192 193 194 195 196 197 198 199 200 201  | Next Page >

  • Safe HttpContext.Current.Cache Usage

    - by Burak SARICA
    Hello there, I use Cache in a web service method like this: var pblDataList = (List<blabla>)HttpContext.Current.Cache.Get("pblDataList"); if (pblDataList == null) { var PBLData = dc.ExecuteQuery<blabla>( @"SELECT blabla"); pblDataList = PBLData.ToList(); HttpContext.Current.Cache.Add("pblDataList", pblDataList, null, DateTime.Now.Add(new TimeSpan(0, 0, 15)), Cache.NoSlidingExpiration, CacheItemPriority.Normal, null); } I wonder: is it thread safe? The method is called by multiple requesters, and more than one requester may hit the second line at the same time while the cache is empty, so all of them will run the query and add the result to the cache. The query takes 5-8 seconds. Would a lock statement surrounding this code prevent that? (I know multiple queries will not cause an error, but I want to be sure only one query runs.)
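
    The concern here is a general one: without coordination, every request that finds the cache empty will run the slow query. A minimal Python sketch of the double-checked locking idea the question is circling (the dictionary and the load_from_db callable are stand-ins for illustration, not the ASP.NET Cache API):

    ```python
    import threading

    _cache = {}
    _cache_lock = threading.Lock()

    def get_pbl_data(load_from_db):
        """Return the cached list, running the expensive query at most once."""
        data = _cache.get("pblDataList")
        if data is None:
            with _cache_lock:
                # Re-check inside the lock: another request may have populated
                # the cache while this one was waiting (double-checked locking).
                data = _cache.get("pblDataList")
                if data is None:
                    data = load_from_db()          # the 5-8 second query
                    _cache["pblDataList"] = data   # expiry handling omitted here
        return data
    ```

    The C# version has the same shape: check the cache, take a lock, check the cache again inside it, and only then run the query and Add the result.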

    Read the article

  • PHP PDO close()?

    - by PHPLOVER
    Can someone tell me: when you, for example, update, insert, or delete, should you then close the statement like $stmt->close(); ? I checked the PHP manual and don't understand what close() actually does. EXAMPLE: $stmt = $dbh->prepare("SELECT `user_email` FROM `users` WHERE `user_email` = ? LIMIT 1"); $stmt->execute(array($email)); $stmt->close(); The next part of my question: if, as an example, I had multiple update queries in a transaction, should I close each statement individually after every execute()? Because it's a transaction, I'm not sure whether I need to use $stmt->close(); after each execute(); or just one $stmt->close(); after all of them. Thanks once again, phplover

    Read the article

  • Find all clauses related to an atom

    - by Luc Touraille
    This is probably a very silly question (I just started learning Prolog a few hours ago), but is it possible to find all the clauses related to an atom? For example, assuming the following knowledge base: cat(tom). animal(X) :- cat(X). Is there a way to obtain all the information available about tom (or at least all the facts that are explicitly stated in the base)? I understand that a query like this is not possible: ?- Pred(tom). So I thought I could write a rule that would deduce the correct information: meta(Object, Predicate) :- Goal =.. [Predicate, Object], call(Goal). so that I could write queries such as ?- meta(tom, Predicate). but this does not work because the arguments to call are not sufficiently instantiated. So basically my question is: is this at all possible, or is Prolog not designed to provide this kind of information? And if it is not possible, why?

    Read the article

  • Connection to DB2 in Python

    - by Mestika
    Hi, I'm trying to create a database connection in a Python script to my DB2 database. Once the connection is made I have to run several different SQL statements. I googled the problem and have read the ibm_db API (http://code.google.com/p/ibm-db/wiki/APIs) but just can't seem to get it right. Here is what I have so far: import sys import getopt import timeit import multiprocessing import random import os import re import ibm_db import time from string import maketrans query_str = None conn = ibm_db.pconnect("dsn=write","usrname","secret") query_stmt = ibm_db.prepare(conn, query_str) ibm_db.execute(query_stmt, "SELECT COUNT(*) FROM accounts") result = ibm_db.fetch_assoc() print result status = ibm_db.close(conn) but I get an error. I really have tried everything (or not everything, but pretty damn close) and I can't get it to work. I just need to make an automatic test Python script that can test different queries with different indexes and so on, and for that I need to create and remove indexes along the way. I hope someone has a solution or maybe knows about some example code out there I can download and study. Thanks Mestika
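
    For reference, a minimal sketch of the ibm_db prepare/execute flow (the DSN and credentials are the question's own placeholders, and the index statements are made-up examples). The likely problem in the snippet above is that query_str is still None when prepare() is called, and the SQL text is then passed to execute(), which expects a tuple of bind values rather than a statement:

    ```python
    import ibm_db

    # DSN and credentials copied from the question; replace with real values.
    conn = ibm_db.pconnect("dsn=write", "usrname", "secret")

    # Prepare the SQL itself, then execute and fetch from the statement handle.
    stmt = ibm_db.prepare(conn, "SELECT COUNT(*) FROM accounts")
    ibm_db.execute(stmt)
    row = ibm_db.fetch_assoc(stmt)
    print(row)

    # One-off DDL (e.g. creating/dropping test indexes) can go through
    # exec_immediate; the index and column names here are hypothetical.
    ibm_db.exec_immediate(conn, "CREATE INDEX test_idx ON accounts (id)")
    ibm_db.exec_immediate(conn, "DROP INDEX test_idx")

    ibm_db.close(conn)
    ```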

    Read the article

  • Dealing w/ Sqlite Join results in a cursor

    - by Bill
    I have a one-to-many relationship in my local Sqlite db. Pretty basic stuff. When I do my left outer join, the resulting cursor has multiple rows that look like this: A1.id | A1.column1 | A1.column2 | B1.a_id_fk | B1.column1 | B1.column2 A1.id | A1.column1 | A1.column2 | B2.a_id_fk | B2.column1 | B2.column2 and so on... Is there a standard practice or method of dealing with results like this? Clearly there is only A1, but it has many B-n relationships. I am coming close to using multiple queries instead of the "relational db way". Hopefully I am just not aware of the better way to do things. I intend to expose this query via a content provider, and I would hate for all of the consumers to have to write the same aggregation logic.

    Read the article

  • Does the optimizer filter subqueries with outer where clauses

    - by Mongus Pong
    Take the following query: select * from ( select a, b from c UNION select a, b from d ) where a = 'mung' Will the optimizer generally work out that I am filtering a on the value 'mung' and consequently push that filter into each of the queries in the subquery? Or will it run each query within the subquery union and return the results to the outer query for filtering (as the query would perhaps suggest)? In that case the following query would perform better: select * from ( select a, b from c where a = 'mung' UNION select a, b from d where a = 'mung' ) Obviously query 1 is best for maintenance, but does it sacrifice much performance for this? Which is best?
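
    Whether the predicate gets pushed into each branch of the UNION is engine- and version-specific, so the dependable way to answer it is to look at the execution plan for both forms rather than guess. A small sketch of how to ask, using sqlite3 only because it runs anywhere; the same check applies with EXPLAIN PLAN / SHOWPLAN on other engines:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE c (a TEXT, b TEXT);
        CREATE TABLE d (a TEXT, b TEXT);
        CREATE INDEX c_a ON c(a);
        CREATE INDEX d_a ON d(a);
    """)

    # If the plan shows the index on `a` used inside each branch, the outer
    # filter was pushed down; full scans of c and d followed by a filter mean
    # the UNION ran first and the second form of the query would do less work.
    plan = conn.execute("""
        EXPLAIN QUERY PLAN
        SELECT * FROM (
            SELECT a, b FROM c
            UNION
            SELECT a, b FROM d
        ) WHERE a = 'mung'
    """).fetchall()

    for row in plan:
        print(row)
    ```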

    Read the article

  • Catching constraint violations in JPA 2.0.

    - by Dennetik
    Consider the following entity class, used with, for example, EclipseLink 2.0.2 - where the link attribute is not the primary key, but unique nonetheless. @Entity public class Profile { @Id private Long id; @Column(unique = true) private String link; // Some more attributes and getter and setter methods } When I insert records with a duplicate value for the link attribute, EclipseLink does not throw an EntityExistsException, but throws a DatabaseException, with a message explaining that the unique constraint was violated. This doesn't seem very useful, as there would not be a simple, database-independent way to catch this exception. What would be the advised way to deal with this? A few things that I have considered are: checking the error code on the DatabaseException - I fear that this error code, though, is the native error code for the database; checking the existence of a Profile with the specific value for link beforehand - this obviously would result in an enormous number of superfluous queries.

    Read the article

  • How can I limit the number of connections to an MSSQL server from my Tomcat-deployed Java application?

    - by CJ
    Hi, I have an application that is deployed on Tomcat on server A and sends queries to a huge variety of MSSQL databases on a server B. I am concerned that my application could overload this MSSQL database server, and I would like some way of preventing it from making requests to connect to any database on that server if some arbitrary number of connections are already open and unclosed. I am looking at using connection pooling, but I am under the impression that this will only pool connections to a specific database on the MSSQL server; I want to control the combined total of connections across many different databases (incidentally, I can only find out the names of individual DBs dynamically, as they change day to day). Will connection pooling take care of this for me, or am I looking at this from the wrong perspective? I have no access to the configuration of the MSSQL server. Links to tutorials or working examples of your suggested solution are most welcome! Thanks, Caroline
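
    A pool normally caps connections per data source, so capping the combined total across many databases usually needs a shared guard of its own. Purely as a language-agnostic illustration of that idea (not the Tomcat/JDBC configuration itself), a counting semaphore wrapped around whatever hands out connections enforces a server-wide ceiling; the connect callable below is hypothetical:

    ```python
    import threading

    class ServerWideConnectionCap:
        """Cap the combined number of open connections across all databases."""

        def __init__(self, limit, connect):
            # `connect` is a hypothetical callable(db_name) -> connection.
            self._slots = threading.BoundedSemaphore(limit)
            self._connect = connect

        def acquire(self, db_name):
            self._slots.acquire()          # blocks once `limit` connections are out
            try:
                return self._connect(db_name)
            except Exception:
                self._slots.release()      # don't leak a slot on a failed connect
                raise

        def release(self, conn):
            try:
                conn.close()
            finally:
                self._slots.release()
    ```

    The same idea, one shared counter guarding every per-database pool, carries over to a Java implementation.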

    Read the article

  • HSM - cryptoki - opening sessions overhead

    - by Raj
    I have a query regarding sessions with an HSM. I am aware that there is an overhead if you initialise and finalise the Cryptoki API for every file you want to encrypt/decrypt. My questions are: Is there an overhead in opening and closing individual sessions for every file you want to encrypt/decrypt (C_Initialize/C_Finalize)? What is the maximum number of sessions I can have open to an HSM simultaneously without affecting performance? Is opening and closing a session for each individual file the best approach, or is opening one session, processing multiple files, and then closing the session better? Thanks

    Read the article

  • Only the last iteration of a for loop in JavaScript works

    - by Mengfei Murphy
    Here is a for loop written in JavaScript. It tries to run queries against Web SQL. for (var i = 0; i < 10; i++){ db.transaction(function (tx){ tx.executeSql('INSERT INTO ProjSetsT (ProjID) VALUES (?);', [i]); }); } The intent is obvious: I tried to add the values "0, 1, 2, ... 9" into the column ProjID in the table ProjSetsT. It does not work. I only got the last element, i.e. "9", inserted, but not the first nine numbers. Are there any syntax mistakes?
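
    The likely culprit is not syntax but the closure: every transaction callback captures the same loop variable, so by the time the asynchronous callbacks run they all see whatever value it ended up with (the classic JavaScript fixes are an immediately-invoked function that takes i as an argument, or declaring the variable with let). The same late-binding behaviour is easy to reproduce in Python, shown here purely as an analogue:

    ```python
    # All ten lambdas close over the same variable, so they all report its
    # final value once the loop has finished.
    callbacks = [lambda: i for i in range(10)]
    print([cb() for cb in callbacks])       # [9, 9, 9, 9, 9, 9, 9, 9, 9, 9]

    # Binding the current value at creation time (here via a default argument)
    # gives each callback its own copy, the moral equivalent of the IIFE fix.
    callbacks = [lambda i=i: i for i in range(10)]
    print([cb() for cb in callbacks])       # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
    ```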

    Read the article

  • Oracle - Are there any effects of not having a primary key on a table?

    - by Sathya
    We use sequence numbers for primary keys on the tables. There are some tables where we don't really use the primary key for any querying purpose, but we have indexes on other columns. These are non-unique indexes. The queries use these non-primary-key columns in the WHERE conditions. So I don't really see any benefit of having a primary key on such tables. My experience with SQL Server 2000 was that it would only replicate tables that had a primary key; otherwise it would not. I am using Oracle 10gR2. I would like to know if there are any such side effects of having tables that don't have a primary key.

    Read the article

  • End User Ad-Hoc Reporting Tool: Microsoft SQL Server Management Studio or Microsoft Access?

    - by schultkl
    Our centralized IT department has suggested two primary ad hoc query tools for our general user base of approximately 200 staff members: Microsoft SQL Server Management Studio 2008 (SSMS) Microsoft Access 2003 Environment The backend database is a read-only Microsoft SQL Server 2005 database. The schema is 400+ tables; allowing access to the raw data for our general staff would be a disaster. We will be building an "abstraction layer" over the raw data for our general staff to run ad hoc queries against. The abstraction layer will most likely contain a number of views. A number of users have basic knowledge of Microsoft Access; none have used SSMS. Which of the above tools (or an alternative) would be best for a decidedly non-techie user base of approximately 200 people? What are the pros and cons of each? Also, the IT department has suggested teaching people T-SQL so they can use SSMS. Is this reasonable?

    Read the article

  • SQL get data out of BEGIN; ...; END; block in python

    - by Claudiu
    I want to run many SELECT queries at once by putting them between BEGIN; and END;. I tried the following: cur = connection.cursor() cur.execute(""" BEGIN; SELECT ...; END;""") res = cur.fetchall() However, I get the error: psycopg2.ProgrammingError: no results to fetch How can I actually get data this way? Likewise, if I just have many SELECTs in a row, I only get data back from the last one. Is there a way to get data out of all of them?
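
    psycopg2 only exposes the result of the last statement when several are sent in one execute() call, and here the last statement is END, which returns no rows, hence the "no results to fetch" error. The usual approach is one execute() per SELECT, collecting each result set as you go (psycopg2 already wraps the statements in a single transaction); a minimal sketch with placeholder queries:

    ```python
    import psycopg2

    conn = psycopg2.connect("dbname=test user=postgres")   # placeholder DSN
    cur = conn.cursor()

    queries = [
        "SELECT 1",     # placeholder queries; substitute the real SELECTs
        "SELECT 2",
    ]

    results = []
    for q in queries:
        cur.execute(q)                  # one statement per execute()
        results.append(cur.fetchall())  # grab this SELECT's rows before the next

    conn.commit()
    print(results)
    ```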

    Read the article

  • Using Excel To Read Access Without MS Access On Computer

    - by Tom Clark
    I have written code that joins two tables in Access, using criteria supplied from drop-down lists in Excel, and then returns the data to a specific location on the spreadsheet (titles already on the sheet). This works fine on my box and on others with MS Access on the machine, but the purpose of writing this was to let people (associates) who don't have MS Access on their machines (which is most of them) run simple queries against the database. When we try to run this on a machine without MS Access, we get the error message "Compile Error: Can't find project or library." Since this works fine on any machine so far that has Access, but not on the others, I am wondering if this is possible without the actual Access software. Any help or insight would be appreciated. Tom

    Read the article

  • PDO update query with conditional?

    - by dmontain
    I have a PDO MySQL query that updates 3 fields. $update = $mypdo->prepare("UPDATE tablename SET field1=:field1, field2=:field2, field3=:field3 WHERE key=:key"); But I want field3 to be updated only when $update3 = true; (meaning that the update of field3 is controlled by a conditional statement). Is this possible to accomplish with a single query? I could do it with 2 queries, where I update field1 and field2, then check the boolean and update field3 if needed in a separate query. //run this query to update only fields 1 and 2 $update_part1 = $mypdo->prepare("UPDATE tablename SET field1=:field1, field2=:field2 WHERE key=:key"); //if field3 should be updated, run a separate query to update it separately if ($update3){ $update_part2 = $mypdo->prepare("UPDATE tablename SET field3=:field3 WHERE key=:key"); } But hopefully there is a way to accomplish this in 1 query?
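
    One way to keep it to a single round trip is to assemble the SET list before preparing, so field3 only appears in the statement (and in the bound parameters) when it is actually needed. Sketched in Python because the string assembly is the whole idea; the table, column, and key names are the question's own, the values are made up, and the same assembly works in PHP immediately before calling prepare():

    ```python
    def build_update(update3: bool):
        """Assemble the UPDATE so field3 is only included when update3 is true."""
        sets = ["field1 = :field1", "field2 = :field2"]
        params = {"field1": "a", "field2": "b", "key": 42}   # made-up values
        if update3:
            sets.append("field3 = :field3")
            params["field3"] = "c"
        sql = "UPDATE tablename SET " + ", ".join(sets) + " WHERE key = :key"
        return sql, params

    print(build_update(True)[0])
    print(build_update(False)[0])
    ```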

    Read the article

  • Help me understand Rails eager loading

    - by aaronrussell
    I'm a little confused as to the mechanics of eager loading in Active Record. Let's say a Book model has many Pages and I fetch a book using this query: @book = Book.find book_id, :include => :pages Now this is where I'm confused. My understanding is that @book.pages is already loaded and won't execute another query. But suppose I want to find a specific page; what would I do? @book.pages.find page_id # OR... @book.pages.to_ary.find{|p| p.id == page_id} Am I right in thinking that the first example will execute another query, therefore making the eager loading pointless, or is Active Record clever enough to know that it doesn't need to do another query? Also, my second question: is there an argument that in some cases eager loading is more intensive on the database, and that sometimes multiple small queries will be more efficient than a single large query? Thanks for your thoughts.

    Read the article

  • Retrieve names by ratio of their occurrence

    - by jjiffer
    Hello, I'm somewhat new to SQL queries, and I'm struggling with this particular problem. Let's say I have a query that returns the following 3 records (kept to one column for simplicity): Tom Jack Tom And I want to have those results grouped by the name and also include the fraction (ratio) of the occurrence of that name out of the total records returned. So, the desired result would be (as two columns): Tom | 2/3 Jack | 1/3 How would I go about it? Determining the numerator is pretty easy (I can just use COUNT() and GROUP BY name), but I'm having trouble translating that into a ratio out of the total rows returned. Any help is much appreciated!
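
    The usual trick is to put the total row count in a scalar subquery (or a window function, on engines that have them) alongside the per-name COUNT. A self-contained sqlite3 sketch of the idea; the table and column names are made up, and the two numbers are returned separately so the caller can format the "2/3" however it likes:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE people (name TEXT)")
    conn.executemany("INSERT INTO people VALUES (?)", [("Tom",), ("Jack",), ("Tom",)])

    # Per-name count alongside the total row count from a scalar subquery.
    rows = conn.execute("""
        SELECT name,
               COUNT(*)                      AS n,
               (SELECT COUNT(*) FROM people) AS total
        FROM people
        GROUP BY name
    """).fetchall()

    for name, n, total in rows:
        print(f"{name} | {n}/{total}")   # e.g. Tom | 2/3, Jack | 1/3
    ```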

    Read the article

  • SOQL query to get all related contacts of an account in an opportunity

    - by Prady
    I have a SOQL query which queries for some records based on a where condition: select id, name,account.name ... <other fields> from opportunity where eventname__c='Test Event' I also need to get the related contact details for the account in the opportunity, i.e. I need to add the email IDs of the contacts that are part of the account on the opportunity. For each opportunity, I need to get the email IDs of all contacts associated with the account on that opportunity. I can't really figure out how to approach this. Referring to the documentation, I can get the contact info of an account using the query SELECT Name, ( SELECT LastName FROM Contacts ) FROM Account How can I use this along with the opportunity? Thanks

    Read the article

  • How to create a subquery in SQL using COUNT based on the outer query

    - by user1754716
    I hope someone can help me with this query. Basically I have two queries that I want to "combine": I want the second query as an extra column alongside the first query. The first one is this: SELECT t_Item_Storage_Location.Storage_Loc_Nbr, t_Storage_Location.Storage_Loc_Type_Code, Count(t_Load.Load_Id) AS CurrentLoadCount, t_load.MMM_Id_Nbr FROM t_Load INNER JOIN (t_Storage_Location INNER JOIN t_Item_Storage_Location ON t_Storage_Location.Storage_Loc_Nbr = t_Item_Storage_Location.Storage_Loc_Nbr) ON (t_Load.Storage_Loc_Nbr = t_Item_Storage_Location.Storage_Loc_Nbr) AND (t_Load.MMM_Id_Nbr = t_Item_Storage_Location.MMM_Id_Nbr) where ((((t_Item_Storage_Location.MMM_Id_Nbr) Between '702004%' And '702011%') AND ((t_Item_Storage_Location.Storage_Loc_Nbr) Like '%A') AND ((t_Storage_Location.Storage_Loc_Type_Code)='CD') AND ((t_Load.Active_Status_Ind)='A') AND ((t_Load.QC_Status_Code) Like 'R%') AND ((t_Load.MMM_Facility_Code)='MC')) OR (((t_Item_Storage_Location.Storage_Loc_Nbr) Like '%B')) OR (((t_Item_Storage_Location.Storage_Loc_Nbr) Like '%C')) OR (((t_Item_Storage_Location.Storage_Loc_Nbr) Like '%D')) OR (((t_Item_Storage_Location.Storage_Loc_Nbr) Like '%E')) ) GROUP BY t_Item_Storage_Location.MMM_Id_Nbr, t_Item_Storage_Location.Storage_Loc_Nbr, t_Storage_Location.Storage_Loc_Type_Code, t_Load.MMM_Facility_Code, t_load.MMM_Id_Nbr HAVING Count(t_Load.Load_Id)<4 The second one is based on the t_load.MMM_Id_Nbr of the first one. Basically I want a count of all the loads with that MMM_Id_Nbr. SELECT count(Load_ID) as LoadCount, MMM_Id_Nbr, storage_Loc_Nbr FROM t_load WHERE QC_Status_Code like 'R%' and mmm_Facility_Code ='MC' and Active_Status_Ind='A' GROUP by MMM_Id_Nbr, storage_loc_Nbr

    Read the article

  • Recursive query question - break rows into columns?

    - by Stew
    I have a table "Families", like so FamilyID PersonID Relationship ----------------------------------------------- F001 P001 Son F001 P002 Daughter F001 P003 Father F001 P004 Mother F002 P005 Daughter F002 P006 Mother F003 P007 Son F003 P008 Mother and I need output like FamilyID PersonID Father Mother ------------------------------------------------- F001 P001 P003 P004 F001 P002 P003 P004 F001 P003 F001 P004 F002 P005 P006 F002 P006 F003 P007 P008 F003 P008 In which the PersonID of the Father and Mother for a given PersonID are listed (if applicable) in separate columns. I know this must be a relatively trivial query to write (and therefore to find instructions for), but I can't seem to come up with the right search terms. Searching "SQL recursive queries" has gotten me closest, but I can't quite translate those methods to what I'm trying to do here. I'm trying to learn, so multiple methods are welcome, as is vocabulary I should read up on. Thanks!

    Read the article

  • Need to map classes to different databases at runtime in Hibernate

    - by serg555
    I have a MainDB database and an unknown number (at compile time) of UserDB_1, ..., UserDB_N databases. MainDB contains the names of those UserDB databases in a table (a new UserDB can be created at runtime). All the UserDBs have exactly the same table names and fields. How do I handle such a situation in Hibernate? (The database structure cannot be changed.) Currently I am planning to create generic User classes not mapped to anything and just use native SQL for all queries: session.createSQLQuery("select * from " + db + ".user where id=1") .setResultTransformer(Transformers.aliasToBean(User.class)); Is there anything better I can do? Ideally I would want to have mappings for the UserDB tables and relations and use HQL on the required database.

    Read the article

  • How do you verify the correct data is in a data mart?

    - by blockcipher
    I'm working on a data warehouse and I'm trying to figure out how best to verify that data from our data cleansing (normalized) database makes it into our data marts correctly. I've done some searches, but the results so far talk more about ensuring things like constraints are in place and that you need to do data validation during the ETL process (e.g. dates are valid, etc.). The dimensions were pretty easy, as I could either leverage the primary key or write a very simple and verifiable query to get the data. The fact tables are more complex. Any thoughts? We're trying to make this very easy for a subject matter expert to run a couple of queries, see some data from both the data cleansing database and the data marts, and visually compare the two to ensure they are correct.

    Read the article

  • Management Studio default file save location

    - by jayrdub
    Open a new query window. Write some SQL. Save the script, and the Save File As dialog box opens - but always to the same default location in the Profiles directory. Is there any way to set my default file location? ...Like I used to do with apps from the 1980s? Under Tools|Options a default location can be specified for query results. I need the same thing for new queries (the text editor). I tried changing locations in the Registry, but SSMS just overwrote my changes. Any suggestions? (I saw this unanswered question at http://www.eggheadcafe.com/software/aspnet/30098335/management-studio-default.aspx and I had the same exact question, so I just reposted it here.)

    Read the article

  • Why isn't DBIx::Class::Schema::Loader creating my classes?

    - by Robert Wohlfarth
    I am trying to generate static schemas using DBIx::Class in Perl. The command shown below outputs a Schema.pm and no other files. Any idea what I'm doing wrong, or how to debug this? U:\wohlfarj\Software\PARS>perl -MDBIx::Class::Schema::Loader=make_schema_at,dump_to_dir:.\lib -e "make_schema_at('PARS::Schema',{debug=>1},['dbi:ODBC:PARS','user','password',{AutoCommit=>0}])" Dumping manual schema for PARS::Schema to directory .\lib ... Schema dump completed. I'm using Strawberry Perl on Windows XP. The database is SQL Server 2000, accessed through an ODBC connection. I can successfully run queries using plain old DBI with the same ODBC connection.

    Read the article

  • XSD generation from a MS SQL database using schemas

    - by madprog
    I want to use NDbUnit on an MS SQL database which uses schemas, so I have to generate the XSD schema from the database. Visual Studio has a tool to do that, but Visual Studio 2005 doesn't include the schema information in the generated XSD. Therefore, NDbUnit fails because the generated SQL queries do not match the database. Worse, when I try to use Proteus, the XSD schema doesn't validate against the database, and Proteus fails with a warning saying that data could be lost if this check were skipped. So, my question is: is there any tool that would generate my XSD schema properly from the database information?

    Read the article
