Search Results

Search found 4788 results on 192 pages for 'adhoc queries'.


  • Do you use enums in your data applications and how?

    - by Ivan
    My practice shows that a typical enterprise application has a lot of entities whose nature corresponds to an elementary enumeration. For example, we may have an Order entity with fields such as "OrderType", "OrderStatus", "Currency", etc., each referencing an entity that is nothing more than a textual name bound to a key. Using enums would look very natural here. But enums have to be defined in application code at design time, am I right? Meanwhile, we need to be able to CRUD the value variants at runtime and use them in server-side SQL (stored procedures and views). What are your practices and thoughts on this subject? I am particularly interested in C# 4, LINQ and T-SQL.
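
    One common compromise is to keep an enum in code for the handful of values the application logic branches on, while the lookup table stays the source of truth that SQL and runtime CRUD work against, and to verify at startup that the two agree. A minimal C# sketch of that check; the OrderStatus names and values here are hypothetical, not taken from the question:

        using System;
        using System.Collections.Generic;

        public enum OrderStatus
        {
            Draft = 1,
            Submitted = 2,
            Shipped = 3
        }

        public static class LookupGuard
        {
            // lookupRows: id/name pairs read from the OrderStatus lookup table at startup.
            public static void VerifyEnumMatchesLookup(IDictionary<int, string> lookupRows)
            {
                foreach (OrderStatus value in Enum.GetValues(typeof(OrderStatus)))
                {
                    string name;
                    if (!lookupRows.TryGetValue((int)value, out name) || name != value.ToString())
                        throw new InvalidOperationException(
                            "OrderStatus enum is out of sync with its lookup table.");
                }
            }
        }

    Values that only exist as data (and are CRUDed at runtime) would then deliberately not appear in the enum.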

    Read the article

  • CakePHP: How do I order results based on a 2-level deep association model?

    - by KcYxA
    I'm hoping I won't need to resort to custom queries. A related question would be: how do I retrieve data so that if an associated model is empty, no record is retrieved at all, as opposed to an empty array for the associated model? As an oversimplified example, say I have the following models: City -- Street -- House. How do I sort City results by House numbers? How do I retrieve City results that have at least one House in them? I don't want a record with a city name and details and an empty House array, as it messes up pagination. CakePHP retrieves the House records belonging to the Street in a separate query, so putting something like 'House.number DESC' into the 'order' field of the search query returns a 'field does not exist' error. Any ideas?

    Read the article

  • HSM - cryptoki - opening sessions overhead

    - by Raj
    I have a question about sessions with an HSM. I am aware that there is an overhead if you initialise and finalise the Cryptoki API for every file you want to encrypt/decrypt. My questions are: Is there an overhead in opening and closing an individual session for every file you want to encrypt/decrypt (C_Initialize/C_Finalize)? What is the maximum number of sessions I can have open against an HSM simultaneously without affecting performance? Is opening and closing a session for each individual file the better approach, or opening one session, processing multiple files and then closing the session? Thanks
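
    For reference, the "one session, many files" shape the last question describes looks roughly like the following C sketch against the PKCS#11 API; encrypt_one_file is a hypothetical helper standing in for the per-file C_EncryptInit/C_Encrypt work, and error handling is abbreviated:

        #include "pkcs11.h"   /* vendor-supplied Cryptoki header */

        /* Hypothetical helper: encrypts a single file inside an existing session. */
        extern int encrypt_one_file(CK_SESSION_HANDLE session, const char *path);

        int process_files(CK_SLOT_ID slot, const char **paths, int count)
        {
            CK_SESSION_HANDLE session;
            int i;

            if (C_Initialize(NULL_PTR) != CKR_OK)      /* once per process */
                return -1;

            if (C_OpenSession(slot, CKF_SERIAL_SESSION, NULL_PTR, NULL_PTR, &session) != CKR_OK) {
                C_Finalize(NULL_PTR);
                return -1;
            }

            for (i = 0; i < count; i++)                /* reuse one session for all files */
                encrypt_one_file(session, paths[i]);

            C_CloseSession(session);
            C_Finalize(NULL_PTR);                      /* once per process */
            return 0;
        }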

    Read the article

  • Connection to DB2 in Python

    - by Mestika
    Hi, I'm trying to create a database connection in a Python script to my DB2 database. When the connection is made I have to run some different SQL statements. I googled the problem and have read the ibm_db API (http://code.google.com/p/ibm-db/wiki/APIs) but just can't seem to get it right. Here is what I have so far:

        import sys
        import getopt
        import timeit
        import multiprocessing
        import random
        import os
        import re
        import ibm_db
        import time
        from string import maketrans

        query_str = None
        conn = ibm_db.pconnect("dsn=write", "usrname", "secret")
        query_stmt = ibm_db.prepare(conn, query_str)
        ibm_db.execute(query_stmt, "SELECT COUNT(*) FROM accounts")
        result = ibm_db.fetch_assoc()
        print result
        status = ibm_db.close(conn)

    but I get an error. I really tried everything (or, not everything, but pretty damn close) and I can't get it to work. I just need to make an automatic test script in Python that can test different queries with different indexes and so on, and for that I need to create and remove indexes along the way. Hope someone has a solution or maybe knows about some example code out there I can download and study. Thanks Mestika
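
    As a point of comparison, a minimal sketch of the usual ibm_db call order (same DSN and credentials as above, Python 2 print syntax): the SQL text goes to prepare(), execute() takes only bind parameters, and fetch_assoc() needs the statement handle:

        import ibm_db

        conn = ibm_db.pconnect("dsn=write", "usrname", "secret")

        stmt = ibm_db.prepare(conn, "SELECT COUNT(*) FROM accounts")
        ibm_db.execute(stmt)

        row = ibm_db.fetch_assoc(stmt)   # fetch from the statement handle
        print row

        ibm_db.close(conn)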

    Read the article

  • PHP PDO close()?

    - by PHPLOVER
    Can someone tell me: when you, for example, update, insert or delete, should you then close the statement with $stmt->close()? I checked the PHP manual and don't understand what close() actually does. EXAMPLE:

        $stmt = $dbh->prepare("SELECT `user_email` FROM `users` WHERE `user_email` = ? LIMIT 1");
        $stmt->execute(array($email));
        $stmt->close();

    The next part of my question: if, as an example, I had multiple update queries in a transaction, should I close each statement individually after every execute()? Because it's a transaction, I'm not sure whether I need to use $stmt->close() after each execute() or just one $stmt->close() after all of them. Thanks once again, phplover
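
    For what it's worth, PDO statements don't have a close() method; the closest equivalents are closeCursor(), which frees the pending result set so another statement can run, and unsetting the statement object itself. A small sketch, assuming $dbh is the existing PDO connection from the example above:

        <?php
        $stmt = $dbh->prepare("SELECT `user_email` FROM `users` WHERE `user_email` = ? LIMIT 1");
        $stmt->execute(array($email));
        $row = $stmt->fetch(PDO::FETCH_ASSOC);

        $stmt->closeCursor();   // release the result set
        $stmt = null;           // release the statement handle itself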

    Read the article

  • How important is caching for a site's speed with PHP?

    - by benhowdle89
    I've just made a user-content oriented website, http://www.humanisms.co.uk. It's built with PHP, MySQL and jQuery AJAX. At the moment there are only a dozen or so submissions, and already I can feel it lagging slightly when it goes to a new page (and therefore runs a new MySQL query). Is it more important for me to try to optimise my MySQL queries (with prepared statements), or is it worth looking at CDNs (Amazon S3) and caching (much like the WordPress plugin WP Super Cache), which works by serving static HTML files when no new content has been submitted? Which route is the most beneficial for me as a developer to take, i.e. where am I better off concentrating my efforts to speed up the site?

    Read the article

  • Oracle - Are there any effects of not having a primary key on a table?

    - by Sathya
    We use sequence numbers for primary keys on our tables. There are some tables where we don't really use the primary key for any querying purpose, but we do have indexes on other columns. These are non-unique indexes, and the queries use these non-primary-key columns in their WHERE conditions. So I don't really see any benefit of having a primary key on such tables. My experience with SQL Server 2000 was that it would only replicate tables that had a primary key. I am using Oracle 10gR2. I would like to know if there are any such side effects of having tables that don't have a primary key.

    Read the article

  • WSSQL query for multiple computers at once

    - by Josh
    I can run normal searches just fine. Windows 7 won't let me add a network share to my local index, but I can query the remote index just fine. The problem is that I can't find a way to query two indexes at once. I was hoping that something like this would work:

        SELECT System.ItemName
        FROM compA.SystemIndex, compB.SystemIndex
        WHERE SCOPE='file://compA/pathA' OR SCOPE='file://compB/pathB'

    but it doesn't. For simple queries, I can query compA and compB separately and then merge the results myself, but I'm hoping for a better way. Anybody here have some experience with this?

    Read the article

  • SOQL query to get all related contacts of an account in an opportunity

    - by Prady
    I have a SOQL query which fetches some records based on a WHERE condition:

        select id, name, account.name, ... <other fields>
        from opportunity
        where eventname__c = 'Test Event'

    I also need to get the related contact details for the account on the opportunity, i.e. I need to add the email IDs of the contacts that are part of the account on the opportunity. For each opportunity, I need to get all the contact email IDs associated with its account. I can't really figure out how to approach this. Referring to the documentation, I can get the contact info of an account using the query

        SELECT Name, (SELECT LastName FROM Contacts) FROM Account

    How can I use this along with Opportunity? Thanks

    Read the article

  • How do I display a field's hidden characters in the result of a query in Oracle?

    - by Chris Williams
    I have two rows that have a varchar column that are different according to a Java .equals(). I can't easily change or debug the Java code that's running against this particular database, but I do have access to query the database directly using SQL Developer. The fields look the same to me (they are street addresses with two lines separated by some newline or carriage-return/newline combination). Is there a way to see all of the hidden characters in the result of a query? I'd like to avoid having to use the ascii() function with substr() on each of the rows to figure out which hidden character is different. I'd also accept a query that shows me the first character where the two fields differ.
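
    One low-effort option along those lines is Oracle's DUMP() function, which prints the code of every byte in the value in a single call instead of per-character ascii()/substr() work. A sketch, with a hypothetical addresses table and street_address column standing in for the real names:

        SELECT id,
               street_address,
               DUMP(street_address)     AS byte_codes,      -- decimal code of each byte
               DUMP(street_address, 16) AS byte_codes_hex   -- same, in hex
        FROM   addresses
        WHERE  id IN (101, 102);   -- the two rows that compare unequal in Java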

    Read the article

  • Using Excel To Read Access Without MS Access On Computer

    - by Tom Clark
    I have written code that joins two tables in Access, using criteria supplied from drop-down lists in Excel, and then returns the data to a specific location on the spreadsheet (titles already on the sheet). This works fine on my box and on others with MS Access installed, but the purpose of writing this was to let people (associates) who don't have MS Access on their machines (which is most of them) run simple queries against the database. When we try to run this on a machine without MS Access, we get the error message "Compile error: Can't find project or library." Since this works fine on any machine that has Access, but not on the others, I am wondering if this is possible without the actual Access software. Any help or insight would be appreciated. Tom

    Read the article

  • How can I limit the number of connections to an MSSQL server from my Tomcat-deployed Java application?

    - by CJ
    Hi, I have an application that is deployed on Tomcat on server A and sends queries to a huge variety of MSSQL databases on server B. I am concerned that my application could overload this database server, and I would like some way of preventing it from making new connections to any database on that server once some arbitrary number of connections are already open and unclosed. I am looking at using connection pooling, but I am under the impression that this will only pool connections to a specific database on the MSSQL server; I want to control the combined total of these connections across many different databases (incidentally, I can only find out the names of the individual databases dynamically, as they change from day to day). Will connection pooling take care of this for me, or am I looking at this from the wrong perspective? I have no access to the configuration of the MSSQL server. Links to tutorials or working examples of your suggested solution are most welcome! Thanks, Caroline

    Read the article

  • Catching constraint violations in JPA 2.0.

    - by Dennetik
    Consider the following entity class, used with, for example, EclipseLink 2.0.2, where the link attribute is not the primary key but is unique nonetheless:

        @Entity
        public class Profile {
            @Id
            private Long id;

            @Column(unique = true)
            private String link;

            // Some more attributes and getter and setter methods
        }

    When I insert records with a duplicate value for the link attribute, EclipseLink does not throw an EntityExistsException; it throws a DatabaseException with a message explaining that the unique constraint was violated. This doesn't seem very useful, as there would not be a simple, database-independent way to catch this exception. What would be the advised way to deal with this? A few things that I have considered: checking the error code on the DatabaseException - I fear, though, that this error code is the native error code for the database; checking for the existence of a Profile with the specific value for link beforehand - this would obviously result in an enormous number of superfluous queries.

    Read the article

  • Why isn't DBIx::Class::Schema::Loader creating my classes?

    - by Robert Wohlfarth
    I am trying to generate static schemas using DBIx::Class in Perl. The command shown below outputs a Schema.pm and no other files. Any idea what I'm doing wrong, or how to debug this?

        U:\wohlfarj\Software\PARS>perl -MDBIx::Class::Schema::Loader=make_schema_at,dump_to_dir:.\lib -e "make_schema_at('PARS::Schema',{debug=>1},['dbi:ODBC:PARS','user','password',{AutoCommit=>0}])"
        Dumping manual schema for PARS::Schema to directory .\lib ...
        Schema dump completed.

    I'm using Strawberry Perl on Windows XP. The database is SQL Server 2000, accessed through an ODBC connection. I can successfully run queries using plain old DBI with the same ODBC connection.

    Read the article

  • How do you verify the correct data is in a data mart?

    - by blockcipher
    I'm working on a data warehouse and I'm trying to figure out how best to verify that data from our data cleansing (normalized) database makes it into our data marts correctly. I've done some searches, but the results so far talk more about ensuring that things like constraints are in place and that you need to do data validation during the ETL process (e.g. dates are valid, etc.). The dimensions were pretty easy, as I could either leverage the primary key or write a very simple and verifiable query to get the data. The fact tables are more complex. Any thoughts? We're trying to make this very easy for a subject matter expert to run a couple of queries, see some data from both the data cleansing database and the data marts, and visually compare the two to ensure they are correct.
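
    A common lightweight approach for the fact tables is a reconciliation query that puts the same aggregate from both sides next to each other, so the subject matter expert only has to eyeball one result set. A sketch with hypothetical names (cleansing.Sales as the normalized source, mart.FactSales as the fact table):

        SELECT 'row_count' AS check_name,
               (SELECT COUNT(*)    FROM cleansing.Sales)  AS source_value,
               (SELECT COUNT(*)    FROM mart.FactSales)   AS mart_value
        UNION ALL
        SELECT 'total_amount',
               (SELECT SUM(Amount) FROM cleansing.Sales),
               (SELECT SUM(Amount) FROM mart.FactSales);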

    Read the article

  • Expand in LINQ not loading inner data collections from the service

    - by Kit
    I am seeing odd behavior with service queries! I am using the MVVM pattern for a Silverlight 3 app on the 3.5 framework with Data Services 1.5. The following code eager-loads the parent object and the child hierarchy perfectly IF and ONLY IF I am preloading the data. But I would like to fetch a different set of the parent object (and its child hierarchy) on different button clicks. What I am seeing is that on a button click, only the parent object is retrieved, and the child hierarchy contains nothing. Any suggestions? Any ideas how to tackle this? Thanks all. The method:

        DataServiceQuery serviceQuery = (DataServiceQuery)(from m1 in dbEntities.gis_Region.Expand("gis_RegionValue/gis_Measure")
                                                           where m1.RegionGuid == new Guid(regionGuid)
                                                           select m1);
        serviceQuery.BeginExecute(GetRegionDetailAsyncResult, serviceQuery);

    The wired async result:

        DataServiceQuery query = (DataServiceQuery)result.AsyncState;
        gis_Region region = query.EndExecute(result).First();

    Read the article

  • How to efficiently handle Where and OrderBy clauses

    - by Goran
    My business layer passes all the required information to the UI layer. From what I have read, best practice in general is to send fetched data to the UI layer and to avoid passing queries like ObjectQuery. My problem with this approach: if I am to make a flexible business layer, then I should allow the UI to sort the data any way it requires. Fetching sorted data from the database and then re-sorting it in the UI feels like bad practice to me, so the only way is to somehow pass the ordering criteria down to the business layer. So what are my options? Is there a way to make it like this:

        public void OrderByMethod(params ...)
        {
            ....
        }

    so I can call it like this:

        OrderByMethod(MyEntity.Property1, MyEntity.Property2 descending, ....);

    Thanks, Goran
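
    One way to get close to that call style without handing ObjectQuery to the UI is to let the UI pass lambda key selectors and a direction flag, and have the business layer apply them before fetching. A C# sketch under that assumption (SortHelper, FetchSorted and MyEntity are illustrative names, not from the question):

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Linq.Expressions;

        public static class SortHelper
        {
            // The UI supplies the key selector and direction; sorting happens in the query, not in the UI.
            public static List<T> FetchSorted<T, TKey>(
                IQueryable<T> source,
                Expression<Func<T, TKey>> orderBy,
                bool descending)
            {
                var ordered = descending ? source.OrderByDescending(orderBy)
                                         : source.OrderBy(orderBy);
                return ordered.ToList();
            }
        }

        // Usage: var rows = SortHelper.FetchSorted(repository.MyEntities, e => e.Property2, descending: true);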

    Read the article

  • What's the proper way of accessing a database through an assembly?

    - by H4mm3rHead
    Hi, I have an ASP.NET MVC application which is built up as an assembly that queries the database and an ASP.NET front end that references this assembly; the assembly abstracts the underlying database. This means that my assembly contains an app.config file with the connection string to the database (LINQ to SQL data model). How do I go about making this more flexible? Should I make an "initialize()" method somewhere in my assembly which takes the connection string from the ASP.NET MVC application and then controls which database to use? Or how is this usually done?

    Read the article

  • Retrieve names by ratio of their occurrence

    - by jjiffer
    Hello, I'm somewhat new to SQL queries, and I'm struggling with this particular problem. Let's say I have a query that returns the following three records (kept to one column for simplicity):

        Tom
        Jack
        Tom

    I want those results grouped by name, and I also want the fraction (ratio) of each name's occurrences out of the total records returned. So the desired result would be (as two columns):

        Tom  | 2/3
        Jack | 1/3

    How would I go about it? Determining the numerator is pretty easy (I can just use COUNT() and GROUP BY name), but I'm having trouble translating that into a ratio out of the total rows returned. Any help is much appreciated!
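
    For what it's worth, one way to phrase this (MySQL-flavoured syntax assumed; the people table and name column are illustrative) is to join each group against a one-row subquery holding the total:

        SELECT p.name,
               COUNT(*)                       AS occurrences,
               COUNT(*) / t.total             AS ratio,              -- 0.6667, 0.3333
               CONCAT(COUNT(*), '/', t.total) AS ratio_as_fraction   -- '2/3', '1/3'
        FROM people AS p
        CROSS JOIN (SELECT COUNT(*) AS total FROM people) AS t
        GROUP BY p.name, t.total;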

    Read the article

  • SQL Server indexed view matching of views with joins not working

    - by usr
    Does anyone have experience of when SQL Server 2008 R2 is able to automatically match an indexed view (also known as a materialized view) that contains joins to a query? For example, the view

        select dbo.Orders.Date, dbo.OrderDetails.ProductID
        from dbo.OrderDetails
        join dbo.Orders on dbo.OrderDetails.OrderID = dbo.Orders.ID

    cannot be automatically matched to the same exact query. When I select directly from this view with (NOEXPAND), I actually get a much faster query plan that does a scan on the clustered index of the indexed view. Can I get SQL Server to do this matching automatically? I have quite a few queries and views... I am on the Enterprise edition of SQL Server 2008 R2.
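
    For readers who haven't seen the hint the asker mentions, selecting straight from the indexed view with NOEXPAND looks roughly like the following sketch (the view name is hypothetical); it forces the plan onto the view's clustered index instead of expanding the view definition:

        SELECT v.Date, v.ProductID
        FROM dbo.vOrderProducts AS v WITH (NOEXPAND);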

    Read the article

  • LINQ to SQL: a "double IN" query crashes

    - by Alex
    I need to do the following:

        var a = from c in DB.Customers
                where (from t1 in DB.Table1
                       where t1.Date >= DateTime.Now
                       select t1.ID).Contains(c.ID)
                   && (from t2 in DB.Table2
                       where t2.Date >= DateTime.Now
                       select t2.ID).Contains(c.ID)
                select c;

    It doesn't want to run. I get the following error: "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding." But when I try to run

        var a = from c in DB.Customers
                where (from t1 in DB.Table1
                       where t1.Date >= DateTime.Now
                       select t1.ID).Contains(c.ID)
                select c;

    OR

        var a = from c in DB.Customers
                where (from t2 in DB.Table2
                       where t2.Date >= DateTime.Now
                       select t2.ID).Contains(c.ID)
                select c;

    it works! I'm sure that both IN queries contain some customer IDs.

    Read the article

  • Which SQL query is faster? Filter on Join criteria or Where clause?

    - by Jon Erickson
    Compare these two queries. Is it faster to put the filter on the join criteria or in the WHERE clause? I have always felt that it is faster on the join criteria because it reduces the result set at the soonest possible moment, but I don't know for sure. I'm going to build some tests to see, but I also wanted to get opinions on which is clearer to read as well.

    Query 1

        SELECT *
        FROM TableA a
        INNER JOIN TableXRef x ON a.ID = x.TableAID
        INNER JOIN TableB b ON x.TableBID = b.ID
        WHERE a.ID = 1  /* <-- Filter here? */

    Query 2

        SELECT *
        FROM TableA a
        INNER JOIN TableXRef x ON a.ID = x.TableAID
                              AND a.ID = 1  /* <-- Or filter here? */
        INNER JOIN TableB b ON x.TableBID = b.ID

    Read the article

  • Question about MySQL indexes on low to medium cardinality columns

    - by Kevin J
    I have a general question about the way that database indexing works, particularly in MySQL. Let's say I have a table with a million rows and a column "ClientID" whose values are distributed relatively equally among 30 distinct values. Thus, this column has very low cardinality (30) relative to the primary key (1 million). Now, I understand that you shouldn't create indexes on low-cardinality fields. However, in this case, queries are only ever done with one of the 30 client IDs. So wouldn't creating an index on ClientID be helpful, as the search space is automatically reduced to 1/30th of what it normally would be? Or is my understanding of how the index works flawed? Thanks

    Read the article

  • Doing a join across two databases with different collations on SQL Server and getting an error.

    - by Andrew G. Johnson
    I know, I know - with what I wrote in the question I shouldn't be surprised. But my situation is that I'm slowly working on an inherited POS system, and my predecessor apparently wasn't aware of JOINs, so when I looked into one of the internal pages that takes 60 seconds to load, I saw that it's a fairly quick "rewrite these 8 queries as one query with JOINs" situation. The problem is that besides not knowing about JOINs, he also seems to have had a fetish for multiple databases, and, surprise surprise, they use different collations. The fact of the matter is we use all "normal" Latin characters that English-speaking people would consider the entire alphabet, and this whole thing will be out of use in a few months, so a band-aid is all I need. Long story short, I need some method of casting to a single collation so I can compare two fields from two databases. The exact error is: Cannot resolve the collation conflict between "SQL_Latin1_General_CP850_CI_AI" and "SQL_Latin1_General_CP1_CI_AS" in the equal to operation.
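
    As a band-aid, a cross-database comparison can force one collation (or the current database's default) right at the comparison. A sketch with made-up database, table and column names:

        SELECT o.OrderID, c.CustomerName
        FROM   PosDB.dbo.Orders       AS o
        JOIN   LegacyDB.dbo.Customers AS c
               ON o.CustomerCode = c.CustomerCode COLLATE SQL_Latin1_General_CP1_CI_AS;
        -- COLLATE DATABASE_DEFAULT on one or both sides is a common alternative.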

    Read the article

  • Need to map classes to different databases at runtime in Hibernate

    - by serg555
    I have a MainDB database and an unknown number (at compile time) of UserDB_1, ..., UserDB_N databases. MainDB contains the names of those UserDB databases in a table (a new UserDB can be created at runtime). All the UserDB databases have exactly the same table names and fields. How do I handle such a situation in Hibernate? (The database structure cannot be changed.) Currently I am planning to create generic User classes that are not mapped to anything and just use native SQL for all queries:

        session.createSQLQuery("select * from " + db + ".user where id=1")
               .setResultTransformer(Transformers.aliasToBean(User.class));

    Is there anything better I can do? Ideally I would want to have mappings for the UserDB tables and relations and use HQL on whichever database is required.

    Read the article
