Search Results

Search found 7116 results on 285 pages for 'nested queries'.

Page 203/285 | < Previous Page | 199 200 201 202 203 204 205 206 207 208 209 210  | Next Page >

  • In MS Access is there a way to allow forms to update while maintaining Read Only

    - by Alex
    I have several forms linked to tables via queries. The forms pull data such as sales and ratios when a product is selected from the main form's combo box. I am having two issues: 1- I would ultimately prefer the combo box to allow free entry; however, just typing in the box and hitting Enter (not an on-screen button that would trigger recalcs, just a normal Enter) does bring the new information into the sub-forms, but it also changes the information in the original table. If I make the table read only, the form simply stops working and reports that the table is read only. 2- The same read-only issue occurs when another user with read-only rights tries to use the database. I understand that read only is functioning as intended, but I am wondering if there is a way to keep some functions working while disallowing updates. I am unfortunately learning on the go, so please go easy on me. Thank you
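
    (A hedged aside, not part of the original question: one common Access trick is to base the lookup sub-forms on a query that Access treats as non-updatable, for example one using DISTINCT or an aggregate, so nothing typed into a form can write back to the base table. The table and column names below are invented purely for illustration.)

        SELECT DISTINCT p.ProductID, p.ProductName, s.Sales, s.Ratio
        FROM Products AS p
        INNER JOIN SalesSummary AS s ON s.ProductID = p.ProductID;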

    Read the article

  • SQL Server JOIN with optional NULL values

    - by Paul McLoughlin
    Imagine that we have two tables as follows:

        Trades (
            TradeRef INT NOT NULL,
            TradeStatus INT NOT NULL,
            Broker INT NOT NULL,
            Country VARCHAR(3) NOT NULL
        )

        CTMBroker (
            Broker INT NOT NULL,
            Country VARCHAR(3) NULL
        )

    (These have been simplified for the purpose of this example.) Now, if we wish to join these two tables on the Broker column, and also on the Country column whenever a country exists in the CTMBroker table, we have the following two choices:

        SELECT T.TradeRef, T.TradeStatus
        FROM Trades AS T
        JOIN CTMBroker AS B
            ON B.Broker = T.Broker
            AND ISNULL(B.Country, T.Country) = T.Country

    or

        SELECT T.TradeRef, T.TradeStatus
        FROM Trades AS T
        JOIN CTMBroker AS B
            ON B.Broker = T.Broker
            AND (B.Country = T.Country OR B.Country IS NULL)

    These are logically equivalent; however, in this specific circumstance for our database (SQL Server 2008, SP1) two different execution plans are produced for these two queries, with the second version significantly outperforming the first in terms of both time and logical reads. My question really is as follows: as a general rule, would (2) be preferred to (1), or does this just happen to exploit some particular idiosyncrasy of the optimiser in 2008 SP1 (and could therefore change with future versions of SQL Server)?
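
    (A hedged aside, not part of the original question: a third rewrite that is sometimes worth benchmarking for OR'd join predicates is to split the OR into two joins combined with UNION ALL, so each branch carries a simple equality or IS NULL test. The two branches are mutually exclusive, so this should return the same rows as version (2); whether it actually wins would need testing against the real data.)

        SELECT T.TradeRef, T.TradeStatus
        FROM Trades AS T
        JOIN CTMBroker AS B
            ON B.Broker = T.Broker AND B.Country = T.Country   -- exact country match
        UNION ALL
        SELECT T.TradeRef, T.TradeStatus
        FROM Trades AS T
        JOIN CTMBroker AS B
            ON B.Broker = T.Broker AND B.Country IS NULL       -- broker rows with no country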

    Read the article

  • How do I keep a count of undefined strings within a loop using PHP?

    - by mike
    I'm using a loop within a loop to try to generate keyword combinations and also find the ones that have been used the most. My outer loop just queries a list of keywords (let's use "chicago" as our first keyword; 3 records were found). The inner loop finds all the records in the "posts" table where keyword = "chicago". Within this loop, I need to generate strings based on info I found in the database, which would look something like "chicago bulls", "chicago bears", "chicago cubs" etc... I know how to do everything up until this point, but how do I temporarily hold these generated strings and count how many times they have been found within the 3 records?
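
    (A hedged aside, not part of the question: since the counting ultimately describes rows in the posts table, one alternative is to let MySQL build and count the phrases with GROUP BY instead of holding them in a PHP array. The keyword and phrase_suffix column names below are purely hypothetical stand-ins for whatever the strings are generated from.)

        -- Hypothetical: count generated "keyword + suffix" phrases directly in SQL
        SELECT CONCAT(keyword, ' ', phrase_suffix) AS phrase,
               COUNT(*) AS times_found
        FROM posts
        WHERE keyword = 'chicago'
        GROUP BY phrase
        ORDER BY times_found DESC;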

    Read the article

  • Jet Database (ms access) ExecuteNonQuery - Can I make it faster?

    - by bluebill
    Hi all, I have this generic routine that I wrote that takes a list of sql strings and executes them against the database. Is there any way I can make this work faster? Typically it'll see maybe 200 inserts or deletes or updates at a time. Sometimes there is a mixture of updates, inserts and deletes. Would it be a good idea to separate the queries by type (i.e. group inserts together, then updates and then deletes)? I am running this against an ms access database and using vb.net 2005.

        Public Function ExecuteNonQuery(ByVal sql As List(Of String), ByVal dbConnection As String) As Integer
            If sql Is Nothing OrElse sql.Count = 0 Then Return 0
            Dim recordCount As Integer = 0
            Using connection As New OleDb.OleDbConnection(dbConnection)
                connection.Open()
                Dim transaction As OleDb.OleDbTransaction = connection.BeginTransaction()
                'Using cmd As New OleDb.OleDbCommand()
                Using cmd As OleDb.OleDbCommand = connection.CreateCommand
                    cmd.Connection = connection
                    cmd.Transaction = transaction
                    For Each s As String In sql
                        If Not String.IsNullOrEmpty(s) Then
                            cmd.CommandText = s
                            recordCount += cmd.ExecuteNonQuery()
                        End If
                    Next
                    transaction.Commit()
                End Using
            End Using
            Return recordCount
        End Function

    Read the article

  • How many databases to support eCommerce?

    - by Terry Lorber
    I have a system with two databases: one that the customer-facing website uses, and a second that is used by the "backroom" order-fulfillment system. I've been asked to run queries from the website against the backroom system. I'd rather not; it seems risky to allow web-based requests to run unchecked against the internal system. Additionally, this means opening up routing in the firewall to allow external connections to the internal server. What's the best practice for eCommerce? Run the entire company off of one database? Or individual databases for each system, and middleware to connect them? Sometimes it might be necessary for the web application to pull data from the internal system, but not based on an HTTP request from the internet. I'm sure the best answer is "it depends!" So, if people have a rule of thumb for when to use middleware and when not to, I'd like to hear it.

    Read the article

  • jQuery AJAX & Multiple sp Result Sets

    - by Kevin
    Is it possible to use a stored procedure that returns multiple result sets in json format and process them as part of one request using ajax calls in jquery? In other words, I have a stored procedure that returns several result sets that are to be used with a series of select boxes that are all being filtered by the same criteria. If any of the select boxes is chosen that value is then passed to the stored procedure and all the subsequent select box updates reflect only results that match the filtered criteria. I don't want to have to call the same sp multiple times to process the results and was trying not to create multiple queries, so I'm wondering if it's possible to store more than one json result in a single request and then store and process them on the client side.

    Read the article

  • .net, C# Interface between Business Logic and DAL

    - by Joel
    I'm working on a small application from scratch and using it to try to teach myself architecture and design concepts. It's a .NET 3.5, WPF application, and I'm using SQL Compact Edition as my data store. I'm working on the business logic layer, and have just now begun to write the DAL. I'm just using SqlCeCommand to send over simple queries and SqlCeResultSet to get at the results. I'm starting to design my Insert and Update methods, and here's the issue - I don't know the best way to get the necessary data from the BLL into the DAL. Do I pass in a generic collection? Do I have a massive parameter list with all the data for the database? Do I simply pass in the actual business object (thus tying my DAL to the concrete stuff in the BLL)? I thought about using interfaces - simply passing IBusinessObjectA into the DAL, which provides the simplicity I'm looking for without tying me TOO tightly to current implementations. What do you guys think?

    Read the article

  • How to write custom SQLite functions in Javascript inside a Webkit browser?

    - by Jay Godse
    I have just learned how to use the SQLite database for local storage in a Webkit web browser (e.g. Google Chrome or Apple Safari) using the Javascript API. For example the "Sticky Notes" application. However, I know that SQLite has a function called sqlite_create_function() that lets you add custom functions to your instance of SQLite on the fly which can then be used inside SQL queries. This function is described at sqlite.org. I also know that you can call an equivalent of this API in Ruby as described here. QUESTION: Can anybody show me how to do this in Javascript - i.e. write a custom function in Javascript that can be bound into the SQLite database at run time to be called by the SQLite engine, and all inside a Webkit browser?

    Read the article

  • Refactoring a long method that simply populates

    - by Jeune
    I am refactoring a method which is over 500 lines (don't ask me why). The method basically queries a list of maps from the database and, for each map in the list, does some computation and adds the value of that computation to the map. There are however so many computations and puts being done that the code has reached over 500 lines already! Here's a sample preview:

        public List<HashMap> getProductData(...) {
            List<HashMap> products = productsDao.getProductData(...);
            for (Product product : products) {
                product.put("Volume", new BigDecimal(product.get("Height") * product.get("Width") * product.get("Length")));
                if (some condition here) {
                    // 20 lines worth of product.put(..,..)
                } else {
                    // 20 lines worth of product.put(..,..)
                }
                // 3 more if-else statements like the one above
                try {
                    product.put(..,..)
                } catch (Exception e) {
                    product.put("",..)
                }
                // over 8 more try-catches of the form above
            }
        }

    Any ideas on how to go about refactoring this?

    Read the article

  • Single logical SQL Server possible from multiple physical servers?

    - by TuffyIsHere
    Hi, with Microsoft SQL Server 2005, is it possible to combine the processing power of multiple physical servers into a single logical SQL server? Is it possible on SQL Server 2008? I'm thinking that if the database files were located on a SAN and one of the SQL servers somehow acted as a kind of master, then processing could be spread out over multiple physical servers - for instance even allowing simultaneous updates where there was no overlap, and with no limit at all for read-only queries on unlocked tables. We have an application that is limited by the speed of our SQL server, and we are probably stuck with SQL Server 2005 for now. Is the only option to get a single, more powerful physical server? Sorry, I'm not an expert; I'm not sure if the question is a stupid one. TIA

    Read the article

  • Dynamic query to immediate execute?

    - by Curtis White
    I am using the MSDN Dynamic LINQ to SQL package. It allows using strings for queries. But the returned type is an IQueryable and not an IQueryable<T>, so I do not have the ToList() method. How can I make this execute immediately without manually enumerating over the IQueryable? My goal is to databind in the Selecting event of a LINQ to SQL datasource, but that throws a DataContext disposed exception. I can set the query as the DataSource on a GridView, though. Any help greatly appreciated! Thanks. The Dynamic LINQ to SQL library is the one from the samples that come with Visual Studio.

    Read the article

  • Using conditionals in Linq programmatically

    - by Mike B
    I was just reading a recent question on using conditionals in LINQ and it reminded me of an issue I have not been able to resolve. When building LINQ to SQL queries programmatically, how can this be done when the number of conditionals is not known until runtime? For instance, in the code below the first clause creates an IQueryable that, if executed, would select all the tasks (called issues) in the database; the second clause refines that to just the issues assigned to one department, if one has been selected in a combobox (which has its selected item bound to the departmentToShow property). How could I do this using the selectedItems collection instead?

        IQueryable<Issue> issuesQuery;

        // Will select all tasks
        issuesQuery = from i in db.Issues
                      orderby i.IssDueDate, i.IssUrgency
                      select i;

        // Filters out all other departments if one is selected
        if (departmentToShow != "All")
        {
            issuesQuery = from i in issuesQuery
                          where i.IssDepartment == departmentToShow
                          select i;
        }

    Read the article

  • CakePHP model useTable with SQL Views

    - by Chris
    I'm in the process converting our CakePHP-built website from Pervasive to SQL Server 2005. After a lot of hassle the setup I've gotten to work is using the ADODB driver with 'connect' as odbc_mssql. This connects to our database and builds the SQL queries just fine. However, here's the rub: one of our Models was associated with an SQL view in Pervasive. I ported over the view, but it appears using the set up that I have that CakePHP can't find the View in SQL Server. Couldn't find much after some Google searches - has anyone else run into a problem like this? Is there a solution/workaround, or is there some redesign in my future?

    Read the article

  • Why has Foundation 4 made its grid classes less natural and readable?

    - by Brenden
    The Background: I love responsive CSS grids. I hate Bootstrap's complex class names, and I fell in love with Foundation's human-readable class names.

    The Problem: With Foundation 4, they have changed four columns to large-4 small-4 columns, and in my opinion this makes the HTML markup less clear. That readable style of CSS class name is exactly why I switched from Bootstrap to Foundation.

    The Question: What advantage is gained by Foundation 4's grid in making this change? It seems that you can have a different grid layout on smaller screens via media queries, but I can't think of a design that would require this. Note: I've been focused on native mobile development, and therefore I may be missing out on recent best practices.

    Read the article

  • CakePHP hasOne inefficiency?

    - by Andre
    I was looking at examples on the CakePHP website, in particular hasOne as used in linking models: http://book.cakephp.org/view/78/Associations-Linking-Models-Together My question is this: is CakePHP using two queries to build the array structure of data returned by a model that uses a hasOne linkage? Taken from CakePHP:

        // Sample results from a $this->User->find() call.
        Array
        (
            [User] => Array
                (
                    [id] => 121
                    [name] => Gwoo the Kungwoo
                    [created] => 2007-05-01 10:31:01
                )
            [Profile] => Array
                (
                    [id] => 12
                    [user_id] => 121
                    [skill] => Baking Cakes
                    [created] => 2007-05-01 10:31:01
                )
        )

    Hope this all makes sense.
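
    (A hedged aside, not from the question: with hasOne, CakePHP 1.x normally fetches the associated row in the same query via a LEFT JOIN rather than issuing a second query. The SQL it generates looks roughly like the sketch below, though the exact field list and aliases will differ.)

        SELECT User.id, User.name, User.created,
               Profile.id, Profile.user_id, Profile.skill, Profile.created
        FROM users AS User
        LEFT JOIN profiles AS Profile ON Profile.user_id = User.id;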

    Read the article

  • NHibernate 3 weaknesses

    - by Adrakadabra
    From the moment we migrated from NHibernate 2 to NHibernate 3, around 30% of our queries can no longer execute, while there were no problems with the previous version. Does anybody else have such problems? For example, some of the errors we see are like these:

        Specified cast is not valid:

        Repository<CountrySubdivision>.Find(p => p.Parent.Id == parentId);

        Specified method is not supported:

        public JsonResult AllEducationDegree(string search)
        {
            var data = Repository<EducationDegree>
                .FindBySpecification(new EducationDegreeSpecification().Search(search))
                .Take(10)
                .Select(p => new NameValue(p.Title, (int)p.Id))
                .ToList();
                // .AsDropdown(" ");
            return Json(data, JsonRequestBehavior.AllowGet);
        }

        public class EducationDegreeSpecification : FluentSpecification<EducationDegree>
        {
            public EducationDegreeSpecification Search(string EducationDegreeSearch)
            {
                if (!String.IsNullOrEmpty(EducationDegreeSearch))
                {
                    string[] searchs = EducationDegreeSearch.Split(' ');
                    foreach (string search in searchs)
                    {
                        if (!String.IsNullOrEmpty(search))
                        {
                            AddExpression(p => p.Title.Contains(search));
                        }
                    }
                }
                return this;
            }
        }

    Read the article

  • MySQL does not utilize my CPU and RAM enough?

    - by vick
    Hello Everyone! I am importing a 2.5GB CSV file to a MySQL table. My storage engine is InnoDB. Here is the script:

        use xxx;

        DROP TABLE IF EXISTS `xxx`.`xxx`;
        CREATE TABLE `xxx`.`xxx` (
            `xxx_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
            `name` varchar(128) NOT NULL,
            `yy` varchar(128) NOT NULL,
            `yyy` varchar(64) NOT NULL,
            `yyyy` varchar(2) NOT NULL,
            `yyyyy` varchar(10) NOT NULL,
            `url` varchar(64) NOT NULL,
            `p` varchar(10) NOT NULL,
            `pp` varchar(10) NOT NULL,
            `category` varchar(256) NOT NULL,
            `flag` varchar(4) NOT NULL,
            PRIMARY KEY (`xxx_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        set autocommit = 0;

        load data local infile '/home/xxx/raw.csv' into table company
        fields terminated by ',' optionally enclosed by '"'
        lines terminated by '\r\n'
        (name, yy, yyy, yyyy, yyyyy, url, p, pp, category, flag);

        commit;

    Why does my PC (core i7 920 with 6gb ram) only consume 9% cpu power and 60% ram when running these queries?
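
    (A hedged aside, not part of the question: a bulk LOAD DATA into InnoDB is single-threaded and largely disk-bound, so low CPU and RAM use is expected rather than a sign of a problem. The sketch below shows session settings that are commonly relaxed around such a load; verify each against your MySQL version and restore them afterwards.)

        SET unique_checks = 0;        -- skip uniqueness checks during the load
        SET foreign_key_checks = 0;   -- skip FK validation during the load
        SET autocommit = 0;           -- one big transaction instead of per-row commits

        LOAD DATA LOCAL INFILE '/home/xxx/raw.csv'
        INTO TABLE company
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        LINES TERMINATED BY '\r\n'
        (name, yy, yyy, yyyy, yyyyy, url, p, pp, category, flag);

        COMMIT;

        SET unique_checks = 1;        -- restore the defaults afterwards
        SET foreign_key_checks = 1;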

    Read the article

  • PHP Framework Benefits / Downfalls

    - by Lizard
    I have been a PHP developer for about 10 years now and, until about a month ago, I had never used a framework. The framework I am now using, due to an existing codebase, is CakePHP 1.2. I can see certain benefits of frameworks, with the basic helpers like default layouts, and I can definitely see the benefits of MVC keeping the logic separate etc. But the query building just seems bloated. Is this expected? Am I likely to be able to build better queries than the framework could build? I just feel I could get my apps running better without a framework. What are your thoughts?

    Read the article

  • When to use Hibernate?

    - by Ramo
    Hi All, I was asked this question in an interview, so I answered with the following:

    - Better performance: efficient queries, 1st and 2nd level caching, and good caching gives better scalability.
    - Good database portability: changing the DB is as easy as changing the dialect configuration.
    - Increased developer productivity: think only in object terms, not in query language terms.

    But I also feel that systems fall into one of the categories below, and Hibernate may not be suited to all of these cases. I'm interested in your thoughts on this - do you agree with me? Please let me know when you would use Hibernate in the following cases and why:

    - Write-only systems
    - Read-only systems
    - Write-mostly systems
    - Read-mostly systems

    Regards, Ramo

    Read the article

  • SSRS 2008 error

    - by syamantak
    I am trying to generate a report using SSRS 2008. During report processing I am facing an error which states: "An error has occurred during report processing. Query execution failed for dataset (datasetno). A severe error occurred on the current command. The results, if any, should be discarded. Operation cancelled by user." The dataset name for which query execution fails changes randomly. When I execute those dataset queries separately, they do not throw any errors, and sometimes I get my report without any failure. I am really stuck on this issue and don't have any clue how to solve this error. Please, can anyone help me? Thanks in advance.

    Read the article

  • Repetitive SQL: What does it mean?

    - by Lijo
    Hi, in a different post I got a reply that talks about repetitive SQL. Could you please explain what repetitive SQL is? http://stackoverflow.com/questions/2657459/sql-code-smells I thought I'd make this a new post as it is a different subject. The reply says that the use of "multiple stored procedures that perform the exact same joins but different filters" can be avoided by using VIEWs. Could you please give an example of something that can only be achieved using repetitive queries if we are using stored procedures? [The same can be achieved without repetition when using VIEWs.] Thanks, Lijo
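
    (A hedged aside, not part of the question: a minimal sketch of the pattern the reply describes, using invented table and column names. The shared joins are written once in a view, and each caller, stored procedure or otherwise, only adds its own filter instead of repeating the joins.)

        -- Shared joins defined once (hypothetical schema):
        CREATE VIEW CustomerOrderDetails AS
        SELECT c.CustomerId, c.Name, o.OrderId, o.OrderDate, o.Total
        FROM Customers c
        JOIN Orders o ON o.CustomerId = c.CustomerId;

        -- Each caller now differs only by its filter:
        SELECT * FROM CustomerOrderDetails WHERE OrderDate >= '2010-01-01';
        SELECT * FROM CustomerOrderDetails WHERE CustomerId = 42;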

    Read the article

  • How to use multiple database adapter to query involving tables from different databases?

    - by understack
    I've 2 databases, which are set up as mentioned here. How can I write a SQL query which involves database_1.table_1 and database_2.table_1? E.g. consider this query:

        $sql = "SELECT DISTINCT database_1.users.id, database_1.users.name
                FROM database_1.users, database_2.sales
                WHERE database_2.sales.user_id = database_1.users.id";

    How could this query be written using multiple db adapters? Edit: I'm using 2 databases because this way I can change the actual database names in application.ini. Is there any other way I can change database names without changing the SQL queries?

    Read the article

  • Aggregate Functions in Index with IBMDB2

    - by Erkan
    Is there any way to pre-aggregate the results of aggregate functions (e.g. count()) and store them in an index? The background is: I want to speed up count() queries such as:

        SELECT COUNT(users) FROM TE123 WHERE region = 'A';

    so that it would be supported by an index like:

        Region    Count(Users)
        A         548
        E         458

    I know that MQTs would also help with this problem. However, in this case it is not possible to use MQTs, as we use a kind of ORM and we don't want to define entities on MQTs. I vaguely remember - one DBA told me - that such a feature is planned for DB2 V10.
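
    (A hedged aside, not part of the question: as far as I know a plain DB2 index cannot store per-group counts, so if MQTs are off the table, a common workaround is a small summary table that you refresh yourself, on a schedule or via triggers, and query instead of the base table. Everything below, including the summary table's name and the VARCHAR(3) type for region, is a hypothetical sketch.)

        -- Hypothetical summary table maintained outside the ORM:
        CREATE TABLE TE123_REGION_COUNTS (
            region     VARCHAR(3) NOT NULL PRIMARY KEY,
            user_count BIGINT     NOT NULL
        );

        -- Periodic refresh step:
        DELETE FROM TE123_REGION_COUNTS;
        INSERT INTO TE123_REGION_COUNTS (region, user_count)
        SELECT region, COUNT(users)
        FROM TE123
        GROUP BY region;

        -- Cheap lookup instead of counting the base table:
        SELECT user_count FROM TE123_REGION_COUNTS WHERE region = 'A';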

    Read the article

  • How to recalculate primary index?

    - by JohnM2
    I have a table in a MySQL database with an auto-increment PRIMARY KEY. On a regular basis, rows in this table are deleted and added, so the PK value of the latest row grows very fast even though there are not that many rows in the table. What I want to do is to "recalculate" the PK so that the first row has PK = 1, the second PK = 2, and so on. There are no external dependencies on the PK of this table, so it would be "safe". Is there any way it can be done using only MySQL queries/tools, or do I have to do it from my code?
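
    (A hedged aside, not part of the question: a minimal sketch of one way to do this with plain MySQL statements, assuming a table named t with an id column and, as stated, no external references to the old values. It rewrites every key, so back the table up first.)

        -- Renumber existing rows in their current id order (hypothetical table/column names):
        SET @n := 0;
        UPDATE t SET id = (@n := @n + 1) ORDER BY id;

        -- Reset the counter; MySQL will bump it to MAX(id) + 1 for the next insert:
        ALTER TABLE t AUTO_INCREMENT = 1;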

    Read the article

  • bulk update/delete entities of different kind in db.run_in_transaction

    - by Ray Yun
    Here is pseudo code for bulk updating/deleting entities of different kinds in a single transaction. Note that the Album and Song entities have AlbumGroup as their root entity.

        class AlbumGroup:
            pass

        class Album:
            group = db.ReferenceProperty(reference_class=AlbumGroup, collection_name="albums")

        class Song:
            album = db.ReferenceProperty(reference_class=Album, collection_name="songs")

        def bulk_update_album_group(album_group):
            updated = [album_group]
            deleted = []
            for album in album_group.albums:
                updated.append(album)
                for song in album.songs:
                    if song.is_updated:
                        updated.append(song)
                    if song.is_deleted:
                        deleted.append(song)
            db.put(updated)
            db.delete(deleted)

        a = AlbumGroup.all().filter("...").get()
        # bulk update/delete album group. for simplicity, album cannot be deleted.
        db.run_in_transaction(bulk_update_album_group, a)

    But I ran into the famous "Only Ancestor Queries in Transactions" error when iterating the reference properties like album.songs or album_group.albums. I guess an ancestor() filter does not help because those entities are modified in memory. Should I avoid iterating reference properties inside the transaction function and always provide the entities as function parameters instead, like def bulk_update_album_group(updated, deleted)? Is there any good coding pattern for this situation?

    Read the article
