Search Results

Search found 10966 results on 439 pages for 'kevin db'.


  • Anonymous iterator blocks in Clojure?

    - by Checkers
    I am using clojure.contrib.sql to fetch some records from an SQLite database.

        (defn read-all-foo []
          (with-connection *db*
            (with-query-results res ["select * from foo"]
              (into [] res))))

    Now, I don't really want to realize the whole sequence before returning from the function (i.e. I want to keep it lazy), but if I return res directly, or wrap it in some kind of lazy wrapper (for example, I want to apply a certain map transformation to the result sequence), the SQL-related bindings will be reset and the connection will be closed after I return, so realizing the sequence will throw an exception. How can I enclose the whole function in a closure and return a kind of anonymous iterator block (like yield in C# or Python)? Or is there another way to return a lazy sequence from this function?

    Read the article

  • Store data in an inconvenient table or create a derived table?

    - by user1705685
    I have a certain predefined database structure that I am stuck with. The question is whether this structure is OK for ORM, or whether I should add a processing layer that would create a more convenient structure every time something is inserted into the original DB. To simplify, here's roughly what it looks like. I have a person table: PersonId, Name. And I have a properties table: PersonId, PropertyType, PropertyValue. So, for person John Doe... (1, 'John Doe') ...I could have three properties: (1, 'phone', '555-55-55'), (1, 'email', '[email protected]'), (1, 'type', 'employee'). By using ORM I would like to get a "person" object that would have the properties "name", "phone", "email" and "type". Can Propel do that? How efficient is it? Is it a better idea to create a table with columns "phone", "email", "type" and fill it automatically as new rows are inserted into the properties table?
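
    One way to get the derived structure without maintaining it by hand is a view that pivots the property rows into columns; a minimal sketch in MySQL-style SQL, assuming the person/properties layout described above (the view name is illustrative):

        -- One row per person, with each property type pulled out into its own column.
        CREATE VIEW person_flat AS
        SELECT p.PersonId,
               p.Name,
               MAX(CASE WHEN pr.PropertyType = 'phone' THEN pr.PropertyValue END) AS phone,
               MAX(CASE WHEN pr.PropertyType = 'email' THEN pr.PropertyValue END) AS email,
               MAX(CASE WHEN pr.PropertyType = 'type'  THEN pr.PropertyValue END) AS type
        FROM person p
        LEFT JOIN properties pr ON pr.PersonId = p.PersonId
        GROUP BY p.PersonId, p.Name;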

    Read the article

  • iPhone CoreData Migration and modifying data

    - by ScottL
    I have an app that has both static data and user-entered data in the CoreData store. I understand how to do a lightweight migration to a new database version, but how do I add or modify the static data without affecting the user's data? If I have 50 static data entries to add and a couple to modify (i.e. spelling mistakes), should they be stored in a different sqlite db and copied over? Also, is it possible to look at the version of the data store so that this copy only happens the first time the app is started up after upgrading? Sorry for the general noob-type question, but this is the first time I have ever had to deal with this sort of issue. Thanks, Scott

    Read the article

  • SQL Database dilemma : Optimize for Querying or Writing?

    - by Harry
    I'm working on a personal project (a search engine) and have a bit of a dilemma. At the moment it is optimized for writing data to the search index and significantly slow for search queries. The DTA (Database Engine Tuning Advisor) recommends adding a couple of indexed views in order to speed up search queries. But this is to the detriment of writing new data to the DB. It seems I can't have one without the other! This is obviously not a new problem. What is a good strategy for this issue?

    Read the article

  • SELECT only a certain set of rows at a time

    - by prmatta
    I need to select data from one table and insert it into another table. Currently the SQL looks something like this:

        INSERT INTO A (x, y, z)
        SELECT x, y, z
        FROM B b
        WHERE ...

    However, the SELECT is huge, resulting in over 2 million rows, and we think it is taking up too much memory. Informix, the db in this case, runs out of virtual memory when the query is run. How would I go about selecting and inserting a set of rows at a time (say 2000), given that I don't think there are any row IDs to key on?
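
    One common workaround is to copy the rows in slices rather than all at once; a rough sketch in generic SQL, assuming B has some indexed, orderable column b_id that can be used to carve the set into ranges (the column name is illustrative, and Informix syntax may need adjusting):

        -- Run repeatedly, moving the range forward (2001-4000, 4001-6000, ...) until no rows are copied.
        INSERT INTO A (x, y, z)
        SELECT x, y, z
        FROM B b
        WHERE ...
          AND b.b_id BETWEEN 1 AND 2000;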

    Read the article

  • Database for Large number of 1kB data chunks (MySQL?)

    - by The Unknown
    I have a very large dataset, each item in the dataset being roughly 1kB in size. The data needs to be queried rapidly by many applications distributed over a network. The dataset has more than a million items (so 500 million+ 1kB data chunks). What would be the best method of storing this dataset (it needs to allow adding more items and reading them rapidly, but never modifying already-added data)? Would using a MySQL DB with a binary blob column be appropriate? Or should each of these be stored as files on a file system? Edit: the number is 1 million items now, but it needs to be able to scale to well over 500 million items easily.
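
    If the MySQL route is taken, a minimal sketch of what an append-only chunk table might look like (names are illustrative; partitioning, indexing and storage-engine choices are separate decisions):

        CREATE TABLE chunk (
            chunk_id  BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            payload   BLOB            NOT NULL,  -- the ~1kB of opaque binary data
            added_at  TIMESTAMP       NOT NULL DEFAULT CURRENT_TIMESTAMP
        ) ENGINE=InnoDB;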

    Read the article

  • Bandwidth Monitoring in asp.net

    - by asifch
    Hi, we are developing a multi-tenant application in ASP.NET with a separate DB for each tenant, and one of the requirements is to monitor the bandwidth usage of each tenant. I have tried to search but have not found much help on the topic. We want to monitor exactly how much bandwidth is being used by each tenant, while each tenant can have its own top-level domain, a subdomain, or a combination of both. The options I can think of are: 1. IIS log monitoring, meaning a separate application that calculates the bandwidth for each tenant. 2. Logging each request and response for a tenant from within the application and then calculating the total bandwidth usage from that. 3. Using third-party components if available. What do you think will be the best approach, and is there any other way to do this?

    Read the article

  • Capistrano update causes C: to be placed in the current directory (cygwin)

    - by user321775
    When I run cap deploy:update in a directory on my local machine (via cygwin), "C:" magically appears in the directory. Sure enough, I can cd to it and it's my Windows C: drive. Now I'm afraid to delete it, but I definitely don't want it in this directory (a rails project under /home/username/blah/blah). Here's my config/deploy.rb file:

        # custom options
        set :application, "xyz.com"
        set :repository, "ssh://[email protected]:yyyy/home/git/xxx"
        set :user, "myname"
        set :runner, user
        set :use_sudo, false
        server "xxx.xxx.xxx.xxx:yyyy", :app, :web, :db, :primary => true

        # deploy to
        set :deploy_to, "/home/myname/public_html/xyz"

        # repository
        set :scm, :git
        set :deploy_via, :copy

        # ssh options
        default_run_options[:pty] = true
        ssh_options[:paranoid] = false
        ssh_options[:port] = yyyy

        # start passenger
        namespace :deploy do
          task :start do ; end
          task :stop do ; end
          task :restart, :roles => :app, :except => { :no_release => true } do
            run "#{try_sudo} touch #{File.join(current_path,'tmp','restart.txt')}"
          end
        end

    Anyone see the problem? And does anyone know a safe way of getting rid of the C: drives that have already shown up (this has happened in a few directories)?

    Read the article

  • How to find out efficiently the auto-generated id for a new object when using JPA?

    - by webstarg
    Hello, I have an attribute which is annotated with @Id. The ID is going to be generated automatically when persisting the object. That means that the ID value is not defined before I persist the object. After persisting it, it has an ID (in the database), but unfortunately the field still remains null as long as I don't reload it from the DB. Is there any easy way to find out the generated id? Or better: to configure it so that the generated value is written back into the field? Thanks in advance

    Read the article

  • Copy a Table's data from a Stored Procedure

    - by Niike2
    I am learning how to use SQL and stored procedures. I want to copy data from one table into another table in another database with a stored procedure, and I know my syntax below is incorrect. The problem is I don't know in advance what table or what database to copy to, so I want it to use parameters and not specify the columns explicitly. I have 2 databases (Master_db and Master_copy) with the same table structure in each DB. I want to quickly select a table in Master_db and copy that table's data into the Master_copy table with the same name. I have come up with something like this:

        USE Master_DB
        CREATE PROCEDURE TransferData
        DEFINE @tableFrom, @tableTo, @databaseTo;
        INSERT INTO @databaseTo.dbo.@databaseTo
        SELECT * FROM Master_DB.dbo.@tableFrom
        GO;
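
    For comparison, a rough sketch of how this is often done in SQL Server using dynamic SQL, since table and database names cannot be used directly as parameters; the identifiers are wrapped with QUOTENAME to reduce the injection risk (this assumes T-SQL and that both tables share the same columns):

        USE Master_DB
        GO
        CREATE PROCEDURE TransferData
            @tableFrom  sysname,
            @tableTo    sysname,
            @databaseTo sysname
        AS
        BEGIN
            DECLARE @sql nvarchar(max);
            -- Build the INSERT ... SELECT with the target database, target table and source table spliced in.
            SET @sql = N'INSERT INTO ' + QUOTENAME(@databaseTo) + N'.dbo.' + QUOTENAME(@tableTo)
                     + N' SELECT * FROM Master_DB.dbo.' + QUOTENAME(@tableFrom) + N';';
            EXEC sp_executesql @sql;
        END
        GO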

    Read the article

  • What does open source license (like GNU-GPL) mean?

    - by Hemant
    I am looking to use an open source product which has a GNU GPL-like license, and it says that if I use that product, I must share the source code of my application. I am slightly confused about this. I understand that Linux is available under the GNU GPL license as well. Does that mean ALL Linux applications are, and have to be, open source? Does it mean I can ask Oracle Corp for the complete source code of the Oracle DB (at least the part that runs on Linux)?

    Read the article

  • Is there a best practice for maintaining history in a database?

    - by Pete
    I don't do database work that often, so this is totally unfamiliar territory for me. I have a table with a bunch of records that users can update. However, I now want to keep a history of their changes just in case they want to roll back. Rollback in this case is not a db rollback but more like reverting changes two weeks later when they realize they made a mistake. The distinction is that I can't have a transaction do the job. Is the current practice to use a separate table, or just a flag in the current table? It's a small database, 5 tables each with < 6 columns, < 1000 rows total.
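
    One common practice is a separate history table filled by an update trigger on the live table; a minimal sketch in SQL Server-style SQL, using a made-up table named widget purely for illustration:

        -- History table mirrors the live columns plus audit metadata.
        CREATE TABLE widget_history (
            widget_id  int          NOT NULL,
            name       varchar(100) NOT NULL,
            changed_at datetime     NOT NULL DEFAULT GETDATE(),
            changed_by varchar(128) NOT NULL
        );

        -- Snapshot the old values every time a row in widget is updated.
        CREATE TRIGGER trg_widget_history ON widget
        AFTER UPDATE
        AS
        BEGIN
            INSERT INTO widget_history (widget_id, name, changed_by)
            SELECT d.widget_id, d.name, SUSER_SNAME()
            FROM deleted d;
        END;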

    Read the article

  • Does database affect classes?

    - by satyanarayana
    I created a User class and a UserDAOImpl class for querying the DB using the User class. As there is one table to be queried, these two classes are sufficient for me. But what if new fields are to be added, and that one table is to be divided into 3 tables (user_info, user_profile and user_address) to store a user? As new fields are added, I need to change the User and UserDAOImpl classes; it seems these two are not sufficient. It seems database changes affect my classes. In this case, do I need to divide the User class into 3 classes as the tables change? Can anyone suggest how I can solve this without making too many changes?

    Read the article

  • mysql select where count = 0

    - by david parloir
    Hi, in my db I have a "sales" table and a "sales_item" table. Sometimes something goes wrong and the sale is recorded but not the sales items. So I'm trying to get the sales IDs from my "sales" table that haven't got any rows in the sales_item table. Here's the mysql query I thought would work, but it doesn't:

        SELECT s.*
        FROM sales s
        NATURAL JOIN sales_item si
        WHERE s.date like '" . ((isset($_GET['date'])) ? $_GET['date'] : date("Y-m-d")) . "%'
          AND v.sales_id like '" . ((isset($_GET['shop'])) ? $_GET['shop'] : substr($_COOKIE['shop'], 0, 3)) ."%'
        HAVING count(si.sales_item_id) = 0;

    Any thoughts?
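
    The usual way to find parent rows with no children is a LEFT JOIN filtered on NULL (or a NOT EXISTS subquery); a minimal sketch against the two tables described above, assuming sales_item carries a sales_id foreign key and leaving out the date/shop filters:

        SELECT s.*
        FROM sales s
        LEFT JOIN sales_item si ON si.sales_id = s.sales_id
        WHERE si.sales_id IS NULL;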

    Read the article

  • php mysql search in 2 columns in 2 tables.

    - by andrew fishwick
    Hey, I have two tables in one DB, one called Cottages and one called Hotels. Both tables have the same named fields. I basically have a search bar that I want to search both of the fields in both of the tables (the two fields being called "Name" and "Location"). So far I have:

        $sql = mysql_query("SELECT * FROM Cottages WHERE Name LIKE '%$term%' or Location LIKE '%$term%' LIMIT 0, 30");

    But this only searches the Cottages table; how can I make it search both the Cottages and Hotels tables? Andy
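
    One straightforward option is a UNION across the two tables; a rough sketch of the SQL only, mirroring the $term interpolation from the question (in MySQL a trailing LIMIT applies to the whole UNION result), with a literal source column added so each row shows which table it came from:

        SELECT Name, Location, 'cottage' AS source
        FROM Cottages
        WHERE Name LIKE '%$term%' OR Location LIKE '%$term%'
        UNION ALL
        SELECT Name, Location, 'hotel' AS source
        FROM Hotels
        WHERE Name LIKE '%$term%' OR Location LIKE '%$term%'
        LIMIT 0, 30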

    Read the article

  • Several different project types in solution use same class. Where can I place this class?

    - by user3605366
    I have one solution with 3 related projects: 1) a Windows console app that reads data and stores it to an mssql DB, 2) a WCF service that will retrieve data from mssql, 3) a website that will read from the WCF service. There could be other projects in the future. The first two projects (and any related future projects) use a Sqlhelper class. Should I create a separate project for it? The most ideal option would be a class library, but I don't know if a WCF service invoking a DLL is correct. Any help is appreciated. Thanks.

    Read the article

  • Storing PDFs in MS Access Database using Forms

    - by Matthew Jones
    I need to store PDF files in an Access database on a shared drive using a form. I figured out how to do this in tables (using the OLE Object field, then just drag-and-drop), but I would like to do this on a Form that has a Save button. Clicking the Save button would store the file (not just a link) in the database. Any ideas on how to do this? EDIT: I am using Access 2003, and the DB will be stored on a shared drive, so I'm not sure linking to the files will solve the problem.

    Read the article

  • Stored Procedure - forcing execution order

    - by meepmeep
    I have a stored procedure that itself calls a list of other stored procedures in order:

        CREATE PROCEDURE [dbo].[prSuperProc]
        AS
        BEGIN
            EXEC [dbo].[prProc1]
            EXEC [dbo].[prProc2]
            EXEC [dbo].[prProc3]
            --etc
        END

    However, I sometimes get some strange results in my tables, generated by prProc2, which is dependent on the results generated by prProc1. If I manually execute prProc1, prProc2, prProc3 in order then everything is fine. It appears that when I run the top-level procedure, Proc2 is being executed before Proc1 has completed and committed its results to the db. It doesn't always go wrong, but it seems to go wrong when Proc1 has a long execution time (in this case ~10s). How do I alter prSuperProc such that each procedure only executes once the preceding procedure has completed and committed? Transactions?
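
    If the intent is to make sure each step's work is committed before the next one starts, one option is to wrap each call in its own explicit transaction; a minimal T-SQL sketch with error handling deliberately omitted (whether this actually changes the observed behaviour depends on what the inner procedures do with transactions themselves):

        ALTER PROCEDURE [dbo].[prSuperProc]
        AS
        BEGIN
            BEGIN TRANSACTION;
            EXEC [dbo].[prProc1];
            COMMIT TRANSACTION;

            BEGIN TRANSACTION;
            EXEC [dbo].[prProc2];
            COMMIT TRANSACTION;

            BEGIN TRANSACTION;
            EXEC [dbo].[prProc3];
            COMMIT TRANSACTION;
        END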

    Read the article

  • JMS message received at only one server

    - by BJH
    I'm having a problem with a JEE6 application running in a clustered environment on WebSphere Application Server 8. A search index (using Lucene) is used for quick search in the UI and must be re-indexed after new data arrives in the corresponding DB layer. To achieve this we send a JMS message to the application, and the search index is then refreshed. The problem is that each message only arrives at one of the cluster members, so the search index is up to date only there; on the other servers it remains outdated. How can I ensure that the search index gets updated on all cluster members? Can I somehow receive the message on all servers? Or is there a better way to do this?

    Read the article

  • Accessing a drive on remote server via app.config

    - by user349134
    I am working on a website with a scheduled dataloader exe. The website lives on the web server and the dataloader lives on the DB server. One of the steps in the process is for the dataloader to access the WEB server (to copy a maintenance page file, e.g. \\192.168.1.101\c$\maintenance.htm). I am, not surprisingly, running into permissions issues because the dataloader needs to be able to log in to the WEB server as an admin to copy the file. Is there a way I can set up the login (something akin to impersonating a user through an App.config)? Thanks! -KC

    Read the article

  • Transform LINQ Dataset into a Matrix for export

    - by Mad Halfling
    Hi folks, I've got a data table with columns that include Item, Category and Value (and others, but those are the only relevant ones for this problem) that I access via LINQ in a C# ASP.NET MVC app. I want to transform these into a matrix and output that as a CSV file to pull into Excel as a matrix with the items down the side, the categories across the top and the values in the row cells. However, I don't know how many, or what, categories there will be in this table, nor will there always be a record for each item/category combination. I've written this by looping round, getting my "master category" list, then looking again for each item, filling in either a blank or the Value, depending on whether the item/category record exists, but as there are currently 27000 records in the table, this isn't as fast as I'd like. Is there a slicker and faster way I can do this, maybe via LINQ (firing into a quicker SQL statement so the DB server can do the leg-work), or will any method essentially come back to what I am doing? Thx MH

    Read the article

  • laravel multiple where clauses within a loop

    - by user1424508
    Pretty much I want the query to select all records of users that are 25 years old AND are either between 150-170cm OR 190-200cm. I have the query written down below. However, the problem is it keeps getting 25-year-olds OR people who are 190-200cm, instead of 25-year-olds that are 150-170cm OR 25-year-olds that are 190-200cm tall. How can I fix this? Thanks.

        $heightarray = array(array(150,170), array(190,200));
        $user->where('age', 25);
        for ($i = 0; $i < count($heightarray); $i++) {
            if ($i == 0) {
                $user->whereBetween('height', $heightarray[$i]);
            } else {
                $user->orWhereBetween('height', $heightarray[$i]);
            }
        }
        $user->get();

    Edit: I tried advanced wheres (http://laravel.com/docs/queries#advanced-wheres) and it doesn't work for me, as I cannot pass the $heightarray parameter into the closure. From the laravel documentation:

        DB::table('users')
            ->where('name', '=', 'John')
            ->orWhere(function($query)
            {
                $query->where('votes', '>', 100)
                      ->where('title', '<>', 'Admin');
            })
            ->get();

    Read the article

  • Separating data from the UI code with Linq to SQL entities

    - by Sir Psycho
    If it's important to keep data access 'away' from business and presentation layers, what alternatives or approaches can I take so that my LINQ to SQL entities can stay in the data access layer? So far I seem to be simply duplicating the classes produced by sqlmetal, and passing those objects around instead, simply to keep the two layers apart. For example, I have a table in my DB called Books. If a user is creating a new book via the UI, the Book class generated by sqlmetal seems like a perfect fit, although I'm tightly coupling my design by doing so.

    Read the article

  • ASP.MVC ModelBinding Behaviour

    - by OldBoy
    This one has me stumped, despite the numerous posts on here. The scenario is a basic MVC(2) web application with simple CRUD operations. Whenever the edit form is submitted and UpdateModel() is called, an exception is thrown:

        System.Data.Linq.ForeignKeyReferenceAlreadyHasValueException was unhandled by user code

    This occurs against a DropDownList value which is a foreign key on the entity table. However, there is another DropDownList on the form, representing another foreign key, which does not throw the error (unsurprisingly). Changing the property values manually inside the Edit action:

        Recipe recipe = repository.GetRecipe(int.Parse(formValues["recipeid"]));
        recipe.CategoryId = Convert.ToInt32(formValues["CategoryId"].ToString());
        recipe.Page = int.Parse(formValues["Page"].ToString());
        recipe.PublicationId = Convert.ToInt32(formValues["PublicationId"].ToString());

    allows the CategoryId and Page properties to be updated, and then the error is thrown on the PublicationId. All of the referential integrity is checked and the same in the db and the dbml. Any light shed on this would be most welcome.

    Read the article

  • Storing user settings in table - how?

    - by Mdillion
    I have about 200 settings per user; these include notice settings and tracing settings for user activities on objects. The problem is how to store them in the DB. Should each setting be a row or a column? If a column, then the table will have 200 columns. If a row, then about 3 columns but 200 rows per user, times even 10 million users = not good. So how else can I store all these settings? NOTE: these settings are a mix of text entries and FK lookups to other tables. Thanks.

    Read the article
