Search Results

Search found 36946 results on 1478 pages for 'sample database'.


  • MySQL: "FLUSH TABLES WITH READ LOCK" started automatically

    - by ming yeow
    I would like to understand how this happened. I was running a query that would take a long time, but it should not lock up any table. However, my databases were practically down: it seems they were being locked up by "FLUSH TABLES WITH READ LOCK".

        03:21:31  select type_id, count(*) from guid_target_infos group by type_id
        02:38:11  select type_id, count(*) from guid_infos group by type_id
        02:24:29  FLUSH TABLES WITH READ LOCK

    But I did not start this command. Can someone tell me why it was started automatically?
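
    Backup tools are the usual culprits here: mysqldump with --lock-all-tables or --master-data, and snapshot-based backup agents, issue FLUSH TABLES WITH READ LOCK automatically, so a scheduled backup is a plausible (if unconfirmed) explanation. A minimal way to identify the session holding the lock next time:

        -- Show every connected session, its user/host and current statement;
        -- the Id and User columns reveal who issued the global read lock.
        SHOW FULL PROCESSLIST;

        -- List tables that are currently locked or in use.
        SHOW OPEN TABLES WHERE In_use > 0;

        -- If the locking session is expendable, terminate it by Id
        -- (the Id 42 below is purely illustrative).
        KILL 42;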


  • How to use external triggers on Oracle 11g

    - by RBA
    Hi, I want to fire a trigger whenever an INSERT command is fired. The trigger will access a PL/SQL file which can change at any time. So the question is: if we design the trigger, how can we make sure this dynamic thing happens? It does not work from within a stored procedure. I think it should work for 1) external procedures or 2) an EXECUTE statement. Please correct me if I am wrong. I was working on external procedures, but I am not able to find a way to execute the external procedure from here on.

        SQL> CREATE OR REPLACE FUNCTION Plstojavafac_func (N NUMBER) RETURN NUMBER AS
          2  LANGUAGE JAVA
          3  NAME 'Factorial.J_calcFactorial(int) return int';
          4  /

        SQL> CREATE OR REPLACE TRIGGER student_after_insert
          2  AFTER INSERT
          3  ON student
          4  FOR EACH ROW

    How do I call the procedure from here? And are my interpretations right? Please suggest. Thanks.
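
    For reference, a Java stored function declared this way is callable like any ordinary PL/SQL function, so the trigger body can invoke it directly. A minimal sketch, assuming hypothetical student columns id and marks and a hypothetical audit_log table:

        CREATE OR REPLACE TRIGGER student_after_insert
        AFTER INSERT ON student
        FOR EACH ROW
        DECLARE
          result NUMBER;
        BEGIN
          -- Call the Java-backed function with the newly inserted value.
          result := Plstojavafac_func(:NEW.marks);
          -- Use the result, e.g. record it for auditing.
          INSERT INTO audit_log (student_id, fact_value)
          VALUES (:NEW.id, result);
        END;
        /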


  • MySQL multiple dependent subqueries, painfully slow

    - by matt80
    I have a working query that retrieves the data I need, but unfortunately it is painfully slow (it runs for over 3 minutes). I have indexes in place, but I think the problem is the multiple dependent subqueries. I've been trying to rewrite the query using joins, but I can't seem to get it to work. Any help would be greatly appreciated.

    The tables: basically, I have 2 tables. The first (prices) holds the prices of items in a store. Each row is the price of an item on that day, and new rows are added every day with an updated price. The second table (watches_US) holds the item information (name, description, etc.).

        CREATE TABLE `prices` (
          `prices_id` int(11) NOT NULL auto_increment,
          `prices_locale` enum('CA','DE','FR','JP','UK','US') NOT NULL default 'US',
          `prices_watches_ID` char(10) NOT NULL,
          `prices_date` datetime NOT NULL,
          `prices_am` varchar(10) default NULL,
          `prices_new` varchar(10) default NULL,
          `prices_used` varchar(10) default NULL,
          PRIMARY KEY (`prices_id`),
          KEY `prices_am` (`prices_am`),
          KEY `prices_locale` (`prices_locale`),
          KEY `prices_watches_ID` (`prices_watches_ID`),
          KEY `prices_date` (`prices_date`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=61764 ;

        CREATE TABLE `watches_US` (
          `watches_ID` char(10) NOT NULL,
          `watches_date_added` datetime NOT NULL,
          `watches_last_update` datetime default NULL,
          `watches_title` varchar(255) default NULL,
          `watches_small_image_height` int(11) default NULL,
          `watches_small_image_width` int(11) default NULL,
          `watches_description` text,
          PRIMARY KEY (`watches_ID`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    The query retrieves the last 10 price changes over a period of 30 hours, ordered by the size of the price change. So I have subqueries to get the newest price, the oldest price within 30 hours, and then to calculate the price change. Here's the query:

        SELECT watches_US.*, prices.*, watches_US.watches_ID as current_ID,
          ( SELECT prices_am FROM prices
            WHERE prices_watches_ID = current_ID AND prices_locale = 'US'
            ORDER BY prices_date DESC LIMIT 1 ) as new_price,
          ( SELECT prices_date FROM prices
            WHERE prices_watches_ID = current_ID AND prices_locale = 'US'
            ORDER BY prices_date DESC LIMIT 1 ) as new_price_date,
          ( SELECT prices_am FROM prices
            WHERE ( prices_watches_ID = current_ID AND prices_locale = 'US')
              AND ( prices_date >= DATE_SUB(new_price_date, INTERVAL 30 HOUR) )
            ORDER BY prices_date ASC LIMIT 1 ) as old_price,
          ( SELECT ROUND(((new_price - old_price)/old_price)*100,2) ) as percent_change,
          ( SELECT (new_price - old_price) ) as absolute_change
        FROM watches_US
        LEFT OUTER JOIN prices ON prices.prices_watches_ID = watches_US.watches_ID
        WHERE ( prices_locale = 'US' )
          AND ( prices_am IS NOT NULL )
          AND ( prices_am != '' )
        HAVING ( old_price IS NOT NULL )
          AND ( old_price != 0 )
          AND ( old_price != '' )
          AND ( absolute_change < 0 )
          AND ( prices.prices_date = new_price_date )
        ORDER BY absolute_change ASC
        LIMIT 10

    How would I rewrite this to use joins instead, or otherwise optimize it so it doesn't take over 3 minutes to get a result? Any help would be greatly appreciated! Thank you kindly.
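
    One common shape for the join rewrite, offered as an untested sketch against the schema above rather than a drop-in replacement: compute each watch's newest price date once in a derived table, then join back to prices for the new and old rows, so the correlated subqueries are no longer re-run per output row.

        SELECT w.*,
               newp.prices_am   AS new_price,
               newp.prices_date AS new_price_date,
               oldp.prices_am   AS old_price,
               (newp.prices_am - oldp.prices_am) AS absolute_change
        FROM (
            -- one row per watch: the date of its newest US price
            SELECT prices_watches_ID, MAX(prices_date) AS max_date
            FROM prices
            WHERE prices_locale = 'US'
            GROUP BY prices_watches_ID
        ) AS latest
        JOIN prices AS newp
          ON newp.prices_watches_ID = latest.prices_watches_ID
         AND newp.prices_locale = 'US'
         AND newp.prices_date = latest.max_date
        JOIN prices AS oldp
          ON oldp.prices_watches_ID = latest.prices_watches_ID
         AND oldp.prices_locale = 'US'
        JOIN watches_US AS w
          ON w.watches_ID = latest.prices_watches_ID
        WHERE newp.prices_am IS NOT NULL
          AND newp.prices_am != ''
          AND oldp.prices_date = (
              -- oldest price within 30 hours of the newest one
              SELECT MIN(p.prices_date)
              FROM prices p
              WHERE p.prices_watches_ID = latest.prices_watches_ID
                AND p.prices_locale = 'US'
                AND p.prices_date >= DATE_SUB(latest.max_date, INTERVAL 30 HOUR)
          )
          AND (newp.prices_am - oldp.prices_am) < 0
        ORDER BY absolute_change ASC
        LIMIT 10;

    Independent of the rewrite, a composite index on (prices_locale, prices_watches_ID, prices_date) would serve the derived table and the self-joins far better than the four single-column keys; note also that prices_am is a varchar, so the arithmetic relies on implicit numeric casts.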


  • Can't create a MySQL query that generates 4 rows for each row in the table it references.

    - by UkraineTrain
    I need to create a MySQL query that generates 4 rows for each row in the table it references. I need some of the information in those rows to repeat and some to be different. In the table, each row stands for one day. I need to break the day up into 6-hour increments, hence the four rows for each entry. I need to create one column which for each day will have the values '12AM', '6AM', '12PM', and '6PM', and another column holding the corresponding numeric values calculated for those entries. Thanks a lot in advance; I will really appreciate any help on this.
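
    A common pattern for this is to CROSS JOIN the table against a four-row derived table of time slots, so each source row fans out into four result rows. A sketch with hypothetical names (days, day_date, base_value, and the slot factors are all placeholders):

        SELECT d.day_date,
               s.slot_label,
               d.base_value * s.slot_factor AS slot_value  -- placeholder calculation
        FROM days AS d
        CROSS JOIN (
            SELECT '12AM' AS slot_label, 0.10 AS slot_factor
            UNION ALL SELECT '6AM',  0.20
            UNION ALL SELECT '12PM', 0.40
            UNION ALL SELECT '6PM',  0.30
        ) AS s
        ORDER BY d.day_date,
                 FIELD(s.slot_label, '12AM', '6AM', '12PM', '6PM');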


  • New replica set member's resident memory is larger than the existing members'

    - by eded
    Following the MongoDB tutorial on how to resync a replica set member, I wiped all the files in /data/db and restarted the mongod process to resync the data. Everything looks OK: I get the same number of documents as the existing two members (the primary and one secondary). However, when I check the memory on MMS, it shows my newly resynced member/mongod process with different memory status values than the other two. For the existing two, db.serverStatus().mem shows the following:

        "mem" : {
            "bits" : 64,
            "resident" : 239,
            "virtual" : 66348,
            "supported" : true,
            "mapped" : 32865,
            "mappedWithJournal" : 65730
        }

    However, the newly resynced member shows:

        "mem" : {
            "bits" : 64,
            "resident" : 1239,
            "virtual" : 52447,
            "supported" : true,
            "mapped" : 25700,
            "mappedWithJournal" : 51400
        }

    The resynced member's resident memory is roughly five times that of the existing ones. I wonder whether this is normal, given that all the data came in at once during the resync? Even the virtual and mapped values are different. Can anyone explain? Thanks.


  • Unique visitor counting in ASP.NET MVC

    - by Max
    I'd like to do visitor tracking similar to how Stack Overflow does it. By reading through numerous posts, I've figured out some details already:

    - Count only 1 hit per IP per 15 minutes (if anonymous)
    - Count only 1 unique user login (per day?)

    Now that leaves the question of the actual implementation. Should I log the two factors live into a table (and increase a count)?

        | IP | timestamp | pageurl |

    Or should I do the counting AFTERWARDS (e.g. using IIS log files, which don't include the user, right)? I know there are some similar posts out there, but in my opinion none of them has a great solution yet.
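
    One live-logging approach, sketched below in T-SQL with hypothetical table and column names: record every hit, then deduplicate at query time, collapsing anonymous IPs into 15-minute buckets and logged-in users into days.

        -- Hypothetical hit log; one row written per request.
        CREATE TABLE VisitorHits (
            Ip      VARCHAR(45)   NOT NULL,  -- wide enough for IPv6
            UserId  INT           NULL,      -- NULL for anonymous visitors
            HitTime DATETIME      NOT NULL,
            PageUrl NVARCHAR(400) NOT NULL
        );

        -- Unique anonymous visits: one per IP per 15-minute bucket.
        SELECT COUNT(DISTINCT Ip + '|'
                   + CONVERT(VARCHAR(20), DATEDIFF(MINUTE, 0, HitTime) / 15))
        FROM VisitorHits
        WHERE UserId IS NULL;

        -- Unique logged-in visits: one per user per day (style 112 = yyyymmdd).
        SELECT COUNT(DISTINCT CONVERT(VARCHAR(12), UserId) + '|'
                   + CONVERT(VARCHAR(10), HitTime, 112))
        FROM VisitorHits
        WHERE UserId IS NOT NULL;

    Counting afterwards from IIS logs also works for the anonymous case, but as noted, the logs won't know your application's users, so the live table is the simpler route if both numbers matter.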


  • Efficient update of SQLite table with many records

    - by blackrim
    I am trying to use SQLite (sqlite3) for a project to store hundreds of thousands of records (I'd like SQLite so users of the program don't have to run a [my]SQL server). I sometimes have to update hundreds of thousands of records to enter left/right values (they are hierarchical), but I have found the standard

        UPDATE table SET left_value = 4, right_value = 5 WHERE id = 12340;

    to be very slow. I have tried surrounding every thousand or so with

        BEGIN;
        ....
        UPDATE ...
        UPDATE table SET left_value = 4, right_value = 5 WHERE id = 12340;
        UPDATE ...
        ....
        COMMIT;

    but again, it is very slow. Odd, because when I populate it with a few hundred thousand records (with INSERTs), it finishes in seconds. I am currently testing the speed in Python (the slowness shows up both at the command line and in Python) before I move it to the C++ implementation, but right now this is way too slow and I need to find a new solution, unless I am doing something wrong. Thoughts? (I would also take an open-source alternative to SQLite that is portable.)
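
    Since the timing harness here is Python, a sketch in that language. The two usual culprits are a missing index on id (each UPDATE then scans the whole table; if id is already an INTEGER PRIMARY KEY this does not apply) and statement re-parsing, which executemany avoids. The table name tree is hypothetical:

        import sqlite3

        conn = sqlite3.connect("data.db")

        # Without an index on id, every UPDATE is a full table scan.
        conn.execute("CREATE INDEX IF NOT EXISTS idx_tree_id ON tree (id)")

        # (left_value, right_value, id) tuples; hypothetical sample data.
        rows = [(4, 5, 12340), (6, 7, 12341)]

        # One transaction for the whole batch; executemany reuses a single
        # prepared statement instead of re-parsing the SQL per row.
        with conn:
            conn.executemany(
                "UPDATE tree SET left_value = ?, right_value = ? WHERE id = ?",
                rows,
            )
        conn.close()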


  • What's the best way to test SQL Server connection programmatically?

    - by backslash17
    I need to develop a single routine that will be fired every 5 minutes to check whether a list of SQL Servers (10 to 12) are up and running. I could run a simple query against each one of the servers, but this means I would have to create a table, view, or stored procedure on every server, and even if I use an existing stored procedure, I would need a registered user on each server too. The servers are not in the same physical location, so meeting those requirements would be a complex task. Is there a way to simply "ping" one SQL Server from C#? Thanks in advance!
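
    One low-footprint check is simply to attempt opening a connection with a short timeout: nothing has to be created on the target servers, although you still need a login (Windows authentication in the sketch below is an assumption, as are the server names):

        using System;
        using System.Data.SqlClient;

        static class ServerCheck
        {
            // Returns true if the server accepts a connection within the timeout.
            public static bool IsAlive(string server, int timeoutSeconds = 5)
            {
                var cs = $"Server={server};Integrated Security=true;" +
                         $"Connect Timeout={timeoutSeconds}";
                try
                {
                    using (var conn = new SqlConnection(cs))
                    using (var cmd = new SqlCommand("SELECT 1", conn))
                    {
                        conn.Open();
                        cmd.ExecuteScalar();  // round trip proves the server answers
                        return true;
                    }
                }
                catch (SqlException)
                {
                    return false;
                }
            }
        }

    Calling IsAlive("server1\\SQLEXPRESS") every 5 minutes from a timer covers the 10-12 servers without installing anything on them.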


  • How does SqlDataAdapter work internally?

    - by tigrou
    I wonder how SqlDataAdapter works internally, especially when using UpdateCommand to update a huge DataTable (since it's usually a lot faster than just sending SQL statements from a loop). Here are some ideas I have in mind:

    1. It creates a prepared SQL statement (using SqlCommand.Prepare()) with CommandText filled in and SQL parameters initialized to the correct SQL types. Then it loops over the DataRows that need updating, and for each record it updates the parameter values and calls SqlCommand.ExecuteNonQuery().

    2. It creates a bunch of SqlCommand objects with everything filled in (CommandText and SQL parameters). Several SqlCommands at once are then batched to the server (depending on UpdateBatchSize).

    3. It uses some special, low-level or undocumented SQL driver instructions that allow an update to be performed on several rows in an efficient way (the rows to update would be provided in a special data format, and the same SQL query (the UpdateCommand here) would be executed against each of those rows).
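
    The batching behaviour from the second idea is at least documented and observable: setting UpdateBatchSize above 1 makes the adapter group the parameterized UPDATE statements into shared round trips, and each batched command must have UpdatedRowSource set to None. A minimal sketch (connection string, table, and column names are placeholders):

        using System.Data;
        using System.Data.SqlClient;

        var conn = new SqlConnection("Server=.;Database=Shop;Integrated Security=true");
        var adapter = new SqlDataAdapter("SELECT id, name FROM items", conn);

        adapter.UpdateCommand = new SqlCommand(
            "UPDATE items SET name = @name WHERE id = @id", conn);
        adapter.UpdateCommand.Parameters.Add("@name", SqlDbType.NVarChar, 100, "name");
        adapter.UpdateCommand.Parameters.Add("@id", SqlDbType.Int, 4, "id");

        // Required for batching: inside a batch the adapter cannot map
        // returned rows back to individual DataRows.
        adapter.UpdateCommand.UpdatedRowSource = UpdateRowSource.None;

        // 0 = one batch for all rows; N = up to N statements per round trip.
        adapter.UpdateBatchSize = 100;

        var table = new DataTable();
        adapter.Fill(table);
        // ... modify some rows here ...
        adapter.Update(table);  // sends the UPDATEs in batches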


  • How to transform a local image into a web-accessible image

    - by hguser
    Hi: Generally, if we want to display an image in a web page, we give the URI of the image resource, like http://host:port/image/xxx.jpg. Now, there are some images in my file system, and I save their absolute paths in the db, like this:

        id    name    address    image
        1     xxxx    xxxx       C:/images/xxx.jpg

    Now, when the entity is retrieved, its image should be displayed in the page. How do I make this happen? What I thought of is copying the image under the web server directory, then building its URL so the page can render it. But I wonder whether this is a good idea? Is there any other way?
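
    The question doesn't name a stack, so purely as an illustration, here is a Java servlet sketch of the usual alternative: instead of copying files under the web root, map a URL to a handler that looks up the stored path and streams the bytes. ImageServlet and lookupImagePath are hypothetical:

        import java.io.File;
        import java.io.IOException;
        import java.nio.file.Files;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Mapped to /image/*, so /image/1 serves the row with id 1.
        public class ImageServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                String id = req.getPathInfo().substring(1);
                File file = new File(lookupImagePath(id));
                if (!file.isFile()) {
                    resp.sendError(HttpServletResponse.SC_NOT_FOUND);
                    return;
                }
                resp.setContentType(getServletContext().getMimeType(file.getName()));
                resp.setContentLengthLong(file.length());
                Files.copy(file.toPath(), resp.getOutputStream());
            }

            // Placeholder for the real database lookup of the image column.
            private String lookupImagePath(String id) {
                return "C:/images/xxx.jpg";
            }
        }

    This keeps the files where they are and avoids the copy step, at the cost of one handler.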


  • Question about the BENCHMARK function in MySQL (incredible results)

    - by xRobot
    I have 2 tables: author, with 3 million rows, and book, with 20 thousand rows. So I have benchmarked this query with a join:

        SELECT BENCHMARK(100000000, 'SELECT book.title, author.name FROM `book`, `author` WHERE book.id = author.book_id ')

    And this is the result:

        Query took 0.7438 sec

    ONLY 0.7438 seconds for 100 million queries with a join??? Did I make some mistake, or is this the right result?
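
    One detail worth checking before trusting the number: BENCHMARK(count, expr) repeatedly evaluates a scalar expression, and a quoted query is just a string literal, so the join above is never actually executed. A sketch of the distinction:

        -- Evaluates the string literal 100 million times; the inner
        -- SELECT is never parsed or run as a query.
        SELECT BENCHMARK(100000000, 'SELECT ...');

        -- Typical intended use: timing a scalar expression.
        SELECT BENCHMARK(1000000, MD5('test'));

    To time the join itself, run the real query (with SQL_NO_CACHE on this era of MySQL) and read the elapsed time.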


  • Relating a rating table to multiple tables of different types?

    - by Tronic
    I have a table structure like this:

        Products
        Team
        Images

    and I want to implement a rating/commenting feature where users can rate each entry in all of these tables. What's the best way to make a single rating table? E.g. a user votes on a product and on a team entry, and it should be possible to get all these entries from a single table. What kind of table structure is best for this purpose? I hope my question is clear enough :/ Thanks in advance!
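
    A common answer is a polymorphic association: one ratings table keyed by the target's type plus its id. A minimal MySQL sketch (all names illustrative):

        CREATE TABLE ratings (
            rating_id  INT UNSIGNED NOT NULL AUTO_INCREMENT,
            item_type  ENUM('product', 'team', 'image') NOT NULL,
            item_id    INT UNSIGNED NOT NULL,  -- id within products/team/images
            user_id    INT UNSIGNED NOT NULL,
            rating     TINYINT      NOT NULL,  -- e.g. 1 to 5
            comment    TEXT         NULL,
            created_at DATETIME     NOT NULL,
            PRIMARY KEY (rating_id),
            UNIQUE KEY uniq_vote (item_type, item_id, user_id)  -- one vote per user per item
        );

        -- Everything one user has rated, across all three content types:
        SELECT item_type, item_id, rating FROM ratings WHERE user_id = 42;

    The trade-off: no real foreign key can point at three different tables, so referential integrity for item_id has to be enforced in application code.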


  • How do I avoid a race condition in my Rails app?

    - by Cathal
    Hi, I have a really simple Rails application that allows users to register their attendance on a set of courses. The ActiveRecord models are as follows:

        class Course < ActiveRecord::Base
          has_many :scheduled_runs
          ...
        end

        class ScheduledRun < ActiveRecord::Base
          belongs_to :course
          has_many :attendances
          has_many :attendees, :through => :attendances
          ...
        end

        class Attendance < ActiveRecord::Base
          belongs_to :user
          belongs_to :scheduled_run, :counter_cache => true
          ...
        end

        class User < ActiveRecord::Base
          has_many :attendances
          has_many :registered_courses, :through => :attendances, :source => :scheduled_run
        end

    A ScheduledRun instance has a finite number of places available, and once the limit is reached, no more attendances can be accepted:

        def full?
          attendances_count == capacity
        end

    attendances_count is a counter cache column holding the number of attendance associations created for a particular ScheduledRun record.

    My problem is that I don't fully know the correct way to ensure that a race condition doesn't occur when one or more people attempt to register for the last available place on a course at the same time. My Attendances controller looks like this:

        class AttendancesController < ApplicationController
          before_filter :load_scheduled_run
          before_filter :load_user, :only => :create

          def new
            @user = User.new
          end

          def create
            unless @user.valid?
              render :action => 'new'
            end
            @attendance = @user.attendances.build(:scheduled_run_id => params[:scheduled_run_id])
            if @attendance.save
              flash[:notice] = "Successfully created attendance."
              redirect_to root_url
            else
              render :action => 'new'
            end
          end

          protected

          def load_scheduled_run
            @run = ScheduledRun.find(params[:scheduled_run_id])
          end

          def load_user
            @user = User.create_new_or_load_existing(params[:user])
          end
        end

    As you can see, it doesn't take into account the case where the ScheduledRun instance has already reached capacity. Any help on this would be greatly appreciated.
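
    One standard fix, sketched below in the Rails 2.x idiom the question uses, is pessimistic locking: wrap the capacity check and the insert in a transaction and load the run with :lock => true (SELECT ... FOR UPDATE), so concurrent registrations for the same run serialize. This is a sketch, not a drop-in patch:

        def create
          unless @user.valid?
            render :action => 'new' and return
          end
          ScheduledRun.transaction do
            # Row lock: a second request for the same run blocks here
            # until this transaction commits or rolls back.
            run = ScheduledRun.find(params[:scheduled_run_id], :lock => true)
            if run.full?
              flash[:error] = "Sorry, this course is already full."
              redirect_to root_url
            else
              @attendance = @user.attendances.create!(:scheduled_run_id => run.id)
              flash[:notice] = "Successfully created attendance."
              redirect_to root_url
            end
          end
        rescue ActiveRecord::RecordInvalid
          render :action => 'new'
        end

    Because the counter cache is updated inside the same transaction, full? sees a consistent attendances_count while the lock is held.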


  • Any tools or techniques for validating constraints programmatically between databases?

    - by Brandon
    If you had two databases containing two tables that would normally implement a one-to-one (or many-to-many) constraint, but cannot because they are separate databases, how would you validate this relationship in an application or a test? Is there a simple way to do this? For example, a tool or technique that, given a constraint type, tables, and fields, does the validation. I imagine this isn't the first time this has come up, so I'm hoping people can share their solutions. Thanks.
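
    When both databases live on the same server (or are reachable through a linked server or federated table), a periodic orphan check can be plain SQL run by a test or a scheduled job. Database, table, and column names below are hypothetical:

        -- Orphans: rows in db_orders.orders whose customer is missing
        -- from db_crm.customers (violates the intended relationship).
        SELECT o.id, o.customer_id
        FROM db_orders.orders AS o
        LEFT JOIN db_crm.customers AS c ON c.id = o.customer_id
        WHERE c.id IS NULL;

        -- One-to-one violations: customers referenced by more than one order.
        SELECT o.customer_id, COUNT(*) AS n
        FROM db_orders.orders AS o
        GROUP BY o.customer_id
        HAVING COUNT(*) > 1;

    When the servers can't see each other, the same checks work by pulling both key sets into the test process and comparing them in memory.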


  • How do indices cope with MVCC?

    - by geeko
    Greetings Overflowers, To my understanding (and I hope I'm not right), changes to indices cannot be MVCCed. I'm wondering if this is also true for big records, since copies can be costly. Given that records are (usually) accessed via indices, how can MVCC be effective? Do indices, for example, keep track of the different versions of MVCCed records? Any recent good reading on this subject? Really appreciated! Regards


  • Fastest way to do a weighted tag search in SQL Server

    - by Hasan Khan
    My table is as follows:

        ObjectID  bigint
        Tag       nvarchar(50)
        Weight    float
        Type      tinyint

    I want to search for all objects that have the tag 'big' or 'large'. I want the ObjectIDs in order of the sum of weights (so objects having both tags will be on top):

        select objectid, row_number() over (order by sum(weight) desc) as rowid
        from tags
        where tag in ('big', 'large') and type = 0
        group by objectid

    The reason for row_number() is that I want paging over the results. The query in its current form is very slow: it takes a minute to execute over 16 million tags. What should I do to make it faster? I have a non-clustered index on (objectid, tag, type). Any suggestions?
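
    One thing worth trying, sketched below with illustrative names: the existing index leads with objectid, so the WHERE clause can't seek on tag at all. An index leading with (tag, type) that also carries the queried columns lets the whole aggregation run off the index:

        -- Covering index: seek on (tag, type); objectid and weight are
        -- included at the leaf level so no key lookups are needed.
        CREATE NONCLUSTERED INDEX IX_tags_tag_type
        ON tags (tag, type)
        INCLUDE (objectid, weight);

        -- Paging over the ranked result (rows 1-20 as an example):
        SELECT objectid, rowid
        FROM (
            SELECT objectid,
                   ROW_NUMBER() OVER (ORDER BY SUM(weight) DESC) AS rowid
            FROM tags
            WHERE tag IN (N'big', N'large') AND type = 0
            GROUP BY objectid
        ) AS ranked
        WHERE rowid BETWEEN 1 AND 20;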


  • Advice on setting up a central db with master tables for web apps

    - by Dragn1821
    I'm starting to write more and more web applications for work. Many of these web applications need to store the same types of data, such as locations. I've been thinking that it may be better to create a central db, store these "master" tables there, and have each application access them. I'm not sure how to go about this. Should I create tables in my application's db that copy the data from the master table (for linking with other app tables using foreign keys)? Should I use something like a web service to read the data from the master table, instead of firing up a new db connection in my app? Or should I forget this idea and just store the data within my app's db? I would like data such as locations to be central, so I can go to one table and add a new location, and the next time someone needs to select a location from one of the apps, the new one will be there. I'm using ASP.NET MVC 1.0 to build the web apps and SQL Server 2005 as the db. Need some advice... Thanks!
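
    If the databases share a SQL Server instance (or see each other through a linked server), one lightweight option is a synonym per application database: queries hit the central table directly and new locations appear everywhere at once, with no copying. A sketch with hypothetical names:

        -- Run inside each application database; MasterData is the
        -- hypothetical central database on the same instance.
        CREATE SYNONYM dbo.Locations FOR MasterData.dbo.Locations;

        -- Application queries then read the central table transparently:
        SELECT LocationId, Name FROM dbo.Locations ORDER BY Name;

    The caveat: SQL Server won't enforce foreign keys across databases, so apps that need hard FK integrity against Locations keep a locally synced copy (or validate via the application or a web service) instead.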


  • In MySQL, is it possible to SELECT from two tables and merge the columns?

    - by Travis
    If I have two tables in MySQL that have similar columns...

        TABLEA: id, name, somefield1
        TABLEB: id, name, somefield1, somefield2

    ...how do I structure a SELECT statement so that I can SELECT from both tables simultaneously and have the result sets merged for the columns that are the same? So, for example, I am hoping to do something like...

        SELECT name, somefield1 FROM TABLEA, TABLEB WHERE name="mooseburgers";

    ...and have the name and somefield1 columns from both tables merged together in the result set. Thank you for your help!
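
    What this describes is a UNION rather than a join; a sketch against the tables above:

        SELECT name, somefield1 FROM TABLEA WHERE name = 'mooseburgers'
        UNION ALL   -- keeps duplicate rows; plain UNION would collapse them
        SELECT name, somefield1 FROM TABLEB WHERE name = 'mooseburgers';

    Both branches must select the same number of columns; somefield2 can still be surfaced by selecting it in the TABLEB branch and NULL in the TABLEA branch.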

