Search Results

Search found 32538 results on 1302 pages for 'restore database'.


  • How to restore a hidden loadable kernel module to /sys/module, and how to deal with restoring holders_dir?

    - by user1833005
    I'm playing with kernel module hiding on Linux kernel 3.x. I try to hide the module from /sys/module and then restore it. Everything works fine on kernel versions 3.0 and 3.2.6: I can load and unload the module, and hide and unhide it. When I unload the module on kernel 3.6.6, however, I get the following errors:
      rmmod: ERROR: could not open '/sys/module/xxx/holders': No such file or directory
      rmmod: ERROR: Module xxx is in use
    Does anybody have an idea how I could restore the module so that I am able to unload it without errors? Here is my code:
      /* hide from /sys/module */
      kobject_del(&__this_module.mkobj.kobj);
      list_del(&__this_module.mkobj.kobj.entry);
      /* add to /sys/module */
      kobject_add(&__this_module.mkobj.kobj, __this_module.mkobj.kobj.parent, "xxx");
    Thank you for your help :)

    Read the article

  • How Do Indices Cope with MVCC?

    - by geeko
    Greetings Overflowers, To my understanding (and I hope I'm wrong) changes to indices cannot be versioned under MVCC. I'm wondering if this is also true for big records, since copies can be costly. Given that records are usually accessed via indices, how can MVCC be effective? Do indices, for example, keep track of the different versions of MVCC'd records? Any recent good reading on this subject? Really appreciated! Regards

    Read the article

  • Using a variable derived from a drop-down list as the column name in a select statement ... Access DB

    - by user1459698
    I'm working with the world's worst DB, which was already here, so don't blame me for that. Here's what I have so far:
      module = txtModule.value
      presstype = txtPressType.value
      SQL_query = "SELECT * FROM tbl_spareparts WHERE '"& module &"' <> '""' AND '"& module &"' = '"& presstype &"' AND Manufacturer = '"& txtsrch.value &"' ORDER BY SAP_Part_No"
      Set rsData = conn.Execute(SQL_query)
    This produces the following SQL statement:
      SELECT * FROM tbl_spareparts WHERE 'Banyan_Module' <> '"' AND 'Banyan_Module' = 'PB' AND Manufacturer = 'Tester' ORDER BY SAP_Part_No
    Is there any way I can use the module variable as a column name? Obviously the single quotes around the column name are causing an error, and this is really bothering me. BTW, I'm writing this in VBScript inside an .HTA application page, as it has to run locally on tech PCs. Thanks. R.
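
    As a sketch of where the built string needs to end up: the column name has to appear bare in the SQL text, or in square brackets (the Access/Jet way to quote an identifier), rather than inside single quotes, which turn it into a string literal. The table and column names below come from the question; the empty-string test is an assumption about what the original <> '""' was meant to exclude.
      SELECT *
      FROM tbl_spareparts
      WHERE [Banyan_Module] <> ''
        AND [Banyan_Module] = 'PB'
        AND Manufacturer = 'Tester'
      ORDER BY SAP_Part_No;
    In the VBScript that builds the string, that means dropping the quotes around the first two occurrences of the module variable (or wrapping it in brackets instead).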

    Read the article

  • Does this query fetch unnecessary information? Should I change the query?

    - by Camran
    I have this classifieds website, and I have about 7 tables in MySQL where all data is stored. I have one main table, called "classifieds". In the classifieds table there is a column called classified_id. This is not the PK, or a key at all; it is just a number I use to JOIN table records together. Example:
      classifieds table:                 fordon table:
        id            => 33               id            => 12
        classified_id => 10               classified_id => 10
        ad_id         => 'bmw_m3_92923'
    The two records above are linked by the classified_id column. Now to the question: I use this query to fetch all records WHERE the column ad_id matches any of the values inside an array, in this case $ad_arr:
      SELECT mt.*, fordon.*, boende.*, elektronik.*, business.*, hem_inredning.*, hobby.*
      FROM classified mt
      LEFT JOIN fordon ON fordon.classified_id = mt.classified_id
      LEFT JOIN boende ON boende.classified_id = mt.classified_id
      LEFT JOIN elektronik ON elektronik.classified_id = mt.classified_id
      LEFT JOIN business ON business.classified_id = mt.classified_id
      LEFT JOIN hem_inredning ON hem_inredning.classified_id = mt.classified_id
      LEFT JOIN hobby ON hobby.classified_id = mt.classified_id
      WHERE mt.ad_id IN ('$ad_arr');
    Is this good, or would it actually fetch unnecessary information? Check out this question I posted a couple of days ago, where HLGEM comments that it is wrong: http://stackoverflow.com/questions/2782275/another-rookie-question-how-to-implement-count-here What do you think? Thanks
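
    A minimal sketch of the usual advice here, assuming only a few columns are actually displayed: listing columns explicitly instead of table.* keeps the result set from dragging every column of every joined table over the wire. The non-key column names below are made up for illustration.
      SELECT mt.classified_id,
             mt.ad_id,
             fordon.some_column,     -- hypothetical column; replace with what the page really shows
             boende.another_column   -- hypothetical column
      FROM classified mt
      LEFT JOIN fordon ON fordon.classified_id = mt.classified_id
      LEFT JOIN boende ON boende.classified_id = mt.classified_id
      WHERE mt.ad_id IN ('bmw_m3_92923');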

    Read the article

  • Merging two tables while applying aggregates to the duplicates (MAX, MIN and SUM)

    - by cloudraven
    I have a table (let's call it log) with a few million records. Among its fields are Id, Count, FirstHit and LastHit:
      Id       - the record id
      Count    - number of times this Id has been reported
      FirstHit - earliest timestamp with which this Id was reported
      LastHit  - latest timestamp with which this Id was reported
    This table only has one record for any given Id. Every day I receive another table (let's call it feed) with around half a million records, with these fields among many others:
      Id
      Timestamp - entry date and time
    This table can have many records for the same Id. What I want to do is update log in the following way:
      Count    - the current log count, plus the COUNT() of records for that Id found in feed
      FirstHit - the earlier of the current value in log and the minimum value in feed for that Id
      LastHit  - the later of the current value in log and the maximum value in feed for that Id
    Note that many of the Ids in feed are already in log. The simple thing that worked is to create a temporary table and insert into it the union of both, as in:
      SELECT Id, MIN(Timestamp) AS FirstHit, MAX(Timestamp) AS LastHit, COUNT(*) AS Count
      FROM feed
      GROUP BY Id
      UNION ALL
      SELECT Id, FirstHit, LastHit, Count FROM log;
    From that temporary table I do a select that aggregates MIN(FirstHit), MAX(LastHit) and SUM(Count):
      SELECT Id, MIN(FirstHit), MAX(LastHit), SUM(Count)
      FROM @temp
      GROUP BY Id;
    and that gives me the end result. I could then delete everything from log and replace it with everything from temp, or craft an update for the common records and an insert for the new ones. However, I think both are highly inefficient. Is there a more efficient way of doing this, perhaps doing the update in place in the log table?
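
    The @temp syntax suggests SQL Server; assuming that, and SQL Server 2008 or later, a single MERGE can do the whole thing in place without the intermediate table. A sketch, assuming log.Id is the primary key:
      MERGE INTO log AS l
      USING ( SELECT Id,
                     COUNT(*)         AS Cnt,
                     MIN([Timestamp]) AS MinTs,
                     MAX([Timestamp]) AS MaxTs
              FROM feed
              GROUP BY Id ) AS f
         ON l.Id = f.Id
      WHEN MATCHED THEN
          UPDATE SET [Count]  = l.[Count] + f.Cnt,
                     FirstHit = CASE WHEN f.MinTs < l.FirstHit THEN f.MinTs ELSE l.FirstHit END,
                     LastHit  = CASE WHEN f.MaxTs > l.LastHit  THEN f.MaxTs ELSE l.LastHit  END
      WHEN NOT MATCHED THEN
          INSERT (Id, [Count], FirstHit, LastHit)
          VALUES (f.Id, f.Cnt, f.MinTs, f.MaxTs);
    The feed table is only aggregated once, and log is touched row by row instead of being rebuilt.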

    Read the article

  • MySQL multiple dependent subqueries, painfully slow

    - by matt80
    I have a working query that retrieves the data that I need, but unfortunately it is painfully slow (it runs for over 3 minutes). I have indexes in place, but I think the problem is the multiple dependent subqueries. I've been trying to rewrite the query using joins but I can't seem to get it to work. Any help would be greatly appreciated.
    The tables: basically, I have 2 tables. The first (prices) holds the prices of items in a store. Each row is the price of an item that day, and new rows are added every day with an updated price. The second table (watches_US) holds the item information (name, description, etc).
      CREATE TABLE `prices` (
        `prices_id` int(11) NOT NULL auto_increment,
        `prices_locale` enum('CA','DE','FR','JP','UK','US') NOT NULL default 'US',
        `prices_watches_ID` char(10) NOT NULL,
        `prices_date` datetime NOT NULL,
        `prices_am` varchar(10) default NULL,
        `prices_new` varchar(10) default NULL,
        `prices_used` varchar(10) default NULL,
        PRIMARY KEY (`prices_id`),
        KEY `prices_am` (`prices_am`),
        KEY `prices_locale` (`prices_locale`),
        KEY `prices_watches_ID` (`prices_watches_ID`),
        KEY `prices_date` (`prices_date`)
      ) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=61764 ;

      CREATE TABLE `watches_US` (
        `watches_ID` char(10) NOT NULL,
        `watches_date_added` datetime NOT NULL,
        `watches_last_update` datetime default NULL,
        `watches_title` varchar(255) default NULL,
        `watches_small_image_height` int(11) default NULL,
        `watches_small_image_width` int(11) default NULL,
        `watches_description` text,
        PRIMARY KEY (`watches_ID`)
      ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
    The query retrieves the last 10 price changes over a period of 30 hours, ordered by the size of the price change. So I have subqueries to get the newest price, the oldest price within 30 hours, and then to calculate the price change. Here's the query:
      SELECT watches_US.*, prices.*, watches_US.watches_ID as current_ID,
        ( SELECT prices_am FROM prices
          WHERE prices_watches_ID = current_ID AND prices_locale = 'US'
          ORDER BY prices_date DESC LIMIT 1 ) as new_price,
        ( SELECT prices_date FROM prices
          WHERE prices_watches_ID = current_ID AND prices_locale = 'US'
          ORDER BY prices_date DESC LIMIT 1 ) as new_price_date,
        ( SELECT prices_am FROM prices
          WHERE ( prices_watches_ID = current_ID AND prices_locale = 'US')
            AND ( prices_date >= DATE_SUB(new_price_date,INTERVAL 30 HOUR) )
          ORDER BY prices_date ASC LIMIT 1 ) as old_price,
        ( SELECT ROUND(((new_price - old_price)/old_price)*100,2) ) as percent_change,
        ( SELECT (new_price - old_price) ) as absolute_change
      FROM watches_US
      LEFT OUTER JOIN prices ON prices.prices_watches_ID = watches_US.watches_ID
      WHERE ( prices_locale = 'US' ) AND ( prices_am IS NOT NULL ) AND ( prices_am != '' )
      HAVING ( old_price IS NOT NULL ) AND ( old_price != 0 ) AND ( old_price != '' )
        AND ( absolute_change < 0 ) AND ( prices.prices_date = new_price_date )
      ORDER BY absolute_change ASC
      LIMIT 10
    How would I rewrite this to use joins instead, or otherwise optimize this so it doesn't take over 3 minutes to get a result? Any help would be greatly appreciated! Thank you kindly.
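
    A sketch of one possible join rewrite, under the assumption that "the newest price" is the single row with the greatest prices_date per watch and that ties don't matter; it reproduces most, but not all, of the original HAVING filters. The idea is to derive the newest date per watch once, join back to prices for that row, and do the same for the oldest row within 30 hours of it.
      SELECT w.*,
             newp.prices_am                    AS new_price,
             oldp.prices_am                    AS old_price,
             (newp.prices_am - oldp.prices_am) AS absolute_change
      FROM watches_US w
      JOIN ( SELECT prices_watches_ID, MAX(prices_date) AS new_date
             FROM prices
             WHERE prices_locale = 'US' AND prices_am IS NOT NULL AND prices_am != ''
             GROUP BY prices_watches_ID ) latest
        ON latest.prices_watches_ID = w.watches_ID
      JOIN prices newp
        ON newp.prices_watches_ID = w.watches_ID
       AND newp.prices_locale = 'US'
       AND newp.prices_date = latest.new_date
      JOIN ( SELECT p.prices_watches_ID, MIN(p.prices_date) AS old_date
             FROM prices p
             JOIN ( SELECT prices_watches_ID, MAX(prices_date) AS new_date
                    FROM prices
                    WHERE prices_locale = 'US' AND prices_am IS NOT NULL AND prices_am != ''
                    GROUP BY prices_watches_ID ) m
               ON m.prices_watches_ID = p.prices_watches_ID
              AND p.prices_date >= m.new_date - INTERVAL 30 HOUR
             WHERE p.prices_locale = 'US'
             GROUP BY p.prices_watches_ID ) earliest
        ON earliest.prices_watches_ID = w.watches_ID
      JOIN prices oldp
        ON oldp.prices_watches_ID = w.watches_ID
       AND oldp.prices_locale = 'US'
       AND oldp.prices_date = earliest.old_date
      WHERE (newp.prices_am - oldp.prices_am) < 0
      ORDER BY absolute_change ASC
      LIMIT 10;
    Independently of the rewrite, a composite index on (prices_watches_ID, prices_locale, prices_date) would let each per-watch lookup be resolved from one index instead of the four single-column indexes.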

    Read the article

  • PHP reports an undefined-variable notice; how to verify uniqueness against the DB?

    - by iamfab
    I get this error report:
      Notice: Undefined variable: random_chars in wamp\www\php_sandbox\idgen.php on line 21
      Call Stack:
      #  Time    Memory  Function   Location
      1  0.0045  678928  {main}( )  ..\idgen.php:0
      GPB7446
    How do I fix this error? I'm using this code as an automatic unique id generator. Also, how do I connect to the DB to verify that the code is truly unique before allowing it to be assigned to a user creating a new account? Thanks
      <?php
      $characters = array("A","B","E","F","G","H","J","K","M","N","P","R","S","T","W","X","Y","Z");
      $keys = array();
      while(count($keys) < 3) {
          $x = mt_rand(0, count($characters)-1);
          if(!in_array($x, $keys)) {
              $keys[] = $x;
          }
      }
      $random_chars = '';  // initialising this removes the "Undefined variable" notice
      foreach($keys as $key) {
          $random_chars .= $characters[$key];
      }
      $randNum = rand(2327,9987);
      $randLet = rand(2327,9987);
      echo $random_chars . $randNum;
      ?>
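
    For the uniqueness question, a common pattern is to let the database enforce it. A sketch, assuming a users table with a code column (both names are invented here, since the schema isn't shown):
      -- enforce uniqueness at the database level
      ALTER TABLE users ADD UNIQUE KEY uq_users_code (code);

      -- check before assigning; if this returns a row, generate a new code and retry
      SELECT 1 FROM users WHERE code = 'GPB7446' LIMIT 1;
    Even with the SELECT check in place, keep the unique key: two requests can pass the check at the same moment, and only the constraint makes the second insert fail instead of creating a duplicate.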

    Read the article

  • What's the best way to test SQL Server connection programmatically?

    - by backslash17
    I need to develop a single routine that will be fired every 5 minutes to check whether a list of SQL Servers (10 to 12 of them) are up and running. I could run a simple query against each server, but that means I have to create a table, view or stored procedure on every server, and even if I use an already existing SP I still need a registered user on each server. The servers are not in the same physical location, so those requirements would make this a complex task. Is there a way to simply "ping" a SQL Server from C#? Thanks in advance!

    Read the article

  • How do I link ASP.NET membership/role users to tables in db?

    - by SnapConfig.com
    I am going to use forms authentication, but I want to be able to link the ASP.NET users to some tables in the db. For example, if I have classes and students (as roles), I'll have a class-students table. I'm planning to add a Users table containing a simple int userid plus the ASP.NET username, and then use that userid wherever I want to link the users. Does that sound good? Are there other options for modeling this? It does sound a bit convoluted.
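
    A sketch of that bridge-table idea, assuming SQL Server and the standard membership provider (whose aspnet_Users.UserName is nvarchar(256)); the ClassStudents table and its columns are made up for illustration:
      -- local table that mirrors the membership user with a compact int key
      CREATE TABLE Users (
          UserId   INT IDENTITY(1,1) PRIMARY KEY,
          UserName NVARCHAR(256) NOT NULL UNIQUE  -- same value as aspnet_Users.UserName
      );

      -- hypothetical link table: which students (users) belong to which class
      CREATE TABLE ClassStudents (
          ClassId INT NOT NULL,
          UserId  INT NOT NULL REFERENCES Users(UserId),
          PRIMARY KEY (ClassId, UserId)
      );
    The int key keeps the app tables small and indexable, while the username remains the handle that ties a row back to the membership system.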

    Read the article

  • MySQL circular dependency in foreign key constraints

    - by Flavius
    Given the schema (see the picture): what I need is for every user_identities.belongs_to to reference a users.id. At the same time, every user has a primary_identity, as shown in the picture. However, when I try to add this reference with ON DELETE NO ACTION ON UPDATE NO ACTION, MySQL says:
      #1452 - Cannot add or update a child row: a foreign key constraint fails
      (yap.#sql-a3b_1bf, CONSTRAINT #sql-a3b_1bf_ibfk_1 FOREIGN KEY (belongs_to)
      REFERENCES users (id) ON DELETE NO ACTION ON UPDATE NO ACTION)
    I suspect this is due to the circular dependency, but how could I solve it (and maintain referential integrity)?
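
    Two common ways out, sketched below under the assumption that users.primary_identity is the other half of the cycle (the column type and the ids are invented): make one side of the cycle nullable so rows can be created in two steps, or temporarily disable foreign key checks while adding the constraint over existing data.
      -- option 1: allow the cycle to be built up in two steps
      ALTER TABLE users MODIFY primary_identity INT NULL;
      INSERT INTO users (id, primary_identity) VALUES (1, NULL);
      INSERT INTO user_identities (id, belongs_to) VALUES (10, 1);
      UPDATE users SET primary_identity = 10 WHERE id = 1;

      -- option 2: add the constraint without validating the rows that already exist
      SET FOREIGN_KEY_CHECKS = 0;
      ALTER TABLE user_identities
        ADD CONSTRAINT fk_identity_user FOREIGN KEY (belongs_to)
        REFERENCES users (id) ON DELETE NO ACTION ON UPDATE NO ACTION;
      SET FOREIGN_KEY_CHECKS = 1;
    Option 2 only postpones the problem to insert time, so option 1 (one nullable side) is the usual way to keep referential integrity without fighting the cycle on every write.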

    Read the article

  • Sybase PowerDesigner: change many data items' data types at once (find/replace/convert)

    - by Andy
    Hello, I have a relatively large Conceptual Data Model in PowerDesigner. After generating a Physical Data Model and seeing the DBMS data types, I need to update all of the data types (NUMBER/TEXT) for each data item. I'd like to either do a find/replace within the Conceptual Data Model, or somehow map to different data types when creating the Physical Data Model. For example, change the automatic conversion of Text - CLOB to Text - NVARCHAR(20). Thanks!

    Read the article

  • In MySQL, is it possible to SELECT from two tables and merge the columns?

    - by Travis
    If I have two tables in MySQL that have similar columns...
      TABLEA: id, name, somefield1
      TABLEB: id, name, somefield1, somefield2
    How do I structure a SELECT statement so that I can SELECT from both tables simultaneously, and have the result sets merged for the columns that are the same? For example, I am hoping to do something like...
      SELECT name, somefield1 FROM TABLEA, TABLEB WHERE name="mooseburgers";
    ...and have the name and somefield1 columns from both tables merged together in the result set. Thank you for your help!
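
    What the question describes is normally done with UNION rather than a join. A minimal sketch using the tables from the question (UNION ALL keeps duplicates, plain UNION removes them):
      SELECT id, name, somefield1 FROM TABLEA WHERE name = 'mooseburgers'
      UNION ALL
      SELECT id, name, somefield1 FROM TABLEB WHERE name = 'mooseburgers';
    Both branches must select the same number of columns, so somefield2 would either be left out or matched with a NULL placeholder in the TABLEA branch.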

    Read the article

  • Alter the length of a column across multiple tables

    - by gdoron
    So, we just found out that 254 tables in our Oracle DBMS have one column named "Foo" with the wrong length: NUMBER(10) instead of NUMBER(3). That Foo column is part of the PK of those tables, and other tables have foreign keys pointing to it. What I did for one table was:
      1. Backed up the table into a temp table.
      2. Disabled the foreign keys referencing the table.
      3. Disabled the PK containing the Foo column.
      4. Nulled the Foo column for all the rows.
      5. Restored all of the above.
    But now we've found out it's not just a couple of tables but 254 of them. Is there an easier way to alter the column's length? P.S. I have DBA permissions.
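
    One way to avoid hand-writing 254 statements, as a sketch: generate them from the data dictionary and run the resulting script. This assumes the column really is NULL everywhere at that point, since Oracle only allows decreasing a NUMBER column's precision when the column holds no data.
      SELECT 'ALTER TABLE ' || table_name || ' MODIFY (Foo NUMBER(3));' AS ddl
      FROM   user_tab_columns
      WHERE  column_name = 'FOO'
      AND    data_precision = 10;
    The same dictionary-driven approach can generate the disable/enable statements for the primary and foreign key constraints from user_constraints.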

    Read the article

  • When is porting data from MySQL to CouchDB NOT advisable? Seeking cautionary tales

    - by dan
    I've dabbled in CouchDB and I have pretty good MySQL experience. I've also created one production application that uses both. I like MySQL but I've run into scaling/concurrency issues with MySQL that CouchDB advertises itself as a general solution for. The problem is that I have MySQL based applications that are pretty huge, and I don't really know whether it would be a good idea or not to try to port them over to a CouchDB datastore. I don't want to put in a lot of time and effort only to find out that my application is really not a good fit for CouchDB. Is there any sort of informed consensus on when porting a MySQL based app to CouchDB is NOT advisable? Any cautionary tales? I think CouchDB is really cool and want to use it more. I'd also like to know ahead of time what specific types of data querying scenarios CouchDB is really not good for, or if CouchDB can really replace MySQL for all the applications I create going forward.

    Read the article

  • How should I migrate DDL changes from one environment to the next?

    - by Rl
    I make DDL changes using SQL Developer's GUI. Problem is, I need to apply those same changes to the test environment. I'm wondering how others handle this issue. Currently I'm having to manually write ALTER statements to bring the test environment into alignment with the development environment, but this is prone to error (doing the same thing twice). In cases where there's no important data in the test environment I usually just blow everything away, export the DDL scripts from dev and run them from scratch in test. I know there are triggers that can store each DDL change, but this is a heavily shared environment and I would like to avoid that if possible. Maybe I should just write the DDL stuff manually rather than using the GUI?
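
    If it does come down to scripting by hand, the data dictionary can at least produce the current definitions as a baseline to diff against. A sketch, assuming Oracle (which SQL Developer suggests); SET LONG is the SQL*Plus/SQL Developer script setting that keeps the CLOB output from being truncated:
      SET LONG 100000
      SELECT dbms_metadata.get_ddl('TABLE', table_name)
      FROM   user_tables;
    Keeping that generated DDL in version control per environment makes it easier to see which ALTER statements still need to be written for test.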

    Read the article

  • How do I avoid a race condition in my Rails app?

    - by Cathal
    Hi, I have a really simple Rails application that allows users to register their attendance on a set of courses. The ActiveRecord models are as follows:
      class Course < ActiveRecord::Base
        has_many :scheduled_runs
        ...
      end

      class ScheduledRun < ActiveRecord::Base
        belongs_to :course
        has_many :attendances
        has_many :attendees, :through => :attendances
        ...
      end

      class Attendance < ActiveRecord::Base
        belongs_to :user
        belongs_to :scheduled_run, :counter_cache => true
        ...
      end

      class User < ActiveRecord::Base
        has_many :attendances
        has_many :registered_courses, :through => :attendances, :source => :scheduled_run
      end
    A ScheduledRun instance has a finite number of places available, and once the limit is reached, no more attendances can be accepted.
      def full?
        attendances_count == capacity
      end
    attendances_count is a counter cache column holding the number of attendance associations created for a particular ScheduledRun record. My problem is that I don't fully know the correct way to ensure that a race condition doesn't occur when one or more people attempt to register for the last available place on a course at the same time. My Attendance controller looks like this:
      class AttendancesController < ApplicationController
        before_filter :load_scheduled_run
        before_filter :load_user, :only => :create

        def new
          @user = User.new
        end

        def create
          unless @user.valid?
            render :action => 'new'
          end
          @attendance = @user.attendances.build(:scheduled_run_id => params[:scheduled_run_id])
          if @attendance.save
            flash[:notice] = "Successfully created attendance."
            redirect_to root_url
          else
            render :action => 'new'
          end
        end

        protected

        def load_scheduled_run
          @run = ScheduledRun.find(params[:scheduled_run_id])
        end

        def load_user
          @user = User.create_new_or_load_existing(params[:user])
        end
      end
    As you can see, it doesn't take into account whether the ScheduledRun instance has already reached capacity. Any help on this would be greatly appreciated.
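
    The usual fix is to make the capacity check and the insert atomic. In SQL terms (a sketch; the table names, ids and the capacity column are guessed from the Rails conventions above), that means locking the run's row before checking it, which is what ActiveRecord's pessimistic locking (lock! or :lock => true on the find) issues under the hood:
      START TRANSACTION;

      -- lock the run row so concurrent registrations queue up behind this one
      SELECT attendances_count, capacity
      FROM scheduled_runs
      WHERE id = 42
      FOR UPDATE;

      -- only if attendances_count < capacity:
      INSERT INTO attendances (user_id, scheduled_run_id) VALUES (7, 42);
      UPDATE scheduled_runs
      SET attendances_count = attendances_count + 1   -- in Rails the :counter_cache does this part
      WHERE id = 42;

      COMMIT;
    In the controller, that translates to loading the run with a row lock inside a transaction and calling full? before saving the attendance.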

    Read the article

  • Advice on setting up a central db with master tables for web apps

    - by Dragn1821
    I'm starting to write more and more web applications for work. Many of these web applications need to store the same types of data, such as location. I've been thinking that it may be better to create a central db, store these "master" tables there, and have each application access them. I'm not sure how to go about this. Should I create tables in my application's db that copy the data from the master tables (for linking with other app tables using foreign keys)? Should I use something like a web service to read the data from the master tables instead of firing up a new db connection in my app? Or should I forget this idea and just store the data within my app's db? I would like to keep data such as locations central, so I can add a new location in one table and the next time someone needs to select a location from one of the apps, the new one is there. I'm using ASP.NET MVC 1.0 to build the web apps and SQL 2005 as the db. I need some advice... Thanks!
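
    Since everything lives on SQL Server 2005, one lightweight option is to leave the master tables in the central database and reach them with three-part names or synonyms, so each app still queries what looks like its own object. A sketch; the database, schema and table names are invented:
      -- read the shared table directly with a three-part name
      SELECT LocationId, Name
      FROM CentralDb.dbo.Locations;

      -- or hide the location behind a synonym inside the app database
      CREATE SYNONYM dbo.Locations FOR CentralDb.dbo.Locations;
      SELECT LocationId, Name FROM dbo.Locations;
    One caveat: SQL Server does not enforce foreign keys across databases, so apps that need FK integrity on location ids would still keep a local copy (or enforce the relationship in the application).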

    Read the article

  • PHP - How to retrieve a session value per row

    - by Klaus Jasper
    I created a table that contains id, names and jobs, plus a page that shows only the names; beside each name there is a Job button, and a session value that contains the id. This is my code:
      $query = mysql_query("SELECT * FROM table");
      while($fetch = mysql_fetch_array($query)){   // note: $query must not be wrapped in quotes here
          $name = $fetch['names'];
          $id = $fetch['id'];
          echo '<br/>';
          echo $name;
          $_SESSION['name'] = $id;   // this gets overwritten on every loop iteration
          echo "<button>Job</button>";
      }
    I want the user, when they click the Job button, to be redirected to a page that shows the job for that session. How can I do that?

    Read the article

  • Prevent duplicate rows with all non-unique columns (only with MySQL)?

    - by letseatfood
    How do I prevent a duplicate row from being created in a table with two columns, neither of which is unique? And can this be done using MySQL only, or does it require checks in my PHP script? Here is the CREATE query for the table in question (two other tables exist, users and roles):
      CREATE TABLE users_roles (
          user_id INT(100) NOT NULL,
          role_id INT(100) NOT NULL,
          FOREIGN KEY (user_id) REFERENCES users(user_id),
          FOREIGN KEY (role_id) REFERENCES roles(role_id)
      ) ENGINE = INNODB;
    I would like the following query, if executed more than once, to throw an error:
      INSERT INTO users_roles (user_id, role_id) VALUES (1, 2);
    Please do not recommend bitmasks as an answer. Thanks!
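
    The combination of the two columns can be made unique even though neither column is unique on its own. A minimal sketch using the table from the question:
      -- a composite primary key (or a UNIQUE index) makes the pair unique
      ALTER TABLE users_roles ADD PRIMARY KEY (user_id, role_id);

      -- the second identical insert now fails with a duplicate-key error (1062)
      INSERT INTO users_roles (user_id, role_id) VALUES (1, 2);
      INSERT INTO users_roles (user_id, role_id) VALUES (1, 2);
    If a silent skip is ever preferred over an error, INSERT IGNORE achieves that with the same key in place; no PHP-side check is needed in either case.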

    Read the article

  • Databases: Migrate data between an MS Access DB and MySQL

    - by Dean
    Hello, I have 2 databases: one is an MS Access DB from an old website, and the other is MySQL from the new Joomla+VirtueMart based website. I need to migrate the existing products from MS Access to MySQL. I thought of putting both on the server and writing SQL queries in MySQL Workbench until I have a good script for that, but I'm very new to SQL, so I'd rather avoid it. Is there a better, more efficient way to do this?
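
    One low-SQL route, as a sketch: export the products table from Access to CSV, load it into a MySQL staging table, and only then map columns into the VirtueMart product table. The file path, the staging columns and the jos_vm_product target (VirtueMart's product table name depends on your table prefix and version) are all assumptions:
      CREATE TABLE staging_products (
          old_id      INT,
          name        VARCHAR(255),
          price       DECIMAL(10,2),
          description TEXT
      );

      LOAD DATA LOCAL INFILE '/tmp/products.csv'
      INTO TABLE staging_products
      FIELDS TERMINATED BY ',' ENCLOSED BY '"'
      LINES TERMINATED BY '\r\n'
      IGNORE 1 LINES;

      -- then map the staged columns onto the VirtueMart product table
      INSERT INTO jos_vm_product (product_name, product_desc)
      SELECT name, description FROM staging_products;
    The staging step keeps the messy part (cleaning Access data) separate from the part that has to match VirtueMart's schema exactly.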

    Read the article

  • Relating a rating table to multiple tables of different types?

    - by Tronic
    I have a table structure like this:
      Products
      Team
      Images
    and want to implement a rating/commenting feature, where users can rate each entry of all tables. What's the best way to build a single rating table? E.g. a user votes on a product and on a team entry, and it should be possible to get all of these entries from a single table. What kind of table structure is best for this purpose? I hope my question is clear enough :/ Thanks in advance!
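
    A common shape for this is one polymorphic ratings table that stores the target table's name plus the target row's id. A sketch with invented column names; note that the application has to keep item_type/item_id pointing at real rows, since a single foreign key cannot span three tables:
      CREATE TABLE ratings (
          rating_id  INT AUTO_INCREMENT PRIMARY KEY,
          item_type  ENUM('product','team','image') NOT NULL,  -- which table the vote belongs to
          item_id    INT NOT NULL,                             -- the id inside that table
          user_id    INT NOT NULL,
          rating     TINYINT NOT NULL,
          comment    TEXT,
          UNIQUE KEY one_vote_per_user (item_type, item_id, user_id)
      ) ENGINE=InnoDB;

      -- all of one user's votes across products, teams and images in a single query
      SELECT item_type, item_id, rating FROM ratings WHERE user_id = 1;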

    Read the article
