Search Results

Search found 30448 results on 1218 pages for 'database mirroring'.

Page 324/1218

  • How to change data structure in mysql using mysqldump without deleting files

    - by Don Quixote
    Essentially what I'm trying to do is sync a production server with a sandbox server, but only the table structures and stored procedures. The procedures aren't a problem since they can simply be overridden; the problem is the tables. I want to sync and alter their structures on the production server using mysqldump (or any other way you can propose) without altering any existing data. If it helps, I only want to add more columns, not remove any existing ones. Also, I am using SQLyog. Is there any way to do this?
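
    A minimal sketch of the additive approach, assuming the diff between the two schemas is produced by hand or by a comparison tool (table and column names here are hypothetical):

        -- Dump structure only, so no data is ever touched:
        --   mysqldump --no-data --routines sandbox_db > schema.sql
        -- Then apply only additive statements on production, e.g.:
        ALTER TABLE customers
            ADD COLUMN loyalty_points INT NOT NULL DEFAULT 0;  -- existing rows keep their data
        ALTER TABLE orders
            ADD COLUMN tracking_code VARCHAR(64) NULL;         -- nullable, so no backfill needed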

    Read the article

  • Storing user settings in a table - how?

    - by Mdillion
    I have about 200 settings per user; these include notification settings and tracing settings for user activities on objects. The problem is how to store them in the DB. Should each setting be a row or a column? If a column, the table will have 200 columns. If a row, then about 3 columns, but 200 rows per user times even 10 million users = not good. So how else can I store all these settings? NOTE: these settings are a mix of text entries and FK lookups to other tables. Thanks.
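
    One common shape for this is an entity-attribute-value layout that stores only the settings a user has changed from the default; a minimal sketch, with all names hypothetical:

        CREATE TABLE setting_def (
            setting_id    INT PRIMARY KEY,
            name          VARCHAR(100) NOT NULL,
            value_type    VARCHAR(20)  NOT NULL,  -- 'text', 'int', 'fk', ...
            default_value VARCHAR(255) NULL
        );

        CREATE TABLE user_setting (
            user_id    INT NOT NULL,              -- FK to the users table
            setting_id INT NOT NULL,              -- FK to setting_def
            value      VARCHAR(255) NULL,         -- text entries and FK ids serialized as strings
            PRIMARY KEY (user_id, setting_id)     -- also the natural access path
        );

    Because only overridden settings get a row, the row count stays far below 200 x 10 million; defaults are read from setting_def.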

    Read the article

  • cakephp - form for belongsTo Model

    - by user1511579
    I created the following model to link 2 relational tables:

        class Ficha extends AppModel {
            //public $useTable = 'ficha_seg';
            var $primaryKey = 'id_ficha';
            var $name = 'Ficha';
            var $belongsTo = array(
                'Perigo' => array(
                    'className' => 'Perigo',
                    'foreignKey' => false,
                    'conditions' => 'Perigo.id_fichas = Ficha.id_ficha'
                )
            );
        }

    Now I have a form that requires data from the class Ficha and then redirects to another ctp page where I will input the data for the table "Perigos". However, since I'm still a newbie in CakePHP, I'm having difficulty building that second form to insert the data into the table "Perigos". Here is the code I have built so far for the second form. FichasController.php (the method that is supposed to save the data into the table "Perigos"):

        public function preencher_ficha() {
            if ($this->request->is('ficha')) {
                $this->Ficha->create();
                if ($this->Ficha->Perigo->save($this->request->data)) {
                    $last_id = $this->Ficha->getLastInsertID();
                    $this->Session->setFlash('Your post has been updated '.$last_id.'.');
                    //$this->redirect(array('action' => 'preencher_ficha'));
                } else {
                    $this->Session->setFlash('Unable to qualquer coisa your post.');
                }
            }
        }

    The preencher_ficha.ctp file with the form:

        echo $this->Form->create('Ficha->Perigo', array('action' => 'index'));
        echo $this->Form->input('class_subst', array('label' => 'Classificação:'));
        echo $this->Form->input('simbolos_perigo', array('label' => 'Símbolos:'));
        echo $this->Form->input('frases_r', array('label' => 'Frases:'));
        echo $this->Form->end('Finalizar Ficha');

    Here I guess the create part is wrong, but I think I have errors in the controller part too.

    Read the article

  • Solr autocommit and autooptimize?

    - by Camran
    I will be uploading my website to a VPS soon. It is a classifieds website which uses Solr integrated with MySQL. Solr is updated whenever a new classified is posted or deleted. I need a way to automate commit() and optimize(), for example once every 3 hours or so. How can I do this? (Details please.) When is it ideal to optimize? Thanks

    Read the article

  • Logic: Best way to sample & count bytes of a 100MB+ file

    - by Jami
    Let's say I have a 170 MB file (roughly 180 million bytes). What I need to do is create a table that lists: all 4096-byte combinations found [column 'bytes'], and the number of times each combination appeared [column 'occurrences']. Assume two things: I can save data very fast, but I can update my saved data only very slowly. How should I sample the file and save the needed information? Here are some approaches that turned out to be (extremely) slow:

    - Go through each 4096-byte combination in the file and save it, but search the table first for existing combinations and update their counts. This is unbelievably slow.
    - Go through each 4096-byte combination in the file and save until there are 1 million rows in a temporary table. Go through that table and fix the entries (combining repeated byte combinations), then copy to the big table. Repeat with the next 1 million rows. This is a bit faster, but still unbelievably slow.

    This is kind of like taking statistics of the file. NOTE: I know that sampling the file can generate tons of data (around 22 GB from experience), and I know that any solution posted would take a while to finish. I need the most efficient saving process.
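
    A sketch of the batch-then-aggregate variant in SQL, assuming MySQL syntax for the upsert; a 4096-byte value is too long to index directly, so a hash stands in as the key (all names hypothetical):

        CREATE TABLE stats (
            bytes_hash  BINARY(32) PRIMARY KEY,  -- e.g. SHA-256 of the 4096-byte window
            occurrences BIGINT NOT NULL
        );

        CREATE TABLE staging (
            bytes_hash BINARY(32) NOT NULL       -- no index, so bulk appends stay fast
        );

        -- After bulk-loading ~1M rows into staging, collapse duplicates once
        -- and merge into the big table in a single pass:
        INSERT INTO stats (bytes_hash, occurrences)
        SELECT bytes_hash, COUNT(*)
        FROM staging
        GROUP BY bytes_hash
        ON DUPLICATE KEY UPDATE occurrences = occurrences + VALUES(occurrences);

        TRUNCATE TABLE staging;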

    Read the article

  • What's the best way to test an MSSQL connection programmatically?

    - by backslash17
    I need to develop a single routine that will be fired every 5 minutes to check whether a list of MSSQL servers (10 to 12) are up and running. I could run a simple query against each of the servers, but this means I would have to create a table, view, or stored procedure on every server, and even if I used an existing SP I would need a registered user on each server too. The servers are not in the same physical location, so having those requirements would be a complex task. Is there a way to simply "ping" an MSSQL server from C#? Thanks in advance!

    Read the article

  • help with delete where not in query

    - by kralco626
    I have a lookup table (##lookup). I know it's bad design because I'm duplicating data, but it speeds up my queries tremendously. I have a query that populates this table:

        insert into ##lookup
        select distinct col1, col2, ... from table1 ... join ... etc ...

    I would like to simulate this behavior:

        delete from ##lookup

        insert into ##lookup
        select distinct col1, col2, ... from table1 ... join ... etc ...

    This would clearly update the table correctly. But it is a lot of inserting and deleting, it messes with my indexes, and it locks up the table for selecting from. The table could instead be updated by something like:

        delete from ##lookup where not in (select distinct col1, col2, ... from table1 ... join ... etc ...)

        insert into ##lookup (select distinct col1, col2, ... from table1 ... join ... etc ...) except if it is already in the table

    The second way may take longer, but I can say "with no lock" and I will still be able to select from the table. Any ideas on how to write the query the second way?
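
    A sketch of that incremental refresh in SQL Server syntax, assuming the two-column key shown above (extend the matching predicate for more columns):

        -- 1) remove rows the source query no longer produces
        delete l
        from ##lookup as l
        where not exists (
            select 1
            from (select distinct col1, col2 from table1 /* ...joins... */) as src
            where src.col1 = l.col1 and src.col2 = l.col2
        );

        -- 2) add rows that are new since the last refresh
        insert into ##lookup (col1, col2)
        select distinct col1, col2 from table1 /* ...joins... */
        except
        select col1, col2 from ##lookup;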

    Read the article

  • using a dummy row with NOT NULL to solve DEFAULT NULL

    - by Tony38
    I know having DEFAULT NULL is not good practice, but I have many optional lookup values which are FKs in the system, so to solve this issue here is what I am doing: I use NOT NULL for every FK / lookup value field, and the first row in every table, with PK id = 1, is a dummy row with just "none" in all the columns. This way I can use NOT NULL in my schema and, where needed, reference the "none" row for values that would otherwise be NULL. Is this a good design, or are there other workarounds?
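
    A minimal sketch of the sentinel-row pattern being described, with hypothetical table names:

        -- each lookup table gets a designated "no value" row
        INSERT INTO color (color_id, name) VALUES (1, 'none');

        -- referencing tables can then stay NOT NULL and default to the sentinel
        CREATE TABLE product (
            product_id INT PRIMARY KEY,
            color_id   INT NOT NULL DEFAULT 1 REFERENCES color(color_id)  -- 1 = 'none', not NULL
        );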

    Read the article

  • Three-way full outer join in SQLite

    - by Vince
    I have three tables with a common key field, and I need to join them on this key. Given that SQLite doesn't have full outer or right joins, I've used the "full outer join without right join" technique from Wikipedia with much success. But I'm curious: how would one use this technique to join three tables by a common key? And what are the efficiency impacts (the current query takes about ten minutes)? Thanks!
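
    One way to extend the emulation to three tables is to build the distinct key list first and LEFT JOIN each table to it; a sketch with hypothetical tables t1, t2, t3 sharing key k:

        SELECT k.k, t1.val1, t2.val2, t3.val3
        FROM (
            SELECT k FROM t1
            UNION               -- UNION (not UNION ALL) de-duplicates the key list
            SELECT k FROM t2
            UNION
            SELECT k FROM t3
        ) AS k
        LEFT JOIN t1 ON t1.k = k.k
        LEFT JOIN t2 ON t2.k = k.k
        LEFT JOIN t3 ON t3.k = k.k;

    For the runtime, an index on the key column of each table usually matters more than the join form itself.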

    Read the article

  • FKs on all tables for status column

    - by Jonarch
    I have a column "Status" in every table in my DB. Its purpose is to show whether the given row is in use or has been deactivated, so values can be (0 = deactive, 1 = active). Two ways I see this: I can use enums, or I am wondering if it is better to make this column an FK which references the main system data dictionary table that holds all the codes used on the system (a website). The benefit is that every table and every row can then be centralized through this FK. So if I ever want to find all rows on my system which are deactive, I can do it from this one table, since all the child tables will have something like status = ID 233, where 233 = deactive in the data dictionary table. Is there any benefit, or should I stick with the old way of enums? Also, I am wondering if I need one more status for deleted, or is that the same as deactivated?

    Read the article

  • What is the difference between an Oracle and Microsoft schema?

    - by Tarzan
    I am working on an enterprise project. Some of the team has an Oracle background and some a Microsoft SQL Server background, and there is much confusion when we talk about schemas. I am trying to provide some clarity. Is this an accurate way to describe the difference in the meaning of "schema" between the two technologies? An Oracle schema is associated with a single user and consists of the objects owned by that user. A Microsoft SQL Server schema is a namespace.

    Read the article

  • How do you debug MySQL stored procedures?

    - by Cory House
    My current process for debugging stored procedures is very simple. I create a table called "debug" where I insert variable values from the stored procedure as it runs. This allows me to see the value of any variable at a given point in the script, but is there a better way to debug MySQL stored procedures?
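
    For reference, a minimal sketch of the logging pattern described (table and variable names hypothetical):

        CREATE TABLE debug (
            logged_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
            msg       VARCHAR(255)
        );

        -- inside the stored procedure body:
        INSERT INTO debug (msg) VALUES (CONCAT('counter = ', v_counter));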

    Read the article

  • In a SQL XDL File, how do I read the waitresource attribute on process nodes which are deadlocking?

    - by skimania
    On SQL Server 2005, I'm getting a deadlock when updating two different keys in the same table. Note from below that these two waitresources have the same beginning part but different ending parts:

        waitresource="KEY: 6:72057594090487808 (d900ed5a6cc6)"
        waitresource="KEY: 6:72057594090487808 (d900fb5261bb)"

    These two keys are deadlocking, and I need to figure out why. The question: if the values in parentheses are different, why is the first half of the keys the same?

        <deadlock-list>
         <deadlock victim="processffffffff8f5863e8">
          <process-list>
           <process id="processaf02f8" taskpriority="0" logused="0"
                    waitresource="KEY: 6:72057594090487808 (d900fb5261bb)" waittime="2281"
                    ownerId="1370264705" transactionname="user_transaction"
                    lasttranstarted="2010-06-17T00:35:25.483" XDES="0x69453a70" lockMode="U"
                    schedulerid="3" kpid="7624" status="suspended" spid="339" sbid="0" ecid="0"
                    priority="0" transcount="2" lastbatchstarted="2010-06-17T00:35:25.483"
                    lastbatchcompleted="2010-06-17T00:35:25.483"
                    clientapp=".Net SqlClient Data Provider" hostname="RISKBBG_VM" hostpid="5848"
                    loginname="RiskOpt" isolationlevel="read committed (2)" xactid="1370264705"
                    currentdb="6" lockTimeout="4294967295" clientoption1="671088672"
                    clientoption2="128056">
            <executionStack>
             <frame procname="MKP_RISKDB.dbo.MarketDataCurrentRtUpload" line="14" stmtstart="840"
                    stmtend="1220" sqlhandle="0x03000600005f9d24c8878f00849d00000100000000000000">
              UPDATE c WITH (ROWLOCK)
              SET LastUpdate = t.LastUpdate, Value = t.Value, Source = t.Source
              FROM MarketDataCurrent c
              INNER JOIN #TEMPTABLE2 t ON c.MDID = t.mdid;
              -- Insert new MDID
             </frame>
             <frame procname="adhoc" line="1"
                    sqlhandle="0x010006004a58132228bf8d73000000000000000000000000">
              MarketDataCurrentBlbgRtUpload
             </frame>
            </executionStack>
            <inputbuf>
             MarketDataCurrentBlbgRtUpload
            </inputbuf>
           </process>
           <process id="processffffffff8f5863e8" taskpriority="0" logused="0"
                    waitresource="KEY: 6:72057594090487808 (d900ed5a6cc6)" waittime="2281"
                    ownerId="1370264646" transactionname="user_transaction"
                    lasttranstarted="2010-06-17T00:35:25.450" XDES="0x1cb72be8" lockMode="U"
                    schedulerid="5" kpid="1880" status="suspended" spid="287" sbid="0" ecid="0"
                    priority="0" transcount="2" lastbatchstarted="2010-06-17T00:35:25.450"
                    lastbatchcompleted="2010-06-17T00:35:25.450"
                    clientapp=".Net SqlClient Data Provider" hostname="RISKAPPS_VM" hostpid="1424"
                    loginname="RiskOpt" isolationlevel="read committed (2)" xactid="1370264646"
                    currentdb="6" lockTimeout="4294967295" clientoption1="671088672"
                    clientoption2="128056">
            <executionStack>
             <frame procname="MKP_RISKDB.dbo.MarketDataCurrent_BulkUpload" line="28"
                    stmtstart="1062" stmtend="1720"
                    sqlhandle="0x03000600a28e5e4ef4fd8e00849d00000100000000000000">
              UPDATE c WITH (ROWLOCK)
              SET LastUpdate = getdate(), Value = t.Value, Source = @source
              FROM MarketDataCurrent c
              INNER JOIN #MDTUP t ON c.MDID = t.mdid
              WHERE c.lastUpdate &lt; @updateTime
              and c.mdid not in (select mdid from MarketData
                                 where BloombergTicker is not null
                                 and PriceSource like &apos;Live.%&apos;)
              and c.value &lt;&gt; t.value
             </frame>
             <frame procname="adhoc" line="1" stmtstart="88"
                    sqlhandle="0x01000600c1653d0598706ca7000000000000000000000000">
              exec MarketDataCurrent_BulkUpload @clearBefore, @source
             </frame>
             <frame procname="unknown" line="1"
                    sqlhandle="0x000000000000000000000000000000000000000000000000">
              unknown
             </frame>
            </executionStack>
            <inputbuf>
             (@clearBefore datetime,@source nvarchar(10))exec MarketDataCurrent_BulkUpload @clearBefore, @source
            </inputbuf>
           </process>
          </process-list>
          <resource-list>
           <keylock hobtid="72057594090487808" dbid="6"
                    objectname="MKP_RISKDB.dbo.MarketDataCurrent" indexname="PK_MarketDataCurrent"
                    id="lock64ac7940" mode="U" associatedObjectId="72057594090487808">
            <owner-list>
             <owner id="processffffffff8f5863e8" mode="U"/>
            </owner-list>
            <waiter-list>
             <waiter id="processaf02f8" mode="U" requestType="wait"/>
            </waiter-list>
           </keylock>
           <keylock hobtid="72057594090487808" dbid="6"
                    objectname="MKP_RISKDB.dbo.MarketDataCurrent" indexname="PK_MarketDataCurrent"
                    id="lockffffffffb8d2dd40" mode="U" associatedObjectId="72057594090487808">
            <owner-list>
             <owner id="processaf02f8" mode="U"/>
            </owner-list>
            <waiter-list>
             <waiter id="processffffffff8f5863e8" mode="U" requestType="wait"/>
            </waiter-list>
           </keylock>
          </resource-list>
         </deadlock>
        </deadlock-list>

    Read the article

  • Should I Use GUID or IDENTITY as Thread Number?

    - by user311509
    offerID is the thread number which represents the posted thread. I see that in forums, posts are represented by random-looking numbers. Is this achieved by IDENTITY? If not, please advise. nvarchar(max) will carry all kinds of text along with HTML tags.

        CREATE TABLE Offer (
            offerID  int IDENTITY (4382,15) PRIMARY KEY,
            memberID int NOT NULL REFERENCES Member(memberID),
            title    nvarchar(200) NOT NULL,
            thread   nvarchar(max) NOT NULL,
            . . .
        );
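
    For comparison, a sketch of the GUID alternative in SQL Server; NEWSEQUENTIALID() produces random-looking values while staying friendlier to a clustered primary key than NEWID():

        CREATE TABLE Offer (
            offerID  uniqueidentifier PRIMARY KEY DEFAULT NEWSEQUENTIALID(),
            memberID int NOT NULL REFERENCES Member(memberID)
            -- ... remaining columns as above ...
        );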

    Read the article

  • Aggregation over a few models - Django

    - by RadiantHex
    Hi folks, I'm trying to compute the average of a field over various subsets of a queryset.

        Player.objects.order_by('-score').filter(sex='male').aggregate(Avg('level'))

    This works perfectly! But if I try to compute it for the top 50 players, it does not work:

        Player.objects.order_by('-score').filter(sex='male')[:50].aggregate(Avg('level'))

    This last one returns the exact same result as the query above it, which is wrong. What am I doing wrong? Help would be very much appreciated!

    Read the article

  • Most efficient way to maintain a 'set' in SQL Server?

    - by SEVEN YEAR LIBERAL ARTS DEGREE
    I have ~2 million rows or so of data, each row with an artificial PK and two ID fields (so: PK, ID1, ID2). I have a unique constraint (and index) on ID1+ID2. I get two sorts of updates, both with a distinct ID1 per update:

    - 100-1000 rows of all-new data (ID1 is new)
    - 100-1000 rows of largely, but not necessarily completely, overlapping data (ID1 already exists, maybe with new ID1+ID2 pairs)

    What's the most efficient way to maintain this 'set'? Here are the options as I see them:

    - Delete all the rows with ID1, insert all the new rows (yikes)
    - Query all the existing rows from the set of new data ID1+ID2, insert only the new rows (sketched below)
    - Insert all the new rows, ignoring inserts that trigger unique constraint violations

    Any thoughts?
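
    A sketch of the second option in SQL Server syntax, assuming the incoming batch sits in a staging table (names hypothetical):

        INSERT INTO big_set (id1, id2 /* , payload columns */)
        SELECT s.id1, s.id2
        FROM staging AS s
        WHERE NOT EXISTS (
            SELECT 1
            FROM big_set AS t
            WHERE t.id1 = s.id1
              AND t.id2 = s.id2
        );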

    Read the article

  • SQL Server 2005: Rename DB Server Instance Name?

    - by Code Sherpa
    Hi, can somebody tell me how to rename the DB server instance name and a DB name in SQL Server 2005? Right now I have:

        SERVER/OLDNAME -- oldnameDB

    I want to change the server instance and also change the DB name. I have tried:

        EXEC sp_renamedb 'oldName', 'newName'

    and that changed the DB name as it appears in the object tree. But when I do "select @@servername" it is still the old name. Also, the MDF and LDF files still have the old name. How do I change the instance and DB names in a clean sweep across the server? Thanks.
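
    For reference, the commonly cited sequence for bringing @@servername back in sync after a rename (a sketch; it updates the reported name rather than renaming a named instance itself, and the SQL Server service must be restarted afterwards):

        EXEC sp_dropserver 'OLDNAME';
        EXEC sp_addserver 'NEWNAME', 'local';
        -- restart the SQL Server service, then verify:
        SELECT @@SERVERNAME;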

    Read the article

  • MySQL Limiting a query to one consistent value

    - by Lucas Matos
    My current query returns a table like:

        +--------+------+
        | value1 | .... |
        | value1 | .... |
        | value2 | .... |
        | value3 | .... |
        +--------+------+

    I want:

        +--------+------+
        | value1 | .... |
        | value1 | .... |
        +--------+------+

    That is, I want to receive only the rows with the first value. Normally I would use a WHERE clause if I knew that value, and I cannot use a LIMIT because each value has a different number of rows. Right now my query looks like:

        SELECT u.*, n.something, w.*
        FROM ... AS u, ... AS n, ... AS w
        WHERE u.id = n.id
          AND w.val = n.val
          AND u.desc LIKE '%GET REQUEST VARIABLE%';

    This works great, except that I get way too many rows, and filtering them in PHP ruins code portability and is superfluous. Thanks for reading.
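
    One way to express "only the first value" in MySQL is a scalar subquery over the same result; a single-table sketch with hypothetical names (results, grp), where "first" is taken to mean the smallest value:

        SELECT t.*
        FROM results AS t
        WHERE t.grp = (SELECT MIN(grp) FROM results);  -- keeps every row sharing the first value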

    Read the article

  • MSSQL Server using multiple ID Numbers

    - by vincer
    I have a web application that creates printable forms, and these forms have a unique number on them. The problem is that I have 2 forms for which separate number series need to be maintained, i.e.:

        Form1 - numbered 2000000-2999999
        Form2 - numbered 3000000-3999999

    dbo.test2 is my form information table, Tsel is my auto-increment table for the 3000000-series numbers, and Tadv is my auto-increment table for the 2000000-series numbers. What I have done is create 2 tables with just an auto-increment row (one for the 2000000-series numbers and one for the 3000000-series numbers). I then created a trigger that adds a record to the corresponding table, reads back the auto-increment number, and stores it in the table that holds the form information, including the just-created auto-increment number for the right series of forms. Although it does work, I'm concerned that the numbers will get mixed up under load. I'm not sure @@IDENTITY will always return the right value when many people are using the system. (I cannot have duplicates and I need to use the numbering scheme shown above.) Thanks for any help; the trigger is below.

        -- TRIGGER
        CREATE TRIGGER MAKEANID2 ON dbo.test2
        AFTER INSERT
        AS
        SET NOCOUNT ON

        declare @someid int
        declare @someid2 int
        declare @startfrom int
        declare @test1 varchar(10)

        select @someid = @@IDENTITY
        select @test1 = (select name1 from test2 where sysid = @someid)

        if @test1 = 'select'
        begin
            insert into Tsel default values
            select @someid2 = @@IDENTITY
        end

        if @test1 = 'adv'
        begin
            insert into Tadv default values
            select @someid2 = @@IDENTITY
        end

        update test2 set name2 = (@someid2) where sysid = @someid

        SET NOCOUNT OFF
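
    One note on the concern, sketched against the trigger above: @@IDENTITY returns the last identity value produced on the connection in any scope, including nested triggers, while SCOPE_IDENTITY() is limited to the current scope, and the row that fired the trigger is normally read from the inserted pseudo-table rather than from an identity function:

        -- read the firing row's key from the inserted pseudo-table:
        select @someid = sysid from inserted

        -- capture the series number generated in this scope:
        insert into Tsel default values
        select @someid2 = SCOPE_IDENTITY()  -- unaffected by identities from nested scopes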

    Read the article

  • How to design model for multi-tiered data?

    - by Chris
    Say I have three types of object: Area, Subarea and Topic. I want to be able to display an Area, which is just a list of Subareas and the Topics contained in those Subareas. I never want to be able to display Subareas separately - they're just for breaking up the Topics. Topics can, however, appear in multiple Areas (but probably under the same Subarea). How would I design a model for this? I could use ForeignKey from Topic to Subarea and from Subarea to Area, but it seems unnecessarily complex given that I never want to interact with subareas themselves. Also, none of these objects are ever altered or added to by the user. They're just for me to represent information. Maybe there is a better way to represent it all?

    Read the article

  • I built my Rails app with SQLite and without specifying any db field sizes. Is my app now foobared for production?

    - by Tim Santeford
    I've been following a lot of good tutorials on building Rails apps, but I seem to be missing the whole specifying-and-validating-db-field-sizes part. I love not having to think about it when roughing out an app (I would never have done this with a PHP or ASP.NET app). However, now that I'm ready to go to production, I think I might have done myself a disservice by not specifying field sizes as I went. My production db will be MySQL. What is the best practice here? Do I need to go through all of my migration files and specify sizes, update all the models with validation, and update all my form partial views with input max widths? Or am I missing a critical step in my development process?

    Read the article
