Search Results

Search found 42428 results on 1698 pages for 'database query'.

  • Solr autocommit and autooptimize?

    - by Camran
    I will be uploading my website to a VPS soon. It is a classifieds website that uses Solr integrated with MySQL. Solr is updated whenever a classified is posted or deleted. I need a way to automate the commit() and optimize() calls, for example once every 3 hours or so. How can I do this? (Details, please.) When is it ideal to optimize? Thanks

  • Logic: Best way to sample & count bytes of a 100MB+ file

    - by Jami
    Let's say I have a 170 MB file (roughly 180 million bytes). What I need to do is create a table that lists: all 4096-byte combinations found [column 'bytes'], and the number of times each combination appeared [column 'occurrences']. Assume two things: I can save data very fast, but I can update my saved data very slowly. How should I sample the file and save the needed information? Here are some approaches that are (extremely) slow:
    1. Go through each 4096-byte combination in the file and save each one, but search the table first for an existing combination and update its value. This is unbelievably slow.
    2. Go through each 4096-byte combination in the file and save rows into a temporary table until it holds 1 million of them. Go through that table and consolidate the entries (combining repeated combinations), then copy them to the big table. Repeat with the next 1 million rows. This is faster by a bit, but still unbelievably slow.
    This is essentially taking statistics on the file. NOTE: I know that sampling the file can generate tons of data (around 22 GB from experience), and I know that any solution posted will take a while to finish. I need the most efficient saving process.
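
    A pattern that fits the "fast saves, slow updates" constraint above is to avoid per-row updates entirely: bulk-insert every sampled combination into an unindexed staging table, then aggregate once with a single GROUP BY. A minimal SQL sketch, assuming each combination is a 4096-byte window and using hypothetical table names:

      -- Staging table: one row per sampled combination; no indexes, so inserts stay fast.
      CREATE TABLE combo_staging (combo VARBINARY(4096) NOT NULL);

      -- ... bulk-load all sampled combinations here (e.g. batched inserts) ...

      -- Final table, built in one pass instead of updating counters row by row.
      CREATE TABLE combo_counts (bytes VARBINARY(4096) NOT NULL, occurrences BIGINT NOT NULL);

      INSERT INTO combo_counts (bytes, occurrences)
      SELECT combo, COUNT(*)
      FROM combo_staging
      GROUP BY combo;

    The single aggregation pushes all the slow "update" work into one set-based operation, which is the kind of workload a database engine optimizes well.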

  • How to query from MEDIA provider with "group by" option?

    - by gkshope
    I'm a newbie to Android. I want to query data from the Media provider through a content provider and content resolver: c = mContent.query(CONTENT_URI, projection, where, null, null); My question is: how can I query data from the Media provider as below, using a GROUP BY clause? select DISTINCT _id, count(_id), _data FROM aaa_table WHERE _data LIKE "A" OR _data LIKE "B" GROUP BY _id; I have tried setting projection as follows: final String[] projection = new String[] { "_id", "COUNT ("+ _id +")" , "_data" }; and where as: _data LIKE "A" OR _data LIKE "B" but I couldn't find a way to set the GROUP BY _id option. Please help me.

  • Doctrine: how to create a query using "LIKE REPLACE"?

    - by user248959
    Hi, this SQL clause works OK: SELECT * FROM `sf_guard_user` WHERE nombre_apellidos LIKE REPLACE('Mar Sanz',' ','%') Now I'm trying to write this query for Doctrine. I have tried this: $query->andWhere(sprintf('%s.%s LIKE REPLACE (?," ","%")', $query->getRootAlias(), $fieldName), 'Mar Sanz')); but I get this error: Warning: sprintf() [function.sprintf]: Too few arguments Any idea? Regards Javi

  • FKs on all tables for the status column

    - by Jonarch
    I have a column "Status" in every table in my DB. Its purpose is to show whether the given row is in use or has been deactivated, so the values can be (0 = deactive and 1 = active). Two ways I see this: I can have enums, or I am thinking it may be better to keep this column as an FK which references the main system data-dictionary table, which holds all the codes used on the system (website). The benefit is that every table, every row, can then be centralized through this FK. So if I ever want to check all rows which are deactive on my system, I can do it from this table, as all the child tables will have something like status = ID 233, where 233 = deactive in the data-dictionary table. Any benefit, or should I stick with the old way of enums? Also, I am thinking whether I need one more status for deleted, or is that the same as deactivated?
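
    For reference, a minimal sketch of the centralized variant described above (table names and the ID value are hypothetical):

      -- Central data dictionary holding every code used in the system.
      CREATE TABLE system_code (
          code_id   INT         NOT NULL PRIMARY KEY,
          code_type VARCHAR(30) NOT NULL,  -- e.g. 'row_status'
          label     VARCHAR(50) NOT NULL   -- e.g. 'active', 'deactive', 'deleted'
      );

      -- Each table references the dictionary instead of storing a bare 0/1 flag.
      CREATE TABLE customer (
          customer_id INT NOT NULL PRIMARY KEY,
          status_id   INT NOT NULL REFERENCES system_code (code_id)
      );

      -- Finding deactivated rows is then an ordinary filter:
      -- SELECT * FROM customer WHERE status_id = 233;  -- 233 = 'deactive'

    The enum alternative trades this central point of control for one fewer join and no shared dependency on a dictionary table.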

  • Using a dummy row with NOT NULL to solve DEFAULT NULL

    - by Tony38
    I know having DEFAULT NULLs is not a good practice, but I have many optional lookup values which are FKs in the system, so to solve this issue here is what I am doing: I use NOT NULL for every FK / lookup-value field, and I make the first row in every lookup table, PK id = 1, a dummy row with just "none" in all the columns. This way I can use NOT NULL in my schema and, if needed, reference the "none" row for values that would otherwise be null. Is this a good design, or are there other workarounds?
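
    A minimal sketch of the sentinel-row pattern being described (names are hypothetical):

      CREATE TABLE category (
          category_id INT         NOT NULL PRIMARY KEY,
          name        VARCHAR(50) NOT NULL
      );

      -- Row 1 is the sentinel meaning "no category".
      INSERT INTO category (category_id, name) VALUES (1, 'none');

      CREATE TABLE item (
          item_id     INT NOT NULL PRIMARY KEY,
          -- NOT NULL everywhere; "no value" points at the sentinel row instead.
          category_id INT NOT NULL DEFAULT 1 REFERENCES category (category_id)
      );

    The usual trade-off is that every report and join now has to remember to exclude id = 1: the same bookkeeping NULLs would have required, just moved elsewhere.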

  • What is the difference between an Oracle and Microsoft schema?

    - by Tarzan
    I am working on an enterprise project. Some of the team has an Oracle background and some has a Microsoft SQL Server background, and there is much confusion when we talk about schemas. I am trying to provide some clarity. Is this an accurate way to describe the difference in the meaning of schemas between the two technologies? An Oracle schema is associated with a single user and consists of the objects owned by that user. An MS SQL Server schema is a namespace.
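
    To illustrate the SQL Server side: since SQL Server 2005, a schema is a namespace decoupled from any particular user, created and re-owned explicitly. A small sketch (the schema name is arbitrary):

      -- A schema is a container; its owner is assigned explicitly and can be
      -- any database principal, independent of who creates objects in it.
      CREATE SCHEMA Sales AUTHORIZATION dbo;
      GO

      CREATE TABLE Sales.Invoice (InvoiceID INT NOT NULL PRIMARY KEY);
      GO

      -- Ownership can be transferred later without renaming any objects.
      ALTER AUTHORIZATION ON SCHEMA::Sales TO db_owner;

    In Oracle, by contrast, creating a user implicitly creates the schema of the same name, and the two cannot be separated.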

  • In a SQL XDL File, how do I read the waitresource attribute on process nodes which are deadlocking?

    - by skimania
    On SQL Server 2005, I'm getting a deadlock when updating two different keys in the same table. Note from below that these two waitresources have the same beginning part but different ending parts: waitresource="KEY: 6:72057594090487808 (d900ed5a6cc6)" and waitresource="KEY: 6:72057594090487808 (d900fb5261bb)" These two keys are locking, and I need to figure out why. The question: if the values in parentheses are different, why are the first halves of the keys the same?

      <deadlock-list>
       <deadlock victim="processffffffff8f5863e8">
        <process-list>
         <process id="processaf02f8" taskpriority="0" logused="0"
           waitresource="KEY: 6:72057594090487808 (d900fb5261bb)" waittime="2281"
           ownerId="1370264705" transactionname="user_transaction"
           lasttranstarted="2010-06-17T00:35:25.483" XDES="0x69453a70" lockMode="U"
           schedulerid="3" kpid="7624" status="suspended" spid="339" sbid="0" ecid="0"
           priority="0" transcount="2" lastbatchstarted="2010-06-17T00:35:25.483"
           lastbatchcompleted="2010-06-17T00:35:25.483"
           clientapp=".Net SqlClient Data Provider" hostname="RISKBBG_VM" hostpid="5848"
           loginname="RiskOpt" isolationlevel="read committed (2)" xactid="1370264705"
           currentdb="6" lockTimeout="4294967295" clientoption1="671088672"
           clientoption2="128056">
          <executionStack>
           <frame procname="MKP_RISKDB.dbo.MarketDataCurrentRtUpload" line="14"
             stmtstart="840" stmtend="1220"
             sqlhandle="0x03000600005f9d24c8878f00849d00000100000000000000">
            UPDATE c WITH (ROWLOCK) SET LastUpdate = t.LastUpdate, Value = t.Value,
            Source = t.Source FROM MarketDataCurrent c INNER JOIN #TEMPTABLE2 t
            ON c.MDID = t.mdid; -- Insert new MDID
           </frame>
           <frame procname="adhoc" line="1"
             sqlhandle="0x010006004a58132228bf8d73000000000000000000000000">
            MarketDataCurrentBlbgRtUpload
           </frame>
          </executionStack>
          <inputbuf>
           MarketDataCurrentBlbgRtUpload
          </inputbuf>
         </process>
         <process id="processffffffff8f5863e8" taskpriority="0" logused="0"
           waitresource="KEY: 6:72057594090487808 (d900ed5a6cc6)" waittime="2281"
           ownerId="1370264646" transactionname="user_transaction"
           lasttranstarted="2010-06-17T00:35:25.450" XDES="0x1cb72be8" lockMode="U"
           schedulerid="5" kpid="1880" status="suspended" spid="287" sbid="0" ecid="0"
           priority="0" transcount="2" lastbatchstarted="2010-06-17T00:35:25.450"
           lastbatchcompleted="2010-06-17T00:35:25.450"
           clientapp=".Net SqlClient Data Provider" hostname="RISKAPPS_VM" hostpid="1424"
           loginname="RiskOpt" isolationlevel="read committed (2)" xactid="1370264646"
           currentdb="6" lockTimeout="4294967295" clientoption1="671088672"
           clientoption2="128056">
          <executionStack>
           <frame procname="MKP_RISKDB.dbo.MarketDataCurrent_BulkUpload" line="28"
             stmtstart="1062" stmtend="1720"
             sqlhandle="0x03000600a28e5e4ef4fd8e00849d00000100000000000000">
            UPDATE c WITH (ROWLOCK) SET LastUpdate = getdate(), Value = t.Value,
            Source = @source FROM MarketDataCurrent c INNER JOIN #MDTUP t
            ON c.MDID = t.mdid WHERE c.lastUpdate &lt; @updateTime and c.mdid not in
            (select mdid from MarketData where BloombergTicker is not null and
            PriceSource like &apos;Live.%&apos;) and c.value &lt;&gt; t.value
           </frame>
           <frame procname="adhoc" line="1" stmtstart="88"
             sqlhandle="0x01000600c1653d0598706ca7000000000000000000000000">
            exec MarketDataCurrent_BulkUpload @clearBefore, @source
           </frame>
           <frame procname="unknown" line="1"
             sqlhandle="0x000000000000000000000000000000000000000000000000">
            unknown
           </frame>
          </executionStack>
          <inputbuf>
           (@clearBefore datetime,@source nvarchar(10))exec MarketDataCurrent_BulkUpload @clearBefore, @source
          </inputbuf>
         </process>
        </process-list>
        <resource-list>
         <keylock hobtid="72057594090487808" dbid="6"
           objectname="MKP_RISKDB.dbo.MarketDataCurrent" indexname="PK_MarketDataCurrent"
           id="lock64ac7940" mode="U" associatedObjectId="72057594090487808">
          <owner-list>
           <owner id="processffffffff8f5863e8" mode="U"/>
          </owner-list>
          <waiter-list>
           <waiter id="processaf02f8" mode="U" requestType="wait"/>
          </waiter-list>
         </keylock>
         <keylock hobtid="72057594090487808" dbid="6"
           objectname="MKP_RISKDB.dbo.MarketDataCurrent" indexname="PK_MarketDataCurrent"
           id="lockffffffffb8d2dd40" mode="U" associatedObjectId="72057594090487808">
          <owner-list>
           <owner id="processaf02f8" mode="U"/>
          </owner-list>
          <waiter-list>
           <waiter id="processffffffff8f5863e8" mode="U" requestType="wait"/>
          </waiter-list>
         </keylock>
        </resource-list>
       </deadlock>
      </deadlock-list>

  • Should I use GUID or IDENTITY as the thread number?

    - by user311509
    offerID is the thread number, which represents the posted thread. I see that in forums posts are represented by random-looking numbers. Is this achieved with IDENTITY? If not, please advise. nvarchar(max) will carry all kinds of text along with HTML tags.

      CREATE TABLE Offer (
          offerID  int IDENTITY (4382,15) PRIMARY KEY,
          memberID int NOT NULL REFERENCES Member(memberID),
          title    nvarchar(200) NOT NULL,
          thread   nvarchar(max) NOT NULL,
          . . .
      );
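
    An IDENTITY with a non-obvious seed and increment, as above, is the usual way to get numbers that look arbitrary while staying compact and index-friendly. For comparison, a sketch of the GUID variant the title asks about (hypothetical table name, same columns):

      CREATE TABLE OfferG (
          offerID  uniqueidentifier NOT NULL DEFAULT NEWID() PRIMARY KEY,
          memberID int              NOT NULL REFERENCES Member(memberID),
          title    nvarchar(200)    NOT NULL,
          thread   nvarchar(max)    NOT NULL
      );

    GUIDs are 16 bytes instead of 4 and fragment a clustered index badly unless NEWSEQUENTIALID() is used as the default, so for a human-visible thread number the IDENTITY approach is usually the simpler choice.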

  • How do you debug MySQL stored procedures?

    - by Cory House
    My current process for debugging stored procedures is very simple: I create a table called "debug" and insert variable values into it from the stored procedure as it runs. This lets me see the value of any variable at a given point in the script, but is there a better way to debug MySQL stored procedures?
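
    A small refinement of that approach is to wrap the logging in a reusable procedure so each call site stays a one-liner; a sketch, with arbitrary table and procedure names:

      CREATE TABLE debug_log (
          logged_at TIMESTAMP    NOT NULL DEFAULT CURRENT_TIMESTAMP,
          msg       VARCHAR(255) NOT NULL
      );

      DELIMITER $$
      CREATE PROCEDURE debug_msg(IN p_msg VARCHAR(255))
      BEGIN
          INSERT INTO debug_log (msg) VALUES (p_msg);
      END $$
      DELIMITER ;

      -- Inside any procedure under test:
      -- CALL debug_msg(CONCAT('loop counter = ', my_counter));

    When calling a procedure directly from a client, a plain SELECT CONCAT('x = ', x); inside the body also works, since each one comes back as an extra result set.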

  • SQL Server 2005: Rename DB Server Instance Name?

    - by Code Sherpa
    Hi, can somebody tell me how to rename the DB server instance name and a DB name in SQL Server 2005? Right now I have SERVER/OLDNAME -- oldnameDB and I want to change the server instance and also change the db name. I have tried: EXEC sp_renamedb 'oldName', 'newName' and that has changed the db name as it appears in the tree directory. But when I do "select @@servername" it is still the old name, and the MDF and LDF files also still have the old name. How do I change the instance and db names as a clean sweep across the server? Thanks.
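
    For the @@servername part, the documented route is to drop and re-register the server name in the instance metadata and then restart the SQL Server service; the physical files are renamed separately with ALTER DATABASE ... MODIFY FILE while the database is offline. A sketch, where the logical file names and paths are assumptions:

      -- Fix what @@SERVERNAME reports (takes effect after a service restart).
      EXEC sp_dropserver 'SERVER\OLDNAME';
      EXEC sp_addserver 'SERVER\NEWNAME', 'local';

      -- Point the database at renamed MDF/LDF files (rename them on disk while
      -- the database is detached or offline, then update the catalog):
      ALTER DATABASE newName MODIFY FILE (NAME = oldName_Data, FILENAME = 'C:\Data\newName.mdf');
      ALTER DATABASE newName MODIFY FILE (NAME = oldName_Log,  FILENAME = 'C:\Data\newName_log.ldf');

    Note that the instance name proper (the part after the backslash) cannot be changed without reinstalling SQL Server; sp_dropserver/sp_addserver only corrects the name the server reports about itself.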

  • MSSQL Server using multiple ID Numbers

    - by vincer
    I have a web application that creates printable forms, and these forms have a unique number on them. The problem is that I have two forms that need separate number ranges:

    Form1 - numbered 2000000-2999999
    Form2 - numbered 3000000-3999999

    dbo.test2 is my form-information table, Tsel is my auto-increment table for the 3000000-series numbers, and Tadv is my auto-increment table for the 2000000-series numbers. What I have done is create two tables with just an auto-increment column (one for each series), then create a trigger that adds a record to the corresponding table, reads back the auto-increment number, and adds it to the table that stores the form information, including the just-created number for the right series of forms. Although it does work, I'm concerned that the numbers will get mixed up under load; I'm not sure @@IDENTITY will always return the right value when many people are using the system. (I cannot have duplicates, and I need to use the numbering scheme shown above.) Thanks for any help; see the code below.

      -- TRIGGER --
      CREATE TRIGGER MAKEANID2 ON dbo.test2
      AFTER INSERT
      AS
      SET NOCOUNT ON
      declare @someid int
      declare @someid2 int
      declare @startfrom int
      declare @test1 varchar(10)

      select @someid = @@IDENTITY
      select @test1 = (select name1 from test2 where sysid = @someid)

      if @test1 = 'select'
      begin
          insert into Tsel default values
          select @someid2 = @@IDENTITY
      end
      if @test1 = 'adv'
      begin
          insert into Tadv default values
          select @someid2 = @@IDENTITY
      end

      update test2 set name2 = (@someid2) where sysid = @someid
      SET NOCOUNT OFF
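
    On the @@IDENTITY concern: @@IDENTITY returns the last identity value generated on the connection in any scope, so a nested trigger or audit insert on Tsel/Tadv could hand back the wrong number; SCOPE_IDENTITY() is restricted to the current scope and is the usual replacement. A multi-row INSERT into test2 would also need the trigger to iterate the inserted pseudo-table rather than read a single value. A sketch of the safer lookups (the column name ID in Tsel is an assumption):

      -- Scoped to this scope, unaffected by nested triggers.
      insert into Tsel default values;
      select @someid2 = SCOPE_IDENTITY();

      -- Or capture the generated value directly via OUTPUT (SQL Server 2005+):
      declare @ids table (id int);
      insert into Tsel output inserted.ID into @ids default values;
      select @someid2 = id from @ids;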

  • How to design model for multi-tiered data?

    - by Chris
    Say I have three types of object: Area, Subarea and Topic. I want to be able to display an Area, which is just a list of Subareas and the Topics contained in those Subareas. I never want to display Subareas separately; they're just for breaking up the Topics. Topics can, however, appear in multiple Areas (but probably under the same Subarea). How would I design a model for this? I could use a ForeignKey from Topic to Subarea and from Subarea to Area, but that seems unnecessarily complex given that I never want to interact with Subareas themselves. Also, none of these objects are ever altered or added to by the user; they're just for me to represent information. Maybe there is a better way to represent it all?
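
    Whatever the model layer looks like, "a Topic can appear in multiple Areas" makes the Topic-to-Subarea link many-to-many, which relationally is a join table rather than a plain ForeignKey. A sketch of the underlying shape, with hypothetical names:

      CREATE TABLE area    (area_id    INT PRIMARY KEY, name VARCHAR(100) NOT NULL);
      CREATE TABLE subarea (subarea_id INT PRIMARY KEY, name VARCHAR(100) NOT NULL,
                            area_id    INT NOT NULL REFERENCES area (area_id));
      CREATE TABLE topic   (topic_id   INT PRIMARY KEY, name VARCHAR(100) NOT NULL);

      -- Many-to-many: the same topic may sit under subareas in several areas.
      CREATE TABLE subarea_topic (
          subarea_id INT NOT NULL REFERENCES subarea (subarea_id),
          topic_id   INT NOT NULL REFERENCES topic (topic_id),
          PRIMARY KEY (subarea_id, topic_id)
      );

    Since the data is read-only reference information, the extra table costs little; displaying an Area is one join chain from area through subarea and subarea_topic to topic.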

  • What is the best way to restore (roll back) data in an application to a specified state (date)?

    - by panzerschreck
    Hello, an example will set the context right. The example below captures the various states of an entity that needs to be reverted (rolled back):

    State 1 - recorded on 01-Mar-2010: Column1 = Data1, Column2 = 0.56
    State 2 - recorded on 02-Mar-2010: Column1 = Data1, Column2 = 0.57
    State 3 - recorded on 03-Mar-2010: Column1 = Data1, Column2 = 0.58

    The user notices that state 3 is not what he intended, and decides to revert to state 2. One approach I can think of, without modifying the entity, is "auditing" all the inserts/updates as below; the rollback information captures the data just before each update/modification of the entity, so that the entries can be applied in order when you need to revert. Please note that changing the entity's schema is not an option.

    Rollback R1 - recorded on 01-Mar-2010: Column1 = Data1, Column2 = 0.56
    Rollback R2 - recorded on 02-Mar-2010: Column1 = Data1, Column2 = 0.56
    Rollback R3 - recorded on 03-Mar-2010: Column1 = Data1, Column2 = 0.57

    So, to get to state 2, we would start with the rollback information R1 and apply R2 onto it. Is there a better approach to achieve this? Thanks for your time.
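
    A common alternative to paired rollback rows is a full history (audit) table kept alongside the entity: every insert/update appends the new version with a timestamp, and reverting to a date becomes a single "latest version as of" query instead of replaying deltas. A sketch with hypothetical names:

      CREATE TABLE entity_history (
          entity_id   INT           NOT NULL,
          column1     VARCHAR(50)   NOT NULL,
          column2     DECIMAL(9, 2) NOT NULL,
          recorded_at DATETIME      NOT NULL
      );

      -- Revert = copy back the newest version at or before the target date.
      UPDATE e
      SET e.column1 = h.column1, e.column2 = h.column2
      FROM entity e
      JOIN entity_history h ON h.entity_id = e.entity_id
      WHERE h.recorded_at = (SELECT MAX(h2.recorded_at)
                             FROM entity_history h2
                             WHERE h2.entity_id = e.entity_id
                               AND h2.recorded_at <= '2010-03-02');

    This keeps the entity's own schema untouched; the history rows can be written by triggers, so application code does not change.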

  • Getting smallest of coordinates that differ by N or more in Python

    - by user248237
    Suppose I have a list of coordinates:

    data = [[(10, 20), (100, 120), (0, 5), (50, 60)],
            [(13, 20), (300, 400), (100, 120), (51, 62)]]

    and I want to take all tuples that either appear in each list in data, or differ from all tuples in lists other than their own by 3 or less. How can I do this efficiently in Python? For the above example, the results should be:

    [[(100, 120),  # since it occurs in both lists
      (10, 20), (13, 20),  # since they differ by only 3
      (50, 60), (51, 60)]]

    (0, 5) and (300, 400) would not be included, since they don't appear in both lists and are not within 3 of elements in lists other than their own. How can this be computed? Thanks.

  • What is the output?

    - by user329820
    Hi, this is my code, but when I run it in MySQL it shows an error because of the data type, while my friend checked it with SQL Server and it doesn't show an error and even inserts the value 32769. Which of them is correct? CREATE TABLE T1 (A INTEGER NOT NULL); INSERT T1 VALUES (32768.5);

  • I built my rails app with sqlite and without specifying any db field sizes, Is my app now foobared for production?

    - by Tim Santeford
    I've been following a lot of good tutorials on building Rails apps, but I seem to have missed the whole specifying-and-validating-db-field-sizes part. I love not having to think about it when roughing out an app (I would never have done this with a PHP or ASP.NET app). However, now that I'm ready to go to production, I think I might have done myself a disservice by not specifying field sizes as I went. My production db will be MySQL. What is the best practice here? Do I need to go through all of my migration files and specify sizes, update all the models with validations, and update all my form partials with input max widths? Or am I missing a critical step in my development process?

  • Conditional PIVOT/transform problem

    - by IanC
    Hi folks, I have a table with three columns, which we'll call ID, ID1, and Value. Sample data:

    ID  ID1  Value
    1   1    0
    1   2    1
    1   3    1
    1   3    2
    1   4    0
    1   4    1
    1   5    0
    1   5    2
    2   1    2

    Value is limited to 0, 1, or 2. What I need to do is pivot/transform this data into a column-based count of how many times each possible Value appears, grouped by ID, ID1. The output for the above should be:

    ID  ID1  Val0  Val1  Val2
    1   1    1     0     0
    1   2    0     2     0
    1   3    0     1     1
    1   4    1     1     0
    1   5    1     0     1
    2   1    0     0     1

    I'm using SQL Server 2008. How do I do this?
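
    One way to produce exactly this shape on SQL Server 2008 is conditional aggregation (a CASE expression inside each aggregate), which sidesteps PIVOT syntax entirely; a sketch against a hypothetical table name t:

      SELECT ID, ID1,
             SUM(CASE WHEN Value = 0 THEN 1 ELSE 0 END) AS Val0,
             SUM(CASE WHEN Value = 1 THEN 1 ELSE 0 END) AS Val1,
             SUM(CASE WHEN Value = 2 THEN 1 ELSE 0 END) AS Val2
      FROM t
      GROUP BY ID, ID1
      ORDER BY ID, ID1;

    Each SUM counts only the rows whose Value matches its CASE, so one pass over the table yields all three counts per (ID, ID1) group.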
