Search Results

Search found 37647 results on 1506 pages for 'sql performance'.

Page 531/1506 | < Previous Page | 527 528 529 530 531 532 533 534 535 536 537 538  | Next Page >

  • How to correctly do SQL UPDATE with weighted subselect?

    - by luminarious
    I am probably trying to accomplish too much in a single query, but I have an SQLite database with badly formatted recipes. This returns a sorted list of recipes with a relevance score added:

        SELECT *, sum(relevance) FROM (
            SELECT *, 1 AS relevance FROM recipes WHERE ingredients LIKE '%milk%'
            UNION ALL
            SELECT *, 1 AS relevance FROM recipes WHERE ingredients LIKE '%flour%'
            UNION ALL
            SELECT *, 1 AS relevance FROM recipes WHERE ingredients LIKE '%sugar%'
        ) results
        GROUP BY recipeID
        ORDER BY sum(relevance) DESC;

    But I'm now stuck with a special case where I need to write the relevance value to a field on the same row as the recipe. I figured something along these lines:

        UPDATE recipes SET relevance = (SELECT sum(relevance) ...)

    But I have not been able to get this working yet. I will keep trying, but meanwhile please let me know how you would approach this. (One possible approach is sketched after this item.)

    Read the article
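
    A minimal sketch of one way to compute that score in place, assuming the recipes table already has a numeric relevance column as described above: in SQLite a LIKE comparison evaluates to 0 or 1, so the three matches can simply be added up per row.

        UPDATE recipes
        SET relevance = (ingredients LIKE '%milk%')
                      + (ingredients LIKE '%flour%')
                      + (ingredients LIKE '%sugar%');

    This sidesteps the correlated subquery against the UNION ALL entirely; each row scores itself.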

  • In what circumstances can large pages produce a speedup?

    - by timday
    Modern x86 CPUs have the ability to support larger page sizes than the legacy 4K (i.e. 2MB or 4MB), and there are OS facilities (Linux, Windows) to access this functionality. The Microsoft link above states that large pages "increase the efficiency of the translation buffer, which can increase performance for frequently accessed memory", which isn't very helpful in predicting whether large pages will improve any given situation. I'm interested in concrete, preferably quantified, examples of where moving some program logic (or a whole application) to use huge pages has resulted in a performance improvement. Anyone got any success stories? There's one particular case I know of myself: using huge pages can dramatically reduce the time needed to fork a large process (presumably because the number of TLB records needing copying is reduced by a factor on the order of 1000). I'm interested in whether huge pages can also benefit more mundane applications, though.

    Read the article

  • Inserting asynchronously into Oracle, any benefits?

    - by Karl Trumstedt
    I am using ODP.NET for loading data into Oracle, bulking inserts into groups of 1000 rows per call. Are there any performance benefits in calling my load method asynchronously? Say I want to insert 10,000 rows: instead of making 10 calls synchronously, I make 10 calls asynchronously. My database is using ASSM right now, but otherwise plenty of freelists are in use, of course. The database server has several cores as well. My initial tests seem to point to a performance increase, but maybe there is something I cannot see? Potential deadlock or contention issues? Of course, there is added complexity in handling transactions and such when doing my load this way.

    Read the article

  • ODP.NET SQL query to retrieve a set of rows from two input arrays

    - by Karl Trumstedt
    I have a table with a primary key consisting of two columns. I want to retrieve a set of rows based on two input arrays, each corresponding to one primary key column.

        SELECT pkt1.id, pkt1.id2, ...
        FROM PrimaryKeyTable pkt1, table(:1) t1, table(:2) t2
        WHERE pkt1.id = t1.column_value
          AND pkt1.id2 = t2.column_value

    I then bind the values with two int[] in ODP.NET. This returns all the different combinations of my resulting rows, so if I am expecting 13 rows I receive 169 rows (13*13). The problem is that each value in t1 and t2 should be linked: value t1[4] should be used with t2[4], not with all the different values in t2. Using DISTINCT seemed to solve my problem, but I'm wondering if my approach is wrong. Anyone have any pointers on how to solve this the best way? One way might be a for-loop accessing each index in t1 and t2 sequentially, but I wonder what would be more efficient. Edit: actually DISTINCT won't solve my problem; it only appeared to because of my input values (all values in t2 were 0). (One set-based workaround is sketched after this item.)

    Read the article
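
    One way to keep the pairs linked, sketched under the assumption that the key pairs can be passed as strings: bind a single collection in which each element already carries both key parts, and compare it against the concatenated primary key. The separator and element format here are illustrative only.

        SELECT pkt1.id, pkt1.id2, ...
        FROM PrimaryKeyTable pkt1,
             table(:1) t            -- one element per wanted row, e.g. '17:42'
        WHERE pkt1.id || ':' || pkt1.id2 = t.column_value

    Each element of the bound array then matches exactly one (id, id2) pair, so there is no cross product and no need for DISTINCT.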

  • What is the best way to store categorical references in SQL tables?

    - by jlafay
    I want to store a wide array of categorical data in MySQL database tables. Let's say, for instance, that I want to store information on "widgets" and categorize attributes in certain ways, i.e. a shape category. For instance, the widgets could be classified as round, square, triangular, spherical, etc. Should these categories be stored within a table so an application can best reference them? Another possibility, I would imagine, would be to add a shape column to widgets containing a tiny int; that way my application could search shapes by that value and use a corresponding enum type that maps the int to the shape's meaning. Which would be best? Or is there another solution that I'm not thinking of yet? (A sketch of the lookup-table variant is after this item.)

    Read the article
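
    A minimal sketch of the lookup-table variant in MySQL (all names here are hypothetical): the categories live in their own table and widgets reference them through a small integer foreign key, so the application can enumerate or cache the shapes table instead of hard-coding an enum.

        CREATE TABLE shapes (
            id   TINYINT UNSIGNED NOT NULL PRIMARY KEY,
            name VARCHAR(32) NOT NULL UNIQUE
        );

        CREATE TABLE widgets (
            id       INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            name     VARCHAR(64) NOT NULL,
            shape_id TINYINT UNSIGNED NOT NULL,
            FOREIGN KEY (shape_id) REFERENCES shapes (id)
        ) ENGINE=InnoDB;  -- InnoDB so the foreign key is actually enforced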

  • Format a money field in SQL without converting to varchar?

    - by sdmadsen
    I need to be able to display a money field as $XX,XXX.XX, but without converting to varchar using

        total_eval = '$' + CONVERT(varchar(19), total_eval.opvValueMoney, 1)

    My project sorts on this column after I pull the data, and it doesn't sort correctly when the column is a varchar. Is there any way to do this? (One workaround is sketched after this item.)

    Read the article
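
    One common workaround, sketched with hypothetical table and column names: return the raw money value for sorting alongside the formatted string for display, and have the application sort on the numeric column.

        SELECT
            e.opvValueMoney                                AS total_eval_sort,    -- numeric, sorts correctly
            '$' + CONVERT(varchar(19), e.opvValueMoney, 1) AS total_eval_display  -- e.g. '$12,345.67'
        FROM dbo.Evaluations e;   -- hypothetical table name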

  • Get count from IQueryable<T> in LINQ-to-SQL?

    - by Pandiya Chendur
    The following code doesn't seem to get the correct count:

        var materials = consRepository.FindAllMaterials().AsQueryable();
        int count = materials.Count();

    Is that the way to do it? Here is my repository method, which fetches the records:

        public IQueryable<MaterialsObj> FindAllMaterials()
        {
            var materials = from m in db.Materials
                            join Mt in db.MeasurementTypes on m.MeasurementTypeId equals Mt.Id
                            where m.Is_Deleted == 0
                            select new MaterialsObj()
                            {
                                Id = Convert.ToInt64(m.Mat_id),
                                Mat_Name = m.Mat_Name,
                                Mes_Name = Mt.Name,
                            };
            return materials;
        }

    (A sketch of the SQL a count over this query roughly translates to is after this item.)

    Read the article
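
    For reference, a sketch of the kind of SQL a call like materials.Count() over this query would be translated into (table and column names taken from the snippet above; the exact generated SQL may differ): the count is computed on the database server rather than by materializing every row.

        SELECT COUNT(*)
        FROM Materials m
        INNER JOIN MeasurementTypes mt ON m.MeasurementTypeId = mt.Id
        WHERE m.Is_Deleted = 0;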

  • Multiple SQL Standard Instances on 4 Processor/32-core Server

    - by Theowood
    We have a large 4-processor/32-core server with 192GB of memory available in the data center, and over twenty small SQL Standard databases to consolidate. They are a mix of SQL 2012 and 2008 R2 for 3rd-party apps. Is there any issue with simply installing two instances of SQL Standard on the server - one for 2012 and one for 2008 R2? Each instance would use up to 64GB of the 192GB and 16 cores. If we did this with Enterprise, the licensing would cost a fortune, and the Enterprise features are not needed. (A sketch of capping each instance's memory is after this item.)

    Read the article
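
    If the two instances do share the box, one common precaution is to cap each instance's memory so they don't fight over the 192GB. A sketch of that setting for the 64GB-per-instance split mentioned above, to be run in each instance:

        -- 'max server memory' is an advanced option, so expose it first
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        -- cap this instance's buffer pool at 64GB (value is in MB)
        EXEC sp_configure 'max server memory (MB)', 65536;
        RECONFIGURE;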

  • Is a full list returned first and then filtered when using LINQ to SQL to filter data from a database?

    - by RJ
    This is probably a very simple question that I am working through in an MVC project. Here's an example of what I am talking about. I have an rdml file linked to a database with a table called Users that has 500,000 rows, but I only want to find the Users who were entered on 5/7/2010. So let's say I do this in my UserRepository:

        from u in db.GetUsers()
        where u.CreatedDate == "5/7/2010"
        select u

    (I'm doing this from memory, so don't kill me if my syntax is a little off; it's the concept I am looking for.) Does this statement first return all 500,000 rows and then filter them, or does it bring back only the filtered list? (A sketch of the SQL this translates to is after this item.)

    Read the article
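
    A sketch of roughly what LINQ to SQL sends for that query (assuming CreatedDate is a date/datetime column; the real generated SQL lists the mapped columns and uses a parameter rather than a literal): the where clause is translated and executed on the database server, so only the matching rows come back. Only enumerating the table first (for example calling .ToList() before applying the where) would pull all 500,000 rows across.

        SELECT *
        FROM Users
        WHERE CreatedDate = '2010-05-07';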

  • VS2008 intellisense performance issue with large number of partial static classes

    - by scebula
    My question is a follow-up to the issue posted here regarding the IntelliSense performance problem when building a large solution in VS2008 that has many partial static classes. Since Microsoft does not seem to be addressing the issue for VS2008, I would like to know if there are other ways around the problem. Waiting for VS2010 is not an option at this time. The solution proposed in the previous post is not practical, as some of the partial classes may be regenerated, and that would be a maintenance headache.

    Read the article

  • Using CONNECT BY to get all parents and one child in a hierarchy through a SQL query in Oracle

    - by s khan
    I was going through some previous posts on CONNECT BY usage. What I need to find out is what to do if I want to get all the parents (i.e., up to the root) and just one child for a node, say 4. It seems like I will have to use a union of the following two queries:

        SELECT * FROM hierarchy
        START WITH id = 4
        CONNECT BY id = PRIOR parent
        UNION
        SELECT * FROM hierarchy
        WHERE LEVEL <= 2
        START WITH id = 4
        CONNECT BY parent = PRIOR id

    Is there a better way to do this, some workaround that is more optimized? (One variation is sketched after this item.)

    Read the article
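
    A sketched variation on the same idea (table and column names as in the question): restrict the downward branch to the children only (LEVEL = 2), so the starting node appears just once and the two branches can be glued together with UNION ALL instead of a deduplicating UNION. If literally one child is wanted, an extra filter such as ROWNUM = 1 on the second branch would do it.

        SELECT * FROM hierarchy
        START WITH id = 4
        CONNECT BY id = PRIOR parent      -- node 4 and all of its ancestors
        UNION ALL
        SELECT * FROM hierarchy
        WHERE LEVEL = 2                   -- the children of node 4 only
        START WITH id = 4
        CONNECT BY parent = PRIOR id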

  • In SQL, why is "Distinct" not used in a subquery when looking for some items "not showing up" in the other table?

    - by Jian Lin
    Usually when looking for items not showing up in the other table, we can use:

        select * from gifts where giftID not in (select giftID from sentgifts);

    or

        select * from gifts where giftID not in (select distinct giftID from sentgifts);

    The second version adds "distinct", so that the resulting set is smaller and will probably make the "not in" check faster too. So wouldn't using "distinct" be desirable? More often than not, I don't see it used in the subquery in such a case. Is there an advantage or disadvantage to using it? Thanks. (An alternative formulation is sketched after this item.)

    Read the article
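
    A sketch of the formulation many optimizers handle well for this kind of anti-join (same tables as in the question): NOT EXISTS, where duplicate handling is left to the engine instead of being pre-shrunk with DISTINCT. Whether it actually beats NOT IN depends on the database and, for NOT IN, on whether sentgifts.giftID can be NULL.

        SELECT g.*
        FROM gifts g
        WHERE NOT EXISTS (
            SELECT 1
            FROM sentgifts s
            WHERE s.giftID = g.giftID
        );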

  • get-wmiobject sql join in powershell - trying to find physical memory vs. virtual memory of remote servers

    - by Willy
        get-wmiobject -query "Select TotalPhysicalMemory from Win32_LogicalMemoryConfiguration" -computer COMPUTERNAME output.csv
        get-wmiobject -query "Select TotalPageFileSpace from Win32_LogicalMemoryConfiguration" -computer COMPUTERNAME output.csv

    I am trying to complete this script with an output as such:

        Computer    Physical Memory    Virtual Memory
        server1     4096mb             8000mb
        server2     2048mb             4000mb

    Read the article

  • Is it possible to write a SQL query to return specific rows, but then join some columns of those rows?

    - by Rob
    I'm having trouble wrapping my head around how to write this query. Here is a hypothetical problem that is the same as the one I'm trying to solve: say I have a table of apples. Each apple has numerous attributes, such as color_id, variety_id and the orchard_id it was picked from. The color_id, variety_id, and orchard_id all refer to their respective tables: colors, varieties, and orchards. Now, say I need to query for all apples that have color_id = '3', which refers to yellow in the colors table. I want to somehow obtain this yellow value from the query. Make sense? Here's what I was trying (a corrected version is sketched after this item):

        SELECT * FROM apples, colors.id
        WHERE color_id = '3'
        LEFT JOIN colors ON apples.color_id = colors.id

    Read the article
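
    A sketch of the corrected join (same hypothetical tables as in the question, and assuming the colors table has a name column): the JOIN clause has to come before the WHERE clause, and the colors table is joined as a whole rather than as colors.id, which brings the color's value back with each apple row.

        SELECT a.*, c.name AS color_name
        FROM apples a
        LEFT JOIN colors c ON a.color_id = c.id
        WHERE a.color_id = 3;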

  • What's a good SQL table name for the table between 'Users' and 'UserTypes'?

    - by Space Cracker
    I have two tables in my database:

        Users: contains user information
        UserTypes: contains the names of the user types (student, teacher, specialist) - I can't rename it to 'Types', as we already have a table with that name

    The relation between Users and UserTypes is many to many, so I'll create a table that has UserID (FK) and UserTypeID (FK), but I'm trying to find the best name for that table. Any suggestions, please? (A sketch of such a table is after this item.)

    Read the article
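
    For illustration, a sketch of the junction table under one common naming convention (the two table names concatenated); the referenced key column names here are assumptions, since only the FK columns are given in the question.

        CREATE TABLE UserUserTypes (
            UserID     INT NOT NULL,
            UserTypeID INT NOT NULL,
            PRIMARY KEY (UserID, UserTypeID),
            FOREIGN KEY (UserID)     REFERENCES Users (UserID),
            FOREIGN KEY (UserTypeID) REFERENCES UserTypes (UserTypeID)
        );

    The composite primary key doubles as the uniqueness constraint, so a user cannot be assigned the same type twice.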

  • Why would Linux VM in vSphere ESXi 5.5 show dramatically increased disk i/o latency?

    - by mhucka
    I'm stumped, and I hope someone else will recognize the symptoms of this problem.

    Hardware: new Dell T110 II, dual-core Pentium G860 2.9 GHz, onboard SATA controller, one new 500 GB 7200 RPM cabled hard drive inside the box, other drives inside but not mounted yet. No RAID. Software: fresh CentOS 6.5 virtual machine under VMware ESXi 5.5.0 (build 174 + vSphere Client), 2.5 GB RAM allocated. The disk is laid out the way CentOS offered to set it up, namely as a volume inside an LVM volume group, except that I skipped having a separate /home and simply have / and /boot. CentOS is patched up, ESXi is patched up, and the latest VMware tools are installed in the VM. No users on the system, no services running, no files on the disk but the OS installation. I'm interacting with the VM via the VM virtual console in vSphere Client.

    Before going further, I wanted to check that I had configured things more or less reasonably. I ran the following command as root in a shell on the VM:

        for i in 1 2 3 4 5 6 7 8 9 10; do
          dd if=/dev/zero of=/test.img bs=8k count=256k conv=fdatasync
        done

    i.e., just repeat the dd command 10 times, which prints the transfer rate each time. The results are disturbing. It starts off well:

        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 20.451 s, 105 MB/s
        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 20.4202 s, 105 MB/s

    ...but after 7-8 of these, it then prints

        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 82.9779 s, 25.9 MB/s
        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 84.0396 s, 25.6 MB/s
        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 103.42 s, 20.8 MB/s

    If I wait a significant amount of time, say 30-45 minutes, and run it again, it goes back to 105 MB/s, and after several rounds (sometimes a few, sometimes 10+) it drops to ~20-25 MB/s again. Plotting the disk latency in vSphere's interface shows periods of high disk latency hitting 1.2-1.5 seconds during the times that dd reports the low throughput. (And yes, things get pretty unresponsive while that's happening.)

    What could be causing this? I'm comfortable that it is not the disk failing, because I had also configured two other disks as an additional volume in the same system. At first I thought I had done something wrong with that volume, but after commenting the volume out of /etc/fstab, rebooting, and trying the tests on / as shown above, it became clear that the problem is elsewhere. It is probably an ESXi configuration problem, but I'm not very experienced with ESXi. It's probably something stupid, but after trying to figure this out for many hours over multiple days, I can't find the problem, so I hope someone can point me in the right direction.

    (P.S.: yes, I know this hardware combo won't win any speed awards as a server, and I have reasons for using this low-end hardware and running a single VM, but I think that's beside the point for this question [unless it's actually a hardware problem].)

    ADDENDUM #1: Reading other answers such as this one made me try adding oflag=direct to dd. However, it makes no difference in the pattern of results: initially the numbers are higher for many rounds, then they drop to 20-25 MB/s. (The initial absolute numbers are in the 50 MB/s range.)

    ADDENDUM #2: Adding sync; echo 3 > /proc/sys/vm/drop_caches into the loop does not make a difference at all.

    ADDENDUM #3: To take out further variables, I now run dd such that the file it creates is larger than the amount of RAM on the system. The new command is dd if=/dev/zero of=/test.img bs=16k count=256k conv=fdatasync oflag=direct. Initial throughput numbers with this version of the command are ~50 MB/s. They drop to 20-25 MB/s when things go south.

    ADDENDUM #4: I also captured the output of iostat -d -m -x 1 in another terminal window while performance was "good" and again when it was "bad", running dd if=/dev/zero of=/test.img bs=16k count=256k conv=fdatasync oflag=direct at the same time.

    Read the article

  • How to write a query dynamically in SQL Server 2008?

    - by user1237131
    How do I write the query below dynamically?

    Table:

        empid   designation         interests
        1       developer,tester    cricket,chess
        1       developer           chess
        1       techlead            cricket

    Condition:

        empid = 1
        AND (designation LIKE '%developer%' OR designation LIKE '%techlead%')
        OR (interests LIKE '%cricket%')

    How do I write the above query dynamically if more than two designations need to be passed in, and likewise for interests? Please tell me. (One dynamic-SQL sketch is after this item.)

    EDIT - stored procedure code:

        ALTER PROCEDURE [dbo].[usp_GetDevices]
            @id INT,
            @designation NVARCHAR (MAX)
        AS
        BEGIN
            declare @idsplat varchar(MAX)
            set @idsplat = @UserIds
            create table #u1 (id1 varchar(MAX))
            set @idsplat = 'insert #u1 select ' + replace(@idsplat, ',', ' union select ')
            exec(@idsplat)

            Select id FROM dbo.DevicesList
            WHERE id = @id AND designation IN (select id1 from #u1)
        END

    Read the article
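
    A sketch of one way to build the OR'ed LIKE list from a comma-separated parameter with dynamic SQL (the table and column names are guesses based on the question, and the designation list must come from a trusted source or be sanitized, since it is spliced into the SQL text):

        DECLARE @id INT = 1;
        DECLARE @designations NVARCHAR(MAX) = N'developer,techlead';

        -- turn 'developer,techlead' into:
        --   designation LIKE '%developer%' OR designation LIKE '%techlead%'
        DECLARE @cond NVARCHAR(MAX) =
            N'designation LIKE ''%'
            + REPLACE(@designations, N',', N'%'' OR designation LIKE ''%')
            + N'%''';

        DECLARE @sql NVARCHAR(MAX) =
            N'SELECT empid FROM dbo.Employees WHERE empid = @id AND (' + @cond + N')';

        EXEC sp_executesql @sql, N'@id INT', @id = @id;

    The same pattern can be repeated for the interests column and the two fragments combined with OR, matching the condition in the question.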

  • SQL: Is it quicker to insert sorted data into a table?

    - by AngryWhenHungry
    A table in Sybase has a unique varchar(32) column, and a few other columns. It is indexed on this column too. At regular intervals, I need to truncate it, and repopulate it with fresh data from other tables:

        insert into MyTable
        select list_of_columns
        from OtherTable
        where some_simple_conditions
        order by MyUniqueId

    If we are dealing with a few thousand rows, would it help speed up the insert if we have the order by clause for the select? If so, would this gain in time compensate for the extra time needed to order the select query? I could try this out, but currently my data set is small and the results don't say much.

    Read the article

< Previous Page | 527 528 529 530 531 532 533 534 535 536 537 538  | Next Page >