Search Results

Search found 28685 results on 1148 pages for 'query performance'.

Page 311/1148

  • Access - how to derive a field upon entry of a record

    - by jonos
    I have an Access database for a media rental company which includes the following tables, among others:

        LOAN:       customer_id (pk), loan_datetimeLeant (pk), loan_dateReturned (pk)
        LOAN_ITEMS: customer_id (pk), loan_datetimeLeant (pk), item_id (pk), loanItem_cost
        ITEM:       item_id (pk), product_id, item_availability
        PRODUCT:    product_id (pk), product_name, product_type
        MEDIA_COST: product_type (pk), product_cost

    So basically the product_type (DVD, VHS, etc.) determines the cost, and the product_id determines which movie, platform, etc. each item is. My question is: when creating a form for the LOAN_ITEMS table, how can I populate the loanItem_cost field (so it is stored) whenever a new LOAN_ITEMS record is added? I need to store it because the cost of the item (product_cost) may change over time and I'd like to record what the customer paid at the time the loan was made. Thanks in advance.
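
    One way to approach it (a sketch only; the event choice and control names are assumptions): look the cost up when the item is chosen and write it into the bound loanItem_cost field, so the value is saved with the record and untouched by later price changes.

        Private Sub item_id_AfterUpdate()
            ' hypothetical handler: when an item is picked, walk
            ' ITEM -> PRODUCT -> MEDIA_COST and store the current cost
            Dim prodId As Variant, prodType As Variant
            prodId = DLookup("product_id", "ITEM", "item_id=" & Me!item_id)
            prodType = DLookup("product_type", "PRODUCT", "product_id=" & prodId)
            Me!loanItem_cost = DLookup("product_cost", "MEDIA_COST", _
                "product_type='" & prodType & "'")
        End Sub

    This assumes numeric id columns and a text product_type; adjust the quoting in the DLookup criteria if the types differ.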


  • Optimizing MySQL for ALTER TABLE of InnoDB

    - by schuilr
    Sometime soon we will need to make schema changes to our production database. We need to minimize downtime for this effort; however, the ALTER TABLE statements are going to run for quite a while. Our largest tables have 150 million records, and the largest table file is 50G. All tables are InnoDB, and it was set up as one big data file (instead of a file-per-table). We're running MySQL 5.0.46 on an 8-core machine, 16G memory and a RAID10 config. I have some experience with MySQL tuning, but this usually focuses on reads or writes from multiple clients. There is lots of info to be found on the Internet on that subject; however, there seems to be very little information available on best practices for (temporarily) tuning your MySQL server to speed up ALTER TABLE on InnoDB tables, or for INSERT INTO .. SELECT FROM (we will probably use this instead of ALTER TABLE to have some more opportunities to speed things up a bit). The schema change we are planning is to add an integer column to all tables and make it the primary key, replacing the current primary key. We need to keep the 'old' column as well, so overwriting the existing values is not an option. What would be the ideal settings to get this task done as quickly as possible?
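
    For what it's worth, a minimal sketch of the session-level toggles often used for this kind of bulk rebuild (assumptions: relaxed durability during the window is acceptable and replication allows it; the biggest wins, innodb_buffer_pool_size and log file sizing, need a server restart on 5.0 and are not shown):

        -- session-level toggles for the copy connection
        SET SESSION unique_checks = 0;       -- skip secondary-index unique checks during load
        SET SESSION foreign_key_checks = 0;  -- skip FK validation during load
        SET SESSION sql_log_bin = 0;         -- omit the copy from the binary log

        -- relaxed durability while the rebuild runs (global; revert afterwards)
        SET GLOBAL innodb_flush_log_at_trx_commit = 2;

        -- ... the bulk INSERT INTO new_table SELECT ... FROM old_table goes here ...

        SET SESSION unique_checks = 1;
        SET SESSION foreign_key_checks = 1;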


  • Weighted Average with LINQ

    - by jsmith
    My goal is to get a weighted average from one table, based on another table's primary key. Example data:

        Table1
        Key   WEIGHTED_AVERAGE
        0200  0

        Table2
        ForeignKey  LENGTH  PCR
        0200        105     52
        0200        105     60
        0200        105     54
        0200        105     -1
        0200        47      55

    I need to get a weighted average based on the length of a segment, and I need to ignore values of -1. I know how to do this in SQL, but my goal is to do this in LINQ. It looks something like this in SQL:

        SELECT Sum(t2.PCR * t2.LENGTH) / Sum(t2.LENGTH) AS WEIGHTED_AVERAGE
        FROM Table1 t1, Table2 t2
        WHERE t2.PCR <> -1
        AND t2.ForeignKey = t1.Key;

    I am still pretty new to LINQ and having a hard time figuring out how I would translate this. The resulting weighted average should come out to roughly 55.3. Thank you.
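
    A sketch of one possible translation (assuming entity classes with Key, ForeignKey, Length and Pcr properties; untested against a real context):

        var result =
            from t1 in db.Table1
            join t2 in db.Table2 on t1.Key equals t2.ForeignKey
            where t2.Pcr != -1
            group t2 by t1.Key into g
            select new
            {
                Key = g.Key,
                // weighted average: sum of value*weight over sum of weights
                WeightedAverage = (double)g.Sum(x => x.Pcr * x.Length)
                                  / g.Sum(x => x.Length)
            };

    For the sample data this gives (105*52 + 105*60 + 105*54 + 47*55) / (105 + 105 + 105 + 47) = 20015 / 362 ≈ 55.3, matching the expected result.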


  • Fast Lightweight Image Comparison Metric Algorithm

    - by gav
    Hi All, I am developing an application for the Android platform which contains 1000+ image filters that have been 'evolved'. When a user selects a photo I want to present the most relevant filters first. This 'relevance' should depend on previous use cases. I have already developed tools that register when a filtered image is saved; this combination of filter and image can be seen as the training data for my system. The issue is that the comparison must occur between selecting an image and the next screen coming up. From a UI point of view I need the whole process to take less than 4 seconds: select an image, obtain a metric to use for similarity, check against use cases, return the 6 closest matches. I figure with 4 seconds I can use animations and progress dialogs to keep the user happy. Due to platform constraints I am fairly limited in the computational expense of the algorithm. I have implemented a technique adapted from various online tutorials for running C code on the G1, hence this language is available.

    Specific constraints:

    - Qualcomm® MSM7201A™, 528 MHz processor
    - 320 x 480 pixel bitmap in 32-bit ARGB
    - ~2 seconds of computational time for the native method to get the metric
    - ~2 seconds to compare the metric of the current image with the training data

    This is an academic project so all ideas are welcome; anything you can think of or have heard about would be of interest to me. My ideas: I want to keep the complexity down (O(n*m)?) by using pixel data only rather than a neighbourhood function. I was looking at using the colour histogram/greyscale histogram/texture/entropy of the image, combining them to make the measure. There will be an obvious loss of information, but I need the resultant metric to be substantially smaller than the memory footprint of the image (~0.512 MB). As I said, any ideas to direct my research would be fantastic. Kind regards, Gavin
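
    As a starting point, a minimal sketch (an illustration only, not the chosen design) of a native metric in the spirit described: reduce the ARGB frame to a 64-bin greyscale histogram, which at 64 floats (~256 bytes) is tiny next to the bitmap and can be compared to stored use-case metrics with, e.g., a sum of absolute differences.

        #include <stdint.h>

        void grey_histogram(const uint32_t *argb, int n_pixels, float hist[64])
        {
            int counts[64] = {0};
            int i;

            for (i = 0; i < n_pixels; i++) {
                uint32_t p = argb[i];
                int r = (p >> 16) & 0xFF, g = (p >> 8) & 0xFF, b = p & 0xFF;
                int grey = (r * 77 + g * 151 + b * 28) >> 8;  /* integer luma approx */
                counts[grey >> 2]++;                          /* 256 levels -> 64 bins */
            }
            for (i = 0; i < 64; i++)
                hist[i] = (float)counts[i] / n_pixels;        /* normalise */
        }

    A single pass over 320x480 pixels with integer arithmetic should sit comfortably inside the 2-second budget on the stated hardware.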


  • How to select top n rows from a datatable/dataview in asp.net

    - by skamale
    How to select top n rows from a DataTable/DataView in ASP.NET? Currently I am using the following code, passing the table and the number of rows to get the records, but is there a better way?

        public DataTable SelectTopDataRow(DataTable dt, int count)
        {
            DataTable dtn = dt.Clone();
            // guard against count exceeding the available rows
            for (int i = 0; i < count && i < dt.Rows.Count; i++)
            {
                dtn.ImportRow(dt.Rows[i]);
            }
            return dtn;
        }
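
    One shorter alternative worth knowing (assumes .NET 3.5+ with a reference to System.Data.DataSetExtensions; note that CopyToDataTable throws if the sequence is empty):

        // using System.Data; using System.Linq;
        DataTable top = dt.AsEnumerable()
                          .Take(count)        // first n rows, or fewer if the table is smaller
                          .CopyToDataTable();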


  • Problem measuring N times the execution time of a code block

    - by Nazgulled
    EDIT: I just found my problem after writing this long post explaining every little detail... If someone can give me a good answer on what I'm doing wrong and how I can get the execution time in seconds (using a float with 5 decimal places or so), I'll mark that as accepted. Hint: the problem was in how I interpreted the clock_gettime() man page.

    Hi, let's say I have a function named myOperation that I need to measure the execution time of. To measure it, I'm using clock_gettime() as was recommended here in one of the comments. My teacher recommends us to measure it N times so we can get an average, standard deviation and median for the final report. He also recommends us to execute myOperation M times instead of just once. If myOperation is a very fast operation, measuring it M times allows us to get a sense of the "real time" it takes, because the clock being used might not have the required precision to measure such an operation. So, executing myOperation only one time or M times really depends on whether the operation itself takes long enough for the clock precision we are using.

    I'm having trouble dealing with that M-times execution. Increasing M decreases (a lot) the final average value, which doesn't make sense to me. It's like this: on average you take 3 to 5 seconds to travel from point A to B. But then you go from A to B and back to A 5 times (which makes it 10 trips, because A to B is the same as B to A) and you measure that. Then you divide by 10, and the average you get is supposed to be the same average you get traveling from point A to B, which is 3 to 5 seconds. This is what I want my code to do, but it's not working. If I keep increasing the number of times I go from A to B and back to A, the average will be lower and lower each time; it makes no sense to me. Enough theory, here's my code:

        #include <stdio.h>
        #include <time.h>

        #define MEASUREMENTS 1
        #define OPERATIONS   1

        typedef struct timespec TimeClock;

        TimeClock diffTimeClock(TimeClock start, TimeClock end) {
            TimeClock aux;

            if ((end.tv_nsec - start.tv_nsec) < 0) {
                aux.tv_sec = end.tv_sec - start.tv_sec - 1;
                aux.tv_nsec = 1E9 + end.tv_nsec - start.tv_nsec;
            } else {
                aux.tv_sec = end.tv_sec - start.tv_sec;
                aux.tv_nsec = end.tv_nsec - start.tv_nsec;
            }

            return aux;
        }

        int main(void) {
            TimeClock sTime, eTime, dTime;
            int i, j;

            for (i = 0; i < MEASUREMENTS; i++) {
                printf(" » MEASURE %02d\n", i + 1);

                clock_gettime(CLOCK_REALTIME, &sTime);
                for (j = 0; j < OPERATIONS; j++) {
                    myOperation();
                }
                clock_gettime(CLOCK_REALTIME, &eTime);

                dTime = diffTimeClock(sTime, eTime);

                printf(" - NSEC (TOTAL): %ld\n", dTime.tv_nsec);
                printf(" - NSEC (OP): %ld\n\n", dTime.tv_nsec / OPERATIONS);
            }

            return 0;
        }

    Notes: the diffTimeClock function above is from this blog post. I replaced my real operation with myOperation() because it doesn't make any sense to post my real functions, as I would have to post long blocks of code; you can easily code a myOperation() with whatever you like to compile the code if you wish. As you can see, with OPERATIONS = 1 the results are:

        » MEASURE 01
        - NSEC (TOTAL): 27456580
        - NSEC (OP): 27456580

    For OPERATIONS = 100 the results are:

        » MEASURE 01
        - NSEC (TOTAL): 218929736
        - NSEC (OP): 2189297

    For OPERATIONS = 1000 the results are:

        » MEASURE 01
        - NSEC (TOTAL): 862834890
        - NSEC (OP): 862834

    For OPERATIONS = 10000 the results are:

        » MEASURE 01
        - NSEC (TOTAL): 574133641
        - NSEC (OP): 57413

    Now, I'm not a math wiz, far from it actually, but this doesn't make any sense to me whatsoever.
    I've already talked about this with a friend who's on this project with me and he also can't understand the differences. I don't understand why the value gets lower and lower when I increase OPERATIONS. The operation itself should take the same time (on average, of course, not the exact same time) no matter how many times I execute it. You could tell me that it actually depends on the operation itself, the data being read, and that some data could already be in the cache and so on, but I don't think that's the problem. In my case, myOperation is reading 5000 lines of text from a CSV file, separating the values by ; and inserting those values into a data structure. For each iteration, I'm destroying the data structure and initializing it again. Now that I think of it, I also think that there's a problem measuring time with clock_gettime(); maybe I'm not using it right. I mean, look at the last example, where OPERATIONS = 10000. The total time it took was 574133641 ns, which would be roughly 0.5 s; that's impossible, it took a couple of minutes, as I couldn't stand looking at the screen waiting and went to eat something.
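
    Given the hint above, a likely reading (an assumption, but consistent with the numbers): the printfs only look at dTime.tv_nsec, and struct timespec splits a duration into whole seconds (tv_sec) plus a sub-second remainder (tv_nsec), so every full second of runtime is silently discarded; that is also why the per-operation average appears to shrink as OPERATIONS grows. Combining both fields gives the duration in seconds directly:

        /* sketch: convert a timespec difference to seconds as a double */
        double timespecToSeconds(TimeClock t)
        {
            return (double)t.tv_sec + (double)t.tv_nsec / 1e9;
        }

        /* usage after diffTimeClock():
           printf(" - SEC (OP): %.5f\n", timespecToSeconds(dTime) / OPERATIONS); */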


  • How can I get page-fault statistics from the kernel

    - by osgx
    Hello. How can I get page-fault statistics from the kernel for my application while it is running? What about other events, like the inter-CPU migration count on SMP nodes, or the number of context switches? I want to count such events for various small parts of the program. Thanks.
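
    A minimal sketch of one portable option, getrusage(), which exposes fault and context-switch counters for the calling process; sampling it before and after a code region and subtracting gives per-region counts. (CPU migrations are not covered by it; on Linux those are usually read through the perf events interface instead.)

        #include <stdio.h>
        #include <sys/resource.h>

        void print_fault_stats(void)
        {
            struct rusage ru;

            getrusage(RUSAGE_SELF, &ru);
            printf("minor faults: %ld, major faults: %ld\n",
                   ru.ru_minflt, ru.ru_majflt);
            printf("voluntary ctx switches: %ld, involuntary: %ld\n",
                   ru.ru_nvcsw, ru.ru_nivcsw);
        }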


  • What are the internal limits to WCF

    - by ligos
    WCF has lots of limits imposed on it to protect against DoS attacks and other developer brainlessness. What are these limits? And how can they be overridden? Specifically, if I were to try to send a large number of messages in a short period of time to a variety of different clients, what are the internal WCF limits that may cause messages to be sent slowly? PS: this question is a much shorter version of my previous question that did not get much attention. EDIT: I have answered the original question. The issue was that my async calls were being turned into synchronous ones.
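
    For reference, a sketch of the server-side throttle that is usually the first suspect (the commented numbers are the commonly cited WCF 3.x defaults, not recommendations; add the behavior before host.Open()):

        // assumes using System.ServiceModel.Description; and a ServiceHost named host
        var throttle = new ServiceThrottlingBehavior
        {
            MaxConcurrentCalls = 16,       // 3.x default
            MaxConcurrentSessions = 10,    // 3.x default
            MaxConcurrentInstances = 100   // raise deliberately, not blindly
        };
        host.Description.Behaviors.Add(throttle);

    On the outgoing side, System.Net.ServicePointManager.DefaultConnectionLimit (2 per host by default) is another classic cap when many HTTP calls are in flight at once.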


  • Help with MySQL Query using CASE statement

    - by hairdresser-101
    I am trying to group a number of customers together based on their "Head Office" or "Parent" location. This works OK except for a flaw which I didn't foresee when I was developing my system... For customers that did not have a "Parent" (a standalone business) I defaulted the parent_id to 0. Therefore, my data would look like this:

        id  parent_id  customer
        1   0          CustName#1
        2   4          CustName#2 - Melbourne
        3   4          CustName#2 - Sydney
        4   0          CustName#2 (Head Office)

    What I want to do is group my results together so that I have one row for CustName#1 and one row for CustName#2, BUT my problem is that there is no parent record for parent_id = 0, and these rows are being excluded when using an inner join. I've tried using a CASE statement but that is not working either (parents are still being ignored). Any help would be greatly appreciated. Here is my query (my CASE is basically trying to get the business_name from the customer table based on the parent_id, EXCEPT when the parent_id = 0, in which case it should just use the customer_name that is listed in the job_summary table):

        SELECT js.month_of_year,
               (CASE js.parent_id
                    WHEN 0 THEN js.customer_name
                    ELSE c.business_name
                END) AS customer,
               SUM(js.jobs), SUM(js.total_cost), SUM(js.total_sell)
        FROM JOB_SUMMARY js
        INNER JOIN customer c ON js.parent_id = c.id
        GROUP BY js.month_of_year,
                 (CASE c.parent_id
                      WHEN 0 THEN js.customer_name
                      ELSE c.business_name
                  END)
        ORDER BY `customer` ASC
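
    A sketch of the usual fix for this shape of problem (untested against this schema): an INNER JOIN drops the rows that have no matching parent, while a LEFT JOIN keeps them and lets COALESCE fall back to the row's own name. MySQL also accepts the select alias in GROUP BY, which keeps the expression in one place:

        SELECT js.month_of_year,
               COALESCE(c.business_name, js.customer_name) AS customer,
               SUM(js.jobs), SUM(js.total_cost), SUM(js.total_sell)
        FROM   JOB_SUMMARY js
        LEFT JOIN customer c ON js.parent_id = c.id
        GROUP BY js.month_of_year, customer
        ORDER BY customer ASC;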


  • How to do multiple replaces throughout a table in MS Access

    - by silverkid
    I am a little confused about the best way to replace all occurrences of the following with the question mark (?) character in all columns of TableA:

    1. blanks
    2. -
    3. NA

    Sample row in the original TableA:

        444586  RAUR  <blank>  8  570  NA  -  13  -  SCHS299  MP  339  70  EN  <blank>

    Same row in the expected TableA:

        444586  RAUR  ?  8  570  ?  ?  13  ?  SCHS299  MP  339  70  EN  ?

    Please help me out; I can't use the Find/Replace toolbar of Access.
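
    A sketch of one approach in Access VBA (an assumption: all affected columns are text, since a numeric column would reject the '?'); it loops over the table's fields and normalises nulls, empty strings, "-" and "NA" in one UPDATE per column:

        Dim db As DAO.Database, fld As DAO.Field, sql As String
        Set db = CurrentDb
        For Each fld In db.TableDefs("TableA").Fields
            sql = "UPDATE TableA SET [" & fld.Name & "] = '?' " & _
                  "WHERE [" & fld.Name & "] IS NULL " & _
                  "OR Trim([" & fld.Name & "] & '') = '' " & _
                  "OR [" & fld.Name & "] IN ('-','NA')"
            db.Execute sql, dbFailOnError
        Next fld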


  • Very slow Eclipse 4.2, how to make it more responsive?

    - by Laurent
    I'm using Eclipse PDT on a rather large PHP project and the IDE is almost unusable. It takes nearly 30 seconds to open a file, and other actions, like selecting a folder in the file explorer, editing some text, etc., are equally slow. I followed various instructions to speed it up but nothing seems to work. This is my current eclipse.ini file. Any idea how I can improve it?

        -startup
        plugins/org.eclipse.equinox.launcher_1.3.0.v20120522-1813.jar
        --launcher.library
        plugins/org.eclipse.equinox.launcher.win32.win32.x86_1.1.200.v20120522-1813
        -showsplash
        org.eclipse.platform
        --launcher.XXMaxPermSize
        256m
        --launcher.defaultAction
        openFile
        -vmargs
        -server
        -Dosgi.requiredJavaVersion=1.7
        -Xmn128m
        -Xms1024m
        -Xmx1024m
        -Xss2m
        -XX:PermSize=128m
        -XX:MaxPermSize=128m
        -XX:+UseParallelGC

    System: Eclipse 4.2.0, Windows 7, 4 GB RAM


  • Hibernate query for multiple items in a collection

    - by aarestad
    I have a data model that looks something like this:

        public class Item {
            private List<ItemAttribute> attributes;
            // other stuff
        }

        public class ItemAttribute {
            private String name;
            private String value;
        }

    (This obviously simplifies away a lot of the extraneous stuff.) What I want to do is create a query to ask for all Items with one OR MORE particular attributes, ideally joined with arbitrary ANDs and ORs. Right now I'm keeping it simple and just trying to implement the AND case. In pseudo-SQL (or pseudo-HQL if you would), it would be something like:

        select all items where
            attributes contains(ItemAttribute(name="foo1", value="bar1"))
            AND attributes contains(ItemAttribute(name="foo2", value="bar2"))

    The examples in the Hibernate docs didn't seem to address this particular use case, but it seems like a fairly common one. The disjunction case would also be useful, especially so I could specify a list of possible values, i.e.

        where attributes contains(ItemAttribute(name="foo", value="bar1"))
           OR attributes contains(ItemAttribute(name="foo", value="bar2"))
           -- etc.

    Here's an example that works OK for a single attribute:

        return getSession().createCriteria(Item.class)
            .createAlias("itemAttributes", "ia")
            .add(Restrictions.conjunction()
                .add(Restrictions.eq("ia.name", "foo"))
                .add(Restrictions.eq("ia.attributeValue", "bar")))
            .list();

    Learning how to do this would go a long way towards expanding my understanding of Hibernate's potential. :)
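
    One sketch that may be worth trying (untested; it assumes the mapped collection and property names from the Criteria example, itemAttributes and attributeValue): HQL lets you join the same collection once per required attribute, so each join can match a different element of the collection.

        List items = getSession().createQuery(
            "select distinct i from Item i " +
            "join i.itemAttributes a1 " +
            "join i.itemAttributes a2 " +
            "where a1.name = :n1 and a1.attributeValue = :v1 " +
            "  and a2.name = :n2 and a2.attributeValue = :v2")
            .setParameter("n1", "foo1").setParameter("v1", "bar1")
            .setParameter("n2", "foo2").setParameter("v2", "bar2")
            .list();

    For the disjunction case, a single join with an in-list should do: join i.itemAttributes a1 where a1.name = :n and a1.attributeValue in (:vals), bound with setParameterList("vals", ...).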


  • Good .NET library for fast streaming / batching trigonometry (Atan)?

    - by Sean
    I need to call Atan on millions of values per second. Is there a good library to perform this operation in batch very fast? For example, a library that streams the low-level logic using something like SSE? I know that there is support for this in OpenCL, but I would prefer to do this operation on the CPU; the target machine might not support OpenCL. I also looked into using OpenCV, but its accuracy for Atan angles is only ~0.3 degrees. I need accurate results.
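
    Not SSE, but worth having as a baseline sketch: plain Math.Atan parallelised across cores with the TPL keeps full double accuracy and often reaches "millions per second" on its own, which makes it a useful yardstick while evaluating native/SIMD options.

        using System;
        using System.Threading.Tasks;

        static double[] AtanBatch(double[] input)
        {
            // each element is independent, so the loop partitions cleanly across cores
            var output = new double[input.Length];
            Parallel.For(0, input.Length, i => output[i] = Math.Atan(input[i]));
            return output;
        }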


  • Search & Replace SQL

    - by Shonna
    I am messing around with one of my databases... is there a way for me to search for a string in ALL the tables and replace it with another everywhere it occurs? I am looking for SQL.
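
    There is no single standard statement that covers every table, but per table/column the usual idiom is the sketch below; the full sweep is typically generated by iterating the catalog (e.g. information_schema in MySQL) and emitting one UPDATE per text column:

        UPDATE some_table
        SET    some_column = REPLACE(some_column, 'old string', 'new string')
        WHERE  some_column LIKE '%old string%';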


  • Stored Procedure to create INSERT statements in MySQL?

    - by karthik
    I need a stored procedure to get the records of a table and return them as INSERT statements for the selected records. For instance, the stored procedure should have three input parameters:

    1. Table name
    2. Column name
    3. Column value

    If table name = "EMP", column name = "EMPID" and column value = "15", then the procedure should select all the values of EMP where EMPID is 15. Once the values are selected for the above condition, the stored procedure must return the script for inserting the selected values. The purpose of this is to take a backup of selected values: when the SP returns a value {INSERT statements}, C# will just write them to a .sql file. I have no idea about writing this SP; any samples of code are appreciated. Thanks.
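
    A rough sketch of how such a procedure can be put together (untested; it assumes text-friendly columns, ignores edge cases like binary data, and may need group_concat_max_len raised for wide tables). QUOTE() handles the escaping and renders NULLs as NULL:

        DELIMITER //
        CREATE PROCEDURE backup_rows(IN p_table  VARCHAR(64),
                                     IN p_column VARCHAR(64),
                                     IN p_value  VARCHAR(255))
        BEGIN
            DECLARE v_cols TEXT;

            -- build "QUOTE(`c1`), ',', QUOTE(`c2`), ..." for the table's columns
            SELECT GROUP_CONCAT(CONCAT('QUOTE(`', column_name, '`)')
                                ORDER BY ordinal_position SEPARATOR ', '','', ')
              INTO v_cols
              FROM information_schema.columns
             WHERE table_schema = DATABASE() AND table_name = p_table;

            -- assemble and run: one INSERT statement string per matching row
            SET @sql = CONCAT(
                'SELECT CONCAT(''INSERT INTO `', p_table, '` VALUES ('', ',
                v_cols,
                ', '');'') AS insert_stmt FROM `', p_table,
                '` WHERE `', p_column, '` = ', QUOTE(p_value));

            PREPARE stmt FROM @sql;
            EXECUTE stmt;
            DEALLOCATE PREPARE stmt;
        END //
        DELIMITER ;

    Usage would then be, e.g., CALL backup_rows('EMP', 'EMPID', '15'); with C# reading the insert_stmt result column into the .sql file.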


  • Join Query Not Working

    - by John
    Hello, I am using three MySQL tables:

        comment:    commentid, loginid, submissionid, comment, datecommented
        login:      loginid, username, password, email, actcode, disabled, activated, created, points
        submission: submissionid, loginid, title, url, displayurl, datesubmitted

    In these three tables, the "loginid" columns correspond. I would like to pull the top 10 loginids based on the number of "submissionid"s. I would like to display them in a 3-column HTML table that shows the "username" in the first column, the number of "submissionid"s in the second column, and the number of "commentid"s in the third column. I tried using the query below but it did not work. Any idea why not? Thanks in advance, John

        $sqlStr = "SELECT l.username, l.loginid,
                          c.commentid, count(s.commentid) countComments, c.comment, c.datecommented,
                          s.submissionid, count(s.submissionid) countSubmissions, s.title, s.url,
                          s.displayurl, s.datesubmitted
                   FROM comment AS c
                   INNER JOIN login AS l ON c.loginid = l.loginid
                   INNER JOIN submission AS s ON c.loginid = s.loginid
                   GROUP BY c.loginid
                   ORDER BY countSubmissions DESC
                   LIMIT 10";

        $result = mysql_query($sqlStr);
        $arr = array();

        echo "<table class=\"samplesrec1\">";
        while ($row = mysql_fetch_array($result)) {
            echo '<tr>';
            echo '<td class="sitename1"><a href="http://www...com/.../members/index.php?profile='.$row["username"].'">'.stripslashes($row["username"]).'</a></td>';
            echo '<td class="sitename1">'.stripslashes($row["countSubmissions"]).'</td>';
            echo '<td class="sitename1">'.stripslashes($row["countComments"]).'</td>';
            echo '</tr>';
        }
        echo "</table>";
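
    Two things stand out on a first read (assumptions from the schema above): commentid lives on the comment alias c, not s, and joining comments × submissions per loginid multiplies the rows, so both counts come out inflated. A sketch that sidesteps both by aggregating each table in its own subquery:

        SELECT l.username,
               COALESCE(s.cnt, 0) AS countSubmissions,
               COALESCE(c.cnt, 0) AS countComments
        FROM   login l
        LEFT JOIN (SELECT loginid, COUNT(*) AS cnt
                   FROM submission GROUP BY loginid) s ON s.loginid = l.loginid
        LEFT JOIN (SELECT loginid, COUNT(*) AS cnt
                   FROM comment    GROUP BY loginid) c ON c.loginid = l.loginid
        ORDER BY countSubmissions DESC
        LIMIT 10;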


  • SQL select statement

    - by kwokwai
    Hi all, I have a table with two fields, Point and Level, with some sample data as follows:

        Point | Level
        ------+--------
        10    | Level 1
        20    | Level 2
        30    | Level 3
        40    | Level 4

    Suppose there is a user who has 25 points. To find the level this user is in, the SELECT statement I used was:

        SELECT Level FROM Table WHERE Point < 30 AND Point > 20;

    But this SELECT statement is hard-coded: you can see the points 30 and 20 are fixed. I want to alter the SELECT statement so that the new statement can be applied to all users with different points, but I don't know how to do it.
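
    If Point is read as the lower bound of each level (an assumption from the sample data), one parameter is enough: pick the highest threshold not exceeding the user's points. With 25 points this returns Level 2.

        -- @userPoints stands in for the user's score (25 in the example);
        -- LevelTable is a hypothetical name for the table above
        SELECT Level
        FROM   LevelTable
        WHERE  Point <= @userPoints
        ORDER BY Point DESC
        LIMIT  1;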


  • Javascript clears a variable after there is no further reference to it

    - by Praveen Prasad
    It is said that JavaScript clears a variable from memory after it is referenced for the last time. Just for the sake of this question I created a JS file with only one variable:

        // file start

        // variable defined
        var a = ["Hello"];

        // reference to that variable
        alert(a[0]);

        // file end

    There is no further reference to that variable, so I expected JavaScript to clear variable 'a'. Now I just ran this page, then opened Firebug and ran this code:

        alert(a[0]);

    And it alerts the value of the variable. If the statement "JavaScript clears a variable after there is no further reference to it" is true, how come alert() shows its value? Is it because all variables defined in the global context become properties of the window object, and since the window object still exists after the file has executed, so do its properties?
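
    A quick sketch that makes the suspected mechanism visible in a browser console:

        var a = ["Hello"];
        console.log(window.a === a);  // true: a global var is a property of window,
                                      // so it stays reachable as long as window does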


  • How many layers are between my program and the hardware?

    - by sub
    I somehow have the feeling that modern systems, including runtime libraries, this exception handler and that built-in debugger, build up more and more layers between my (C++) programs and the CPU/rest of the hardware. I'm thinking of something like this:

        1 + 2
        - OS top layer
        - Runtime library/helper/error handler
        - a hell of a lot of DLL modules
        - OS kernel layer
        - "Do you really want to run 1 + 2?" Windows popup (don't take this seriously)
        - OS kernel layer
        - Hardware abstraction
        - Hardware
        - Go through at least 100 miles of circuits
        - Eventually arrive at the CPU
        ADD 1, 2
        - Go all the way back to my program

    Nearly all the technical details here are simply wrong and in some random order, but you get my point, right? How much longer/shorter is this chain when I run a C++ program that calculates 1 + 2 at runtime on Windows? How about when I do this in an interpreter (Python|Ruby|PHP)? Is this chain really as dramatic in reality? Does Windows really try "not to stand in the way"? E.g.: direct connection my binary < hardware?


  • Threading cost - minimum execution time when threads would add speed

    - by Lukas
    I am working on a C# application that works with an array. It walks through it (meaning that at any one time only a narrow part of the array is used). I am considering adding threads to make it perform faster (it runs on a dual-core computer). The problem is that I do not know if that would actually help, because threads cost something, and this cost could easily be more than the parallel gain... So how do I determine if threading would help?
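
    The honest answer is usually to measure both variants on representative data; a minimal sketch (ProcessSequential, ProcessParallel and data are hypothetical stand-ins for the two versions of the array walk and its input):

        var sw = System.Diagnostics.Stopwatch.StartNew();
        ProcessSequential(data);                 // the existing single-threaded walk
        sw.Stop();
        Console.WriteLine("sequential: {0} ms", sw.ElapsedMilliseconds);

        sw.Restart();
        ProcessParallel(data);                   // the same walk split across two threads
        sw.Stop();
        Console.WriteLine("parallel:   {0} ms", sw.ElapsedMilliseconds);

    If the chunks handed to each thread are too small, synchronisation and cache effects can eat the gain, which is exactly what such a measurement will reveal.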


  • MySQL table data transformation -- how can I dis-aggregate MySQL time data?

    - by lighthouse65
    We are coding for a MySQL data warehousing application that stores descriptive data (User ID, Work ID, Machine ID, Start and End Time columns in the first table below) associated with time and production quantity data (Output and Time columns in the first table below) upon which aggregate (SUM, COUNT, AVG) functions are applied. We now wish to dis-aggregate time data for another type of analysis. Our current data table design:

        +---------+---------+------------+---------------------+---------------------+--------+------+
        | User ID | Work ID | Machine ID | Event Start Time    | Event End Time      | Output | Time |
        +---------+---------+------------+---------------------+---------------------+--------+------+
        | 080025  | ABC123  | M01        | 2008-01-24 16:19:15 | 2008-01-24 16:34:45 | 2120   | 930  |
        +---------+---------+------------+---------------------+---------------------+--------+------+

    The dis-aggregation reprocessing we would like to do would transform table content based on a granularity of minutes, rather than the current production-event ("Event Start Time" and "Event End Time") granularity. The resulting reprocessing of existing table rows would look like:

        +---------+---------+------------+-------------------+--------+
        | User ID | Work ID | Machine ID | Production Minute | Output |
        +---------+---------+------------+-------------------+--------+
        | 080025  | ABC123  | M01        | 2008-01-24 16:19  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:20  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:21  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:22  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:23  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:24  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:25  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:26  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:27  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:28  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:29  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:30  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:31  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:32  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:33  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:34  | 133    |
        +---------+---------+------------+-------------------+--------+

    So the reprocessing would take an existing row of data created at the granularity of a production event and modify the granularity to minutes, eliminating the now-redundant (Event End Time, Time) columns while doing so. It assumes a constant rate of production and divides Output by the difference in minutes plus one to populate the new table's Output column. I know this can be done in code... but can it be done entirely in a MySQL INSERT statement (or otherwise entirely in MySQL)? I am thinking of an INSERT ... SELECT construction but keep getting stuck. An additional complexity is that there are hundreds of machines to include in the operation, so there will be multiple rows (one for each machine) for each minute of the day. Any ideas would be much appreciated. Thanks.
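
    It can be done in one statement given a helper table of integers, a common warehouse fixture (all names here are assumptions; untested sketch). Each event row joins once per elapsed minute, offsetting the start time by that many minutes:

        -- assumes events(user_id, work_id, machine_id, event_start_time,
        --                event_end_time, output) and a helper table integers(i)
        -- holding 0, 1, 2, ... up to the longest event length in minutes
        INSERT INTO production_minutes
               (user_id, work_id, machine_id, production_minute, output)
        SELECT e.user_id, e.work_id, e.machine_id,
               DATE_FORMAT(e.event_start_time + INTERVAL n.i MINUTE, '%Y-%m-%d %H:%i'),
               e.output / (TIMESTAMPDIFF(MINUTE, e.event_start_time, e.event_end_time) + 1)
        FROM   events e
        JOIN   integers n
          ON   n.i <= TIMESTAMPDIFF(MINUTE, e.event_start_time, e.event_end_time);

    For the sample row this fans out to 16 minutes at 2120/16 = 132.5 each (rounding left to taste), and it handles all machines in one pass.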


  • devArt's dotConnect for Oracle vs. ODP.net/OCI performance

    - by Sieg
    Does anybody have any experience going from ODP.net to devArt's dotConnect for Oracle? Some initial testing is showing Direct Connect in 64-bit dotConnect running 30% slower at times than our original ODP.net/OCI 32-bit solution. Trying to determine if that's normal or if something may be wrong in my testing approach. Thanks!

