Search Results

Search found 28685 results on 1148 pages for 'query performance'.

  • C++ program runs slow in VS2008

    - by Nima
    I have a program written in C++ that opens a binary file (test.bin), reads it object by object, and puts each object into a new file (it opens the new file, appends to it, and closes it). I use fopen/fclose, fread and fwrite. test.bin contains 20,000 objects. This program runs under Linux with g++ in 1 second, but under VS2008, in both debug and release mode, it takes 1 minute! There are reasons why I can't process the objects in batches, keep them in memory, or apply any other kind of optimization. I just wonder why it is so much slower under Windows. Thanks,

    Read the article

  • Best method to select an object from another unknown jQuery object

    - by Yosi
    Let's say I have a jQuery object/collection stored in a variable named obj, which should contain a DOM element with the id target. I don't know in advance whether target will be a child of obj, i.e.:

        obj = $('<div id="parent"><div id="target"></div></div>');

    or whether obj equals target, i.e.:

        obj = $('<div id="target"></div>');

    or whether target is a top-level element inside obj, i.e.:

        obj = $('<div id="target"/><span id="other"/>');

    I need a way to select target from obj, but I don't know in advance when to use .find and when to use .filter. What would be the fastest and/or most concise method of extracting target from obj? What I've come up with is:

        var $target = obj.find("#target").add(obj.filter("#target"));

    UPDATE: I'm adding solutions to a JSPerf test page to see which one is the best. Currently my solution is still the fastest. Here is the link; please run the tests so that we'll have more data: http://jsperf.com/jquery-selecting-objects

    Read the article

  • Exploratory SPARQL queries?

    - by significance
    Whenever I start using SQL I tend to throw a couple of exploratory statements at the database in order to understand what is available, and what form the data takes, e.g.:

        show tables
        describe table
        select * from table

    Could anyone help me understand how to do a similar exploration of an RDF datastore using a SPARQL endpoint? Thanks :)

    Read the article

  • Find the highest number of occurrences in a column in SQL

    - by Ronnie
    Given this table (Order):

        custName   description   to_char(price)
        A          desa          $14
        B          desb          $14
        C          desc          $21
        D          desd          $65
        E          dese          $21
        F          desf          $78
        G          desg          $14
        H          desh          $21

    I am trying to display the whole rows where the prices have the highest number of occurrences, in this case $14 and $21. I believe there needs to be a subquery, so I started out with this:

        select max(count(price)) from orders group by price

    which gives me 3. After some time I didn't think that was helpful; I believe I need the values 14 and 21 rather than the count, so I can put them in the WHERE clause. But I'm stuck on how to display that. Any help?
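
    A sketch of one way to finish that idea, assuming the table is named orders as in the attempted query and an Oracle-style dialect (suggested by to_char): keep the rows whose price count equals the maximum count.

        SELECT o.*
        FROM orders o
        WHERE o.price IN (
            SELECT price
            FROM orders
            GROUP BY price
            HAVING COUNT(*) = (SELECT MAX(COUNT(*)) FROM orders GROUP BY price)
        );

    The nested MAX(COUNT(*)) is Oracle-specific; in other dialects the innermost part would be written as a derived table that counts per price first.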

    Read the article

  • Efficient data structure design

    - by Sway
    Hi there, I need to match a series of user-entered words against a large dictionary of words (to ensure the entered value exists). So if the user entered "orange" it should match the entry "orange" in the dictionary. Now the catch is that the user can also enter a wildcard or a series of wildcard characters, say "or__ge", which would also match "orange". The key requirements are:

        * This should be as fast as possible.
        * It should use the smallest amount of memory possible.

    If the size of the word list were small I could use a string containing all the words and use regular expressions. However, given that the word list could potentially contain hundreds of thousands of entries, I'm assuming this wouldn't work. So is some sort of 'tree' the way to go for this...? Any thoughts or suggestions on this would be totally appreciated! Thanks in advance, Matt

    Read the article

  • Combine SQL statement

    - by ninumedia
    I have 3 tables (follows, postings, users):

        follows has 2 fields  - profile_id, following_id
        postings has 3 fields - post_id, profile_id, content
        users has 3 fields    - profile_id, first_name, last_name

    I have a follows.profile_id value of 1 that I want to match against. When I run the SQL statement below I get the first step in obtaining the correct data. However, I now want to match the postings.profile_id of this result set against the users table, so that the names (first and last) are displayed as well for all the listed postings. Thank you for your help! :) Ex:

        SELECT * FROM follows
        JOIN postings ON follows.following_id = postings.profile_id
        WHERE follows.profile_id = 1
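
    A minimal sketch of one way to pull the names in, assuming users.profile_id lines up with postings.profile_id as described above, is simply to add a second JOIN:

        SELECT postings.*, users.first_name, users.last_name
        FROM follows
        JOIN postings ON follows.following_id = postings.profile_id
        JOIN users ON postings.profile_id = users.profile_id
        WHERE follows.profile_id = 1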

    Read the article

  • Filtering out specific objects from a search query in Alfresco using Java

    - by Snowright
    I have a HashSet containing all the groups I've retrieved from my database. I've been asked to filter this result by removing two specific groups. It seems trivial, but I can't seem to come up with a solid solution for storing the specific groups I want to filter out. My idea is to just create an array containing references to the two groups I need to filter out; I can then filter the search results against whatever is in the array. My concern is that in the future they may ask to filter out more groups, and maybe an array is not a good idea.

        // The groups to filter out
        String[] hiddenGroups = {"group1", "group2"};

        // Retrieve all groups
        Set<String> allGroups = new HashSet<String>();
        allGroups.addAll(authorityService.getAllAuthorities(AuthorityType.GROUP));

        List<String> results = new ArrayList<String>();

        // Filter out the specified groups
        for (String group : allGroups) {
            boolean isHidden = false;
            for (String hiddenGroup : hiddenGroups) {
                if (hiddenGroup.equalsIgnoreCase(group)) {
                    isHidden = true;
                }
            }
            if (!isHidden) {
                results.add(group);
            }
        }

    Read the article

  • SQL select all items of an owner from an item-to-owner table

    - by kdobrev
    I have a table bike_to_owner. I would like to select the items currently owned by a specific user. The table structure is:

        CREATE TABLE IF NOT EXISTS `bike_to_owner` (
          `bike_id` int(10) unsigned NOT NULL,
          `user_id` int(10) unsigned NOT NULL,
          `last_change_date` date NOT NULL,
          PRIMARY KEY (`bike_id`,`user_id`,`last_change_date`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

    On the user's profile page I would like to display all of his/her current possessions. I wrote this statement:

        SELECT `bike_id`, `user_id`, max(last_change_date)
        FROM `bike_to_owner`
        WHERE `user_id` = 3
        GROUP BY `last_change_date`

    but I'm not quite sure it works correctly in all cases. Can you please verify that this is correct and, if not, suggest something better? Using PHP/MySQL. Thanks in advance!
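
    A sketch of an alternative, assuming "current" means that the bike's most recent bike_to_owner row belongs to this user (so a bike later transferred to someone else is excluded):

        SELECT b.bike_id
        FROM bike_to_owner b
        WHERE b.user_id = 3
          AND b.last_change_date = (SELECT MAX(o.last_change_date)
                                    FROM bike_to_owner o
                                    WHERE o.bike_id = b.bike_id);

    Note that the original GROUP BY groups by date rather than by bike, so bikes that changed hands on the same day would be collapsed into one row.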

    Read the article

  • Tracking down slow managed DLL loading

    - by Alex K
    I am faced with the following issue, and at this point I feel like I'm severely lacking some sort of tool; I just don't know what that tool is or what exactly it should be doing. Here is the setup: I have a 3rd-party DLL that has to be registered in the GAC. This all works fine on pretty much every machine our software was deployed on before. But now we have 2 machines, seemingly identical to the ones we know work (they are cloned from the same image and stuffed with the same hardware, so pretty much the only difference is software settings, which I have gone over and over, and they seem fine). Now the problem: the DLL in the GAC takes a very long time to load. At least I believe this is the issue; what I can say definitively is that instantiating a single class from that DLL is the slow part. Once it is loaded, things fly as they always have. But while on the known-good machines the DLL loads so fast that the timestamp in the log doesn't even change, on these 2 machines it takes over 1 minute to load. Knowns:

        * I have no access to the source, so I can't debug through the DLL.
        * Our app is the only one that uses it (so there shouldn't be simultaneous access issues).
        * There is only one version of this DLL in existence, so it shouldn't be a matter of version conflict.
        * The GAC reference is being used (if I uninstall the DLL from the GAC, an exception is thrown about the missing GAC reference).

    Could someone with greater skill in debug-fu suggest what I can do to track down the root cause of this issue?

    Read the article

  • Efficient way to delete a line from a text file (C#)

    - by Valentin Vasilyev
    Hello. I need to delete a certain line from a text file. What is the most efficient way of doing this? The file can potentially be large (over a million records). Thank you. UPDATE: below is the code I'm currently using, but I'm not sure if it is good.

        internal void DeleteMarkedEntries()
        {
            // Stream the log into a temp file, skipping the lines marked for deletion,
            // then swap the temp file in for the original.
            string tempPath = Path.GetTempFileName();
            using (var reader = new StreamReader(logPath))
            {
                using (var writer = new StreamWriter(File.OpenWrite(tempPath)))
                {
                    int counter = 0;
                    while (!reader.EndOfStream)
                    {
                        // Always consume the line so the counter stays in step with the file
                        string line = reader.ReadLine();
                        if (!_deletedLines.Contains(counter))
                        {
                            writer.WriteLine(line);
                        }
                        ++counter;
                    }
                }
            }
            if (File.Exists(tempPath))
            {
                File.Delete(logPath);
                File.Move(tempPath, logPath);
            }
        }

    Read the article

  • JOIN two tables to show already purchased items

    - by Norbert
    I have a table where I keep all my templates:

        templates: template_id, template_name, template_price

    These templates can be purchased by a registered user, and each purchase is inserted in the payments table:

        payments: payment_id, template_id, user_id

    Is there a way to join these two tables and get not just a list of the templates that have been purchased by a certain user, but all the templates? And then figure out from there which ones have already been purchased? I used this SELECT, but only the ones the user bought showed up. I would like to have all the rows from templates, but empty where the user_id doesn't match.

        SELECT * FROM templates
        LEFT JOIN payments ON templates.template_id = payments.template_id
        WHERE user_id = 2
        GROUP BY templates.template_id
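
    A sketch of one common fix, assuming the goal is one row per template with the payment columns left NULL when user 2 has not bought it: move the user filter from the WHERE clause into the join condition, so the unmatched templates are not discarded.

        SELECT templates.template_id,
               templates.template_name,
               templates.template_price,
               payments.payment_id      -- NULL when this user has not purchased the template
        FROM templates
        LEFT JOIN payments
               ON templates.template_id = payments.template_id
              AND payments.user_id = 2;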

    Read the article

  • C#/WPF FileSystemWatcher on every extension on every path

    - by BlueMan
    I need a FileSystemWatcher that can observe specific paths and specific extensions. But there could be dozens, hundreds or maybe thousands of paths (hope not :P), and the same goes for extensions. The paths and extensions are added by the user. Creating hundreds of FileSystemWatchers is not a good idea, is it? So - how should I do it? Is it possible to watch every device (HDDs, SD cards, pen drives, etc.)? Would that be efficient? I don't think so... Every change to a Windows log file, or every file scanned by an antivirus program, could really slow down my program through such a watcher :(

    Read the article

  • SQLite: How to create an index in an attached DB?

    - by kappa
    I have a problem with adding an index to a memory database attached to the main database.

        1) I open the database (F) from a file
        2) Attach the :memory: (M) database
        3) Create tables in database M
        4) Copy data from F to M

    I would also like to create an index in database M, but I don't know how to do that. This code creates the index, but in the F database:

        sQuery = "CREATE INDEX IF NOT EXISTS [INDID] ON [PANEL]([ID] ASC);";

    I tried to add the name qualifier before the table name, like this:

        sQuery = "CREATE INDEX IF NOT EXISTS [INDID] ON [M.PANEL]([ID] ASC);";

    but SQLite returns a message saying that the column main.M.PANEL does not exist. What can I do?
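
    A sketch of the syntax that should do it, assuming the PANEL table was created in the attached M database: in SQLite the schema qualifier goes on the index name rather than on the table name, and the index is then created in that schema.

        CREATE INDEX IF NOT EXISTS [M].[INDID] ON [PANEL]([ID] ASC);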

    Read the article

  • Should I aim for fewer HTTP requests or more cacheable CSS files?

    - by Jonathan Hanson
    We're being told that fewer HTTP requests per page load is a Good Thing. The extreme form of that for CSS would be to have a single, unique CSS file per page, with any shared site-wide styles duplicated in each file. But there's a trade-off there. If you have separate shared global CSS files, they can be cached once when the front page is loaded and then re-used on multiple pages, thereby reducing the necessary size of the page-specific CSS files. So which is better in real-world practice? Shorter CSS files through multiple discrete CSS files that are cacheable, or fewer HTTP requests through fewer-but-larger CSS files?

    Read the article

  • Why the difference in speed?

    - by AngryHacker
    Consider this code:

        Function Foo(ds As OtherDLL.BaseObj)
            Dim lngRowIndex As Long
            Dim lngColIndex As Long
            For lngRowIndex = 1 To UBound(ds.Data, 2)
                For lngColIndex = 1 To ds.Columns.Count
                    Debug.Print ds.Data(lngRowIndex, lngColIndex)
                Next
            Next
        End Function

    OK, a little context. The parameter ds is of type OtherDLL.BaseObj, which is defined in a referenced ActiveX DLL. ds.Data is a variant 2-dimensional array (one dimension carries the data, the other carries the column index). ds.Columns is a Collection of the columns in ds.Data. Assuming there are at least 400 rows of data and 25 columns, this code takes about 15 seconds to run on my machine. Kind of unbelievable. However, if I copy the variant array to a local variable, like so:

        Function Foo(ds As OtherDLL.BaseObj)
            Dim lngRowIndex As Long
            Dim lngColIndex As Long
            Dim v As Variant
            v = ds.Data
            For lngRowIndex = 1 To UBound(v, 2)
                For lngColIndex = 1 To ds.Columns.Count
                    Debug.Print v(lngRowIndex, lngColIndex)
                Next
            Next
        End Function

    the entire thing processes in barely any noticeable time (basically close to 0). Why?

    Read the article

  • How to use GROUP BY for grouping varchar data

    - by Shantanu Gupta
    I have a table that contains the data given below:

        pk_map_id   preferences   ImmediateParent   Department_Id
        20          14            5                 1
        21          15            5                 1
        22          16            6                 1
        23          9             4                 2
        24          4             3                 2
        25          24            20                2
        26          25            20                2
        27          23            13                2

    I want to group my records by department, then by immediate parent, then by preferences, with each list separated by ',', i.e.:

        department   Immediate Parent   preferences
        1            5,6                14,15,16
        2            4,3,20,13          9,4,24,25,23

    and also this table:

        Immediate parent   preferences
        5                  14,15
        6                  16
        4                  9
        3                  4
        20                 24,25
        13                 13

    In the actual scenario all of these are IDs which are to be replaced by their string fields. I am using SQL Server 2005.
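
    On SQL Server 2005 one common way to build such comma-separated lists is FOR XML PATH('') combined with STUFF. A sketch for the per-immediate-parent grouping, assuming a hypothetical table name of PrefMap (substitute the real table name):

        -- PrefMap is a placeholder for the real table name
        SELECT m.ImmediateParent,
               STUFF((SELECT ',' + CAST(p.preferences AS varchar(20))
                      FROM PrefMap p
                      WHERE p.ImmediateParent = m.ImmediateParent
                      FOR XML PATH('')), 1, 1, '') AS preferences
        FROM (SELECT DISTINCT ImmediateParent FROM PrefMap) m;

    The per-department result is the same pattern grouped on Department_Id, with one STUFF(... FOR XML PATH('')) subquery per concatenated column.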

    Read the article

  • WCF data services (OData), query with inheritance limitation?

    - by Mathieu Hétu
    Project: a WCF Data Services service using the EF4 CTP5 code-first approach internally. I configured entities with inheritance (TPH). See my previous question on this topic: Previous question about multiple entities - same table. The mapping works well, and unit tests over EF4 confirm that queries run smoothly. My entities look like this:

        ContactBase (abstract)
        Customer (inherits from ContactBase); this entity also has several navigation properties toward other entities
        Resource (inherits from ContactBase)

    I have configured a discriminator, so both Customer and Resource map to the same table. Again, everything works fine from the EF4 point of view (unit tests all green!). However, when exposing this DbContext over WCF Data Services, I get:

        - CustomerBases sets exposed (the Customers and Resources sets seem hidden; is that by design?)
        - When I query Customers over OData, I get this error:

        Navigation Properties are not supported on derived entity types. Entity Set 'ContactBases' has a instance of type 'CodeFirstNamespace.Customer', which is an derived entity type and has navigation properties. Please remove all the navigation properties from type 'CodeFirstNamespace.Customer'.

    Stacktrace:

        at System.Data.Services.Serializers.SyndicationSerializer.WriteObjectProperties(IExpandedResult expanded, Object customObject, ResourceType resourceType, Uri absoluteUri, String relativeUri, SyndicationItem item, DictionaryContent content, EpmSourcePathSegment currentSourceRoot)
        at System.Data.Services.Serializers.SyndicationSerializer.WriteEntryElement(IExpandedResult expanded, Object element, ResourceType expectedType, Uri absoluteUri, String relativeUri, SyndicationItem target)
        at System.Data.Services.Serializers.SyndicationSerializer.<DeferredFeedItems>d__b.MoveNext()
        at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteItems(XmlWriter writer, IEnumerable`1 items, Uri feedBaseUri)
        at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteFeedTo(XmlWriter writer, SyndicationFeed feed, Boolean isSourceFeed)
        at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteFeed(XmlWriter writer)
        at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteTo(XmlWriter writer)
        at System.Data.Services.Serializers.SyndicationSerializer.WriteTopLevelElements(IExpandedResult expanded, IEnumerator elements, Boolean hasMoved)
        at System.Data.Services.Serializers.Serializer.WriteRequest(IEnumerator queryResults, Boolean hasMoved)
        at System.Data.Services.ResponseBodyWriter.Write(Stream stream)

    This seems like a limitation of WCF Data Services... is it? There is not much documentation on the web about WCF Data Services (OData) and inheritance. How can I get around this exception? I need these navigation properties on derived entities, and inheritance seems the only way to map 2 entities to the same table with EF4 CTP5... Any thoughts?

    Read the article

  • JMeter HTTP Cache Manager: unable to cache everything that is cached by the browser

    - by chinmay brahma
    I used the HTTP Cache Manager to cache the files that are cached by the browser. I am successful for some of the pages: the number of files cached in JMeter is equal to the number of files cached by the browser. But in some cases I found that the number of files being cached is smaller than the number of files cached by the browser: using JMeter I found only 5 files being cached, but in a real browser 12 files get cached. Thanks in advance.

    Read the article

  • Can this loop be sped up in pure Python?

    - by Noctis Skytower
    I was trying out an experiment with Python, trying to find out how many times it could add one to an integer in one minute's time. Assuming two computers are the same except for the speed of the CPUs, this should give an estimate of how fast some CPU operations may take for the computer in question. The code below is an example of a test designed to fulfill the requirements given above. This version is about 20% faster than the first attempt and 150% faster than the third attempt. Can anyone make any suggestions as to how to get the most additions in a minute's time span? Higher numbers are desirable. EDIT: This experiment is being written in Python 3.1 and is 15% faster than the fourth speed-up attempt.

        def start(seconds):
            import time, _thread
            def stop(seconds, signal):
                time.sleep(seconds)
                signal.pop()
            total, signal = 0, [None]
            _thread.start_new_thread(stop, (seconds, signal))
            while signal:
                total += 1
            return total

        if __name__ == '__main__':
            print('Testing the CPU speed ...')
            print('Relative speed:', start(60))

    Read the article

  • SQL query for an Access database needed

    - by masfenix
    Hey guys, first of all sorry, I can't log in using my Yahoo provider. Anyway, I have this problem; let me explain it to you, and then I'll show you a picture. I have an Access DB table. It has 'report id', 'recipient id', 'recipient name' and 'report req'. What the table "means" is whether the user of that report still requires it, or whether we can decommission it. Here is what the data looks like (company user IDs and usernames blocked out); check the link below, I can't post pictures because the Yahoo OpenID provider isn't working. So basically I need 3 select queries:

        1) Select all the reports where, for each report, ALL the users have said no to 'report req'. In plain English, I want a listing of all the reports that we have to decommission because no user wants them.
        2) Select all the reports where the report is required and the batchprintcopy is more than 0. This way we can see which reports need to be printed and save paper, instead of printing all the reports.
        3) A listing of all the reports where the 'report req' field is empty. I think I can figure this one out myself.

    This is using Access/VBA, and the data will be exported to an Excel spreadsheet. I just need a simple query if it exists, or an algorithm to do it quickly. I just tried making a "matrix" and it took about 2 hours to populate. https://docs.google.com/uc?id=0B2EMqbpeBpQkMTIyMzA5ZjMtMGQ3Zi00NzRmLWEyMDAtODcxYWM0ZTFmMDFk&hl=en_US
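
    A sketch of the first two queries in Access SQL, assuming a hypothetical table name of ReportRecipients, the column names described above, and that 'report req' holds the text 'Yes'/'No' (if it is an Access Yes/No field, compare against True instead).

    Query 1 (reports nobody still requires, so they can be decommissioned):

        SELECT [report id]
        FROM ReportRecipients
        GROUP BY [report id]
        HAVING SUM(IIF([report req] = 'Yes', 1, 0)) = 0;

    Query 2 (required reports whose batch print copy count is above zero):

        SELECT [report id], [recipient id], [recipient name]
        FROM ReportRecipients
        WHERE [report req] = 'Yes' AND [batchprintcopy] > 0;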

    Read the article

  • What limits scaling in this simple OpenMP program?

    - by Douglas B. Staple
    I'm trying to understand the limits to parallelization on a 48-core system (4x AMD Opteron 6348, 2.8 GHz, 12 cores per CPU). I wrote this tiny OpenMP code to test the speedup in what I thought would be the best possible situation (the task is embarrassingly parallel):

        // Compile with: gcc scaling.c -std=c99 -fopenmp -O3
        #include <stdio.h>
        #include <stdint.h>

        int main(){
            const uint64_t umin=1;
            const uint64_t umax=10000000000LL;
            double sum=0.;
            #pragma omp parallel for reduction(+:sum)
            for(uint64_t u=umin; u<umax; u++)
                sum+=1./u/u;
            printf("%e\n", sum);
        }

    I was surprised to find that the scaling is highly nonlinear. It takes about 2.9s for the code to run with 48 threads, 3.1s with 36 threads, 3.7s with 24 threads, 4.9s with 12 threads, and 57s with 1 thread. Unfortunately I have to say that there is one process running on the computer using 100% of one core, so that might be affecting it. It's not my process, so I can't end it to test the difference, but somehow I doubt it's making the difference between a 19-20x speedup and the ideal 48x speedup. To make sure it wasn't an OpenMP issue, I ran two copies of the program at the same time with 24 threads each (one with umin=1, umax=5000000000, and the other with umin=5000000000, umax=10000000000). In that case both copies of the program finish after 2.9s, so it's exactly the same as running 48 threads with a single instance of the program. What's preventing linear scaling with this simple program?

    Read the article
