Search Results

Search found 13869 results on 555 pages for 'memory dump'.


  • When long polling, why are my other requests taking so long?

    - by Pascal
    The client makes 2 concurrent requests: one which takes 60 seconds (long polling) and another which is NOT long polling and is supposed to return right away. It does return right away when I'm not doing long polling. But as soon as I start doing long polling with the other thread, the other one takes forever to execute. Firebug shows that the request is waiting for 10-50 seconds. On the server, I profiled ALL requests from the moment the PHP script starts to the time it goes back to the client, and it shows that each one only took 300ms or less. This problem started about the same time I started doing long polling (with the other XHR requests). I'm using jQuery for both requests. The server shows that it is under very light load: CPU and memory less than 2%, 8 processes running out of a pool of 15 (it doesn't seem to deviate much from that number 8, even when I run more AJAX requests). I guess each process can run multiple AJAX threads concurrently. I made sure to EXIT from all processes as soon as they're done executing. I don't see how the process pool has run out if there are still 7 unused processes listed under prstat -J. Also, the problem happens somewhat intermittently. Firefox should be able to handle 2 concurrent AJAX requests. I don't get what the problem is.
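
    If both endpoints use PHP sessions, the file-based session handler is one likely culprit: the long-poll script holds the session lock for its full 60 seconds, so any other request from the same browser session blocks until it finishes, even though each script's own work only takes 300ms. A minimal sketch of the usual workaround, assuming the long-poll endpoint only needs to read session data (check_for_new_data is a hypothetical helper):

        <?php
        // Hypothetical long-poll endpoint: read what we need, then release the
        // session lock so parallel requests are not serialized behind this one.
        session_start();
        $userId = isset($_SESSION['user_id']) ? $_SESSION['user_id'] : null;
        session_write_close();              // releases the session lock immediately

        $timeout = 60;                      // long-poll window in seconds
        $start   = time();
        while (time() - $start < $timeout) {
            $data = check_for_new_data($userId);   // hypothetical helper
            if ($data !== null) {
                header('Content-Type: application/json');
                echo json_encode($data);
                exit;
            }
            usleep(250000);                 // wait 250 ms between checks
        }
        echo json_encode(null);             // nothing arrived within the window

    The short request needs the same treatment (session_write_close() early, or no session at all) if it doesn't write to $_SESSION either.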

    Read the article

  • Why does perl crash with "*** glibc detected *** perl: munmap_chunk(): invalid pointer"?

    - by sid_com
    #!/usr/bin/env perl use warnings; use strict; use 5.012; use XML::LibXML::Reader; my $reader = XML::LibXML::Reader->new( location => 'http://www.heise.de/' ) or die $!; while ( $reader->read ) { say $reader->name; } At the end of the output from this script I get these error messages: *** glibc detected *** perl: munmap_chunk(): invalid pointer: 0x0000000000b362e0 *** ======= Backtrace: ========= /lib64/libc.so.6[0x7fb84952fc76] ... ======= Memory map: ======== 00400000-0053d000 r-xp 00000000 08:01 182002 /usr/local/bin/perl ... Is this due to a bug? perl -V: Summary of my perl5 (revision 5 version 12 subversion 0) configuration: Platform: osname=linux, osvers=2.6.31.12-0.2-desktop, archname=x86_64-linux uname='linux linux1 2.6.31.12-0.2-desktop #1 smp preempt 2010-03-16 21:25:39 +0100 x86_64 x86_64 x86_64 gnulinux ' config_args='-Dnoextensions=ODBM_File' hint=recommended, useposix=true, d_sigaction=define useithreads=undef, usemultiplicity=undef useperlio=define, d_sfio=undef, uselargefiles=define, usesocks=undef use64bitint=define, use64bitall=define, uselongdouble=undef usemymalloc=n, bincompat5005=undef Compiler: cc='cc', ccflags ='-fno-strict-aliasing -pipe -fstack-protector -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64', optimize='-O2', cppflags='-fno-strict-aliasing -pipe -fstack-protector -I/usr/local/include' ccversion='', gccversion='4.4.1 [gcc-4_4-branch revision 150839]', gccosandvers='' intsize=4, longsize=8, ptrsize=8, doublesize=8, byteorder=12345678 d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=16 ivtype='long', ivsize=8, nvtype='double', nvsize=8, Off_t='off_t', lseeksize=8 alignbytes=8, prototype=define Linker and Libraries: ld='cc', ldflags =' -fstack-protector -L/usr/local/lib' libpth=/usr/local/lib /lib /usr/lib /lib64 /usr/lib64 /usr/local/lib64 libs=-lnsl -ldl -lm -lcrypt -lutil -lc perllibs=-lnsl -ldl -lm -lcrypt -lutil -lc libc=/lib/libc-2.10.1.so, so=so, useshrplib=false, libperl=libperl.a gnulibc_version='2.10.1' Dynamic Linking: dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags='-Wl,-E' cccdlflags='-fPIC', lddlflags='-shared -O2 -L/usr/local/lib -fstack-protector' Characteristics of this binary (from libperl): Compile-time options: PERL_DONT_CREATE_GVSV PERL_MALLOC_WRAP USE_64_BIT_ALL USE_64_BIT_INT USE_LARGE_FILES USE_PERLIO USE_PERL_ATOF Built under linux Compiled at Apr 15 2010 13:25:46 @INC: /usr/local/lib/perl5/site_perl/5.12.0/x86_64-linux /usr/local/lib/perl5/site_perl/5.12.0 /usr/local/lib/perl5/5.12.0/x86_64-linux /usr/local/lib/perl5/5.12.0 .

    Read the article

  • Array: mathematical sequence

    - by VaioIsBorn
    An array of integers A[i] (i >= 1) is defined in the following way: an element A[k] (k > 1) is the smallest number greater than A[k-1] such that the sum of its digits is equal to the sum of the digits of the number 4*A[k-1]. You need to write a program that calculates the Nth number in this array based on the given first element A[1]. INPUT: In one line of standard input there are two numbers separated by a single space: A[1] (1 <= A[1] <= 100) and N (1 <= N <= 10000). OUTPUT: The standard output should only contain a single integer A[N], the Nth number of the defined sequence. Input: 7 4 Output: 79 Explanation: The elements of the array are as follows: 7, 19, 49, 79... and the 4th element is the solution. I tried solving this by coding a separate function that, for a given number A[k], calculates the sum of its digits and finds the smallest number greater than A[k-1] as the problem says, but with no success. The first test failed because of a memory limit, the second test failed because of a time limit, and now I don't have any idea how to solve this. One friend suggested recursion, but I don't know how to set that up. Anyone who can help me in any way, please write; also, please suggest some ideas about using recursion/DP for solving this problem. Thanks.
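
    Recursion and DP aren't really needed here; an iterative scan with a small digit-sum helper is enough, and with A[1] <= 100 and N <= 10000 the values stay comfortably inside a 64-bit integer. A rough sketch in C (the question doesn't name a language, so this is only illustrative):

        #include <stdio.h>

        /* Sum of the decimal digits of n. */
        static long long digit_sum(long long n) {
            long long s = 0;
            for (; n > 0; n /= 10)
                s += n % 10;
            return s;
        }

        int main(void) {
            long long a, n;
            if (scanf("%lld %lld", &a, &n) != 2)
                return 1;
            for (long long k = 2; k <= n; k++) {
                long long target = digit_sum(4 * a);   /* digit sum of 4*A[k-1] */
                long long next = a + 1;
                while (digit_sum(next) != target)      /* smallest number > A[k-1] with that sum */
                    next++;
                a = next;
            }
            printf("%lld\n", a);                       /* for input "7 4" this prints 79 */
            return 0;
        }

    If this brute force still hits the time limit, the inner scan can be replaced by directly constructing the smallest number greater than A[k-1] with the required digit sum, but the straightforward loop is the place to start.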

    Read the article

  • How to use Common Table Expression and check no duplication in SQL Server

    - by vodkhang
    I have a table that references itself. User table: id, username, managerid, and managerid links back to id. Now, I want to get all the managers, including the direct manager, the manager of the direct manager, and so on and so forth... The problem is that I do not want a recursive SQL query that never stops. So, if an id is already in a list, I will not include it anymore. Here is my SQL for that: with all_managers (id, username, managerid, idlist) as ( select u1.id, u1.username, u1.managerid, ' ' from users u1, users u2 where u1.id = u2.managerid and u2.id = 6 UNION ALL select u.id, u.username, u.managerid, idlist + ' ' + u.id from all_managers a, users u where a.managerid = u.id and charindex(cast(u.id as nvarchar(5)), idlist) != 0 ) select id, username from all_managers; The problem is that in this line: select u1.id, u1.username, u1.managerid, ' ' SQL Server complains that I cannot use ' ' as the initializer for idlist. nvarchar(40) does not work either. I do not know how to declare it inside a common table expression like this one. Usually, in DB2, I can just put varchar(40). My sample data: ID UserName ManagerID 1 admin 1 2 a 1 3 b 1 4 c 2 What I want to do is find all managers of the user c. The result should be: admin, a, b. A user can be his own manager (like admin), because ManagerID does not allow NULL and some users do not have a direct manager. With a common table expression, this can lead to infinite recursion. So I am also trying to avoid that situation by not including the same id twice. For example, in the 1st iteration we already have id 1, so in the 2nd iteration and later on, 1 should never be allowed. I also want to ask whether my current approach is good, and about any other solutions. Because if I have a big database with a deep hierarchy, I will have to initialize a big varchar to keep the list, and it consumes memory, right?
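
    One way that usually works in SQL Server is to cast the path column explicitly in the anchor member so both branches of the recursive CTE agree on its type, put delimiters around each id so 1 does not accidentally match 11, and flip the CHARINDEX test so the recursion only continues when the id has not been seen yet. A sketch along those lines (untested against your schema):

        with all_managers (id, username, managerid, idlist) as (
            select u1.id, u1.username, u1.managerid,
                   cast(',' + cast(u1.id as nvarchar(10)) + ',' as nvarchar(4000))  -- anchor fixes the column type
            from users u1
            join users u2 on u1.id = u2.managerid
            where u2.id = 6
            union all
            select u.id, u.username, u.managerid,
                   cast(a.idlist + cast(u.id as nvarchar(10)) + ',' as nvarchar(4000))
            from all_managers a
            join users u on a.managerid = u.id
            where charindex(',' + cast(u.id as nvarchar(10)) + ',', a.idlist) = 0   -- only ids not seen yet
        )
        select id, username from all_managers;

    The growing string does cost memory on a deep hierarchy; common alternatives are an OPTION (MAXRECURSION n) hint as a safety net, or treating the self-referencing root row (managerid = id) as the only legal cycle and excluding it explicitly in the recursive member.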

    Read the article

  • Implementing crossover in genetic programming

    - by Name
    Hi, I'm writing a genetic programming (GP) system (in C but that's a minor detail). I've read a lot of the literature (Koza, Poli, Langdon, Banzhaf, Brameier, et al) but there are some implementation details I've never seen explained. For example: I'm using a steady state population rather than a generational approach, primarily to use all of the computer's memory rather than reserve half for the interim population. Q1. In GP, as opposed to GA, when you perform crossover you select two parents but do you create one child or two, or is that a free choice you have? Q2. In steady state GP, as opposed to a generational system, what members of the population do the children created by crossover replace? This is what I haven't seen discussed. Is it the two parents, or is it two other, randomly-selected members? I can understand if it's the latter, and that you might use negative tournament selection to choose members to replace, but would that not create premature convergence? (After a crossover event the population contains the two original parents plus two children of those parents, and two other random members get removed. Elitism is inherent.) Q3. Is there a Web forum or mailing list focused on GP? Oddly I haven't found one. Yahoo's GP group is used almost exclusively for announcements, the Poli/Langdon Field Guide forum is almost silent, and GP discussions on general/game programming sites like gamedev.net are very basic. Thanks for any help you can provide!

    Read the article

  • echo command not found while testing selenium with phpunit

    - by Bill Szerdy
    I am getting an ERROR: Unknown command: 'echo' when executing a Selenium script with PHPUnit. Based on the output, that echo command should be included in my version of PHPUnit. The Selenium script does execute successfully in the Firefox Selenium IDE. mkdir_build: phpunit: [exec] PHPUnit 3.4.12 by Sebastian Bergmann. [exec] [exec] . [exec] TestFull [exec] E [exec] [exec] Time: 11 seconds, Memory: 6.50Mb [exec] [exec] There was 1 error: [exec] [exec] 1) TestFull::testNumberOne [exec] PHPUnit_Framework_Exception: Response from Selenium RC server for testComplete(). [exec] ERROR: Unknown command: 'echo'. [exec] [exec] [exec] /directory/to/tests/TestFull.php:14 [exec] [exec] FAILURES! [exec] Tests: 1, Assertions: 0, Errors: 1. And the RC server output: $ java -jar selenium-server.jar -port 4445 -debug 13:23:08.426 INFO - Java: Sun Microsystems Inc. 14.2-b01 13:23:08.428 INFO - OS: Linux 2.6.28-15-server i386 13:23:08.439 INFO - v2.0 [a2], with Core v2.0 [a2] 13:23:08.439 INFO - Selenium server running in debug mode. 13:25:05.661 DEBUG - ---------retrieving CommandQueue for sel_93352 13:25:05.662 DEBUG - Browser 2c8b3b5657a640db9fb278ecbd01049e/:top sel_93352 posted ERROR: Unknown command: 'echo' 13:25:05.662 DEBUG - ---------retrieving CommandQueue for sel_93352 13:25:05.662 DEBUG - putting command: ERROR: Unknown command: 'echo' 13:25:05.662 DEBUG - ..command put?: true 13:25:05.662 DEBUG - sel_93352 commandHolder sel_93352 getCommand() called 13:25:05.662 DEBUG - waiting for data for at most 10 more s 13:25:05.662 DEBUG - data from polling: ERROR: Unknown command: 'echo' 13:25:05.662 DEBUG - sel_93352 commandResultHolder sel_93352 getResult() -> ERROR: Unknown command: 'echo' 13:25:05.663 DEBUG - Got result: ERROR: Unknown command: 'echo' on session 2c8b3b5657a640db9fb278ecbd01049e 13:25:05.663 INFO - Got result: ERROR: Unknown command: 'echo' on session 2c8b3b5657a640db9fb278ecbd01049e 13:25:05.663 DEBUG - Handled by org.openqa.selenium.server.SeleniumDriverResourceHandler in HttpContext[/selenium-server,/selenium-server] 13:25:05.663 DEBUG - RESPONSE: HTTP/1.1 200 OK Date: Wed, 07 Apr 2010 20:25:05 GMT Server: Jetty/5.1.x (Linux/2.6.28-15-server i386 java/1.6.0_16 Cache-Control: no-cache Pragma: no-cache Expires: Thu, 01 Jan 1970 00:00:00 GMT Content-Type: text/plain Connection: close

    Read the article

  • Storing URLs while Spidering

    - by itemio
    I created a little web spider in Python which I'm using to collect URLs. I'm not interested in the content. Right now I'm keeping all the visited URLs in a set in memory, because I don't want my spider to visit URLs twice. Of course that's a very limited way of accomplishing this. So what's the best way to keep track of my visited URLs? Should I use a database? Which one: MySQL, SQLite, PostgreSQL? How should I save the URLs: as a primary key, trying to insert every URL before visiting it? Or should I write them to a file? One file or multiple files, and how should I design the file structure? I'm sure there are books and a lot of papers on this or similar topics. Can you give me some advice on what I should read?
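
    For a single-process spider, SQLite with the URL as the primary key is usually enough: the database rejects duplicates for you, nothing has to fit in memory, and there is no server to run. A rough sketch using Python's standard sqlite3 module (the table and file names are just placeholders):

        import sqlite3

        # One table, URL as primary key: duplicates are rejected by the database
        # instead of by an in-memory set.
        conn = sqlite3.connect("crawler.db")
        conn.execute("CREATE TABLE IF NOT EXISTS visited (url TEXT PRIMARY KEY)")

        def mark_visited(url):
            """Return True if the URL is new (fetch it), False if already seen."""
            cur = conn.execute("INSERT OR IGNORE INTO visited (url) VALUES (?)", (url,))
            conn.commit()
            return cur.rowcount == 1

        # Usage inside the crawl loop:
        # if mark_visited(url):
        #     fetch_and_extract_links(url)   # hypothetical spider function

    MySQL or PostgreSQL only become interesting if several crawler processes need to share the visited set.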

    Read the article

  • Updating multiple related tables in SQLite with C#

    - by PerryJ
    Just some background, sorry it's so long-winded. I'm using the System.Data.SQLite ADO.NET adapter to create a local SQLite database, and this will be the only process hitting the database, so I don't need to worry about concurrency. I'm building the database from various sources and don't want to build this all in memory using datasets or data adapters or anything like that. I want to do this using SQL (DbCommands). I'm not very good with SQL and a complete noob in SQLite. I'm basically using SQLite as a local database / save file structure. The database has a lot of related tables, and the data has nothing to do with People or Regions or Districts, but to use a simple analogy, imagine: a Region table with auto-increment RegionID, a RegionName column, and various optional columns; a District table with auto-increment DistrictID, DistrictName, RegionID, and various optional columns; a Person table with auto-increment PersonID, PersonName, DistrictID, and various optional columns. So I get some data representing RegionName, DistrictName, PersonName, and other Person-related data. The Region, District and/or Person may or may not have been created at this point. Once again, not being the greatest with this, my thoughts would be something like: check whether the Region exists and if so get the RegionID, else create it and get the RegionID; check whether the District exists and if so get the DistrictID, else create it (adding in the RegionID from above) and get the DistrictID; check whether the Person exists and if so get the PersonID, else create it (adding in the DistrictID from above) and get the PersonID; update the Person with the rest of the data. In MS SQL Server I would create a stored procedure to handle all this. The only way I can see to do this with SQLite is a lot of commands. So I'm sure I'm not getting this. I've spent hours looking around on various sites but just don't feel like I'm going down the right road. Any suggestions would be greatly appreciated.

    Read the article

  • Long running stateful service in .NET

    - by Asaf R
    Hi, I need to create a service in .NET that maintains (inner) state in memory, spawns multiple threads and is generally long-running. There are a lot of options: a good old Windows Service, Windows Communication Foundation (WCF), or Windows Workflow Foundation (WF). I really don't know which to choose. Most of the functionality is in a library used by this service, so the service itself is rather simple. On one hand, it's important that the service host is as close to "simply working" as possible, which excludes Windows Service. On the other hand, it's important that the service is not taken down by the host just because there's no external activity, which makes WCF kind o' "scary". As for WF, its strongest selling point is the ability to create processes as, um..., workflows, which is something I neither need nor want. To sum it up, the plethora of Microsoft technologies has got me a bit confused. I'd appreciate help regarding the pros and cons of each solution (or others I've failed to mention) for the problem of a stateful, long-running service in .NET. Thanks, Asaf P.S., I'm using .NET 4. EDIT: What I mean by the host "simply working" is, for example, that the service I create be reactivated if it crashes. I guess the reason for this question is that I've created Windows Services in the past (I think it was in plain C++ with the Win32 API), and I don't want to miss out on something simpler if there is such a thing. Thanks for all the replies thus far! Asaf.

    Read the article

  • Using Word COM objects in .NET, InlineShapes not copied from template to document

    - by Keith
    Using .NET and the Word Interop I am programmatically creating a new Word doc from a template (.dot) file. There are a few ways to do this but I've chosen to use the AttachedTemplate property, as such: Dim oWord As New Word.Application() oWord.Visible = False Dim oDocuments As Word.Documents = oWord.Documents Dim oDoc As Word.Document = oDocuments.Add() oDoc.AttachedTemplate = sTemplatePath oDoc.UpdateStyles() (I'm choosing the AttachedTemplate means of doing this over the Documents.Add() method because of a memory leak issue I discovered when using Documents.Add() to open from templates.) This works fine EXCEPT when there is an image (represented as an InlineShape) in the template footer. In that case the image does not appear in the resulting document. Specifically the image should appear in the oDoc.Sections.Item(1).Footers.Item(WdHeaderFooterIndex.wdHeaderFooterPrimary).Range.InlineShapes collection but it does not. This is not a problem when using Documents.Add(), however as I said that method is not an option for me. Is there an extra step I have to take to get the images from the template? I already discovered that when using AttachedTemplate I have to explicitly call UpdateStyles() (as you can see in my code snippet) to apply the template styles to the document, whereas that is done automatically when using Documents.Add(). Or maybe there's some crazy workaround? Your help is much appreciated! :)

    Read the article

  • Why can I run JUnit tests for my Spring project, but not a main method?

    - by FarmBoy
    I am using JDBC to connect to MySQL for a small application. In order to test without altering the real database, I'm using HSQL in memory for JUnit tests. I'm using Spring for DI and DAOs. Here is how I'm configuring my HSQL DataSource <bean id="mockDataSource" class="org.springframework.jdbc.datasource.SingleConnectionDataSource"> <property name="driverClassName" value="org.hsqldb.jdbcDriver"/> <property name="url" value="jdbc:hsqldb:mem:mockSeo"/> <property name="username" value="sa"/> </bean> This works fine for my JUnit tests which use the mock DB. But when I try to run a main method, I find the following error: Error creating bean with name 'mockDataSource' defined in class path resource [beans.xml]: Error setting property values; nested exception is org.springframework.beans.PropertyBatchUpdateException; nested PropertyAccessExceptions (1) are: PropertyAccessException 1: org.springframework.beans.MethodInvocationException: Property 'driverClassName' threw exception; nested exception is java.lang.IllegalStateException: Could not load JDBC driver class [org.hsqldb.jdbcDriver] I'm running from Eclipse, and I'm using the Maven plugin. Is there a reason why this would work as a Test, but not as a main()? I know that the main method itself is not the problem, because it works if I remove all references to the HSQL DataSource from my Spring Configuration file.

    Read the article

  • Enormous data and PHP errors

    - by salamis
    I am currently using the following Highcharts/Highstock chart: http://www.highcharts.com/stock/demo/data-grouping in order to display the data returned from the server. We retrieve the data from a MySQL database and it is really big. We are storing sensor metrics every 1 second. After a while we got the following error: [Wed Sep 12 00:15:56 2012] [error] [client 127.0.0.1] PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 4756882 bytes) in C:\\wamp\\www\\admin\\getTrends.php on line 156, referer: http://localhost/admin/trends.php [Wed Sep 12 00:15:56 2012] [error] [client 127.0.0.1] PHP Stack trace:, referer: http://localhost/admin/trends.php [Wed Sep 12 00:15:56 2012] [error] [client 127.0.0.1] PHP 1. {main}() C:\\wamp\\www\\admin\\getTrends.php:0, referer: http://localhost/admin/trends.php [Wed Sep 12 00:15:56 2012] [error] [client 127.0.0.1] PHP 2. getTrendsDataAI() C:\\wamp\\www\\admin\\getTrends.php:33, referer: http://localhost/admin/trends.php [Wed Sep 12 00:15:56 2012] [error] [client 127.0.0.1] PHP 3. printResults() C:\\wamp\\www\\admin\\getTrends.php:102, referer: http://localhost/admin/trends.php [Wed Sep 12 00:15:56 2012] [error] [client 127.0.0.1] PHP 4. createData() C:\\wamp\\www\\admin\\getTrends.php:230, referer: http://localhost/admin/trends.php [Wed Sep 12 00:15:56 2012] [error] [client 127.0.0.1] PHP 5. implode() C:\\wamp\\www\\admin\\getTrends.php:156, referer: http://localhost/admin/trends.php What is the best solution for returning this data as a JSON object to Highstock for viewing? And how can we overcome the PHP memory limitation? Shall we return a chunk of data each time? How is an enormous amount of data usually presented to users, with charts and reports built from it? Another big problem that we need to overcome is that the returned JSON object is enormous. At this point it is around 20-30 MB, and it will be much larger in the future. Is it OK to return this data to the user and perform everything client side? Any suggestions or thoughts welcome.

    Read the article

  • iPhone app stopped building crash reports

    - by BankStrong
    My app formerly created useful crash logs. I synced my iPhone in the past and found crash logs in library/logs/CrashReporter About a month ago, my app stopped creating crash reports. When I first discovered this problem, I assumed it was due to memory corruption (a possibility in my app). I just created a new project and added a crash to it. // Implement viewDidLoad to do additional setup after loading the view, // typically from a nib. - (void)viewDidLoad { NSMutableArray *array = [[NSMutableArray alloc] init]; [array removeObjectAtIndex:-1]; [super viewDidLoad]; } This app does not create a crash report either. Ideas I've started to explore: My phone is corrupted (tried restoring - somehow I brought it to the state from a few months ago) My XCode is corrupt (tried reinstalling, but current download demands Snow Leopard - and I can't upgrade to Snow Leopard online). This seems possible - I may have messed with device support around a month ago (similar to http://stackoverflow.com/questions/1224867/does-iphone-os-3-0-1-ruin-your-development-phone ) The location for crash logs has somehow moved. Suggestions?

    Read the article

  • Looking for a .Net ORM

    - by SLaks
    I'm looking for a .Net 3.5 ORM framework with a rather unusual set of requirements: I need to create and alter tables at runtime with schemas defined by my end-users. (Obviously, that wouldn't be strongly-typed; I'm looking for something like a DataTable there) I also want regular strongly-typed partial classes for rows in non-dynamic tables, with custom validation and other logic. (Like normal ORMs) I want to load the entire database (or some entire tables) once, and keep it in memory throughout the life of the (WinForms) GUI. (I have a shared SQL Server with a relatively slow connection) I also want regular LINQ support (like LINQ-to-SQL) for ASP.Net on the shared server (which has a fast connection to SQL Server) In addition to SQL Server, I also want to be able to use a single-file database that would support XCopy deployment (without installing SQL CE on the end-user's machine). (Probably Access or SQLite) Finally, it has to be free (unless it's OpenAccess) I'll probably have to write it myself, as I don't think there is an existing ORM that meets these requirements. However, I don't want to re-invent the wheel if there is one, hence this question. I'm using VS2010, but I don't know when my webhost (LFC) will upgrade to .Net 4.0

    Read the article

  • How can I build something like Amazon S3 in Perl?

    - by Joel G
    I am looking to code a file storage application in Perl similar to Amazon S3. I already have an Amazon S3 clone that I found online called ParkPlace, but it's in Ruby, is old, and isn't built for high loads. I am not really sure what modules and programs I should use, so I'd like some help picking them out. My requirements are listed below (yes, I know there are lots, but I could start simple then add more once I get it going): easy API implementation for client-side apps (maybe REST?); a centralized database server for the USERDB (maybe PostgreSQL?); logging of all connections, bandwidth used, well, pretty much everything, to a centralized server (maybe PostgreSQL again?); easy server-side configuration (config file(s) stored on the servers); a web-based control panel for admin(s) and user(s) to show logs (could work just by running queries against the databases); fast; high uptime; low memory usage; some sort of load distribution/load balancer (maybe DNS-based, or Pound or Perlbal or something else?); maybe a cache of some sort (memcached or Perlbal or something else?). Thanks in advance.

    Read the article

  • Autodetect Presence of CSV Headers in a File

    - by banzaimonkey
    Short question: How do I automatically detect whether a CSV file has headers in the first row? Details: I've written a small CSV parsing engine that places the data into an object that I can access as (approximately) an in-memory database. The original code was written to parse third-party CSV with a predictable format, but I'd like to be able to use this code more generally. I'm trying to figure out a reliable way to automatically detect the presence of CSV headers, so the script can decide whether to use the first row of the CSV file as keys / column names or start parsing data immediately. Since all I need is a boolean test, I could easily specify an argument after inspecting the CSV file myself, but I'd rather not have to (go go automation). I imagine I'd have to parse the first 3 to ? rows of the CSV file and look for a pattern of some sort to compare against the headers. I'm having nightmares of three particularly bad cases in which: the headers include numeric data for some reason; the first few rows (or large portions of the CSV) are null; the headers and data look too similar to tell them apart. If I can get a "best guess" and have the parser fail with an error or spit out a warning if it can't decide, that's OK. If this is something that's going to be tremendously expensive in terms of time or computation (and take more time than it's supposed to save me) I'll happily scrap the idea and go back to working on "important things". I'm working with PHP, but this strikes me as more of an algorithmic / computational question than something that's implementation-specific. If there's a simple algorithm I can use, great. If you can point me to some relevant theory / discussion, that'd be great, too. If there's a giant library that does natural language processing or 300 different kinds of parsing, I'm not interested.
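
    One cheap heuristic that often works: read a handful of rows and, column by column, check whether the data rows look numeric while the first row does not; each such column is evidence for a header row. A rough PHP sketch of that idea (the sample size and threshold are arbitrary, and it will still be fooled by the pathological cases listed above):

        <?php
        // Best-guess header detection: returns true if the first row looks like
        // column names rather than data. Purely heuristic.
        function csv_has_header($path, $sample = 5) {
            $fh = fopen($path, 'r');
            if (!$fh) return false;
            $rows = array();
            while (count($rows) < $sample + 1 && ($r = fgetcsv($fh)) !== false) {
                $rows[] = $r;
            }
            fclose($fh);
            if (count($rows) < 2) return false;       // not enough rows to tell

            $first = $rows[0];
            $evidence = 0;
            foreach ($first as $col => $cell) {
                $numericBelow = 0;
                for ($i = 1; $i < count($rows); $i++) {
                    if (isset($rows[$i][$col]) && is_numeric($rows[$i][$col]))
                        $numericBelow++;
                }
                // Numeric everywhere below but not in row 1: looks like a header cell.
                if ($numericBelow === count($rows) - 1 && !is_numeric($cell))
                    $evidence++;
            }
            return $evidence > 0;   // caller can still override or warn on a weak guess
        }

    Other signals can be folded in the same way (row 1 has no empty cells, row 1 values are unique, row 1 values never reappear in the data), each adding or subtracting from the evidence counter.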

    Read the article

  • Possible mem leak?

    - by LCD Fire
    I'm new to the concept, so don't be hard on me. Why doesn't this code produce a destructor call? The names of the classes are self-explanatory. SString will print a message in ~SString(). It only prints one destructor message. int main(int argc, TCHAR* argv[]) { smart_ptr<SString> smt(new SString("not lost")); new smart_ptr<SString>(new SString("but lost")); return 0; } Is this a memory leak? The implementation for smart_ptr is from here, edited: //copy ctor smart_ptr(const smart_ptr<T>& ptrCopy) { m_AutoPtr = new T(ptrCopy.get()); } //overloading = operator smart_ptr<T>& operator=(smart_ptr<T>& ptrCopy) { if(m_AutoPtr) delete m_AutoPtr; m_AutoPtr = new T(*ptrCopy.get()); return *this; }
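
    Yes, it is a leak: the second statement allocates the smart_ptr object itself on the heap, immediately discards its address, and so its destructor (and therefore the destructor of the SString it owns) can never run. A smart pointer only helps when the wrapper itself lives somewhere with a definite lifetime, normally the stack. A minimal sketch of the difference, assuming the smart_ptr and SString classes from the question:

        {
            smart_ptr<SString> ok(new SString("not lost"));   // automatic object: ~smart_ptr runs
        }                                                     // here and deletes the owned SString

        smart_ptr<SString> *lost = new smart_ptr<SString>(new SString("but lost"));
        // Nothing ever deletes 'lost', so neither destructor runs: both the wrapper
        // and the SString it owns leak.
        delete lost;                                          // only this would reclaim them both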

    Read the article

  • haskell: a data structure for storing ascending integers with a very fast lookup

    - by valya
    Hello! (This question is related to my previous question, or rather to my answer to it.) I want to store all cubes of natural numbers in a structure and look up specific integers to see if they are perfect cubes. For example: cubes = map (\x -> x*x*x) [1..] is_cube n = n == (head $ dropWhile (<n) cubes) It is much faster than calculating the cube root, but it has a complexity of O(n^(1/3)) (am I right?). I think using a more complex data structure would be better. For example, in C I could store the length of an already generated array (not a list, for faster indexing) and do a binary search. It would be O(log n) with a lower coefficient than in another answer to that question. The problem is, I can't express it in Haskell (and I don't think I should). Or I could use a hash function (like mod). But I think it would be much more memory-consuming to have several lists (or a list of lists), and it won't lower the complexity of the lookup (still O(n^(1/3))), only the coefficient. I thought about some kind of tree, but I have no clever ideas (sadly I've never studied CS). I think the fact that all integers are ascending will make my tree ill-balanced for lookups. And I'm pretty sure this fact about ascending integers can be a great advantage for lookups, but I don't know how to use it properly (see my first solution, which I can't express in Haskell).
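
    If the goal really is just testing for perfect cubes, no stored structure is needed at all: a binary search for the integer cube root is O(log n) per query, uses no floating point (so there are no precision worries with large Integers), and doesn't depend on the inputs being ascending. A minimal sketch:

        -- Binary search for an integer m with m^3 == n, over Integer so big
        -- inputs stay exact.
        isCube :: Integer -> Bool
        isCube n
          | n < 0     = False
          | otherwise = go 0 n
          where
            go lo hi
              | lo > hi   = False
              | otherwise =
                  let mid = (lo + hi) `div` 2
                      c   = mid * mid * mid
                  in case compare c n of
                       EQ -> True
                       LT -> go (mid + 1) hi
                       GT -> go lo (mid - 1)

    If a precomputed structure is still wanted, Data.Set (or Data.IntSet for machine-sized values) gives O(log n) membership without hand-rolling a tree, at the cost of deciding up front how many cubes to generate.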

    Read the article

  • EJB3Unit testing no-tx-datasource

    - by justastefan
    Hello, I am doing tests on an EJB3 project using ejb3unit http://ejb3unit.sourceforge.net/Session-Bean.html for testing. All my services ask for @PersistenceContext(unitName=bla). I set up the ejb3unit.properties like this: ejb3unit_jndi.1.isSessionBean=true ejb3unit_jndi.1.jndiName=ejb/MyServiceBean ejb3unit_jndi.1.className=com.company.project.MyServiceBean and everything works with the in-memory database. Now I additionally want to test another service bean with @PersistenceContext(unitName=noTxDatasource) that goes for a no-tx-datasource defined in my datasources.xml: <datasources> <local-tx-datasource> ... </local-tx-datasource> <no-tx-datasource> <jndi-name>noTxDatasource</jndi-name> <connection-url>...</connection-url> <driver-class>oracle.jdbc.OracleDriver</driver-class> <user-name>bla</user-name> <password>bla</password> </no-tx-datasource> </datasources> How do I tell ejb3unit to make this work: Object object = InitialContext.doLookup("java:/noTxDatasource"); if (object instanceof DataSource) { return ((DataSource) object).getConnection(); } else { return null; } Currently it fails saying: javax.NamingException: Cannot find the name (noTxDataSource) in the JNDI tree Current bindings: (ejb/MyServiceBean=com.company.project.MyServiceBean) How can I add this no-tx-datasource to the JNDI bindings?

    Read the article

  • read access violation error

    - by user293569
    class Node{ private: string name; Node** adjacent; int adjNum; public: Node(); Node(string, int adj_num); Node(const Node &); bool addAdjacent(const Node &); Node** getAdjacents(); string getName(); ~Node(); }; bool Node::addAdjacent(const Node &anode){ Node** temp; temp= new Node*[adjNum+1]; for(int i=0;i<adjNum+1;i++) temp[i]=adjacent[i]; temp[adjNum]=const_cast<Node *>(&anode); delete[] adjacent; adjacent=new Node*[adjNum+1]; adjacent=temp; delete[] temp; adjNum++; return true; } int main() { Node node1("A",0); Node node2("B",0); node1.getName(); node1.addAdjacent(node2); system("PAUSE"); return 0; } When the program comes to this part: for(int i=0;i<adjNum+1;i++) temp[i]=adjacent[i]; it says "Access violation reading location 0xcccccccc". The class must allocate the memory for adjacent, but I think it didn't. How can I solve this problem?
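
    0xcccccccc is the pattern MSVC writes into uninitialized memory in debug builds, so the crash is telling you that adjacent was never given a value before addAdjacent read from it. There are two further problems: the copy loop reads one element past the old array (adjNum+1 instead of adjNum), and delete[] temp after adjacent = temp leaves adjacent dangling. A sketch of a corrected version, assuming the constructors initialize adjacent to a null pointer and adjNum to 0:

        bool Node::addAdjacent(const Node &anode) {
            Node **temp = new Node*[adjNum + 1];
            for (int i = 0; i < adjNum; i++)     // copy only the adjNum existing pointers
                temp[i] = adjacent[i];
            temp[adjNum] = const_cast<Node *>(&anode);
            delete[] adjacent;                   // delete[] on a null pointer is harmless
            adjacent = temp;                     // keep temp; do NOT delete[] it afterwards
            adjNum++;
            return true;
        }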

    Read the article

  • Generic allocator class without variadic templates?

    - by rainer
    I am trying to write a generic allocator class that does not really release an object's memory when it is free()'d but holds it in a queue and returns a previously allocated object if a new one is requested. Now, what I can't wrap my head around is how to pass arguments to the object's constructor when using my allocator (at least without resorting to variadic templates, that is). The alloc() function I came up with looks like this: template <typename T, typename... Args> inline T *alloc(const Args&... args) { T *p; if (_free.empty()) { p = new T(args...); } else { p = _free.front(); _free.pop(); // to call the ctor of T, we need to first call its dtor p->~T(); p = new( p ) T(args...); } return p; } Still, I need the code to be compatible with today's C++ (and older versions of GCC that do not support variadic templates). Is there any other way to go about passing an arbitrary number of arguments to the object's constructor?
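
    Before variadic templates, the usual answer was exactly what the C++03 standard library and Boost did: write one alloc() overload per constructor arity and let overload resolution pick the right one (Boost.Preprocessor can generate the family if many arities are needed). A compact sketch of the idea, using a hypothetical per-type recycler rather than your exact allocator:

        #include <new>
        #include <queue>

        template <typename T>
        class Recycler {
            std::queue<T *> _free;
        public:
            // One overload per constructor arity; add more as needed.
            T *alloc() {
                if (_free.empty()) return new T();
                T *p = _free.front(); _free.pop();
                p->~T();                        // destroy the recycled object...
                return new (p) T();             // ...and construct a fresh one in place
            }
            template <typename A1>
            T *alloc(const A1 &a1) {
                if (_free.empty()) return new T(a1);
                T *p = _free.front(); _free.pop();
                p->~T();
                return new (p) T(a1);
            }
            template <typename A1, typename A2>
            T *alloc(const A1 &a1, const A2 &a2) {
                if (_free.empty()) return new T(a1, a2);
                T *p = _free.front(); _free.pop();
                p->~T();
                return new (p) T(a1, a2);
            }
            void free(T *p) { _free.push(p); }  // keep the object for later reuse
        };

    The const A1& parameters mean non-const reference constructor arguments can't be forwarded as-is; C++03 had no perfect forwarding, which is why Boost generated const/non-const overload combinations for the same purpose.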

    Read the article

  • How do you dynamically allocate a contiguous 3D array in C?

    - by Derek
    In C, I want to loop through an array in this order for(int z = 0; z < NZ; z++) for(int x = 0; x < NX; x++) for(int y = 0; y < NY; y++) 3Darray[x][y][z] = 100; How do I create this array in such a way that 3Darray[0][1][0] comes right before 3Darray[0][2][0] in memory? I can get an initialization to work that gives me "z-major" ordering, but I really want a y-major ordering for this 3D array. This is the code I have been trying to use: char *space; char ***Arr3D; int y, z; ptrdiff_t diff; space = malloc(X_DIM * Y_DIM * Z_DIM * sizeof(char)); Arr3D = malloc(Z_DIM * sizeof(char **)); for (z = 0; z < Z_DIM; z++) { Arr3D[z] = malloc(Y_DIM * sizeof(char *)); for (y = 0; y < Y_DIM; y++) { Arr3D[z][y] = space + (z*(X_DIM * Y_DIM) + y*X_DIM); } }
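
    With a char*** the last subscript is always the one that walks contiguous memory, so an [x][y][z] indexing scheme can never put y fastest; the choices are to reorder the subscripts to [z][x][y] and keep the pointer tables, or to drop the pointer tables and compute the offset yourself so that y is the fastest-varying index. A sketch of the second approach (the dimensions are example values):

        #include <stdlib.h>

        enum { NX = 4, NY = 5, NZ = 6 };   /* example dimensions */

        /* Offset with y varying fastest, then x, then z, so the z -> x -> y loop
         * below touches memory sequentially and element (0,1,0) sits right
         * before element (0,2,0). */
        #define IDX(x, y, z) (((size_t)(z) * NX + (x)) * NY + (y))

        int main(void) {
            char *arr = malloc((size_t)NX * NY * NZ);
            if (arr == NULL)
                return 1;

            for (int z = 0; z < NZ; z++)
                for (int x = 0; x < NX; x++)
                    for (int y = 0; y < NY; y++)
                        arr[IDX(x, y, z)] = 100;

            free(arr);
            return 0;
        }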

    Read the article

  • Safe to update separate regions of a BufferedImage in separate threads?

    - by finnw
    I have a collection of BufferedImage instances, one main image and some subimages created by calling getSubImage on the main image. The subimages do not overlap. I am also making modifications to the subimage and I want to split this into multiple threads, one per subimage. From my understanding of how BufferedImage, Raster and DataBuffer work, this should be safe because: Each instance of BufferedImage (and its respective WritableRaster) is accessed from only one thread. The shared ColorModel is immutable The DataBuffer has no fields that can be modified (the only thing that can change is elements of the backing array.) Modifying disjoint segments of an array in separate threads is safe. However I cannot find anything in the documentation that says that it is definitely safe to do this. Can I assume it is safe? I know that it is possible to work on copies of the child Rasters but I would prefer to avoid this because of memory constraints. Otherwise, is it possible to make the operation thread-safe without copying regions of the parent image?

    Read the article

  • Service php-fpm does not support chkconfig

    - by ychian
    Everything is working fine, except that when I run chkconfig --add php-fpm it throws an error: Service php-fpm does not support chkconfig. Versions: php-5.2.13 with php-5.2.13-fpm-0.5.13.diff.gz. Below is the configuration I use: ./configure --enable-fastcgi --enable-fpm --build=x86_64-redhat-linux-gnu --host=x86_64-redhat-linux-gnu --target=x86_64-redhat-linux-gnu --program-prefix= --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64 --libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/usr/com --mandir=/usr/share/man --infodir=/usr/share/info --cache-file=../config.cache --with-libdir=lib64 --with-config-file-path=/etc --with-config-file-scan-dir=/etc/php.d --disable-debug --with-pic --disable-rpath --with-pear --with-bz2 --with-curl --with-exec-dir=/usr/bin --with-freetype-dir=/usr --with-png-dir=/usr --enable-gd-native-ttf --without-gdbm --with-gettext --with-gmp --with-iconv --with-jpeg-dir=/usr --with-openssl --with-png --with-expat-dir=/usr --with-pcre-regex=/usr --with-zlib --with-layout=GNU --enable-exif --enable-ftp --enable-magic-quotes --enable-sockets --enable-sysvsem --enable-sysvshm --enable-sysvmsg --enable-track-vars --enable-trans-sid --enable-yp --enable-wddx --with-kerberos --enable-ucd-snmp-hack --with-unixODBC=shared,/usr --enable-memory-limit --enable-shmop --enable-calendar --enable-dbx --enable-dio --with-mime-magic=/usr/share/file/magic.mime --without-sqlite --with-libxml-dir=/usr --with-xml --with-system-tzdata --without-mysql --without-gd --without-odbc --disable-dom --disable-dba --without-unixODBC --disable-pdo --disable-xmlreader --disable-xmlwriter

    Read the article

  • PostgreSQL - Why are some queries on large datasets so incredibly slow

    - by Brad Mathews
    Hello, I have two types of queries I run often on two large datasets. They run much slower than I would expect them to. The first type is a sequential scan updating all records: Update rcra_sites Set street = regexp_replace(street,'/','','i') rcra_sites has 700,000 records. It takes 22 minutes from pgAdmin! I wrote a VB.NET function that loops through each record and sends an update query for each record (yes, 700,000 update queries!) and it runs in less than half the time. Hmmm.... The second type is a simple update with a relation and then a sequential scan: Update rcra_sites as sites Set violations='No' From narcra_monitoring as v Where sites.agencyid=v.agencyid and v.found_violation_flag='N' narcra_monitoring has 1,700,000 records. This takes 8 minutes. The query planner refuses to use my indexes. The query runs much faster if I first run set enable_seqscan = false;. I would prefer that the query planner do its job. I have appropriate indexes, and I have vacuumed and analyzed. I optimized my shared_buffers and effective_cache_size as best I know how, to use more memory, since I have 4 GB. My hardware is pretty darn good. I am running v8.4 on Windows 7. Is PostgreSQL just this slow? Or am I still missing something? Thanks! Brad

    Read the article
