Search Results

Search found 21702 results on 869 pages for 'large objects'.

  • Flex - weird display behavior on large number of Canvas

    - by itarato
    Hi, I have a Flex app (SDK 3.5 - FP10) that draws mindmap trees. Every node is a Canvas (I'm using Canvas-specific properties, so I need it). Each node has a shadow effect, a background color and some small UI elements on it (icons, text...). It works perfectly until it goes over ~700 nodes (Canvases). Over that number it shows grey rectangles: http://yfrog.com/bhw2pj . If I turn off the DropShadowFilter effect for the Canvases, the grey rectangles are gone too, so I assume it's a DropShadowFilter problem: http://yfrog.com/2d9y8j . The effect is simple (note that the filters property takes an Array):

        private static var _nodeDropShadow:DropShadowFilter =
            new DropShadowFilter(1, 45, 0x888888, 1, 1, 1);
        _backgroundComp.filters = [_nodeDropShadow];

    Is it possible that Flex can't handle that many filters? Thanks in advance.

  • Sorting an array of objects in ActionScript 3

    - by vitto
    Hi, I'm trying to sort an array of objects in ActionScript 3. The array is built like this:

        var arr:Array = new Array();
        arr.push({name:"John", date:"20080324", message:"Hi"});
        arr.push({name:"Susan", date:"20090528", message:"hello"});

    Can I do something with the Array.sort(...) method?

  • List of objects plus a tag

    - by MC
    I want to store a list of objects, let's say of type Car, but with an additional 'tag' property (e.g. a boolean true/false) which does not belong on the Car class. What is the best way to accomplish this? I need to pass the result between methods.

  • Indexing large DB's with Lucene/PHP

    - by thebluefox
    Afternoon chaps. I'm trying to index a 1.7-million-row table with the Zend port of Lucene. On small tests of a few thousand rows it worked perfectly, but as soon as I try to push the row count up to a few tens of thousands, it times out. Obviously I could increase the time PHP allows the script to run, but seeing as 360 seconds gets me ~10,000 rows, I'd hate to think how many seconds it would take to do 1.7 million. I've also tried making the script index a few thousand rows, refreshing, and then indexing the next few thousand, but doing this clears the index each time. Any ideas, guys? Thanks :)

  • Python 3.1 - Memory Error during sampling of a large list

    - by jimy
    The input list can contain more than 1 million numbers. When I run the following code with a smaller 'repeats' value, it's fine:

        def sample(x):
            length = 1000000
            new_array = random.sample((list(x)), length)
            return new_array

        def repeat_sample(x):
            repeats = 100
            list_of_samples = []
            for i in range(repeats):
                list_of_samples.append(sample(x))
            return list_of_samples

        repeat_sample(large_array)

    However, using a high repeats value such as the 100 above results in a MemoryError. The traceback is as follows:

        Traceback (most recent call last):
          File "C:\Python31\rnd.py", line 221, in <module>
            STORED_REPEAT_SAMPLE = repeat_sample(STORED_ARRAY)
          File "C:\Python31\rnd.py", line 129, in repeat_sample
            list_of_samples.append(sample(x))
          File "C:\Python31\rnd.py", line 121, in sample
            new_array = random.sample((list(x)), length)
          File "C:\Python31\lib\random.py", line 309, in sample
            result = [None] * k
        MemoryError

    I am assuming I'm running out of memory, but I do not know how to get around this problem. Thank you for your time!
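
    One way around this (a sketch, not the poster's code): if the 100 samples can be processed one at a time rather than all held in a list, a generator keeps only a single 1,000,000-element sample alive at once, and converting the input to a list once avoids repeating the list(x) copy on every call. The process() consumer below is hypothetical.

        import random

        def iter_samples(x, length=1000000, repeats=100):
            pool = list(x)  # materialize the input once, not once per sample
            for _ in range(repeats):
                yield random.sample(pool, length)  # one sample alive at a time

        for s in iter_samples(large_array):
            process(s)  # hypothetical: consume each sample, then let it be freed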

  • Cache for large read only database recommendation

    - by paddydub
    I am building a site with Spring, Hibernate and MySQL. The MySQL database contains information on coordinates, locations, etc.; it is never updated, only queried. The database contains 15,000 rows of coordinates and 48,000 rows of coordinate connections. Every time a request is processed, the application needs to read all these coordinates, which takes approx 3-4 seconds. I would like to set up a cache to allow quick access to the data. I'm researching memcached at the moment; can you please advise whether this would be my best option?

  • JSP application scope objects in Java library

    - by FrontierPsycho
    I am working on a preexisting web application built with JSP, which uses an external Java library. I want to make some JavaBeans that were instantiated with jsp:useBean tags available to the Java code. What would be a good practice to do that? I suppose I can pass the objects in question to every function call that requires them, but I'd like to avoid that.

  • How would you compare jQuery objects?

    - by Kyle
    So I'm trying to figure out how to compare two jQuery objects, to see if the parent element is the body of the page. Here's what I've got:

        if ( $(this).parent() === $('body') ) ...

    I know this is wrong, but if anybody understands what I'm getting at, could they point me towards the correct way of doing this?

  • Large tables of static data with DBGhost

    - by Paulo Manuel Santos
    We are thinking of restructuring our database development and deployment processes by using DBGhost; we want to move away from the central development database and bring the database into source control. One of the problems we have is a big table of static data (containing translated language strings); it has close to 200K rows. I know that our best solution is to move these strings into resource files, but until we implement that, will DBGhost be able to maintain all this static data and generate our development and deployment databases in a short time? And if not, is there a good alternative for filling up this table whenever we need to?

  • Unsure how to design JavaScript / jQuery functionality which uses XML to create HTML objects

    - by Jack Roscoe
    Hi, I'm using JavaScript and jQuery to read an XML document and subsequently use the information from the XML to create HTML objects. The main 'C' nodes in the XML document all have a type attribute, and depending on the type I want to run a function which will create a new HTML object using the other attributes assigned to that particular 'C' node. Currently, I have a for loop which extracts each 'C' node from the XML along with its attributes (e.g. width, height, x, y). Inside the for loop, an if statement checks the 'type' attribute of the 'C' node currently being processed and, depending on the type, runs a function which creates a new HTML object with the attributes drawn from the XML. The problem is that there may be more than one 'C' node of the same type. So, for example, in the function that runs when a 'C' node of 'type=1' is detected, I cannot use 'var p = document.createElement('p')', because if a 'C' node of the same type comes up later in the loop it will clash with and override the element held in the variable that was just created. I'm not really sure how to approach this. Here is my entire script; if you need me to elaborate on any parts, please ask. I'm sure it's not written in the nicest possible way:

        var arrayIds = new Array();
        $(document).ready(function() {
            $.ajax({
                type: "GET",
                url: "question.xml",
                dataType: "xml",
                success: function(xml) {
                    $(xml).find("C").each(function() {
                        arrayIds.push($(this).attr('ID'));
                    });
                    var svgTag = document.createElement('SVG');

                    // Create question type objects
                    function ctyp3(x, y, width, height, baC) {
                        alert('test');
                        var r = document.createElement('rect');
                        r.x = x;
                        r.y = y;
                        r.width = width;
                        r.height = height;
                        r.fillcolor = baC;
                        svgTag.appendChild(r);
                    }

                    // Extract question data from XML
                    var questions = [];
                    for (j = 0; j < arrayIds.length; j++) {
                        $(xml).find("C[ID='" + arrayIds[j] + "']").each(function() {
                            // pass values
                            questions[j] = {
                                typ: $(this).attr('typ'),
                                width: $(this).find("I").attr('wid'),
                                height: $(this).find("I").attr('hei'),
                                x: $(this).find("I").attr('x'),
                                y: $(this).find("I").attr('y'),
                                baC: $(this).find("I").attr('baC'),
                                boC: $(this).find("I").attr('boC'),
                                boW: $(this).find("I").attr('boW')
                            };
                            alert($(this).attr('typ'));
                            if ($(this).attr('typ') == '3') {
                                ctyp3(x, y, width, height, baC);
                                // alert('pass');
                            } else {
                                // Add here
                                // alert('fail');
                            }
                        });
                    }
                }
            });
        });

  • Efficient file buffering & scanning methods for large files in python

    - by eblume
    The description of the problem I am having is a bit complicated, and I will err on the side of providing more complete information. For the impatient, here is the briefest way I can summarize it: what is the fastest (least execution time) way to split a text file into ALL (overlapping) substrings of size N (bounded N, e.g. 36) while throwing out newline characters? I am writing a module which parses files in the FASTA ASCII-based genome format. These files comprise what is known as the 'hg18' human reference genome, which you can download from the UCSC genome browser (go Slugs!) if you like. As you will notice, the genome files are composed of chr[1..22].fa and chr[XY].fa, as well as a set of other small files which are not used in this module. Several modules already exist for parsing FASTA files, such as BioPython's SeqIO. (Sorry, I'd post a link, but I don't have the points to do so yet.) Unfortunately, every module I've been able to find doesn't do the specific operation I am trying to do. My module needs to split the genome data ('CAGTACGTCAGACTATACGGAGCTA' could be a line, for instance) into every single overlapping N-length substring. Let me give an example using a very small file (the actual chromosome files are between 355 and 20 million characters long) and N=8:

        >>> import cStringIO
        >>> example_file = cStringIO.StringIO("""\
        ... >header
        ... CAGTcag
        ... TFgcACF
        ... """)
        >>> for read in parse(example_file):
        ...     print read
        ...
        CAGTCAGTF
        AGTCAGTFG
        GTCAGTFGC
        TCAGTFGCA
        CAGTFGCAC
        AGTFGCACF

    The function that I found had the absolute best performance of the methods I could think of is this:

        def parse(file):
            size = 8  # of course in my code this is a function argument
            file.readline()  # skip past the header
            buffer = ''
            for line in file:
                buffer += line.rstrip().upper()
                while len(buffer) >= size:
                    yield buffer[:size]
                    buffer = buffer[1:]

    This works, but unfortunately it still takes about 1.5 hours (see note below) to parse the human genome this way. Perhaps this is the very best I am going to see with this method (a complete code refactor might be in order, but I'd like to avoid it, as this approach has some very specific advantages in other areas of the code), but I thought I would turn this over to the community. Thanks!

    Note: this time includes a lot of extra calculation, such as computing the opposing-strand read and doing hashtable lookups on a hash of approximately 5G in size.

    Post-answer conclusion: it turns out that using fileobj.read() and then manipulating the resulting string (string.replace(), etc.) took relatively little time and memory compared to the remainder of the program, and so I used that approach. Thanks, everyone!
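
    For reference, a minimal sketch of the whole-file approach described in the conclusion (hypothetical function name; assumes the file fits comfortably in memory, as these chromosome files do):

        def parse_whole(fileobj, size=8):
            fileobj.readline()  # skip past the FASTA header
            # read everything at once and strip newlines in a single pass
            data = fileobj.read().replace('\n', '').upper()
            for i in range(len(data) - size + 1):
                yield data[i:i + size]

    Each iteration slices one read out of the flat string, so no per-line buffer shuffling is needed.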

  • Color objects in C or C++ [closed]

    - by jazz
    Possible Duplicate: Colors in C language. I copied a game called Paratrooper from a book, and I'm asking this question again with the code that draws the objects included. I want to change the color of these objects, but I don't understand how. Note that these are not standard functions: I'm using the graphics library (graphics.h) for them, and I can't find a color function in that library's files. This code will not run properly on its own, so please just tell me something about the function that sets the color. I can't post the image; otherwise I would show it to you, which would make this a lot easier.

        #include "graphics.h"
        #include "stdio.h"
        #include "conio.h"
        #include "process.h"
        #include "alloc.h"
        #include "stdlib.h"
        #include "math.h"
        #include "dos.h"

        int helidraw(int x, int y, int d);

        int main()
        {
            int gm = CGAHI, gd = CGA;

            initgraph(&gd, &gm, "C:\\tc\\bgi");
            helidraw(246, 50, -1);
            getch();
            return 0;
        }

        int helidraw(int x, int y, int d)
        {
            int direction, i, j;

            if (d)
                direction = -1;
            else
                direction = 1;

            /* three stacked horizontal lines across the top (the rotor) */
            i = 3;
            j = 8;
            line(x - j - 8, y - i - 2, x + j + 8, y - i - 2);
            line(x - j + 5, y - i - 1, x + j - 5, y - i - 1);
            line(x - j, y - i, x + j, y - i);

            /* taper the upper body one pixel row at a time */
            for (; i > 0; i--, j += 2)
            {
                putpixel(x - (direction * j), y - i, 1);
                line(x + (direction * j), y - i, x + (direction * (j - 8)), y - i);
            }

            /* draw the tail boom off to one side */
            i = 0;
            j -= 2;
            line(x - (direction * j), y - i, x - (direction * (j + 17)), y - i);
            line(x - (direction * j), y - i + 1, x - (direction * (j + 7)), y - i + 1);
            putpixel(x - (direction * (j + 19)), y - i - 1, 1);

            /* widen the lower body, then draw the base lines */
            for (; i < 3; i++, j -= 2)
            {
                putpixel(x - j, y + i, 1);
                putpixel(x + j, y + i, 1);
            }
            line(x - j, y + i, x + j, y + i);
            putpixel(x - j + 3, y + i + 1, 1);
            putpixel(x + j - 3, y + i + 1, 1);
            line(x - j - 10, y + i + 2, x + j + 10, y + i + 2);
            putpixel(x + (direction * (j + 12)), y + i + 1, 1);
            return 0;
        }

  • Database for managing large volumes of (system) metrics

    - by symcbean
    Hi, I'm looking at building a system for managing and reporting stats on web page performance. I'll be collecting a lot more stats than are available in the standard log formats (approx 20 metrics), but compared to most types of database applications, the base data structure will be very simple. My problem is that I'll be accumulating a lot of data: in the region of 100,000 records (i.e. sets of metrics) per hour. Of course, resources are very limited! So that it's possible to sensibly interact with the data, I'd need to consolidate each metric into one-minute bins, broken down by URL; then, for anything more than 1 day old, consolidate into 10-minute bins; then, at 1 week, hourly bins. At the front end, I want to provide a view (preferably as plots) of the last hour of data, with the facility for users to drill up/down through defined hierarchies of URLs (which do not always map directly to the hierarchy expressed in the path of the URL) and to view different time frames. Rather than coding all this myself and using a relational database, I was wondering if there were tools available which would facilitate both the management of the data and the reporting. I had a look at Mondrian, however I can't see from the documentation I've looked at whether it's possible to drop the more granular information while maintaining the consolidated views of the data. RRDTool looks promising in terms of managing the data consolidation, but seems rather limited in terms of querying the dataset as a multi-dimensional/relational database. What else should I be looking at?
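
    For what it's worth, the binning scheme itself is easy to express independently of whichever storage/reporting tool is chosen; a sketch in Python with pandas (recent pandas; the column layout is hypothetical):

        import pandas as pd

        # df: one row per raw observation, with a DatetimeIndex, a 'url'
        # column, and ~20 numeric metric columns (hypothetical layout)
        def consolidate(df, freq):
            return df.groupby('url').resample(freq).mean()

        one_minute = consolidate(df, '1min')   # recent data
        ten_minute = consolidate(df, '10min')  # older than a day
        hourly     = consolidate(df, '1h')     # older than a week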

  • Exporting classes containing std:: objects (vector, map, etc) from a dll

    - by RnR
    I'm trying to export classes from a DLL that contain objects such as std::vectors and std::strings; the whole class is declared as DLL export through:

        class DLL_EXPORT FontManager {

    The problem is that for members of the complex types I get this warning:

        warning C4251: 'FontManager::m__fonts' : class 'std::map<_Kty,_Ty>' needs to
        have dll-interface to be used by clients of class 'FontManager'
        with
        [
            _Kty=std::string,
            _Ty=tFontInfoRef
        ]

    I'm able to remove some of the warnings by putting the following forward class declarations before them, even though I'm not changing the type of the member variables themselves:

        template class DLL_EXPORT std::allocator<tCharGlyphProviderRef>;
        template class DLL_EXPORT std::vector<tCharGlyphProviderRef, std::allocator<tCharGlyphProviderRef> >;
        std::vector<tCharGlyphProviderRef> m_glyphProviders;

    It looks like the forward declaration "injects" DLL_EXPORT for when the member is compiled, but is it safe? Does it really change anything when the client compiles this header and uses the std container on his side? Will it make all future uses of such a container DLL_EXPORT (and possibly not inline)? And does it really solve the problem that the warning tries to warn about? Is this warning anything I should be worried about, or would it be best to disable it in the scope of these constructs? The clients and the DLL will always be built using the same set of libraries and compilers, and those are header-only classes... I'm using Visual Studio 2003 with the standard STD library.

    ---- Update ----

    I'd like to target the question more precisely, as I see the answers are general, and here we're talking about std containers and types (such as std::string). Maybe the question really is: can we disable the warning for standard containers and types available to both the client and the DLL through the same library headers, and treat them just as we'd treat an int or any other built-in type? (It does seem to work correctly on my side.) If so, what should be the conditions under which we can do this? Or should using such containers perhaps be prohibited, or at least ultra care taken to make sure no assignment operators, copy constructors, etc. get inlined into the DLL client? In general, I'd like to know whether you feel designing a DLL interface with such objects (and, for example, using them to return stuff to the client as return value types) is a good idea or not, and why; I'd like to have a "high level" interface to this functionality... Maybe the best solution is what Neil Butterworth suggested: creating a static library?

  • Objective C code to handle large amount of data processing in iPhone

    - by user167662
    I have the following code that takes 14 MB or more of image data encoded as base64 strings and converts the images to JPEG before writing them to a file on the iPhone. It crashes my program, giving the following error:

        Program received signal: "0".
        warning: check_safe_call: could not restore current frame

    If I tweak my program, it can process a few more images before the error appears again. My code is as follows:

        // parameters is an array where the fourth element contains a list of
        // images as base64-encoded strings
        NSMutableArray *imageStrList = (NSMutableArray *)[parameters objectAtIndex:5];
        while (imageStrList.count != 0) {
            NSString *imgString = [imageStrList objectAtIndex:0];
            // Create a file name using my own Utility class
            NSString *fileName = [Utility generateFileNName];
            NSData *restoredImg = [NSData decodeWebSafeBase64ForString:imgString];
            UIImage *img = [UIImage imageWithData:restoredImg];
            NSData *imgJPEG = UIImageJPEGRepresentation(img, 0.4f);
            [imgJPEG writeToFile:fileName atomically:YES];
            [imageStrList removeObjectAtIndex:0];
        }

    I tried playing around with UIImageJPEGRepresentation and found that the lower the quality value, the more images it can process, but this should not be the way. I am wondering if there is any way to free up the memory used for each image immediately after processing it, so that it can be used by the next one in line.

  • POS for .NET Known Service Objects

    - by Oliver S
    Hi, I was wondering if anyone knew where I could find a list of LineDisplays, CashDrawers, and Printers that work well with POS for .NET. I want to avoid creating my own service objects for potential devices I might buy that are not supported. Thanks.

  • Reverse massive text file in Java

    - by DanJanson
    What would be the best approach to reverse a large text file that is uploaded asynchronously to a servlet, in a scalable and efficient way?

    - The text file can be massive (gigabytes long).
    - You can assume a multiple-server/clustered environment, so the work can be done in a distributed manner.
    - Open-source libraries are encouraged as suggestions.

    I was thinking of using Java NIO to treat the file as an array on disk (so that I don't have to hold the file as a string buffer in memory). Also, I am thinking of using MapReduce to break up the file and process it on separate machines. Any input is appreciated. Thanks. Daniel
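
    The single-machine core of this is a constant-memory backward scan; below is a sketch of the idea in Python (the question is about Java NIO, but the chunking logic carries over directly; this reverses byte order, and reversing line order is the same scan plus a search for newline boundaries):

        def reverse_file(src_path, dst_path, chunk_size=1 << 20):
            # read fixed-size chunks from the end of the source file
            # and write each chunk out reversed
            with open(src_path, 'rb') as src, open(dst_path, 'wb') as dst:
                src.seek(0, 2)          # 2 = os.SEEK_END
                pos = src.tell()
                while pos > 0:
                    n = min(chunk_size, pos)
                    pos -= n
                    src.seek(pos)
                    dst.write(src.read(n)[::-1])

    Only one chunk is ever in memory, so the file size is irrelevant; in a distributed setting, each worker can be handed a (start, length) range of the source and write its reversed chunk to the mirrored position in the output.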

  • Managing Large Database Entity Models

    - by ChiliYago
    I would like to hear how others are working (effectively or not) with the Visual Studio Entity Designer when many database tables exist. It seems to me that navigating the Designer is tough enough with just a few tables, but what about a database with, say, 100 to 200 tables? When a table change is made at the database level, how is the model updated? Does it overwrite any manual changes you have made to the model? How would you quickly find an entity in the designer to make a change or inspect one? It seems unrealistic to be scrolling around looking for a specific entity. Thanks for your feedback!

  • Finding cause of memory leaks in large PHP stacks

    - by Mike B
    I have a CLI script that runs over several thousand iterations between runs, and it appears to have a memory leak. I'm using a tweaked version of Zend Framework with Smarty for view templating, and each iteration uses several MB worth of code. The first run immediately uses nearly 8 MB of memory (which is fine), but every following run adds about 80 KB. My main loop looks like this (very simplified):

        $users = UsersModel::getUsers();
        foreach ($users as $user) {
            $obj = new doSomethingAwesome();
            $obj->run($user);
            $obj = null;
            unset($obj);
        }

    The point is that everything in scope should be unset and the memory freed. My understanding is that PHP runs its garbage collection process at its own discretion, but it does so at the end of functions/methods/scripts. So something must be leaking memory inside doSomethingAwesome(), but as I said, it is a huge stack of code. Ideally, I would love to find some sort of tool that displays all my variables, no matter the scope, at some point during execution: a kind of symbol-table viewer for PHP. Does anything like that, or any other tool that could help nail down memory leaks in PHP, exist?

  • Non standard interaction among two tables to avoid very large merge

    - by riko
    Suppose I have two tables A and B. Table A has a multi-level index (a, b) and one column (ts). b determines ts univocally.

        A = pd.DataFrame(
            [('a', 'x', 4), ('a', 'y', 6), ('a', 'z', 5),
             ('b', 'x', 4), ('b', 'z', 5), ('c', 'y', 6)],
            columns=['a', 'b', 'ts']).set_index(['a', 'b'])
        AA = A.reset_index()

    Table B is another one-column (ts) table with a non-unique index (a). The ts values are sorted "inside" each group, i.e. B.ix[x] is sorted for each x. Moreover, there is always a value in B.ix[x] that is greater than or equal to the values in A.

        B = pd.DataFrame(
            dict(a=list('aaaaabbcccccc'),
                 ts=[1, 2, 4, 5, 7, 7, 8, 1, 2, 4, 5, 8, 9])).set_index('a')

    The semantics of this is that B contains observations of occurrences of an event of the type indicated by the index. I would like to find, from B, the timestamp of the first occurrence of each event type at or after the timestamp specified in A, for each value of b. In other words, I would like to get a table with the same shape as A that, instead of ts, contains the "minimum value occurring after ts" as specified by table B. So my goal would be:

        C:
        ('a', 'x')    4
        ('a', 'y')    7
        ('a', 'z')    5
        ('b', 'x')    7
        ('b', 'z')    7
        ('c', 'y')    8

    I have some working code, but it is terribly slow:

        C = AA.apply(lambda row: (
                row[0], row[1],
                B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2]))),
            axis=1).set_index(['a', 'b'])

    Profiling shows the culprit is obviously B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2])). However, standard solutions using merge/join would take too much RAM in the long run. Consider that now I have 1000 a's; assume the average number of b's per a to be constant (probably 100-200), and consider that the number of observations per a is probably in the order of 300. In production I will have 1000 more a's. 1,000,000 x 200 x 300 = 60,000,000,000 rows may be a bit too much to keep in RAM, especially considering that the data I need is perfectly described by a C like the one I discussed above. How would I improve the performance?
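
    One direction that avoids both the row-wise apply and a full merge (a sketch for a recent pandas; first_at_or_after is a hypothetical helper name): since B is already sorted within each group, one vectorized searchsorted call per 'a' group replaces the per-row Python calls.

        import numpy as np
        import pandas as pd

        def first_at_or_after(AA, B):
            out = AA.copy()
            pieces = []
            for a, group in AA.groupby('a'):
                # assumes every 'a' has several observations in B
                ts_sorted = B.loc[a, 'ts'].values   # sorted within the group
                idx = np.searchsorted(ts_sorted, group['ts'].values)
                pieces.append(pd.Series(ts_sorted[idx], index=group.index))
            out['ts'] = pd.concat(pieces)
            return out.set_index(['a', 'b'])

        C = first_at_or_after(AA, B)

    On the example data above, this reproduces the goal table C, and the loop runs once per distinct a rather than once per row of AA.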

  • Migrating from Physical SQL (SQL2000) To VMWare machine (SQL2008) - Transferring Large DB

    - by alex
    We're in the middle of migrating from a physical Windows & SQL 2000 box to a virtualised Windows & SQL 2008 box. The VMware box is on a different site, with better hardware, connectivity, etc. The old (current) physical machine is still in constant use. I've taken a backup of the DB on this machine, which is 21 GB, and transferring it to our virtual machine took around 7+ hours, which isn't ideal when we do the "actual" switchover. My question is: how should I handle the migration better? Could I set up our current machine to log-ship to the VM machine to keep it up to date, then schedule downtime out of hours to do the switchover? Is there a better way?

  • bitshift large strings for encoding QR Codes

    - by icekreaman
    As an example, suppose a QR code data stream contains 55 data words (each one byte in length) and 15 error correction words (again one byte each). The data stream begins with a 12-bit header and ends with four 0 bits. So 12 + 4 bits of header/footer and 15 bytes of error correction leave me 53 bytes to hold 53 alphanumeric characters. The 53 bytes of data and 15 bytes of EC are supplied in a string of length 68 (str68). The problem seems simple enough: concatenate 2 bytes of (right-shifted) header data with str68, and then left-shift the entire 70 bytes by 4 bits. This is the first time in many years of programming that I have ever needed to do something like this. I am a C and bit-shifting noob, so please be gentle... I have done a little investigation and so far have not been able to figure out how to bit-shift 70 bytes of data; any help would be greatly appreciated. Larger QR codes can hold 2000 bytes of data...
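
    For illustration, a sketch of the nibble-carry idea in Python (the names are made up; the same loop ports directly to C over an array of unsigned chars): after the shift, each output byte is the low four bits of the current byte followed by the high four bits of the next.

        def shift_left_4bits(data):
            # shift a whole byte string left by 4 bits; the top nibble
            # (zero here, since the 12-bit header is right-aligned in
            # 2 bytes) falls off, and a zero nibble enters at the end
            padded = data + b'\x00'
            return bytes(((padded[i] << 4) & 0xFF) | (padded[i + 1] >> 4)
                         for i in range(len(data)))

        stream = shift_left_4bits(header2 + str68)  # hypothetical 2 + 68 byte inputs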

  • Best practice to modularise a large Grails app?

    - by Mulone
    Hi all, a Grails app I'm working on is becoming pretty big, and it would be good to refactor it into several modules so that we don't have to redeploy the whole thing every time. In your opinion, what is the best practice for splitting a Grails app into several modules? In particular, I'd like to create a package of domain classes plus the relevant services and use it in the app as a module. Is this possible? Is it possible to do with plugins? Cheers, Mulone
