Search Results

Search found 43200 results on 1728 pages for 'large object pattern'.


  • Finding cause of memory leaks in large PHP stacks

    - by Mike B
    I have a CLI script that runs through several thousand iterations each run, and it appears to have a memory leak. I'm using a tweaked version of Zend Framework with Smarty for view templating, and each iteration uses several MB worth of code. The first run immediately uses nearly 8MB of memory (which is fine), but every following run adds about 80KB. My main loop looks like this (very simplified):

        $users = UsersModel::getUsers();
        foreach ($users as $user) {
            $obj = new doSomethingAwesome();
            $obj->run($user);
            $obj = null;
            unset($obj);
        }

    The point is that everything in scope should be unset and the memory freed. My understanding is that PHP runs through its garbage-collection process at its own discretion, but that it does so at the end of functions/methods/scripts. So something must be leaking memory inside doSomethingAwesome(), but as I said, it is a huge stack of code. Ideally, I would love to find some sort of tool that displays all my variables, no matter the scope, at some point during execution: some sort of symbol-table viewer for PHP. Does anything like that exist, or any other tools that could help nail down memory leaks in PHP?

    Read the article

  • Load large images into Bitmap?

    - by GuyNoir
    I'm trying to make a basic application that displays an image from the camera, but when I try to load the .jpg in from the SD card with BitmapFactory.decodeFile, it returns null. It doesn't give an out-of-memory error, which I find strange, but the exact same code works fine on smaller images. How does the stock gallery display huge pictures from the camera with so little memory?

    Read the article

  • Override Maven's default resource filter replacement pattern

    - by Matt Campbell
    By default, Maven will filter resources like this:

        <properties>
          <replace.me>value</replace.me>
        </properties>

        <some-tag>
          <key>${replace.me}</key>
        </some-tag>

    will get you:

        <some-tag>
          <key>value</key>
        </some-tag>

    Is there a way to override the way Maven selects the strings to replace? Specifically, I want to be able to use this:

        <some-tag>
          <key>@replace.me@</key>
        </some-tag>

    to get the same result as above.
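
    A sketch of one way this is commonly configured, assuming a reasonably recent maven-resources-plugin (the plugin's filtering delimiters are configurable; the exact version and element names should be checked against the Maven documentation):

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-resources-plugin</artifactId>
          <configuration>
            <delimiters>
              <!-- a single @ declares the pair, i.e. @token@ -->
              <delimiter>@</delimiter>
            </delimiters>
            <!-- drop ${token} filtering entirely if only @token@ is wanted -->
            <useDefaultDelimiters>false</useDefaultDelimiters>
          </configuration>
        </plugin>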

    Read the article

  • design pattern tools to use?

    - by ajsie
    I have noticed that every area has some tools you can use to make things easier. E.g., for CSS there is Dreamweaver; for Doctrine/Propel there is ORM Designer (you don't have to hand-code schemas and remember all the syntax/variables); for MySQL there is MySQL Workbench (the same); etc. In this way you get aided and don't have to type things the hard way: you can learn the structure, but then use GUI tools to help you develop faster. Now I'm learning design patterns (Singleton, Factory, Command, Memento, etc.) and I'm wondering if there are some kinds of tools that will help you develop faster with them. I don't know exactly what tools I'm trying to find, just something that helps when coding with design patterns (schema drawings? references?). Are there any?

    Read the article

  • Count subset of binary pattern

    - by mr.bio
    Hi there. I have A, a set of strings, and B, a separate string. I want to count the number of strings in A that B is a subset of. Example:

        A: 10001 10011 11000 10010 10101
        B: 10001

    The result would be 3 (10001 is a subset of 10001, 10011, and 10101). So I need a function that takes a set and a string and returns an int:

        int myfunc(set<string> A, string B) {
            int result;
            // My brain is melting
            return result;
        }
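
    For what it's worth, the subset test itself is plain bit arithmetic: parse each string as a base-2 integer and check that every 1-bit of B survives a bitwise AND. A minimal sketch, in Python for brevity, since the same mask test ports directly to C++:

        def count_matches(patterns, b):
            mask = int(b, 2)  # '10001' -> 17
            # b is a subset of a when AND-ing with a leaves all of b's bits set
            return sum(1 for a in patterns if int(a, 2) & mask == mask)

        print(count_matches({'10001', '10011', '11000', '10010', '10101'}, '10001'))  # 3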

    Read the article

  • Google App Engine - Dealing with concurrency issues of storing an object

    - by Spines
    My User object that I want to create and store in the datastore has an email and a username. How do I make sure, when creating my User object, that another User object doesn't already have either the same email or the same username? If I just do a query to see if any other users have already used the username or the email, then there could be a race condition. UPDATE: The solution I'm currently considering is to use Memcache to implement a locking mechanism. I would acquire 2 locks before trying to store the User object in the datastore: first a lock based on the email, then another based on the username. Since creating new User objects only happens at user registration time, and it's even rarer that two people try to use either the same username or the same email, I think it's okay to take the performance hit of locking. I'm thinking of using the Memcache locking code that is here: http://appengine-cookbook.appspot.com/recipe/mutex-using-memcache-api/ What do you guys think?
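
    A lock-free alternative worth considering, sketched below with the legacy google.appengine.ext.db API and a hypothetical Account model: make the username the entity's key_name, since key names are unique and can be checked and claimed atomically inside a datastore transaction. Email uniqueness would need a second marker entity keyed by the email, claimed the same way.

        from google.appengine.ext import db

        class Account(db.Model):          # hypothetical model, keyed by username
            email = db.StringProperty(required=True)

        def create_account(username, email):
            def txn():
                if Account.get_by_key_name(username) is not None:
                    return None           # username already taken
                account = Account(key_name=username, email=email)
                account.put()
                return account
            return db.run_in_transaction(txn)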

    Read the article

  • JavaScript regex multiple group matches in pattern

    - by mummybot
    I have a question for a JavaScript regex ninja: how could I simplify my variable creation from a string using regex? I currently have it working without using any grouping, but I would love to see a better way! The string is:

        var url = 'resources/css/main.css?detect=#information{width:300px;}';

    The code that works is:

        var styleStr = /[^=]+$/.exec(url).toString();
        var id       = /[^\#][^\{]+/.exec(styleStr).toString();
        var property = /[^\{]+/.exec(/[^\#\w][^\:]+/.exec(styleStr)).toString();
        var value    = /[^\:]+/.exec(/[^\#\w\{][^\:]+[^\;\}]/.exec(styleStr)).toString();

    This gives:

        alert(id);       // information
        alert(property); // width
        alert(value);    // 300px

    Any takers?
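
    A single pattern with three capture groups can pull all three values in one pass. A sketch in Python for illustration; the identical pattern works in JavaScript as /#([^{]+)\{([^:]+):([^;]+);?\}/.exec(styleStr), with the groups in match[1] through match[3]:

        import re

        url = 'resources/css/main.css?detect=#information{width:300px;}'
        match = re.search(r'#([^{]+)\{([^:]+):([^;]+);?\}', url)
        if match:
            element_id, prop, value = match.groups()
            # element_id == 'information', prop == 'width', value == '300px'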

    Read the article

  • Efficient file buffering & scanning methods for large files in Python

    - by eblume
    The description of the problem I am having is a bit complicated, and I will err on the side of providing more complete information. For the impatient, here is the briefest way I can summarize it: what is the fastest (least execution time) way to split a text file into ALL (overlapping) substrings of size N (bound N, e.g. 36) while throwing out newline characters? I am writing a module which parses files in the FASTA ASCII-based genome format. These files comprise what is known as the 'hg18' human reference genome, which you can download from the UCSC genome browser (go slugs!) if you like. As you will notice, the genome files are composed of chr[1..22].fa and chr[XY].fa, as well as a set of other small files which are not used in this module. Several modules already exist for parsing FASTA files, such as BioPython's SeqIO. (Sorry, I'd post a link, but I don't have the points to do so yet.) Unfortunately, every module I've been able to find doesn't do the specific operation I am trying to do. My module needs to split the genome data ('CAGTACGTCAGACTATACGGAGCTA' could be a line, for instance) into every single overlapping N-length substring. Let me give an example using a very small file (the actual chromosome files are between 355 and 20 million characters long) and N=8:

        import cStringIO
        example_file = cStringIO.StringIO("""\
        header
        CAGTcag
        TFgcACF
        """)
        for read in parse(example_file):
            print read

        CAGTCAGT
        AGTCAGTF
        GTCAGTFG
        TCAGTFGC
        CAGTFGCA
        AGTFGCAC
        GTFGCACF

    The function that I found had the absolute best performance from the methods I could think of is this:

        def parse(file):
            size = 8  # of course in my code this is a function argument
            file.readline()  # skip past the header
            buffer = ''
            for line in file:
                buffer += line.rstrip().upper()
                while len(buffer) >= size:
                    yield buffer[:size]
                    buffer = buffer[1:]

    This works, but unfortunately it still takes about 1.5 hours (see note below) to parse the human genome this way. Perhaps this is the very best I am going to see with this method (a complete code refactor might be in order, but I'd like to avoid it, as this approach has some very specific advantages in other areas of the code), but I thought I would turn this over to the community. Thanks! Note: this time includes a lot of extra calculation, such as computing the opposing strand read and doing hashtable lookups on a hash of approximately 5G in size. Post-answer conclusion: it turns out that using fileobj.read() and then manipulating the resulting string (string.replace(), etc.) took relatively little time and memory compared to the remainder of the program, and so I used that approach. Thanks everyone!
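
    For the record, here is a minimal sketch of that read-then-replace approach, assuming a whole chromosome fits comfortably in memory (tens of megabytes does):

        def parse(fileobj, size=8):
            fileobj.readline()                      # skip past the FASTA header
            data = fileobj.read().replace('\n', '').upper()
            # xrange avoids materializing millions of indices on Python 2.x
            for i in xrange(len(data) - size + 1):
                yield data[i:i + size]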

    Read the article

  • WPF command pattern

    - by evan
    I have a WPF GUI which displays a list of information in a separate window and in a separate thread from the main application. As the user performs actions in the main window, the side window is updated. (For example, if you clicked page down in the main window, a listbox in the side window would page down.) Right now the architecture for this application feels very messy, and I'm sure there is a cleaner way to do it. It looks like this: the main window contains a singleton SideWindowControl which communicates with an instance of the SideWindowDisplay using events. So, for example, the page-down button works like this:

    1) The event handler of the button on the main window calls SideWindowControl.PageDown().
    2) In the PageDown() function, an event is created and raised.
    3) Finally, the GUI, ShowSideWindowDisplay, which subscribes to the SideWindowControl.Actions event, handles the event and actually scrolls the listbox down. Note that because it is in a different thread, it has to do that by running the command via Dispatcher.Invoke().

    This just seems like a very messy way to do this, and there must be a clearer way. (The only part that can't change is that the main window and the side window must be on different threads.) Perhaps using WPF commands? I'd really appreciate any suggestions! Thanks

    Read the article

  • Managing Large Database Entity Models

    - by ChiliYago
    I would like to hear how others are working effectively (or not) with the Visual Studio Entity Designer when many database tables exist. It seems to me that navigating the Designer is tough enough when trying to find what you are looking for with just a few tables, but how about a database with, say, 100 to 200 tables? When a table change is made at the database level, how is the model updated? Does it overwrite any manual changes you have made to the model? How would you quickly find an entity in the designer to make a change or inspect a change? It seems unrealistic to be scrolling around looking for a specific entity. Thanks for your feedback!

    Read the article

  • Non-standard interaction between two tables to avoid a very large merge

    - by riko
    Suppose I have two tables, A and B. Table A has a multi-level index (a, b) and one column (ts). b uniquely determines ts.

        A = pd.DataFrame(
            [('a', 'x', 4), ('a', 'y', 6), ('a', 'z', 5),
             ('b', 'x', 4), ('b', 'z', 5), ('c', 'y', 6)],
            columns=['a', 'b', 'ts']).set_index(['a', 'b'])
        AA = A.reset_index()

    Table B is another one-column (ts) table with a non-unique index (a). The ts's are sorted "inside" each group, i.e., B.ix[x] is sorted for each x. Moreover, there is always a value in B.ix[x] that is greater than or equal to the values in A.

        B = pd.DataFrame(
            dict(a=list('aaaaabbcccccc'),
                 ts=[1, 2, 4, 5, 7, 7, 8, 1, 2, 4, 5, 8, 9])).set_index('a')

    The semantics of this is that B contains observations of occurrences of an event of the type indicated by the index. I would like to find from B the timestamp of the first occurrence of each event type after the timestamp specified in A, for each value of b. In other words, I would like to get a table with the same shape as A that, instead of ts, contains the "minimum value occurring after ts" as specified by table B. So, my goal would be:

        C:
        ('a', 'x')  4
        ('a', 'y')  7
        ('a', 'z')  5
        ('b', 'x')  7
        ('b', 'z')  7
        ('c', 'y')  8

    I have some working code, but it is terribly slow:

        C = AA.apply(lambda row: (
            row[0], row[1],
            B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2]))),
            axis=1).set_index(['a', 'b'])

    Profiling shows the culprit is obviously B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2])). However, standard solutions using merge/join would take too much RAM in the long run. Consider that right now I have 1000 a's; assume the average number of b's per a to be constant (probably 100-200), and consider that the number of observations per a is probably on the order of 300. In production I will have 1000 times more a's. 1,000,000 x 200 x 300 = 60,000,000,000 rows may be a bit too much to keep in RAM, especially considering that the data I need is perfectly described by a C like the one I discussed above. How would I improve the performance?
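
    One direction worth trying, sketched below under the assumption that every a in A also appears in B with several observations: keep searchsorted, but call it once per a-group on the whole vector of query timestamps instead of once per row, which removes the per-row .ix/.irow overhead.

        import numpy as np
        import pandas as pd

        def first_after(A, B):
            pieces = []
            for a, group in A.groupby(level='a'):
                ts_sorted = np.asarray(B.ts[a])           # B's timestamps for this a, already sorted
                # index of the first timestamp >= each query value
                idx = np.searchsorted(ts_sorted, group['ts'].values)
                pieces.append(pd.Series(ts_sorted[idx], index=group.index, name='ts'))
            return pd.concat(pieces).sort_index().to_frame()

        C = first_after(A, B)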

    Read the article

  • Storing millions of URLs in a database for fast pattern matching

    - by Paras Chopra
    I am developing a web-analytics kind of system which needs to log the referring URL, landing-page URL, and search keywords for every visitor on the website. What I want to do with this collected data is to allow the end user to query it, such as "Show me all visitors who came from Bing.com searching for a phrase that contains 'red shoes'" or "Show me all visitors who landed on a URL that contained 'campaign=twitter_ad'", etc. Because this system will be used on many big websites, the amount of data that needs to be logged will grow really, really fast. So, my questions: a) what would be the best strategy for logging, so that scaling the system doesn't become a pain; b) how would that architecture be used for rapid querying of arbitrary requests? Is there a special method of storing URLs so that querying them gets faster? In addition to the MySQL database that I use, I am exploring (and am open to) other alternatives better suited for this task.
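
    One common tactic, sketched here with hypothetical column names and not tied to any particular storage engine: decompose each URL once at logging time, so later queries hit indexed columns (host, path, individual query parameters) instead of scanning raw strings with LIKE '%...%'.

        from urlparse import urlparse, parse_qsl   # urllib.parse in Python 3

        def url_to_row(url):
            parts = urlparse(url)
            return {
                'host': parts.netloc.lower(),            # e.g. 'bing.com', indexable
                'path': parts.path,
                'params': dict(parse_qsl(parts.query)),  # e.g. {'campaign': 'twitter_ad'}
            }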

    Read the article

  • Assigning large UInt32 constants in VB.Net

    - by Kumba
    I inquired about VB's erratic behavior of treating all numerics as signed types back in this question, and from the accepted answer there I was able to get by. Per that answer (Visual Basic Literals): "Also keep in mind you can add literals to your code in VB.net and explicitly state constants as unsigned." So I tried this:

        Friend Const POW_1_32 As UInt32 = 4294967296UI

    and VB.NET throws an Overflow error in the IDE. Pulling out the integer overflow checks doesn't seem to help; this appears to be a flaw in the IDE itself. This, however, doesn't generate an error:

        Friend Const POW_1_32 As UInt64 = 4294967296UL

    So this suggests to me that the IDE isn't properly parsing the code and understanding the difference between Int32 and UInt32. Any suggested workarounds and/or possible clues on when MS will make unsigned data types intrinsic to the framework instead of the hacks they currently are?

    Read the article

  • The simplest concurrency pattern

    - by Ilya Kogan
    Please help me recall one of the simplest parallel programming techniques. How do I do the following in C#?

        Initial state: semaphore counter = 0

        Thread 1:
        // Block until semaphore is signalled
        semaphore.Wait(); // wait until semaphore counter is 1

        Thread 2:
        // Allow thread 1 to run:
        semaphore.Signal(); // increments from 0 to 1

    It's not a mutex, because there is no critical section (or rather, you could say there is an infinite critical section). So what is it?
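
    What the snippet describes is a semaphore used as a one-shot signal rather than as a mutex. A runnable Python analogue of the same pattern, where threading.Semaphore's acquire/release correspond to the Wait/Signal calls above:

        import threading

        semaphore = threading.Semaphore(0)    # counter starts at 0

        def thread1():
            semaphore.acquire()               # blocks until the counter reaches 1
            print('signalled')

        t = threading.Thread(target=thread1)
        t.start()
        semaphore.release()                   # thread 2: increments 0 -> 1
        t.join()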

    Read the article

  • Hibernate Updating an existing entity with a new object

    - by DD
    Hi, Assume I have an entity Foo in the DB. I am parsing some files and creating new Foo objects, and would like to check whether a parsed Foo object already exists in the DB (using a unique attribute). If it already exists, update it; otherwise, save it as a new object. What is the best approach? Could I simply set the id and version in the new Foo object? Or would I be better off loading the Foo object from the DB and copying over the properties from the parsed file? Thanks.

    Read the article

  • Hang during databinding of large amount of data to WPF DataGrid

    - by nihi_l_ist
    I'm using the WPFToolkit DataGrid control and do the binding in this way:

        <WpfToolkit:DataGrid x:Name="dgGeneral"
            SelectionMode="Single"
            SelectionUnit="FullRow"
            AutoGenerateColumns="False"
            CanUserAddRows="False"
            CanUserDeleteRows="False"
            Grid.Row="1"
            ItemsSource="{Binding Path=Conversations}" >

        public List<CONVERSATION> Conversations
        {
            get { return conversations; }
            set
            {
                if (conversations != value)
                {
                    conversations = value;
                    NotifyPropertyChanged("Conversations");
                }
            }
        }

        public event PropertyChangedEventHandler PropertyChanged;

        public void NotifyPropertyChanged(string propertyName)
        {
            if (PropertyChanged != null)
            {
                PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
            }
        }

        public void GenerateData()
        {
            BackgroundWorker bw = new BackgroundWorker();
            bw.WorkerSupportsCancellation = bw.WorkerReportsProgress = true;
            List<CONVERSATION> list = new List<CONVERSATION>();
            bw.DoWork += delegate { list = RefreshGeneralData(); };
            bw.RunWorkerCompleted += delegate
            {
                try
                {
                    Conversations = list;
                }
                catch (Exception ex)
                {
                    CustomException.ExceptionLogCustomMessage(ex);
                }
            };
            bw.RunWorkerAsync();
        }

    Then, in the main window, I call GenerateData() after setting the DataContext of the window to an instance of the class containing GenerateData(). RefreshGeneralData() returns the data I want, and it returns it fast. Overall there are nearly 2000 records and 6 columns (I'm not posting the code I used during the grid's initialization, because I don't think it can be the reason), and the grid hangs for almost 10 seconds!

    Read the article

  • small scale web site - global javascript file style/format/pattern - improving maintainability

    - by yaya3
    I frequently create (and inherit) small to medium websites where I have the following sort of code in a single file (normally named global.js, application.js, or projectname.js). If functions get big, I normally put them in a separate file and call them at the bottom of the file, in the $(document).ready() section below. If I have a few functions that are unique to certain pages, I normally have another switch statement for the body class inside the $(document).ready() section. How could I restructure this code to make it more maintainable? Note: I am less interested in the functions' innards, more in the structure and how different types of functions should be dealt with. I've also posted the code here - http://pastie.org/999932 - in case it makes it any easier.

        var ProjectNameEnvironment = {};

        function someFunctionUniqueToTheHomepageNotWorthMakingConfigurable() {
            $('.foo').hide();
            $('.bar').click(function() {
                $('.foo').show();
            });
        }

        function functionThatIsWorthMakingConfigurable(config) {
            var foo = config.foo || 700;
            var bar = 200;
            return foo * bar;
        }

        function globallyRequiredJqueryPluginTrigger(tooltip_string) {
            var tooltipTrigger = $(tooltip_string);
            tooltipTrigger.tooltip({
                showURL: false
                ...
            });
        }

        function minorUtilityOneLiner(selector) {
            $(selector).find('li:even').not('li ul li').addClass('even');
        }

        var Lightbox = {};

        Lightbox.setup = function() {
            $('li#foo a').attr('href', '#alpha');
            $('li#bar a').attr('href', '#beta');
        }

        Lightbox.init = function(config) {
            if (typeof $.fn.fancybox == 'function') {
                Lightbox.setup();
                var fade_in_speed = config.fade_in_speed || 1000;
                var frame_height = config.frame_height || 1700;
                $(config.selector).fancybox({
                    frameHeight: frame_height,
                    callbackOnShow: function() {
                        var content_to_load = config.content_to_load;
                        ...
                    },
                    callbackOnClose: function() {
                        $('body').height($('body').height());
                    }
                });
            } else {
                if (ProjectNameEnvironment.debug) {
                    alert('the fancybox plugin has not been loaded');
                }
            }
        }

        // ---------- order of execution -----------

        $(document).ready(function() {
            urls = urlConfig();

            (function globalFunctions() {
                $('.tooltip-trigger').each(function() {
                    globallyRequiredJqueryPluginTrigger(this);
                });
                minorUtilityOneLiner('ul.foo');
                Lightbox.init({
                    selector: 'a#a-lightbox-trigger-js',
                    ...
                });
                Lightbox.init({
                    selector: 'a#another-lightbox-trigger-js',
                    ...
                });
            })();

            if ($('body').attr('id') == 'home-page') {
                (function homeFunctions() {
                    someFunctionUniqueToTheHomepageNotWorthMakingConfigurable();
                })();
            }
        });

    Read the article

  • How to write a large number of nested records in JSON with Python

    - by jamesmcm
    I want to produce a JSON file containing some initial parameters, and then records of data, like this:

        {
            "measurement": 15000,
            "imi": 0.5,
            "times": 30,
            "recalibrate": false,
            {
                "colorlist": [234, 431, 134],
                "speclist": [0.34, 0.42, 0.45, 0.34, 0.78]
            }
            {
                "colorlist": [214, 451, 114],
                "speclist": [0.44, 0.32, 0.45, 0.37, 0.53]
            }
            ...
        }

    How can this be achieved using the Python json module? The data records cannot be added by hand, as there are very many.
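
    Note that the target shown above is not strictly valid JSON: the per-record objects inside the outer object have no key. A minimal sketch that keeps the initial parameters and nests the records under a hypothetical "records" key, which json.dump can then write in one call:

        import json

        doc = {
            "measurement": 15000,
            "imi": 0.5,
            "times": 30,
            "recalibrate": False,
            "records": [],
        }

        # records would come from the real data source; two hard-coded here
        for colors, specs in [([234, 431, 134], [0.34, 0.42, 0.45, 0.34, 0.78]),
                              ([214, 451, 114], [0.44, 0.32, 0.45, 0.37, 0.53])]:
            doc["records"].append({"colorlist": colors, "speclist": specs})

        with open("output.json", "w") as f:
            json.dump(doc, f, indent=4)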

    Read the article

  • Bitshift large strings for encoding QR codes

    - by icekreaman
    As an example, suppose a QR code data stream contains 55 data words (each one byte in length) and 15 error-correction words (again one byte each). The data stream begins with a 12-bit header and ends with four 0 bits. So, 12 + 4 bits of header/footer and 15 bytes of error correction leave me 53 bytes to hold 53 alphanumeric characters. The 53 bytes of data and 15 bytes of EC are supplied in a string of length 68 (str68). The problem seems simple enough: concatenate 2 bytes of (right-shifted) header data with str68, and then left-shift the entire 70 bytes by 4 bits. This is the first time in many years of programming that I have ever needed to do something like this; I am a C and bit-shifting noob, so please be gentle... I have done a little investigation and so far have not been able to figure out how to bit-shift 70 bytes of data; any help would be greatly appreciated. Larger QR codes can hold 2000 bytes of data...
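
    A sketch of the packing step in Python 3, where arbitrary-precision integers absorb the cross-byte carries that make a 4-bit shift fiddly in C (the C equivalent is a loop computing out[i] = ((in[i] << 4) & 0xFF) | (in[i+1] >> 4)):

        def pack_stream(body, header, header_bits=12, footer_bits=4):
            # body: the 68 data + error-correction bytes; header: a 12-bit value
            n = int.from_bytes(bytes(body), 'big')
            n |= header << (len(body) * 8)   # prepend the 12-bit header
            n <<= footer_bits                # append four 0 bits
            total_bits = header_bits + len(body) * 8 + footer_bits
            return n.to_bytes(total_bits // 8, 'big')  # 70 bytes for a 68-byte body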

    Read the article

  • Python 3.1 - Memory Error during sampling of a large list

    - by jimy
    The input list can contain more than 1 million numbers. When I run the following code with a smaller 'repeats', it's fine:

        def sample(x):
            length = 1000000
            new_array = random.sample((list(x)), length)
            return (new_array)

        def repeat_sample(x):
            i = 0
            repeats = 100
            list_of_samples = []
            for i in range(repeats):
                list_of_samples.append(sample(x))
            return (list_of_samples)

        repeat_sample(large_array)

    However, using high repeats, such as the 100 above, results in a MemoryError. The traceback is as follows:

        Traceback (most recent call last):
          File "C:\Python31\rnd.py", line 221, in <module>
            STORED_REPEAT_SAMPLE = repeat_sample(STORED_ARRAY)
          File "C:\Python31\rnd.py", line 129, in repeat_sample
            list_of_samples.append(sample(x))
          File "C:\Python31\rnd.py", line 121, in sample
            new_array = random.sample((list(x)),length)
          File "C:\Python31\lib\random.py", line 309, in sample
            result = [None] * k
        MemoryError

    I am assuming I'm running out of memory. I do not know how to get around this problem. Thank you for your time!
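
    Two things compound here: list(x) copies the million-element input on every call, and all 100 samples (each a million-element list) are held in memory at once. A sketch that copies once and yields the samples one at a time, assuming each sample can be processed independently:

        import random

        def iter_samples(x, repeats=100, length=1000000):
            population = list(x)              # copy once, not once per sample
            for _ in range(repeats):
                yield random.sample(population, length)

        for s in iter_samples(large_array):   # large_array as in the question
            process(s)                        # hypothetical per-sample processing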

    Read the article

  • Problem processing large data using Applet-Servlet communication

    - by Marquinio
    Hi everyone. I have an Applet that makes a request to a Servlet. On the servlet side, it uses the PrintWriter to write the response back to the Applet:

        out.println("Field1|Field2|Field3|Field4|Field5......|Field10");

    There are about 15000 records, so the out.println() gets executed about 15000 times. The problem is that when the Applet gets the response from the Servlet, it takes about 15 minutes to process the records. I placed System.out.println's, and processing pauses at around 5000; then, after 15 minutes, it continues processing and finishes. Has anyone faced a similar problem? The servlet takes about 2 seconds to execute, so it seems that the browser/Applet is too slow to process the records. Any ideas appreciated. Thanks.

    Read the article

  • How to access parent window variables from object

    - by Pickle
    I've got an XHTML 1.1 Strict document that is loading another XHTML 1.1 document in an <object> element (as <iframe> isn't part of the XHTML 1.1 spec). I'm having trouble in IE8 (I don't care about 6 or 7) with accessing a JavaScript variable, Lightbox, in the parent window from the document loaded in the <object>. In Firefox, and everywhere I've seen online, I can just use window.parent.Lightbox. In IE8, however, I get it being undefined. window.parent does give me an object, but it doesn't have my Lightbox variable. I've also tried window.Lightbox, window.top.Lightbox, and window.top.document.Lightbox, but all return undefined. I should mention I'm using JavaScript to set the data property of the <object>, but I don't see how that could affect anything relevant. What JavaScript-fu do I need to be able to access my Lightbox variable?

    Read the article
