Search Results

Search found 43200 results on 1728 pages for 'large object pattern'.


  • Experience migrating legacy Cobol/PL1 to Java

    - by MadMurf
    ORIGINAL Q: I'm wondering if anyone has had experience of migrating a large Cobol/PL1 codebase to Java? How automated was the process and how maintainable was the output? How did the move from transactional to OO work out? Any lessons learned along the way or resources/white papers that may be of benefit would be appreciated.

    EDIT 7/7: Certainly the NACA approach is interesting; the ability to continue making your BAU changes to the COBOL code right up to the point of releasing the Java version has merit for any organization. The argument for procedural Java in the same layout as the COBOL, to give the coders a sense of comfort while familiarizing themselves with the Java language, is a valid argument for a large organisation with a large code base. As @Didier points out, the $3mil annual saving gives scope for generous padding on any BAU changes going forward to refactor the code on an ongoing basis. As he puts it, if you care about your people you find a way to keep them happy while gradually challenging them. The problem as I see it with the suggestion from @duffymo, to best try and really understand the problem at its roots and re-express it as an object-oriented system, is that if you have any BAU changes ongoing, then during the LONG project lifetime of coding your new OO system you end up coding & testing those changes twice. That is a major benefit of the NACA approach. I've had some experience of migrating client-server applications to a web implementation, and this was one of the major issues we encountered: constantly shifting requirements due to BAU changes. It made PM & scheduling a real challenge. Thanks to @hhafez, whose experience is nicely put as "similar but slightly different" and who has had a reasonably satisfactory experience of an automatic code migration from Ada to Java. Thanks @Didier for contributing; I'm still studying your approach and if I have any Q's I'll drop you a line.

    Read the article

  • Android Layout: Display as much ImageViews as possible without scrolling

    - by Toni4780
    I am working on an app which should display several same-size images on the screen, but it should display only as many images as possible without offering scrolling. E.g. on a "big" tablet it could display 10x10 ImageViews (the screen is large, so there is much space for pictures). On a "big" phone there might be enough space to display 6x6 ImageViews, so it should only display a 6x6 array of images. On a small phone there is probably only space for 4x4 ImageViews, so it should only display this. How can I do this in Android? I know about "layout-large", ... but if I make a special fixed XML layout for a "large" device, it would not fit all devices correctly. E.g. a Galaxy Nexus is a "normal" device and so is a Nexus One, but there would be space for at least one or two more ImageView rows on a Galaxy Nexus than on a Nexus One. So do I have to measure in code somehow how big the resolution is and display some TableRows accordingly? Or is there a special way I can manage this?
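
    One hedged approach is to measure the screen at runtime and derive the grid size from a fixed thumbnail size, rather than relying on layout buckets. A minimal sketch, assuming square thumbnails of roughly 96dp and a GridView in the layout (both are assumptions, not from the question; ImageAdapter is a hypothetical adapter):

        DisplayMetrics metrics = getResources().getDisplayMetrics();
        int thumbPx = (int) (96 * metrics.density + 0.5f);        // assumed thumbnail edge in px
        int columns = Math.max(1, metrics.widthPixels  / thumbPx);
        int rows    = Math.max(1, metrics.heightPixels / thumbPx);

        GridView grid = (GridView) findViewById(R.id.grid);        // hypothetical GridView id
        grid.setNumColumns(columns);
        // Bind only as many images as fit, so the grid never needs to scroll.
        grid.setAdapter(new ImageAdapter(this, columns * rows));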

    Read the article

  • Perl XML SAX parser emulating XML::Simple record for record

    - by DVK
    Short Q summary: I am looking for a fast XML parser (most likely a wrapper around some standard SAX parser) which will produce per-record data structures 100% identical to those produced by XML::Simple.

    Details: We have a large code infrastructure which depends on processing records one-by-one and expects each record to be a data structure in the format produced by XML::Simple, since it has always used XML::Simple, since the early Jurassic era. An example simple XML is:

        <root>
          <rec><f1>v1</f1><f2>v2</f2></rec>
          <rec><f1>v1b</f1><f2>v2b</f2></rec>
          <rec><f1>v1c</f1><f2>v2c</f2></rec>
        </root>

    And example rough code is:

        sub process_record {
            my ($obj, $record_hash) = @_;
            # do_stuff
        }

        my $records = XML::Simple->XMLin(@args)->{root};
        foreach my $record (@$records) { $obj->process_record($record) };

    As everyone knows, XML::Simple is, well, simple. And more importantly, it is very slow and a memory hog, due to being a DOM parser and needing to build/store 100% of the data in memory. So it's not the best tool for parsing an XML file consisting of a large number of small records record-by-record. However, re-writing the entire code (which consists of a large number of "process_record"-like methods) to work with a standard SAX parser seems like a big task not worth the resources, even at the cost of living with XML::Simple. What I'm looking for is an existing module, probably based on a SAX parser (or anything fast with a small memory footprint), which can be used to produce $record hashrefs one by one based on the XML pictured above, hashrefs that can be passed to $obj->process_record($record) and be 100% identical to what XML::Simple's hashrefs would have been.
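
    One candidate worth evaluating (an assumption on my part, not something from the question) is XML::Twig: it parses as a stream, fires a handler per <rec>, and its simplify() method is designed to approximate XML::Simple's XMLin() output, though whether the result is 100% field-for-field identical (ForceArray, KeyAttr and friends) would need to be verified against the existing code. A rough sketch, with 'records.xml' as a placeholder file name:

        use strict;
        use warnings;
        use XML::Twig;

        my $twig = XML::Twig->new(
            twig_handlers => {
                rec => sub {
                    my ($t, $rec) = @_;
                    # simplify() mimics XML::Simple's XMLin() output for this element
                    my $record_hash = $rec->simplify;
                    $obj->process_record($record_hash);
                    $t->purge;    # discard what has been parsed so far, keeping memory flat
                },
            },
        );
        $twig->parsefile('records.xml');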

    Read the article

  • My OpenCL kernel is slower on faster hardware.. But why?

    - by matdumsa
    Hi folks, as I was finishing coding my project for a multicore programming class I came upon something really weird I wanted to discuss with you. We were asked to create any program that would show significant improvement when programmed for a multi-core platform. I decided to try and code something on the GPU to try out OpenCL. I chose the matrix convolution problem since I'm quite familiar with it (I've parallelized it before with open_mpi with great speedup for large images). So here it is: I select a large GIF file (2.5 MB) [2816x2112], I run the sequential version (original code), and I get an average of 15.3 seconds. I then run the new OpenCL version I just wrote on my MBP's integrated GeForce 9400M and I get timings of 1.26 s on average. So far so good, it's a speedup of 12x!! But now I go into my energy saver panel to turn on "Graphic Performance Mode". That mode turns off the GeForce 9400M and turns on the GeForce 9600M GT my system has. Apple says this card is twice as fast as the integrated one. Guess what: my timings using the kick-ass graphics card are 3.2 seconds on average. My 9600M GT seems to be more than two times slower than the 9400M. For those of you that are OpenCL inclined: I copy all data to remote buffers before starting, so the actual computation doesn't require a round trip to main RAM. Also, I let OpenCL determine the optimal local work-size, as I've read they've done a pretty good implementation at figuring that parameter out. Anyone has a clue?

    edit: full source code with makefiles here: http://www.mathieusavard.info/convolution.zip

        cd gimage
        make
        cd ../clconvolute
        make

    Put a large input.gif in clconvolute and run it to see the results.

    Read the article

  • calloc v/s malloc and time efficiency

    - by yCalleecharan
    Hi, I've read with interest the post "c difference between malloc and calloc". I'm using malloc in my code and would like to know what difference I'll have using calloc instead. My present (pseudo)code with malloc:

        Scenario 1
        int main() {
            allocate large arrays with malloc
            INITIALIZE ALL ARRAY ELEMENTS TO ZERO
            for loop   // say 1000 times
                do something and write results to arrays
            end for loop
            FREE ARRAYS with free command
        }   // end main

    If I use calloc instead of malloc, then I'll have:

        Scenario 2
        int main() {
            for loop   // say 1000 times
                ALLOCATION OF ARRAYS WITH CALLOC
                do something and write results to arrays
                FREE ARRAYS with free command
            end for loop
        }   // end main

    I have three questions. Which of the scenarios is more efficient if the arrays are very large? Which of the scenarios will be more time efficient if the arrays are very large? In both scenarios, I'm just writing to the arrays, in the sense that for any given iteration in the for loop I write each array sequentially from the first element to the last element. The important question: if I'm using malloc as in scenario 1, is it necessary to initialize the elements to zero? Say with malloc I have array z = [garbage1, garbage2, garbage3]. For each iteration, I write the elements sequentially, i.e. in the first iteration I get z = [some_result, garbage2, garbage3], in the second iteration I get z = [some_result, another_result, garbage3], and so on. Do I then specifically need to initialize my arrays after malloc?
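
    On the last point, a short sketch of scenario 1 without the zero-initialization step: as long as every element is written before it is read in each iteration, the initial garbage is never observed, so neither calloc nor an explicit zeroing pass is required. The array length and the computation below are placeholders:

        #include <stdlib.h>

        #define N 2000000                        /* illustrative array length */

        int main(void)
        {
            double *z = malloc(N * sizeof *z);   /* contents are indeterminate */
            if (z == NULL)
                return 1;

            for (int iter = 0; iter < 1000; ++iter) {
                /* every element is overwritten before any element is read,
                   so the leftover garbage from malloc never matters */
                for (size_t i = 0; i < N; ++i)
                    z[i] = 0.5 * (double)i;      /* stand-in for the real computation */
                /* ... use z to compute this iteration's result ... */
            }

            free(z);
            return 0;
        }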

    Read the article

  • compact Number formatting behavior in Java (automatically switch between decimal and scientific notation)

    - by kostmo
    I am looking for a way to format a floating point number dynamically in either standard decimal format or scientific notation, depending on the value of the number. For moderate magnitudes, the number should be formatted as a decimal with trailing zeros suppressed. If the floating point number is equal to an integral value, the decimal point should also be suppressed. For extreme magnitudes (very small or very large), the number should be expressed in scientific notation. Alternately stated: if the number of characters in the expression as standard decimal notation exceeds a certain threshold, switch to scientific notation. I should have control over the maximum number of digits of precision, but I don't want trailing zeros appended to express the minimum precision; all trailing zeros should be suppressed. Basically, it should optimize for compactness and readability.

        2.80000            -> 2.8
        765.000000         -> 765
        0.0073943162953    -> 0.00739432  (limit digits of precision—to 6 in this case)
        0.0000073943162953 -> 7.39432E-6  (switch to scientific notation if the magnitude is small enough—less than 1E-5 in this case)
        7394316295300000   -> 7.39432E+15 (switch to scientific notation if the magnitude is large enough—for example, when greater than 1E+10)
        0.0000073900000000 -> 7.39E-6     (strip trailing zeros from the significand in scientific notation)
        0.000007299998344  -> 7.3E-6      (rounding from the 6-digit precision limit causes this number to have trailing zeros, which are stripped)

    Here's what I've found so far: The .toString() method of the Number class does most of what I want, except it doesn't upconvert to integer representation when possible, and it will not express large integral magnitudes in scientific notation. Also, I'm not sure how to adjust the precision. The "%G" format string to the String.format(...) function allows me to express numbers in scientific notation with adjustable precision, but does not strip trailing zeros. I'm wondering if there's already some library function out there that meets these criteria. I guess the only stumbling block for writing this myself is having to strip the trailing zeros from the significand in scientific notation produced by %G.
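
    I'm not aware of a single library call that does all of this, but here is a rough sketch of rolling it by hand with BigDecimal. The magnitude thresholds (1E-5 and 1E10) and the 6-digit precision are the ones from the examples above, the trailing-zero trimming of the %E significand is done with a plain string replace, and the exponent comes out zero-padded (E-06 rather than E-6), so edge cases would need testing:

        import java.math.BigDecimal;
        import java.math.MathContext;
        import java.math.RoundingMode;

        public final class CompactFormat {
            /** Formats value with at most sigDigits significant digits, switching to
             *  scientific notation outside the "moderate" magnitude range. */
            public static String format(double value, int sigDigits) {
                if (value == 0.0) return "0";
                BigDecimal rounded = new BigDecimal(value,
                        new MathContext(sigDigits, RoundingMode.HALF_UP)).stripTrailingZeros();
                double magnitude = Math.abs(value);
                if (magnitude < 1e-5 || magnitude >= 1e10) {
                    String s = String.format("%." + (sigDigits - 1) + "E", rounded);
                    // strip trailing zeros (and a dangling '.') from the significand
                    return s.replaceFirst("0+E", "E").replaceFirst("\\.E", "E");
                }
                return rounded.toPlainString();   // plain decimal; trailing zeros already gone
            }
        }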

    Read the article

  • Does query plan optimizer works well with joined/filtered table-valued functions?

    - by smoothdeveloper
    In SQL Server 2005, I'm using table-valued functions as a convenient way to perform arbitrary aggregation on subsets of data from large tables (passing a date range or similar parameters). I'm using these inside larger queries as joined computations, and I'm wondering whether the query plan optimizer works well with them in every condition, or whether I'm better off unnesting such computations in my larger queries. Does the query plan optimizer unnest table-valued functions if it makes sense? If it doesn't, what do you recommend to avoid the code duplication that would occur by manually unnesting them? If it does, how do you identify that from the execution plan? Code sample:

        create table dbo.customers (
            [key] uniqueidentifier
          , constraint pk_dbo_customers primary key ([key])
        )
        go

        /* assume large amount of data */
        create table dbo.point_of_sales (
            [key] uniqueidentifier
          , customer_key uniqueidentifier
          , constraint pk_dbo_point_of_sales primary key ([key])
        )
        go

        create table dbo.product_ranges (
            [key] uniqueidentifier
          , constraint pk_dbo_product_ranges primary key ([key])
        )
        go

        create table dbo.products (
            [key] uniqueidentifier
          , product_range_key uniqueidentifier
          , release_date datetime
          , constraint pk_dbo_products primary key ([key])
          , constraint fk_dbo_products_product_range_key
                foreign key (product_range_key) references dbo.product_ranges ([key])
        )
        go

        /* assume large amount of data */
        create table dbo.sales_history (
            [key] uniqueidentifier
          , product_key uniqueidentifier
          , point_of_sale_key uniqueidentifier
          , accounting_date datetime
          , amount money
          , quantity int
          , constraint pk_dbo_sales_history primary key ([key])
          , constraint fk_dbo_sales_history_product_key
                foreign key (product_key) references dbo.products ([key])
          , constraint fk_dbo_sales_history_point_of_sale_key
                foreign key (point_of_sale_key) references dbo.point_of_sales ([key])
        )
        go

        create function dbo.f_sales_history_..snip.._date_range
        (
            @accountingdatelowerbound datetime,
            @accountingdateupperbound datetime
        )
        returns table
        as
        return
        (
            select pos.customer_key
                 , sh.product_key
                 , sum(sh.amount) amount
                 , sum(sh.quantity) quantity
            from dbo.point_of_sales pos
            inner join dbo.sales_history sh on sh.point_of_sale_key = pos.[key]
            where sh.accounting_date between @accountingdatelowerbound and @accountingdateupperbound
            group by pos.customer_key
                   , sh.product_key
        )
        go

        -- TODO: insert some data

        -- this is a table containing a selection of product ranges
        declare @selectedproductranges table([key] uniqueidentifier)

        -- this is a table containing a selection of customers
        declare @selectedcustomers table([key] uniqueidentifier)

        declare @low datetime
              , @up datetime

        -- TODO: set top query parameters

        select saleshistory.customer_key
             , saleshistory.product_key
             , saleshistory.amount
             , saleshistory.quantity
        from dbo.products p
        inner join @selectedproductranges productrangeselection
            on p.product_range_key = productrangeselection.[key]
        inner join @selectedcustomers customerselection
            on 1 = 1
        inner join dbo.f_sales_history_..snip.._date_range(@low, @up) saleshistory
            on saleshistory.product_key = p.[key]
            and saleshistory.customer_key = customerselection.[key]

    I hope the sample makes sense. Much thanks for your help!
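
    For what it's worth, SQL Server treats the two kinds of TVFs very differently: an inline TVF (a single RETURN (SELECT ...), as above) is expanded into the calling query's plan much like a view, so in the execution plan you see its underlying tables and joins folded into the outer query rather than a single function node, while a multi-statement TVF stays an opaque black box with no useful statistics. A hedged sketch of the multi-statement form, shown only for contrast (not a recommendation, and the function name is made up):

        create function dbo.f_sales_history_multi (
            @accountingdatelowerbound datetime,
            @accountingdateupperbound datetime
        )
        returns @result table (
            customer_key uniqueidentifier,
            product_key  uniqueidentifier,
            amount       money,
            quantity     int
        )
        as
        begin
            -- populated procedurally; the optimizer cannot see inside this body
            insert into @result
            select pos.customer_key, sh.product_key, sum(sh.amount), sum(sh.quantity)
            from dbo.point_of_sales pos
            inner join dbo.sales_history sh on sh.point_of_sale_key = pos.[key]
            where sh.accounting_date between @accountingdatelowerbound and @accountingdateupperbound
            group by pos.customer_key, sh.product_key
            return
        end
        go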

    Read the article

  • Delphi Shell IExtractIcon usage and result

    - by Roy M Klever
    What I do: I try to extract a thumbnail using IExtractImage; if that fails I try to extract icons using IExtractIcon, to get the maximum icon size, but IExtractIcon gives strange results. The problem is that I tried to use a method that extracts icons from an image list, but if there is no large icon (256x256) it renders the smaller icon at the top-left position of the icon, and that does not look good. That is why I am trying to use IExtractIcon instead. But icons that show up as 256x256 icons in my image-list extraction method report icon sizes of 33 (large) and 16 (small). So how do I check whether a large (256x256) icon exists? If you need more info I can provide some sample code.

        if PThumb.Image = nil then
        begin
          OleCheck(ShellFolder.ParseDisplayName(0, nil, StringToOleStr(PThumb.Name), Eaten, PIDL, Atribute));
          ShellFolder.GetUIObjectOf(0, 1, PIDL, IExtractIcon, nil, XtractIcon);
          CoTaskMemFree(PIDL);
          bool := False;
          if Assigned(XtractIcon) then
          begin
            GetLocationRes := XtractIcon.GetIconLocation(GIL_FORSHELL, @Buf, sizeof(Buf), IIdx, IFlags);
            if (GetLocationRes = NOERROR) or (GetLocationRes = E_PENDING) then
            begin
              Bmp := TBitmap.Create;
              try
                OleCheck(XtractIcon.Extract(@Buf, IIdx, LIcon, SIcon, 32 + (16 shl 16)));
                Done := False;

    Roy M Klever

    Read the article

  • Fastest way to generate delimited string from 1d numpy array

    - by Abiel
    I have a program which needs to turn many large one-dimensional numpy arrays of floats into delimited strings. I am finding this operation quite slow relative to the mathematical operations in my program and am wondering if there is a way to speed it up. For example, consider the following loop, which takes 100,000 random numbers in a numpy array and joins each array into a comma-delimited string.

        import numpy as np
        x = np.random.randn(100000)
        for i in range(100):
            ",".join(map(str, x))

    This loop takes about 20 seconds to complete (total, not each cycle). In contrast, consider that 100 cycles of something like elementwise multiplication (x*x) would take less than 1/10 of a second to complete. Clearly the string join operation creates a large performance bottleneck; in my actual application it will dominate total runtime. This makes me wonder: is there a faster way than ",".join(map(str, x))? Since map() is where almost all the processing time occurs, this comes down to the question of whether there is a faster way to convert a very large number of numbers to strings.
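
    One thing that may be worth benchmarking (an assumption, not a tested claim) is letting numpy do the per-element formatting instead of calling Python's str() once per value, for example via numpy.savetxt into an in-memory buffer on a reasonably recent numpy; the "%.12g" format is only a rough stand-in for str()'s precision:

        import io
        import numpy as np

        x = np.random.randn(100000)

        # Format the whole array in one call; reshape to a single row so the
        # delimiter joins every value onto one comma-separated line.
        buf = io.BytesIO()
        np.savetxt(buf, x.reshape(1, -1), fmt="%.12g", delimiter=",")
        line = buf.getvalue().decode().rstrip("\n")

    Whether this beats ",".join(map(str, x)) would need to be measured on the target machine; the gain, if any, comes from the formatting loop running in C rather than in Python.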

    Read the article

  • Appropriate wx.Sizer(s) for the job?

    - by MetaHyperBolic
    I have a space in which I would like certain elements (represented here by A, B, D, and G) to each be in its own "corner" of the design. The corners ought to line up as if each of the four elements was repelling the others; a rectangle. This is to be contained within an unresizable panel. I will have several similar panels and want to keep the location of the elements as identical as possible. (I needed something a little more complex than a wx.Wizard, but with the same general idea.)

        AAAAAAAAAA                      BB
        CCCCCCCCCCCCCCCCCC
        CCCCCCCCCCCCCCCCCC
        CCCCCCCCCCCCCCCC
        CCCCCCCCCCCCCC
        CCCCCCCCCCCCCCCCCC
        CCCCCCCCCCCCCCCC
        CCCCCCCCCCCCCCCCCC
        DDD EEE FFF                    GGG

    A represents a text in a large font. B represents a numeric progress meter (e.g. "1 of 7") in a small font. C represents a large block of text. D, E, F, and G are buttons. The G button is separated from the others for functionality. I have attempted nested wx.BoxSizers (horizontal boxes inside of one vertical box) without luck. My first problem with wx.BoxSizer is that the .SetMinSize on my last row has not been honored. The second problem is that I have no idea how to make the G button "take up space" without growing comically large, or how I can jam it up against the right edge and bottom edge. I have tried to use a wx.GridBagSizer, but ran into entirely different issues. After plowing through the various online tutorials and wxPython in Action, I'm a little frustrated. The relevant forums appear to see activity once every two weeks. "Playing around with it" has gotten me nowhere; I feel as if I am trying to smooth out a hump in ill-laid carpet.
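
    A hedged wx.GridBagSizer sketch of that shape: three rows and four columns, with the middle row and last column marked growable so A and B sit in the top corners, C absorbs the middle, and the button row sits at the bottom with G pushed toward the right edge. The widget names (label_a, meter_b, text_c, btn_d...btn_g, panel) are placeholders, not from the question:

        sizer = wx.GridBagSizer(vgap=4, hgap=4)

        sizer.Add(label_a, pos=(0, 0), flag=wx.ALIGN_LEFT)                     # A, top-left
        sizer.Add(meter_b, pos=(0, 3), flag=wx.ALIGN_RIGHT)                    # B, top-right
        sizer.Add(text_c,  pos=(1, 0), span=(1, 4), flag=wx.EXPAND)            # C fills the middle
        sizer.Add(btn_d,   pos=(2, 0))
        sizer.Add(btn_e,   pos=(2, 1))
        sizer.Add(btn_f,   pos=(2, 2))
        sizer.Add(btn_g,   pos=(2, 3), flag=wx.ALIGN_RIGHT | wx.ALIGN_BOTTOM)  # G, bottom-right

        sizer.AddGrowableRow(1)   # spare height goes to C's row
        sizer.AddGrowableCol(3)   # spare width goes to the last column, pushing B and G right
        panel.SetSizer(sizer)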

    Read the article

  • With Go, how to append unknown number of byte into a vector and get a slice of bytes?

    - by Stephen Hsu
    I'm trying to encode a large number to a list of bytes (uint8 in Go). The number of bytes is unknown, so I'd like to use vector. But Go doesn't provide a vector of byte; what can I do? And is it possible to get a slice of such a byte vector? I intend to implement data compression. Instead of storing small and large numbers with the same number of bytes, I'm implementing a variable-byte scheme that uses fewer bytes for small numbers and more bytes for large numbers. My code does not compile (invalid type assertion):

        package main

        import (
            //"fmt"
            "container/vector"
        )

        func vbEncodeNumber(n uint) []byte {
            bytes := new(vector.Vector)
            for {
                bytes.Push(n % 128)
                if n < 128 {
                    break
                }
                n /= 128
            }
            bytes.Set(bytes.Len()-1, bytes.Last().(byte)+byte(128))
            return bytes.Data().([]byte) // <- invalid type assertion
        }

        func main() { vbEncodeNumber(10000) }

    I wish to write a lot of such encoded numbers into a binary file, so I wish the func could return a byte array. I haven't found a code example on vector.
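
    For what it's worth, on current Go the idiomatic container here is just a byte slice grown with append, which sidesteps the type assertions entirely (container/vector dates from pre-1.0 Go and has since been removed). A sketch of the same algorithm:

        package main

        import "fmt"

        // vbEncodeNumber mirrors the algorithm above, but accumulates into a
        // plain []byte; append grows the slice as needed, and the result is
        // already a slice of bytes.
        func vbEncodeNumber(n uint) []byte {
            out := []byte{}
            for {
                out = append(out, byte(n%128))
                if n < 128 {
                    break
                }
                n /= 128
            }
            out[len(out)-1] += 128 // flag the final (most significant) byte
            return out
        }

        func main() {
            fmt.Println(vbEncodeNumber(10000)) // prints [16 206]
        }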

    Read the article

  • Floating Point Arithmetic - Modulo Operator on Double Type

    - by CrimsonX
    So I'm trying to figure out why the modulo operator is returning such a large unusual value. If I have the code:

        double result = 1.0d % 0.1d;

    it will give a result of 0.09999999999999995. I would expect a value of 0. Note this problem doesn't exist using the division operator:

        double result = 1.0d / 0.1d;

    will give a result of 10.0, meaning that the remainder should be 0. Let me be clear: I'm not surprised that an error exists; I'm surprised that the error is so darn large compared to the numbers at play. 0.0999 ~= 0.1, and 0.1 is on the same order of magnitude as 0.1d and only one order of magnitude away from 1.0d. It's not like you can compare it to a double.Epsilon, or say "it's equal if the difference is < 0.00001". I've read up on this topic on StackOverflow, in the following posts one two three, amongst others. Can anyone explain why this error is so large? And any suggestions to avoid running into this problem in the future? (I know I could use decimal instead, but I'm concerned about the performance of that.)
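
    The short version: 0.1 has no exact binary representation, and the stored double is slightly larger than 0.1, so ten of them overshoot 1.0; the modulo therefore stops after nine steps with a remainder that is almost a whole divisor rather than almost zero. A hedged sketch of one common workaround (the epsilon value is illustrative and should be scaled to your data):

        using System;

        class ModuloDemo
        {
            static void Main()
            {
                Console.WriteLine(1.0d % 0.1d);    // 0.09999999999999995

                // Treat remainders within an epsilon of 0 or of the divisor as zero.
                double divisor = 0.1d;
                double eps = 1e-9;                 // illustrative tolerance
                double r = 1.0d % divisor;
                if (r < eps || divisor - r < eps)
                    r = 0.0;
                Console.WriteLine(r);              // 0
            }
        }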

    Read the article

  • Git-Based Source Control in the Enterprise: Suggested Tools and Practices?

    - by Bob Murphy
    I use git for personal projects and think it's great. It's fast, flexible, powerful, and works great for remote development. But now it's mandated at work and, frankly, we're having problems. Out of the box, git doesn't seem to work well for centralized development in a large (20+ developer) organization with developers of varying abilities and levels of git sophistication - especially compared with other source-control systems like Perforce or Subversion, which are aimed at that kind of environment. (Yes, I know, Linus never intended it for that.) But - for political reasons - we're stuck with git, even if it sucks for what we're trying to do with it. Here are some of the things we're seeing:

    - The GUI tools aren't mature.
    - Using the command line tools, it's far too easy to screw up a merge and obliterate someone else's changes.
    - It doesn't offer per-user repository permissions beyond global read-only or read-write privileges.
    - If you have permission to ANY part of a repository, you can do that same thing to EVERY part of the repository, so you can't do something like make a small-group tracking branch on the central server that other people can't mess with.
    - Workflows other than "anything goes" or "benevolent dictator" are hard to encourage, let alone enforce.
    - It's not clear whether it's better to use a single big repository (which lets everybody mess with everything) or lots of per-component repositories (which make for headaches trying to synchronize versions).
    - With multiple repositories, it's also not clear how to replicate all the sources someone else has by pulling from the central repository, or to do something like get everything as of 4:30 yesterday afternoon.

    However, I've heard that people are using git successfully in large development organizations. If you're in that situation - or if you generally have tools, tips and tricks for making it easier and more productive to use git in a large organization where some folks are not command line fans - I'd love to hear what you have to suggest. BTW, I've asked a version of this question already on LinkedIn, and got no real answers but lots of "gosh, I'd love to know that too!"

    Read the article

  • Alternative or succesor to GDBM

    - by Anon Guy
    We have a GDBM key-value database as the backend to a load-balanced, web-facing application that is implemented in C++. The data served by the application has grown very large, so our admins have moved the GDBM files from "local" storage (on the webservers, or very close by) to a large, shared, remote, NFS-mounted filesystem. This has affected performance. Our performance tests (in a test environment) show page load times jumping from hundreds of milliseconds (for local disk) to several seconds (over NFS, local network), and sometimes getting as high as 30 seconds. I believe a large part of the problem is that the application makes lots of random reads from the GDBM files, and that these are slow over NFS; this will be even worse in production (where the front-end and back-end have even more network hardware between them) and as our database gets even bigger. While this is not a critical application, I would like to improve performance, and have some resources available, including application developer time and Unix admins. My main constraint is time: I only have the resources for a few weeks. As I see it, my options are:

    - Improve NFS performance by tuning parameters. My instinct is that we won't get much out of this, but I have been wrong before, and I don't really know very much about NFS tuning.
    - Move to a different key-value database, such as memcachedb or Tokyo Cabinet.
    - Replace NFS with some other protocol (iSCSI has been mentioned, but I am not familiar with it).

    How should I approach this problem?

    Read the article

  • CSS call from code behind not working

    - by SmartestVEGA
    I have the following entries in the CSS file:

        a.intervalLinks {
            font-size: 11px;
            font-weight: normal;
            color: #003399;
            text-decoration: underline;
            margin: 0px 16px 0px 0px;
        }
        a.intervalLinks:link { text-decoration: underline; }
        a.intervalLinks:hover { text-decoration: none; }
        a.intervalLinks:visited { text-decoration: underline; }
        a.selectedIntervalLink {
            font-size: 12px;
            font-weight: bold;
            color: #003399;
            text-decoration: none;
            margin: 0px 16px 0px 0px;
        }
        a.intervalLinks:active {
            text-decoration: underline;
            font-size: large;
        }

    Whenever I click one of the links (not shown) embedded in the web page, I can see the change from the a.intervalLinks:active rule (the font of the link becomes large), but after clicking, the page refreshes and the change goes away. I want to keep that change on the link permanently, even across a page refresh. I understood that this can only be achieved through the ASP.NET code-behind. The following code should work, but unfortunately it does not:

        protected override void OnInit(EventArgs e)
        {
            rptDeptList.ItemDataBound += new RepeaterItemEventHandler(rptDeptList_ItemDataBound);
        }

        void rptDeptList_ItemDataBound(object sender, RepeaterItemEventArgs e)
        {
            if (e.Item.DataItem == null) return;
            LinkButton btn = (LinkButton)e.Item.FindControl("LinkButton1");
            btn.Attributes.Add("class", "intervalLinks");
        }

    The current HTML code for the links is shown below:

        <ItemTemplate>
          <div class='dtilsDropListTxt'>
            <div class='rightArrow'></div>
            <asp:LinkButton ID="LinkButton1" runat="server"
                Text=<%#DataBinder.Eval(Container.DataItem, "WORK_AREA")%>
                CssClass="intervalLinks" OnClick="LinkButton1_Click">
            </asp:LinkButton>
          </div>
        </ItemTemplate>

    Could anyone help?
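
    Part of the issue is that :active only applies while the mouse button is held down, so any styling applied that way is gone after the postback anyway. A hedged sketch of persisting the selection server-side instead: remember which link was clicked (ViewState here, purely as an illustration) and re-apply the CSS class every time the repeater binds. This assumes the repeater is re-bound on the postback:

        protected void LinkButton1_Click(object sender, EventArgs e)
        {
            ViewState["SelectedWorkArea"] = ((LinkButton)sender).Text;
            // ... existing click handling, then re-bind rptDeptList ...
        }

        void rptDeptList_ItemDataBound(object sender, RepeaterItemEventArgs e)
        {
            if (e.Item.DataItem == null) return;
            LinkButton btn = (LinkButton)e.Item.FindControl("LinkButton1");
            bool selected = Equals(ViewState["SelectedWorkArea"], btn.Text);
            btn.CssClass = selected ? "selectedIntervalLink" : "intervalLinks";
        }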

    Read the article

  • mmap only needed pages of kernel buffer to user space

    - by axeoth
    See also this answer: http://stackoverflow.com/a/10770582/1284631 I need something similar, but without having to allocate a buffer: the buffer is large, in theory, but the user space program only needs to access some parts of it (it mocks some registers of a piece of hardware). As I cannot allocate such a large buffer with vmalloc_user() (32-bit kernel, in an embedded environment, no swap...), I followed the same approach as in the quoted answer, trying to allocate only those pages that are really requested by user space. So: I use a my_mmap() function for the device file (actually, it is the .mmap field of a struct uio_info) to set up the fields of the vma; then, in the vm_area_struct's .fault handler (named my_fault()), I should return a page. Except that in the my_fault() method I cannot obtain a page through:

        vmf->page = vmalloc_to_page(my_buf + (vmf->pgoff << PAGE_SHIFT));

    since there is no allocated buffer:

        my_buf = vmalloc_user(MY_BUF_SIZE);

    fails with "allocation failed: out of vmalloc space - use vmalloc= to increase size." (and there is no room or swap to increase that vmalloc= parameter). So, I would need to get a page from the kernel and fill the vmf->page field. How do I allocate a page (I assume that the offset of the page is known, as it is vmf->pgoff)? What base memory should I use instead of my_buf? PS: I also set up vma->flags |= VM_NORESERVE; (in my_mmap()), but I am not sure it helps. Is there any vmalloc_user_unreserved()-like function (let's say, lazy allocation)? Also, writing 1 to /proc/sys/vm/overcommit_memory and large values (e.g. 500) to /proc/sys/vm/overcommit_ratio before trying my_buf = vmalloc_user(<large_size>) didn't work.

    Read the article

  • Problems with Threading in Python 2.5, KeyError: 51, Help debugging?

    - by vignesh-k
    I have a Python script which runs a particular script a large number of times (for Monte Carlo purposes), and the way I have scripted it is this: I queue up the script the desired number of times it should be run, then I spawn threads, and each thread runs the script once and again when it's done. Once the script in a particular thread is finished, the output is written to a file by accessing a lock (so my guess was that only one thread accesses the lock at a given time). Once the lock is released by one thread, the next thread accesses it, adds its output to the previously written file, and rewrites it. I am not facing a problem when the number of iterations is small, like 10 or 20, but when it's large, like 50 or 150, Python returns a KeyError: 51 telling me an element doesn't exist, and the error it points to is within the lock, which puzzles me, since only one thread should access the lock at once and I do not expect an error. This is the class I use:

        class errorclass(threading.Thread):

            def __init__(self, queue):
                self.__queue = queue
                threading.Thread.__init__(self)

            def run(self):
                while 1:
                    item = self.__queue.get()
                    if item is None:
                        break
                    result = myfunction()
                    lock = threading.RLock()
                    lock.acquire()
                    # ADD entries from current thread to entries in file and REWRITE FILE
                    lock.release()

        queue = Queue.Queue()

        for i in range(threads):
            errorclass(queue).start()

        for i in range(desired_iterations):
            queue.put(i)

        for i in range(threads):
            queue.put(None)

    Python returns KeyError: 51 for a large number of desired iterations during the add/write-file operation after the lock is acquired. I am wondering if this is the correct way to use the lock, since every thread performs its own lock operation rather than all threads accessing one shared lock? What would be the way to rectify this?
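
    One likely culprit (a guess from the snippet, not a confirmed diagnosis) is that each pass through run() creates a brand-new RLock, so the threads never actually exclude each other; the file is then read, modified, and rewritten concurrently, which can easily surface as a missing key. A sketch with a single module-level lock shared by all workers (myfunction and the file-merge step are placeholders, as in the original):

        import threading
        import Queue            # Python 2.5, as in the question

        write_lock = threading.Lock()   # one lock shared by every thread

        class errorclass(threading.Thread):

            def __init__(self, queue):
                self.__queue = queue
                threading.Thread.__init__(self)

            def run(self):
                while 1:
                    item = self.__queue.get()
                    if item is None:
                        break
                    result = myfunction()
                    write_lock.acquire()
                    try:
                        pass    # add this thread's entries to the file and rewrite it
                    finally:
                        write_lock.release()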

    Read the article

  • Application Code Redesign to reduce no. of Database Hits from Performance Perspective

    - by Rachel
    Scenario: I want to parse a large CSV file and insert its data into the database; the CSV file has approximately 100K rows of data. Currently I am using fgetcsv to parse through the file row by row and insert data into the database, so right now I am hitting the database once for each line present in the CSV file. The database hit count is therefore 100K, which is not good from a performance point of view. Current code:

        public function initiateInserts()
        {
            //Open Large CSV File(min 100K rows) for parsing.
            $this->fin = fopen($file,'r') or die('Cannot open file');

            //Parsing Large CSV file to get data and initiate insertion into schema.
            while (($data = fgetcsv($this->fin, 5000, ";")) !== FALSE)
            {
                $query = "INSERT INTO dt_table (id, code, connectid, connectcode)
                          VALUES (:id, :code, :connectid, :connectcode)";
                $stmt = $this->prepare($query);
                // Then, for each line : bind the parameters
                $stmt->bindValue(':id', $data[0], PDO::PARAM_INT);
                $stmt->bindValue(':code', $data[1], PDO::PARAM_INT);
                $stmt->bindValue(':connectid', $data[2], PDO::PARAM_INT);
                $stmt->bindValue(':connectcode', $data[3], PDO::PARAM_INT);
                // Execute the statement
                $stmt->execute();
                $this->checkForErrors($stmt);
            }
        }

    I am looking for a way wherein, instead of hitting the database for every row of data, I can prepare the query, hit the database once, and populate it with the inserts. Any suggestions!!!

    Note: this is the exact sample code that I am using, but the CSV file has more fields, not only id, code, connectid and connectcode; I wanted to make sure that I am able to explain the logic, and so have used this sample code here. Thanks !!!
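
    A hedged sketch of batching: accumulate rows and issue one multi-row INSERT per batch, so 100K rows become a few hundred round trips instead of 100K. The batch size and the flushBatch() helper are illustrative, not part of the original code; if the target is MySQL, the statement has to stay under max_allowed_packet, and LOAD DATA INFILE is another route worth considering:

        public function initiateInserts()
        {
            $this->fin = fopen($file, 'r') or die('Cannot open file');

            $batchSize = 500;                   // illustrative batch size
            $rows = array();

            while (($data = fgetcsv($this->fin, 5000, ";")) !== FALSE) {
                $rows[] = $data;
                if (count($rows) === $batchSize) {
                    $this->flushBatch($rows);   // hypothetical helper, shown below
                    $rows = array();
                }
            }
            if ($rows) {
                $this->flushBatch($rows);       // leftover rows from the last partial batch
            }
        }

        // Builds one multi-row INSERT: ... VALUES (?,?,?,?), (?,?,?,?), ...
        private function flushBatch(array $rows)
        {
            $placeholders = implode(', ', array_fill(0, count($rows), '(?, ?, ?, ?)'));
            $query = "INSERT INTO dt_table (id, code, connectid, connectcode) VALUES " . $placeholders;
            $stmt = $this->prepare($query);

            $params = array();
            foreach ($rows as $row) {
                $params[] = $row[0];
                $params[] = $row[1];
                $params[] = $row[2];
                $params[] = $row[3];
            }
            $stmt->execute($params);
            $this->checkForErrors($stmt);
        }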

    Read the article

  • How to handle 30k files in a project which requires them?

    - by Jeremiah
    Visual Studio 2010 RC - Silverlight Application. We have a library of images that we need to have access to. They are given to us by a vendor (through an installer) and they are not in a database; they are files in a folder (a very large monster of a folder). We do not control when the images change, so the vendor needs to be able to override them individually. We get updates frequently enough from this vendor to say that these images change "randomly" and without our (programmers') knowledge. The problem: I don't want 30K images in SVN. Heck, I don't even want to imagine them in my Solution. However, our application requires them in order to run properly. So, our build/staging servers need access to these images (we have two build servers). The question: how would you handle it when your application will not work as specified without access to each of 30k images and you don't control when those images change? I do not want a crazy large SVN repository. Because I don't know when any of these images change, I really don't want them in my solution (and definitely do not want a large solution, either). I also don't want a bunch of manual steps to perform every time these images change. Our mantra, up to this point, has always been: any developer can download from SVN, compile, and run our app. These images are going to kill that mantra. I'm tempted to make a WCF service that will return images if they exist and a dummy image if they don't. This way all dev boxes will return a dummy image and our build/staging/production boxes will return real images (the ones that actually have the vendor's image installer installed). This has to be a solved problem. What have other people done to handle these types of problems? I'm open to suggestions.

    Read the article

  • How do I efficiently parse a CSV file in Perl?

    - by Mike
    I'm working on a project that involves parsing a large CSV-formatted file in Perl and am looking to make things more efficient. My approach has been to split() the file by lines first, and then split() each line again by commas to get the fields. But this is suboptimal, since at least two passes over the data are required (once to split by lines, then once again for each line). This is a very large file, so cutting processing in half would be a significant improvement to the entire application. My question is: what is the most time-efficient means of parsing a large CSV file using only built-in tools? Note: each line has a varying number of tokens, so we can't just ignore lines and split by commas only. Also we can assume fields will contain only alphanumeric ASCII data (no special characters or other tricks). Also, I don't want to get into parallel processing, although it might work effectively. Edit: it can only involve built-in tools that ship with Perl 5.8. For bureaucratic reasons, I cannot use any third-party modules (even if hosted on CPAN). Another edit: let's assume that our solution is only allowed to deal with the file data once it is entirely loaded into memory. Yet another edit: I just grasped how stupid this question is. Sorry for wasting your time. Voting to close.

    Read the article

  • Why does reusing arrays increase performance so significantly in c#?

    - by Willem
    In my code, I perform a large number of tasks, each requiring a large array of memory to temporarily store data. I have about 500 tasks. At the beginning of each task, I allocate memory for an array:

        double[] tempDoubleArray = new double[M];

    M is a large number depending on the precise task, typically around 2,000,000. Now, I do some complex calculations to fill the array, and in the end I use the array to determine the result of this task. After that, tempDoubleArray goes out of scope. Profiling reveals that the calls to construct the arrays are time consuming. So, I decided to try to reuse the array, by making it static. It requires some additional juggling to figure out the minimum size of the array, requiring an extra pass through all tasks, but it works. Now the program is much faster (from 80 s to 22 s for execution of all tasks).

        double[] tempDoubleArray = staticDoubleArray;

    However, I'm a bit in the dark about why precisely this works so well. I'd say that in the original code, when tempDoubleArray goes out of scope, it can be collected, so allocating a new array should not be that hard, right? I ask this because understanding why it works might help me figure out other ways to achieve the same effect, and because I would like to know in what cases allocation gives performance issues.
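
    A plausible explanation (hedged, since the details depend on the runtime version) is the Large Object Heap: anything over about 85,000 bytes, and a 2,000,000-element double[] is roughly 16 MB, is allocated on the LOH, which is only collected during expensive generation-2 collections and is not compacted, so 500 fresh allocations mean a lot of GC work and possible fragmentation, while reuse avoids that entirely. A sketch of the reuse pattern with the extra care it needs:

        using System;

        class TaskRunner
        {
            // One shared buffer, grown only when a task needs more than is already there.
            static double[] sharedBuffer;

            static void RunTask(int m)
            {
                if (sharedBuffer == null || sharedBuffer.Length < m)
                    sharedBuffer = new double[m];

                // Only the first m elements belong to this task; clear them unless the
                // computation is guaranteed to overwrite every slot before reading it.
                Array.Clear(sharedBuffer, 0, m);

                // ... fill sharedBuffer[0..m-1] and compute this task's result ...
            }
        }

    The trade-off is that the buffer is no longer safe to use from multiple threads and pins roughly 16 MB for the lifetime of the run.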

    Read the article

  • jQuery/Ajax IE7 - Long requests fail

    - by iQ
    Hi guys, I have a problem with IE7 regarding an ajax call made by the jQuery .load() function. Basically the request works in cases where the URL string is not too long, but as soon as the URL gets very large it fails. Doing some debugging on the Ajax call I found this error:

        URL: <blanked out for security reasons, but it's very long>
        Content Type:
        Headers size (bytes): 0
        Data size (bytes): 0
        Total size (bytes): 0
        Transferred data size (bytes): 0
        Cached data: No
        Error result: 0x800c0005
        Error constant: INET_E_RESOURCE_NOT_FOUND
        Error description: The server or proxy was not found
        Extended error result: 0x7a
        Extended error description: The data area passed to a system call is too small.

    As you can see, it looks like nothing is being sent. This only happens on IE7 and not on other browsers; with IE8 there is a small delay but it still works. The same request works fine when the URL string is relatively small. I need this working on IE7 for compatibility reasons and I cannot find workarounds for this, so any help is greatly appreciated. The actual ajax call is like this:

        $("ID").load("url?lotsofparams", function() { /* callback */ });

    "lotsofparams" can vary, sometimes being small or very large. It's when the string is very large that I get the above error, on IE7 only.
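
    Internet Explorer enforces a URL length limit of roughly 2,083 characters, which would be consistent with the symptom described. One way around it with .load() itself: when the data argument is an object, jQuery switches the request from GET to POST, so the parameters travel in the request body instead of the URL. A hedged sketch with placeholder selector and parameter names:

        // Instead of packing everything into the query string:
        // $("#result").load("url?lotsofparams", callback);
        $("#result").load("url", { param1: value1, param2: value2 }, function (response, status) {
            // runs once the returned fragment has been inserted
        });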

    Read the article

  • jquery displaying table rows dynamically in dom belonging to one/more particular class

    - by whatf
    I have the basic HTML structure:

        <table>
          <tbody>
            <tr class="1">
              <td>
                <p style="font-size:large">
                  <span class="muted"> This is the first object </span>
                </p>
              </td>
            </tr>
            <tr class="2">
              <td>
                <p style="font-size:large">
                  <span class="muted"> This is second object </span>
                </p>
              </td>
            </tr>
            <tr class="3">
              <td>
                <p style="font-size:large">
                  <span class="muted"> this is the third object </span>
                </p>
              </td>
            </tr>
          </tbody>
        </table>

    And then I have checkboxes. The functionality I want is: if checkbox 1 is checked, only the tr with class 1 should be displayed. If checkboxes 2 and 3 are checked, the tr with class 1 gets hidden and 2 and 3 show in the DOM. Again, if checkboxes 2 and 3 are unchecked and 1 is checked, the trs with classes 2 and 3 do not show and 1 is shown. How can this be done in jQuery?
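
    A hedged sketch, assuming each checkbox carries the matching row class in its value attribute (the checkbox markup is my assumption, not from the question). Note that a class consisting of a bare digit cannot be matched with a ".1"-style class selector, so the sketch uses an attribute selector instead:

        <label><input type="checkbox" class="row-toggle" value="1"> one</label>
        <label><input type="checkbox" class="row-toggle" value="2"> two</label>
        <label><input type="checkbox" class="row-toggle" value="3"> three</label>

        <script>
        $(function () {
            var $rows = $("table tbody tr");
            $(".row-toggle").change(function () {
                $rows.hide();
                $(".row-toggle:checked").each(function () {
                    // class="1" etc. can't be selected with ".1", so match the attribute word
                    $rows.filter('[class~="' + this.value + '"]').show();
                });
            }).change();   // apply the initial state on page load
        });
        </script>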

    Read the article

  • form validation with jquery and livevalidation

    - by ImpY
    I'm trying to do some form validation with LiveValidation & jQuery. I have a form with an input field like this:

        <div id="prenameDiv" class="control-group">
          <input id="prename" name="prename" class="input-large" placeholder="Max" >
        </div>

    If there's a validation error, LiveValidation adds the class 'LV_invalid_field' to the input, so it looks like this:

        <div id="prenameDiv" class="control-group">
          <input id="prename" name="prename" class="input-large LV_invalid_field" placeholder="Max" >
        </div>

    That's OK, but now I want to add another class, 'error', to the div 'prenameDiv' when the DOM changes, so that it looks like this:

        <div id="prenameDiv" class="control-group error">
          <input id="prename" name="prename" class="input-large LV_invalid_field" placeholder="Max" >
        </div>

    I tried it this way:

        if ($("#prenameDiv").bind("DOMSubtreeModified")) {
            if ($("#prename").hasClass("LV_invalid_field")) {
                $("#prenameDiv").addClass("error");
            }
        }

    But nothing changes. Do you have any ideas?

    Read the article

  • jQuery - Show id, based on selected items class?

    - by Jon Hadley
    I have a layout roughly as follows:

        <div id="foo">
          <!-- a bunch of content -->
        </div>

        <div id="thumbnails">
          <div class="thumb-content1"></div>
          <div class="thumb-content2"></div>
          <div class="thumb-content3"></div>
        </div>

        <div id="content-1">
          <!-- some text and pictures, including large-pic1 -->
        </div>
        <div id="content-2">
          <!-- some text and pictures, including large-pic2 -->
        </div>
        <div id="content-3">
          <!-- some text and pictures, including large-pic3 -->
        </div>

        etc ...

    On page load I want to show 'foo' and 'thumbnails' and hide the three content divs. As the user clicks each thumbnail, I want to hide foo and replace it with the matching 'content-x'. I can get my head round jQuery show, hide and replace (although, bonus points if you want to include that in your example!). But how would I extract and construct the appropriate content id from the thumbnail class, then pass it to the show/hide code?
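
    A hedged sketch: pull the digits out of the thumbnail's class with a regex and build the matching id from them. This assumes the thumb-contentN class is present on the clicked div and that the ids follow the content-N pattern shown above:

        $(function () {
            $('[id^="content-"]').hide();                 // page load: only #foo and #thumbnails show

            $("#thumbnails div").click(function () {
                var match = /thumb-content(\d+)/.exec(this.className);
                if (!match) return;
                $("#foo, [id^='content-']").hide();
                $("#content-" + match[1]).show();         // e.g. .thumb-content2 -> #content-2
            });
        });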

    Read the article
