Search Results

Search found 22065 results on 883 pages for 'performance testing'.

  • What's the fastest way to bulk insert a lot of data in SQL Server (C# client)

    - by Andrew
    I am hitting some performance bottlenecks with my C# client inserting bulk data into a SQL Server 2005 database and I'm looking for ways to speed up the process. I am already using SqlClient.SqlBulkCopy (which is based on TDS) to speed up the data transfer across the wire, which helped a lot, but I'm still looking for more. I have a simple table that looks like this:

        CREATE TABLE [BulkData](
            [ContainerId] [int] NOT NULL,
            [BinId] [smallint] NOT NULL,
            [Sequence] [smallint] NOT NULL,
            [ItemId] [int] NOT NULL,
            [Left] [smallint] NOT NULL,
            [Top] [smallint] NOT NULL,
            [Right] [smallint] NOT NULL,
            [Bottom] [smallint] NOT NULL,
            CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED (
                [ContainerId] ASC,
                [BinId] ASC,
                [Sequence] ASC
            ))

    I'm inserting data in chunks that average about 300 rows, where ContainerId and BinId are constant in each chunk, the Sequence value is 0-n, and the values are pre-sorted by the primary key. The %Disk Time performance counter spends a lot of time at 100%, so it is clear that disk I/O is the main issue, but the speeds I'm getting are several orders of magnitude below a raw file copy. Does it help any if I:

    - Drop the primary key while I am doing the inserting and recreate it later?
    - Do inserts into a temporary table with the same schema and periodically transfer them into the main table, to keep the size of the table where insertions are happening small?
    - Anything else?

    Based on the responses I have gotten, let me clarify a little bit:

    Portman: I'm using a clustered index because when the data is all imported I will need to access the data sequentially in that order. I don't particularly need the index to be there while importing the data. Is there any advantage to having a nonclustered PK index while doing the inserts, as opposed to dropping the constraint entirely for the import?

    Chopeen: The data is being generated remotely on many other machines (my SQL server can currently only handle about 10, but I would love to be able to add more). It's not practical to run the entire process on the local machine, because it would then have to process 50 times as much input data to generate the output.

    Jason: I am not doing any concurrent queries against the table during the import process; I will try dropping the primary key and see if that helps. ~ Andrew
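
    A detail worth checking before touching the schema is how SqlBulkCopy itself is configured. Below is a minimal C# sketch, assuming a pre-sorted DataTable as the source; the helper name and batch size are illustrative, but TableLock and BatchSize are real SqlBulkCopy options that often matter for load speed:

        using System.Data;
        using System.Data.SqlClient;

        static void BulkLoad(DataTable rows, string connectionString)
        {
            // TableLock takes a bulk-update lock instead of per-row locks;
            // BatchSize controls how many rows go to the server per batch.
            using (var bulk = new SqlBulkCopy(connectionString,
                SqlBulkCopyOptions.TableLock))
            {
                bulk.DestinationTableName = "BulkData";
                bulk.BatchSize = 5000;     // tune against log/disk throughput
                bulk.BulkCopyTimeout = 0;  // disable the timeout for big loads
                bulk.WriteToServer(rows);
            }
        }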

  • Help with Neuroph neural network

    - by user359708
    For my graduate research I am creating a neural network that trains to recognize images. I am going much more complex than just taking a grid of RGB values, downsampling, and sending them to the input of the network, like many examples do. I actually use over 100 independently trained neural networks that detect features, such as lines, shading patterns, etc. Much more like the human eye, and it works really well so far!

    The problem is I have quite a bit of training data. I show it over 100 examples of what a car looks like, then 100 examples of what a person looks like, then over 100 of what a dog looks like, etc. This is quite a bit of training data! Currently I am running at about one week to train the network, which is kind of killing my progress, as I need to adjust and retrain.

    I am using Neuroph as the low-level neural network API. I am running a dual quad-core machine (16 cores with hyperthreading), so this should be fast, yet my processor usage is at only 5%. Are there any tricks for Neuroph performance? Or Java performance in general? Suggestions? I am a cognitive psych doctoral student, and I am decent as a programmer, but do not know a great deal about performance programming.
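
    Neuroph's learning loop runs on a single thread, so roughly 5% CPU on a 16-thread box is about what one busy core looks like. Because the 100+ feature detectors are trained independently, the simplest speedup is usually to train several networks at once. A hedged Java sketch follows; Neuroph's class and method names vary between versions (older releases use TrainingSet where newer ones use DataSet), so treat the API details as assumptions:

        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;
        import org.neuroph.core.NeuralNetwork;
        import org.neuroph.core.data.DataSet;

        public class ParallelTrainer {
            // Trains each independent network on its own pool thread.
            public static void trainAll(List<NeuralNetwork> nets,
                                        List<DataSet> sets)
                    throws InterruptedException {
                ExecutorService pool = Executors.newFixedThreadPool(
                        Runtime.getRuntime().availableProcessors());
                for (int i = 0; i < nets.size(); i++) {
                    final NeuralNetwork net = nets.get(i);
                    final DataSet data = sets.get(i);
                    pool.submit(() -> net.learn(data)); // blocking learn, one per task
                }
                pool.shutdown();
                pool.awaitTermination(7, TimeUnit.DAYS);
            }
        }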

  • Why is OpenSubKey() returning null on my Win 7 64 bit system?

    - by BrMcMullin
    Has anyone seen OpenSubKey() and other Microsoft.Win32 registry functions return null on 64-bit systems when 32-bit registry keys are under Wow6432Node in the registry?

    I'm working on a unit testing framework that makes a call to OpenSubKey() from the .NET library. My dev system is a Win 7 64-bit environment with VS 2008 SP1 and the Win 7 SDK installed. The application we're unit testing is a 32-bit application, so the registry is virtualized under HKLM\Software\Wow6432Node. When we call:

        Registry.LocalMachine.OpenSubKey( @"Software\MyCompany\MyApp\" );

    null is returned; however, explicitly stating to look here works:

        Registry.LocalMachine.OpenSubKey( @"Software\Wow6432Node\MyCompany\MyApp\" );

    From what I understand, this function should be agnostic to 32-bit or 64-bit environments and should know to jump to the virtual node. Even stranger is the fact that the exact same call inside a compiled and installed version of our application runs just fine on the same system and gets the registry keys necessary to run, which are also being placed in HKLM\Software\Wow6432Node. Any suggestions? Thanks in advance!
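
    For what it's worth, registry redirection depends on the bitness of the calling process: a 64-bit test runner reads the native 64-bit hive, while the installed 32-bit application is transparently redirected to Wow6432Node, which would explain both behaviors. A hedged sketch of the explicit fix, assuming the project can target .NET 4 or later (RegistryView does not exist in .NET 3.5):

        using Microsoft.Win32;

        // Ask for the 32-bit view explicitly, regardless of whether the
        // calling process is 32- or 64-bit.
        using (RegistryKey hklm32 = RegistryKey.OpenBaseKey(
                   RegistryHive.LocalMachine, RegistryView.Registry32))
        using (RegistryKey key = hklm32.OpenSubKey(@"Software\MyCompany\MyApp"))
        {
            if (key != null)
            {
                // read values here
            }
        }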

  • Applets failing to load

    - by Roy Tang
    While testing our setup for user acceptance testing, we got some reports that Java applets in our web application would occasionally fail to load. The environment where it was reported was WinXP/IE6, and there were no errors found in the Java console. Obviously we'd like to avoid this. What sort of things should we be checking for here? On our local servers, everything seems fine. There's some turnaround time when sending questions to the on-site guy, so I'd like to cover as many possible causes as possible.

    Some more info: We have multiple applets; in the instances where they fail loading, all of them fail loading. The applet jar files vary in size from 2MB to 8MB. I'm told it seems more likely to happen if the applets aren't cached yet, i.e. if they've been able to load the applets once on a given machine, further runs on that machine go smoothly. I'm wondering if there's some sort of network transfer error when downloading the applets, but I don't know how to verify that. Any advice is welcome!

  • Design suggestion for expression tree evaluation with time-series data

    - by Lirik
    I have a (C#) genetic program that uses financial time-series data and it's currently working, but I want to re-design the architecture to be more robust. My main goals are:

    - sequentially present the time-series data to the expression trees.
    - allow expression trees to access previous data rows when needed.
    - optimize performance of the data access while evaluating the expression trees.
    - keep a common interface so various types of data can be used.

    Here are the possible approaches I've thought about:

    1. I can evaluate the expression tree by passing a data row into the root node and letting each child node use the same data row.
    2. I can evaluate the expression tree by passing in the data row index and letting each node get the data row from a shared DataSet (currently I'm passing the row index and going to multiple synchronized arrays to get the data).
    3. Hybrid: an immutable data set is accessible by all of the expression trees, and each expression tree is evaluated by passing in a data row.

    The benefit of the first approach is that the data row is being passed into the expression tree and there is no further query done on the data set (which should increase performance in a multithreaded environment). The drawback is that the expression tree does not have access to the rest of the data (in case some of the functions need to do calculations using previous data rows).

    The benefit of the second approach is that the expression trees can access any data up to the latest data row, but unless I specify what that row is, I'll have to iterate through the rows and figure out which one is the last one.

    The benefit of the hybrid is that it should generally perform better and still provide access to the earlier data. It supports two basic "views" of data: the latest row and the previous rows.

    Do you guys know of any design patterns or do you have any tips that can help me build this type of system? Should I use a DataSet to hold and present the data, or are there more efficient ways to present rows of data while maintaining a simple interface? FYI: All of my code is written in C#.
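
    A minimal C# sketch of the hybrid option, with illustrative names: nodes receive a small read-only context that exposes the current row plus a bounded look-back, so evaluation stays query-free on the hot path while earlier rows remain reachable:

        public interface IDataRow
        {
            double this[string column] { get; }
        }

        public interface IDataContext
        {
            IDataRow Current { get; }         // the row being presented
            int CurrentIndex { get; }
            IDataRow Lookback(int rowsBack);  // rowsBack == 0 returns Current
        }

        public abstract class ExpressionNode
        {
            // Children share the same context, so no further lookup is
            // needed unless a node explicitly asks for history.
            public abstract double Evaluate(IDataContext context);
        }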

  • Is a red-black tree my ideal data structure?

    - by Hugo van der Sanden
    I have a collection of items (big rationals) that I'll be processing. In each case, processing will consist of removing the smallest item in the collection, doing some work, and then adding 0-2 new items (which will always be larger than the removed item). The collection will be initialised with one item, and work will continue until it is empty. I'm not sure what size the collection is likely to reach, but I'd expect in the range 1M-100M items. I will not need to locate any item other than the smallest.

    I'm currently planning to use a red-black tree, possibly tweaked to keep a pointer to the smallest item. However, I've never used one before, and I'm unsure whether my pattern of use fits its characteristics well.

    1. Is there a danger that the pattern of deletion from the left plus random insertion will affect performance, e.g. by requiring a significantly higher number of rotations than random deletion would? Or will delete and insert operations still be O(log n) with this pattern of use?

    2. Would some other data structure give me better performance, either because of the deletion pattern or by taking advantage of the fact that I only ever need to find the smallest item?

    Update: glad I asked, the binary heap is clearly a better solution for this case, and as promised it turned out to be very easy to implement. Hugo
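
    To make the accepted direction concrete, here is a short Java sketch of the processing loop on a binary min-heap; Item and process() are hypothetical stand-ins for the big-rational work described above (Item is assumed to implement Comparable):

        import java.util.List;
        import java.util.PriorityQueue;

        // poll() and add() are O(log n), and since new items are always
        // larger than the one removed, no decrease-key operation is ever
        // needed -- exactly the access pattern a heap handles well.
        PriorityQueue<Item> heap = new PriorityQueue<>();
        heap.add(initialItem);
        while (!heap.isEmpty()) {
            Item smallest = heap.poll();              // remove the minimum
            List<Item> produced = process(smallest);  // yields 0-2 larger items
            heap.addAll(produced);
        }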

  • Boost::Mutex & Malloc

    - by M. Tibbits
    Hi all, I'm trying to use a faster memory allocator in C++. I can't use Hoard due to licensing/cost. I was using NEDMalloc in a single-threaded setting and got excellent performance, but I'm wondering if I should switch to something else. As I understand things, NEDMalloc is just a replacement for the C-based malloc() and free(), not the C++-based new and delete operators (which I use extensively).

    The problem is that I now need to be thread-safe, so I'm trying to malloc an object which is reference counted (to prevent excess copying), but which also contains a mutex pointer. That way, if you're about to delete the last copy, you first need to lock the pointer, then free the object, and lastly unlock and free the mutex. However, using malloc to create a boost::mutex appears impossible, because I can't initialize the private object, as calling the constructor directly ist verboten.

    So I'm left with this odd situation, where I'm using new to allocate the lock and nedmalloc to allocate everything else. But when I allocate a large amount of memory, I run into allocation errors (which disappear when I switch to malloc instead of nedmalloc, but then the performance is terrible). My guess is that this is due to fragmentation in the memory and an inability of nedmalloc and new to play nice side by side. There has to be a better solution. What would you suggest?
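
    For the specific sub-problem of constructing a boost::mutex in raw malloc'd memory, placement new is the standard C++ technique: it runs a constructor at an address you already own, so a single allocator can serve both the object and its lock. A hedged sketch, assuming nedmalloc's usual nedmalloc()/nedfree() entry points (header and function names may differ per version):

        #include <new>                    // placement new
        #include <boost/thread/mutex.hpp>
        #include "nedmalloc.h"            // header name is an assumption

        boost::mutex* make_mutex()
        {
            void* raw = nedmalloc(sizeof(boost::mutex));
            return new (raw) boost::mutex();  // placement new runs the ctor
        }

        void destroy_mutex(boost::mutex* m)
        {
            m->~mutex();   // run the destructor explicitly
            nedfree(m);    // then hand the raw memory back to the allocator
        }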

  • AS3 - Can't access properties or methods of a MC child that has been added in script

    - by Chris
    Hi All - I am still a bit of a beginner at AS3, so bear with me, please. I have created a loop to instantiate tiles on a board. In the following example, "Gametiles" is an array containing objects of class "Tile", which is a class that extends MovieClip. "Game" is a MC that I added to the stage in the Flash developing environment.

        for (var i:uint = 0; i < Gametiles.length; i++) {
            var pulledTile = Gametiles[i];
            var tilename:String = "I_Tile_" + pulledTile.grid_y + "_" + pulledTile.grid_x;
            var createdTile = new InteractiveTile();
            pulledTile.addAnims(createdTile);
            Game.addChildAt(pulledTile, 0);
            Game.getChildAt(0).name = tilename;
        }

    The above code works - but with a tricky problem. If I do something like the following:

        trace(Game.I_Tile_1_3.x);

    I get "TypeError: Error #1010: A term is undefined and has no properties." However, I am able to access these children in the following manner:

        var testing = Game.getChildByName("I_Tile_1_3");
        trace(testing.x);

    This method is a bit cumbersome, though. I really don't want to have to create a var and call getChildByName every time I want to interact with these properties or methods. How can I set up these children so that I can access them directly without the extra steps?
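
    A sketch of one workaround, with the caveat that this reflects general AS3 behavior rather than anything specific to the Tile class: setting a child's name does not create a property on the parent (that only happens for timeline-placed instances), but MovieClip is dynamic, so you can register each tile on Game yourself as you add it:

        for (var i:uint = 0; i < Gametiles.length; i++) {
            var pulledTile = Gametiles[i];
            var tilename:String = "I_Tile_" + pulledTile.grid_y + "_" + pulledTile.grid_x;
            pulledTile.name = tilename;
            Game.addChildAt(pulledTile, 0);
            Game[tilename] = pulledTile; // dynamic property on the parent MC
        }

        trace(Game.I_Tile_1_3.x); // now resolves through the dynamic property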

  • PHP Hashtable array optimisation.

    - by hiprakhar
    I made a PHP app which was taking about ~0.0070 sec to execute. Then I added a hashtable array with about 2000 values, and suddenly the execution time went up to ~0.0700 secs - almost 10 times the previous value. I tried commenting out the part where I was searching inside the hashtable array (but the array was still left defined). Still, the execution time remains about ~0.0500 secs. The array is something like:

        $subjectinfo = array(
            'TPT753' => 'Industrial Training',
            'TPT801' => 'High Polymeric Engineering',
            'TPT802' => 'Corrosion Engineering',
            'TPT803' => 'Decorative, Industrial And High Performance Coatings',
            'TPT851' => 'Project');

    Is there any way to optimize this part? I cannot use a database, as I am running this app on Google App Engine, which does not yet support a JDO database for PHP. Some more code from the app:

        function getsubjectinfo($name) {
            $subjectinfo = array(
                'TPT753' => 'Industrial Training',
                'TPT801' => 'High Polymeric Engineering',
                'TPT802' => 'Corrosion Engineering',
                'TPT803' => 'Decorative, Industrial And High Performance Coatings',
                'TPT851' => 'Project');
            $name = str_replace("-", "", $name);
            $name = str_replace(" ", "", $name);
            if (isset($subjectinfo["$name"]))
                return "(".$subjectinfo["$name"].")";
            else
                return "";
        }

    Then I am using the following statement 2-3 times in the app:

        echo $key." ".$this->getsubjectinfo($key)
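
    One sketch of a likely fix: because the literal array sits inside getsubjectinfo(), PHP rebuilds all 2000 entries on every call. Declaring it static builds the table once, and the hash lookup itself stays O(1):

        function getsubjectinfo($name) {
            // "static" means the array literal is evaluated only on the
            // first call, not rebuilt every time the function runs.
            static $subjectinfo = array(
                'TPT753' => 'Industrial Training',
                'TPT801' => 'High Polymeric Engineering',
                // ... remaining ~2000 entries ...
            );
            $name = str_replace(array('-', ' '), '', $name);
            return isset($subjectinfo[$name]) ? '(' . $subjectinfo[$name] . ')' : '';
        }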

  • How to properly develop and deploy features for existing asp.net applications on IIS

    - by Tomh
    My question actually consists of multiple questions. I frequently read about companies who deploy a small subset of features for a select group of customers using the live database. Ruby on Rails and its ecosystem have deployment tools and database migrations to deploy or roll back such features in a live production or staging environment. My question: how is this done for an ASP.NET (MVC in particular) application? How do you test your newly released features against live data? Do you have any tools to modify the existing database and roll back changes if necessary? Do you make backups before deployment?

    Update: Maybe I should point out that my question is not really clear; getting more answers here will help me phrase the question better. To make it easier, I will describe a situation I commonly see with some of my clients. My clients have large deployments of popular web applications. They do not have staging/QA/testing servers (yes, this is not optimal). The data their apps consist of are images, XML files, user uploads, and data in SQL Server. Having a few records of their production database and a couple of dummy files is not a substitute for testing against real data, in my opinion. How would you design a workflow that can create an acceptable environment to mimic a production environment before going live?

  • Which Java library for Binary Decision Diagrams?

    - by reprogrammer
    A Binary Decision Diagram (BDD) is a data structure to represent boolean functions. I'd like to use this data structure in a Java program. My search for Java-based BDD libraries turned up the following packages:

    - Java Decision Diagram Libraries
    - JavaBDD
    - JDD

    If you know of any other BDD libraries available for Java programs, please let me know so that I can add them to the list above. If you have used any of these libraries, please tell me about your experience with the library. In particular, I'd like you to compare the available libraries along the following dimensions:

    - Quality. Is the library mature and reasonably bug free?
    - Performance. How do you evaluate the performance of the library?
    - Support. Could you easily get support whenever you encountered a problem with the library? Was the library well documented?
    - Ease of use. Was the API well designed? Could you install and use the library quickly and easily?

    Please mention the version of the library that you are evaluating.
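
    For readers comparing APIs, here is a hedged sketch of what basic JavaBDD usage looks like (method names follow its published BDDFactory API; the table and cache sizes are arbitrary assumptions to be tuned):

        import net.sf.javabdd.BDD;
        import net.sf.javabdd.BDDFactory;

        public class BddDemo {
            public static void main(String[] args) {
                // node table size and operation cache size
                BDDFactory factory = BDDFactory.init(1000000, 100000);
                factory.setVarNum(3);
                BDD a = factory.ithVar(0);
                BDD b = factory.ithVar(1);
                BDD c = factory.ithVar(2);
                BDD f = a.and(b).or(c.not()); // f = (a AND b) OR (NOT c)
                System.out.println("satisfying assignments: " + f.satCount());
                f.free();
            }
        }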

  • Is there a definitive list of the differences between the current version of SQL Azure and SQL Server 2008?

    - by Aim Kai
    I am a relative newbie when it comes to SQL Azure! I was wondering if there is a definitive list somewhere regarding what is and is not supported by SQL Azure, as compared to SQL Server 2008? I have had a look through Google, but I've noticed some of the blog posts are missing things which I have found through my own testing. For example, quite a lot is summarised in this blog entry: http://www.keepitsimpleandfast.com/2009/12/main-differences-between-sql-azure-and.html

    - Common Language Runtime (CLR)
    - Database file placement
    - Database mirroring
    - Distributed queries
    - Distributed transactions
    - Filegroup management
    - Global temporary tables
    - Spatial data and indexes
    - SQL Server configuration options
    - SQL Server Service Broker
    - System tables
    - Trace Flags

    which is a repeat of the MSDN page: http://msdn.microsoft.com/en-us/library/ff394115.aspx

    I've noticed from my own testing that the following seem to have issues when migrating from SQL Server 2008 to Azure:

    - XML types (MSDN does mention large custom types - I guess that may include this, even if the data schema is really small?)
    - Multi-part views

    I've been using SQL Azure Migration Wizard v3.1.8 to migrate local databases into the cloud. I was wondering if anyone could point me to a list, or give me any information on when these features are likely to be included in SQL Azure.

  • jQuery plugin Breaks after ajax call

    - by Jason
    Hello, I am quite a newbie to jQuery/Ajax but am having a problem with the site I'm making. At first the page loads fine. The boxes have a fade caption; when the title of the caption is clicked, you are brought to an Ajax-loaded page. Once you use the 'Back' button in the browser, or the 'Back to list' button I've made, the caption fade plugin no longer works and the box I had previously clicked is no longer clickable. Can anyone help?

    Here's my website: http://www.jcianfrone.com/testing
    jQuery: http://www.jcianfrone.com/testing/script.js

    HTML:

        <div id="pageContent">
            <div class="item"><a href="#page6"><img src="images/wrk-kd.jpg" width="286" height="200" alt="Koodikkki"></a><span id="caption"><a href="#">Title</a><p>Description</p></span></div>
            <div class="item"><a href="#page7"><img src="images/wrk-kd.jpg" width="286" height="200" alt="Koodikkki"></a><span id="caption"><a href="#">Title</a><p>Description</p></span></div>
            <div class="item"><a href="#page8"><img src="images/wrk-kd.jpg" width="286" height="200" alt="Koodikkki"></a><span id="caption"><a href="#">Title</a><p>Description</p></span></div>
            <div class="item"><a href="#page9"><img src="images/wrk-kd.jpg" width="286" height="200" alt="Koodikkki"></a><span id="caption"><a href="#">Title</a><p>Description</p></span></div>
        </div>

    Many thanks in advance
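
    A hedged sketch of the usual cause and fix: handlers bound once at load time die with the DOM nodes that Ajax replaces. With the jQuery of that era, .live() keeps a delegated handler working across content swaps (newer jQuery replaces it with .on() plus a selector), and any plugin initialisation has to be re-run after each swap. The initCaptions name and plugin call below are placeholders for whatever script.js actually does:

        // Delegated click handler survives DOM replacement:
        $('#pageContent .item a').live('click', function () {
            // caption / navigation logic here
        });

        // Re-initialise the caption fade plugin after every Ajax load:
        function initCaptions() {
            $('#pageContent .item').captionFade(); // hypothetical plugin call
        }
        $(document).ajaxComplete(initCaptions);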

  • jquery-sortable using behavior of a linkedlist

    - by BabaBooey
    I suspect I'm not looking at this issue in the right way, so here goes. I have what is essentially a linked list of data on a web page (http://en.wikipedia.org/wiki/Linked_list) that I'd like to manipulate using traditional linked-list behavior (i.e. just updating the reference/id of the "next" object), for performance reasons.

    Where this gets a bit tricky is that I'd ideally like to use jQuery's sortable to do this: the user would drag something up/down, and I could just make an Ajax call to the server with the id of the object that moved and the new parent id of that object (and then behind the scenes I could figure out how to reconnect things... maybe I need more data than that...). But every example I've seen where sortable is used sends the whole re-indexed list to the database to update, which seems unnecessary to me. With a linked list, changing an element's "index" only requires 3 updates, which, depending on the size of the list, could be a big performance savings.

    Does anyone have an example of what I'm trying to do... am I too far in left field?
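
    This seems feasible with sortable's update callback alone; a hedged JavaScript sketch (the URL and parameter names are made up for illustration) that reports only the moved node and its new predecessor, leaving the server to relink the prev/next references:

        $('#list').sortable({
            update: function (event, ui) {
                var moved = ui.item.attr('id');
                // null predecessor means the item became the new head
                var prev = ui.item.prev().attr('id') || null;
                $.post('/list/move', { id: moved, after: prev });
            }
        });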

  • Which articles should I read before starting to make my custom-drawn WinForms app?

    - by Dmitriy Matveev
    Hello! I'm currently developing a Windows Forms application with a lot of user controls. Some of them are just custom-drawn buttons or panels, and some of them are compositions of these buttons and panels inside FlowLayoutPanels and TableLayoutPanels. The window itself is also custom drawn.

    I don't have much experience in WinForms development, but I've made a proper decomposition of the proposed design into user controls, and the implementation is already almost finished. I've solved many of the problems that arose during development with the help of Google, MSDN, SO, and several dirty hacks (when nothing else helped), and am still experiencing some of them. There are a lot of gaps in my knowledge base, since I don't know the answers to many questions like:

    - When should I use things like double buffering, suspended layout, and suspended redraw?
    - What should I do with controls which shouldn't be visible at some moment?
    - What are common performance pitfalls (I think I've fallen into several)?

    So I think there should be some great articles which can give enough knowledge to avoid the most common problems and improve the performance and maintainability of my application. Maybe some of you can recommend a few?
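
    On the double-buffering question specifically, a short C# sketch of the idiom most custom-drawn WinForms controls start from (this is standard Control API, though whether it helps depends on your paint code):

        using System.Windows.Forms;

        public class SmoothPanel : Panel
        {
            public SmoothPanel()
            {
                // Paint everything in WM_PAINT through an off-screen buffer,
                // which removes most flicker in owner-drawn controls.
                SetStyle(ControlStyles.AllPaintingInWmPaint
                       | ControlStyles.OptimizedDoubleBuffer
                       | ControlStyles.UserPaint, true);
            }
        }

    Pair this with SuspendLayout()/ResumeLayout() around batches of child-control changes to avoid repeated layout passes.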

  • Javascript keyup doesn't work as expected; it fires in instances where I have not removed my finger from the key

    - by Binni
    I'm trying to create a simple game in JavaScript and I'm stuck on how to deal with keys. Small example:

        function keyUpEvent(event) {
            alert(event.keyCode);
        }
        window.addEventListener("keyup", keyUpEvent, false);

    I'm running Ubuntu 9.10 and testing in Firefox 3.5 and Chromium. If I press and release a button instantly, I get an alert, which is to be expected. But when I press and hold a button, I get a small pause and then a series of alert windows; the expected result is that I only get an alert window when I remove my finger from the button. I reason it has something to do with the fact that when I press and hold a button in a text area, for example, I get one character, a small pause, and then a series of characters: dddddddddddddddd.

    I believe it's possible to get around this, or do it more correctly, since this game, for example, is not affected by it: http://bohuco.net/testing/gamequery/pong.html. But I notice that if I try out the jQuery keyup demo (api.jquery.com/keyup/), I get the same problem. How can I implement basic game key-event handling?
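
    One common approach, sketched below: on some platforms (X11 key auto-repeat in particular) holding a key synthesises repeated keydown/keyup pairs, so games usually stop acting on the events themselves and instead maintain a key-state table that the game loop reads:

        var keys = {};

        window.addEventListener("keydown", function (e) {
            keys[e.keyCode] = true;
        }, false);

        window.addEventListener("keyup", function (e) {
            keys[e.keyCode] = false;
        }, false);

        // The game loop polls the current state, so repeat events are harmless:
        function tick() {
            if (keys[37]) { /* left arrow held: move left */ }
            if (keys[39]) { /* right arrow held: move right */ }
        }
        setInterval(tick, 33); // roughly 30 frames per second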

  • How do I simulate the usage of a sequence of web pages?

    - by Rory Becker
    I have a simple sequence of web pages written in ASP.NET 3.5 SP1:

    - Page1 - a logon form: txtUsername, txtPassword and cmdLogon.
    - Page2 - a menu (created using DevExpress ASP.NET controls).
    - Page3 - the page the server redirects to in the event that the user picks the right menu option on Page2.

    I would like to create a threaded program to simulate many users trying to use this sequence of pages. I have managed to create a host WinForms app which launches a new thread for each "user". I have further managed to work out the basics of WebRequest enough to perform a request which retrieves the logon page itself:

        Dim Request As HttpWebRequest = TryCast(WebRequest.Create("http://MyURL/Logon.aspx"), HttpWebRequest)
        Dim Response As HttpWebResponse = TryCast(Request.GetResponse(), HttpWebResponse)
        Dim ResponseStream As StreamReader = New StreamReader(Response.GetResponseStream(), Encoding.GetEncoding(1252))
        Dim HTMLResponse As String = ResponseStream.ReadToEnd()
        Response.Close()
        ResponseStream.Close()

    I need to simulate the user having entered information into the two TextBoxes and pressing logon. I have a hunch this requires me to add the right sort of "PostData" to the request before submitting. However, I'm also concerned that "ViewState" may be an issue. Am I correct regarding the PostData? How do I add the PostData to the request? Do I need to be concerned about ViewState?

    Update: While I appreciate that Selenium or similar products are useful for acceptance testing, I find that they are rather clumsy for what amounts to load testing. I would prefer not to load 100 instances of Firefox or IE in order to simulate 100 users hitting my site. This was the reason I was hoping to take the ASP.NET HttpWebRequest route.
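
    On both hunches: yes. A WebForms POST must echo back the page's hidden __VIEWSTATE (and, where present, __EVENTVALIDATION) fields, which you scrape from the GET response first. A hedged VB sketch - the viewState/eventValidation values are assumed to have been scraped already, and the field values are illustrative:

        ' Build the form body the way the browser would.
        Dim postData As String = _
            "__VIEWSTATE=" & HttpUtility.UrlEncode(viewState) & _
            "&__EVENTVALIDATION=" & HttpUtility.UrlEncode(eventValidation) & _
            "&txtUsername=user1&txtPassword=secret&cmdLogon=Log+On"
        Dim bytes As Byte() = Encoding.ASCII.GetBytes(postData)

        Dim Request As HttpWebRequest = _
            CType(WebRequest.Create("http://MyURL/Logon.aspx"), HttpWebRequest)
        Request.Method = "POST"
        Request.ContentType = "application/x-www-form-urlencoded"
        Request.ContentLength = bytes.Length
        ' CookieContainer keeps the session cookie across the page sequence.
        Request.CookieContainer = New CookieContainer()
        Using stream As Stream = Request.GetRequestStream()
            stream.Write(bytes, 0, bytes.Length)
        End Using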

  • What are the benefits of the different PHP compression libraries?

    - by Christopher W. Allen-Poole
    I've been looking into ways to compress PHP libraries, and I've found several libraries which might be useful, but I really don't know much about them. I've specifically been reading about the bcompiler and PHAR libraries. Is there any performance benefit in either of these? Are there any "gotchas" I need to watch out for? What are the relative benefits? Does either of them add to or detract from performance? I'm also interested in learning of other libraries which might be out there but are not obvious in the documentation.

    As an aside, does anyone happen to know whether these work more like zip files which just happen to have the code in there, or whether they operate more like Python's pre-compiling, which actually runs a pseudo-compiler?

    Edit: I've been asked, "What are you trying to accomplish?" Well, I suppose the answer is that this is all hypothetical. It is a combination of these:

    - What if my pet project becomes the most popular web project on earth and I want to distribute it quickly and easily? (Hey, a man can dream, right?)
    - It also seems that if using PHAR can be done easily, it would be the best way to create a subversion snapshot.
    - Python has this really cool pre-compiling policy; I wonder if PHP has something like that? These libraries seem to do something similar. Will they do that?
    - Hey, these libraries seem pretty neat, but I'd like clarification on the differences, as they seem to do the same thing.
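
    To make the PHAR side concrete, a minimal sketch of building a runnable archive with the stock Phar extension (requires phar.readonly=0 in php.ini; the paths are placeholders):

        <?php
        // Package every .php file under the source tree into one archive.
        $phar = new Phar('myapp.phar');
        $phar->buildFromDirectory('/path/to/myapp', '/\.php$/');

        // The stub runs index.php when the archive itself is executed.
        $phar->setStub($phar->createDefaultStub('index.php'));

    A PHAR is closer to the "zip file with code in it" model than to Python's .pyc pre-compilation: PHP still parses the contained source at include time unless a bytecode cache sits in front of it.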

  • A called function A(args) calls a function B(), which then calls A(args) again - how do I do that?

    - by Ken
    See example:

        <!DOCTYPE html>
        <html>
        <head>
            <title>language</title>
            <script type="text/javascript" src="http://www.google.com/jsapi"></script>
        </head>
        <body>
            <div id="language"></div>
            <script type="text/javascript">
                var loaded = false;

                function load_api() {
                    google.load("language", "1", {
                        "nocss": true,
                        "callback": function() {
                            loaded = true;
                            callback_to_caller(with_caller_agruments);
                            // how to call a function (with the same arguments)
                            // which called load_api() ???
                            // case 1 should be: detect_language('testing');
                            // case 2 should be: translate('some text');
                        }
                    });
                }

                function detect_language(text) {
                    if (!loaded) {
                        load_api();
                    } else {
                        // let's continue... believe that google.language is loaded & ready to use
                        google.language.detect(text, function(result) {
                            if (!result.error && result.language) {
                                document.getElementById('language').innerHTML = result.language;
                            }
                        });
                    }
                }

                function translate(text) {
                    if (!loaded) {
                        load_api();
                    } else {
                        // let's continue...
                    }
                }

                detect_language('testing'); // case 1
                translate('some text');     // case 2
            </script>
        </body>
        </html>
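
    A sketch of one standard answer: pass the pending call into load_api() as a closure, so whatever arguments the caller had are captured and replayed once loading completes:

        var loaded = false;

        function load_api(callback) {
            google.load("language", "1", {
                "nocss": true,
                "callback": function() {
                    loaded = true;
                    callback(); // resume whichever caller triggered the load
                }
            });
        }

        function detect_language(text) {
            if (!loaded) {
                // the closure remembers `text` for the retry
                load_api(function() { detect_language(text); });
                return;
            }
            google.language.detect(text, function(result) {
                if (!result.error && result.language) {
                    document.getElementById('language').innerHTML = result.language;
                }
            });
        }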

  • SQL Selects on subsets

    - by Adam
    I need to check if a row exists in a database, and I am trying to find the approach that offers the best performance. This is best summarised with an example. Let's assume I have the following table:

        dbo.Person(
            FirstName varchar(50),
            LastName varchar(50),
            Company varchar(50)
        )

    Assume this table has millions of rows; however, ONLY the column Company has an index. I want to find out if a particular combination of FirstName, LastName and Company exists. I know I can do this:

        IF EXISTS(select 1 from dbo.Person
                  where FirstName = @FirstName
                    and LastName = @LastName
                    and Company = @Company)
        Begin
            ....
        End

    However, unless I'm mistaken, that will do a full table scan. What I'd really like it to do is a query which utilises the index. With the table above, I know that the following query will have great performance, since it uses the index:

        Select * from dbo.Person where Company = @Company

    Is there any way to make the search run only on that subset of the data? E.g. something like this:

        select * from
        (
            Select * from dbo.Person where Company = @Company
        )
        where FirstName = @FirstName and LastName = @LastName

    That way, it would only be scanning a much narrower collection of data. I know the query above won't work, but is there a query that would? Oh, and I am unable to create temporary tables, as the user will only have read access.
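
    For what it's worth, a cost-based optimizer normally does this narrowing on its own: with the single-column index on Company, the EXISTS query can seek on Company and apply the name predicates only to the matching rows, so the nested-query form buys nothing. The sketch below assumes someone with DDL rights can add a composite index, which turns the check into a single exact seek:

        -- Hypothetical index; the seek columns are listed in match order.
        CREATE NONCLUSTERED INDEX IX_Person_Company_Name
            ON dbo.Person (Company, LastName, FirstName);

        IF EXISTS (SELECT 1 FROM dbo.Person
                   WHERE Company  = @Company
                     AND LastName = @LastName
                     AND FirstName = @FirstName)
        BEGIN
            -- row exists
        END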

  • Python - Open default mail client using mailto, with multiple recipients

    - by victorhooi
    Hi, I'm attempting to write a Python function to send an email to a list of users, using the default installed mail client. I want to open the email client and give the user the opportunity to edit the list of users or the email body. I did some searching, and according to this page, it's apparently against the RFC spec to put multiple comma-delimited recipients in a mailto link: http://www.sightspecific.com/~mosh/WWW_FAQ/multrec.html However, that seems to be the way everybody else is doing it. What exactly is the modern stance on this?

    Anyhow, I found the following two sites, which seem to suggest solutions using urllib.parse (urllib.parse.quote for me) and webbrowser.open:

    - http://2ality.blogspot.com/2009/02/generate-emails-with-mailto-urls-and.html
    - http://www.megasolutions.net/python/invoke-users-standard-mail-client-64348.aspx

    I tried the sample code from the first link (2ality.blogspot.com), and that worked fine and opened my default mail client. However, when I try to use the code in my own module, it seems to open up my default browser for some weird reason. No funny text in the address bar; it just opens up the browser.

    The email_incorrect_phone_numbers() function is in the Employees class, which contains a dictionary (employee_dict) of Employee objects, which themselves have a number of employee attributes (sn, givenName, mail etc.). Full code is here: http://stackoverflow.com/questions/2963975/python-converting-csv-to-objects-code-design

        from urllib.parse import quote
        import webbrowser

        ....

        def email_incorrect_phone_numbers(self):
            email_list = []
            for employee in self.employee_dict.values():
                if not PhoneNumberFormats.standard_format.search(employee.telephoneNumber):
                    print(employee.telephoneNumber, employee.sn, employee.givenName, employee.mail)
                    email_list.append(employee.mail)
            recipients = ', '.join(email_list)
            webbrowser.open("mailto:%s?subject=%s&body=%s" % (recipients, quote("testing"), quote('testing')))

    Any suggestions? Cheers, Victor
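
    One hedged suggestion: the ', '.join() puts a raw space into the URL, and a malformed URL is exactly the kind of thing that can make the handler fall back to the browser. Joining with a bare comma and handing the mailto URL to the OS association directly also tends to be more dependable than webbrowser.open for non-http schemes. A sketch (the platform commands are the usual ones, but treat them as assumptions):

        import os
        import subprocess
        import sys
        from urllib.parse import quote

        def open_mailto(recipients, subject, body):
            # No space after the comma: spaces are not legal in a URL.
            url = "mailto:%s?subject=%s&body=%s" % (
                ",".join(recipients), quote(subject), quote(body))
            if sys.platform.startswith("win"):
                os.startfile(url)                   # Windows shell association
            elif sys.platform == "darwin":
                subprocess.call(["open", url])      # macOS
            else:
                subprocess.call(["xdg-open", url])  # most Linux desktops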

  • UNIX-style RegExp replace running extremely slowly under Windows. Help? EDIT: Negative lookahead assertion

    - by John Sullivan
    I'm trying to run a Unix regexp on every log file in a 1.12 GB directory, then replace the matched pattern with ''. A test run on a 4 MB file took about 10 minutes, but worked. Obviously something is murdering performance by several orders of magnitude.

    Find: ^(?!.*155[0-2][0-9]{4}\s.*).*$ -- NOTE: match any line NOT containing 155[0-2]NNNN followed by whitespace, where N is a digit 0-9. Replace with: ''.

    Is there some justifiable reason for my regexp to take this long to replace matching text, or is the program I am using (this is Windows / a program called "grepWin") most likely poorly optimized? Thanks.

    UPDATE: I am noticing that searching for ^(155[0-2]).$ takes ~7 seconds in a 5.6 MB file with 77 matches. Adding the negative lookahead assertion, ?!, so that the regexp becomes ^(?!155[0-2]).$ causes it to take at least 5-10 minutes; granted, there will be thousands and thousands of matches. Should a negative lookahead assertion be extremely detrimental to performance, and/or a large quantity of matches?

  • How to set SqlMapClient outside of Spring XMLs

    - by Omnipresent
    I have the following in my XML configuration. I would like to convert this to code, because I am doing some unit/integration testing outside of the container.

    XML:

        <bean id="MyMapClient" class="org.springframework.orm.ibatis.SqlMapClientFactoryBean">
            <property name="configLocation" value="classpath:sql-map-config-oracle.xml"/>
            <property name="dataSource" ref="IbatisDataSourceOracle"/>
        </bean>

        <bean id="IbatisDataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
            <property name="jndiName" value="jdbc/my/mydb"/>
        </bean>

    Code I used to fetch stuff from the above XML:

        this.setSqlMapClient((SqlMapClient) ApplicationInitializer.getApplicationContext().getBean("MyMapClient"));

    My code (for unit testing purposes):

        SqlMapClientFactoryBean bean = new SqlMapClientFactoryBean();
        UrlResource urlrc = new UrlResource("file:/data/config.xml");
        bean.setConfigLocation(urlrc);

        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName("oracle.jdbc.OracleDriver");
        dataSource.setUrl("jdbc:oracle:thin:@123.210.85.56:1522:ORCL");
        dataSource.setUsername("dbo_mine");
        dataSource.setPassword("dbo_mypwd");
        bean.setDataSource(dataSource);

        SqlMapClient sql = (SqlMapClient) bean; // code fails here

    If the XML is what sets up the SqlMapClient, then how come I can't convert SqlMapClientFactoryBean to SqlMapClient?
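
    A hedged sketch of the usual fix: SqlMapClientFactoryBean is a Spring FactoryBean, i.e. a factory for the SqlMapClient rather than a SqlMapClient itself (inside a container, getBean("MyMapClient") hides this by returning the factory's product). Outside the container you have to run its lifecycle callback and then ask it for the object:

        SqlMapClientFactoryBean bean = new SqlMapClientFactoryBean();
        bean.setConfigLocation(new UrlResource("file:/data/config.xml"));
        bean.setDataSource(dataSource);

        bean.afterPropertiesSet();  // InitializingBean callback builds the client
        SqlMapClient sql = (SqlMapClient) bean.getObject();  // the actual product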

  • Perl Regex - Condensing groups of find/replace

    - by brydgesk
    I'm using Perl to perform some file cleansing, and am running into some performance issues. One of the major parts of my code involves standardizing name fields. I have several sections that look like this:

        sub substitute_titles {
            my ($inStr) = @_;
            ${$inStr} =~ s/ PHD./ PHD /;
            ${$inStr} =~ s/ P H D / PHD /;
            ${$inStr} =~ s/ PROF./ PROF /;
            ${$inStr} =~ s/ P R O F / PROF /;
            ${$inStr} =~ s/ DR./ DR /;
            ${$inStr} =~ s/ D.R./ DR /;
            ${$inStr} =~ s/ HON./ HON /;
            ${$inStr} =~ s/ H O N / HON /;
            ${$inStr} =~ s/ MR./ MR /;
            ${$inStr} =~ s/ MRS./ MRS /;
            ${$inStr} =~ s/ M R S / MRS /;
            ${$inStr} =~ s/ MS./ MS /;
            ${$inStr} =~ s/ MISS./ MISS /;
        }

    I'm passing by reference to try to get at least a little speed, but I fear that running so many (literally hundreds) of specific string replaces on tens of thousands (likely hundreds of thousands, eventually) of records is going to hurt performance. Is there a better way to implement this kind of logic than what I'm doing currently? Thanks.

    Edit: Quick note - not all the replace functions are just removing periods and spaces. There are string deletions, soundex groups, etc.
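
    A hedged sketch of one common restructuring: build a single alternation from a lookup table, so each record is scanned once instead of once per rule. The table below only covers the period-and-space cases, and note that the unescaped "." in the originals matches any character, which may itself be worth checking:

        # Map every variant spelling to its canonical title.
        my %canon = (
            'PHD.' => 'PHD',   'P H D' => 'PHD',
            'PROF.' => 'PROF', 'P R O F' => 'PROF',
            'DR.' => 'DR',     'D.R.' => 'DR',
            'HON.' => 'HON',   'H O N' => 'HON',
            'MR.' => 'MR',     'MRS.' => 'MRS',
            'M R S' => 'MRS',  'MS.' => 'MS',
            'MISS.' => 'MISS',
        );

        # Longest keys first, so the alternation prefers the fullest match.
        my $alt = join '|',
                  map { quotemeta }
                  sort { length($b) <=> length($a) } keys %canon;
        my $title_re = qr/ ($alt) /;

        sub substitute_titles {
            my ($inStr) = @_;
            ${$inStr} =~ s/$title_re/ $canon{$1} /g;
        }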

  • Make Ant's delete task fail when a directory exists and is not deleted but not when it doesn't exist

    - by Tim Visher
    I have the following clean target in my build script, and I'd like to know how I can improve it:

        <target name="clean" description="Clean output directories.">
            <!-- Must not fail on error, because it fails if directories
                 don't exist. Is there really no better way to do this? -->
            <delete includeEmptyDirs="true" failonerror="false">
                <fileset dir="${main.build.directory}" />
                <fileset dir="dist" />
                <fileset dir="${documentation.build.directory}" />
                <fileset dir="/build-testing" />
            </delete>
        </target>

    Specifically, regarding my comment: I'm unhappy that I can't run this on a fresh box, because the directory structure hasn't been set up yet by the other targets. We run the build in such a way that it entirely recreates the structures necessary for testing and deployment every time, to avoid stale class files and such. With the way delete is currently set up, a failure to delete a file does not fail the build, and I'd like it to. I don't want it to fail the build if the file doesn't exist, though - if it doesn't exist, then what I'm asking it to do has already happened. Thoughts?
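
    One hedged possibility, assuming Ant 1.7.1 or newer: filesets accept an erroronmissingdir attribute, which lets the delete keep failonerror="true" for genuine deletion failures while silently skipping directories that don't exist yet:

        <target name="clean" description="Clean output directories.">
            <delete includeEmptyDirs="true" failonerror="true">
                <fileset dir="${main.build.directory}" erroronmissingdir="false" />
                <fileset dir="dist" erroronmissingdir="false" />
                <fileset dir="${documentation.build.directory}" erroronmissingdir="false" />
                <fileset dir="/build-testing" erroronmissingdir="false" />
            </delete>
        </target>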
