Search Results

Search found 22065 results on 883 pages for 'performance testing'.


  • C++ .NET DLL vs C# Managed Code? (File Encryption: AES-128 + XTS)

    - by Ranhiru
    I need to create a Windows Mobile application (WinMo 6.x, C#) that encrypts/decrypts files. However, it is my duty to write the encryption algorithm myself: AES-128 with XTS as the mode of operation. RijndaelManaged just doesn't cut it :( It is much slower than the DES and 3DES CryptoServiceProviders :O I know it all depends on how efficiently I write the algorithm. (And yes, I have to write it from scratch myself, but I can take a look at other implementations.) Nevertheless, does writing a C++ .NET DLL for the encryption/decryption algorithm plus all the file handling, and calling it from C#, have a significant performance advantage over writing the encryption algorithm and file handling entirely in managed C# code? If I use C++ .NET for the encryption algorithm, should I use an MFC Smart Device DLL or ATL? What is the difference, and does the choice have any impact? And can I just add a reference to the C++ DLL from C#, or should I use P/Invoke? I am more competent with C# than C++, but performance plays a major role, as I have convinced my lecturers that AES is a very efficient cryptographic algorithm for resource-constrained devices. Thanks a bunch :)
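
    To illustrate the P/Invoke route, here is a minimal C# sketch of calling a native encryption routine; the DLL name and the exported function are hypothetical stand-ins for whatever the C++ project would export:

        using System;
        using System.Runtime.InteropServices;

        static class NativeCrypto
        {
            // Hypothetical export from a native DLL built for the device (ARM).
            // byte[] and the primitive arguments are blittable, so marshaling is
            // cheap; the cost to watch is crossing the boundary once per small buffer.
            [DllImport("AesXts.dll", EntryPoint = "EncryptSector")]
            public static extern int EncryptSector(byte[] buffer, int length,
                                                   byte[] key, long sectorNumber);
        }

        class Demo
        {
            static void Main()
            {
                byte[] sector = new byte[512];
                byte[] key = new byte[32]; // two 128-bit keys, as XTS requires
                int rc = NativeCrypto.EncryptSector(sector, sector.Length, key, 0);
                Console.WriteLine(rc == 0 ? "ok" : "error " + rc);
            }
        }

    Note that on the Compact Framework a plain "add reference" only works for managed assemblies; a native C++ DLL has to go through P/Invoke, since mixed-mode C++/CLI assemblies are not supported there.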

  • Should I be relying on WebTests for data validation?

    - by Alexander Kahoun
    I have a suite of web tests created for a web service. I use it to test a particular input method that updates a SQL database. The web service doesn't have a way to retrieve the data; that's not its purpose, only to update it. I have a validator that validates the response XML that the web service generates for each request. All that works fine. A teammate suggested I add data validation, so that after the initial response validator runs I check the database and compare the stored data with what was in the input request. We have a number of services and libraries, separate from the web service I'm testing, that I can use to get the data and compare it. The problem is that when I run the web test, the data validation always fails, even when the request succeeds. I've tried putting the thread to sleep between the response validation and the data validation, but to no avail; it always sees the data from before the response validation. I can set a breakpoint and visually confirm that the data has been updated in the DB; funny thing is, when I step through in debug with the breakpoint, it does validate successfully. Before I get too much deeper into this issue I have to ask: is this the purpose of web tests? Should I be able to validate data through service calls in this manner, or am I asking too much of a web test, and the response validation is as far as I should go?
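
    If the suspicion is timing rather than test design, one low-tech check is to poll for the expected row instead of sleeping once; a minimal C# sketch (the connection string and COUNT query are placeholders):

        using System;
        using System.Data.SqlClient;
        using System.Threading;

        static class DbPoll
        {
            // Retry the check for up to `timeout`, re-opening the connection each
            // time so the read cannot be pinned to a stale snapshot held by a
            // long-lived connection.
            public static bool WaitForRow(string connStr, string countSql, TimeSpan timeout)
            {
                DateTime deadline = DateTime.UtcNow + timeout;
                while (DateTime.UtcNow < deadline)
                {
                    using (var conn = new SqlConnection(connStr))
                    using (var cmd = new SqlCommand(countSql, conn))
                    {
                        conn.Open();
                        if ((int)cmd.ExecuteScalar() > 0) return true;
                    }
                    Thread.Sleep(250);
                }
                return false;
            }
        }

    If the row never appears until the test ends, the update is most likely committed only when the request's transaction completes, which would also explain why pausing at a breakpoint makes the validation pass.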

  • How do I grant a site's applet an AllPermission privilege?

    - by nahsra
    I'd like to grant certain applets java.security.AllPermission on my computer (for debugging and security testing). However, I don't want every applet I run to have this permission. So, editing my user Java policy file (which I have verified is the correct policy file through testing), I put in this entry:

        grant codeBase "http://host_where_applet_lives/-" {
            permission java.security.AllPermission;
        };

    This fails when the applet tries to do something powerful (create a new Thread, in my case). However, when I put in the following entry:

        grant {
            permission java.security.AllPermission;
        };

    the applet is able to perform the powerful operation. The only difference is the lack of a codeBase attribute. An answer to a similar question asked here [1] seemed to suggest (but never showed or proved) that AccessController.doPrivileged() calls may be required. To me this sounds wrong, as I don't need that call when I grant the permission to all applets (the second example above). Even if it were a solution, littering the applets I run with AccessController.doPrivileged() calls is not easy or necessarily possible, and to top it off, my tests show that it just doesn't work anyway. But I'm happy to hear more ideas around it. [1] http://stackoverflow.com/questions/1751412/cant-get-allpermission-configured-for-intranet-applet-can-anyone-help

  • Program Structure Design Tools? (Top Down Design)

    - by Lee Olayvar
    I have been looking to expand my methodologies to better incorporate unit testing, and I stumbled upon behavior-driven development (namely Cucumber, and a few others). I am quite intrigued by the concept, as I have never been able to design top-down properly, mostly because keeping track of the design gets lost without a decent way to record it. So, in a mostly language-agnostic way: are there any useful tools out there that I am (probably) unaware of? For example, I have often been tempted to try building flow charts for my programs, but I am not sure how much that would help, and it is unclear to me how I could make a flow chart complex enough to capture the logic of a full program and all its features; flow charts seem limiting as a design scheme, or likely to grow to an unmaintainable scale. BDD methods are nice, but with a system so tied to structure, integration with the language and unit testing seems like a must (for it to be worth it), and it seems hard to find something that works well with both Python and Java (my two main languages). So, any comments are much appreciated. I have searched around here, and top-down design seems to be a well-discussed topic, but I haven't seen much reference to the tools themselves, e.g., flow-chart programs. I am on Linux, if that matters (in the case of programs).

  • Does anyone know of a good Commercial WPF Web Browser Control?

    - by VoidDweller
    I have an MDI WPF app that I need to add web content to. At first it looks like I have two options built into the framework: the Frame control and the WebBrowser control. Given that this is an MDI app, it doesn't take long to discover that neither will work. The WebBrowser control wraps the IE WebBrowser ActiveX control, which uses the Win32 graphics pipeline; the "airspace" issue pretty much sums this up as "sorry, the layouts will not play nice together". Yes, I have thought about taking snapshots of the rendered web content, displaying those, and mapping mouse and keyboard events back to the browser control, but I can't afford the performance penalty, and I really don't have time to write and thoroughly test it. I have looked for third-party controls, but so far I have only found Chris Cavanagh's WPF Chromium Web Browser control, which wraps Awesomium 1.5. Together these are very cool, and they play nicely with WPF layouts, but they do not meet my performance requirements: they are VERY heavy on memory consumption, not too friendly with CPU usage either, and still quite buggy. I'll elaborate if you are interested. So, do any of you know of a stable, performant WPF web browser control? Thanks.

  • Mercurial: how to synchronize mq patches from a master repo to a set of clone repos

    - by dim
    I have to run a dozen different build tests on a code base maintained in a Mercurial repository. I don't want to run these tests serially on the same repository, because they modify a set of common files, and I want to run them in parallel on different machines. Also, after all the tests have run, I want access to the latest test results in those test work areas. Currently I clone the master repository a dozen times and run a different test in each clone. Before each test execution I do a pull/update/purge preparation sequence, so the test starts from the latest clean state. That works well for me. I also prepare new changes as mq patches, which I test on all clones as above before committing them. For testing candidate mq patches, I want to somehow deploy/synchronize them so they are available in the test clones, apply the ones that are ready for testing using a guard, and then run the test. Has anybody done this kind of synchronization before? What's the simplest way to do it? Do I need versioned mq patches for that?
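
    Versioned patch queues are probably the simplest route: mq can keep .hg/patches itself under version control, and the queue repository can then be pulled like any other. A hedged sketch of the commands (the repository URL is a placeholder, and this assumes the mq extension is enabled):

        # in the master repo: put the patch queue under version control
        hg qinit -c                       # creates .hg/patches as a repository
        hg qnew candidate.patch           # ... edit, then hg qrefresh ...
        hg qguard candidate.patch +ready  # applied only when the 'ready' guard is selected
        hg qcommit -m "candidate patch for build tests"

        # in each test clone: fetch the queue, then apply what is guarded 'ready'
        hg clone http://master/repo/.hg/patches .hg/patches   # first time only
        hg -R .hg/patches pull -u                             # afterwards
        hg qselect ready
        hg qpush -a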

  • Version control for PHP development

    - by Vinod Mohan
    Hello, I have been hearing a lot about the advantages of using a version control system and would like to try one. I did freelance web development in PHP for the past two years; two months ago I hired two more programmers to help me, and I will be hiring one more person soon. We maintain four websites, all of them my own, which are continuously being edited by one of us. I taught myself PHP and have never worked at another firm, so I am new to version control, unit testing and all that. Currently, we have development servers on our workstations. When we edit a particular section of a site, we download the code for that section (say /news/ or /movies/ or /wallpapers/) from the production server to the dev server, make the changes locally, and upload to the production server (no code review, no testing). Because of this, our dev servers are always out of date relative to the prod server. Occasionally this also creates problems when one of us forgets to download the latest copy from prod and overwrites the last change. I know this is very, very foolish, but currently the prod server is the only copy that has all the latest changes. Can anyone please suggest the best version control system for me? I am more interested in distributed version control, since we don't have a central backup for all our code. I read about Mercurial and Git, and found that Mercurial is used in several large open source projects by Mozilla, Sun, Symbian etc. So which one do you think is better for me? And not only version control: if there is any other package I can use to make my current setup better, please mention that too :) Thanks and Regards, Vinod Mohan
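
    Either tool fits this scenario; what changes the workflow is keeping one authoritative repository that every workstation clones, so prod is deployed from it instead of being the master copy. A minimal Mercurial sketch (all paths and hostnames are placeholders):

        # one-time: import a site into a repository and publish it
        cd /var/www/news && hg init && hg addremove && hg commit -m "initial import"

        # each developer works against a full local clone
        hg clone ssh://repo.example.com//srv/hg/news
        # ... edit files ...
        hg commit -m "fix pagination on the news index"
        hg pull -u          # merge anyone else's work
        hg push             # share yours

        # deploying to prod becomes an update of a clone, not a hand-upload
        ssh prod "cd /var/www/news && hg pull -u"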

  • log4j vs. System.out.println - logger advantages?

    - by wishi_
    Hi! I'm using log4j for the first time in a project. A fellow programmer told me that using System.out.println is considered bad style and that log4j is more or less the standard for logging nowadays. We do lots of JUnit testing, and System.out stuff turns out to be harder to test. Therefore I began using log4j in a Console controller class that just handles command-line parameters:

        // some logger config
        org.apache.log4j.BasicConfigurator.configure();
        Logger logger = LoggerFactory.getLogger(Console.class);
        Category cat = Category.getRoot();

    It seems to work:

        logger.debug("String");

    produces:

        1 [main] DEBUG project.prototype.controller.Console - String

    I have two questions about this. First, from my basic understanding, this logger should give me comfortable options to write a log file with timestamps, instead of spamming the console, if debug mode is enabled at the logger? Second, why is System.out.println harder to test? I searched Stack Overflow and found a testing recipe, so I wonder what kind of advantage I really get by using log4j.
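
    On the first question: appenders and layouts are exactly where that comfort comes from. For reference, a minimal log4j.properties sketch that sends DEBUG output to a rolling file with ISO timestamps instead of the console (file names are placeholders):

        log4j.rootLogger=DEBUG, file
        log4j.appender.file=org.apache.log4j.RollingFileAppender
        log4j.appender.file.File=logs/app.log
        log4j.appender.file.MaxFileSize=5MB
        log4j.appender.file.MaxBackupIndex=3
        log4j.appender.file.layout=org.apache.log4j.PatternLayout
        log4j.appender.file.layout.ConversionPattern=%d{ISO8601} [%t] %-5p %c - %m%n

    With a file like this on the classpath, log4j picks it up automatically (or via PropertyConfigurator.configure()), and the BasicConfigurator call can go away.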

  • "Replay" the steps needed to recreate an error

    - by David
    I am going to create a typical business application that will be used by a few hundred consultants. Normally, the consultants would be presented with an error message with standard text. As the application will be a complicated one, with lots of changes being made to it constantly, I would like the following: when an error message is presented, the user has the option to "send" the error report to the developers. The developers should be able to open the incoming file in e.g. Eclipse and debug the last 10 minutes of work step by step (one line at a time if they want to). Everything should be transparent, meaning that they should, for example, be able to see the return values of calls to the database. Are there any solutions that offer such functionality today? My preferred language is Python, or alternatively Java. I know that there will be a huge performance hit because of such functionality, but that is acceptable, as this kind of software is not performance sensitive. It would be VERY nice if the database also had a chronology, so that one could query the database for the values that existed at the exact time a specific line of code was run in the application, leading up to the bug.

  • Custom Cocoa Framework and a problem using it

    - by happyCoding25
    Hello, I made a custom Cocoa framework just to experiment and find the best way to make one, but ran into a problem using it. The framework project builds and compiles just fine, but when I use it in an Xcode project I get the error 'LogTest' undeclared. The name of the framework is LogTest. Here's the code of the app that uses the framework.

    AppDelegate.h:

        #import <Cocoa/Cocoa.h>
        #import <LogTest/LogTest.h>

        @interface TestAppDelegate : NSObject <NSApplicationDelegate> {
            NSWindow *window;
        }
        @property (assign) IBOutlet NSWindow *window;
        @end

    AppDelegate.m:

        #import "TestAppDelegate.h"

        @implementation TestAppDelegate
        @synthesize window;

        - (void)awakeFromNib
        {
            [LogTest logStart:@"testing 123":@"testing 1234"]; // this is the line where the error occurs
        }
        @end

    Framework code. LogTest.h:

        #import <Cocoa/Cocoa.h>
        #import "Method.h"

        @protocol LogTest // not sure if this is needed; I just wanted a blank header
        @end

    Method.h:

        #import <Cocoa/Cocoa.h>

        @interface Method : NSObject {
        }
        + (void)logStart:(NSString *)test:(NSString *)test2;
        @end

    Method.m:

        #import "Method.h"

        @implementation Method

        + (void)logStart:(NSString *)test:(NSString *)test2
        {
            NSLog(test);
            NSLog(test2);
        }
        @end

    If anyone knows why I am getting this error, please reply. Thanks for any help.

  • Google Code Jam 2010 Large DataSets Take Too Long to Submit

    - by Travis
    Hey guys, I'm participating in the 2010 Code Jam. I solved two of the problems for the small data sets, but I'm not even close to solving the large data sets in the 8-minute time frame. I'm wondering if anyone out there has solved the large data set: What hardware were you running on? What language were you using? What performance tuning did you do on your code to make it run as fast as possible? I'm writing the solutions in Ruby, which is not my day-to-day language, and executing them on my MacBook Pro. My solutions for problem A and problem C are on GitHub at http://github.com/tjboudreaux/codejam2010. I'd appreciate any suggestions you may have. FWIW, I have a lot of experience in C++ from college, my primary language is PHP, and my "sandbox" language is Ruby. Was I just a bit ambitious taking a shot at this in Ruby without knowing where the language struggles for performance, or does anyone see any red flag as to why I can't complete the large data set in time to submit?

  • Cassandra random read speed

    - by Jody Powlette
    We're still evaluating Cassandra for our data store. As a very simple test, I inserted a value for 4 columns into the Keyspace1/Standard1 column family on my local machine, amounting to about 100 bytes of data. Then I read it back by row key as fast as I could: 160,000 reads/second. Great. Then I put in a million similar records, all with keys of the form X.Y where X in (1..10) and Y in (1..100,000), and queried for a random record. Performance fell to 26,000 queries per second. This is still well above the number of queries we need to support (about 1,500/sec). Finally I put in ten million records, from 1.1 up through 10.1000000, and randomly queried for one of the 10 million. Performance is abysmal at 60 queries per second, and my disk is thrashing around like crazy. I also verified that if I ask for a subset of the data, say the 1,000 records between 3,000,000 and 3,001,000, it returns slowly at first and then, as they get cached, speeds right up to 20,000 queries per second and my disk stops going crazy. I've read all over that people are storing billions of records in Cassandra and fetching them at 5-6k per second, but I can't get anywhere near that with only 10 million records. Any idea what I'm doing wrong? Is there some setting I need to change from the defaults? I'm on an overclocked Core i7 box with 6 GB of RAM, so I don't think it's the machine. Here's my code to fetch records, which I spawn into 8 threads, each asking for one value from one column via row key:

        ColumnPath cp = new ColumnPath();
        cp.Column_family = "Standard1";
        cp.Column = utf8Encoding.GetBytes("site");
        string key = (1 + sRand.Next(9)) + "." + (1 + sRand.Next(1000000));
        ColumnOrSuperColumn logline = client.get("Keyspace1", key, cp, ConsistencyLevel.ONE);

    Thanks for any insights.

  • asynchronous pages

    - by lockedscope
    I have just read the articles on multi-threading and custom threading in ASP.NET: http://www.williablog.net/williablog/post/2008/12/16/Custom-Threading-in-ASPNET.aspx and http://www.williablog.net/williablog/post/2008/12/16/Multi-Threading-in-ASPNET.aspx I have a couple of questions. What does he mean by returning a thread to the pool? Is that thread completely removed from memory, or put into a state where it is not scheduled on the CPU (is it in a sleep state or whatever)? If that thread is removed from memory, how can the request survive past the async point? How does this mechanism work? Is every object (page class, request, response, etc.) copied somewhere else before the thread is released? (Or is it just waiting in a sleep state, to be woken when the async call ends?) Finally, he says, "Having said that, making pages asynchronous is not really about improving performance, it is about improving scalability", but then also, "I'm sorry to say that it will do nothing for scalability or performance." So which one is true, or in which case(s) is each true?
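
    For the first two questions: "returning the thread to the pool" means the worker thread goes back to serving other requests while the outstanding I/O waits in the kernel; nothing is copied, the page/request/response objects simply stay on the managed heap until the completion callback resumes the request, possibly on a different pool thread. A minimal sketch of the classic ASP.NET 2.0 async-page pattern (the backend URL is a placeholder, and the page must declare Async="true" in its @ Page directive):

        using System;
        using System.Net;
        using System.Web.UI;

        // Code-behind for a page declared with <%@ Page Async="true" ... %>
        public partial class SlowPage : Page
        {
            private WebRequest _request;

            protected void Page_Load(object sender, EventArgs e)
            {
                _request = WebRequest.Create("http://example.com/slow-backend");
                // After BeginWork returns, the worker thread goes back to the pool.
                AddOnPreRenderCompleteAsync(BeginWork, EndWork);
            }

            private IAsyncResult BeginWork(object sender, EventArgs e,
                                           AsyncCallback cb, object state)
            {
                return _request.BeginGetResponse(cb, state); // I/O waits in the kernel
            }

            private void EndWork(IAsyncResult ar)
            {
                using (WebResponse response = _request.EndGetResponse(ar))
                {
                    // Resumes on a (possibly different) pool thread;
                    // page state is still in memory and rendering continues.
                }
            }
        }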

  • PostgreSQL: BYTEA vs OID+Large Object?

    - by mlaverd
    I started an application with Hibernate 3.2 and PostgreSQL 8.4. I have some byte[] fields that were mapped as @Basic (= PG bytea) and others that were mapped as @Lob (= PG Large Object). Why the inconsistency? Because I was a Hibernate noob. Now, those fields are at most 4 KB (average 2-3 KB). The PostgreSQL documentation mentions that LOs are good when the fields are big, but I didn't see what 'big' meant. I have upgraded to PostgreSQL 9.0 with Hibernate 3.6, and I was forced to change the annotation to @Type(type="org.hibernate.type.PrimitiveByteArrayBlobType"). This bug brought a potential compatibility issue to the fore, and I eventually found out that Large Objects are a pain to deal with compared to a normal field. So I am thinking of changing all of it to bytea. But I am concerned that bytea fields are transferred hex-encoded, so there is some overhead in encoding and decoding, and this could hurt performance. Are there good benchmarks of the performance of both? Has anybody made the switch and seen a difference?

  • Objective-C respondsToSelector question.

    - by Holli
    From what I have learned so far: in Objective-C you can send any message to any object. If the object implements the right method, it will be executed; otherwise nothing will happen. This is because before the message is sent, Objective-C performs respondsToSelector. I hope I am right so far. I wrote a little test program where an action is invoked every time a slider is moved. Also for testing, I declared the sender as NSButton, even though it is in fact an NSSlider. Then I asked the object whether it responds to setAlternateTitle (an NSButton will, an NSSlider will not). If I run the code and do respondsToSelector myself, it tells me the object will not respond to that selector. If I test something else, like intValue, it does respond. So my code seems fine so far:

        - (IBAction)sliderDidMove:(id)sender
        {
            NSButton *slider = sender;
            BOOL responds = [slider respondsToSelector:@selector(setAlternateTitle)];
            if (responds == YES) {
                NSLog(@"YES");
            } else {
                NSLog(@"NO");
            }
            [slider setAlternateTitle:@"Hello World"];
        }

    But when I actually send the setAlternateTitle message, the program crashes, and I am not exactly sure why. Shouldn't it do a respondsToSelector before sending the message?

  • Blocking on DBCP connection pool (open and close connnection). Is database connection pooling in OpenEJB pluggable?

    - by topchef
    We use OpenEJB on Tomcat (we used to run on JBoss, WebLogic, etc.). While running load tests, we experience significant performance problems handling JMS messages (queues). The problem was localized to blocking on the database connection pool when getting or returning a connection. The blocking prevents concurrent MDB instances (threads) from running, so performance suffers 10-fold and worse. The same code used to run on application servers (with their respective connection pool implementations) with no blocking at all. Example of a blocked thread:

        Name: JMS Resource Adapter-worker-23
        State: BLOCKED on org.apache.commons.pool.impl.GenericObjectPool@1ea6b4a
               owned by: JMS Resource Adapter-worker-19
        Total blocked: 18,426  Total waited: 0

        Stack trace:
        org.apache.commons.pool.impl.GenericObjectPool.returnObject(GenericObjectPool.java:916)
        org.apache.commons.dbcp.PoolableConnection.close(PoolableConnection.java:91)
        - locked org.apache.commons.dbcp.PoolableConnection@1bcba8
        org.apache.commons.dbcp.managed.ManagedConnection.close(ManagedConnection.java:147)
        com.xxxxx.persistence.DbHelper.closeConnection(DbHelper.java:290)
        ....

    A couple of questions. I am almost certain that some transactional attributes and properties contribute to this blocking, but the MDBs are defined as non-transactional (we use both annotations and ejb-jar.xml). Some EJBs do use container-managed transactions, though (and we can observe blocking there as well). Are there any DBCP configurations that may fix the blocking? Is the DBCP connection pool implementation replaceable in OpenEJB, and how easy (or difficult) is it to replace it with another library? Just in case, this is how we define the data source in OpenEJB (openejb.xml):

        <Resource id="MyDataSource" type="DataSource">
            JdbcDriver oracle.jdbc.driver.OracleDriver
            JdbcUrl ${oracle.jdbc}
            UserName ${oracle.user}
            Password ${oracle.password}
            JtaManaged true
            InitialSize 5
            MaxActive 30
            ValidationQuery SELECT 1 FROM DUAL
            TestOnBorrow true
        </Resource>

  • Uncompress OpenOffice files for better storage in version control

    - by Craig McQueen
    I've heard discussion about how OpenOffice (ODF) files are compressed zip archives of XML and other data, so making a tiny change to the file can totally change the stored bytes, which means delta compression doesn't work well in version control systems. I've done basic testing on an OpenOffice file: unzipping it and then re-zipping it with zero compression, using the Linux zip utility. OpenOffice will still happily open it. So I'm wondering if it's worth developing a small utility to run on ODF files each time, just before I commit to version control. Any thoughts on this idea? Any better alternatives? Secondly, what would be a good, robust way to implement this little utility? A Bash script that calls zip (probably Linux only)? Python? Any gotchas you can think of? Obviously I don't want to accidentally mangle a file, and there are several ways that could happen. Possible gotchas I can think of: insufficient disk space; some other permissions issue that prevents writing the file or temporary files; the ODF document is encrypted (probably should just leave these alone; the encryption probably also causes large file changes and thus prevents efficient delta compression anyway).
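
    For the implementation, Python's zipfile module keeps it portable and makes the "don't mangle the file" part easier, because the uncompressed copy can be built in a temporary file and only swapped in on success. A minimal sketch under those assumptions (no handling of encrypted documents):

        import os
        import shutil
        import tempfile
        import zipfile

        def store_uncompressed(path):
            """Rewrite a zip-based document (e.g. .odt) with all members stored,
            building the result in a temp file so the original is never half-written."""
            fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or '.')
            os.close(fd)
            try:
                with zipfile.ZipFile(path) as src, \
                     zipfile.ZipFile(tmp, 'w', zipfile.ZIP_STORED) as dst:
                    # infolist() preserves archive order, so the 'mimetype'
                    # entry stays first, as ODF consumers expect.
                    for item in src.infolist():
                        data = src.read(item.filename)
                        item.compress_type = zipfile.ZIP_STORED
                        dst.writestr(item, data)
                shutil.move(tmp, path)  # replace the original only on success
            except Exception:
                os.remove(tmp)
                raise

    Building in the same directory and moving into place also sidesteps the disk-space and cross-filesystem gotchas: if mkstemp or the rewrite fails, the original file is untouched.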

  • Dropping all user tables/sequences in Oracle

    - by Ambience
    As part of our build process and evolving database, I'm trying to create a script that will remove all of the tables and sequences for a user. I don't want to recreate the user, as that would require more permissions than we are allowed. My script creates a procedure to drop the tables/sequences, executes the procedure, and then drops the procedure. I'm executing the file from SQL*Plus. drop.sql:

        create or replace procedure drop_all_cdi_tables is
            cur integer;
        begin
            cur := dbms_sql.OPEN_CURSOR();
            for t in (select table_name from user_tables) loop
                execute immediate 'drop table ' || t.table_name || ' cascade constraints';
            end loop;
            dbms_sql.close_cursor(cur);

            cur := dbms_sql.OPEN_CURSOR();
            for t in (select sequence_name from user_sequences) loop
                execute immediate 'drop sequence ' || t.sequence_name;
            end loop;
            dbms_sql.close_cursor(cur);
        end;
        /
        execute drop_all_cdi_tables;
        /
        drop procedure drop_all_cdi_tables;
        /

    Unfortunately, dropping the procedure causes a problem. There seems to be a race condition, and the procedure is dropped before it executes. E.g.:

        SQL*Plus: Release 11.1.0.7.0 - Production on Tue Mar 30 18:45:42 2010
        Copyright (c) 1982, 2008, Oracle. All rights reserved.
        Connected to:
        Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
        With the Partitioning, OLAP, Data Mining and Real Application Testing options

        Procedure created.
        PL/SQL procedure successfully completed.
        Procedure created.
        Procedure dropped.
        drop procedure drop_all_user_tables
        *
        ERROR at line 1:
        ORA-04043: object DROP_ALL_USER_TABLES does not exist

        SQL> Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64
        With the Partitioning, OLAP, Data Mining and Real Application Testing options

    Any ideas on how to get this working?

  • moving audio over a local network using GStreamer

    - by James Turner
    I need to move realtime audio between two Linux machines, both running custom software (of mine) built on top of GStreamer. (The software already has other communication between the machines over a separate TCP-based protocol; I mention this in case having reliable out-of-band data makes a difference to the solution.) The audio input will be a microphone / line-in on the sending machine, with normal audio output as the sink on the destination; alsasrc and alsasink are the most likely elements, though for testing I have been using audiotestsrc instead of a real microphone. GStreamer offers a multitude of ways to move data around over networks: RTP, RTSP, GDP payloading, UDP and TCP servers, clients and sockets, and so on. There are also many examples on the web of streaming both audio and video, but none of them seem to work for me in practice: either the destination pipeline fails to negotiate caps, or I hear a single packet and then the pipeline stalls, or the destination pipeline bails out immediately with no data available. In all cases, I'm testing on the command line with just gst-launch. No compression of the audio data is required; raw audio, or trivial WAV, uLaw or aLaw encoding is fine. What's more important is low-ish latency.
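
    For what it's worth, here is one pair of pipelines in the spirit of what tends to work for raw L16 audio over RTP with GStreamer 0.10 and gst-plugins-good; untested against this exact setup, and the caps string on the receiver has to match the payloader's output exactly, which is the usual cause of the negotiation failures described above (host and port are placeholders):

        # sender (audiotestsrc stands in for alsasrc)
        gst-launch audiotestsrc ! audioconvert ! \
          "audio/x-raw-int,rate=44100,channels=2,width=16,depth=16,signed=true,endianness=4321" ! \
          rtpL16pay ! udpsink host=192.168.0.2 port=5000

        # receiver
        gst-launch udpsrc port=5000 \
          caps="application/x-rtp,media=(string)audio,clock-rate=(int)44100,encoding-name=(string)L16,channels=(int)2,payload=(int)96" ! \
          rtpL16depay ! audioconvert ! alsasink sync=false

    L16 on the wire is big-endian, hence the endianness=4321 on the sender side; sync=false on the sink trades clock synchronization for lower latency.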

  • Best architecture for a social media app

    - by Sky
    Hey guys, I'm working on a promising project to develop a new social media app for web and mobile. We are at the beginning, defining functionality. Nevertheless, I'm thinking ahead about architecture, so I'm asking: 1. What's the best platform for developing the core of this application, which will expose a REST API? 2. What's the best database that will scale and grow with the application? As far as I have researched, these were the answers I found most interesting. For the database: Cassandra (NoSQL), with amazing scalability, amazing write performance, and good read performance (to be improved in 0.6); I think I will choose that one, with ZooKeeper for transactions on top of Cassandra. I think those two technologies are really good for this purpose. What do you think? For the front end that will serve the REST API, I don't have a final candidate; here I'm weighing performance vs. scalability vs. speed of development/maintenance. As far as I have researched, Java or .NET offer the best balance of these requirements. Python, Perl and Rails have the fastest development/maintenance but fall short on the rest. C or C++ I don't even consider, because their development/maintenance speed is poor. So what do you think about it?

  • MongoDB architectural question

    - by pex
    I have to store 4 models. Let's say a Post that has and belongs to many Categories; a Category, in turn, has many Qualities. At the moment I'm of the opinion that Post and Category are Documents, and Quality becomes an EmbeddedDocument of Category. Now to the root problem: there are a lot of Votes on Qualities, each belonging to a Post. I thought about embedding the Votes in Post and giving each a quality_id. I am really expecting a lot of Votes, and there has to be a way to filter them (e.g. by username, user group, or date voted). I work with MongoMapper, and I think the missing find methods for EmbeddedDocuments could become a killer. On the other hand, I'm wondering about performance issues. What if I want to return a Post without all of its Votes, but only a few? Or, what if I define a separate Document for Votes and end up with tons of Vote documents? Wouldn't that become a performance killer?
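
    To make the trade-off concrete, here is a hypothetical sketch of the two shapes in mongo shell syntax (all field names and ids are made up):

        // Option A: votes embedded in the post. Loads with the post, but filtering
        // and paginating votes means array slicing rather than a plain query.
        db.posts.save({
            title: "...",
            category_ids: [cat1, cat2],
            votes: [ { quality_id: q1, user: "bob", value: 1, at: new Date() } ]
        })
        db.posts.find({ _id: p1 }, { votes: { $slice: 10 } })  // first 10 votes only
                                                               // ($slice needs MongoDB >= 1.5.1)

        // Option B: votes in their own collection. One extra query per page, but
        // filtering/sorting is a plain indexed find, and posts stay small.
        db.votes.save({ post_id: p1, quality_id: q1, user: "bob", value: 1, at: new Date() })
        db.votes.ensureIndex({ post_id: 1, at: -1 })
        db.votes.find({ post_id: p1, user: "bob" }).sort({ at: -1 }).limit(20)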

  • What's the fastest way to bulk insert a lot of data in SQL Server (C# client)

    - by Andrew
    I am hitting some performance bottlenecks with my C# client inserting bulk data into a SQL Server 2005 database, and I'm looking for ways to speed up the process. I am already using SqlClient.SqlBulkCopy (which is based on TDS) to speed up the data transfer across the wire, which helped a lot, but I'm still looking for more. I have a simple table that looks like this:

        CREATE TABLE [BulkData](
            [ContainerId] [int] NOT NULL,
            [BinId] [smallint] NOT NULL,
            [Sequence] [smallint] NOT NULL,
            [ItemId] [int] NOT NULL,
            [Left] [smallint] NOT NULL,
            [Top] [smallint] NOT NULL,
            [Right] [smallint] NOT NULL,
            [Bottom] [smallint] NOT NULL,
            CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED (
                [ContainerId] ASC,
                [BinId] ASC,
                [Sequence] ASC
            ))

    I'm inserting data in chunks that average about 300 rows, where ContainerId and BinId are constant in each chunk, the Sequence value is 0..n, and the values are pre-sorted by the primary key. The %Disk Time performance counter spends a lot of time at 100%, so it is clear that disk I/O is the main issue, but the speeds I'm getting are several orders of magnitude below a raw file copy. Does it help if I: drop the primary key while I am doing the inserting and recreate it later; do inserts into a temporary table with the same schema and periodically transfer them into the main table, to keep the size of the table where insertions are happening small; anything else? Based on the responses I have gotten, let me clarify a little bit. Portman: I'm using a clustered index because when the data is all imported I will need to access it sequentially in that order. I don't particularly need the index to be there while importing the data. Is there any advantage to having a nonclustered PK index while doing the inserts, as opposed to dropping the constraint entirely for the import? Chopeen: The data is being generated remotely on many other machines (my SQL Server can only handle about 10 currently, but I would love to be able to add more). It's not practical to run the entire process on the local machine, because it would then have to process 50 times as much input data to generate the output. Jason: I am not doing any concurrent queries against the table during the import process; I will try dropping the primary key and see if that helps. ~ Andrew
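
    For reference, a minimal sketch of the knobs on SqlBulkCopy itself that are worth trying before any schema changes (the connection string and the source DataTable are placeholders):

        using System.Data;
        using System.Data.SqlClient;

        class BulkLoader
        {
            static void Load(DataTable chunk, string connStr)
            {
                // TableLock takes a bulk-update lock instead of row locks, which
                // also enables minimally logged inserts when the database
                // recovery model allows it.
                using (var bulk = new SqlBulkCopy(connStr, SqlBulkCopyOptions.TableLock))
                {
                    bulk.DestinationTableName = "BulkData";
                    bulk.BatchSize = 5000;     // rows per batch; tune against your I/O
                    bulk.BulkCopyTimeout = 0;  // no timeout for large loads
                    bulk.WriteToServer(chunk);
                }
            }
        }

    With 300-row chunks, batching several chunks client-side before each WriteToServer call may matter as much as any of these options, since each small bulk-copy operation pays its own round-trip and commit cost.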

  • Problem calling web service from within JBOSS EJB Service

    - by Rob Goodwin
    I have a simple web service sitting on our internal network. I used soapUI to do a bit of testing, generated the service classes from the WSDL, and wrote some Java code to access the service. All went as expected: I was able to create the service proxy classes and make calls. Pretty simple stuff. The only speed bump was getting Java to trust the certificate from the machine providing the web service; that was not a technical problem, just my lack of experience with SSL-based web services. Now to my problem. I coded up a simple EJB service and deployed it into JBoss Application Server 4.3, and the code that previously worked now gives the following error:

        12:21:50,235 WARN [ServiceDelegateImpl] Cannot access wsdlURL: https://WS-Test/TestService/v2/TestService?wsdl

    I can access the WSDL file from a web browser running on the same machine as the application server, using the URL in the error message. I am at a loss as to where to go from here. I turned on the debug logs in JBoss and got nothing more than what I showed above. I have done some searching on the net and found the same error in other questions, but those questions had no answers. The web service classes were generated with JAX-WS 2.2 using the wsimport ant task and placed in a jar that is included in the EJB package. JBoss is deployed on RHEL 5.4, whereas the standalone testing was done on Windows XP.
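
    One thing worth ruling out is the runtime WSDL fetch itself: inside the container, the JVM may not trust the server certificate the way the standalone JVM did. A hedged sketch of pointing the generated service at a copy of the WSDL packaged in the jar, so nothing is fetched over SSL at runtime; the service class, namespace, and resource path are hypothetical stand-ins for what wsimport generated:

        import java.net.URL;
        import javax.xml.namespace.QName;

        public class TestClient {
            public static void main(String[] args) {
                // Load the WSDL that wsimport consumed, packaged inside the jar,
                // so the container never fetches it over SSL at runtime.
                URL wsdl = TestClient.class.getResource("/META-INF/wsdl/TestService.wsdl");
                QName serviceName = new QName("http://ws-test/TestService/v2", "TestService");
                TestService service = new TestService(wsdl, serviceName); // generated class
                // ... obtain the port from the service and call it as before ...
            }
        }

    The wsimport -wsdllocation option bakes a similar classpath reference into the generated code, which avoids having to pass the URL explicitly.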

  • jBPM 4.3 Task Notification tag being ignored

    - by user291780
    I have a task with a notification entry, but no emails are being generated and there are no entries in the logs. Emails from the mail node work fine. What am I doing wrong? Do I have to do anything special to my custom AssignmentHandler implementation for notifications?

        <mail g="216,156,80,40" name="Send email">
            <to addresses="[email protected]" />
            <subject>Testing the mail activity</subject>
            <text>This message was sent by the jBPM mail activity tester</text>
            <transition g="-78,-18" to="User Review" />
        </mail>
        <task g="210,250,92,52" name="User Review">
            <description>User Review Task Description</description>
            <assignment-handler class="com.kevinmoodley.BPMTaskAssignmentHandler">
                <description>Review AI Process Failure Assignment Handler</description>
            </assignment-handler>
            <notification>
                <to addresses="[email protected]" />
                <subject>Testing from task</subject>
                <text>This message was sent by the jBPM User Review task</text>
            </notification>
            <transition g="-42,-18" name="CANCEL" to="end1" />
            <transition g="-42,-18" name="RESTART" to="end2" />
        </task>

    Thanks, Kevin

  • Getting a "No default module defined for this application" exception while running controller unit t

    - by Doron
    I have an application with the default directory structure for an application without custom modules (see the structure at the end). I have written a ControllerTestCase.php file as instructed in many tutorials, and I've created the appropriate bootstrap file as well (again, see below). I've written some model tests, which run just fine, but when I started writing the index controller test file, with only one test in it containing only one line ("$this->dispatch('/');"), I get the following exception when running PHPUnit (while navigating with the browser to the same location works fine):

        1) IndexControllerTest::test_indexAction_noParams
        Zend_Controller_Exception: No default module defined for this application

    Why is this? What have I done wrong? Appendixes follow. Directory structure:

        -proj/
            -application/
                -controllers/
                    -IndexController.php
                    -ErrorController.php
                -config/
                -layouts/
                -models/
                -views/
                    -helpers/
                    -scripts/
                        -error/
                            -error.phtml
                        -index/
                            -index.phtml
                -Bootstrap.php
            -library/
            -tests/
                -application/
                    -controllers/
                        -IndexControllerTest.php
                    -models/
                    -bootstrap.php
                    -ControllerTestCase.php
                -library/
                -phpunit.xml
            -public/
                -index.php

    (Basically I have some more files in the models directory, but that's not relevant to this question.) application.ini file:

        [production]
        phpSettings.display_startup_errors = 0
        phpSettings.display_errors = 0
        includePaths.library = APPLICATION_PATH "/../library"
        bootstrap.path = APPLICATION_PATH "/Bootstrap.php"
        bootstrap.class = "Bootstrap"
        appnamespace = "My"
        resources.frontController.controllerDirectory = APPLICATION_PATH "/controllers"
        resources.frontController.params.displayExceptions = 0
        resources.layout.layoutPath = APPLICATION_PATH "/layouts/scripts/"
        resources.view[] =
        phpSettings.date.timezone = "America/Los_Angeles"

        [staging : production]

        [testing : production]
        phpSettings.display_startup_errors = 1
        phpSettings.display_errors = 1

        [development : production]
        phpSettings.display_startup_errors = 1
        phpSettings.display_errors = 1
        resources.frontController.params.displayExceptions = 1

    tests/application/bootstrap.php file:

        <?php
        error_reporting(E_ALL);

        // Define path to application directory
        defined('APPLICATION_PATH')
            || define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../../application'));

        // Define application environment
        defined('APPLICATION_ENV')
            || define('APPLICATION_ENV', (getenv('APPLICATION_ENV') ? getenv('APPLICATION_ENV') : 'testing'));

        // Ensure library/ is on include_path
        set_include_path(implode(PATH_SEPARATOR, array(
            realpath(APPLICATION_PATH . '/../library'), // this project's library
            get_include_path(),
        )));

        /** Zend_Application */
        require_once 'Zend/Application.php';

        require_once 'ControllerTestCase.php';
