Search Results

Search found 22893 results on 916 pages for 'client scripting'.

  • Importing Excel spreadsheet data into existing Access DB

    - by Keeb13r
    I've designed an Access 2003 DB with 3 tables: APPLICATIONS, SERVERS, and INSTALLATIONS. Records in the APPLICATIONS and SERVERS tables are uniquely identified by a synthetic primary key (in Access, an "AutoNumber"). The INSTALLATIONS table is essentially a mapping table between APPLICATIONS and SERVERS: it's a list of which applications are installed on which servers. A record in the INSTALLATIONS table is also identified by a synthetic primary key, and it consists of an APPLICATION_ID and SERVER_ID referencing records in their respective tables.

    I have an Excel 2003 spreadsheet I would like to import into this database, but it's proving difficult. The spreadsheet is made up of several tabs/worksheets, each one representing a server with its own listing of installed applications. I'm not sure how to proceed with an import - the "Get External Data -- Import" feature in Access has an "In an Existing Table" import option, but it's greyed out. I'm also unsure how to build the relationships between applications and servers when importing records into the INSTALLATIONS table. I had previously fooled around with adding some security to the Access DB file; I think I removed everything, but perhaps I didn't and that's causing the problem?

    Some sample data from the Excel spreadsheet:

        SERVER101
        * Adobe Reader 9
        * BMC Remedy User 7.0
        * HostExplorer 2008
        * Microsoft Office 2003
        * Microsoft Office 2007
        * Notepad++

        SERVER102
        * Adobe Reader 9
        * DameWare Mini Remote Control
        * Microsoft Office 2003
        * Microsoft .NET Framework 3.5 SP1
        * Oracle 9.2

        SERVER103
        * AWDView
        * EXTRA! Personal Client 32-bit
        * Microsoft Office 2003
        * Microsoft .NET Framework 3.5 SP1
        * Snagit 9.1
        * WinZip 12.1

    The Access DB design is very simple:

        APPLICATION
        * APPLICATION_ID (AutoNumber)
        * APPLICATION_NAME (varchar)

        SERVER
        * SERVER_ID (AutoNumber)
        * SERVER_NAME (varchar)

        INSTALLATION
        * INSTALLATION_ID (AutoNumber)
        * APPLICATION_ID (number)
        * SERVER_ID (number)

  • Why is using a common-lookup table to restrict the status of an entity wrong?

    - by FreshCode
    According to Five Simple Database Design Errors You Should Avoid by Anith Sen, using a common-lookup table to store the possible statuses for an entity is a common mistake. Why is this wrong? I disagree that it's wrong, citing the example of jobs at a repair service with many possible statuses that generally follow a natural flow, e.g.:

    * Booked In
    * Assigned to Technician
    * Diagnosing problem
    * Waiting for Client Confirmation
    * Repaired & Ready for Pickup
    * Repaired & Couriered
    * Irreparable & Ready for Pickup
    * Quote Rejected

    Arguably, some of these statuses could be normalised into tables like Couriered Items, Completed Jobs and Quotes (with Pending/Accepted/Rejected statuses), but that feels like unnecessary schema complication. Another common example is an order status table that restricts the status of an order, e.g.:

    * Pending
    * Completed
    * Shipped
    * Cancelled
    * Refunded

    The status titles and descriptions are in one place for editing, and they are easy to scaffold as a drop-down with a foreign key for dynamic data applications. This has worked well for me in the past. If the business rules dictate the creation of a new order status, I can just add it to the OrderStatus table without rebuilding my code.

  • Apache Solr: sum of data resulting from group by

    - by Terance Dias
    Hi, we have a requirement where we need to group our records by a particular field and take the sum of a corresponding numeric field, e.g.:

        select userid, sum(click_count) from user_action group by userid;

    We are trying to do this using Apache Solr and found two ways of doing it:

    1. Using the field collapsing feature (http://blog.jteam.nl/2009/10/20/result-grouping-field-collapsing-with-solr/), but we found two problems with this:
       1.1. It is not part of a release and is only available as a patch, so we are not sure we can use it in production.
       1.2. We do not get the sum back, only the individual counts, and we would need to sum them on the client side.
    2. Using the StatsComponent along with faceted search (http://wiki.apache.org/solr/StatsComponent). This meets our requirement, but it is not fast enough for very large data sets.

    I just wanted to know if anybody knows of any other way to achieve this. Appreciate any help. Thanks, Terance.
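    For concreteness, here is a minimal SolrJ sketch of the StatsComponent route described in option 2 (my own illustration, not the asker's code; the class names assume a SolrJ 1.4-era setup, and the userid/click_count fields are taken from the SQL above):

        import java.util.List;
        import java.util.Map;
        import org.apache.solr.client.solrj.SolrQuery;
        import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
        import org.apache.solr.client.solrj.response.FieldStatsInfo;
        import org.apache.solr.client.solrj.response.QueryResponse;

        public class ClickSums {
            public static void main(String[] args) throws Exception {
                CommonsHttpSolrServer server =
                        new CommonsHttpSolrServer("http://localhost:8983/solr");

                SolrQuery query = new SolrQuery("*:*");
                query.setRows(0);                        // stats only, no documents
                query.set("stats", true);
                query.set("stats.field", "click_count");
                query.set("stats.facet", "userid");      // one stats entry per userid

                QueryResponse rsp = server.query(query);
                FieldStatsInfo stats = rsp.getFieldStatsInfo().get("click_count");
                Map<String, List<FieldStatsInfo>> facets = stats.getFacets();
                for (FieldStatsInfo perUser : facets.get("userid")) {
                    System.out.println(perUser.getName() + " -> " + perUser.getSum());
                }
            }
        }

    As the question notes, this computes the sums server-side but can be slow when the facet has very many distinct values.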

  • Listen to Response on HTML Form embedded in GWT View?

    - by confile
    I have HTML like the following:

        <div>
          <form>
            <input type="text" />
            <button class="sendForm" value="Send form" />
          </form>
        </div>
        <script>
          // post the form with jQuery post
          // register a callback that handles the response
        </script>

    I use this type of form a lot with a JavaScript/jQuery overlay that displays the form; that could be handled, for example, with plugins like FancyBox. I also want to use this form embedded in a GWT view. Let's assume that the form cannot be created on the client side, because it has some server-based markup language inside to set up some model data.

    If I want to use this form in GWT, I have to do the following: tell GWT the form request URL and use a RequestBuilder to query the HTML content of the form. Then I can insert it into a div generated by GWT. So far so good.

    Problem: when the user hits the send button, the response is handled by the jQuery callback that sits in the script under the form. Is there a way to access this callback from within GWT? Is there a way to override the jQuery send action? Since the code is HTML and comes from the server, I cannot place UiBinder UiFields inside it to get access to these DOM elements. I need the response of the submitted form to be accessible to GWT. Is there a way to achieve this with JSNI?
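    One possible JSNI-based approach, sketched below (my assumption, not a confirmed solution: it requires that the server-rendered script can be changed, or wrapped, so its jQuery success callback forwards the response to a hook published on the window object; all names are hypothetical):

        public class FormResponseBridge {

            // Receives the form response once the page's jQuery callback fires.
            public void handleResponse(String data) {
                // hand the response to the rest of the GWT view here
            }

            // Publishes this object's handler as $wnd.onFormResponse, so the
            // form's script can call: window.onFormResponse(responseData);
            public native void exportHook() /*-{
                var that = this;
                $wnd.onFormResponse = $entry(function(data) {
                    that.@com.example.client.FormResponseBridge::handleResponse(Ljava/lang/String;)(data);
                });
            }-*/;
        }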

  • Need advanced mod_rewrite assistance

    - by user1647719
    This is driving me bananas. I have tried multiple fixes, and have even found a generator (generateit.net/mod-rewrite/) to create my .htaccess files, but I cannot get this to work. The .htaccess file is being read - if I put garbage characters in, I get a 500 server error. Also, mod_rewrite was working in a simpler format, but the client wants even more SEO, so here I am.

    My intention: take the URL brinkofdesign.com/portfolio/123/456/some_seo_text.html and send it to brinkofdesign.com/brink_portfolio.php?catid=123&locationid=456.

    The full contents of my .htaccess file are:

        Options +FollowSymlinks
        RewriteEngine on
        RewriteBase /
        RewriteRule ^portfolio([^/]*)/([^/]*)/([^/]*)\.html$ /portfolio_brink.php?catid=$1&locationid=$2&page=$3 [NC,L]
        RewriteRule ^(.*)blog(.+)([0-9+])(.+).html$ blog.php?blogid=$3 [NC,L]
        RewriteRule ^blog.xml$ blogfeed.php

    The blog and blogfeed parts work fine. If it matters, I do have an actual portfolio.php page - could this be breaking it? There is no actual /portfolio/ folder - this is just part of the SEO name. Also, the host is 1and1.com.

    Real-life tests: brinkofdesign.com/portfolio/1/1/birmingham_al_corporate_branding.html loads my actual portfolio.php page - which is supposed to be a landing page for the portfolio, as opposed to the brink_portfolio.php page, which is my detail record page. No variables are passed (echoing out the contents of the $_GET array gives an empty array). Typing in a non-SEO link like brinkofdesign.com/brink_portfolio.php?catid=1&locationid=1 does work correctly. Gurus, please help me!

  • Are bad data issues that common?

    - by Water Cooler v2
    I've worked for clients that had a large number of distinct, small to mid-sized projects, each interacting with the others via properly defined interfaces to share data, but not reading and writing to the same database. Each had its own separate database, its own cache, its own file servers/system with dedicated access, and so they never caused any problems. One of these clients is a mobile content vendor, so they're lucky in a way that they do not have to face the same problems that everyday business applications do. They can create all those separate compartments where their components happily live in isolation from the others.

    However, for many business applications this is not possible. I've worked with a few clients, one of whose applications I provide production support for, where there are "bad data issues" on an hourly basis. Yeah, it's that crazy. Some data change run on one of the instances (lower than production, of course) a couple of weeks ago will have caused some other user's data to get corrupted, and then a data script has to be written to fix the issue. I've seen this happen so much with this client that I have to ask. I've seen it happen at a moderate rate with other clients, but this one just seems to be out of order.

    If you're working with business applications that share a large amount of data by reading and writing to/from the same database, are "bad data issues" that common in your environment?

  • On Disk Substring index

    - by emeryc
    I have a file (a FASTA file, to be specific) that I would like to index, so that I can quickly locate any substring within the file and then find its location within the original FASTA file. This would be easy to do in many cases using a trie or suffix array; unfortunately the strings I need to index total 800+ MB, which means that doing it in memory is unacceptable, so I'm looking for a reasonable way to create this index on disk with minimal memory usage.

    (Edit for clarification.) I am only interested in the headers of proteins, so for the largest database I'm interested in, this is about 800 MB of text. I would like to be able to find an exact substring in O(n) time based on the input string. This must be usable on 32-bit machines, as it will be shipped to random people who are not expected to have 64-bit machines. I want to be able to index against any word break within a line, to the end of the line (though lines can be several MB long). Hopefully this clarifies what is needed and why the current solutions given are not illuminating. I should also add that this needs to be done from within Java, and must be done on client computers on various operating systems, so I can't use any OS-specific solution; it must be a programmatic solution.
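    One way to make that concrete (a sketch of my own, not the asker's design): pre-build a side file of big-endian longs, one per word break, sorted by the text each offset points at, then binary-search it with two RandomAccessFiles. Only a constant amount of data is ever in memory, so it suits a 32-bit JVM, and a lookup costs O(log n) seeks:

        import java.io.IOException;
        import java.io.RandomAccessFile;

        public class DiskSubstringIndex {

            // Returns the text offset of a match for 'query', or -1 if none.
            public static long find(RandomAccessFile text, RandomAccessFile index,
                                    byte[] query) throws IOException {
                long lo = 0, hi = index.length() / 8 - 1;
                while (lo <= hi) {
                    long mid = (lo + hi) >>> 1;
                    index.seek(mid * 8);                  // 8 bytes per stored offset
                    long off = index.readLong();
                    int cmp = compareAt(text, off, query);
                    if (cmp == 0) return off;
                    if (cmp < 0) lo = mid + 1;            // text at mid sorts before query
                    else hi = mid - 1;
                }
                return -1;
            }

            // Compares the bytes starting at 'off' in the text against the query.
            private static int compareAt(RandomAccessFile text, long off, byte[] query)
                    throws IOException {
                text.seek(off);
                byte[] buf = new byte[query.length];
                int n = text.read(buf);
                for (int i = 0; i < Math.min(n, query.length); i++) {
                    if (buf[i] != query[i]) return (buf[i] & 0xff) - (query[i] & 0xff);
                }
                return n < query.length ? -1 : 0;
            }
        }

    Building the index is an external sort of the word-break offsets, which likewise never needs the whole file in memory; the scheme only matches substrings anchored at an indexed word break, which is what the clarification asks for.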

  • How do I secure a .NET Web Service for use by an iPhone application?

    - by David A Gibson
    Hello, the title says it all: I have a web service written in .NET that provides data for an iPhone application. It also allows the application to make a "reservation." Currently it's all internal to the corporate network, but obviously when the iPhone application is published I will need to ensure the web service is available externally. How would I go about securing the web service? There are two aspects I'm looking into:

    1. Authentication for accessing the web service
    2. Protection for the data being transferred

    I'm not so bothered about the data being passed back and forth, as it will be viewable in the application anyway (which will be free). The key issue for me is preventing other users from accessing the web service and making reservations themselves. At the moment I am considering encrypting any strings in the XML data passed back and forth, so that only the client can effectively use the web service, sidestepping the need for authentication and providing protection for the data. This is the only model I have seen, but I think the overhead on the iPhone, and even for the web service, makes for a poor user experience. Any solutions at all would be most welcome. Thanks
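    The usual alternative to encrypting the payload is to leave it alone and sign each request with a shared secret (HMAC), so the service can reject calls that didn't come from the app. A minimal sketch of the signing side, shown in Java purely for illustration (the same idea ports to .NET and Objective-C; the header name and message format here are invented):

        import java.nio.charset.StandardCharsets;
        import java.util.Base64;
        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;

        public class RequestSigner {
            // Computes an HMAC-SHA256 over a timestamp plus the request body.
            // The client sends it (e.g. in an X-Signature header); the service
            // recomputes it with the same secret and compares.
            public static String sign(String secret, String timestamp, String body)
                    throws Exception {
                Mac mac = Mac.getInstance("HmacSHA256");
                mac.init(new SecretKeySpec(
                        secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
                byte[] sig = mac.doFinal(
                        (timestamp + "\n" + body).getBytes(StandardCharsets.UTF_8));
                return Base64.getEncoder().encodeToString(sig);
            }
        }

    The timestamp bounds replay attacks; note that a secret embedded in a free app can still be extracted by a determined user, so this raises the bar rather than eliminating the risk.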

  • Avoiding mass propagation of properties and events for exposure to ViewModels.

    - by firoso
    I have an MVVM application I am developing that is to the point where I'm ready to start putting together a user interface (my client code is largely functional). I'm now running into the issue of getting my application data to where I need it, so that it can be consumed by the view model and then bound to the view. Unfortunately, it seems that I've either got a few structural oversights, or I'm just going to have to face the reality that I need to propagate events and raise excessive numbers of notifications to tell view models that their properties have changed.

    Let me go into an example of my issue. I have a class "Unit" contained in a class "Test", contained in a class "Session", contained in a class "TestManager", which is contained in "TestDataModel", which is utilized by "TestViewModel", which is databound to by my "TestView" .... WHOA. Now, consider that Unit (the bottom of the hierarchy) has a property called "Results" that is updated periodically. I want to expose that to my view model and then databind it to my view. Trouble is, the only way I can really think to do this is to propagate "I've been updated!" events way up the chain and then request the new value. This seems like an awful way to do it.

    Alternatively, I could register a static event and raise it, and have the appropriate "unit view model" grab the event and request the update. This SEEMS better... but... static events? Is that a taboo idea? Also, having an expression like:

        TestDataModel.TestManager.Session.Test.Unit.Results[i]

    seems REALLY gross to have on a view model. I know this all reeks of a bad design issue, but I can't figure out what I did wrong. Should I be using more singleton/container-controlled-lifetime objects? Register object instances with static helper containers? Obviously these are hard questions to answer without being intimate with the existing structure, but if you've run into situations like this, what did you do to refactor? Should I just live with it, add mass events, and propagate them?

  • PayPal sandbox anomalies

    - by Christian
    When testing some donations on my local machine, I set various key=value pairs to do various things (return to a specific thank-you page, get POST data from PayPal rather than GET data, and others). I also built my code around the response from the PayPal sandbox. BUT, when my code goes to the production server and we switch on live payments and test with real accounts and money, a few strange things happen:

    1. We get a GET response from PayPal - the URL is filled with crap.
    2. We get no transaction details. This is the biggie: no name, no txn_id, no dates, nothing. We get a handful of keys etc. - it's not totally empty, and the payment has gone through - but nowhere near the verbosity of the sandbox.

    Curious about why this might be? It doesn't really make sense to have a sandbox (or dev environment) that is substantially different from the production environment. Or am I missing something?

    EDIT: Still no response to my question in the PayPal Developer Forums. I don't even get a donation amount back from PayPal. Is this a setting, maybe?

    EDIT #2: Two of you have suggested checking PDT and Auto-Return; the data analytics guy for the project suggested the same only two hours ago. I have asked the client to confirm this. I can't see a setting for it in the Sandbox, so can I assume it is enabled by default?

  • Incremental deploy from a shell script

    - by WishCow
    I have a project where I'm forced to use FTP as the means of deploying files to the live server. I'm developing on Linux, so I hacked together a bash script that makes a backup of the FTP server's contents, deletes all the files on the FTP server, and uploads all the fresh files from the Mercurial repository (taking care of user-uploaded files and folders, making post-deploy changes, etc.). It's working well, but the project is starting to get big enough to make the deployment process too long. I'd like to modify the script to look up which files have changed and deploy only the modified files (the backup is fine as it is for now).

    I'm using Mercurial as the VCS, so my idea is to somehow request the changed files between two revisions from it, iterate over the changed files, upload each modified file, and delete each removed file. I can use hg log -vr rev1:rev2 and, from the output, carve out the changed files with grep/sed/etc. Two problems:

    1. I have heard the horror stories that parsing the output of ls leads to insanity, so my guess is that the same applies here: if I try to parse the output of hg log, the variables will undergo word splitting and all kinds of transformations.
    2. hg log doesn't tell me whether a file was modified, added, or deleted. Differentiating between modified and deleted files is the least I would need.

    So, what would be the correct way to do this? I'm using yafc as an FTP client, in case it's needed, but I'm willing to switch.

  • Releasing Autoreleasepool crashes on iOS 4.0 (and only on 4.0)

    - by samsam
    Hi there. I'm wondering what could cause this. I have several methods in my code that I call using performSelectorInBackground. Within each of these methods I have an autorelease pool that is alloc/init-ed at the beginning and released at the end of the method. This works perfectly on iOS 3.1.3 / 3.2 / 4.2 / 4.2.1, but it crashes fatally on iOS 4.0 with an EXC_BAD_ACCESS exception that happens after calling [myPool release]. After I noticed this strange behaviour, I thought about rewriting portions of my code to make my app "less parallel" in case the client OS is 4.0. After I did that, the next place the app crashed was within the ReachabilityCallback method from Apple's Reachability "framework". Well, now I'm not quite sure what to do. The things I do within my threaded methods are pretty simple XML parsing (no Cocoa calls or stuff that would affect the UI). After each method finishes, it posts a notification that the coordinating thread listens to, and once all the parallelized methods have finished, the coordinating thread calls view controllers etc... I have absolutely no clue what could cause this weird behaviour, especially because Apple's code fails as well. Any help is greatly appreciated! Thanks, sam

  • nhibernate mapping: delete collection, insert new collection with old IDs

    - by npeBeg
    My issue looks similar to this one (link), but I have a one-to-many association:

        <set name="Fields" cascade="all-delete-orphan" lazy="false" inverse="true">
          <key column="[TEMPLATE_ID]"></key>
          <one-to-many class="MyNamespace.Field, MyLibrary"/>
        </set>

    (I also tried to use .) This mapping is for the Template object. Both it and the Field object have their ID generators set to identity. When I call session.Update for the Template object it works fine - well, almost: if a Field object has an Id number, an UPDATE SQL statement is issued; if the Id is 0, an INSERT is performed. But if I delete a Field object from the collection, it has no effect on the database. I found that if I also call session.Delete for this Field object everything is OK, but due to the client-server architecture I don't know what to delete. So I decided to delete all the collection's elements from the DB and call session.Update with a new collection, and I've hit an issue: NHibernate performs the UPDATE operation for the Field objects that have non-zero Ids, but they have already been removed from the DB!

    Maybe I should use some other Id generator or something. What is the best way to make NHibernate perform a "delete all"/"insert all" routine for the collection?

  • Template file evaluation in Python

    - by saminny
    I am trying to use Python to translate a set of templates into a set of configuration files, based on values taken from a main configuration file. However, I am having certain issues. Consider the following example of a template file, file1.cfg.template:

        %(CLIENT1)s %(HOST1)s %(PORT1)d C %(COMPID1)s
        %(CLIENT2)s %(HOST2)s %(PORT2)d C %(COMPID2)s

    This file contains an entry for each client. There are hundreds of config files like this, and I don't want to have logic for each type of config file. Python should do the replacements and generate the config files automatically, given a set of global values read from a main XML config file. However, in the above example, if CLIENT2 does not exist, how do I delete that line? I expect Python would generate the config file using something like this:

        open("file1.cfg.template").read() % myhash

    where myhash is a hash of configuration parameters from the main config file, which may not contain CLIENT2 at all. In that case I want the line to disappear from the file. Is it possible to insert some 'IF' block in the file and have Python evaluate it? Thanks for your help. Any suggestions most welcome.

  • Java "Pool" of longs or Oracle sequence with reusable values

    - by Anthony Accioly
    Several months ago I implemented a solution to choose unique values from a range between 1 and 65535 (16 bits). This range is used to generate unique Route Target suffixes, which for this customer's massive network (it's a huge ISP) are a very disputed resource, so any freed index needs to become immediately available to the end user. To tackle this requirement I used a BitSet: allocate an RT index with set and deallocate a suffix with clear. The method nextClearBit() can find the next available index. I handle synchronization/concurrency issues manually. This works pretty well for a small range: the entire index is small (around 10k), it is blazing fast, and it can easily be serialized into a Blob field.

    The problem is that some new devices can handle RTs of 32 bits (range 1 to 4294967296), which can't be managed with a BitSet (it would, by itself, consume around 600 MB, besides being limited to the int range). Even with this massive range available, the client still wants freed Route Targets to become available to the end user, mainly because the lowest ones (up to 65535), which are compatible with old routers, are being heavily disputed.

    Before I tell the customer that this is impossible and that he will have to make do with my reusable index for lower RTs (up to 65550) and use a database sequence for the other ones (which means that when the user frees a Route Target, it will not become available again), would anyone shed some light? Maybe some kind soul has already implemented a high-performance number pool for Java (6, if it matters), or I am missing a killer feature of the Oracle database (11gR2, if it matters)... Wishful thinking. Thank you very much in advance.
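    For the "high performance number pool" half of the question, here is a minimal sketch (my assumption about a workable design, not a known library): keep only the freed ids in a TreeSet and take fresh ids from a high-water counter. Memory then scales with the number of currently freed ids rather than with the 4-billion range, and the lowest freed suffix is always reissued first:

        import java.util.TreeSet;

        public class IdPool {
            private final long max;
            private long next = 1;                     // next never-used id
            private final TreeSet<Long> freed = new TreeSet<Long>();

            public IdPool(long max) { this.max = max; }

            public synchronized long allocate() {
                Long recycled = freed.pollFirst();     // prefer the lowest freed id
                if (recycled != null) return recycled;
                if (next > max) throw new IllegalStateException("pool exhausted");
                return next++;
            }

            public synchronized void free(long id) {
                // rejects ids never handed out, and double-frees
                if (id < 1 || id >= next || !freed.add(id))
                    throw new IllegalArgumentException("not an allocated id: " + id);
            }
        }

    The freed set can be persisted to the existing Blob field much as the BitSet was; whether churn keeps it small enough is the assumption to validate against real allocation patterns.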

  • How do I create an exception-wrapping fubumvc behaviour?

    - by Jon M
    How can I create a fubumvc behaviour that wraps actions with a particular return type, and if an exception occurs while executing the action, then the behaviour logs the exception and populates some fields on the return object? I have tried the following:

        public class JsonExceptionHandlingBehaviour : IActionBehavior
        {
            private static readonly Logger logger = LogManager.GetCurrentClassLogger();
            private readonly IActionBehavior _innerBehavior;
            private readonly IFubuRequest _request;

            public JsonExceptionHandlingBehaviour(IActionBehavior innerBehavior, IFubuRequest request)
            {
                _innerBehavior = innerBehavior;
                _request = request;
            }

            public void Invoke()
            {
                try
                {
                    _innerBehavior.Invoke();
                    var response = _request.Get<AjaxResponse>();
                    response.Success = true;
                }
                catch(Exception ex)
                {
                    logger.ErrorException("Error processing JSON request", ex);
                    var response = _request.Get<AjaxResponse>();
                    response.Success = false;
                    response.Exception = ex.ToString();
                }
            }

            public void InvokePartial()
            {
                _innerBehavior.InvokePartial();
            }
        }

    But, although I get the AjaxResponse object from the request, any changes I make don't get sent back to the client. Also, any exceptions thrown by the action don't make it as far as this; the request is terminated before execution gets to the catch block. What am I doing wrong?

    For completeness, the behaviour is wired up with the following in my WebRegistry:

        Policies
            .EnrichCallsWith<JsonExceptionHandlingBehaviour>(
                action => typeof(AjaxResponse).IsAssignableFrom(action.Method.ReturnType));

    And AjaxResponse looks like:

        public class AjaxResponse
        {
            public bool Success { get; set; }
            public object Data { get; set; }
            public string Exception { get; set; }
        }

  • How can I send multiple images in a server push Perl CGI program?

    - by Jujjuru
    I am a beginner in Perl, CGI, etc. I was experimenting with the server-push concept with a piece of Perl code. It is supposed to send a jpeg image to the client every three seconds. Unfortunately nothing seems to work. Can somebody help identify the problem? Here is the code:

        use strict;

        # turn off io buffering
        $|=1;

        print "Content-type: multipart/x-mixed-replace;";
        print "boundary=magicalboundarystring\n\n";
        print "--magicalboundarystring\n";

        # list the jpg images
        my(@file_list) = glob "*.jpg";
        my($file) = "";
        foreach $file (@file_list) {
            open FILE, ">", $file or die "Cannot open file $file: $!";
            print "Content-type: image/jpeg\n\n";
            while ( <FILE> ) {
                print "$_";
            }
            close FILE;
            print "\n--magicalboundarystring\n";
            sleep 3;
            next;
        }

    EDIT: added turn off i/o buffering, added "use strict", and "@file_list" and "$file" are made local

  • How to access a web service behind a NAT?

    - by jr
    We have a product we are deploying to some small businesses. It is basically a RESTful API over SSL using Tomcat, installed on a server at the small business and accessed via an iPhone or other portable device. So the devices connecting to the server could come from any number of IP addresses.

    The problem comes with installation. When we install this service, setting up port forwarding so the outside world can gain access to Tomcat always seems to become a problem: most of the time the owner doesn't know the router password, etc., etc. I am trying to research other ways we can accomplish this. I've come up with the following and would like to hear other thoughts on the topic:

    1. Set up an SSH tunnel from each client office to a central server. Basically, the remote devices would connect to that central server on a port, and that traffic would be tunneled back to Tomcat in the office. It seems kind of redundant to have SSH and then SSL, but there is really no other way to accomplish it, since end-to-end I need SSL (from device to office). I'm not sure of the performance implications here, but I know it would work. We would need to monitor the tunnel and bring it back up if it goes down, handle SSH key exchanges, etc. (A sketch of this option follows below.)
    2. Set up UPnP to try to configure the hole for me. This would likely work most of the time, but UPnP isn't guaranteed to be turned on. It may be a good next step.
    3. Come up with some type of NAT traversal scheme. I'm just not familiar with these and am uncertain of how exactly they work.

    We have access to a centralized server, which is required for authentication, if that makes anything easier. What else should I be looking at to get this accomplished?
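    For option 1, the tunnel could even live inside the product itself, so nothing extra has to be installed at the office. A sketch using the JSch library (an assumption on my part: host names, ports and credentials are placeholders, and a real deployment would add host-key checking, key-based auth, reconnection and monitoring):

        import com.jcraft.jsch.JSch;
        import com.jcraft.jsch.Session;

        public class ReverseTunnel {
            public static void main(String[] args) throws Exception {
                JSch jsch = new JSch();
                Session session = jsch.getSession("tunneluser", "central.example.com", 22);
                session.setPassword("secret");                    // sketch only
                session.setConfig("StrictHostKeyChecking", "no"); // sketch only
                session.connect();

                // Ask the central server to listen on 9443 and forward that
                // traffic back to the local Tomcat SSL port (8443). Devices
                // then connect to central.example.com:9443, end-to-end SSL.
                session.setPortForwardingR(9443, "localhost", 8443);
                // keep the process alive for as long as the tunnel is needed
            }
        }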

  • How do I send data between two computers over the internet

    - by Johan
    I have been struggling with this for the entire day now; I hope somebody can help me. My problem is fairly simple: I wish to transfer data (mostly simple commands) from one PC to another over the internet. I have been able to achieve this using sockets in Java when both computers are connected to my home router. I then connected both computers to the internet using two different mobile phones and attempted to transmit the data again. I used the mobile phones because this provides a direct route to the internet; if I use my router I have to set up port forwarding - at least, that is how I understand it.

    I think the problem lies in the way I set up the client socket. I used:

        Socket kkSocket = new Socket(ipAddress, 3333);

    where ipAddress is the IP address of the computer running the server. I got the IP address by right-clicking on the connection, then Status, then Support. Is that the correct IP address to use, or where can I obtain the address of the server? Also, is it possible to get a fixed name for my computer that I can use instead of entering the IP address, since this changes every time I connect to the internet using my mobile phone? Alternatively, are there better methods of solving my problem, such as using HTTP, and if so, where can I find more information about this? Thanks!
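    Part of the trouble is which address gets read off: the connection's Status dialog reports the address of the local network interface, which behind a phone's NAT is usually a private address (10.x.x.x, 172.16-31.x.x, 192.168.x.x) that is unreachable from the internet. A small sketch to inspect what the machine itself sees (standard java.net; discovering the public address additionally needs an outside service or dynamic DNS, which is not shown):

        import java.net.InetAddress;
        import java.net.NetworkInterface;
        import java.util.Collections;

        public class ShowAddresses {
            public static void main(String[] args) throws Exception {
                // Prints every address bound to a local interface - the same
                // information as the Status/Support dialog. If all of these
                // are private-range addresses, a remote client cannot reach
                // the server directly without forwarding or a relay.
                for (NetworkInterface nif :
                        Collections.list(NetworkInterface.getNetworkInterfaces())) {
                    for (InetAddress addr : Collections.list(nif.getInetAddresses())) {
                        System.out.println(nif.getName() + " -> " + addr.getHostAddress());
                    }
                }
            }
        }

    A dynamic DNS name is the usual answer to the "fixed name" part of the question, but it only names the machine; it does not by itself make the server reachable through NAT.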

  • Sending information between JavaScript and Web Services using AJAX

    - by COB-CSU-AM
    Alright, so I'm using Microsoft's web services and AJAX to get information from a SQL database for use with JavaScript on the client side, and I'm wondering what the best method is. Before I started working on the project, the web services were set up to return a C# List filled with some objects; those objects' variables (ints, strings, etc.) contain the data I want to use. Of course, JavaScript can't do much with this, to the best of my knowledge. I then modified the web service to return a 2D array, but JavaScript got confused; to the best of my knowledge, it can't handle 2D arrays returned from C#. I then tried to use a regular array, but found that the array's length property doesn't carry over to JavaScript, so I couldn't loop through all the items, because there was no way of knowing how many elements there were. The only other thing I can think of is returning a string with special characters to separate the data, but that seems way too convoluted. Any suggestions? Thanks in advance!

  • web service slowdown

    - by user238591
    Hi, I have a web service slowdown. My (web) service is written in gSOAP and managed C++. It's not IIS/Apache hosted, but it speaks XML. My client is in .NET. The service computation time is light (<0.1 s to prepare a reply). I expect the service to be smooth and fast, and to have good availability. I have about 100 clients, and a 1 s response time is mandatory. Clients make about 1 request per minute and check for the web service's presence with a TCP open-port test. So, to avoid possible congestion, I turned gSOAP's KeepAlive to false. Until there, everything ran fine: I barely saw connections in TCPView (Sysinternals).

    A new synchronisation program now calls the service in a loop. It's a higher load, but everything is processed in under 30 seconds. With Sysinternals TCPView, I see that about a thousand connections are in TIME_WAIT. They slow down the service, and it now takes seconds for the service to reply. Could it be that I need to reset the SoapHttpClientProtocol connection? Has anyone else seen TIME_WAIT ghosts with a web service called in a loop?

  • Need advice on unit testing using mock objects

    - by Andree
    Hi there, I just recently read about "mocking objects" for unit testing, and I'm currently having difficulties implementing this approach in my application. Please let me explain my problem.

    I have a User model class which depends on two data sources (a database and the Facebook web service). The controller class simply uses this User model as an interface to access data, and it doesn't care where the data came from. Currently I have never unit-tested the User model, because it depends on an external web service. But just a while ago I read about object mocking, and now I know that it is a common approach for unit-testing classes that depend on external resources (like in my case).

    Now I want to create a unit test for the User model, but I've encountered a design issue: in order for the User model to use a mocked Facebook SDK, I have to inject the mock into the User object (probably using a setter). Therefore I can't construct the Facebook SDK inside the User object; I have to construct it outside the User object and inject it. The real client of my User model is the application's controller, so I would have to construct the Facebook SDK inside the controller and inject it into the User object. Well, this is a problem, because I want my controller to be as clean as possible: I want it to stay ignorant of the application's data sources.

    I'm not good at explaining things systematically, so you're probably asleep before reading this last paragraph. But anyway, I want to ask if anyone here has ever encountered the same problem as mine. How do you solve it? Regards, Andree
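    The standard way out of that bind, sketched here in Java with entirely hypothetical names: make the model depend on an interface, and move construction of the real SDK into a single composition root (or DI container), so the controller never touches it. Tests then hand the model a stub:

        // The seam: User depends on an interface, not on the concrete SDK.
        interface FacebookGateway {
            String fetchProfile(String userId);
        }

        // Production implementation; would wrap the real Facebook SDK calls.
        class HttpFacebookGateway implements FacebookGateway {
            public String fetchProfile(String userId) {
                return "...result of a real SDK/HTTP call...";
            }
        }

        // Test double: no network, fully deterministic.
        class StubFacebookGateway implements FacebookGateway {
            public String fetchProfile(String userId) {
                return "stubbed profile for " + userId;
            }
        }

        class User {
            private final FacebookGateway facebook;

            User(FacebookGateway facebook) {   // constructor injection
                this.facebook = facebook;
            }

            String displayName(String userId) {
                return facebook.fetchProfile(userId);
            }
        }

        public class UserTest {
            public static void main(String[] args) {
                // The controller equivalent: it receives a ready-made User
                // and stays ignorant of where the data comes from.
                User user = new User(new StubFacebookGateway());
                System.out.println(user.displayName("42"));
            }
        }

    The controller stays clean because it asks the composition root or container for a User rather than assembling one itself; only that one wiring spot knows about the real SDK.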

  • jQuery Hidden Field in Table

    - by zSysop
    Hi all, I was wondering if anyone knew of a way to access a hidden field (by client id) within a table row using jQuery:

        $("#tblOne").find("tr").click(function() {
            var worker = $(this).find(":input").val();
        });

    I find that the above works for a row that has only one input, but I need some help figuring out a way to get the value by the input's name. Here's an example of a table row. How would I access the two fields by their ids?

        <table id="tblOne">
          <tr>
            <td>
              <asp:HiddenField id="hdnfld_Id" Text='<% Eval("ID") %>'></asp:HiddenField>
            </td>
            <td>
              <asp:HiddenField id="hdnfld_Id2" Text='<% Eval("ID2") %>'></asp:HiddenField>
            </td>
          </tr>
        </table>

  • Visual Studio C++ Solution in Maven2

    - by graham.reeds
    A new project is coming up that will require interaction between Java and C++. It's been decided that the project will be built via Maven 2. Unfortunately I don't know anything about Maven, and the Java guys don't know anything about C++. They have their build chain all set up, with various reports emitted for each part relating to Checkstyle, FindBugs, Cobertura(?), etc., and they want the same done for the C++ side.

    Currently we have four apps that need building: two services, a tray app, and a simple dialog-based application. I've been told I need to have a POM for each, configure each to output to a target directory, and have the tool chain produce the reports - the most particular being the code coverage, for which the client wants 100%. I have sourced the tools - Bullseye and QA-C++ - and requested eval copies, but I am dismayed to find there is very little information on C++ and Maven, and what little there is seems to be horror stories.

    Does anyone on SO have a good story about it (or a link to a blog post)? Is there a simple explanation anywhere for configuring a Visual Studio solution (preferably C++) to be Mavenized? I am expecting pain, but I am getting increasingly wary of this venture - unfortunately the project manager is on the Java side and seems hell-bent on Mavenizing it.
