Search Results

Search found 6814 results on 273 pages for 'payment processing'.

Page 231/273 | < Previous Page | 227 228 229 230 231 232 233 234 235 236 237 238  | Next Page >

  • JBoss Clustered Service that sends emails from txt file

    - by michael lucas
    I need a little push in the right direction. Here's my problem: I have to create an ultra-reliable service that sends email messages to clients whose addresses are stored in a txt file on an FTP server. A single txt file may contain an unlimited number of entries; most often it contains about 300,000. The service exposes an interface with just two simple methods:

        TaskHandle sendEmails(String ftpFilePath);
        ProcessStatus checkProcessStatus(TaskHandle taskHandle);

    The sendEmails() method returns a TaskHandle by which we can ask for the ProcessStatus. For such a service to be reliable, clustering is necessary: processing a single txt file might take a long time, and restarting one node in the cluster should have no impact on sending emails. We use JBoss AS 4.2.0, which comes with a nice HASingletonController that ensures one instance of the service is running at a given time. But once a fail-over happens, the second service should continue from where the first one stopped. How can I share state between nodes in a cluster in such a way that leaves no possibility of sending some emails twice?
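
    One way to picture the fail-over requirement is a durable cursor into the file, advanced only after each send. A minimal sketch, assuming a hypothetical ProgressStore abstraction over a table (or replicated cache) shared by all nodes — none of these names come from JBoss:

        // Sketch only: ProgressStore and sendOneEmail are assumed helpers.
        public interface ProgressStore {
            long lastCommittedLine(TaskHandle task);            // 0 for a new task
            void commitLine(TaskHandle task, long lineNumber);  // ideally same tx as the send
        }

        void processFile(TaskHandle task, BufferedReader in, ProgressStore store)
                throws IOException {
            long resumeAt = store.lastCommittedLine(task);  // where the dead node left off
            String address;
            long lineNo = 0;
            while ((address = in.readLine()) != null) {
                lineNo++;
                if (lineNo <= resumeAt) continue;           // already sent before fail-over
                sendOneEmail(address);                      // assumed SMTP helper
                store.commitLine(task, lineNo);             // committed lines are never re-sent
            }
        }

    The no-duplicates guarantee then reduces to making sendOneEmail and commitLine atomic; if they cannot share a transaction, fail-over degrades to at-least-once for the single in-flight address.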

    Read the article

  • Speeding up inner-joins and subqueries while restricting row size and table membership

    - by hiffy
    I'm developing an RSS feed reader that uses a Bayesian filter to filter out boring blog posts. The Stream table is meant to act as a FIFO buffer from which the webapp will consume 'entries'. I use it to store the temporary relationship between entries, users and Bayesian filter classifications. After a user marks an entry as read, it is added to the metadata table (so that a user isn't presented with material they have already read) and deleted from the stream table. Every three minutes, a background process repopulates the Stream table with new entries (i.e. whenever the daemon adds new entries after it checks the RSS feeds for updates).

    Problem: the query I came up with is hella slow. More importantly, the Stream table only needs to hold one hundred unread entries at a time; that would reduce duplication, make processing faster and give me some flexibility with how I display the entries. The query (takes about 9 seconds on 3600 items with no indexes):

        insert into stream (entry_id, user_id)
        select entries.id, subscriptions_users.user_id
        from entries
        inner join subscriptions_users
            on subscriptions_users.subscription_id = entries.subscription_id
        where subscriptions_users.user_id = 1
          and entries.id not in (select entry_id from metadata where metadata.user_id = 1)
          and entries.id not in (select entry_id from stream where user_id = 1);

    The query explained: insert into stream all of the entries from a user's subscription list (subscriptions_users) that the user has not read (i.e. do not exist in metadata) and which do not already exist in the stream.

    Attempted solution: adding limit 100 to the end speeds up the query considerably, but repeated executions keep adding a different set of 100 entries that do not already exist in the table (with each successive query taking longer and longer). This is close but not quite what I wanted to do. Does anyone have any advice (NoSQL?) or know a more efficient way of composing the query?
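
    For comparison, here is a hedged rewrite of the same query using NOT EXISTS instead of NOT IN — which many planners optimize into a proper anti-join — with the 100-row cap applied. The composite indexes named in the comment are assumptions, not existing schema:

        -- Sketch only: assumes indexes like metadata(user_id, entry_id) and
        -- stream(user_id, entry_id); without them the anti-joins still scan.
        insert into stream (entry_id, user_id)
        select e.id, su.user_id
        from entries e
        inner join subscriptions_users su
            on su.subscription_id = e.subscription_id
        where su.user_id = 1
          and not exists (select 1 from metadata m
                          where m.user_id = 1 and m.entry_id = e.id)
          and not exists (select 1 from stream s
                          where s.user_id = 1 and s.entry_id = e.id)
        order by e.id
        limit 100;

    The order by makes the LIMIT deterministic, so repeated runs pick up where the last batch ended instead of a different arbitrary hundred.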

    Read the article

  • How can I compare market data feed sources for quality and latency improvement?

    - by yves Baumes
    I am in the very first stages of implementing a tool to compare two market data feed sources, in order to prove to my boss the quality of newly developed sources (meaning there are no regressions and no missed or wrong updates) and to prove latency improvements. So the tool I need must be able to check differences between updates, as well as tell which source is the best (in terms of latency). Concretely, the reference source could be Reuters, while the other one is a feed handler we develop internally. People warned me that updates might not arrive in the same order, as Reuters' implementation could differ totally from ours. Therefore a simple algorithm based on the assumption that updates arrive in the same order is unlikely to work. My very first idea would be to use fingerprints to compare feed sources, as the Shazam application does to find the title of the tune you are submitting. Google told me it is based on FFT, and I was wondering whether signal processing theory could work well with market access applications. I wanted to know your own experience in that field: is it possible to develop a reasonably accurate algorithm to meet these needs? What was your own idea? What do you think about fingerprint-based comparison?
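
    Before reaching for FFTs, a content-based approach may suffice: hash each normalized update and match hashes across sources regardless of arrival order. A rough sketch in Python — the (symbol, bid, ask) update shape is a made-up assumption; real feeds would need a canonical serialization before hashing:

        import hashlib
        from collections import defaultdict

        def fingerprint(update):
            symbol, bid, ask = update       # assumed update shape
            return hashlib.md5(("%s|%s|%s" % (symbol, bid, ask)).encode()).hexdigest()

        def latency_deltas(reference, candidate):
            """reference/candidate: lists of (arrival_time, update) per source."""
            seen = defaultdict(list)
            for t, u in reference:
                seen[fingerprint(u)].append(t)
            deltas = []
            for t, u in candidate:
                times = seen.get(fingerprint(u))
                if times:                    # matched irrespective of ordering
                    deltas.append(t - times.pop(0))
            return deltas                    # negative values = candidate was faster

    Unmatched fingerprints on either side then directly enumerate missed or wrong updates.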

    Read the article

  • Setting the default value of a superglobal

    - by Prasoon Saurav
    I have been working on a timesheet management website. I have my home page as index.php:

        //index.php
        <?php
        session_start();
        if ($_SESSION['logged'] == 'set') {
            $x = $_SESSION['username'];
            echo '<div align="right">';
            echo 'Welcome ' . $x . '<br/>';
            echo '<a href="logout.php" class="links">&nbsp;<b><u>Logout</u></b></a>';
        } else if ($_SESSION['logged'] == 'unset') {
            echo '<form id="searchform" method="post" action="processing.php">
                  <div>
                  <div align="right">
                  Username&nbsp;<input type="text" name="username" id="s" size="15" value="" />
                  &nbsp;Password&nbsp;<input type="password" name="pass" id="s" size="15" value="" />
                  <input type="submit" name="submit" value="submit" />
                  </div>
                  <br />
                  </div>
                  </form>';
        }
        ?>

    The problem I am facing is that during the first run of this script I get an error:

        Notice: Undefined index: logged in C:\wamp\www\ps\index.php

    but after refreshing the page the error vanishes. logged is a variable which helps determine whether the user is logged in or not: when the user is logged in, $_SESSION['logged'] is 'set', otherwise 'unset'. I want the default value of $_SESSION['logged'] to be 'unset' prior to the execution of the script. How can I solve this problem?
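
    A minimal sketch of the usual fix: on a brand-new session the key does not exist yet, which is what raises the notice, so default it with isset() before the first read:

        <?php
        session_start();
        // Initialize the flag before any read of it.
        if (!isset($_SESSION['logged'])) {
            $_SESSION['logged'] = 'unset';
        }
        if ($_SESSION['logged'] == 'set') {
            // ... logged-in branch
        } else {
            // ... login-form branch
        }
        ?>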

    Read the article

  • Catching / blocking SIGINT during system call

    - by danben
    I've written a web crawler that I'd like to be able to stop via the keyboard. I don't want the program to die when I interrupt it; it needs to flush its data to disk first. I also don't want to catch KeyboardInterrupt, because the persistent data could be left in an inconsistent state. My current solution is to define a signal handler that catches SIGINT and sets a flag; each iteration of the main loop checks this flag before processing the next URL. However, I've found that if the system happens to be executing socket.recv() when I send the interrupt, I get this:

        ^C Interrupted; stopping...   // indicates my interrupt handler ran
        Traceback (most recent call last):
          File "crawler_test.py", line 154, in <module>
            main()
          ...
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/socket.py", line 397, in readline
            data = recv(1)
        socket.error: [Errno 4] Interrupted system call

    and the process exits completely. Why does this happen? Is there a way I can prevent the interrupt from affecting the system call?
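
    The EINTR behaviour is standard POSIX: when a signal handler runs, a blocking call like recv() is aborted and Python surfaces errno 4. One hedged way around it is to retry the call when errno is EINTR, along these lines (Python 2.6-style socket.error handling):

        import errno
        import socket

        def recv_retrying(sock, nbytes):
            # Retry recv when a signal interrupts it; the SIGINT handler
            # still runs and sets its flag, but the call then resumes.
            while True:
                try:
                    return sock.recv(nbytes)
                except socket.error, e:
                    if e.args[0] == errno.EINTR:
                        continue
                    raise

    Alternatively, signal.siginterrupt(signal.SIGINT, False) asks the OS to restart interrupted system calls automatically (SA_RESTART), though support varies by call and platform.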

    Read the article

  • Spring MVC 3 - How come a @ResponseBody method renders a JstlView?

    - by Ken Chen
    I have mapped one of the methods in a Controller to return a JSON object via @ResponseBody:

        @RequestMapping("/{module}/get/{docId}")
        public @ResponseBody Map<String, ? extends Object> get(@PathVariable String module,
                                                               @PathVariable String docId) {
            Criteria criteria = new Criteria("_id", docId);
            return genericDAO.getUniqueEntity(module, true, criteria);
        }

    However, it redirects me to the JstlView instead. Say, if {module} is product and {docId} is 2, then in the console I find:

        DispatcherServlet with name 'xxx' processing POST request for [/xxx/product/get/2]
        Rendering view [org.springframework.web.servlet.view.JstlView: name 'product/get/2'; URL [/WEB-INF/views/jsp/product/get/2.jsp]] in DispatcherServlet with name 'xxx'

    How can that happen? In the same Controller, I have another similar method that runs fine:

        @RequestMapping("/{module}/list")
        public @ResponseBody Map<String, ? extends Object> list(@PathVariable String module,
                @RequestParam MultiValueMap<String, String> params,
                @RequestParam(value = "page", required = false) Integer pageNumber,
                @RequestParam(value = "rows", required = false) Integer recordPerPage) {
            ...
            return genericDAO.list(module, criterias, orders, pageNumber, recordPerPage);
        }

    The above returns correctly, providing me the list of objects I require. Can anyone help me solve the mystery?
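
    For @ResponseBody to produce JSON rather than fall through to view resolution, an HttpMessageConverter that can write the return type has to be registered. With XML config that usually means the following line plus Jackson on the classpath — whether this matches the project's setup is an assumption:

        <!-- Registers the annotation-driven HandlerAdapter, including
             MappingJacksonHttpMessageConverter when Jackson is present.
             Without a suitable converter, the result can end up resolved
             as a view name instead. -->
        <mvc:annotation-driven />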

    Read the article

  • jQuery AJAX request (Rails 3) gets redirected and returns empty message body!

    - by elsurudo
    I'm trying to do a manual jQuery AJAX request the following way:

        $("#user_plan_id").change(function() {
            $("#plan_container").load('/plans/' + this.value);
        });

    I have the "rails.js" file included in my header, and a <%= csrf_meta_tag %>. I see from my log that the request IS getting to the server (although without the authenticity token... does rails.js even do this?), but the response is a 302 (Found) rather than 200, and no data actually gets rendered. Any ideas?

    Edit: I now see that the first request redirects, and the proper partial gets rendered on the redirect. However, the second response's body (on the client side) is still empty. I'm guessing jQuery uses the first response and doesn't have a listener set up for the redirect. How do I get around this? Also, another note: the page doing the requesting is an HTTPS page. Here is what my log says:

        Started GET "/plans/221168073" for 127.0.0.1 at Tue Jun 15 01:24:06 -0400 2010
          Processing by PlansController#show as HTML
          Parameters: {"id"=>"221168073"}
        DEPRECATION WARNING: Using #request_uri is deprecated. Use fullpath instead. (called from ensure_proper_protocol at /Users/ernestsurudo/Sites/vidfolia/vendor/plugins/ssl_requirement/lib/ssl_requirement.rb:57)
        Redirected to http://vidfolia.com/plans/221168073
        Completed 302 Found in 1ms

    Perhaps it has something to do with the deprecation warning?
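
    The log points at the ssl_requirement plugin: the XHR goes out over HTTPS, ensure_proper_protocol 302s it to HTTP, and the browser treats the cross-protocol hop as cross-origin, hence the empty body. A hedged sketch of one fix, assuming the plugin's standard ssl_allowed macro, is to let the action be served over either scheme so no redirect is issued:

        # Sketch: with ssl_requirement, ssl_allowed marks actions that may
        # be served over HTTPS as well as HTTP.
        class PlansController < ApplicationController
          ssl_allowed :show
        end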

    Read the article

  • Is my objective possible using WCF (and is it the right way to do things?)

    - by David
    I'm writing some software that modifies a Windows Server's configuration (things like MS-DNS, IIS, parts of the filesystem). My design has a server process that builds an in-memory object graph of the server configuration state and a client which requests this object graph. The server serializes the graph and sends it to the client (presumably using WCF); the client then makes changes to the graph and sends it back to the server, which receives it and proceeds to make the modifications.

    However, I've learned that object-graph serialisation in WCF isn't as simple as I first thought. My objects have a hierarchy, and many have parametrised constructors and immutable properties/fields. There are also numerous collections, arrays, and dictionaries. My understanding of WCF serialisation is that it requires use of either the XmlSerializer or the DataContractSerializer, but DCS places restrictions on the design of my object graph (immutable data seems right out; it also requires parameterless constructors). I understand XmlSerializer lets me use my own classes provided they implement ISerializable and have the de-serialization constructor. That is fine by me.

    I spoke to a friend of mine about this, and he advocates going for a Data Transfer Object-only route, where I'd have to maintain a separate DataContract object graph for the transport of data and re-implement my server objects on the client. Another friend of mine said that because my service only has two operations ("GetServerConfiguration" and "PutServerConfiguration") it might be worthwhile just skipping WCF entirely and implementing my own server that uses sockets.

    So my questions are:

    Has anyone faced a similar problem before, and if so, are there better approaches?
    Is it wise to send an entire object graph to the client for processing? Should I instead break it down so that the client requests a part of the object graph as it needs it and sends only bits that have changed (thus reducing concurrency-related risks)?
    If sending the object graph down is the right way, is WCF the right tool?
    And if WCF is right, what's the best way to get WCF to serialise my object graph?
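
    If the DTO route wins, the transport types can stay dumb while keeping the graph's shape. A minimal hedged sketch — all type names invented — of DataContract DTOs with reference preservation turned on, which DataContractSerializer supports for shared/cyclic references:

        // Sketch only. IsReference=true makes DCS preserve object identity
        // within the graph instead of duplicating shared nodes.
        [DataContract(IsReference = true)]
        public class ServerConfigDto
        {
            [DataMember] public List<DnsZoneDto> DnsZones { get; set; }
            [DataMember] public List<IisSiteDto> Sites { get; set; }
        }

        [DataContract(IsReference = true)]
        public class DnsZoneDto
        {
            [DataMember] public string Name { get; set; }
            [DataMember] public List<string> Records { get; set; }
        }

        [DataContract(IsReference = true)]
        public class IisSiteDto
        {
            [DataMember] public string Name { get; set; }
            [DataMember] public string PhysicalPath { get; set; }
        }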

    Read the article

  • Find words and score them based on position

    - by ryder1211212
    Hey guys, I have a text file that I have divided into 4 parts. I want to search each part for the words that appear in it and score each word. Example text:

        welcome to the national basketball finals, the basketball teams here today have come a long way. without much delay lets play basketball.

    Here I would want to return national = 1, as it appears in only one part, etc. I am working on determining text context using word position, in C#, and I'm not very good at text processing. Basically:

        if a word appears in all 4 sections it scores 4
        if a word appears in 3 sections it scores 3
        if a word appears in 2 sections it scores 2
        if a word appears in 1 section it scores 1

    Thanks in advance. So far I have this:

        var s = "welcome to the national basketball finals,the basketball teams here today have come a long way. without much delay lets play basketball. ";
        var numberOfParts = 4;
        var eachPartLength = s.Length / numberOfParts;
        var parts = new List<string>();
        // split into words, removing empty strings
        var words = Regex.Split(s, @"\W").Where(w => w.Length > 0);
        var wordsIndex = 0;
        for (int i = 0; i < numberOfParts; i++)
        {
            var sb = new StringBuilder();
            while (sb.Length < eachPartLength && wordsIndex < words.Count())
            {
                sb.AppendFormat("{0} ", words.ElementAt(wordsIndex));
                wordsIndex++;
            }
            // here you have the part
            Response.Write("[" + sb + "]");
            parts.Add(sb.ToString());
        }
        var allwords = parts.SelectMany(p => p.Split(' ').Distinct());
        var wordsInAllParts = allwords.Where(w => parts.All(p => p.Contains(w))).Distinct();
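
    Building on that, the score per word is just the number of parts whose distinct word set contains it, so one hedged sketch of the counting step (same splitting as above, with an empty-string guard) is:

        // Score = number of the four parts in which a word occurs at least once.
        var scores = parts
            .SelectMany(p => p.Split(' ')
                              .Where(w => w.Length > 0)
                              .Distinct())       // distinct per part, duplicates across parts
            .GroupBy(w => w)
            .ToDictionary(g => g.Key, g => g.Count());

        foreach (var pair in scores)
            Console.WriteLine("{0} = {1}", pair.Key, pair.Value);
        // A word present in every part prints with score 4; one present
        // in a single part (like "national" here) prints with score 1.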

    Read the article

  • Ruby on Rails form variables as array items

    - by Francois
    I have a simple form with a text input and a text area, but when I submit it the variables seem to be array items instead of just string values. The form:

        <%= form_tag(home_kontak_path, :remote => true) do %>
          <label>Jou epos adres</label>
          <%= text_field(:epos, "", :placeholder => "Jou epos adres", :id => "epos", :class => "input-block-level") %>
          <label>Boodskap hier</label>
          <%= text_area(:boodskap, "", :rows => "5", :placeholder => "Boodskap hier...", :id => "boodskap", :class => "input-block-level") %>
          <%= submit_tag "submit" %>
        <% end %>

    Console output:

        Started POST "/home/kontak" for 127.0.0.1 at 2012-11-23 11:53:03 +0200
        Processing by HomeController#kontak as JS
          Parameters: {"utf8"=>"?", "authenticity_token"=>"i+5UWaQeBu7LYGPFBNAbum+67VzyyC82JN2wMlLc/UU=", "epos"=>["text box value"], "boodskap"=>["text area value"], "commit"=>""}

    Instead of "epos"=>["text box value"], I would like it to return "epos"=>"text box value".
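
    text_field and text_area are model-backed helpers; with "" as the method argument they emit input names like epos[], which Rails parses as arrays. A sketch of the same form using the plain *_tag helpers, which take a bare field name (attribute options carried over from the question):

        <%= form_tag(home_kontak_path, :remote => true) do %>
          <label>Jou epos adres</label>
          <%= text_field_tag :epos, nil, :placeholder => "Jou epos adres", :class => "input-block-level" %>
          <label>Boodskap hier</label>
          <%= text_area_tag :boodskap, nil, :rows => "5", :placeholder => "Boodskap hier...", :class => "input-block-level" %>
          <%= submit_tag "submit" %>
        <% end %>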

    Read the article

  • mod_rewrite with location-based ACL in apache?

    - by Alexey
    Hi. There is a CGI script that provides an API for our customers. The call syntax is:

        script.cgi?module=<str>&func=<str>[&other-options]

    The task is to apply different authentication rules to different modules. Optionally, it would be great to have nice URLs. My config:

        <VirtualHost *:80>
            DocumentRoot /var/www/example
            ServerName example.com

            # Global policy is to deny all
            <Location />
                Order deny,allow
                Deny from all
            </Location>

            # doesn't work :(
            <Location /api/foo>
                Order deny,allow
                Deny from all
                Allow from 127.0.0.1
            </Location>

            RewriteEngine On
            # The only allowed type of requests:
            RewriteRule /api/(.+?)/(.+) /cgi-bin/api.cgi?module=$1&func=$2 [PT]
            # All others are forbidden:
            RewriteRule /(.*) - [F]
            RewriteLog /var/log/apache2/rewrite.log
            RewriteLogLevel 5

            ScriptAlias /cgi-bin /var/www/example
            <Directory /var/www/example>
                Options -Indexes
                AddHandler cgi-script .cgi
            </Directory>
        </VirtualHost>

    Well, I know the problem is the order in which these directives are processed: the <Location>s are applied after mod_rewrite has done its work, so they never see /api/foo. But I believe there is a way to change that. :) Using the standard Order deny,allow + Allow from <something> directives is preferable because they are commonly used in other places like this. Thank you for your attention. :)
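
    One hedged workaround is to do the per-module check inside mod_rewrite itself, before the rewrite to the CGI happens, so the original /api/foo path is what gets tested (sketch in Apache 2.2-era syntax):

        # Deny /api/foo to everything except 127.0.0.1, evaluated on the
        # original URL before it becomes /cgi-bin/api.cgi:
        RewriteCond %{REMOTE_ADDR} !=127.0.0.1
        RewriteRule ^/api/foo/ - [F]

        # Then the existing rules follow:
        RewriteRule /api/(.+?)/(.+) /cgi-bin/api.cgi?module=$1&func=$2 [PT]
        RewriteRule /(.*) - [F]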

    Read the article

  • Recommend me an architecture for this Facebook application

    - by andybaird
    Firstly, this question is subjective. There is not a right answer for this question and it really depends on what works for you. I'm hoping to use this thread as a breeding ground for ideas. I hope this is acceptable in this medium.

    I'm working on building a Facebook app that will be replacing an already popular app that gets ~50k hits a day. The original app is using a very typical LAMP setup with help from some Zend libraries for database layer abstraction. For the most part the app worked well, except that to solve a lot of issues I ended up fragmenting tables to speed things up. As a result, I couldn't do a lot of things with the app that I wanted to (namely any processing using aggregate data that needed to be returned quickly).

    So I'm starting to design plans for the next version of this application, and I have a whole bunch of new and cool features that I know would choke my current setup. I'm looking for recommendations of data storage technologies that scale well. The database does not necessarily need to be relational; simple key/value storage would suffice (although at present I know little to nothing about KV stores).

    What's your recommendation? How would you tackle this? I'd like to take a completely free approach to this -- although I am most familiar and comfortable with PHP, I want to leave all technical options open.

    Read the article

  • Pump Messages During Long Operations + C# (it is urgent)

    - by Newbie
    Hi. I have a web service that does a huge computation and takes more than a minute. I have generated the proxy file of the web service, and from my client end I am using the generated proxy DLL. My client-side code is:

        TimeSeries3D t = new TimeSeries3D();
        int portfolioId = 4387919;
        string[] str = new string[2];
        str[0] = "MKT_CAP";
        DateRange dr = new DateRange();
        dr.mStartDate = DateTime.Today;
        dr.mEndDate = DateTime.Today;
        Service1 sc = new Service1();
        t = sc.GetAttributesForPortfolio(portfolioId, true, str, dr);

    But since the server takes too long to compute, after 1 minute I receive this error message:

        The CLR has been unable to transition from COM context 0x33caf30 to COM context 0x33cb0a0 for 60 seconds. The thread that owns the destination context/apartment is most likely either doing a non pumping wait or processing a very long running operation without pumping Windows messages. This situation generally has a negative performance impact and may even lead to the application becoming non responsive or memory usage accumulating continually over time. To avoid this problem, all single threaded apartment (STA) threads should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump messages during long running operations.

    Kindly guide me on what to do. It is very urgent. Thanks.
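
    That message is the ContextSwitchDeadlock managed debugging assistant firing because the STA (UI) thread blocks for over a minute. A hedged sketch of the usual remedy: run the slow proxy call on a worker thread so the UI thread keeps pumping. Member names follow the question; the Timeout property exists on classic .asmx-generated proxies:

        Service1 sc = new Service1();
        sc.Timeout = 300000; // give the slow call five minutes instead of the default

        ThreadPool.QueueUserWorkItem(delegate
        {
            // The blocking call now happens off the STA thread.
            TimeSeries3D t = sc.GetAttributesForPortfolio(portfolioId, true, str, dr);
            // Marshal t back to the UI thread (e.g. Control.Invoke) before
            // touching any controls with it.
        });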

    Read the article

  • Evaluate or show HTML using JavaScript

    - by Anita
    Hello, I have this portion of code in JavaScript:

        var description = document.createElement('P');
        description.className = 'rssBoxDescription';
        description.innerHTML = itemTokens[2];
        div.appendChild(description);

    I see that description gets the HTML value but displays it as plain HTML source, not as rendered HTML, which it should be. How do I convert the HTML value in description so that, when appended to the div element, it is shown as rendered HTML? The value is something like:

        <table><tr><td style="padding: 0 5px"><a href="xttp://picasaweb.google.com/linktoimagepage"><img style="border:1px solid #5C7FB9" src="xttp://lh3.ggpht.com/imagepath" alt="image.jpg"/></a></td><td valign="top"><font color="#6B6B6B">Date: </font><font color="#333333">Jun 28, 2007 9:00 AM</font><br/><font color="#6B6B6B">Number of Comments on Photo:</font><font color="#333333">0</font><br/><p><a href="xttp://picasaweb.google.com/linktoimage"><font color="#3964C2">View Photo</font></a></p></td></tr></table>

    Please help. Anita
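
    If the markup shows up literally, the feed string is most likely HTML-escaped (&lt;table&gt;...), so assigning it to innerHTML renders the entities as text. A hedged sketch that decodes the entities once before insertion — this assumes escaping really is the cause:

        // Decode HTML entities by letting the browser parse them inside a
        // detached textarea, then assign the decoded markup.
        function decodeEntities(s) {
            var box = document.createElement('textarea');
            box.innerHTML = s;
            return box.value;
        }

        description.innerHTML = decodeEntities(itemTokens[2]);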

    Read the article

  • ASP.NET CacheDependency out of ThreadPool

    - by Stephen
    In an async HTTP handler, we add items to the ASP.NET cache with dependencies on some files. If the async method executes on a thread from the ThreadPool, all is fine:

        AsyncResult result = new AsyncResult(context, cb, extraData);
        ThreadPool.QueueUserWorkItem(new WaitCallback(DoProcessRequest), result);

    But as soon as we try to execute on a thread outside the ThreadPool:

        AsyncResult result = new AsyncResult(context, cb, extraData);
        Runner runner = new Runner(result);
        Thread thread = new Thread(new ThreadStart(runner.Run));

    ...where Runner.Run just invokes DoProcessRequest, the dependencies trigger right after the thread exits. I.e. the items are immediately removed from the cache, the removal reason being the dependencies. We want to use an out-of-pool thread because the processing might take a long time. So obviously something's missing when we create the thread. We might need to propagate the call context, or the HTTP context... Has anybody already encountered that issue?

    Note: off-the-shelf custom threadpools probably solve this. Writing our own threadpool is probably a bad idea (think NIH syndrome). Yet I'd like to understand this in detail.
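
    One experiment worth trying — a sketch, not a confirmed diagnosis — is to flow HttpContext.Current onto the manually created thread before calling into the processing code, in case the cache-insert path reads ambient context that hand-rolled threads don't inherit:

        AsyncResult result = new AsyncResult(context, cb, extraData);
        HttpContext captured = HttpContext.Current;   // capture on the request thread

        Thread thread = new Thread(delegate()
        {
            HttpContext.Current = captured;           // restore before any Cache.Insert work
            new Runner(result).Run();
        });
        thread.Start();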

    Read the article

  • Ruby Design Problem for SQL Bulk Inserter

    - by crunchyt
    This is a Ruby design problem: how can I make a reusable flat-file parser that can perform different data-scrubbing operations per call, return the emitted results from each scrubbing operation to the caller, and perform bulk SQL insertions?

    Now, before anyone gets narky/concerned: I have written this code already, in a very unDRY fashion, which is why I am asking the Ruby rockstars out there for some assistance. Basically, every time I want to perform this logic, I create two nested loops with custom processing in between, buffer each processed line to an array, and output to the DB as a bulk insert when the buffer size limit is reached. Although I have written lots of helpers, the main pattern is being copy-pasted every time. Not very DRY! Here is a Ruby/pseudocode example of what I am repeating:

        lines_from_file.each do |line|
          line.match(/some regex/).each do |sub_str|
            # Process substring into useful format
            # EG1: Simple gsub() call
            # EG2: Custom function call to do complex scrubbing
            #      and matching, emitting results to array
            # EG3: Loop to match opening/closing/nested brackets
            #      or other delimiters and emit results to array
          end

          # Add processed lines to a buffer as SQL insert statement
          @buffer << PREPARED_INSERT_STATEMENT

          # Flush buffer when "buffer size limit reached" or "end of file"
          if sql_buffer_full || last_line_reached
            @dbc.insert(SQL_INSERTS_FROM_BUFFER)
            @buffer = nil
          end
        end

    I am familiar with Proc/lambda functions. However, because I want to pass two separate procs to the one function, I am not sure how to proceed. I have some idea about how to solve this, but I would really like to see what real Rubyists suggest. Over to you. Thanks in advance :D
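
    Ruby methods take only one block, but any number of procs as plain arguments, so the repeated pattern can collapse into something like this sketch (method and helper names invented):

        # Reusable skeleton: `scrub` turns one line into zero or more rows,
        # `flush` writes a full buffer. Both arrive as lambdas.
        def bulk_parse(lines, batch_size, scrub, flush)
          buffer = []
          lines.each do |line|
            buffer.concat(Array(scrub.call(line)))
            if buffer.size >= batch_size
              flush.call(buffer)
              buffer.clear
            end
          end
          flush.call(buffer) unless buffer.empty?   # end-of-file flush
        end

        # Example call, assuming @dbc responds to #insert and build_bulk_sql
        # is your existing helper:
        bulk_parse(File.foreach('data.txt'), 500,
                   lambda { |line| line.scan(/some regex/) },
                   lambda { |rows| @dbc.insert(build_bulk_sql(rows)) })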

    Read the article

  • Inner join on 2 arrays?

    - by russ
    I'm trying to find a solution to the following (I'm thinking Linq): I need to pull down certain files, from a larger list of files on an FTP server, that have similar file names. For example, we send an order file to some company for processing, then they return a response file that we can download. So, I could send them the files "order_123.txt" and "order_456.txt". After a certain amount of time I need to go look for and download the responses to those files, which will be named "order_123.resp" and "order_456.resp". The kicker is that in some cases I can have multiple responses, in which case they would create "order_123-1.resp" and "order_123-2.resp"; also, the files don't get removed from the server.

    I know this can be accomplished by looping through the files I need responses to and then looping through all the files on the server until I find files that match, but I'm hoping I don't have to loop through the files on the server more than once. This example may help clarify: I've sent "order_222.txt" and "order_333.txt"; they processed them, and the FTP server contains:

        "order_111-1.resp"
        "order_001.resp"
        "order_222-1.resp"
        "order_222-2.resp"
        "order_333.resp"

    I need to download the 3rd, 4th, and 5th files. Thanks.
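
    A hedged Linq sketch of the matching step, one pass over the server listing (variable names invented; needs System.Text.RegularExpressions): derive each .resp file's base order name by trimming the extension and any -n suffix, then check it against the set of sent orders:

        // sentOrders: base names of what we uploaded; serverFiles: full listing.
        var sentOrders = new HashSet<string> { "order_222", "order_333" };
        string[] serverFiles = {
            "order_111-1.resp", "order_001.resp",
            "order_222-1.resp", "order_222-2.resp", "order_333.resp"
        };

        var toDownload =
            from f in serverFiles
            where f.EndsWith(".resp")
            // "order_222-1.resp" -> "order_222"; "order_333.resp" -> "order_333"
            let baseName = Regex.Replace(f, @"(-\d+)?\.resp$", "")
            where sentOrders.Contains(baseName)
            select f;
        // yields the 3rd, 4th and 5th entries from the example listing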

    Read the article

  • Does anyone know the algorithm by which Facebook detects images when adding a link?

    - by Edelcom
    When you add a link to your Facebook page, after some processing, Facebook presents you with next/prev buttons to choose an image linked to the URL you are inserting. Obviously, Facebook reads the HTML page and displays the images found at the URL you insert. Does anyone know what algorithm Facebook uses to decide which images to show?

    If I insert a link to http://www.staplijst.be/lachende-wandelaars-aalter-aktivia-003.asp, only 11 images are detected. The one I want, the one at the top right corner, is not included in the list. If I insert a link to http://www.staplijst.be/stichting-kennedymars-rijsbergen-zundert-nederland-knblo-nl-81996.asp, 19 images are displayed (including the one I want, at the top right corner of the text area). Both pages are built using ASP code but are functionally the same.

    I thought it had something to do with the image size, but I can't find any deciding factor there. I will investigate further, because if I know what Facebook is looking for, I can make sure that the correct images are included on the page (since they are dynamic pages built with classic ASP). But if anyone has any idea, help would be appreciated.
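
    One documented lever, rather than reverse-engineering the picker: Facebook's link scraper honours Open Graph tags, so declaring the preferred thumbnail explicitly usually puts it in the chooser. A sketch (the image URL is a placeholder):

        <!-- In the page <head>; og:image tells Facebook's scraper which
             thumbnail to offer for this URL. -->
        <meta property="og:image" content="http://www.staplijst.be/images/preferred-thumb.jpg" />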

    Read the article

  • Testing ASP.NET webservice using NUnit and transferring session state

    - by herbertyeung
    I have an NUnit test class that starts an ASP.NET web service (using Microsoft.VisualStudio.WebHost.Server) which runs on http://localhost:1070. The problem I am having is that I want to create session state within the NUnit test that is accessible by the ASP.NET web service on localhost:1070. I have done the following, and the session state can be created successfully inside the NUnit test, but it is lost when the web service is invoked:

        // Create a new HttpContext for NUnit testing, based on:
        // http://blogs.imeta.co.uk/jallderidge/archive/2008/10/19/456.aspx
        HttpContext.Current = new HttpContext(
            new HttpRequest("", "http://localhost:1070/", ""),
            new HttpResponse(new System.IO.StringWriter()));

        // Attach a session to the new HttpContext.Current for NUnit testing
        System.Web.SessionState.SessionStateUtility.AddHttpSessionStateToContext(
            HttpContext.Current,
            new HttpSessionStateContainer("", new SessionStateItemCollection(),
                new HttpStaticObjectsCollection(), 20000, true,
                HttpCookieMode.UseCookies, SessionStateMode.Off, false));

        HttpContext.Current.Session["UserName"] = "testUserName";
        testwebService.testMethod();

    I want to be able to read the Session["UserName"] created in the NUnit test from inside the ASP.NET web service:

        [WebMethod(EnableSession = true)]
        public int testMethod()
        {
            string user;
            if (Session["UserName"] != null)
            {
                user = (string)Session["UserName"];
                // Do some processing of the user
                return 1;
            }
            else
                return 0;
        }

    The web.config file has the following session state configuration; I would rather keep using InProc than switch to StateServer or SQLServer:

        <sessionState mode="InProc" stateConnectionString="tcpip=127.0.0.1:42424"
                      cookieless="false" timeout="20"/>
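
    Worth noting when debugging this: the HttpContext built in the test runs in the NUnit process, while the InProc session lives in the web host's process, so the service can never see it. A hedged sketch of an approach that stays InProc: let the service itself establish the session, and have the proxy carry the session cookie between calls (the Login WebMethod is hypothetical):

        // Generated .asmx proxy (name assumed). Without a CookieContainer the
        // proxy drops the ASP.NET_SessionId cookie and every call gets a
        // brand-new, empty session.
        var svc = new Service1();
        svc.CookieContainer = new System.Net.CookieContainer();

        svc.Login("testUserName");            // hypothetical method that sets Session["UserName"]
        Assert.AreEqual(1, svc.testMethod()); // same session, so the value is visible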

    Read the article

  • Should Java try blocks be scoped as tightly as possible?

    - by isme
    I've been told that there is some overhead in using the Java try-catch mechanism. So, while it is necessary to put methods that throw checked exceptions within a try block to handle the possible exception, it is supposedly good practice performance-wise to limit the size of the try block to contain only those operations that could throw exceptions.

    I'm not so sure that this is a sensible conclusion. Consider the two implementations below of a function that processes a specified text file. Even if it is true that the first one incurs some unnecessary overhead, I find it much easier to follow. It is less clear where exactly the exceptions come from just from looking at the statements, but the comments clearly show which statements are responsible. The second one is much longer and more complicated than the first. In particular, the nice line-reading idiom of the first has to be mangled to fit the readLine call into a try block.

    What is the best practice for handling exceptions in a function where multiple exceptions could be thrown in its definition?

    This one contains all the processing code within the try block:

        void processFile(File f) {
            try {
                // construction of FileReader can throw FileNotFoundException
                BufferedReader in = new BufferedReader(new FileReader(f));
                // call of readLine can throw IOException
                String line;
                while ((line = in.readLine()) != null) {
                    process(line);
                }
            } catch (FileNotFoundException ex) {
                handle(ex);
            } catch (IOException ex) {
                handle(ex);
            }
        }

    This one contains only the methods that throw exceptions within try blocks:

        void processFile(File f) {
            FileReader reader;
            try {
                reader = new FileReader(f);
            } catch (FileNotFoundException ex) {
                handle(ex);
                return;
            }

            BufferedReader in = new BufferedReader(reader);
            String line;
            while (true) {
                try {
                    line = in.readLine();
                } catch (IOException ex) {
                    handle(ex);
                    break;
                }
                if (line == null) {
                    break;
                }
                process(line);
            }
        }

    Read the article

  • Upload using a Python script takes very long on one laptop compared to another

    - by Engr Am
    I have Python 2.7 code which uses the STORBINARY function for uploading files to an FTP server and RETRBINARY for downloading from this server. However, the issue is that the upload takes a very long time on three laptops from different brands as compared to a Dell laptop. The strange part is that when I manually upload any file, it takes the same time on all the systems.

    The manual upload rate and the upload rate with the Python script are the same on the Dell laptop. However, on every other brand of laptop (I have tried IBM, Toshiba and Fujitsu-Siemens) the Python script has a much lower upload rate than the manual attempt. Also, on all these other laptops, the upload rate using the Python script is the same (1 Mbit/s) while the manual upload rate is approx. 8 Mbit/s. I have tried varying the file size for the upload, to no avail. TCP Optimizer improved the download rate on all the systems but had no effect on the upload rate. The download rate using this script on all the systems is fine, and the same as the manual download rate.

    I have checked the server and it has more than 90% free space. The network connection is the same for all the laptops, and I try uploading with only one laptop at a time. All the laptops have almost the same system configuration, the same operating system, and approximately the same free drive space. If anything, the Dell laptop has a little less processing power and RAM than two of the others, but I suppose this has no effect: I have checked many times how much CPU and network were used during these uploads and downloads, and I am sure that no virus or other program has been eating up my bandwidth. Here is the code ('ftp' and 'file_path' are inputs to the function):

        path, filename = os.path.split(file_path)
        filesize = os.path.getsize(file_path)
        deffilesize = (filesize / 1024) / 1024
        f = open(file_path, "rb")
        upstart = time.clock()
        print ftp.storbinary("STOR " + filename, f)
        upende = time.clock() - upstart
        outname = "Upload "
        f.close()
        return upende, deffilesize, outname
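
    One knob to rule out before blaming NIC drivers: ftplib's storbinary reads and sends the file in 8192-byte blocks by default, and a small block size can throttle throughput differently per machine. A sketch of the same call with a bigger buffer — the 256 KB figure is an arbitrary value to test with:

        # ftplib.FTP.storbinary(cmd, fp, blocksize=8192, ...) in Python 2.7;
        # a larger blocksize cuts per-block overhead on the send path.
        print ftp.storbinary("STOR " + filename, f, blocksize=262144)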

    Read the article

  • How would I down-sample a .wav file and then reconstruct it at the Nyquist rate, in MATLAB?

    - by Andrew
    This is all done in MATLAB 2010. My objective is to show the results of undersampling, the Nyquist rate, and oversampling. First I need to downsample the .wav file to get an incomplete/partial data stream that I can then reconstruct. The flow is:

        analog signal -> sampling analog filter -> ADC -> resample down
        -> resample up -> DAC -> reconstruction analog filter

    What needs to be achieved, with F the frequency in Hz (cycles/sec): the required sampling interval is Ts = 1/(2F). Example problem: if 1000 Hz is the highest frequency, then 1/(2*1000 Hz) = 1/2000 = 5x10^-4 sec/cycle, i.e. a sampling interval of 0.5 ms.

    This is my first signal processing project using MATLAB. What I have so far:

        % fs = sampling frequency (44100 Hz, the sampling frequency of a CD)
        [test, fs] = wavread('test.wav');   % loads the .wav file
        left = test(:, 1);

        % Plot of the .wav signal: time vs. strength
        time = (1/44100) * length(left);
        t = linspace(0, time, length(left));
        plot(t, left)
        xlabel('time (sec)');
        ylabel('relative signal strength')

        % This is where I would need to sample it at different frequencies
        % (below, at, and above the Nyquist frequency), I think.

        soundsc(left, fs)   % plays the resulting audio, same as the original
                            % (only at or above the Nyquist frequency, however)

    Can anyone tell me how to make it better, and how to do the sampling at various frequencies?
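
    A hedged sketch of the resampling step, assuming the Signal Processing Toolbox's resample function: pick a new rate below, at, or above twice the signal's highest frequency, convert down by a rational factor, then back up to compare against the original:

        fsNew = 8000;                   % try values below/at/above the Nyquist rate
        [p, q] = rat(fsNew / fs);       % rational approximation of the rate ratio
        down = resample(left, p, q);    % downsample (anti-alias filtered internally)
        up   = resample(down, q, p);    % reconstruct back at (approximately) fs

        soundsc(up, fs)                 % content above fsNew/2 is lost: audible aliasing demo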

    Read the article

  • Remove Empty Attributes from XML

    - by er4z0r
    Hi, I have a buggy XML file that contains empty attributes, and I have a parser that coughs on empty attributes. I have no control over the generation of the XML, nor over the parser. So what I want to do is a pre-processing step that simply removes all empty attributes. I have managed to find the empty attributes, but now I don't know how to remove them:

        XPathFactory xpf = XPathFactory.newInstance();
        XPath xpath = xpf.newXPath();
        XPathExpression expr = xpath.compile("//@*");

        Object result = expr.evaluate(d, XPathConstants.NODESET);
        if (result != null) {
            NodeList nodes = (NodeList) result;
            for (int node = 0; node < nodes.getLength(); node++) {
                Node n = nodes.item(node);
                if (isEmpty(n.getTextContent())) {
                    this.log.warn("Found empty attribute declaration " + n.toString());
                    NamedNodeMap parentAttrs = n.getParentNode().getAttributes();
                    parentAttrs.removeNamedItem(n.getNodeName());
                }
            }
        }

    This code gives me an NPE when accessing n.getParentNode().getAttributes(). But how can I remove the empty attribute from an element when I cannot access the element?
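
    The NPE is expected: in the DOM, attribute nodes are not children of their element, so getParentNode() returns null for them. The element is reached through Attr.getOwnerElement() instead; a sketch of the fixed loop body:

        // Attribute nodes hang off their element via ownerElement, not parentNode.
        org.w3c.dom.Attr attr = (org.w3c.dom.Attr) n;
        org.w3c.dom.Element owner = attr.getOwnerElement();
        owner.removeAttributeNode(attr);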

    Read the article

  • How can we apply client-side validation on a FileUpload control in ASP.NET to check whether the filename contains special characters?

    - by subodh
    I am working on the ASP.NET 3.5 platform. I have used a FileUpload control and an ASP button to upload a file. Whenever I try to upload a file whose name contains special characters (like file#&%.txt) it crashes and gives the message:

        Server Error in 'myapplication' Application.
        A potentially dangerous Request.Files value was detected from the client (filename="...\New Text &#.txt").
        Description: Request Validation has detected a potentially dangerous client input value, and processing of the request has been aborted. This value may indicate an attempt to compromise the security of your application, such as a cross-site scripting attack. You can disable request validation by setting validateRequest=false in the Page directive or in the configuration section. However, it is strongly recommended that your application explicitly check all inputs in this case.
        Exception Details: System.Web.HttpRequestValidationException: A potentially dangerous Request.Files value was detected from the client (filename="...\New Text &#.txt").
        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

    How can I prevent this crash using JavaScript on the client side?
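
    A hedged sketch of a client-side check: validate the chosen file name before the postback and cancel the submit when it carries the characters that trip request validation. The control IDs (FileUpload1, btnUpload) are assumptions:

        <script type="text/javascript">
        function checkFileName() {
            // FileUpload1 is the assumed server ID of the FileUpload control.
            var name = document.getElementById('<%= FileUpload1.ClientID %>').value;
            if (/[<>&#%]/.test(name)) {
                alert('The file name contains characters that are not allowed.');
                return false;   // cancels the postback
            }
            return true;
        }
        </script>

        <!-- Wire it to the upload button: -->
        <asp:Button ID="btnUpload" runat="server" Text="Upload"
                    OnClientClick="return checkFileName();" OnClick="btnUpload_Click" />

    Note that this only improves the user experience; request validation still runs server-side for anyone bypassing the script.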

    Read the article

  • NHibernate. DTO -> Domain

    - by Andrew Kalashnikov
    Hello guys. I've got an SOA that processes data for different clients (ASP.NET, Silverlight). The base of this design is the domain objects of my business model. For transporting them and showing them to clients I use DTOs, and for mapping domains to DTOs I use AutoMapper. Now I need to persist new entities coming from the clients, and I want to use my DTOs in this scenario too. So I've got some questions, as I'm not very familiar with this design:

    1) Is it good practice to build the DTO on the client and send it to the web service on the wire? Or maybe I should pass my domain objects?
    2) Is it possible to have several DTOs for one domain object (one for showing in a grid, and one for saving)? For saving I need to set all the non-primitive properties on the client.
    3) DTO -> Domain: if I've got an ID (an int), can I use AutoMapper to resolve it to an NHibernate proxy, or should I do that manually?

    Your experience and practice are very interesting to me. Thanks for the answers!
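
    On question 3, a common hedged sketch: hang a resolver on the mapping that calls ISession.Load, which returns a lazy proxy for the given ID without hitting the database. Type and property names here are invented:

        // Classic AutoMapper API; session is an NHibernate ISession.
        // ISession.Load<T>(id) yields an uninitialized proxy, so mapping the
        // DTO's CustomerId to a Customer reference costs no query until the
        // reference is actually dereferenced.
        Mapper.CreateMap<OrderDto, Order>()
              .ForMember(dest => dest.Customer,
                         opt => opt.MapFrom(src => session.Load<Customer>(src.CustomerId)));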

    Read the article
