Search Results

Search found 6612 results on 265 pages for 'seconds'.

Page 211/265

  • How to configure MAVEN?

    - by i2ijeya
    I am new to Maven and have gone through the configuration steps given on the Apache site, but I still can't get it configured. Could anyone walk me through simple steps to set up Maven on Windows? Thanks in advance.

    EDIT: Below is the error I get while trying to follow the steps given on the Apache site.

        C:\Documents and Settings\arselv>mvn install
        [INFO] Scanning for projects...
        [INFO] ------------------------------------------------------------------------
        [INFO] Building Maven Default Project
        [INFO] task-segment: [install]
        [INFO] ------------------------------------------------------------------------
        Downloading: http://repo1.maven.org/maven2/org/apache/maven/plugins/maven-resources-plugin/2.3/maven-resources-plugin-2.3.pom
        Downloading: http://repo1.maven.org/maven2/org/apache/maven/plugins/maven-resources-plugin/2.3/maven-resources-plugin-2.3.pom
        [INFO] ------------------------------------------------------------------------
        [ERROR] BUILD ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] Error building POM (may not be this project's POM).
        Project ID: org.apache.maven.plugins:maven-resources-plugin
        Reason: POM 'org.apache.maven.plugins:maven-resources-plugin' not found in repository:
          Unable to download the artifact from any repository
          org.apache.maven.plugins:maven-resources-plugin:pom:2.3
        from the specified remote repositories:
          central (http://repo1.maven.org/maven2)
        for project org.apache.maven.plugins:maven-resources-plugin
        [INFO] ------------------------------------------------------------------------
        [INFO] For more information, run Maven with the -e switch
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 42 seconds
        [INFO] Finished at: Fri Feb 05 13:10:06 IST 2010
        [INFO] Final Memory: 2M/5M
        [INFO] ------------------------------------------------------------------------

    Read the article

  • What type of bug causes a program to slowly use more processor power and then suddenly go to 100%?

    - by reinier
    Hi, I was hoping to get some good ideas as to what might be causing a really nasty bug. The program transmits data over a socket and also receives messages back. I could explain a lot more, but I don't think it would help here; I'm just looking for hypothetical problems that could cause the following behaviour:

        - the program runs
        - processor time slowly accumulates (to around 60%)
        - all of a sudden (could be after 30 seconds, could be after 60) the processor time shoots to 100%
        - the program halts completely

    In my syslog it always ends on one line with a memory allocation (something similar to myArray = new byte[16384]) in the same thread. Now here is the weird part: if I set a breakpoint anywhere, execution immediately stops on that line. So just the act of setting a breakpoint made the thread continue (it wasn't running, since I saw no more log output). I was thinking 'deadlock', but that would not cause 100% processor usage; if anything, the opposite. Also, setting a breakpoint would not cause a deadlock to end. Does anyone have a theoretical suggestion as to what kind of 'construct' might cause this effect (apart from 'bad programming')? ;^) Thanks.

    EDIT: I just noticed that by setting the send speed slower, the problem shows itself much later than expected. I would have expected it after roughly the same number of packets sent, but no: the number of packets sent before the problem appears is much higher this way.
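
    One construct that matches the final symptom (offered only as an illustration, not as a diagnosis of the asker's code): a receive loop that treats a zero-byte read as "try again" instead of "peer closed" never blocks again once the remote side goes away, so the thread spins at 100% CPU while producing no further log output. A minimal Python sketch of the anti-pattern:

        def receive_loop(sock):
            # sock is assumed to be a connected socket.socket in blocking mode
            buf = b""
            while True:
                chunk = sock.recv(16384)  # returns b"" once the peer has closed
                if not chunk:
                    continue  # BUG: should break; instead the loop spins at 100% CPU
                buf += chunk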

    Read the article

  • XMLHttpRequest leak in JavaScript, please help

    - by Raja
    Hi everyone, below is my JavaScript code snippet. It's not running as expected; please help me with this.

        <script type="text/javascript">
        function getCurrentLocation() {
            console.log("inside location");
            navigator.geolocation.getCurrentPosition(function(position) {
                insert_coord(new google.maps.LatLng(position.coords.latitude, position.coords.longitude));
            });
        }

        function insert_coord(loc) {
            var request = new XMLHttpRequest();
            request.open("POST", "start.php", true);
            request.onreadystatechange = function() { callback(request); };
            request.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
            request.send("lat=" + encodeURIComponent(loc.lat()) + "&lng=" + encodeURIComponent(loc.lng()));
            return request;
        }

        function callback(req) {
            console.log("inside callback");
            if (req.readyState == 4)
                if (req.status == 200) {
                    document.getElementById("scratch").innerHTML = "callback success";
                    window.setTimeout("getCurrentLocation()", 5000);
                }
        }

        getCurrentLocation(); // called on body load
        </script>

    What I'm trying to achieve is to send my current location to the PHP page every 5 seconds or so. I can see a few of the coordinates in my database, but after some time it gets weird: Firebug shows very strange logs, with simultaneous POSTs at irregular intervals. Is there a leak in the program? Please help.

    Read the article

  • java.sql.Exception ClosedConnection

    - by john
    I am getting the following error:

        java.sql.SQLException: Closed Connection
            at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
            at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:146)
            at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:208)
            at oracle.jdbc.driver.PhysicalConnection.getMetaData(PhysicalConnection.java:1508)
            at com.ibatis.sqlmap.engine.execution.SqlExecutor.moveToNextResultsSafely(SqlExecutor.java:348)
            at com.ibatis.sqlmap.engine.execution.SqlExecutor.handleMultipleResults(SqlExecutor.java:320)
            at com.ibatis.sqlmap.engine.execution.SqlExecutor.executeQueryProcedure(SqlExecutor.java:277)
            at com.ibatis.sqlmap.engine.mapping.statement.ProcedureStatement.sqlExecuteQuery(ProcedureStatement.java:34)
            at com.ibatis.sqlmap.engine.mapping.statement.GeneralStatement.executeQueryWithCallback(GeneralStatement.java:173)
            at com.ibatis.sqlmap.engine.mapping.statement.GeneralStatement.executeQueryForList(GeneralStatement.java:123)
            at com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryForList(SqlMapExecutorDelegate.java:614)
            at com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryForList(SqlMapExecutorDelegate.java:588)
            at com.ibatis.sqlmap.engine.impl.SqlMapSessionImpl.queryForList(SqlMapSessionImpl.java:118)
            at org.springframework.orm.ibatis.SqlMapClientTemplate$3.doInSqlMapClient(SqlMapClientTemplate.java:268)
            at org.springframework.orm.ibatis.SqlMapClientTemplate.execute(SqlMapClientTemplate.java:193)
            at org.springframework.orm.ibatis.SqlMapClientTemplate.executeWithListResult(SqlMapClientTemplate.java:219)
            at org.springframework.orm.ibatis.SqlMapClientTemplate.queryForList(SqlMapClientTemplate.java:266)
            at gov.hud.pih.eiv.web.authentication.AuthenticationUserDAO.isPihUserDAO(AuthenticationUserDAO.java:24)
            at gov.hud.pih.eiv.web.authorization.AuthorizationProxy.isAuthorized(AuthorizationProxy.java:125)
            at gov.hud.pih.eiv.web.authorization.AuthorizationFilter.doFilter(AuthorizationFilter.java:224)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:246)
            at ...

    I am really stumped and can't figure out what could be causing this error. I am not able to reproduce it on my machine, but in production it happens a lot. I am using iBatis throughout the application, so there is little chance of my code not closing connections. We do have stored procedures that run for a long time (around 15 seconds) before they return results. Does anyone have any ideas on what could be causing this? I don't think raising the number of connections on the application server will fix the issue, because if connections were running out we would see "Error on allocating connections" instead.

    Read the article

  • matplotlib.pyplot/pylab not updating figure while isinteractive(), using ipython -pylab

    - by NumberOverZero
    There are a lot of questions about matplotlib, pylab, pyplot, and ipython, so I'm sorry if you're sick of seeing this asked. I'll try to be as specific as I can, because I've been looking through people's questions and at the documentation for pyplot and pylab, and I still am not sure what I'm doing wrong. On with the code.

    Goal: plot a figure every 0.5 seconds, and update the figure as soon as the plot command is called. My attempt at coding this follows (running in ipython -pylab):

        import time
        ion()
        x = linspace(-1, 1, 51)
        plot(sin(x))
        for i in range(10):
            plot([sin(i+j) for j in x])  # see **
            print i
            time.sleep(1)
        print 'Done'

    It correctly plots each line, but not until it has exited the for loop. I have tried forcing a redraw by putting draw() where ** is, but that doesn't seem to work either. Ideally, I'd like it to simply add each line, instead of doing a full redraw. If redrawing is required, however, that's fine.

    Additional attempts at solving: just after ion(), I tried adding hold(True), to no avail; for kicks I tried show() at **. The closest answer I've found to what I'm trying to do was at http://stackoverflow.com/questions/2310851/plotting-lines-without-blocking-execution, but show() isn't doing anything.

    I apologize if this is a straightforward request and I'm looking past something obvious. For what it's worth, this came up while I was trying to convert MATLAB code from class to some Python for my own use. The original MATLAB (initializations removed) which I have been trying to convert follows:

        for i=1:time
            plot(u)
            hold on
            pause(.01)
            for j=2:n-1
                v(j)=u(j)-2*u(j-1)
            end
            v(1)= pi
            u=v
        end

    Any help, even if it's just "look up this_method", would be excellent, so I can at least narrow my efforts to figuring out how to use that method. If there's any more information that would be useful, let me know.
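
    For comparison, here is a minimal sketch of the incremental-draw loop, assuming a Matplotlib recent enough to have plt.pause(). The key point is that the GUI event loop must get a chance to repaint; plt.pause() yields to it, whereas time.sleep() never does, which is why nothing appears until the loop ends:

        import numpy as np
        import matplotlib.pyplot as plt

        plt.ion()                    # interactive mode: plotting calls do not block
        x = np.linspace(-1, 1, 51)
        fig, ax = plt.subplots()

        for i in range(10):
            ax.plot(np.sin(i + x))   # adds a new line; the old lines are kept as-is
            plt.pause(0.5)           # flushes GUI events so the window actually repaints
            print(i)
        print('Done')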

    Read the article

  • Fancybox "close" hangs/delays in IE7/8

    - by Kerri
    I'm having an issue with IE7/8 only, on a development site: when closing the Fancybox, there is a major delay, sometimes about ten seconds or so, sometimes much longer (like a minute). Some things that might be relevant:

        • Using Fancybox 1.3.1
        • Works perfectly in all other browsers (FF, Safari, Opera, Chrome)
        • The loaded iframe contains a Flash "virtual tour"
        • There are no errors reported in IE's Dev Toolbar or in DebugBar
        • The Fancybox renders perfectly in IE, and after it has closed, there is no problem loading it again. Closing seems to be the only issue. (That, and that it crashed the client's browser once, but I'm inclined to believe that had more to do with the content of the iframe than with Fancybox.)

    I changed the iframe link to something simpler (Google), and it closed with no problem, so it does seem to be a conflict with the content of the iframe.

    The call:

        $("a#tourbox").fancybox({
            "width": 750,
            "height": 575,
            "autoScale": false,
            "type": "iframe"
        });

    The HTML:

        <a id="tourbox" href="http://tour.circlepix.com/tour.htm?id=670335"><img src="sites/all/themes/removed/images/banquets-vtour.jpg" alt="Virtual Tour" /></a>

    Here's a link to the page with the problem: http://s93571.gridserver.com/banquets (click on the "Go to 360° Virtual Tour" image). Here's the page it loads: http://tour.circlepix.com/tour.htm?id=670335

    I'd greatly appreciate any clues or ideas about what might be the issue. I couldn't find any other discussions of a similar problem. Thanks for any insights!

    Read the article

  • Efficient Context-Free Grammar parser, preferably Python-friendly

    - by Max Shawabkeh
    I need to parse a small subset of English for one of my projects, described as a context-free grammar with (1-level) feature structures, and I need to do it efficiently. Right now I'm using NLTK's parser, which produces the right output but is very slow. For my grammar of ~450 fairly ambiguous non-lexicon rules and half a million lexical entries, parsing simple sentences can take anywhere from 2 to 30 seconds, depending, it seems, on the number of resulting trees. Lexical entries have little to no effect on performance. Another problem is that loading the (25 MB) grammar plus lexicon at startup can take up to a minute. From what I can find in the literature, the running time of the algorithms used to parse such a grammar (Earley or CKY) should be linear in the size of the grammar and cubic in the length of the input token list. My experience with NLTK indicates that ambiguity is what hurts performance most, not the absolute size of the grammar. So now I'm looking for a CFG parser to replace NLTK. I've been considering PLY, but I can't tell whether it supports feature structures in CFGs, which are required in my case, and the examples I've seen seem to do a lot of procedural parsing rather than just specifying a grammar. Can anybody show me an example of PLY both supporting feature structures and using a declarative grammar? I'm also fine with any other parser that can do what I need efficiently. A Python interface is preferable but not absolutely necessary.
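
    As far as I can tell, PLY is a lex/yacc-style LALR generator aimed at programming-language grammars and has no built-in notion of feature structures, so any unification would have to be hand-rolled in the rule actions. For reference, a toy feature CFG in NLTK (the parser already in use) looks like the sketch below; this assumes a recent NLTK where FeatureGrammar.fromstring and FeatureChartParser are available, and is only meant to show the declarative grammar-plus-features setup being asked about, not a faster parser:

        from nltk.grammar import FeatureGrammar
        from nltk.parse import FeatureChartParser

        # A toy grammar with one-level agreement features, in NLTK's .fcfg syntax.
        grammar = FeatureGrammar.fromstring(r"""
        % start S
        S -> NP[NUM=?n] VP[NUM=?n]
        NP[NUM=?n] -> Det[NUM=?n] N[NUM=?n]
        VP[NUM=?n] -> V[NUM=?n]
        Det[NUM=sg] -> 'this'
        Det[NUM=pl] -> 'these'
        N[NUM=sg] -> 'dog'
        N[NUM=pl] -> 'dogs'
        V[NUM=sg] -> 'runs'
        V[NUM=pl] -> 'run'
        """)

        parser = FeatureChartParser(grammar)
        for tree in parser.parse('these dogs run'.split()):
            print(tree)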

    Read the article

  • Mercurial Remote Subrepos

    - by Travis G
    I'm trying to set up my Mercurial repositories to work with multiple subrepos. I've basically followed these instructions to set up the client repo with Mercurial client v1.5, and I'm using HgWebDir to host my projects. The HgWebDir has the following structure:

        http://myserver/hg
            fooproj
            mylib

    where mylib is a collection of common template libraries to be consumed by fooproj. The structure of fooproj looks like this:

        fooproj
            doc/
            src/
            .hgignore
            .hgsub
            .hgsubstate

    And .hgsub looks like:

        src/mylib = http://myserver/hg/mylib

    This should work, per my interpretation of the documentation: the first 'nested' is the path in our working dir, and the second is a URL or path to pull from. So, let's say I pull down fooproj to my home folder with:

        ~$ hg clone http://myserver/hg/fooproj foo

    This pulls down the directory structure properly and adds the folder ~/foo/src/mylib, which is a local Mercurial repository. This is where the problems begin: the mylib folder is empty aside from the items in .hg. With two seconds of investigation, one can see that src/mylib/.hg/hgrc is:

        [paths]
        default = http://myserver/hg/fooproj/src/mylib

    which is completely wrong (attempting a pull of that repo gives a 404 because, well, that URL doesn't make any sense). Logically, the default value should be what I specified in .hgsub, or it would get the files from that repository in some other way. None of the Mercurial commands return error codes (aside from a pull from within src/mylib), so it clearly believes that it is behaving properly (and it just might be), although this does not seem logical at all. What am I doing wrong?

    Read the article

  • .NET socket timeout - blocking on Close method

    - by Mark
    I'm having trouble implementing a connect timeout using asynchronous socket calls. The idea is that I call BeginConnect on a Socket object, then use a timer to call Close() on the socket after a timeout period has elapsed. This works fine as long as the socket is created on the GUI thread - the Close method returns immediately, and the callback method is executed. However, if the socket is created on any other thread, the Close method blocks until the default IP timeout occurs. Code to reproduce:

        private Socket client;

        private void button1_Click(object sender, EventArgs e)
        {
            // Creating the socket on a threadpool thread causes Close to block.
            ThreadPool.QueueUserWorkItem((object state) =>
            {
                client = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
                IAsyncResult result = client.BeginConnect(IPAddress.Parse("144.1.1.1"), 23, new AsyncCallback(CallbackMethod), client);

                // Wait for 2 seconds before closing the socket.
                if (result.AsyncWaitHandle.WaitOne(2000))
                {
                    MessageBox.Show("Connected.");
                }
                else
                {
                    MessageBox.Show("Timed out. Closing socket...");
                    client.Close();
                    MessageBox.Show("Socket closed.");
                }
            });
        }

        private void CallbackMethod(IAsyncResult result)
        {
            MessageBox.Show("Callback started.");
            Socket client = result.AsyncState as Socket;
            try
            {
                client.EndConnect(result);
            }
            catch (ObjectDisposedException) { }
            MessageBox.Show("Callback finished.");
        }

    If you remove the QueueUserWorkItem line, creating the socket on the GUI thread, the socket closes instantly without blocking. Can anyone shed some light on what's going on? Thanks.

    Edit - System.Net trace output seems to be different depending on whether the connection is made on the GUI thread or a different thread:

        Trace from non-blocking close when using GUI thread
        Trace from blocking close when using non-GUI thread

    Read the article

  • No long-running conversations - IllegalArgumentException: Stack must not be null

    - by Markos Fragkakis
    Hi all, I have a very simple application with just two pages on WebLogic 10.3.2 (11g) and Seam 2.2.0.GA. Each page has a command button that does a redirect-after-post to the other. This works well, and I see the URL of the page I am currently viewing in the address bar. BUT, even though I have no long-running conversations defined, after a random number of clicks, and - I think - after a random number of seconds (~10s - 60s), I get the lovely exception at the end of this post. Now, if I have understood how temporary conversations work when redirecting, this is what happens:

        - When I first open my application, the URL is http://localhost:7001/myapp
        - When I click the button in pageA.xhtml, I end up at "pageB.xhtml?cid=26". This is normal, because Seam extends the temporary conversation of the first request to last until the renderResponse phase of the redirect, and uses the cid (conversation id) of the extended temporary conversation to find any propagated parameters.
        - When I click the button in pageB.xhtml, I end up at pageA.xhtml?cid=26. The same cid was given to the new extended temporary conversation. This is normal, because the conversation ended at the end of the previous redirect-after-post, and now the number 26 is free to use as a cid.

    Is this all correct? If yes, why does this happen: if I re-type the application's home address (showing pageA) and click again, I end up at pageB.xhtml?cid=29, which is a different number than 26. But 26 ended after the previous renderResponse phase, before I re-typed the URL. Why is it not used instead of 29?

    So, to sum up, two questions:

        - Why do I get the exception, even though I have not started any long-running conversations?
        - What exactly happens with the cid? On what basis does it change?

    Cheers,

    Read the article

  • Practical value for concurrent-request-timeout parameter

    - by Andrei
    In the Seam Reference Guide, one can find this paragraph: We can set a sensible default for the concurrent request timeout (in ms) in components.xml: <core:manager concurrent-request-timeout="500" /> However, we found that 500 ms is not nearly enough time for most of the cases we had to deal with, especially with the severe restriction seam places on conversation access. In our application we have a combination of page scoped ajax requests (triggered by various use actions), some global scoped polling notification logic (part of the header, so included in every page) and regular links that invoke actions and/or navigate to other pages. Therefore, we get the dreaded concurrent access to conversation exception way too often, even without any significant load on the site. After researching the options for quite a bit, we ended up bumping this value to several seconds (we're debating whether to bump it up to 10s), as none of the recommended solutions seemed able to solve our issue completely (even forcing a global queue for all the ajax requests would still leave us exposed to a user deciding to click a link right when one of our polling calls was in progress). And we'd much rather have the users wait for a second or two instead of getting an error page just because they clicked a link at the wrong moment. And now to the question: is there something obvious we're missing (like a way to allow concurrent access to conversations and taking care of the needed locking ourselves, for instance :)? How do people solve this problem (ajax requests mixed with user driven interaction) in seam? Disabling all the links on the page while ajax requests are in progress (as suggested by one blog page) is really not a viable option. Any other suggestions? TIA, Andrei

    Read the article

  • How to know about MySQL 'refused connections'

    - by celalo
    Hello, I am using MONyog to monitor my two MySQL servers, and I get alert emails from MONyog when something goes wrong. There is an error I could not track down. The alert says:

        Connection History: Percentage of refused connections - 66.67%

    The percentage itself is not important; the point is that there are refused connections at all, and I get this email every half an hour, so it is more or less a constant situation. This must be my mistake, because I just set up these servers and there is no chance somebody else could be interfering with them. MONyog advises me:

        Try to isolate users/applications that are using an incorrect password or trying to connect from unauthorized hosts. A client will be disallowed to connect if it takes more than connect_timeout seconds to connect. Set the value of the log_warnings system variable to 2. This will force the MySQL server to log further information about the error.

    I added log_warnings=2 to my.cnf and enabled logging like this:

        [mysqld_safe]
        ...
        log_warnings=2
        log-error = /var/log/mysql/error.log
        ...

        [mysqld_safe]
        ...
        log-error=/var/log/mysqld.log
        ...

    I cannot see any warnings in /var/log/mysql/error.log. I can see some warnings in /var/log/mysqld.log, but they are about something else. In sum, my question is: how can I detect refused connections? Please let me know if any more info is required. Thanks in advance.
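
    If the goal is simply to watch the counter directly rather than rely on the alert, MONyog's figure appears to be derived from the server's Aborted_connects status variable (connection attempts that failed, e.g. wrong password, unauthorized host, or connect_timeout exceeded). A hedged sketch that reads it, assuming mysql-connector-python and a hypothetical monitoring account:

        import mysql.connector  # assumption: mysql-connector-python is installed

        def status_value(cursor, name):
            cursor.execute("SHOW GLOBAL STATUS LIKE %s", (name,))
            return int(cursor.fetchone()[1])

        conn = mysql.connector.connect(host="localhost", user="monitor", password="secret")
        cur = conn.cursor()
        aborted = status_value(cur, "Aborted_connects")  # failed/refused connection attempts
        total = status_value(cur, "Connections")         # all connection attempts
        print("aborted connects: %d of %d (%.2f%%)" % (aborted, total, 100.0 * aborted / total))
        conn.close()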

    Read the article

  • FileInputStream throws NullPointerException.

    - by Mohamed
    I am getting a NullPointerException and don't know what is actually causing it. I read in the Java docs that FileInputStream only throws SecurityException, so I don't understand why this exception pops up. Here is my code snippet:

        private Properties prop = new Properties();
        private String settings_file_name = "settings.properties";
        private String settings_dir = "\\.autograder\\";

        public Properties get_settings() {
            String path = this.get_settings_directory();
            System.out.println(path + this.settings_dir + this.settings_file_name);
            if (this.settings_exist(path)) {
                try {
                    FileInputStream in = new FileInputStream(path + this.settings_dir + this.settings_file_name);
                    this.prop.load(in);
                    in.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            } else {
                this.create_settings_file(path);
                try {
                    this.prop.load(new FileInputStream(path + this.settings_dir + this.settings_file_name));
                } catch (IOException ex) {
                    //ex.printStackTrace();
                }
            }
            return this.prop;
        }

        private String get_settings_directory() {
            String user_home = System.getProperty("user.home");
            if (user_home == null) {
                throw new IllegalStateException("user.home==null");
            }
            return user_home;
        }

    And here is my stack trace:

        C:\Users\mohamed\.autograder\settings.properties
        Exception in thread "main" java.lang.NullPointerException
            at autograder.Settings.get_settings(Settings.java:41)
            at autograder.Application.start(Application.java:20)
            at autograder.Main.main(Main.java:19)
        Java Result: 1
        BUILD SUCCESSFUL (total time: 0 seconds)

    Line 41 is: this.prop.load(in);

    Read the article

  • Which key:value store to use with Python?

    - by Kurt
    So I'm looking at various key:value stores (where the value is either strictly a single value or possibly an object) for use with Python, and have found a few promising ones. I have no specific requirements as of yet, because I am in the evaluation phase. I'm looking for what's good, what's bad, what corner cases these things handle well or don't, etc. I'm sure some of you have already tried them out, so I'd love to hear your findings/problems/etc. with the various key:value stores and Python. I'm looking primarily at:

        memcached - http://www.danga.com/memcached/
            Python clients: http://pypi.python.org/pypi/python-memcached/1.40 , http://www.tummy.com/Community/software/python-memcached/
        CouchDB - http://couchdb.apache.org/
            Python clients: http://code.google.com/p/couchdb-python/
        Tokyo Tyrant - http://1978th.net/tokyotyrant/
            Python clients: http://code.google.com/p/pytyrant/
        Lightcloud - http://opensource.plurk.com/LightCloud/
            Based on Tokyo Tyrant, written in Python
        Redis - http://code.google.com/p/redis/
            Python clients: http://pypi.python.org/pypi/txredis/0.1.1
        MemcacheDB - http://memcachedb.org/

    So I started benchmarking (simply inserting keys and reading them back), using a simple counter to generate numeric keys and a value of "A short string of text". memcached on CentOS 5.3 (python-2.4.3-24.el5_3.6, libevent 1.4.12-stable, memcached 1.4.2 with default settings, 1 GB of memory) managed about 14,000 inserts per second and 16,000 reads per second, with no real optimization - nice. MemcacheDB claims on the order of 17,000 to 23,000 inserts per second and 44,000 to 64,000 reads per second. I'm also wondering how the others stack up speed-wise.
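
    For anyone who wants to rerun the kind of loop described above against their own setup, here is a minimal sketch (assuming python-memcached is installed and a memcached instance is listening on the default local port 11211):

        import time
        import memcache  # python-memcached

        mc = memcache.Client(['127.0.0.1:11211'])
        N = 100000
        value = 'A short string of text'

        start = time.time()
        for i in range(N):
            mc.set(str(i), value)
        elapsed = time.time() - start
        print('%d sets in %.1fs (%.0f/sec)' % (N, elapsed, N / elapsed))

        start = time.time()
        for i in range(N):
            mc.get(str(i))
        elapsed = time.time() - start
        print('%d gets in %.1fs (%.0f/sec)' % (N, elapsed, N / elapsed))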

    Read the article

  • What's Your Biggest Visual Studio 2008 Annoyance?

    - by Kyle West
    I love Visual Studio about 90% of the time, but for that last 10% it is such a PITA that it makes me want to launch my monitor off the desk. My latest annoyances:

        - It won't remember my toolbar settings. I don't want any toolbars, ever.
        - It keeps popping open the CSS editor or XML editor or text editor every time I open a file.
        - It doesn't remember which regions I had expanded or collapsed, and as far as I know there is no way to tell it to always open files with the regions expanded.
        - When editing CSS or HTML, the damn error list wants to pop up each time I start a tag and haven't finished it yet. First of all, don't pop up at all. And if you're going to ... give me a couple of seconds to finish what I'm doing.

    The best part ... ReSharper :)

    EDIT [Jay Bazuzi]: It seems like this discussion is only productive if it's focused on the latest released version. Set the title to VS2008.

    Read the article

  • Nature of lock on child table during deletion (SQL Server)

    - by Mubashar Ahmad
    Dear devs, for a couple of days I have been thinking about the following scenario. Consider two tables with a one-to-many parent/child relationship. On removal of a parent row, I have to delete the child rows that are related to that parent. Simple, right? I have to wrap the operation in a transaction, and I can do it in either of the following ways (pseudocode; in practice this is C# code using an ODBC connection, and the database is SQL Server):

        begin transaction (read committed)
            read all child where child.fk = p1
            foreach (child)
                delete child where child.pk = cx
            delete parent where parent.pk = p1
        commit trans

    or:

        begin transaction (read committed)
            delete all child where child.fk = p1
            delete parent where parent.pk = p1
        commit trans

    Now there are a couple of questions on my mind:

        - Which of the above is better to use, especially in a real-time system where thousands of other operations (select/update/delete/insert) are being performed within a span of seconds?
        - Does it ensure that no new child with child.fk = p1 will be added until the transaction completes? If yes, how is that ensured - does it take table-level locks, or something else?
        - Is there any kind of index locking supported by SQL Server? If so, what does it do and how can it be used?

    Regards, Mubashar

    Read the article

  • How to use SSL3 instead of TLS in a particular HttpWebRequest?

    - by Anton Tykhyy
    My application has to talk to different hosts over HTTPS, and the default setting of ServicePointManager.SecurityProtocol = TLS has served me well up to this day. Now I have some hosts which (as the System.Net trace log shows) don't answer the initial TLS handshake message but keep the underlying connection open until it times out, throwing a timeout exception. I tried setting the HttpWebRequest's timeout to as much as 5 minutes, with the same result. Presumably these hosts are waiting for an SSL3 handshake, since both IE and Firefox are able to connect to them after a 30-40 second delay. There seems to be some fallback mechanism in .NET which degrades TLS to SSL3, but it doesn't kick in for some reason. FWIW, here's the handshake message my request is sending:

        00000000 : 16 03 01 00 57 01 00 00-53 03 01 4C 12 39 B4 F9 : ....W...S..L.9..
        00000010 : A3 2C 3D EE E1 2A 7A 3E-D2 D6 0D 2E A9 A8 6C 03 : .,=..*z>......l.
        00000020 : E7 8F A3 43 0A 73 9C CE-D7 EE CF 00 00 18 00 2F : ...C.s........./
        00000030 : 00 35 00 05 00 0A C0 09-C0 0A C0 13 C0 14 00 32 : .5.............2
        00000040 : 00 38 00 13 00 04 01 00-00 12 00 0A 00 08 00 06 : .8..............
        00000050 : 00 17 00 18 00 19 00 0B-00 02 01 00 : ............

    Is there a way to use SSL3 instead of TLS for a particular HttpWebRequest, or to force a fallback? ServicePointManager's setting seems to be global, and I'd really hate to degrade the security protocol to SSL3 for the whole application.

    Read the article

  • Why do Asp.net timers/updatepanels leak memory and can it be fixed/worked around?

    - by KallDrexx
    I have built a suite of internal websites for our company to manage some of our processes. I have been noticing that these pages have massive memory leaks, causing a page to use well over 150 MB of memory, which is ridiculous for a page consisting of a single form and a GridView displaying 7-10 rows of data at a time, sometimes with data that doesn't change all day. The data does need to be refreshed on a semi-regular basis so that we always see the latest results and can act on them. After some testing, the memory leak appears to be extremely easy to reproduce, and very noticeable. I created a page with the following ASP.NET markup:

        <body>
            <form id="form1" runat="server">
            <div>
                <asp:scriptmanager ID="Scriptmanager1" runat="server"></asp:scriptmanager>
                <asp:Timer ID="timer1" runat="server" Interval="1000" />
                <asp:UpdatePanel ID="UpdatePanel1" runat="server">
                    <ContentTemplate>
                    </ContentTemplate>
                </asp:UpdatePanel>
            </div>
            </form>
        </body>

    There is absolutely no code-behind for this; this is the entirety of the page. Running this site in Chrome shows memory usage shoot up to 25 MB in the span of 20-30 seconds. Leaving it running for a few minutes pushes memory up to 70 MB and beyond. Am I using timers and update panels wrong, or is this a pure ASP.NET issue with no workaround?

    Read the article

  • A process serving application pool 'X' reported a failure. The process id was 'Y'. The data field c

    - by born to hula
    I have a WCF web service hosted under an application pool on IIS. Lately I've been getting "Service Unavailable" when trying to make calls to this web service. The first thing I tried was restarting the application pool; after a couple of seconds it crashed and stopped again. Looking at the Event Viewer, I found these messages, which so far haven't helped me find where the problem is:

        A process serving application pool 'X' reported a failure. The process id was '11616'. The data field contains the error number. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    After getting a couple of these, I got this one:

        Application pool 'X' is being automatically disabled due to a series of failures in the process(es) serving that application pool. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    I've already checked permissions and the application pool configuration, but everything seems to be OK. Has anyone been through this? Thanks in advance.

    Read the article

  • jquery carousel

    - by butteff
    Hi all! Sorry for my bad English. I don't know jQuery, but I have to build a carousel for a students' economics portal. I'm not a programmer, just a student who wants to help other people improve their economics knowledge. I have learned enough to build simple sites and work in Photoshop, but this carousel is giving me a lot of problems. Can you help me? The existing plugins are not a good fit, because they don't work with tables and the result is ugly, and I can't change the HTML, because it is part of one big template. I am ready to pay if necessary.

    Here is a RAR archive: http://rghost.ru/1888572 containing carousel.htm (the HTML for the carousel) and div.txt (the list of divs that I need to scroll inside the carousel). I want it to turn gradually and look good. It could be built on an existing plugin, such as http://woork.blogspot.com/2008/03/simple-images-slider-to-create-flickr.html or http://sorgalla.com/projects/jcarousel/ or another. It would also be wonderful if it scrolled automatically every 15 seconds or so. So, how much would it cost (in WMZ, please)?

    Read the article

  • How many files in a directory is too many?

    - by Kip
    Does it matter how many files I keep in a single directory? If so, how many files in a directory is too many, and what are the impacts of having too many? (This is on a Linux server.)

    Background: I have a photo album website, and every uploaded image is renamed to an 8-hex-digit id (say, a58f375c.jpg). This is to avoid filename conflicts (if lots of "IMG0001.JPG" files are uploaded, for example). The original filename and any useful metadata are stored in a database. Right now I have somewhere around 1500 files in the images directory. This makes listing the files in the directory (through an FTP or SSH client) take a few seconds, but I can't see that it has any effect other than that. In particular, there doesn't seem to be any impact on how quickly an image file is served to the user.

    I've thought about reducing the number of files per directory by making 16 subdirectories, 0-9 and a-f, and moving each image into a subdirectory based on the first hex digit of its filename. But I'm not sure there's any reason to do so except for the occasional listing of the directory through FTP/SSH.
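
    For what it's worth, the 16-subdirectory split described above is a one-off move plus a tiny path helper; a sketch in Python 3 (the images directory path here is hypothetical):

        import os
        import shutil

        IMAGES_DIR = 'images'  # hypothetical path to the existing flat image directory

        def shard_path(filename):
            # 'a58f375c.jpg' -> 'images/a/a58f375c.jpg': shard on the first hex digit
            return os.path.join(IMAGES_DIR, filename[0], filename)

        # One-off migration of the existing flat layout into the 16 subdirectories.
        for name in os.listdir(IMAGES_DIR):
            src = os.path.join(IMAGES_DIR, name)
            if not os.path.isfile(src):
                continue
            dest = shard_path(name)
            os.makedirs(os.path.dirname(dest), exist_ok=True)
            shutil.move(src, dest)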

    Read the article

  • Microsoft Reporting 2005 and Report Viewer Report ASP.Net Session Has Expired on Load

    - by ThaKidd
    At my job, I have been tasked with fixing an error with our reporting server. That error is ASP.Net Session Has Expired. This error occurs when the Visual Studio ReportViewer 2005 Control attempts to load a report. We are trying to host this report to users hitting our Internet exposed Windows 2003 Server running IIS 6.0. The reportviewer control is attempting to load this report from a second server running Microsoft SQL 2005 w/Reporting Services. The SQL server is not exposed to the Internet. Here is the weird thing. This error never occurs on the development box. When it is transferred to the production IIS server, the error starts to occur. It only happens every time the report is first loaded. If the browser's refresh button is clicked 5-10 times, the report will finally load correctly. I have reproduced this same error on the latest version of Mozilla Firefox, IE 7, and IE 8. The report only takes 10-20 seconds to load. I have tried timeouts in the 300+ second range on the reporting server/iis production server. I have tried a few options like Async (which causes images not to load properly) and setting the session mode to iproc with a high timeout value in the Reporting Server's web.config. I have also tried using the reporting server's IP address in the report viewer's code instead of the server name. I plan on verifying a picture loading issue which I also read about tomorrow when I get into work. I am unsure what service packs Visual Studio 2005 and the MSSQL server are running. Was an update released to fix this problem that I could not find? Does anyone have a fix for this?

    Read the article

  • JACOB (Java/COM/ActiveX) - How to troubleshoot event handling?

    - by Youval Bronicki
    I'm trying to use JACOB to interact with a COM object. I was able to invoke an initialization method on the object (and to get its properties), but I am not getting any events back. The code is quoted below. I have a sample HTML+JavaScript page (running in IE) that successfully receives events from the same object. I'm considering the following options, but would appreciate any concrete troubleshooting ideas:

        1. Send my Java program to the team who developed the COM object, and have them look for anything suspicious on their side (does the object have a way of knowing whether there's a client listening to its events, and whether they were successfully delivered?).
        2. Get into the native parts of JACOB and try to debug on that side. That's a little scary given that my C++ is rusty and that I've never programmed for Windows.

        public static void main(String[] args) {
            try {
                ActiveXComponent c = new ActiveXComponent(
                        "CLSID:{********-****-****-****-************}"); // My object's clsid
                if (c != null) {
                    System.out.println("Version:" + c.getProperty("Version"));
                    InvocationProxy proxy = new InvocationProxy() {
                        @Override
                        public Variant invoke(String methodName, Variant[] targetParameters) {
                            System.out.println("*** Event ***: " + methodName);
                            return null;
                        }
                    };
                    DispatchEvents de = new DispatchEvents((Dispatch) c.getObject(), proxy);
                    c.invoke("Init", new Variant[] {
                        new Variant(10), // param1
                        new Variant(2),  // param2
                    });
                    System.out.println("Wating for events ...");
                    Thread.sleep(60000); // 60 seconds is long enough
                    System.out.println("Cleaning up ...");
                    c.safeRelease();
                }
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                ComThread.Release();
            }
        }

    Read the article

  • iText PDFReader Extremely Slow To Open

    - by Wbmstrmjb
    I have some code that combines a few pages of AcroForms (with AcroFields intact) and then, at the end, adds some JavaScript to the entire document. It is the PdfReader in the function that adds the JS which is extremely slow to instantiate (about 12 seconds for a 1 MB file). Here is the code (pretty simple):

        public static byte[] AddJavascript(byte[] document, string js)
        {
            PdfReader reader = new PdfReader(new RandomAccessFileOrArray(document), null);
            MemoryStream msOutput = new MemoryStream();
            PdfStamper stamper = new PdfStamper(reader, msOutput);
            PdfWriter writer = stamper.Writer;
            writer.AddJavaScript(js);
            stamper.Close();
            reader.Close();
            byte[] withJS = msOutput.GetBuffer();
            return withJS;
        }

    I have benchmarked this, and the slow line is the first one. I have tried reading from a file instead of memory, and tried using a MemoryStream instead of the RandomAccessFileOrArray; nothing makes it any faster. If I add JS to a single-page document, it is very fast. So my thought is that the code that combines the pages is somehow making the PDF slow for PdfReader to read. Here is the combine code:

        public static byte[] CombineFiles(List<byte[]> sourceFiles)
        {
            MemoryStream output = new MemoryStream();
            PdfCopyFields copier = new PdfCopyFields(output);
            try
            {
                output.Position = 0;
                foreach (var fileBytes in sourceFiles)
                {
                    PdfReader fileReader = new PdfReader(fileBytes);
                    copier.AddDocument(fileReader);
                }
            }
            catch (Exception exception)
            {
                //throw
            }
            finally
            {
                copier.Close();
            }
            byte[] concat = output.GetBuffer();
            return concat;
        }

    I am using PdfCopyFields because I need to preserve the form fields, so I cannot use PdfCopy or PdfSmartCopy. This combine code is very fast (a few ms) and produces working documents. The AddJavascript code above is called after it, and the PdfReader open is the slow piece. Any ideas?

    Read the article

  • Simulating Google Appengine's Task Queue with Gearman

    - by sotangochips
    One of the characteristics I love most about Google's Task Queue is its simplicity. More specifically, I love that it takes a URL and some parameters and then posts to that URL when the task queue is ready to execute the task. This structure means that the tasks are always executing the most current version of the code. Conversely, my Gearman workers all run code within my Django project, so when I push a new version live, I have to kill off the old worker and run a new one so that it uses the current version of the code. My goal is to have the task queue be independent from the code base, so that I can push a new live version without restarting any workers. So I got to thinking: why not make tasks executable by URL, just like the Google App Engine task queue? The process would work like this:

        1. A user request comes in and triggers a few tasks that shouldn't be blocking.
        2. Each task has a unique URL, so I enqueue a Gearman task to POST to the specified URL.
        3. The Gearman server finds a worker and passes the URL and POST data to it.
        4. The worker simply posts to the URL with the data, thus executing the task.

    Assume the following:

        - Each request from a Gearman worker is signed somehow, so that we know it's coming from a Gearman server and not a malicious request.
        - Tasks are limited to run in less than 10 seconds (there would be no long tasks that could time out).

    What are the potential pitfalls of such an approach? Here's one that worries me: the server can potentially get hammered with many requests all at once, all triggered by a previous request, so one user request might entail 10 concurrent HTTP requests. I suppose I could have a single worker with a sleep before every request to rate-limit. Any thoughts?
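
    The worker body itself is small; here is a hedged sketch of step 4 (POST the task's URL with signed data, 10-second ceiling) in Python 3 using only the standard library. How it gets registered with the Gearman server depends on the client library (e.g. python-gearman's GearmanWorker.register_task); the URL, secret, and header name below are made up for illustration:

        import hmac
        import hashlib
        import urllib.request

        SECRET = b'shared-secret'  # hypothetical key the web app uses to verify the caller

        def run_url_task(url, post_data):
            """Execute one queued task by POSTing its data back to the app at `url`."""
            body = post_data.encode('utf-8')
            signature = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
            request = urllib.request.Request(
                url,
                data=body,
                headers={
                    'Content-Type': 'application/x-www-form-urlencoded',
                    'X-Task-Signature': signature,  # hypothetical header checked server-side
                },
            )
            # Enforce the "no task runs longer than 10 seconds" assumption.
            with urllib.request.urlopen(request, timeout=10) as response:
                return response.status

        # Example: run_url_task('http://localhost:8000/tasks/resize_photo', 'photo_id=42')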

    Read the article
