Search Results

Search found 36174 results on 1447 pages for 'change requests'.


  • Relaying requests between third party server and Heroku for static IP

    - by Gady
    I have a Rails application hosted on Heroku that I need to integrate with a third-party payments provider. The payment provider requires that my application have a static IP for both incoming and outgoing HTTPS requests. I want to deploy a proxy on a Linode VPS so it can relay the traffic in both directions. Relaying outgoing requests to the service provider seems easy, since I can just use their IP. Can I also relay requests coming from the service provider to the Heroku application? Can I relay them using a URL (https://myapp.herokuapp.com)? What is the recommended proxy server to use?
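    A reverse proxy on the VPS can relay in both directions, and it can relay to Heroku by hostname rather than IP. Below is a minimal sketch of what that might look like in nginx (assuming nginx is the chosen proxy; the server_name and certificate paths are placeholders, and the app URL is the one from the question):

        server {
            listen 443 ssl;
            server_name vps.example.com;                     # hypothetical name the provider calls
            ssl_certificate     /etc/nginx/ssl/server.crt;   # placeholder paths
            ssl_certificate_key /etc/nginx/ssl/server.key;

            location / {
                # Heroku routes on the Host header, so it must be rewritten
                # to the herokuapp.com name when relaying inbound requests
                proxy_pass https://myapp.herokuapp.com;
                proxy_set_header Host myapp.herokuapp.com;
            }
        }

    Outbound requests from the app would go the other way, through a forward proxy (e.g. Squid) or a plain nginx proxy_pass to the provider's IP, so that both directions originate from the VPS's static address.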

    Read the article

  • mod_access for lighttpd causes a 403 error for all POST requests

    - by Sam
    I have found on my Debian server that running the lighttpd module mod_access causes the server to respond with a 403 to all POST requests. It's very odd, as I have two servers: one runs as I'd expect and the other keeps returning these 403s. They are running identical configs for lighttpd and PHP. My lighttpd.conf is: https://gist.github.com/4269500 There is also one other custom conf: https://gist.github.com/4269508 I've opened up the servers for requests until I get this fixed; the server that works is http://mercury.isitup.org/ and the one that fails is http://venus.isitup.org/. After working out that disabling mod_access resolves the problem, I grepped all my lighttpd configs for uses of it (docs). Disabling each line I found didn't help, leading me to think this is perhaps some default behaviour (or a bug?). Has anyone come across this before, or does anyone know what configuration value I've got wrong?

    Versions:

        Debian:   Debian GNU/Linux 6.0.6 (squeeze)
        Lighttpd: lighttpd/1.4.28 (ssl)
        PHP:      PHP 5.3.19-1~dotdeb.0 with Suhosin-Patch (cli)
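    For reference, lighttpd's mod_access denies requests whose URL matches a configured suffix and answers them with a 403, so a recursive grep is a quick way to audit every rule across the main config and its includes; the deny rule shown is a hypothetical example of the syntax to look for:

        # list every deny rule in the config tree
        grep -rn "url.access-deny" /etc/lighttpd/

        # hypothetical lighttpd.conf fragment: 403 for any URL
        # ending in one of these suffixes, regardless of method
        server.modules += ( "mod_access" )
        url.access-deny = ( "~", ".inc" )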

    Read the article

  • Plesk 9 VPS - Doesn't reply to NameServer requests (nslookup, etc)

    - by Ben
    Hi, I'm trying to troubleshoot a problem with a new VPS I'm setting up. The VPS is running Plesk 9 on a CentOS 5 system. Everything works fine, except that it doesn't serve DNS requests. If I try something like

        nslookup somedomain.com the.ser.ver.ip

    to test a DNS query, I get the following error:

        ;; connection timed out; no servers could be reached

    I can't telnet to it on port 53 (DNS over TCP) either. I'm guessing something is blocking the requests; a firewall maybe? The Plesk firewall module is installed and the nameservers entry is green. Is there any other way I can check what's blocking it on the server? Any help/tip greatly appreciated. Note: HTTP works, i.e. I can telnet to the server on port 80, and I can also ping the server. Thanks
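    A few commands run on the server itself might narrow this down (a sketch; it assumes the BIND/named instance that Plesk manages, iptables as the firewall, and bind-utils for dig):

        # is named actually listening on UDP and TCP port 53?
        netstat -lnpu | grep :53
        netstat -lnpt | grep :53

        # does a query succeed locally, bypassing the network path?
        dig @127.0.0.1 somedomain.com

        # is iptables dropping port 53?
        iptables -L -n | grep 53

    If named answers locally but not remotely, the firewall or the listen-on directive in named.conf is the likely culprit.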

    Read the article

  • Local Apache on Windows XP not finishing page requests

    - by asgeo1
    I have Apache 2.2.11 installed locally on my Windows XP (SP3) dev machine, which I set up about 3 months ago, and I have just started having a strange problem in the last few weeks. Apache is serving some basic PHP applications like phpMyAdmin. When I make a page request, Apache appears not to finish serving all the resources for that page. Firefox shows the "Transferring data from servername..." message, and the page never completes. The same problem happens in Internet Explorer too. I can sometimes tell which resource it is waiting on, because most of the page will render except for some image or similar resource. (Not sure why Firebug doesn't show this.) It doesn't happen on every page request: for pages where most of the resources are cached in my browser, or pages that are very light, the request works with no problems. However, if I "hard" refresh the page, I will have this problem (probably because all the page resources are requested). Does anyone know what this could be? It is so strange that it has only just started happening, and I did not make any changes to my system that I am aware of. I tried playing with the Apache ThreadsPerChild setting, but it did not seem to make a difference.

    UPDATE: I have been doing some more tests, serving the most basic of pages, just a plain HTML file:

        <html>
        <body>
        <h1>testing</h1>
        </body>
        </html>

    If I request this page multiple times in a row, and each request occurs immediately after the previous one has completed, then 50% of the time the request will time out. However, if I put a 1-2 second gap between requests, there is no problem. This correlates with what I have observed when the browser requests a real application page: when the browser has nothing cached, all of the page resources are requested in a short amount of time, and this appears to trigger the problem.

    UPDATE 2: Nathan Long has helped me understand the issue a little better with the server-status page (see below). It is weird; it is like the server has a hiccup sending data to the client. The client sits there waiting forever for data that never arrives. Closing the client process does not terminate the connection on the server: the server still has active threads for each previously attempted connection, but they just sit there, not sending any data and never terminating (even though the client is now closed). Only a restart of the server seems to clear them.
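    For anyone reproducing the server-status check mentioned in the second update, this is the usual mod_status stanza for Apache 2.2 (a sketch; it assumes mod_status is loaded and restricts access to the local machine):

        # httpd.conf: expose a live worker/connection table at /server-status
        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>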

    Read the article

  • Viewing file: protocol requests made from a swf

    - by Erik Vold
    I've got a SWF file in an index.html file. The SWF basically loads images, but it first requests an XML file that details where the image/asset files are located. The index file (which is just HTML, JS, and CSS) works (i.e. the SWF file is working) when served from this URL: http://localhost:8500/core/index.html, which I'm able to do with the ColdFusion 8 single-server development environment. But when I access the file directly with this URL: file:///C:/ColdFusion8/wwwroot/core/index.html, the SWF does not work. So I'm guessing that the SWF is having trouble locating the files it needs. The problem is I have no idea what file URLs it is trying to access at the moment, and neither Firebug nor Fiddler can show me what requests are made over the file: protocol. Is there another tool that I can use?

    Read the article

  • Vetting Github Pull requests with Hudson

    - by cdecker
    I've been using Gerrit and Hudson very successfully to test and automatically vote on new checkins in the past, and now I'm wondering whether it is possible to set up Hudson so that it checks GitHub at regular intervals for new pull requests. If there are any, it should apply the patch and run the unit tests against it, adding a comment to the pull request if no failure is detected. It would certainly reduce the amount of work that goes into vetting patches/pull requests. Is that possible at all, or should I stick with my Gerrit setup?
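    A hedged sketch of the polling half as a shell build step, using the GitHub API (OWNER, REPO, the pull request number, and the token variable are all placeholders; Hudson would run something like this on a timer trigger):

        # list open pull requests (returns JSON)
        curl -s "https://api.github.com/repos/OWNER/REPO/pulls?state=open"

        # fetch one pull request as a patch and check that it applies
        curl -sL "https://github.com/OWNER/REPO/pull/123.patch" | git apply --check
        # ...apply it for real, run the unit tests, record the result...

        # comment on the pull request (PRs share the issues comment API)
        curl -s -H "Authorization: token $GITHUB_TOKEN" \
             -d '{"body":"Automated build: tests passed."}' \
             "https://api.github.com/repos/OWNER/REPO/issues/123/comments"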

    Read the article

  • Apache CPU usage stays at 100% even when there are no requests

    - by Leirith
    Hi, I've been running the Apache HTTP server benchmarking tool (ab) against my new Apache server to test performance. I noticed that with a command like the following:

        ab -n 100000 -c 1000 http://www.mysite.com/

    the CPU is used 100% by the apache2 processes during the testing. The test usually concludes with the following error just before the last requests are made:

        apr_poll: The timeout specified has expired (70007)
        Total of 99960 requests completed

    When it does, the CPU usage remains at 100%, all of it consumed by Apache. I am using the worker MPM and running PHP with mod_fcgid. Any advice as to why this happens, or what can be done to stop it, would be appreciated.
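    Two commands that might show what the pegged processes are doing after the benchmark ends (a sketch; strace assumes a Linux host and needs the PID of one of the busy apache2 workers):

        # which apache2 processes are burning CPU?
        top -b -n 1 | grep apache2

        # summarize the system calls one of them is spinning in
        strace -c -f -p <PID>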

    Read the article

  • PHP Requests Being Blocked After Making About 25 in Ten Minutes

    - by Daniel Stern
    We have an administrative portal where we run PHP functions through a JavaScript front end, using AJAX, for administrative purposes. For example, we might have a function called updateAllDatabaseEntries() that calls AJAX functions in rapid succession, with each of those functions executing numerous SQL queries. The problem is that after several successive requests from the same computer (not an excessive amount, maybe 30 in ten minutes), the system stops responding to any PHP or HTTP requests, etc., ONLY from my computer. The panel can still be accessed from other computers in the office, and access is restored to this computer after about 15 minutes. We believe this is not a glitch but some kind of security feature built into our server, possibly relating to Suhosin; it is likely well-intentioned but is currently preventing us from running our system administration. Server info: Linux 2.6.32-5-xen-amd64 #1 SMP Tue Mar 8 00:01:30 UTC 2011 x86_64 GNU/Linux. Cheers - DS
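    The pattern described (a per-IP block that expires after roughly 15 minutes) is what a rate limiter such as fail2ban or mod_evasive typically produces; a few hedged checks, assuming one of them is installed on the Debian host:

        # is fail2ban running, and which jails are active?
        fail2ban-client status

        # any iptables rules rejecting a single source address?
        iptables -L -n

        # Suhosin logs through syslog by default
        grep -i suhosin /var/log/syslog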

    Read the article

  • Repeated requests on our server?

    - by pitty.platsch
    I encountered something strange in the access log of our Apache server which I cannot explain. Requests for web pages that I or my colleagues make from the office's Windows network get repeated by another IP (one we don't know) a couple of seconds later. The user agent repeating our requests is:

        Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; InfoPath.2)

    Does anyone have an idea?

    Update: I've got some more information now. The referrer of the replicated request is set to the URL I requested before, and it's not the exact same request, as the protocol version is changed from 'HTTP/1.1' to 'HTTP/1.0'. The IP is not just one; it's one of a subnet (80.40.134.*). It's only the first request to a resource that gets repeated, so it seems the "spy" is building up some kind of cache of visited places. The repeater is also picky. I tried random URLs with different HTTP status codes and different file patterns: 301s and 200s are redone, 404s are not, and image extensions seem to be ignored. While doing my tests I discovered that this behavior seems to be common, as I found other clients visiting just after the first requests:

        66.249.73.184 - - [25/Oct/2012:10:51:33 +0100] "GET /foobar/ HTTP/1.1" 200 10952 "-" "Mediapartners-Google"
        50.17.125.180 - - [25/Oct/2012:10:51:33 +0100] "GET /foobar/ HTTP/1.1" 200 41312 "-" "Mozilla/5.0 (compatible; proximic; +http://www.proximic.com/info/spider.php)"

    I wasn't aware of this practice, so I don't see it as much of a threat anymore. I still want to find out who this is, so any further help is appreciated. I'll check later whether this also happens when I query some other server where I have access to the access logs, and will update here.

    Read the article

  • 1K incoming http post requests per second, each with a 10-50K file

    - by Blankman
    I'm trying to figure out what kind of server setup I will need to support:

        1K HTTP POST requests per second
        each POST containing an XML file between 5-50K (average of 25 kilobytes)

    Even if I get a 100 Mb/s connection with my dedicated box (they usually give 10 Mb/s, but you can upgrade), by my calculations that is about 12K KB/s, which means about 480 25-KB files per second. So this means I need around 3 servers, each with a 100 Mb/s connection. Would a single server running HAProxy be able to redirect the requests to the other servers, or does this mean I need something that can handle more than 100 Mb/s to proxy things out to them? If my math is off, I'd appreciate any corrections you may have.
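    On the HAProxy question: HAProxy is a full proxy rather than a redirector, so every request and response flows through its NIC, and a single 100 Mb/s proxy box cannot front more than 100 Mb/s of combined traffic. The balancing itself is simple, as this hedged minimal haproxy.cfg sketch shows (backend IPs are placeholders):

        defaults
            mode http
            timeout connect 5s
            timeout client  30s
            timeout server  30s

        frontend incoming
            bind *:80
            default_backend posts

        backend posts
            balance roundrobin
            server app1 10.0.0.1:80 check
            server app2 10.0.0.2:80 check
            server app3 10.0.0.3:80 check

    True redirection (an HTTP 301/302 to another host, so the payload bypasses the proxy) or round-robin DNS would avoid the bandwidth bottleneck, at the cost of exposing the backends.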

    Read the article

  • Apache redirect some requests to another server

    - by mucie
    We just bought a new server. We want our old server to respond to the HTTPS connections (because of the SSL certificate) and the new server to respond to the rest. The new server is ready, but I don't know how to redirect requests to the new one.

        mydomain.com => old machine
        10.10.10.41  => new machine

    Requests will come in through mydomain.com. If a request is HTTPS, respond; otherwise, redirect to 10.10.10.41. How should I configure Apache for this situation?
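    A hedged sketch of one way to do this on the old machine: leave the :443 virtual host untouched and have the :80 virtual host send everything to the new box (this assumes the new machine serves plain HTTP directly on its IP; a hostname resolving to 10.10.10.41 would be friendlier in the redirect target):

        <VirtualHost *:80>
            ServerName mydomain.com
            # everything that is not HTTPS goes to the new server
            Redirect permanent / http://10.10.10.41/
        </VirtualHost>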

    Read the article

  • SmartAssembly Support: How to change the maps folder

    - by Bart Read
    If you've set up SmartAssembly to store error reports in a SQL Server database, you'll also have specified a folder for the map files that are used to de-obfuscate error reports (see Figure 1). Whilst you can change the database easily enough, you can't change the map folder path via the UI: if you click on it, it'll just open the folder in Explorer. Never fear, though; you can change it manually, and fortunately it's not that difficult. (If you want to get to these settings, click the Tools > Options link on the left-hand side of the SmartAssembly main window.)

    Figure 1. Error reports database settings in SmartAssembly.

    The folder path is actually stored in the database, so you just need to open up SQL Server Management Studio, connect to the SQL Server where your error reports database is stored, then open a new query on the SmartAssembly database by right-clicking on it in the Object Explorer and clicking New Query (see Figure 2).

    Figure 2. Opening a new query against the SmartAssembly error reports database in SQL Server.

    Now execute the following SQL query in the new query window:

        SELECT * FROM dbo.Information

    You should find that you get a result set rather like that shown in Figure 3. You can see that the map folder path is stored in the MapFolderNetworkPath column.

    Figure 3. Contents of the dbo.Information table, showing the map folder path I set in SmartAssembly.

    All I need to do to change this is execute the following SQL:

        UPDATE dbo.Information
        SET MapFolderNetworkPath = '\\UNCPATHTONEWFOLDER'
        WHERE MapFolderNetworkPath = '\\dev-ltbart\SAMaps'

    This will change the map folder path to whatever I supply in the SET clause. Once you've done this, you can verify the change by executing the SELECT again; the result set should contain the new path you've set.

    Read the article

  • Stupid Geek Tricks: Change Your IP Address From the Command Line in Linux

    - by Taylor Gibb
    Almost everybody can figure out how to change their IP address using an interface, but did you know you can set your network card's IP address using a simple command from the command line?

    Changing Your IP From the Command Line in Linux

    Note: This will work on all Debian-based Linux distros. To get started, type ifconfig into the terminal and hit Enter, then take note of the name of the interface you want to change the settings for. To change the settings, you also use the ifconfig command, this time with a few parameters:

        sudo ifconfig eth0 192.168.0.1 netmask 255.255.255.0

    That's about all you need to do to change your IP. Of course, the above command assumes a few things:

        The interface that you want to change the IP for is eth0
        The IP you want to give the interface is 192.168.0.1
        The subnet mask you want to set for the interface is 255.255.255.0

    If you run ifconfig again, you will see that your interface has now taken on the new settings you assigned to it. If you're wondering how to change the default gateway, you can use the route command:

        sudo route add default gw 192.168.0.253 eth0

    will set your default gateway on the eth0 interface to 192.168.0.253. To see your new setting, you will need to display the routing table:

        route -n

    That's all there is to it.

    Read the article

  • Building in Change: Project Construction in Asset Intensive Industries

    - by Sylvie MacKenzie, PMP
    According to a recent survey by the Economist Intelligence Unit, sponsored by Oracle, only 51% of project owners rated themselves as effective at delivering their projects to scope, budget, and schedule when confronted with change. In addition, only 43% rated themselves as effective at anticipating potential change. Even with the best processes and technology in place, change is often an unavoidable part of the construction process. How organizations respond to change can mean the difference between delays and cost overruns, and projects being completed on schedule and on budget. Implementing Enterprise Project Portfolio Management, and using a solution to help manage and automate those processes, can help asset-intensive organizations:

        Govern project and program compliance and regulatory requirements for project success
        Unite project teams and stakeholders through collaboration and strong feedback methods to speed project completion
        Reduce the risk of cost and schedule overruns, and any resulting penalties, to deliver on time and on budget
        Effectively manage change throughout the project life cycle
        Ensure sufficient capacity, utilization, and availability of people, skills, and other resources to meet commitments

    The results of the recent EIU survey, sponsored by Oracle, "Building in Change: Project Construction in Asset-Intensive Industries", will be revealed in an upcoming webinar with Hart Energy / Oil & Gas Investor, featuring the Economist Intelligence Unit and Oracle, on April 11th at 1pm CST. Click here for further information or visit http://www.oilandgasinvestor.com/

    Read the article

  • Reinstalled Ubuntu 12.04 and now I cannot change preferences like theme, wallpaper, and nautilus preferences

    - by krishnab
    So I just down-rev'd Ubuntu from 13.04 back to 12.04 LTS desktop 64 (Precise). I am using Unity. I reformatted the Ubuntu partition but kept my home directory intact, and everything seemed to reconnect just fine; no data was lost. However, I found that I cannot change my preferences. I cannot change my desktop background, no matter how many ways I try (Ubuntu Tweak, Gnome Tweak, System Settings). I also cannot change the system GTK+ theme, though apparently I am able to change the window border theme. Further, I cannot change my Nautilus preferences, so I cannot make the default view a list view, and I cannot make "single-click" behavior the default. I even went into the org.gnome.nautilus settings to change things manually, but no luck. I thought it was a permissions issue, so I did a chown on the home folder and on the .gvfs folder. Still no luck. So somewhere there seems to be a permission that I am not catching. Does anyone have any suggestions? Thanks.
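    One hedged thing to try: on 12.04 most of these preferences are written through dconf, and a root-owned file under ~/.config/dconf or ~/.cache can make every write silently fail; note also that a plain chown of the home folder is not recursive, which may be why the earlier attempt didn't help. The paths below are the usual suspects, not a guaranteed fix:

        # reclaim the per-user config trees recursively (run as your own user)
        sudo chown -R $USER:$USER ~/.config ~/.local ~/.gconf ~/.cache

        # list anything in $HOME still owned by root
        find ~ -user root -ls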

    Read the article

  • How can I programmatically change the keyboard layout?

    - by Jason R. Coombs
    I want to run a shell command or script that will configure each of my Ubuntu Precise boxes to use the Dvorak keyboard layout as the default (and only) layout. With earlier versions, I was able to set the XKBVARIANT in /etc/default/keyboard, but when I make this change in Precise (and reboot), the keyboard layout appears to be unaffected (both in the console and in GNOME). I also tried setting the XKBMODEL to pc105 and XKBLAYOUT to us, but that did not seem to help. I know I can set the layout for GNOME using the Keyboard Layout tool, but I want the change to affect the console, and I want to automate the process. How can I accomplish this?

    Edit: To clarify, I want to know how I can change the keyboard layout (using only a script or the command line) so that Dvorak is the default and only keyboard layout for both GNOME and the console. I want this change to be persistent (to survive reboots), just as it is when the change is made through the Keyboard Layout tool.

    Edit: Let me put it another way. If I had installed the operating system myself (which I did not, because the OS was installed by the virtual machine infrastructure), I could have selected the desired keyboard layout at install time, and that layout would have been applied persistently, system-wide. How can I change the layout to appear as if I had set it during the install process?
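    A hedged sketch of the non-interactive route (it assumes the stock keyboard-configuration and console-setup packages; the sed edits mirror what the installer would have written to /etc/default/keyboard):

        # set the layout in the file both X and the console read
        sudo sed -i 's/^XKBLAYOUT=.*/XKBLAYOUT="us"/' /etc/default/keyboard
        sudo sed -i 's/^XKBVARIANT=.*/XKBVARIANT="dvorak"/' /etc/default/keyboard

        # re-apply it without prompting, then load it on the console now
        sudo dpkg-reconfigure -f noninteractive keyboard-configuration
        sudo setupcon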

    Read the article

  • How to implement proper identification and session management on JSON POST requests?

    - by IBr
    I have some minor messaging connectivity to a server from a website via JSON requests. I have a single endpoint which distributes requests according to identification data. I am using an asynchronous server and handle data as it comes in. Now I am thinking about extending requests with some kind of session. What is the best way to define a session? Should the client get a cookie when it registers, and then use a token with each request for as long as the session runs? Should I implement a timeout for the token? Are there alternative methods? Can I cache tokens for same-origin requests? What could I use on the client side (a web browser)? And how about safety: what techniques should I use to throw away requests with malformed data, or data that is too big, without choking the server? Should I worry?

    Read the article

  • [MINI HOW-TO] Change the Default Color Scheme in Office 2010

    - by Mysticgeek
    As in Office 2007, the default color scheme in Office 2010 is blue. If you are not a fan of it, here we show you how to change it to silver or black. In this example we are using Microsoft Word, but it works the same way in Excel, Outlook, and PowerPoint as well. Once you change the color scheme in one Office application, it changes for all of the other apps in the suite.

    Change Color Scheme

    To change the color scheme, click on the File tab to access Backstage View and click on Options. In Word Options the General section should open by default; use the dropdown menu next to Color Scheme to change it to Silver, Blue, or Black, then click OK. Here is what Black looks like (who knows why Microsoft decided to leave the blue around the edges). This is the default Blue color scheme. And finally we take a look at the Silver color scheme in Excel.

    That is all there is to it! It would be nice if they would incorporate other color schemes into Office 2010, as some of you may not be happy with only three choices. If you're using Office 2007, check out our article on how to change the color scheme in it. Also, The Geek has a cool article on how to set the color scheme of Office 2007 with a quick registry hack.

    Read the article

  • ColdFusion Server crash after thousands of HTTP requests

    - by Jason Bristol
    We are running ColdFusion 8 on a Windows Server 2003 VPS, with an API that exposes student records to a partner API through a connector. Our API returns around 50K student records, serialized in XML format, pretty seamlessly. My question stems from something very frightening that happened today when we tested our connector against our partner's API: our entire website and web host went down. We assumed that our host was just having some issues, and after 4 hours with no resolution and no response from their customer service, we finally got a response claiming that they had an "unauthorized user" in their network. After our server was back up we were unable to connect to our website, as if the web service or ColdFusion itself had frozen. This is really where my concern comes from, as I fear we may have overloaded the web service. As I mentioned, we tried sending over 50K HTTP POST requests to our partner's API; however, everything stopped after around 1.6K. Is this bad practice, or is there some sort of rate limiting I can relax somewhere in the server configuration? We managed to find a workaround, but it bypasses our connector, which is critical to our design. This would have been a one-time deal, as the purpose of so many requests was to populate our partner's website with current data; after that, hourly syncs will keep requests down to around 100 per hour.

    UPDATE: Our partner API is owned and operated by Pardot. We are converting students to prospects by passing student data to their API, which unfortunately only seems to accept one student at a time. For that reason we have to do all 50K requests individually. Our server has 4GB of RAM and an Intel Core 2 Duo @ 2.8GHz, running Windows Server 2003 SP2. I monitored the server during a 100-student sync, a 400-student sync, and a 1.4K-student sync, with the following results:

        100 students:  2.25GB of memory, 30-40% CPU utilization, 0.2-0.3% network bandwidth
        400 students:  2.30GB of memory, 30-50% CPU utilization, 0.2-1.0% network bandwidth
        1.4K students: 2.30GB of memory, 30-70% CPU utilization, 0.2-1.0% network bandwidth

    I know this is a far cry from 50K students, but I don't want to risk taking down our CMS system again, assuming that was the cause.
    To give you a look at our code:

        <cfif (#getStudents.statusCode# eq "200 OK")>
            <cftry>
                <cfloop index="StudentXML" array="#XmlSearch(responseSTUD,'/students/student')#">
                    <cfset StudentXML = XmlParse(StudentXML)>
                    <cfhttp url="#PARDOT_CMS_UPSERT#" method="post" timeout="10000">
                        <cfhttpparam type="url" name="user_key" value="#PARDOT_CMS_USERKEY#">
                        <cfhttpparam type="url" name="api_key" value="#api_key#">
                        <cfhttpparam type="url" name="email" value="#StudentXML.student.email.XmlText#">
                        <cfhttpparam type="url" name="first_name" value="#StudentXML.student.first.XmlText#">
                        <cfhttpparam type="url" name="last_name" value="#StudentXML.student.last.XmlText#">
                        <cfhttpparam type="url" name="in_cms" value="#StudentXML.student.studentid.XmlText#">
                        <cfhttpparam type="url" name="company" value="#StudentXML.student.agencyname.XmlText#">
                        <cfhttpparam type="url" name="country" value="#StudentXML.student.countryname.XmlText#">
                        <cfhttpparam type="url" name="address_one" value="#StudentXML.student.address.XmlText#">
                        <cfhttpparam type="url" name="address_two" value="#StudentXML.student.address2.XmlText#">
                        <cfhttpparam type="url" name="city" value="#StudentXML.student.city.XmlText#">
                        <cfhttpparam type="url" name="state" value="#StudentXML.student.state_province.XmlText#">
                        <cfhttpparam type="url" name="zip" value="#StudentXML.student.postalcode.XmlText#">
                        <cfhttpparam type="url" name="phone" value="#StudentXML.student.phone.XmlText#">
                        <cfhttpparam type="url" name="fax" value="#StudentXML.student.fax.XmlText#">
                        <cfhttpparam type="url" name="output" value="simple">
                    </cfhttp>
                </cfloop>
                <cfcatch type="any">
                    <cfdump var="#cfcatch.Message#">
                </cfcatch>
            </cftry>
        </cfif>

    UPDATE 2: I checked the CF logs and found a couple of these:

        "Error","jrpp-8","06/06/13","16:10:18","CMS-API","Java heap space
        The specific sequence of files included or processed is:
        D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm, line: 675"

        java.lang.OutOfMemoryError: Java heap space
            at java.util.Arrays.copyOf(Arrays.java:2882)
            at java.io.CharArrayWriter.write(CharArrayWriter.java:105)
            at coldfusion.runtime.CharBuffer.replace(CharBuffer.java:37)
            at coldfusion.runtime.CharBuffer.replace(CharBuffer.java:50)
            at coldfusion.runtime.NeoBodyContent.write(NeoBodyContent.java:254)
            at cfapi2ecfm292155732._factor30(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:675)
            at cfapi2ecfm292155732._factor31(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:662)
            at cfapi2ecfm292155732._factor36(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:659)
            at cfapi2ecfm292155732._factor42(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:657)
            at cfapi2ecfm292155732._factor37(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm)
            at cfapi2ecfm292155732._factor44(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:456)
            at cfapi2ecfm292155732._factor38(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm)
            at cfapi2ecfm292155732._factor46(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:455)
            at cfapi2ecfm292155732._factor39(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm)
            at cfapi2ecfm292155732._factor47(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:453)
            at cfapi2ecfm292155732.runPage(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:1)
            at coldfusion.runtime.CfJspPage.invoke(CfJspPage.java:192)
            at coldfusion.tagext.lang.IncludeTag.doStartTag(IncludeTag.java:366)
            at coldfusion.filter.CfincludeFilter.invoke(CfincludeFilter.java:65)
            at coldfusion.filter.ApplicationFilter.invoke(ApplicationFilter.java:279)
            at coldfusion.filter.RequestMonitorFilter.invoke(RequestMonitorFilter.java:48)
            at coldfusion.filter.MonitoringFilter.invoke(MonitoringFilter.java:40)
            at coldfusion.filter.PathFilter.invoke(PathFilter.java:86)
            at coldfusion.filter.ExceptionFilter.invoke(ExceptionFilter.java:70)
            at coldfusion.filter.ClientScopePersistenceFilter.invoke(ClientScopePersistenceFilter.java:28)
            at coldfusion.filter.BrowserFilter.invoke(BrowserFilter.java:38)
            at coldfusion.filter.NoCacheFilter.invoke(NoCacheFilter.java:46)
            at coldfusion.filter.GlobalsFilter.invoke(GlobalsFilter.java:38)
            at coldfusion.filter.DatasourceFilter.invoke(DatasourceFilter.java:22)
            at coldfusion.CfmServlet.service(CfmServlet.java:175)
            at coldfusion.bootstrap.BootstrapServlet.service(BootstrapServlet.java:89)
            at jrun.servlet.FilterChain.doFilter(FilterChain.java:86)

    It looks like I might have crashed the JVM in CF. Is there a better way to do this? We are thinking of just exporting all records initially as a CSV file and importing it into Pardot, seeing as we will never have to do a request this large again.
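    The OutOfMemoryError is thrown while writing page output, which suggests the request's output buffer, not Pardot, is the immediate limit. Two hedged things to look at: suppress or periodically flush the output generated inside the loop (ColdFusion's <cfsetting enablecfoutputonly="true"> and <cfflush> are the standard tools), and raise the JVM heap ceiling, which for a ColdFusion 8 server install lives in jvm.config (the path and the 512m default below are typical but worth verifying on the VPS):

        # hypothetical edit to {cf-root}/runtime/bin/jvm.config:
        # change the -Xmx value in the java.args line, e.g. from
        #   java.args=-server -Xmx512m ...
        # to
        #   java.args=-server -Xmx1024m ...
        # then restart the ColdFusion service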

    Read the article

  • DHCPDISCOVER requests from an off-by-one MAC address

    - by Aleksandr Levchuk
    In a Linux DHCP server's log I'm getting a bunch of these lines:

        dhcpd: DHCPDISCOVER from 00:30:48:fe:5c:9c via eth1: network 192.168.2.0/24: no free leases

    I don't have any machines with 00:30:48:fe:5c:9c and I don't intend to give out an IP to 00:30:48:fe:5c:9c (whatever that could be). I tracked down the server that this is coming from and killed all the DHCP clients that were running, but the DHCPDISCOVER requests do not stop. I can prove that this is the sending server by pulling the Ethernet cable: the requests stop. The strange thing is that the sending server only has 2 interfaces, which are:

        00:30:48:fe:5c:9a
        00:30:48:fe:5c:9b

    What can be the cause of the off-by-one address? Who could be sending the requests?

    Details. On the DHCP client:

        root@n34:~# ip link
        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 100
            link/ether 00:30:48:fe:5c:9a brd ff:ff:ff:ff:ff:ff
        3: eth1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
            link/ether 00:30:48:fe:5c:9b brd ff:ff:ff:ff:ff:ff
        4: ib0: <BROADCAST,MULTICAST> mtu 2044 qdisc noop state DOWN qlen 256
            link/infiniband 80:00:00:48:fe:80:00:00:00:00:00:00:00:02:c9:03:00:08:81:9f brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
        5: ib1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 2044 qdisc pfifo_fast state UP qlen 256
            link/infiniband 80:00:00:49:fe:80:00:00:00:00:00:00:00:02:c9:03:00:08:81:a0 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff

    Same info:

        root@n34:~# ifconfig -a
        eth0      Link encap:Ethernet  HWaddr 00:30:48:fe:5c:9a
                  inet addr:192.168.2.234  Bcast:192.168.2.255  Mask:255.255.255.0
                  inet6 addr: fe80::230:48ff:fefe:5c9a/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:72544 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:152773 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:100
                  RX bytes:4908592 (4.6 MiB)  TX bytes:89815782 (85.6 MiB)
                  Memory:dfd60000-dfd80000

        eth1      Link encap:Ethernet  HWaddr 00:30:48:fe:5c:9b
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Memory:dfde0000-dfe00000

        ib0       Link encap:UNSPEC  HWaddr 80-00-00-48-FE-80-00-00-00-00-00-00-00-00-00-00
                  BROADCAST MULTICAST  MTU:2044  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:256
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

        ib1       Link encap:UNSPEC  HWaddr 80-00-00-49-FE-80-00-00-00-00-00-00-00-00-00-00
                  inet addr:192.168.3.234  Bcast:192.168.3.255  Mask:255.255.255.0
                  inet6 addr: fe80::202:c903:8:81a0/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:2044  Metric:1
                  RX packets:1330 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:255 errors:0 dropped:5 overruns:0 carrier:0
                  collisions:0 txqueuelen:256
                  RX bytes:716415 (699.6 KiB)  TX bytes:17584 (17.1 KiB)

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:8 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:560 (560.0 B)  TX bytes:560 (560.0 B)

    The nodes were imaged with Perseus, which uses kexec instead of rebooting.

    Read the article

  • Belkin router issue

    - by walr1
    Hi, my cousin and I bought a wireless Belkin router for testing purposes. Please keep in mind that for all of our tests there is no Ethernet cable plugged in, just the router's power cord. We have been trying to "flood" it with PING requests on its default address, 192.168.2.1, but it isn't doing a thing; it isn't even logging any attempts at too many requests. I've disabled the firewall, disabled PING request blocking, etc. Any idea why this thing isn't being affected? We've sent 4 million packets and it hasn't done a thing. Quite odd! Thanks.

    Read the article

  • Apachebench on node.js server returning "apr_poll: The timeout specified has expired (70007)" after ~30 requests

    - by Scott
    I just started working with node.js, and some experimental load testing with ab is returning an error at around 30 requests or so. I've found other pages showing much better concurrency numbers than I'm getting, such as: http://zgadzaj.com/benchmarking-nodejs-basic-performance-tests-against-apache-php Are there some critical server configuration settings that need to be made to achieve those numbers? I've watched memory in top and I still see a decent amount free while running ab, and I've watched mongostat as well and am not seeing anything that looks suspicious. The command I'm running, and the error, are:

        ab -k -n 100 -c 10 postrockandbeyond.com/
        This is ApacheBench, Version 2.0.41-dev <$Revision: 1.121.2.12 $> apache-2.0
        Copyright (c) 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
        Copyright (c) 2006 The Apache Software Foundation, http://www.apache.org/

        Benchmarking postrockandbeyond.com (be patient)...apr_poll: The timeout specified has expired (70007)
        Total of 32 requests completed

    Does anyone have any suggestions as to what may be causing this? I'm running it on OS X Lion, but have also run the same command on the server with the same results.

    EDIT: I eventually solved this issue. I was using TTAPI, which connects to turntable.fm through websockets, and on the homepage I was opening a connection on every request. So what was happening was that after a certain number of connections, everything would fall apart. If you're running into the same issue, check whether you are hitting external services on each request.

    Read the article
