Search Results

Search found 15035 results on 602 pages for 'request'.

  • Add static ARP entries when network is brought up

    - by jozzas
    I have some pretty dumb IP devices on a subnet with my Ubuntu server, and the server receives streaming data from each device. I have run into a problem: when an ARP request is issued to a device while it is streaming data to the server, the request is ignored, the cache entry times out, and the server stops receiving the stream. So, to prevent the server from sending ARP requests to these devices altogether, I would like to add a static ARP entry for each, something like:

        arp -i eth2 -s ip.of.the.device mac:of:the:device

    But these "static" ARP entries are lost if networking is disabled/enabled or if the server is rebooted. Where is the best place to automatically add these entries, preferably somewhere that will re-add them every time the interface eth2 is brought up? I really don't want to have to write a script that monitors the output of arp and re-adds the cache entries if they're missing.

    Edit to add what my final script was: I created the file /etc/network/if-up.d/add-my-static-arp with the contents:

        #!/bin/sh
        arp -i eth0 -s 192.168.0.4 00:50:cc:44:55:55
        arp -i eth0 -s 192.168.0.5 00:50:cc:44:55:56
        arp -i eth0 -s 192.168.0.6 00:50:cc:44:55:57

    And then, obviously, add the permission to allow it to be executed:

        chmod +x /etc/network/if-up.d/add-my-static-arp

    These ARP entries will then be added or re-added automatically every time any network interface is brought up.
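    As a hedged aside (not from the original post): on many Linux systems the same MAC pinning can live in /etc/ethers, with the if-up script reduced to a single arp -f call. The file format below follows arp(8) as I recall it (MAC first, then address), but verify it against your distribution before relying on it.

        # /etc/ethers: one "MAC address" pair per line (format is an
        # assumption to check against arp(8) on your system)
        00:50:cc:44:55:55 192.168.0.4
        00:50:cc:44:55:56 192.168.0.5
        00:50:cc:44:55:57 192.168.0.6

    The if-up script then becomes just:

        #!/bin/sh
        arp -i eth0 -f /etc/ethers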

    Read the article

  • RESTful Java-based web services in JSON + HTML5 and JavaScript, no templates (JSP/JSF/FreeMarker), aka fat/thick client

    - by Ismail Marmoush
    I have this idea of building a website which serves JSON data through a RESTful services framework, and will not use any template engines like JSP/JSF/FreeMarker: just pure HTML5 and JavaScript libraries. What do you think of the pros and cons of such a design? For elaboration and brainstorming, a friend of mine argued with the following concerns:

    - Sounds like GWT.
    - This way you won't have any control over your service API. For example, say you want to charge the user per request: how will you handle it?
    - How will you control your design and themes?
    - What about the first request the browser makes? Not easy with this.
    - All of the user's requests will come with an "Accept" header of "application/json". How will you separate a browser from an abuser? This way all of your public APIs will be used abusively by third-party apps, and you won't be able to lock them down, since you won't be able to block the normal user's browser.
    - We won't use compiled HTML anyway, but maybe something like FreeMarker, and in that case you won't expose any of your JSON resources to an unauthorized user; but you will expose all the HTML, since any browser can access it all.
    - All the well-known first-class services do this.
    - Can you send me links to what you've read?
    - Keep in mind DOM-based XSS: it will be a nightmare, of course, if what you say is applicable.

    Read the article

  • Duplicity can't connect to CloudFiles "Network is unreachable"

    - by jwandborg
    Whenever I click "Backup now" in the Backup GUI, the smaller "Back Up" window opens, and after a while I get the following error message:

        Traceback (most recent call last):
          File "/usr/bin/duplicity", line 1359, in <module>
            with_tempdir(main)
          File "/usr/bin/duplicity", line 1342, in with_tempdir
            fn()
          File "/usr/bin/duplicity", line 1202, in main
            action = commandline.ProcessCommandLine(sys.argv[1:])
          File "/usr/lib/python2.7/dist-packages/duplicity/commandline.py", line 942, in ProcessCommandLine
            globals.backend = backend.get_backend(args[0])
          File "/usr/lib/python2.7/dist-packages/duplicity/backend.py", line 156, in get_backend
            return _backends[pu.scheme](pu)
          File "/usr/lib/python2.7/dist-packages/duplicity/backends/cloudfilesbackend.py", line 70, in __init__
            self.container = conn.create_container(container)
          File "/usr/lib/python2.7/dist-packages/cloudfiles/connection.py", line 250, in create_container
            response = self.make_request('PUT', [container_name])
          File "/usr/lib/python2.7/dist-packages/cloudfiles/connection.py", line 189, in make_request
            response = retry_request()
          File "/usr/lib/python2.7/dist-packages/cloudfiles/connection.py", line 182, in retry_request
            self.connection.request(method, path, data, headers)
          File "/usr/lib/python2.7/httplib.py", line 955, in request
            self._send_request(method, url, body, headers)
          File "/usr/lib/python2.7/httplib.py", line 989, in _send_request
            self.endheaders(body)
          File "/usr/lib/python2.7/httplib.py", line 951, in endheaders
            self._send_output(message_body)
          File "/usr/lib/python2.7/httplib.py", line 811, in _send_output
            self.send(msg)
          File "/usr/lib/python2.7/httplib.py", line 773, in send
            self.connect()
          File "/usr/lib/python2.7/httplib.py", line 1154, in connect
            self.timeout, self.source_address)
          File "/usr/lib/python2.7/socket.py", line 571, in create_connection
            raise err
        error: [Errno 101] Network is unreachable

    I use Rackspace CloudFiles as a storage backend; the last backup was 3 days ago (successful). I have not changed any settings since then.

    Read the article

  • Using iptables to change a destination port but keep the IP the same

    - by Scott Chamberlain
    I am playing around with transparent proxies. The current way I am doing things is this: the program makes a request to a computer on port 80, and I use

        iptables -t nat -A OUTPUT -p tcp --destination-port 80 -j REDIRECT --to-port 1234

    to redirect to the proxy that I am playing with. The proxy will send out a request on port 81 (as all outbound port 80 traffic is being fed back into the proxy), so I want to do something like

        iptables -t nat -A OUTPUT -p tcp --destination-port 81 -j DNAT --to-destination xxxx:80

    The problem lies with the xxxx part. How do I change the destination port without changing the destination IP? Or am I doing this setup completely wrong? I am learning, after all, and constructive criticism is definitely appreciated.
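    A hedged sketch of what might work here (my reading of iptables(8), not tested in this exact setup): the address part of --to-destination is optional, so giving DNAT a bare port should rewrite only the port and leave each packet's original destination IP untouched.

        # Assumption: iptables(8) documents --to-destination as
        # [ipaddr[-ipaddr]][:port[-port]] with the address part optional,
        # so this rewrites the port and keeps the destination IP as-is.
        iptables -t nat -A OUTPUT -p tcp --destination-port 81 -j DNAT --to-destination :80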

    Read the article

  • Windows: redirect traffic to a different DNS name, not a fixed IP address (hosts file equivalent)

    - by Arik Raffael Funke
    Using the Windows hosts file, one can redirect traffic for a domain to a specific IP address, e.g.

        domainA.com -- 127.0.0.1

    I am looking for a SIMPLE way to do the same, but for a target domain name rather than a target IP address (as the IP is dynamic), i.e.

        domainA.com -- domainB.com

    Addition: after getting some initial answers, I think I need to make my question more concrete. Situation: I have an application which looks up the IP of the target domain via DNS and then connects via HTTP to that IP address. I do not have control over any proxy settings.

    Option 1: Basically I am looking for a way to intercept DNS requests for domainA.com, launch a DNS request for domainB.com, and serve the IP of domainB.com in response to the request for domainA.com, without running an entire DNS server.

    Option 2: If a DNS server is the only way, in the alternative I would also be happy with a solution for defining a non-standard DNS server for a single application. Any ideas for wrapper applications, etc.?
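    One hedged workaround (my suggestion, not from the original question): a small scheduled script that resolves domainB.com and rewrites the hosts entry for domainA.com, which approximates Option 1 without running a DNS server. It is sketched below in shell for clarity; on Windows the same idea would target %SystemRoot%\System32\drivers\etc\hosts from a Task Scheduler job, and the domain names are placeholders from the question.

        #!/bin/sh
        # Hypothetical helper: pin domainA.com to domainB.com's current
        # address. Run periodically; any stale pin is dropped first.
        IP=$(getent hosts domainB.com | awk '{print $1; exit}')
        if [ -n "$IP" ]; then
            sed -i '/ domainA\.com$/d' /etc/hosts
            printf '%s domainA.com\n' "$IP" >> /etc/hosts
        fi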

    Read the article

  • Remote Desktop Services Gateway Issue

    - by AVandelay05
    Alright fellow techies, here's the rundown. I have installed Server 2008 R2 Remote Desktop Services on a VM in my network, with the following RD role services: RD Session Host, Licensing, Connection Broker, Gateway, and Web Access. When I set things up originally, the gateway server and RD Web Access worked as they should locally. After getting things running locally (remoteserver.domainname.local), I wanted to test things externally. From the outside I couldn't get things running (meaning I could connect to RD Web Access externally, but when I tried to run an app I would get the message "can't connect/find computer").

    Here's my setup for external access:

    - The VM has every RD Services role service installed on it, meaning it acts as gateway, RD Web Access, session host, licensing, the whole bit.
    - I made a self-signed certificate on the gateway server (gateway.domainname.net is the cert name).
    - Internally, I have a secondary forward-lookup zone called domainname.net with an A record "gateway" pointing to the local IP of the gateway server.
    - On our public DNS (domainname.net) I have an A record "gateway". This is to access RD Web Access externally.
    - In IIS I have the following authentication settings. RDWeb: all disabled except anonymous authentication. Rpc: all disabled except basic and Windows. RpcWithCert: all disabled except Windows authentication.
    - I have the necessary web access config in our SonicWALL TZ210 (HTTPS and RDP, external IP pointing to the local IP of the RDS server).
    - RAP and CAP have the correct user and computer groups, authentication, and allowed devices.

    After all of this, here's what happens when accessing externally. I can log in correctly to RD Web Access (I've tried a bogus login; I can't log in with it, so that's working properly). I see the apps available for use. I click on an app, click connect, the credential window opens, I put in the correct user creds, it tries to connect to the gateway server, but then the cred window comes back into view. I tried to reach a limit of failed logins, but never reached one, haha. So from the same external client machine I tried to connect to the gateway through a plain Remote Desktop connection. I put in the correct gateway settings in the RD window, tried to connect, and got the same results as I did in RD Web Access.

    I checked the event logs on the RD Services machine and saw the following event IDs around the time I tried to log in externally:

    ID 6037: "The program svchost.exe, with the assigned process ID 2168, could not authenticate locally by using the target name host/gateway.domainname.net. The target name used is not valid. A target name should refer to one of the local computer names, for example, the DNS host name. Try a different target name."

    ID 10, RADWebAccess: "RD Web Access was unable to access gateway.domainname.net, which is the server that is specified as running the RemoteApp and Desktop Connection Management service. Ensure that the computer account of the RD Web Access server is a member of the TS Web Access Computers security group on gateway.domainname.net."

    ID 4625: "An account failed to log on. Subject: Security ID: NULL SID. Account Name: -. Account Domain: -. Logon ID: 0x0. Logon Type: 3. Account For Which Logon Failed: Security ID: NULL SID. Account Name: Administrator. Account Domain: gateway.domainname.net. Failure Information: Failure Reason: Unknown user name or bad password. Status: 0xc000006d. Sub Status: 0xc000006a. Process Information: Caller Process ID: 0x0. Caller Process Name: -. Network Information: Workstation Name: USER-LAPTOP. Source Network Address: External IP. Source Port: 63125. Detailed Authentication Information: Logon Process: NtLmSsp. Authentication Package: NTLM. Transited Services: -. Package Name (NTLM only): -. Key Length: 0. This event is generated when a logon request fails. It is generated on the computer where access was attempted. The Subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe. The Logon Type field indicates the kind of logon that was requested. The most common types are 2 (interactive) and 3 (network). The Process Information fields indicate which account and process on the system requested the logon. The Network Information fields indicate where a remote logon request originated. Workstation name is not always available and may be left blank in some cases. The authentication information fields provide detailed information about this specific logon request. Transited services indicate which intermediate services have participated in this logon request. Package name indicates which sub-protocol was used among the NTLM protocols."

    I don't think the VM has a null SID; the SIDs of the VM and its physical host are different. I can access the blank page for RPC externally using the external gateway name. It seems like authentication is the problem. Also, is it a problem that the external name of the gateway server doesn't match the local name? The external name (which the cert is based on) is gateway.domainname.net and the internal name is remoteserver.domainname.local. That's the only thing I can think of that would be the problem, but the external name has to be different from the local one, right? Internally, I can ping gateway.domainname.net and it gives me the correct local IP of the server. Now, there isn't an actual computer with that name in AD, but I don't know how I would achieve that? I hope I've been clear... any help would be appreciated. I think I'm close to achieving this. :)

    Read the article

  • Custom Session Management using HashTable

    - by kaleidoscope
    ASP.NET session state lets you associate a server-side string or object dictionary containing state data with a particular HTTP client session. A session is defined as a series of requests issued by the same client within a certain period of time, and is managed by associating a session ID with each unique client. The ID is supplied by the client on each request, either in a cookie or as a special fragment of the request URL. The session data is stored on the server side in one of the supported session state stores, which include in-process memory, a SQL Server database, and the ASP.NET State Server service. The latter two modes enable session state to be shared among multiple web servers on a web farm and do not require server affinity.

    To implement a custom session handler, you need to follow this process:

    1. Create a class library with a class that inherits from the SessionStateStoreProviderBase abstract class.
    2. Implement all abstract methods in your derived class.
    3. Change the session mode to "Custom" in the web.config file and provide your namespace and class name as the provider:

        <sessionState mode="Custom" customProvider="Namespace.ClassName">
          <providers>
            <add name="Name" type="Namespace.ClassName" />
          </providers>
        </sessionState>

    For more details, please refer to the following links:

    http://msdn.microsoft.com/en-us/magazine/cc163730.aspx
    http://msdn.microsoft.com/en-us/library/system.web.sessionstate.sessionstatestoreproviderbase.aspx

    - Chandraprakash, S

    Read the article

  • Accessing the JSESSIONID from JSF

    - by Frank Nimphius
    The following code attempts to access and print the user session ID from ADF Faces, using both the session cookie that is automatically set by the server and the HttpSession object itself:

        FacesContext fctx = FacesContext.getCurrentInstance();
        ExternalContext ectx = fctx.getExternalContext();
        HttpSession session = (HttpSession) ectx.getSession(false);
        String sessionId = session.getId();
        System.out.println("Session Id = " + sessionId);

        Cookie[] cookies = ((HttpServletRequest) ectx.getRequest()).getCookies();
        //reset session string
        sessionId = null;
        if (cookies != null) {
            for (Cookie brezel : cookies) {
                if (brezel.getName().equalsIgnoreCase("JSESSIONID")) {
                    sessionId = brezel.getValue();
                    break;
                }
            }
        }
        System.out.println("JSESSIONID cookie = " + sessionId);

    Though apparently both approaches do the same thing, they differ in the value they return and in the conditions under which they work. The getId method, for example, returns a session value as shown below:

        grLFTNzJhhnQTqVwxHMGl0WDZPGhZFl2m0JS5SyYVmZqvrfghFxy!-1834097692!1322120041091

    Reading the cookie returns a value like this:

        grLFTNzJhhnQTqVwxHMGl0WDZPGhZFl2m0JS5SyYVmZqvrfghFxy!-1834097692

    Though both seem to be identical, the difference is the "!1322120041091" appended to the ID when reading it directly from the HttpSession object. Depending on the use case the session ID is looked up for, the difference may not be important. Another difference, however, is of importance: reading the cookie only works if the session ID is added as a cookie to the request, which is configurable per application in the weblogic-application.xml file. If cookies are disabled, the server appends the session ID to the request URL (right after the view ID reference). In that case no cookie is set, so the lookup returns empty. In both cases, however, the getId variant works.

    Read the article

  • Making WIF local STS work with your ASP.NET application

    - by DigiMortal
    Making a Windows Identity Foundation (WIF) STS test application work with your solution is not as straightforward a process as books and articles may suggest. There are some tricks and some configuration modifications you must make to get things working. Fortunately these steps are simple ones.

    1. Move your application to IIS or IIS Express

    If your application uses the development web server that ships with Visual Studio, then make your application use IIS or IIS Express instead. You get simple support for IIS Express in Visual Studio 2010 after installing Visual Studio 2010 SP1. You can read more in my blog posting "Visual Studio 2010 SP1 Beta supports IIS Express". NB! You don't have to move your dummy STS project to IIS.

    2. Change request validation mode to ASP.NET 2.0

    Next, you will get the following error when coming back from the dummy STS service:

        HttpRequestValidationException (0x80004005): A potentially dangerous Request.Form value was detected from the client.

    Open the web.config of your application and add the following line before </system.web>:

        <httpRuntime requestValidationMode="2.0" />

    Now you are done with configuring the web application to work with STS.

    Read the article

  • IIS: redirect everything to another URL, except for one directory

    - by DrStalker
    I have an IIS server (IIS 6, Windows Server 2003) that hosts the site http://www.foo.com. I want any request to http://foo.com (no matter what path/filename is used) to redirect to http://www.bar.org/AwesomePage.html, UNLESS the request is for http://www.foo.com/specialdir, in which case the HTML files in the local directory specialdir should be used. The problem I have is that once the redirect is set, it also affects /specialdir: even if I right-click on that directory and select "content should come from ... local directory", that change does not take effect, and the directory still shows as redirecting to http://www.bar.org/AwesomePage.html. The same thing happens if I try to set individual files to load from the local system instead of redirecting: IIS gives no error, but the change does not take effect and the files still show as being redirected. How can I set specialdir to override the redirection to the new URL?
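    A hedged sketch of one avenue worth trying (my assumption, not from the thread): in IIS 6 the HttpRedirect metabase property is inherited by child paths, so setting the redirect at the site root and then explicitly deleting the inherited property on the subdirectory sometimes behaves better than the GUI. Site ID 1 and the paths below are placeholders.

        REM Redirect the whole site (metabase site ID 1 assumed):
        cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs set W3SVC/1/ROOT/HttpRedirect "http://www.bar.org/AwesomePage.html"
        REM Remove the inherited redirect from the subdirectory:
        cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs delete W3SVC/1/ROOT/specialdir/HttpRedirect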

    Read the article

  • Problem connecting to SSH in office network

    - by Jeune
    I have trouble connecting via SSH to a server whenever I am in the office. I get as far as being prompted for my password, and then after that there's a long wait which always ends in:

        Write failed: Broken pipe

    This only happens when connecting via SSH. I use svn to commit files to a repository hosted on the same server and there are no hitches. Furthermore, this only happens in our office. When I go to the university, or whenever I am at home or at the coffee shop, I am able to connect seamlessly. There are no firewalls in our office; it's just a basic wireless router connected to a modem. It's the same setup I have at home, and I guess the same setup in the coffee shop. What are the causes of a broken pipe, and why does this phenomenon only happen when I connect via SSH and not when I work with svn on the same server?

    Updated: some debug logs after authentication:

        debug3: packet_send2: adding 48 (len 64 padlen 16 extra_pad 64)
        debug2: we sent a password packet, wait for reply
        debug1: Authentication succeeded (password).
        debug1: channel 0: new [client-session]
        debug3: ssh_session2_open: channel_new: 0
        debug2: channel 0: send open
        debug1: Entering interactive session.
        debug2: callback start
        debug2: client_session2_setup: id 0
        debug2: channel 0: request pty-req confirm 1
        debug1: Sending environment.
        debug3: Ignored env ORBIT_SOCKETDIR
        debug3: Ignored env SSH_AGENT_PID
        debug3: Ignored env TERM
        debug3: Ignored env SHELL
        debug3: Ignored env XDG_SESSION_COOKIE
        debug3: Ignored env WINDOWID
        debug3: Ignored env GNOME_KEYRING_CONTROL
        debug3: Ignored env GTK_MODULES
        debug3: Ignored env USER
        debug3: Ignored env LS_COLORS
        debug3: Ignored env LIBGL_DRIVERS_PATH
        debug3: Ignored env SSH_AUTH_SOCK
        debug3: Ignored env DEFAULTS_PATH
        debug3: Ignored env SESSION_MANAGER
        debug3: Ignored env USERNAME
        debug3: Ignored env XDG_CONFIG_DIRS
        debug3: Ignored env DESKTOP_SESSION
        debug3: Ignored env LIBGL_ALWAYS_INDIRECT
        debug3: Ignored env PATH
        debug3: Ignored env PWD
        debug3: Ignored env GDM_KEYBOARD_LAYOUT
        debug1: Sending env LANG = en_PH.utf8
        debug2: channel 0: request env confirm 0
        debug3: Ignored env GNOME_KEYRING_PID
        debug3: Ignored env MANDATORY_PATH
        debug3: Ignored env GDM_LANG
        debug3: Ignored env GDMSESSION
        debug3: Ignored env SHLVL
        debug3: Ignored env HOME
        debug3: Ignored env GNOME_DESKTOP_SESSION_ID
        debug3: Ignored env LOGNAME
        debug3: Ignored env XDG_DATA_DIRS
        debug3: Ignored env DBUS_SESSION_BUS_ADDRESS
        debug3: Ignored env LESSOPEN
        debug3: Ignored env WINDOWPATH
        debug3: Ignored env DISPLAY
        debug3: Ignored env LESSCLOSE
        debug3: Ignored env XAUTHORITY
        debug3: Ignored env COLORTERM
        debug3: Ignored env OLDPWD
        debug3: Ignored env _
        debug2: channel 0: request shell confirm 1
        debug2: fd 3 setting TCP_NODELAY
        debug2: callback done
        debug2: channel 0: open confirm rwindow 0 rmax 32768

    UPDATE 2011-07-14: I am able to connect to the server via SSH now. I didn't do anything, but that's because there is no one in the office but me! Having said that, is it possible that it has something to do with the number of sessions an SSH server can handle?

    UPDATE 2011-07-14: I tried to log in via SSH through PuTTY on another machine running Windows, alongside my current SSH session in Ubuntu, and now it seems my SSH session in Ubuntu has been dropped. I can't type into the terminal. Is PuTTY the culprit now?
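    For what it's worth, a hedged first thing to try (an assumption about the cause, not a confirmed fix): a consumer router that drops idle or long-lived NAT entries produces exactly this kind of broken pipe on interactive SSH sessions while short svn operations sail through. Client-side keepalives make the router keep seeing traffic:

        # ~/.ssh/config on the client; the interval values are illustrative
        Host *
            ServerAliveInterval 15
            ServerAliveCountMax 4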

    Read the article

  • Scalable Architecture for modern Web Development [on hold]

    - by Jhilke Dai
    I am doing research on scalable architecture for web development. The research is solely about supporting modern web development with a flexible architecture which can scale up/down according to need without losing any core functionality. By "modern web" I mean supporting all the devices used to access websites, where the loading mechanism differs per device. My thinking about the architecture is:

    For PC: accessing the web on a PC is faster, but it also depends on geo-location, so the application would by default check the capacity of the connection/browser and load the page according to it.

    For mobile: most mobile designs these days either hide information or use a different version of the same application, e.g. Facebook uses m.facebook.com, which is completely different from the PC version. Hiding things from mobile using JavaScript or CSS is not a solution, as it consumes bandwidth and makes the application slow.

    So, my architecture research is about serving one application which has different stacks: when the application receives a request, it sends the packaged stack matching that request. This way the load time for end users would be faster, and maintenance of the application would be easier for developers. I am researching a 4-tier (layered) architecture like:

    Presentation Layer
    Application Logic Layer -- the main logic layer, which stores the presentation stack
    Business Logic Layer
    Data Layer

    Main questions: Have you come across a similar architecture? If so, can you list links here? I'm very much interested in learning about those implementations, especially in real-world scenarios. Have you thought about similar architectures and tried your own ideas, or do you have any ideas regarding this? If so, I urge you to share. I am open to any discussion regarding this, so please feel free to comment/answer.

    Read the article

  • Subscribers locking during snapshot replication

    - by remi_bourgarel
    Hi all :) Here is my architecture: I have a main server and 4 replicas (the servers are synchronized with snapshot replication). The replication is working fine, except for one thing: during the replication, a lot of SELECT requests on one of the replicas fail with a timeout. Here are my questions: Can I avoid these timeouts? If I can't, how can I detect the start and the end of the replication, so I can redirect all the requests for that replica to the main server? Thanks. Sorry if you have already answered this kind of question, but I couldn't find anything.

    Read the article

  • SSL: type of certificate for the static content domain (images + JS)

    - by Alexl
    Hi, I have a page that is served over SSL and has a valid extended validation certificate (mainpage.com). But this page requests some static content, basically images and JS, from another domain (page-static.com). Currently I only have a certificate for mainpage.com, so when I request the page I get an invalid-SSL warning, because the page contains content from www.page-static.com that is not covered by a valid certificate. What kind of certificate do I need for www.page-static.com? Does it need to be the same kind as for mainpage.com? These certificates are expensive (it's an extended validation certificate), so would a cheap certificate from GoDaddy do the trick? Another question: do both certificates have to be signed by the same root provider and/or use the same encryption key length (or can it be only 128 bits)? Thanks for your help.

    Read the article

  • Windows Server 2008R2 IIS7.5, Requests getting 401 status

    - by TLBH
    We have a web site running on Windows Server 2008 R2 / IIS 7.5, and we are seeing several errors reported from our global error handler saying "Request timed out". I matched one up to the IIS log file and see the request took 135116 (presumably milliseconds), had an sc-status of 401, an sc-substatus of 0, and an sc-win32-status of 64. Two requests failed in this way, but lots of surrounding requests (1979 successful requests vs. 2 failures) for the same user went through perfectly fine, with the same cs-username, which makes a 401 seem a little odd. The target of the requests is an ASP.NET web service's web method called by the .NET client library; it's called a lot of times per user (3 times per second) to keep a page updated. We're getting some users reporting a freezing effect, and I think this may be the cause. Any ideas? Peter
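    A hedged diagnostic sketch (my suggestion, not from the thread): IIS failed request tracing can show which module produced the 401 and where the 135-second wait went. The ad hoc tracing commands below are from memory, and the site name and path are placeholders, so verify them against the appcmd documentation before use:

        REM Enable failed request tracing for the site:
        %windir%\system32\inetsrv\appcmd configure trace "Default Web Site" /enablesite
        REM Capture requests to the web service that return 401 or run long:
        %windir%\system32\inetsrv\appcmd configure trace "Default Web Site" /enable /path:service.asmx /statuscodes:401 /timetaken:00:00:30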

    Read the article

  • Shibboleth + IIS and Pound Reverse Proxy

    - by boburob
    Having a bit of a problem getting Shibboleth (SSO) working with ADFS and Pound. The main problem seems to be this: the website address will be https://website.domain.com, and Pound terminates the SSL and forwards the traffic to the webserver on a different port (http://server.domain.com:8888). I have set up Shibboleth to protect the address http://server.domain.com:8888, which allows me to retrieve metadata, and it all seems to be working fine. However, ADFS is configured to protect the https website, so when Shibboleth attempts to receive information from ADFS I get nothing except the following error:

        A token request was received for a relying party identified by the key 'https://msstagrevproxy.cwpintranet.com/shibboleth', but the request could not be fulfilled because the key does not identify any known relying party trust. Key: https://msstagrevproxy.cwpintranet.com/shibboleth

    I am not really sure how I can work around this, as to retrieve the metadata from Shibboleth I have to use the https address, but that address does not actually exist in Shibboleth or IIS. Has anyone had any experience with this before, or with using any other SSO behind a reverse proxy that works?
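    A hedged guess at the usual shape of the fix (an assumption based on how the Shibboleth SP derives its entityID and handler URLs, not a verified answer for this setup): pin the SP to the public https name in shibboleth2.xml, so the identifier it presents to ADFS matches the relying party trust regardless of the internal http port Pound forwards to:

        <!-- shibboleth2.xml fragment; only the host names come from the
             question, everything else is illustrative -->
        <ApplicationDefaults entityID="https://website.domain.com/shibboleth">
            <!-- handlerSSL makes generated handler URLs use https even though
                 the SP itself sees plain http on port 8888 -->
            <Sessions handlerURL="/Shibboleth.sso" handlerSSL="true">
            </Sessions>
        </ApplicationDefaults>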

    Read the article

  • Apache SSL losing session over load balancer

    - by SaltyNuts
    I have two physical Apache servers behind a load balancer. The load balancer was supposed to be set up so that a user would always be sent to the same physical server after the first request, to preserve sessions. This worked fine for our web apps until we added SSL to the setup. Now the user can successfully log in and see the home page, but clicking on any other internal link logs the user right out. I traced the issue to the fact that while the initial authentication is performed by server 1, clicking on internal links leads to the request being sent to server 2. Server 2 does not share sessions with server 1, and the user is kicked out. How can I fix it? Do I need to share sessions between the two servers? If so, could you point me to a good guide for doing this? Thanks.

    Read the article

  • Running Sonatype Nexus in Tomcat 7.0, Tomcat blocking PUT requests

    - by gdm
    I was previously running Nexus 1.8 on OS X and uploading jars for releases without any issues. The OS X box died, so I moved to a FreeBSD server. Since Nexus doesn't have binaries for FreeBSD, I decided to run it in my Tomcat container. So now I have Nexus 1.9 set up in Tomcat 7.0 on FreeBSD. Everything is working well, except that I can't upload jars to my release or snapshot repositories. If I try via Hudson, I get a 401 error (and no further details). If I try manually via curl, I get an error message back from Tomcat: "This request requires HTTP authentication." Why is Tomcat giving this error, and how do I stop it? If I look in the Nexus logs I can see that the PUT request doesn't even reach Nexus; Tomcat is intercepting it.
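    A hedged diagnostic (my sketch; the URL, repository layout, and credentials are placeholders): replay the PUT by hand with verbose output and look at the WWW-Authenticate header on the 401 response. Its realm should tell you whether Tomcat's own realm or Nexus answered, which narrows down where the request is being intercepted:

        # -v prints response headers; the realm in WWW-Authenticate reveals
        # which component rejected the request.
        curl -v -u deployer:secret --upload-file my-artifact.jar \
            "http://server.example:8080/nexus/content/repositories/releases/com/example/my-artifact/1.0/my-artifact-1.0.jar"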

    Read the article

  • Webserver not giving the correct response to cURL and other HTTP request methods [migrated]

    - by Maxim
    I am trying to make a REST request to an external webserver by using this code:

        <?php
        $user = 'USER';
        $pass = 'PASS';
        $data = "MYDATA";

        $ch = curl_init('URL');
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
        curl_setopt($ch, CURLOPT_HTTPHEADER, array(
            'Content-Type: application/json',
            'Content-Length: ' . strlen($data))
        );
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
        curl_setopt($ch, CURLOPT_VERBOSE, true);

        if (!($res = curl_exec($ch))) {
            echo('[cURL Failure] ' . curl_error($ch));
        }
        curl_close($ch);
        echo($res);

    Now this is a cURL request; however, I have tried different methods to test my result, and they all give me a 403 Forbidden error response from the webserver. I do get a 200 response when I run the same code on any other webserver (localhost, webserver2, ...). Therefore I think there is something wrong with my webserver: it might be disallowing/caching the POST parameters that I provide, because sometimes it returns a 200 response, but most of the time it returns the 403. This is the response I get:

        HTTP/1.1 403 Forbidden
        Accept-Ranges: bytes
        Content-Type: application/json; charset=UTF-8
        Date: Sat, 26 Oct 2013 13:56:37 GMT
        Server: Restlet-Framework/2.1.3
        Vary: Accept-Charset, Accept-Encoding, Accept-Language, Accept
        Content-Length: 77
        Connection: keep-alive

        {"error":"ForbiddenOperationException","errorMessage":"Invalid credentials."}

    It says "Invalid credentials"; however, I provide the correct credentials, and I can confirm them because the code works on other servers. Since this is a crucial part of the script that I use for client registration, I assume there is something wrong with the POST parameters. I am running cPanel and have already uninstalled the following: varnish, apachebooster. I also recompiled PHP and enabled cURL and its dependencies, but nothing seems to resolve my problem. If more information is required then don't hesitate to ask me in the comments; I will respond very quickly, as I really need this. Any help is appreciated. Kind regards, Maxim
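    One hedged observation about the snippet above (mine, not from the thread): $user and $pass are defined but never attached to the request (there is no CURLOPT_USERPWD call), so unless the credentials travel inside the JSON body, the "Invalid credentials" answer may simply reflect that none arrive. A command-line equivalent is handy for comparing the working and failing servers directly; the URL, credentials, and payload are placeholders:

        # If this succeeds where the PHP fails, the problem is in the PHP
        # code or its environment rather than in the remote server.
        curl -v -u USER:PASS \
            -H 'Content-Type: application/json' \
            -d 'MYDATA' \
            'https://api.example.com/endpoint'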

    Read the article

  • How do I disable nginx sending messages to syslog?

    - by altman
    My nginx sends lots of messages to syslog, but I don't need them. In my nginx.conf:

        error_log /var/log/nginx-error.log notice;
        ......
        server {
            access_log off;
            location / {
                ....
            }
        }

    but in my /var/log/messages you see:

        Nov 22 23:25:09 cache3 nginx: 2011/11/22 23:25:09 [error] 3437#0: *32172530 kevent() reported about an closed connection (60: Operation timed out) while reading response header from upstream, client: , server: , request: "GET http://www.igoido012.com//vk HTTP/1.1", upstream: "http:////vk", host: "www.igoido012.com", referrer: "http://www.baidu.com/"
        Nov 22 23:25:09 cache3 nginx: 2011/11/22 23:25:09 [error] 3437#0: *32099531 upstream timed out (60: Operation timed out) while reading response header from upstream, client: , server: , request: "GET http://t.web2.qq.com/channel/poll?msg_id=0&clientid=431509&t=1321975433305 HTTP/1.1", upstream: "http://:80/channel/poll?msg_id=0&clientid=431509&t=1321975433305", host: "t.web2.qq.com", referrer: "http://t.web2.qq.com/proxy.html?v=20110331001"

    How can I prevent nginx from sending messages to my syslog?

    Read the article

  • How is this modsec rule getting triggered?

    - by BipedalShark
    I made a GET request to the URL http://domain.tld/test/docs/index.php?create_table=1&step=2 and got a 403 response code. It turns out this modsec rule is getting triggered:

        Access denied with code 403 (phase 2). Pattern match "(?:ogg|gopher|zlib|(?:ht|f)tps?)\:/" at ARGS:gltr_redir. [file "/opt/mod_security/10_asl_rules.conf"] [line "827"] [id "340153"] [rev "22"] [msg "Generic PHP code injection protection via ARGS 3"] [severity "CRITICAL"]

    I would assume ARGS refers to GET/POST data, but there's no gltr_redir in the query string, and being a GET request, there's obviously no POST data. So how is this rule being triggered?
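    Two hedged suggestions (mine, not from the thread): the ModSecurity audit log usually records exactly which argument matched, which would show where gltr_redir came from; and if it turns out to be a false positive, the stock mechanism is a scoped rule removal. The directives below are standard ModSecurity 2.x under Apache, but the path scoping and log location are illustrative:

        # Check the audit log first (location varies by install), e.g.:
        #   grep -B5 340153 /var/log/httpd/modsec_audit.log
        # If it is a false positive, disable just this rule for this path:
        <LocationMatch "^/test/docs/index\.php$">
            SecRuleRemoveById 340153
        </LocationMatch>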

    Read the article

  • Apache Serving 403 Forbidden after OS X Snow Leopard Upgrade to Version 10.6.6

    - by Ian Oxley
    I've just upgraded my MacBook Pro to OS X Snow Leopard version 10.6.6, and now Apache is misbehaving:

    - requests to http://localhost/ generate a 403 Forbidden response -- FIXED
    - requests to any of my virtual hosts seem to generate a 200 OK response, but contain zero bytes

    Some further info that might be useful:

    - I'm using the Apache that comes bundled with OS X.
    - I'm using PHP from http://www.entropy.ch/software/macosx/php/ (which is in /usr/local/bin).
    - I've had a look at the Apache error log, and the only error seems to be the following:

        [notice] child pid 744 exit signal Segmentation fault (11)

    I'm completely stumped by this. Any help would be much appreciated.

    UPDATE: I've managed to resolve the 403 Forbidden error thanks to http://techtrouts.com/mac-os-x-105-web-sharing-forbidden-403-on-httplocalhostusername/. I'm still having the second problem though, for any request, e.g. it now happens when I request http://localhost
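    A hedged way to narrow down the segfault (my suggestion; a third-party PHP module that predates the 10.6.6 update is a common suspect, but that is an assumption): check that the config still parses, then temporarily disable the entropy.ch PHP module and re-test a virtual host.

        # Validate the configuration:
        apachectl configtest
        # In /etc/apache2/httpd.conf, comment out the third-party module line
        # (the path below is the entropy.ch default and may differ):
        #     LoadModule php5_module /usr/local/php5/libphp5.so
        # Then restart and request a virtual host again:
        sudo apachectl restart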

    Read the article

  • How to send connection type (SSH|Telnet) info in RADIUS Access-Requests on a Cisco router?

    - by Gianni Costanzi
    I've configured the following on a Cisco router:

        aaa authentication login default group radius local
        !
        radius-server host x.x.x.x auth-port 1012 acct-port 1013
        radius-server host y.y.y.y auth-port 1012 acct-port 1013
        radius-server retransmit 1
        radius-server timeout 3
        radius-server key 7 xxxxxxxxx

    I'd like to be able to specify some RADIUS options in order to add information about the type of connection for which a user is being authenticated, i.e. I'd like the RADIUS server to receive, in the Cisco router's RADIUS Access-Request, information about whether the connection is SSH or Telnet. I'd like to find something that adds this info to the Access-Request automatically, without specific configuration on VTY lines dedicated to SSH and to Telnet. Any ideas about that?
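    A hedged starting point rather than an answer (my suggestion): before deciding what is missing, inspect which attributes the router already puts in its Access-Requests, since NAS-Port-Type and related attributes may or may not distinguish the two protocols on VTY lines. On IOS, the usual way (command availability varies by release) is:

        ! From an enabled exec session; open one SSH and one Telnet login
        ! and compare the attribute dumps in the debug output.
        debug radius authentication
        terminal monitor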

    Read the article

  • Pushing complete notifications to client

    - by ton.yeung
    So with CQRS, we accept that consistency is eventual. However, that doesn't mean the user has to continually poll, or that "eventual" means an update has to take more than 500 ms to sync. For the sake of UX, we want to at least give the illusion of consistency, or, where that's not possible, be as transparent as possible. With that in mind, I have this setup: an AngularJS web client, which consumes Web API RESTful services, which send commands to NServiceBus command handlers, which save to NEventStore, which dispatches events to NServiceBus event handlers, which send messages to a SignalR hub, which pushes notifications to the AngularJS web client.

    So with that setup, theoretically:

    - someone initiates a request
    - the server validates the request
    - the server sends out the necessary commands

    In the meantime the client:

    - gets a 200 response
    - updates the view: "working on it"
    - gets a message some time later: "done, here's the updated data"

    Here's where things get interesting: each command could spawn multiple events. Not sure if this is a serious no-no or not, but that's how it is currently. For example, a new customer spawns CustomerIDCreated, CustomerNameUpdated, CustomerAddressUpdated, etc. Which event handler needs to notify the client? Should all of them, in a progress-bar style update?

    Read the article

  • JUnit Testing in Multithread Application

    - by e2bady
    This is a problem my team and I face in almost all of our projects. Testing certain parts of the application with JUnit is not easy, and you need to start early and stick to it, but that's not the question I'm asking. The actual problem is that with n threads, locking, possible exceptions within the threads, and shared objects, the task of testing is not as simple as testing the class: it is testing it under the endless possible situations that arise from threading. To be more precise, let me tell you about the design of one of our applications: when a user makes a request, several threads are started, each of which analyses a part of the data to complete the analysis. These threads run for a certain time depending on the size of the chunk of data to analyse (the data is endless and of uncertain quality), or they may fail if the data was insufficient or of lacking quality. After each completes its analysis, it calls upon a handler, which decides after each thread terminates whether the collected analysis data is sufficient to deliver an answer to the request. All of these analysers share certain parts of the application (some parts because the instances are very big, only a certain number can be loaded into memory, and those instances are reusable; some parts because they hold a standing connection, where connecting takes time, e.g. SQL connections), so locking is very common (done with reentrant locks). While the application runs very efficiently and fast, it's not very easy to test under real-world conditions. What we do right now is test each class and its predefined conditions, but there are no automated tests for interlocking and synchronization, which in my opinion is not very good for quality assurance. Given this example, how would you handle testing the threading, interlocking, and synchronization?

    Read the article
