Search Results

Search found 17054 results on 683 pages for 'jms request reply'.

Page 276/683 | < Previous Page | 272 273 274 275 276 277 278 279 280 281 282 283  | Next Page >

  • TLS_REQCERT and PHP with LDAPS

    - by John
    Problem: Secure LDAP queries via command-line and PHP to an AD domain controller with a self-signed certificate. Background: I am working on a project where I need to enable LDAP look-ups from a PHP web application to a MS AD domain controller that is using a self-signed certificate. This self-signed certificate is also using a domain name that is not a FQDN - think of something like people.campus as the domain name. The web application would take the user's credentials and pass them on to the AD domain controller to verify whether the credentials match. This seems simple, but I am having problems trying to get PHP and the self-signed certificate to work. Some people have suggested that I change the TLS_REQCERT setting from "request" to "never" within the OpenLDAP configuration. I am concerned that this might have larger implications, such as a man-in-the-middle attack, and I am not comfortable changing this setting to never. I have also read in some places online that one can take a certificate and place it as a trusted source within the OpenLDAP configuration file. I am curious whether that is something I could do in my situation. Can I, from the command line, obtain the self-signed certificate that the AD domain controller is using, save it to a file, and then have OpenLDAP use that file for the trust that it needs, so that I do not need to adjust the setting from request to never? I do not have access to the AD domain controller and as a result cannot export the certificate. If there is a way to obtain the certificate from the command line, what commands do I need to use? Is there an alternate method of handling this issue that would be better in the long run? I have some CentOS servers and some Ubuntu servers that I am working with to try to get this going. Thanks in advance for your help and ideas.
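
    A minimal sketch of the command-line approach (assuming the domain controller answers LDAPS on port 636 and that people.campus resolves from the web server; the file path is just an example): use openssl s_client to capture the certificate the DC presents, save it, and point the OpenLDAP client library at it so TLS_REQCERT can stay at its default.

        # grab the certificate the domain controller presents on the LDAPS port
        echo | openssl s_client -connect people.campus:636 -showcerts 2>/dev/null | openssl x509 > /etc/openldap/cacerts/ad-dc.pem

        # then in /etc/openldap/ldap.conf (Ubuntu: /etc/ldap/ldap.conf)
        TLS_CACERT /etc/openldap/cacerts/ad-dc.pem

    Because the certificate's subject (people.campus) is not a FQDN, hostname checking may still fail unless PHP's ldap_connect() is given exactly the name the certificate was issued for.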

    Read the article

  • belkin wireless router connection drops for ~60sec every ~15 mins

    - by j j
    I have a Belkin Wireless-G router, model F5D7230-4. About every 10-15 minutes, for about 1-1.5 minutes, I won't be able to browse any web sites. During this period, I usually get replies from "ping google.com": maybe 4/5 times I'll get a reply, 1/5 times I don't. I have changed the router DNS entries from "get DNS server from ISP" to 8.8.8.8, Google's DNS, and that hasn't fixed the situation. This happens on several computers running Windows 7, XP, Ubuntu, and a Mac, so it is definitely NOT an issue with a computer configuration. Any ideas?
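
    One way to pin down whether the drops are on the wireless leg or the WAN side is to log pings against both the router's LAN address and an outside IP at the same time (a rough sketch; 192.168.2.1 is assumed as the Belkin's default LAN address, adjust to yours):

        # run on one of the affected machines; compare the two columns when a drop happens
        while true; do
          ts=$(date '+%F %T')
          ping -c1 -W2 192.168.2.1 >/dev/null 2>&1 && r=up || r=DOWN
          ping -c1 -W2 8.8.8.8     >/dev/null 2>&1 && w=up || w=DOWN
          echo "$ts router=$r wan=$w" >> ~/net-drop.log
          sleep 5
        done

    If the router stays reachable while the outside IP drops, the problem is upstream of the Belkin rather than the wireless itself.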

    Read the article

  • Requests are making it to my app server, but not into node.js -- why?

    - by Zane Claes
    I detailed in this question on StackOverflow how some random requests are not making it from the client to my Node.js app server, resulting in a gateway timeout. In summary, identical requests are, at random, not even making it far enough to trigger a console.log() in my first line of express middleware. I need to narrow down the problem, though, to find out WHERE the traffic is being lost and it was suggested that I try a packet sniffer on my app servers. Here's my setup: 2x Load Balancers (m1.larges) 2x node.js servers (also m1.large) Here's what's interesting/unusual: the node.js servers started as PHP servers with an Apache stack and continue to serve PHP files for my domain (streamified.me). However, I use a little httpd.conf magic on the app servers so that requests to api.streamified.me get routed over port 8888 to the node.js server: RewriteCond %{HTTP_HOST} ^api.streamified.me RewriteRule ^(.*) http://localhost:8888$1 [P] So, the request hits the load balancer = goes to an app server = gets routed to port 8888 if it's intended for the API = gets handled by node.js So, in the same httpd.conf file, I turned on RewriteLogLevel 5 and then created a simple PHP+CURL script on my localhost to hit my api.streamified.me with a random URL (which should cause node.js to trigger a simple "not found" response) until it resulted in a Gateway timeout. Here, you can see that it has happened -- and the rewrite log shows that the request was definitely received by the app server and forwarded to port 8888... but it was never received by node.js (or, at least, the first line of code in the first line of middleware never gets it...) Image Link: http://i.stack.imgur.com/3OQxS.png
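
    A packet capture scoped to the proxied hop would show whether the rewritten request ever reaches node.js. Since the RewriteRule proxies to http://localhost:8888, that traffic should be on the loopback interface of the app server (a sketch; interface names may differ):

        # on the app server: watch the Apache -> node.js hop
        sudo tcpdump -i lo -nn -s 0 -A 'tcp port 8888'

        # and, in a second terminal, the traffic arriving from the load balancer
        sudo tcpdump -i eth0 -nn -s 0 'tcp port 80'

    If a timed-out request shows up on eth0 but never on lo port 8888, the loss is inside Apache/mod_proxy; if it shows up on lo but node.js never logs it, the problem is on the node side of the socket.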

    Read the article

  • Postfix Vacation.pl with local users

    - by Simiyu
    I am trying to set up the vacation.pl script on a mail server which has local users only (since there are only 10 users). I have installed the SquirrelMail plugin and the Auto respond option is available for the users, but when an email is sent to the addresses no auto-reply email is sent to the sender. There are also no logs in the /var/log/vacation folder which I created, nor in the normal log files. Most of the examples online refer to virtual users; can it work with local users, and if so, how? regards, Arthur
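
    For local (non-virtual) users the classic route does not need the virtual-alias plumbing most howtos describe: Postfix's local delivery agent can pipe a copy of each message to the vacation program from the user's ~/.forward file. A sketch only, assuming the vacation binary lives at /usr/bin/vacation (the exact path, program name and init flag depend on which vacation package or script is installed):

        # as the user (e.g. arthur): initialise the reply database and message
        vacation -i        # some implementations use -I instead

        # ~arthur/.forward -- keep local delivery and pipe a copy to vacation
        \arthur, "|/usr/bin/vacation arthur"

    If the SquirrelMail plugin expects a different delivery hook (for example a per-user .vacation.msg read by its own script), its documentation should say which one; the complete absence of log entries suggests nothing is being piped to the script at all.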

    Read the article

  • Domain Trust 2008 to 2003

    - by nick3216
    I'm having trouble setting up the trust relationship between a Windows Server 2003 and a Windows Server 2008 AD. Domain "a" is at the Windows Server 2003 forest functional level. Domain "b" is at the Windows Server 2008 forest functional level. I can set up the incoming side of the trust relationship on domain "a" so that it trusts domain "b". Try as I might, on domain "b" I can't set up the outgoing side of the trust relationship to domain "a". The GUI gives an unhelpful 'The request is not supported'. I'm not sure netdom is being more or less helpful, as it refers me to FilterSids: netdom trust /add b /uo:b\admin /po:* /d:a /ud:a\admin /pd:* /oneside:trusting To improve the security of this external trust, security identifier (SID) filtering is enabled, however, if users have been migrated to the trusted domain and their SID histories have been preserved, you may choose to turn off this feature. For more information about SID filtering and how to turn it off, see the help for netdom trust /FilterSids or see Help and Support. The request is not supported. The command failed to complete successfully. I say 'less helpful' because Windows Server 2008 doesn't support the /FilterSIDs option. How can we force creation of this trust? Edit: Just to clarify, I've checked that the [Computer Configuration\Windows Settings\Security Settings\Local Policies\Security Options] "Network access: Allow anonymous SID/Name translation" setting is enabled on both sides of the trust, as per http://social.technet.microsoft.com/Forums/en/winserverDS/thread/cc61fc25-3569-4413-bbfd-92390eb31118
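
    As a side note, the 2003-era /FilterSids switch that the error text mentions was replaced on Server 2008 by /quarantine on netdom trust, so SID filtering on an existing external trust can still be toggled from the 2008 side (domain names below are placeholders for "b" and "a"):

        netdom trust b /domain:a /quarantine:No /usero:b\admin /passwordo:*

    That only applies once the trust exists, though; 'The request is not supported' during creation usually points at something else (DNS/name resolution between the forests, or the anonymous SID/Name translation policy already checked above), so this is offered as background rather than the fix.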

    Read the article

  • VMWare web UI intermittent access on CentOS

    - by PeteWilliams
    I've got a CentOS 5.2 server that I'm trying to get set up as a development environment. As part of this, I planned to install VMware Server 2 and set up several virtual development servers. I've got as far as installing VMware Server 2, but access to the remote control panel is only working intermittently. If I access it through Firefox at https://127.0.0.1:8333/ui/# it usually says either: "Connection interrupted: connection was reset before the page loaded" or "Firefox can't establish a connection to the server at 127.0.0.1". But every now and then it lets me in, and I'll manage a few clicks in the web UI before it kicks me out with the following error: "The server could not complete a request (HTTP 0). The server encountered an unexpected condition that prevented it from fulfilling the request. If this problem persists, please contact your system administrator." I've done all the updates available in CentOS except one OpenOffice one that is causing a conflict, and I re-ran vmware-config.pl after updating the kernel, though I went with all the defaults as I don't really know what I'm doing! I've since rebooted and nothing changed. I've also tried accessing the control panel remotely from another machine on the network and the results are the same. Does anyone have any ideas what might be causing this and how I can resolve it? I'm afraid I'm a developer playing at sys-admin, so I may be missing something obvious! Many thanks Pete Update: I have now reinstalled both the operating system and VMware and I'm still getting the same issue. I wonder if it's a result of the settings I'm putting in on the config.pl script..?
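
    A couple of checks that may narrow it down (the log path and init-script name are assumptions based on typical VMware Server 2 installs on Linux, so adjust to what is actually on the box):

        # is anything listening on the web-UI port, and which process owns it?
        netstat -tlnp | grep 8333

        # watch the management daemon's log while reproducing the dropouts
        tail -f /var/log/vmware/hostd.log

        # restart just the management/web-UI service rather than the whole host
        /etc/init.d/vmware-mgmt restart

    If the management daemon is crashing and being respawned, that pattern (brief windows of access between restarts) should show up clearly in the log.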

    Read the article

  • Apache Bench reports different result with same page

    - by Aspis
    I'm running into a little problem base-lining an Apache2/fcgi/php-fpm server I am setting up. 1) If I run: ab -n 15000 http://mysite.com/index.php, Apache Bench returns Time per request: 41ms but document length: 0 bytes and html transferred: 0 bytes, with a transfer rate of 7.9Kb/s. 2) If I run: ab -n 15000 http://mysite.com/, Apache Bench returns Time per request: 83ms along with the accurate document length and html transferred total. The APC cache status reports identical hit counts from both tests. Also, Apache Bench reports no errors in either case. Overall, no errors on any test sites and all logs are clean, etc. DocumentRoot is set to index.php so I would expect both of these test runs to produce a similar result. My two questions are: 1) why the discrepancy? 2) which is the correct result? I've seen plenty of results like test 1 posted (without question) but frankly, from my own experience and that of others, accurate testing is hard to come by, even without goofy issues like this.
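
    ab can show what it is actually getting back for each variant, which usually explains a 0-byte document length (often a redirect or an empty response rather than the rendered page). A quick check with a handful of requests and the verbosity turned up far enough to print response headers:

        ab -v 4 -n 10 http://mysite.com/index.php
        ab -v 4 -n 10 http://mysite.com/

    If the /index.php run comes back as, say, a 301/302 with an empty body, the 41ms figure is timing the redirect rather than the page, and the second run is the meaningful baseline.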

    Read the article

  • SQL 2008: managing two instances on a single physical server

    - by Rajeev
    I need some clarification about managing SQL Server 2008. The scenario is as follows: I have one physical Windows server at the primary site, and I want to have two different application databases on it, so should I create two instances on the same server or have a different server for the other database? The first database is for management purposes while the second would be used for reporting purposes. There is a second database at the secondary site, which will be in passive mode, and I intend to connect them through MSCS. Now, can I have both instances on a single server and will both work fine? The management database will be used more. Please reply soon. Can both instances have dedicated resources allocated to them?

    Read the article

  • Timeouts with Apache & PHP where each virtual host has its own user process

    - by acemtp
    I have 10 unix users in /home/. Each user is for a specific subdomain; for example, user www in /home/www/public_html is for www.mywebsite, and blog in /home/blog/public_html is for blog.mywebsite. 90% is PHP and 10% RoR. For the moment I use Apache + FastCGI, which uses SuexecUserGroup to set up the process with the right user. It seems to work, but I have a strange behavior where, after a few hours/days, the server stops answering (timeout) even though the CPU load is still very low (it's a big server). The Apache status displays a lot of "W" (Sending Reply) states, but there are still 50 idle workers, so it should be able to answer. On the older server (a lot slower) we had only one user and used mod_php, and we never had this issue. Is there another way to do that without FastCGI and SuexecUserGroup, or do you know what's going wrong?
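
    When the hang happens, Apache's status page can show exactly which requests the "W" workers are stuck on (client, vhost, URL and how long they have been busy), which usually distinguishes a wedged PHP/FastCGI backend from an Apache-side limit. A sketch of enabling it (mod_status must be loaded; 2.2-style access control shown to match an Apache of that era):

        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>

    Then, while the server is unresponsive, fetch http://localhost/server-status from the box itself; if every stuck worker is sitting on the same vhost, the per-user FastCGI pool for that vhost (process count, idle timeout) is the first thing to look at.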

    Read the article

  • How to limit the number of open TCP streams from the same IP to a local port?

    - by JMW
    I would like to limit the number of concurrent open TCP streams from the same IP to the server's (local) port - let's say 4 concurrent connections. How can this be done with iptables? The closest thing that I've found was: In Apache, is there a way to limit the number of new connections per second/hour/day? iptables -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --set iptables -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --update --seconds 86400 --hitcount 100 -j REJECT But this limitation just measures the number of new connections over time. This might be good for controlling HTTP traffic, but it is not a good solution for me, since my TCP streams usually have a lifetime between 5 minutes and 2 hours. Thanks a lot in advance for any reply :)
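
    The connlimit match counts connections that are currently open per source address (or per address block), rather than new connections over a time window, which fits long-lived streams. A sketch for capping it at 4 on port 80 (adjust the port and the reject action to taste):

        iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 4 -j REJECT --reject-with tcp-reset

    Only new connection attempts (--syn) beyond the fourth from a given IP are refused; existing streams are left alone. --connlimit-mask can widen the grouping from a single IP to a whole prefix if needed.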

    Read the article

  • How can I setup OpenVPN with IPv4 and IPv6 using a tap device?

    - by Lekensteyn
    I've managed to set up OpenVPN for full IPv4 connectivity using tap0. Now I want to do the same for IPv6. Addresses and network setup (note that my real prefix is replaced by 2001:db8): 2001:db8::100:0:0/96 is my assigned IPv6 range; 2001:db8::100:abc:0/112 is the OpenVPN IPv6 range; 2001:db8::100:abc:1 is tap0 on the server (set as gateway on the client); 2001:db8::100:abc:2 is tap0 on the client; 2001:db8::1:2:3:4 is the gateway for the server. The topology: the home laptop (tap0: 2001:db8::100:abc:2/112, gateway 2001:db8::100:abc:1/112; running Kubuntu 10.10, OpenVPN 2.1.0-3ubuntu1) connects over wifi to the home router and across the Internet, through the OpenVPN tunnel, to the VPS (eth0: 2001:db8::1:2:3:4/64, gateway 2001:db8::1; tap0: 2001:db8::100:abc:1/112; running Debian 6, OpenVPN 2.1.3-2). The server has both native IPv4 and IPv6 connectivity; the client has only IPv4. I can ping6 to and from my server over OpenVPN, but not to other machines (for example, ipv6.google.com). net.ipv6.conf.all.forwarding is set to 1, and I've tried disabling net.ipv6.conf.all.accept_ra as well, without luck. Using tcpdump on both the server and client, I can see that packets are actually transferred over tap0 to eth0. The router (2001:db8::1) sends a neighbor solicitation for the client (2001:db8::100:abc:2) to eth0 after it receives the ICMP6 echo-request. The server does not respond to that solicitation, which causes the ICMP6 echo-request not to be routed to the destination. How can I make this IPv6 connection work?
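
    Since the /112 for the tunnel is carved out of the /64 that lives on eth0, the upstream router treats the client's address as on-link and expects something on that segment to answer neighbor solicitations for it. One common approach (a sketch for this address plan) is to have the server proxy NDP for the tunnel addresses:

        # on the VPS
        sysctl -w net.ipv6.conf.all.proxy_ndp=1
        ip -6 neigh add proxy 2001:db8::100:abc:2 dev eth0

    With the proxy entry in place the server answers the router's solicitation with its own MAC, the return path works, and forwarding (already enabled) does the rest. The alternative is to have a separate prefix routed to the VPS instead of carving it out of the on-link /64, which removes the need for NDP proxying entirely.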

    Read the article

  • heimdal error Decrypt integrity check failed for checksum type

    - by user880414
    When I try to authenticate with heimdal-kdc, I get this error in the KDC log: (enctype aes256-cts-hmac-sha1-96) error Decrypt integrity check failed for checksum type hmac-sha1-96-aes256, key type aes256-cts-hmac-sha1-96 - and authentication fails. But authentication with kinit works. My krb5.conf is [logging] default = FILE:/var/log/krb5libs.log kdc = FILE:/var/log/krb5kdc.log krb5 = FILE:/var/log/krb5.log [libdefaults] default_realm = AUTH.LANGHUA clockskew = 300 [realms] AUTH.LANGHUA = { kdc = AUTH.LANGHUA } [domain_realm] .langhua = AUTH.LANGHUA [kdc] and when I add this line to krb5.conf (in the [kdc] section) require-preauth = no I get this error: krb5_get_init_creds: Client have no reply key

    Read the article

  • ARP replies contain wrong MAC address

    - by Jayen
    I've got a robot running Linux with wired and wireless adapters. When I boot up, it connects to the wireless fine. When I assign an IP to the wired interface (either statically or with DHCP), it looks like it works: ifconfig shows a proper IP and route shows proper routes. However, when I do an ARP request for the wired IP, the ARP reply contains the wireless MAC. There's no bridge running on the robot, so why don't I get the wired MAC? When the wire is disconnected, the wired IP still replies to ping. Why is the robot replying over the wireless interface to IP requests on the wired one?
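
    By default Linux follows the weak host model for ARP: any interface will answer an ARP request for any local address, and replies are not tied to the interface that owns the IP. If that is what is happening here, restricting it is a sysctl change rather than a bridge or routing issue (a sketch; these can also be set per-interface instead of "all"):

        sysctl -w net.ipv4.conf.all.arp_ignore=1    # only answer ARP if the target IP is on the receiving interface
        sysctl -w net.ipv4.conf.all.arp_announce=2  # pick the best local source address/interface when sending ARP

    To make them persistent, the same keys go in /etc/sysctl.conf. The wired-IP-replies-while-unplugged symptom fits the same default behavior, since the wireless interface is happy to answer for the wired address.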

    Read the article

  • Nginx rewrite is not working as expected

    - by SamFisher83
    I am trying to learn how to use nginx and how to use its rewrite functionality Nginx seems to be doing the rewrite: 2012/03/27 16:30:26 [notice] 16216#0: *3 "foo.php" matches "/foo.php", client: 61.90.22.223, server: localhost, request: "GET /foo.php HTTP/1.1", host: "domain.com" 2012/03/27 16:30:26 [notice] 16216#0: *3 rewritten data: "img.php", args: "", client: 61.90.22.223, server: localhost, request: "GET /foo.php HTTP/1.1", host: "domain.com" but in my access log I am getting the following: 61.90.22.223 - - [27/Mar/2012:16:26:54 +0000] "GET /foo.php HTTP/1.1" 404 31 "-" "Mozilla/5.0 (Windows NT 6.1; rv:11.0) Gecko/20100101 Firefox/11.0" 61.90.22.223 - - [27/Mar/2012:16:30:26 +0000] "GET /foo.php HTTP/1.1" 404 31 "-" "Mozilla/5.0 (Windows NT 6.1; rv:11.0) Gecko/20100101 Firefox/11.0" There is an img.php in the root directory so I am not sure why I am getting a 404 error Here is part of the configuration block: rewrite foo.php img.php last; location / { try_files $uri $uri/ /index.html; } location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include fastcgi_params; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # location ~ /\.ht { deny all; }
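
    One detail worth checking (offered as a guess from the log excerpt, not a confirmed diagnosis): the rewrite's replacement has no leading slash, so the rewritten URI becomes "img.php" rather than "/img.php", and when the php location handles it the script path built from the document root no longer points at the real file, hence the 404 from the backend. A version with an anchored pattern and an absolute replacement would look like:

        rewrite ^/foo\.php$ /img.php last;

    If that still 404s, checking the PHP-FPM/error log for the file-not-found ("Primary script unknown") message shows exactly which path the backend was asked to run.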

    Read the article

  • Ubuntu hangs and cannot be soft-reset after resuming from standby mode

    - by Phuong Nguyen
    I have downgraded my xorg drivers so I can hibernate and suspend my Ubuntu machine smoothly. However, in some cases there's a problem: my Ubuntu system hangs. When I try to switch to console mode (Ctrl+Alt+F1), I cannot log in; the system always replies with an error whenever I press a key. When I press Ctrl+Alt+Del to perform a soft reset, here is what it says: [80141.320122] end_request: I/O error, dev sda, sector 193687181 init: control-alt-delete main process (5660) terminated with status 2. This error is not even recorded in syslog. I guess this should be a problem with my hard disk, since it said something about a bad sector. Exactly what kind of error is this?
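
    An end_request I/O error naming a specific sector on sda points at the drive itself rather than the suspend/resume path, so the usual first checks apply (smartmontools is assumed to be installed; the badblocks scan is read-only but slow and best run from a live CD if the root filesystem is on sda):

        # SMART health verdict, attributes and the drive's own error log
        smartctl -H -a /dev/sda

        # read-only surface scan reporting unreadable blocks
        badblocks -sv /dev/sda

    Climbing reallocated/pending sector counts in the SMART output, or the logged sector showing up in the badblocks run, would confirm a failing disk and explain why the machine wedges once it touches that region after resume.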

    Read the article

  • Apache returns HTTP 206 for GET /file.mp3

    - by javano
    I am making a GET request to an SSL-enabled site on Apache (so wireshark isn't giving me anything too useful). In my Apache SSL access log I see the following entry: 1.2.3.4 - my.username [15/Nov/2012:16:52:01 +0000] "GET /uploads/file.mp3 HTTP/1.1" 206 534400 "https://site.com/uploads/layla.mp3" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.19 (KHTML, like Gecko) Ubuntu/12.04 Chromium/18.0.1025.168 Chrome/18.0.1025.168 Safari/535.19" Why is this happening? I'm not familiar with the HTTP 206 response code, but searching the Internet I can see it is the response to a partial-content (range) GET request. If I understand correctly, my browser is making a partial GET request, not one for the full file. Is that correct? If so, is this a browser issue or is the web server instructing my browser to do so? I have tested in Firefox also, and in both browsers I am not sent the file. If I rename the file to file.jpg I can download it through my browser and rename it to .mp3 and it plays. How can I troubleshoot this issue?
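
    A 206 is only sent because the client asked for a byte range, which browsers do when they hand an .mp3 to their built-in media player instead of downloading it. Comparing against a client that sends no Range header is a quick way to separate server behavior from browser behavior (-k just skips certificate checks for a self-signed cert; drop it otherwise):

        # headers only, then a full fetch with verbose output
        curl -k -s -I 'https://site.com/uploads/file.mp3'
        curl -k -v -o /dev/null 'https://site.com/uploads/file.mp3'

    If curl gets a plain 200 with the full Content-Length, the server side is fine and the behavior is the browser choosing to stream rather than save; forcing a download is then typically a matter of sending a Content-Disposition: attachment header for that location rather than anything to do with the 206.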

    Read the article

  • Multiheaded X.org with a single workspace-pool

    - by blauwblaatje
    I've got an idea for x.org/$randomwindowmanager in combination with a multiheaded setup, but I haven't figured out how it should work. Also I don't really know where to place the feature request. Now for the idea. I've been working with screen (wikipedia:GNU_Screen) for some years now. One thing I like about it, is the fact that I can get a multi-display mode (screen -x), so you can have multiple terminals all connected to the same screen. The fun thing about it, is that you can get 2 terminals with the same content and switch my onscreen layout, without moving the terminals. I admit, in screen it's not extremely useful, but I think for a wm it can be. Imagine this. You've got two monitors and 4 workdesks. On one workdesk I've got my IDE with code, on the second one I've got the output, on the third one I've got the documentation and on the forth one I've got my e-mail and IM clients. At one moment, I want my IDE and output on my monitors, another moment my code and documentation and Yet another moment my IM to consult a colleague and documentation or code. Finally my colleague comes to help me at my desk. I'd like it if we could both watch the same workdesk without him sitting on my lap, so I turn one monitor so he can see it better. It would be great if we could see the same thing that's on my monitor (exclude mousepointer). The thing with most WMs is that your workspaces on the two monitors are either separated or glued together. If they're separated, you can change workspaces on each monitor autonomous, but you can't exchange applications between monitors because they're different x-clients (iirc). If they're glued together (xinerama), you can exchange the applications, but when changing your workspace, the other monitors change too. So, what I'd like to know is this. Is this already possible or should I submit a feature request somewhere (and if so, where?)

    Read the article

  • Is it okay to use an administrator account for everyday use if UAC is on?

    - by Valentin Radu
    Since I switched to Windows 7 about 3 years ago, and now using Windows 8.1, I have become familiar with the concept of User Account Control and have used my PC the following way: a standard account which I use for everyday work, and the built-in Administrator account activated and used only to elevate processes when they request so, or to "Run as administrator" applications when I need to. However, recently, after reading more about User Account Control, I started wondering whether my way of working is good, or whether I should use an administrator account for everyday work, since an administrator account is not elevated until requested by apps, or until I request so via the "Run as administrator" option. I am asking this because I read somewhere that the built-in Administrator account is a true administrator, by which I mean UAC doesn't pop up when logged in with it, and I am worried about running into problems if malicious software comes into play. I have to mention that I do not use it on a daily basis, just when I need to elevate some apps; I barely log into it 10 times a year... So, which is better? Thanks for your answers! And Happy New Year, of course! P.S. I asked this a year ago (:P) and I think I should reiterate it: is an administrator account as safe these days as a standard account coupled with the built-in Administrator account when needed?

    Read the article

  • How to make a server check its own availability on the web?

    - by Javawag
    Hi all, just a quick question - my server is running at my house, serving www pages at www.javawag.com. The problem is that my home internet connection keeps dropping randomly, for about 10 minutes at a time. This is only an intermittent problem and will go away soon, I hope. However, my server doesn't recover properly: when the connection comes back, I can still access it at 192.168.0.8 (locally) without any issue, but at www.javawag.com there's no reply! (Just an aside - my home internet connection has a dynamic IP; the domain www.javawag.com points to javawag.dyndns.org, which in turn points to my IP, updated every minute by ddclient on the server.) Is there some way for the server to check if it's accessible from the outside world periodically, and if not, restart Apache/reboot? Oh, and if I reboot, the problem fixes itself! Javawag
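
    A small cron-driven watchdog is usually enough for this. A sketch (script path, check interval and the apache2ctl location are assumptions; note that fetching your own public name from inside the LAN only works if the router supports NAT loopback - if it doesn't, check an external indicator instead, e.g. that javawag.dyndns.org still resolves to your current public IP):

        #!/bin/sh
        # /usr/local/bin/check-www.sh
        if ! curl -fsS --max-time 20 -o /dev/null http://www.javawag.com/; then
            logger "www.javawag.com not reachable, restarting Apache"
            /usr/sbin/apache2ctl restart
        fi

    and in /etc/crontab:

        */5 * * * * root /usr/local/bin/check-www.sh

    If a plain Apache restart turns out not to be enough after a drop, the same hook is the place to kick ddclient or, as a last resort, schedule a reboot.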

    Read the article

  • nginx: redirect requests that do not come from the load balancer

    - by dawez
    I have nginx on SERVER1 acting as a load balancer between SERVER1 and SERVER2. On SERVER1 I have the upstreams for the load balancing defined as: upstream de.server.com { # similar upstreams defined also for other languages # SELF SERVER1 server 127.0.0.1:8082 weight=3 max_fails=3 fail_timeout=2; # other SERVER2 server otherserverip:8082 max_fails=3 fail_timeout=2; } The load balancing config on SERVER1 is this one: server { listen 80; server_name ~^(?<LANG>de|es|fr)\.server\.com; location / { proxy_pass http://$LANG.server.com; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # trying to pass a variable in the header to SERVER2 proxy_set_header Is-From-Load-Balancer 1; } } Then on SERVER2 I have: server { listen 8082; server_name localhost; root /var/www/server.com/public; # test output values add_header testloadbalancer $http_is_from_load_balancer; add_header testloadbalancer2 not_load_bal; ## other stuff here to process the request } I can see that the "testloadbalancer" response header is set to 1 when the request comes through the load balancer, and it is not present on direct access to SERVER2:8082. I would like to bounce all direct requests sent to SERVER2 back to SERVER1, but keep the ones coming from the load balancer. So this should forbid direct access to SERVER2:8082 and redirect it to SERVER1:80.
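
    Since the custom header already reaches SERVER2 (nginx exposes Is-From-Load-Balancer as $http_is_from_load_balancer, which the add_header test confirms), one straightforward sketch is to bail out early on SERVER2 when the header is missing; the redirect target below assumes the de.* host, so adjust or capture the language as needed:

        # on SERVER2, inside the server { listen 8082; ... } block
        if ($http_is_from_load_balancer != "1") {
            return 301 http://de.server.com$request_uri;
        }

    A return inside a server-level if is one of the few safe uses of if in nginx. Bear in mind the header is trivial to forge by anyone who knows its name, so this steers traffic rather than securing it; a firewall rule restricting port 8082 to SERVER1's address would be the stricter companion measure.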

    Read the article

  • Easiest way to allow direct HTTPS connection in Intercept mode?

    - by Nicolo
    I know the SSL issue has been beaten to death, but I'm using a DNS redirect to force my clients to use my intercept proxy. As we all know, intercepting an HTTPS connection is not possible unless I provide a fake certificate. What I want to achieve here is to allow all HTTPS requests to connect directly to the source server, thus bypassing Squid: HTTP connections - proxy through Squid; HTTPS connections - bypass Squid and connect directly. I spent the past few days googling and trying different methods but none worked so far. I read about SSL tunneling using the CONNECT method but couldn't find any more information on it. I tried a similar method using RINETD to forward all traffic going through port 443 of my Squid box back to the original IP of www.pandora.com. Unfortunately, I did not realize all other HTTPS requests are also forwarded to the IP of www.pandora.com. For example, https://www.gmail.com also takes me to https://www.pandora.com. Since I'm running intercept mode, the forwarding needs to be dynamic and match each HTTPS domain name with the proper original IP. Can this be done in Squid or iptables? Lastly, I'm directing traffic to my Squid server using a DNS zone redirect. For example, a client requests www.google.com, my DNS server directs that request to my Squid IP, and then my transparent Squid proxies that request. Will this setup affect what I'm trying to achieve? I tried many methods but couldn't get it to work. Any takes on how to do this?

    Read the article

  • Is it possible to spoof the From: field in Outlook?

    - by tsv
    I am wondering if it is possible to change the From: field (not just the reply-to) in Outlook (specifically in the 2010 beta, but also interested in other versions). I am just moving over from Linux and am used to being able to do this quite simply in some clients. At first I thought it was the option of "Other E-Mail address" on the From drop down menu in a compose window, but that seems to do... nothing! I want to be able to do this so I can have email look like it comes from a domain I own but do not wish to run an SMTP server on.

    Read the article

  • How to get HTTP request details in a tcpdump?

    - by tucson
    I am trying to get a tcpdump trace of some http requests. Here is what I got so far (I replaced the real IP addresses with REMOTE and LOCAL): C:\>Windump -na -i 3 ip host REMOTE and ip src LOCAL and tcp port 80 Windump: listening on \Device\NPF_{8056BE5E-BDBB-44E6-B492-9274B410AD66} 13:13:34.985460 IP LOCAL.4261 > REMOTE.80: . 1784894764:1784894765(1) ack 1268208398 win 65535 13:13:38.589175 IP LOCAL.4302 > REMOTE.80: F 3708464308:3708464308(0) ack 982485614 win 65535 13:13:38.589285 IP LOCAL.4303 > REMOTE.80: F 890175362:890175362(0) ack 2462862919 win 65535 13:13:38.589330 IP LOCAL.4304 > REMOTE.80: F 1838079178:1838079178(0) ack 156173959 win 65535 13:13:38.589374 IP LOCAL.4305 > REMOTE.80: F 3952718843:3952718843(0) ack 2209231545 win 65535 13:13:38.589413 IP LOCAL.4306 > REMOTE.80: F 446105750:446105750(0) ack 3141849979 win 65535 13:13:38.590265 IP LOCAL.4302 > REMOTE.80: . ack 2 win 65535 13:13:38.590403 IP LOCAL.4304 > REMOTE.80: . ack 2 win 65535 13:13:38.590429 IP LOCAL.4303 > REMOTE.80: . ack 2 win 65535 13:13:38.590484 IP LOCAL.4305 > REMOTE.80: . ack 2 win 65535 13:13:38.590514 IP LOCAL.4306 > REMOTE.80: . ack 2 win 65535 But I do not get the following level of details: Request URL:http://domain.com/index.php Request Method:POST Status Code:200 OK POST /index.php HTTP/1.1 Host: domain.com Connection: keep-alive Content-Length: 151 Cache-Control: max-age=0 etc How can I get this level of data?
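
    The capture above only records the default snap length, so the HTTP payload (request line, headers, POST body) never makes it into the trace. Telling WinDump to capture whole packets and print them gets the missing detail, at least for plain HTTP (TLS traffic stays opaque); if -A is not available in your WinDump build, -X prints hex plus ASCII instead:

        Windump -na -i 3 -s 0 -A ip host REMOTE and ip src LOCAL and tcp port 80

    Note that the "ip src LOCAL" clause keeps only the outgoing half of each conversation; dropping it (keeping "ip host REMOTE and tcp port 80") shows the responses and their status codes as well.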

    Read the article
