Search Results

Search found 15040 results on 602 pages for 'request servervariables'.


  • Updating, etc., automatically

    - by Steve D
    Is there a way to set up Ubuntu 12.04 (or earlier versions) so that all recommended updates are applied automatically, say once a week? By automatically, I mean no password entry or user intervention required. This sounds like a stupid request, so let me tell you why I'm asking. My grandfather knows nothing about computers; he uses his solely to read Yahoo! mail. I want to get rid of his clunky, spyware-ridden Windows XP and install Ubuntu. I want to set it up so that when he turns the computer on, after a couple of minutes, voila! Yahoo! mail, already signed in, ready to go. The problem is I don't want to have to go over there every week or so to make sure everything is up to date, that he hasn't accidentally installed any spyware, etc. So can this be done? Is this the best way to set things up for my grandfather? Are there other things I should be worried about when it comes to keeping things hassle-free for him? Please don't post anything like "why not teach him how to... blah blah blah". My grandfather is 80 years old and has made it clear email is the only thing he will ever use a computer for! Thanks!
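
    A minimal sketch of the usual approach, assuming the stock unattended-upgrades package (by default it applies only security updates; widening that to all recommended updates is a further configuration change):

        # run once as root
        apt-get install unattended-upgrades

        # /etc/apt/apt.conf.d/20auto-upgrades
        APT::Periodic::Update-Package-Lists "1";   # refresh package lists daily
        APT::Periodic::Unattended-Upgrade "1";     # install pending upgrades daily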

    Read the article

  • what can be reasons for localhost responding super-slow the first time a page is requested?

    - by frequent
    Still learning my server ways with Apache 2.2/MySQL 5.2/ColdFusion 8 on localhost (running Windows XP). What I notice is that every time I request a page for the first time after firing up ColdFusion and Apache, localhost takes forever (1+ minute) to respond and send the initial page. After that, everything seems to run at normal loading times. I'm using require.js to pull in jQuery, jQuery Mobile and two other plugins on first page load, but loading the same page from a real server works normally, so I'm ruling that out as a probable cause. Since it happens regardless of which page I load first, it should not be page-related, so I'm looking for other clues as to why this could happen. Thanks for any thoughts!

    Read the article

  • mod_proxy failing as forward proxy in simple configuration

    - by Stabledog
    (On Mac OS X 10.6, Apache 2.2.11) Following the oft-repeated googled advice, I've set up mod_proxy on my Mac to act as a forward proxy for HTTP requests. My httpd.conf contains this (the closing directive was lost when the snippet was pasted; it is restored here):

        <IfModule mod_proxy>
            ProxyRequests On
            ProxyVia On
            <Proxy *>
                Allow from all
            </Proxy>
        </IfModule>

    (Yes, I realize that's not ideal, but I'm behind a firewall, trying to figure out why the thing doesn't work at all.) So, when I point my browser's proxy settings to the local server (ip_address:80), here's what happens:

    1. I browse to http://www.cnn.com
    2. I see via a sniffer that this is sent to Apache on the Mac
    3. Apache responds with its default home page ("It works!" is all this page says)

    So... Apache is not doing as expected: it is not forwarding my browser's request out onto the Internet to cnn. Nothing in the logfile indicates an error or problem, and Apache returns a 200 header to the browser. Clearly there is some very basic configuration step I'm not understanding... but what?
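
    One detail worth checking (an assumption about this setup, not something stated in the question): an <IfModule> block is skipped silently when the named module is not loaded, so forward proxying quietly does nothing unless both mod_proxy and mod_proxy_http are enabled, e.g.:

        # httpd.conf: both modules are needed for an HTTP forward proxy
        LoadModule proxy_module modules/mod_proxy.so
        LoadModule proxy_http_module modules/mod_proxy_http.so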

    Read the article

  • Real-time Image Resize, Cropping and Caching Server Product

    - by Elijah
    I'm investigating what products are out there that will allow you to request images through an HTTP API in arbitrary image sizes. The server would sit behind a CDN but would still need to handle a fair bit of traffic and possibly be load-balanced. I've been tasked with writing such a service, but I wanted to do some due diligence to see what commercial or open source solutions exist. Google has not been particularly helpful, perhaps because I have been searching for the wrong term. Third-party sites and services are out of the question because of corporate policies.

    Read the article

  • Strange traffic on fresh Ubuntu Server install

    - by Fishy
    I've just installed Ubuntu Server on my home box after becoming partially familiar with it at work and wanting to train up as a pen tester. I installed the latest version on a logical partition (the main one contained Win7), and selected none of the extra modules (I think). I installed ngrep and fired it up (along with tcpdump) and immediately saw some strange traffic which I am unable to identify. My PC is sending out UDP packets every couple of seconds to a seemingly random series of IP addresses, all on the same port (47669 - though I did also see it use another port for a while). I watched it do this for about 20 minutes, whilst trying to work out why. The only other traffic was the odd ARP request for the router and SSDP UPnP broadcasts from the router. Does anyone know what this is, or have any advice on how best to find out? Thanks. EDIT: Actually, it's not my box generating the traffic. It's receiving the traffic on that port, from a series of IP addresses, and returning 'port unreachable' messages.
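
    Two quick checks that usually settle this kind of question (a sketch; the interface name is a guess):

        sudo tcpdump -n -i eth0 udp port 47669    # see the source/destination pairs
        sudo netstat -anup | grep 47669           # -p names the owning process, if any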

    Read the article

  • Apache LocationMatch throws 500 and AddOutputFilterByType does nothing

    - by tackleberry
    I need to add the directives below to Apache, but I get a 500 when I add these lines:

        <LocationMatch "^/assets/.*$">
            Header unset ETag
            FileETag None
            # RFC says only cache for 1 year
            ExpiresActive On
            ExpiresDefault "access plus 1 year"
        </LocationMatch>

    Additionally, the response is not gzipped when I add:

        AddOutputFilterByType DEFLATE text/html text/css application/javascript application/x-javascript

    Apache version: Server version: Apache/2.2.22 (Unix). App: Rails 3.2. When I checked the request and response for the gzip problem, I saw that the browser requested gzip (Accept-Encoding: gzip, deflate) but the response was not gzipped.
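
    Two common causes worth ruling out (assumptions, not diagnostics from the question): a 500 on the Header/ExpiresActive lines usually means mod_headers and mod_expires are not loaded, or the lines sit in a context such as .htaccess where <LocationMatch> is not allowed at all; and DEFLATE output requires mod_deflate:

        # httpd.conf: modules assumed missing; adjust paths to the local build
        LoadModule headers_module modules/mod_headers.so
        LoadModule expires_module modules/mod_expires.so
        LoadModule deflate_module modules/mod_deflate.so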

    Read the article

  • College wifi works easily on Linux, but not on Windows

    - by user52849
    In Linux: after connecting to the college wifi, going to the network login page, and logging in, the internet works perfectly as it should. In Windows: after connecting to the college wifi, going to the network login page, and logging in, Windows shows "Internet access" and the wireless icon turns white. But after that, regardless of the browser being used, attempting to access any page just shows "Sending request". It does work after a lot of tries, but only intermittently. Yet when running Ubuntu 11.10 in VirtualBox, it works properly, just like when booting into Ubuntu, even while it isn't working on the Windows host. The college wifi service is really crappy and has been unable to solve this problem. I'm pretty sure there should be a solution for this, but what? What is it that Ubuntu is doing right and Windows isn't? Windows is set to "Automatically detect settings" and no proxy server is used.

    Read the article

  • Best way to protect website application code

    - by Gaz_Edge
    Background: I have a web application that I host on my own server. I have clients who use the application as is, but some have asked if they can host the application on their own servers. This enables them to have their own URLs rather than mine. The application only forms part of their website, so I'm assuming it will not be possible for my server to respond to a direct call to their domain. To give some examples, I currently have URLs like www.mydomain.com/profile and www.mydomain.com/index.php?option=someoption&view=someview&id=1. What my clients want is www.theirdomain.com/profile, www.theirdomain.com/index.php?option=someoption&view=someview&id=1, etc. Question: what is the best way for me to allow them to use their own URLs with my application, without giving them all the backend source code and databases to install on their server? One way I thought of would be to create a router.php file that sits on their server. The router asks my server to output the HTML, modifies all the links etc. in the HTML source, and outputs the new HTML through the client's server. When a link is clicked on the client's site, the router receives the request and modifies the URL to get the data from my server, etc. Is this an effective way to achieve what I want, or is it way off the mark?
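
    A minimal sketch of that router idea, assuming PHP with allow_url_fopen on the client's server (every name and URL here is illustrative):

        <?php
        // router.php: fetch a page from the central app and re-point its links
        $origin = 'http://www.mydomain.com';              // server hosting the real app
        $html   = file_get_contents($origin . $_SERVER['REQUEST_URI']);
        // rewrite absolute links so clicks come back through this server
        echo str_replace($origin, 'http://www.theirdomain.com', $html);

    This keeps the source and database on the central server, at the cost of doubling request latency and breaking anything (cookies, POSTs, relative assets) that the simple string rewrite does not cover.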

    Read the article

  • Lots of Internet browsing issues, all browsers

    - by dario_ramos
    Before the upgrade, everything was working fine. Now, however, I can connect to the Internet, but a lot of stuff fails, and the weirdest thing is that it happens with Firefox, Chromium and Opera. Some of the things that fail:

    1. I can't log in to Stack Overflow: after entering user/pass it loads for a long time in Firefox and throws Error 408 (browser request timed out) in Chromium and Opera.
    2. I can't log in to Hotmail; similar symptoms.
    3. I can log in to Facebook, but when I try to write a comment or just post something on my wall, it stays loading for a long time and then fails.

    The first two issues seem to be related to secure pages, and the third is another issue altogether, I believe. However, they all happen with all browsers, which is really weird. Speaking of weird: I connect using a Huawei SmartAX MT 810 USB modem, which cost me blood and tears to get working under Ubuntu. I ordered an ethernet modem/router from my ISP, and I'm still waiting, but this issue intrigues me anyway. Has anyone experienced this kind of problem? I Googled around, but couldn't find a similar case.

    Read the article

  • Delivery Status Notification (Relay) in Exchange Server 2007 with original email attachment

    - by Nick Kavadias
    I have recently set up Exchange Server 2007. The server is smarthosting outgoing messages. Users have 'request delivery receipt' on by default for their 'auditing' purposes in Outlook. They would like the original email attached to the delivery notification, as was the case in Exchange Server 2003; I need this same functionality in 2007. The question has been asked here, here and here, but I cannot find a valid solution. Here's some information about the functionality in Exchange 2003. The question is: can I replicate this functionality in 2007? Here is what a 2007 delivery message looks like: I know it's possible to customize DSNs. Can I make a custom DSN for this type of message and have the original included as an attachment? Anyone got any other ideas?
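
    On the custom-DSN angle: Exchange 2007's Management Shell does expose New-SystemMessage for overriding DSN text, but nothing in that cmdlet controls attaching the original message, so treat this only as a sketch of the customization mechanism, not a confirmed route to the 2003 behaviour:

        # Exchange Management Shell: override the text shown for one DSN code
        New-SystemMessage -DsnCode 5.1.1 -Internal $false -Language En `
            -Text "Delivery failed; the original message is not attached."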

    Read the article

  • My Laptop (HP/Compaq 2510p) running Ubuntu 10.04 LTS keeps losing the WLAN connection.

    - by Ernelli
    I am using Wicd and can successfully connect to my ADSL router (Thomson TG 787) using WPA-PSK, but at regular intervals I lose the ability to connect to the Internet. I can ping the gateway and can actually ping servers on the Internet, but not connect to them over HTTP (tested with both Firefox and wget). I would suspect the router, except that the problem does not show up when running Windows XP on the same computer, and also, when the problem arises, a simple disconnect/connect in Wicd solves it, which does not involve the router (except for the DHCP request). I have searched the Ubuntu forums without luck; most problems described relate to specific network drivers or other issues. Does anyone have the same experience with Linux/Ubuntu and WLAN?

    Read the article

  • Possible to redirect from HTTPS to HTTP behind load-balancer?

    - by Derek Hunziker
    I have a basic ASP.NET application that sits behind an F5 load-balancer. Incoming SSL requests (over HTTPS) terminate at the load-balancer, and all internal communication between the load-balancer and my application servers is unsecured (over HTTP). When an insecure request comes in, my app is able to use Response.Redirect("https://...") to redirect to a secure URL with no problems. However, the other direction appears to be impossible: I cannot redirect from HTTPS to HTTP using Response.Redirect() from my application. The URL remains HTTPS for the client and does not change. Could the F5 be preventing the redirect from ever reaching the client? Is there any special configuration necessary to make this happen?
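
    A pattern that often works in this topology (assuming the F5 inserts an X-Forwarded-Proto header, which the question does not state): build the absolute HTTP URL by hand, since behind a terminating load-balancer the app only ever sees HTTP, and a relative redirect inherits the client's current scheme.

        // C# sketch; the header name and the F5 profile that sets it are assumptions
        string proto = Request.Headers["X-Forwarded-Proto"];
        if (string.Equals(proto, "https", StringComparison.OrdinalIgnoreCase))
        {
            // an absolute URL forces the scheme instead of inheriting HTTPS
            Response.Redirect("http://" + Request.Url.Host + Request.RawUrl);
        }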

    Read the article

  • How to run a script in Ubuntu via SSH as superuser?

    - by Irinotecan
    So I have a script that needs to be executed remotely as root. This isn't a problem with most Linux distros since they have a root account. But since Ubuntu does not, executing anything as root requires a 2-step process of entering the account password twice - once to log in and once for sudo. The SSH process to launch the script is automated, so it cannot pause for user input for the second password request. Does anyone know, short of hacking Ubuntu to re-enable root (not an option), if unattended SSH script execution with superuser privilege on the target machine is possible? Also, having no experience with Debian, does Debian behave this way too?
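
    A common unattended pattern (a sketch: the account name and script path here are invented): grant one account passwordless sudo for exactly one command via visudo, then invoke it over key-based SSH.

        # sudoers entry added with visudo -- 'deploy' may run only this script,
        # as root, with no password prompt
        deploy ALL=(root) NOPASSWD: /usr/local/bin/maintenance.sh

        # from the automation host (key-based login, so no login password either)
        ssh deploy@target 'sudo /usr/local/bin/maintenance.sh'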

    Read the article

  • IIS + PHP + Page with lots of images = Intermittent 403 errors

    - by samJL
    I am using an up-to-date Windows Server 2008 R2 Datacenter, running IIS 7.5 and PHP 5.3.6/FastCGI. On PHP pages with lots of images (60+), some of the images fail to load. It is not always the same images: on each page refresh, an image that worked previously may not load, while an image that did not now does. Looking at the Net tab in Firebug reveals that the failing image requests are 403 errors. All of the images are located on the server in question, and the images directory has the correct permissions. I believe this problem is the result of a limit on requests. All of my attempts at researching this problem point to the maxConnections setting in IIS, yet mine is set at the highest/default of 4294967295 (maxBandwidth too). I am also running a ColdFusion site on the same IIS installation, and it does not suffer from 403s on pages with lots of images. I am left thinking that there is another connection limit (in PHP or FastCGI?) overriding the IIS connection limit. I don't see anything that looks like a request limit in php.ini; what am I missing? Any help would be appreciated, thank you.
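
    One way to narrow this down (a diagnostic sketch; the log path is the IIS default and may differ): the sc-substatus field in the W3C logs distinguishes 403 causes; for instance, the Dynamic IP Restrictions module, if installed, denies with sub-statuses in the 403.5xx range rather than a permissions failure.

        # PowerShell: pull recent 403 lines; the number after 403 is the sub-status
        Select-String " 403 " C:\inetpub\logs\LogFiles\W3SVC1\*.log |
            Select-Object -Last 20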

    Read the article

  • Does nginx auth_basic work over HTTPS?

    - by monde_
    I've been trying to set up a password-protected directory in an SSL website as follows:

        # /etc/nginx/sites-available/default
        server {
            listen 443;
            ssl on;
            ssl_certificate /usr/certs/server.crt;
            ssl_certificate_key /usr/certs/server.key;
            server_name server1.example.com;
            root /var/www/example.com/htdocs/;
            index index.html;

            location /secure/ {
                auth_basic "Restricted";
                auth_basic_user_file /var/www/example.com/.htpasswd;
            }
        }

    The problem is that when I try to access the URL https://server1.example.com/secure/, I get a "404: Not Found" error page. My error.log shows the following:

        2011/11/26 03:09:06 [error] 10913#0: *1 no user/password was provided for basic authentication, client: 192.168.0.24, server: server1.example.com, request: "GET /secure/ HTTP/1.1", host: "server1.example.com"

    However, I was able to set up password-protected directories for a normal HTTP virtual host without any problems. Is it a problem with the config or something else?

    Read the article

  • How to avoid "DO YOU HAZ TEH CODEZ" situations?

    - by volothamp
    I have a strange situation at work, where a colleague of mine often asks me and other co-workers for working code. I would like to help him, but this constant stream of requests for trivial snippets interrupts my thoughts and sometimes makes it hard to concentrate. Plus, I have the impression (...) that these requests are generated by lack of competence more than by laziness. In fact, he often asks things pretending to know the answer, since when I solve the problem he usually says things like "Sure", "Yes, that's what I thought", giving me the impression that my answer wasn't worth much. How can I resolve this embarrassing situation? Should I expose his lack of knowledge more explicitly in front of other colleagues (by saying things like: "do it yourself if you can, please") or continue giving him what he wants? I think he should aggregate all his questions into one, so that I can give him a portion of my time and he can work by himself on his own things. There is no hierarchy in the team; I must say we both have a similar seniority of five years, more or less. For the same reason, I believe I cannot report this to management, since trivial questions are often ignored. I discussed it with two other members and they agree with me: in fact, he often asks things by cycling through colleagues.

    Read the article

  • How should I group these variables?

    - by stariz77
    I have a shape that will be defined by:

        char s_type;
        char color;
        double height;
        double width;

    These variables are scanned in from a request string sent to my server and passed into my printing function, which then prints out the shape. Currently they are just local variables sitting in my main(); however, I was wondering if there would be any advantage to creating a struct containing these variables and passing the struct to my printing function. Or how else might I improve my program's structure/style? Would passing a struct by reference have any kind of performance benefit if there were many requests, and therefore many printing function calls?

        void printer(char st, char cr, double ht, double wd);

        int main() {
            // Other main functionality.
            char s_type;
            char color;
            double height;
            double width;

            sscanf(serv_req, "GET /%c/%c/%lf/%lf", &s_type, &color, &height, &width);
            printer(s_type, color, height, width);

            // Other main functionality.
            return 0;
        }

    It seemed "neater" if I had a struct or something that didn't leave me with declarations in the middle of everything else going on in main. I'm interested in structure/style as well as performance. EDIT: I didn't mean to put the printer declaration inside main.
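
    A sketch of the struct variant the question describes (same field names; serv_req is assumed to be defined elsewhere, as in the original):

        /* group the four values so they travel together */
        typedef struct {
            char   s_type;
            char   color;
            double height;
            double width;
        } Shape;

        /* pass by const pointer: no copying, and printer cannot modify the shape */
        void printer(const Shape *sh);

        int main(void) {
            Shape sh;
            sscanf(serv_req, "GET /%c/%c/%lf/%lf",
                   &sh.s_type, &sh.color, &sh.height, &sh.width);
            printer(&sh);
            return 0;
        }

    For four scalar arguments the performance difference is negligible either way; the real win is readability, since adding a fifth field later changes one struct instead of every call site.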

    Read the article

  • Unregister SIP UAC message

    - by TacB0sS
    Hi, I've looked all over the internet, but I could not find any SIP unregister example, and when I search RFC 3261 or 3665 the word does not even appear; perhaps I'm searching for the wrong phrase. I understand the part about setting Expires to zero, but it still does not work, and I could not find documentation about what a formal unregister should look like. Does anyone know how to compose an unregister SIP request, or what I should search for? Thanks in advance, Adam Zehavi.
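
    For reference, RFC 3261 (section 10.2.2, "Removing Bindings") treats unregistration as an ordinary REGISTER whose bindings carry an expiration of zero; "Contact: *" with "Expires: 0" removes every binding. A hedged sketch of such a request (addresses, tags and IDs are invented):

        REGISTER sip:example.com SIP/2.0
        Via: SIP/2.0/UDP 192.0.2.10:5060;branch=z9hG4bK776asdhds
        Max-Forwards: 70
        From: <sip:adam@example.com>;tag=456248
        To: <sip:adam@example.com>
        Call-ID: 843817637684230@998sdasdh09
        CSeq: 2 REGISTER
        Contact: *
        Expires: 0
        Content-Length: 0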

    Read the article

  • Firewall still blocking port 53 despite listing otherwise?

    - by Tom
    I have 3 nodes with virtually the same iptables rules loaded from a bash script, but one particular node is blocking traffic on port 53 despite its rules saying it should be accepted:

        $ iptables --list -v
        Chain INPUT (policy DROP 8886 packets, 657K bytes)
         pkts bytes target prot opt in   out source         destination
            0     0 ACCEPT all  --  lo   any anywhere       anywhere
            2   122 ACCEPT icmp --  any  any anywhere       anywhere      icmp echo-request
        20738 5600K ACCEPT all  --  any  any anywhere       anywhere      state RELATED,ESTABLISHED
            0     0 ACCEPT tcp  --  eth1 any anywhere       node1.com     multiport dports http,smtp
            0     0 ACCEPT udp  --  eth1 any anywhere       ns.node1.com  udp dpt:domain
            0     0 ACCEPT tcp  --  eth1 any anywhere       ns.node1.com  tcp dpt:domain
            0     0 ACCEPT all  --  eth0 any node2.backend  anywhere
           21  1260 ACCEPT all  --  eth0 any node3.backend  anywhere
            0     0 ACCEPT all  --  eth0 any node4.backend  anywhere

        Chain FORWARD (policy DROP 0 packets, 0 bytes)
         pkts bytes target prot opt in   out source         destination

        Chain OUTPUT (policy ACCEPT 15804 packets, 26M bytes)
         pkts bytes target prot opt in   out source         destination

    Scanning from a remote server:

        $ nmap -sV -p 53 ns.node1.com

        Starting Nmap 4.11 ( http://www.insecure.org/nmap/ ) at 2011-02-24 11:44 EST
        Interesting ports on ns.node1.com (1.2.3.4):
        PORT   STATE    SERVICE VERSION
        53/tcp filtered domain

        Nmap finished: 1 IP address (1 host up) scanned in 0.336 seconds

    Any ideas? Thanks

    Read the article

  • Designing communications for extensibility

    - by Thomas S.
    I am working on the design stages of an application that will a) collect data from various sources (in my case, scientific data from serial ports), keeping track of the age of the data; b) generate real-time statistics (e.g. running averages); and c) display, record, and otherwise handle the data (and statistics). I anticipate that I will be adding both data producers and consumers over time, and would like to design this application abstractly so that I can trivially add functionality with a small amount of interface code. What I'm stumbling on is deciding what communication infrastructure I should use to handle the interfaces. In particular, how should I make the processed data and statistics available to multiple consumers? Some things I've considered:

    1. Writing to several named pipes (a variable number). Each consumer reads from one of them.
    2. Using FUSE to make a userspace filesystem where a read() returns the latest line of data even if another process has already read it.
    3. Making a TCP server, and having consumers connect and request data individually.
    4. Simply writing the consumers as part of the same program that aggregates the data.

    So I would like to hear your advice on how to interface these functions in a way that keeps them separate and allows room for extensions.
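
    To make option 3 concrete (a sketch only; Python and every name below are choices made here, not the asker's): a tiny TCP fan-out server where each connected consumer receives every line published after it connects.

        import socket, threading

        clients, lock = [], threading.Lock()

        def publish(line):
            """Send one line of data/statistics to every connected consumer."""
            with lock:
                for c in clients[:]:
                    try:
                        c.sendall((line + "\n").encode())
                    except OSError:
                        clients.remove(c)        # drop consumers that went away

        def serve(port=9000):
            srv = socket.socket()
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("", port))
            srv.listen(5)
            while True:
                conn, _ = srv.accept()           # each consumer just connects...
                with lock:
                    clients.append(conn)         # ...and starts receiving lines

        threading.Thread(target=serve, daemon=True).start()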

    Read the article

  • Url-based web site publishing on Windows Server platform

    - by Maxim V. Pavlov
    I have a Windows 2008 Enterprise SP2 server in a datacenter; it is a 32-bit OS. I need to be able to do "smart" URL-based web site publishing, so that with a single external IP I can publish many sites on port 80, and some firewall logic resolves, based on the requested URL, which site in IIS gets the request. Forefront TMG 2010 has this feature, but it is not supported on 32-bit systems. Is there a software solution that can satisfy my need on the Windows 2K8 platform? Thank you. P.S. Perhaps there is a workaround or a tweak to do what I need in IIS?
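
    If hostname-based routing is enough (an assumption; "URL-based" may also mean paths), plain IIS host-header bindings already multiplex many sites on one IP and port 80 with no extra product, e.g.:

        %windir%\system32\inetsrv\appcmd set site "Site One" ^
            /+bindings.[protocol='http',bindingInformation='*:80:one.example.com']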

    Read the article

  • Problems restoring old backups in NetBackup 6.5

    - by gharper
    I had a server that was decommissioned and replaced last year, and since it was no longer in use, I deleted its client and backup policy from the NetBackup Admin Console shortly afterwards. I recently got a request to restore a file from the old server; however, when I specify the source client for the restore, I get an error message saying: WARNING: server (backupserver) does not contain any backups for client (oldserver) using the specified policy type (Standard) as requested by client (backupserver). [Ok] In addition to that error, I can no longer run a Client Backup report on the old client to determine what tapes I need to recall in order to re-index and restore the files. My questions: does deleting the client somehow remove NetBackup's ability to ever restore files from the old system, even if the backups have a retention period of infinity? Is there a way to restore the file from tape, assuming I can figure out which tape I need?
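
    One thing worth trying before assuming the images are gone (a sketch; the path is the usual UNIX default and the dates are placeholders): bpimagelist queries the image catalog directly, bypassing the client/policy lookup the GUI restore uses.

        /usr/openv/netbackup/bin/admincmd/bpimagelist -client oldserver \
            -d 01/01/2009 -e 12/31/2009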

    Read the article

  • .htaccess redirect root directory and subpages with parameters

    - by wali
    I am having difficulty trying to redirect a root directory while at the same time redirecting pages in a subdirectory to a different URL. For example, http://test.example.com/olddir/sub/page.php?v=one should go to http://test.example.com/new/one, while any request to the root of the olddir folder is also redirected. I have tried:

        RewriteCond %{QUERY_STRING} v=one
        RewriteRule ^/olddir/sub/page.php /new/? [R=301]

    and

        RedirectMatch /oldir "test.example.com"
        RedirectMatch /olddir/sub/page.php?v=one "test.example.com/new/one"

    Any help at this point will be extremely appreciated... Thanks!
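
    One likely culprit (an assumption from the snippet alone): in per-directory .htaccess context, Apache strips the leading slash before matching, so a pattern anchored at ^/olddir never matches. A hedged corrected sketch:

        RewriteEngine On
        RewriteCond %{QUERY_STRING} (^|&)v=one($|&)
        RewriteRule ^olddir/sub/page\.php$ /new/one? [R=301,L]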

    Read the article

  • Solutions for "Maintenance Mode"

    - by Ka Lyse
    Given a web application running across 10+ servers, what techniques have you put in place for altering the state of your website so that you can implement certain features? For instance, you might want to:

    1. Restrict logins / disable certain features
    2. Turn the site "read only"
    3. Turn the site into a single "maintenance mode" page

    Doing any of the above is pretty trivial. You can throw a particular "flag" in an .ini file, or add a row/value to a site_options table in your database, then just read that value and do the appropriate thing. But these solutions have their problems. For instance, if you use a file for your application and you want to switch off a certain feature temporarily, you need to update this file on all servers. So then you might look at running something like ZooKeeper, but you are probably overcomplicating things. So then you might decide to store these "feature" flags in a database, but then you are obviously adding unnecessary queries to each page request. So you think to yourself that you will throw memcached into the mix and just cache the query. Then you retrieve all of your "features" from memcached and add a ~2ms latency to your application on every page. To get around this, you decide to use a two-tier cache system, whereby you use an in-memory cache on each machine (like APC/Redis etc.). This would work, but then it gets complicated, because you would have to set the key/hash lifetime to perhaps 60 seconds, so that when you purge/invalidate the memcached object storing your "features" result, the on-machine cache is prompt enough to pick up the new state. What suggestions might you have? Keeping in mind that optimization/efficiency is the priority here.
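
    A sketch of the two-tier read path just described, assuming PHP with APC as the per-machine tier and a memcached client in $GLOBALS (every name here is illustrative):

        // check the per-machine cache first, fall back to the shared cache;
        // the short local TTL bounds how stale a flipped flag can be
        function feature_enabled($flag) {
            $value = apc_fetch("flags/$flag", $hit);
            if ($hit) {
                return $value;
            }
            $value = (bool) $GLOBALS['memcache']->get("flags/$flag");
            apc_store("flags/$flag", $value, 60);   // at most 60s of staleness
            return $value;
        }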

    Read the article

  • Welcome to the Java Training Beat!

    - by tmcginn
    We are a group of dedicated training developers for Java, located in the US, India, and now Mexico. In this blog we will announce new training content and events that might be of interest to our readers. In this first installment of the Java Training Beat, I would like to introduce three new Oracle By Example (OBE) modules I recently released and posted to the Oracle Online Learning Library.

    1. Creating a Simple Java Message Service (JMS) Producer with NetBeans and GlassFish - covers how to create a simple text message producer with NetBeans 7 and GlassFish.
    2. Creating Java Message Service (JMS) Resources in WebLogic Server 12c - covers how to create JMS resources using the console and WebLogic Server 12c. With this tutorial, you can replicate the results of the first tutorial in WebLogic.
    3. Creating a Publish/Subscribe Model with Message-Driven Beans and GlassFish Server - covers how to create a publish/subscribe application using JMS. This tutorial includes a short case study with a JSF front-end application that sends a hotel reservation request object to the server as a MapMessage.

    Hope you find these useful! And do check out the Online Learning Library - we have a wide range of additional content posted, and more being added every month!

    Read the article
