Search Results

Search found 23449 results on 938 pages for 'browser close'.

  • ssl port didn't work on nginx

    - by Jin Lin
    I set up Unicorn and nginx on one of my EC2 machines. Requests load fine with nginx listening on port 80, but when I enable SSL on port 443 it doesn't work, while plain HTTP on port 80 still does.

        server {
            listen 443 ssl;

            # replace with your domain name
            server_name domain.com;

            # replace this with your static Sinatra app files, root + public
            root /home/ubuntu/domain/public;

            ssl on;
            ssl_certificate /etc/ssl/domain.crt;
            ssl_certificate_key /etc/ssl/domain.key;

            # maximum accepted body size of client request
            client_max_body_size 4G;

            # the server will close connections after this time
            keepalive_timeout 5;

            location ~ ^/assets/ {
                add_header ETag "";
                gzip_static on;
                expires max;
                add_header Cache-Control public;
            }

            location / {
                proxy_set_header X-Forwarded-Proto https;
                try_files $uri @app;
            }

            location @app {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;

                # pass to the upstream unicorn server mentioned above
                proxy_pass http://unicorn_server;
            }
        }
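    The config proxies to an upstream named unicorn_server that is not shown above. For completeness, a minimal sketch of what that block usually looks like for a Unicorn setup, placed in the http context next to the server block (the socket path is an assumption; it has to match the listen directive in unicorn.rb):

        upstream unicorn_server {
            # path to Unicorn's UNIX socket is an example; match it to unicorn.rb
            server unix:/home/ubuntu/domain/tmp/sockets/unicorn.sock fail_timeout=0;
        }

    It is also worth confirming that the EC2 security group for the instance allows inbound traffic on port 443; port 80 being reachable does not imply that 443 is.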

  • Is this a solution for having multiple SSL certificates on the same IP

    - by Saif Bechan
    I am running CentOS on a VPS. I read some guides on having multiple SSL certificates on the same system, but I cannot get the basics to work. The guide that made the most sense to me suggests the following. In CentOS I can create virtual NICs, so I made two to start with: 192.168.10.1 and 192.168.10.2. I work in ISPmanager Pro, which is listening on my primary IP 1.1.1.1, and for each website I have it listening on 192.168.10.1:80 and 192.168.10.1:443. In the hosts file I made the following two entries:

        192.168.10.1    1st.com
        192.168.10.2    2nd.com

    The strange thing is that when I browse to 1st.com I do not get the website located at 192.168.10.1; I get the website located at my primary IP 1.1.1.1. Should I set up some kind of forwarding or routing for this to work? And the basic question: will this setup even work? Are SSL certificates tied to the IP address, or to the host names, 1st.com and 2nd.com?
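    If the goal is just multiple certificates on one public IP, a more common route than per-site private IPs is name-based SSL with SNI, where the certificate is chosen by the host name the browser asks for, not by the IP. A minimal sketch, assuming Apache 2.2.12+ built against an SNI-capable OpenSSL (paths and domains are placeholders; very old clients such as IE on Windows XP do not send SNI):

        NameVirtualHost *:443

        <VirtualHost *:443>
            ServerName 1st.com
            SSLEngine on
            SSLCertificateFile    /etc/pki/tls/certs/1st.com.crt
            SSLCertificateKeyFile /etc/pki/tls/private/1st.com.key
        </VirtualHost>

        <VirtualHost *:443>
            ServerName 2nd.com
            SSLEngine on
            SSLCertificateFile    /etc/pki/tls/certs/2nd.com.crt
            SSLCertificateKeyFile /etc/pki/tls/private/2nd.com.key
        </VirtualHost>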

  • both ssl and non-ssl on single port

    - by Zulakis
    I would like to make my Apache 2 web server serve both HTTP and HTTPS on the same port. With the different methods I tried, it was either not working for HTTP or not working for HTTPS. How can I do this? Update: if I enable SSL and then visit the site over plain HTTP, I get a page like this:

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>400 Bad Request</title>
        </head><body>
        <h1>Bad Request</h1>
        <p>Your browser sent a request that this server could not understand.<br />
        Reason: You're speaking plain HTTP to an SSL-enabled server port.<br />
        Instead use the HTTPS scheme to access this URL, please.<br />
        <blockquote>Hint: <a href="https://server/"><b>https://server/</b></a></blockquote></p>
        <hr>
        <address>Apache/2.2.9 (Debian) PHP/5.2.6-1+lenny16 with Suhosin-Patch mod_ssl/2.2.9 OpenSSL/0.9.8g Server at server Port 443</address>
        </body></html>

    Because of this, it seems very much possible to have both HTTP and HTTPS on the same port. A first step would be to change this default page so it presents a 301 Moved header instead. Update 2: according to this, it is possible. Now the question is just how to configure Apache to do it.
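    Apache cannot genuinely speak both protocols on one port, but it does detect plain HTTP arriving on the SSL port (that is where the 400 page above comes from), and that error response can be turned into a redirect. A minimal sketch, assuming Apache 2.2 with mod_ssl and with "server" replaced by the real host name: an ErrorDocument that points at a full URL makes Apache answer the 400 with a redirect to the HTTPS site.

        <VirtualHost *:443>
            SSLEngine on
            # a plain-HTTP request on this port triggers a 400; bounce it to HTTPS
            ErrorDocument 400 https://server/
        </VirtualHost>

    Note that the redirect is a 302 rather than the 301 you mention, and the original request path is lost, so this is a pragmatic workaround rather than a clean dual-protocol setup.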

  • maximum number of connections Squid

    - by Isaac
    I have a Squid proxy server that controls all internet traffic for my network. I need a way to stop users from downloading big files (say 50 MB) on my network. I banned some well-known ports (e.g. torrent), but some downloads are still possible over the HTTP port, and obviously I cannot ban port 80! A simple solution is limiting the maximum number of simultaneous connections per IP (e.g. 3 connections). That is possible in Squid with this config:

        acl ACCOUNTSDEPT src 192.168.5.0/24
        acl limitusercon maxconn 3
        http_access deny ACCOUNTSDEPT limitusercon

    But this solution has a really bad impact on web browsing, because any modern browser fetches different parts of a website over several simultaneous connections to speed up page loads. With a cap on connections, the browser fails to fetch some parts and the website is shown only partially; some parts/images/frames are missing. So, can we limit the maximum number of persistent connections instead? I think this policy would work: cap the number of connections that stay alive for more than 10 seconds, but leave the number of simultaneous connections per IP unlimited. How can we implement this policy in Squid? With which config?

    UPDATE: artifex and Tom Newton suggested a bandwidth-limiting approach to fight downloaders. But bandwidth limiting in Squid has a shortcoming: it is static and cannot change dynamically, so a person gets a limited bandwidth no matter how many people are using the internet (maybe nobody!). Also, that solution does not stop people from downloading; they can still download, just at a lower speed. But if we find a way to terminate persistent connections (or any connection that stays alive longer than a specific time), downloading big files becomes almost impossible (there is always some way!).
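    Since the real goal is blocking large downloads rather than counting connections, Squid can also cap the size of the response body it is willing to deliver. A minimal sketch for squid.conf; the exact syntax depends on the Squid version (2.x takes a byte count plus allow/deny ACLs, while 3.1+ accepts units such as "50 MB"), so treat this as a starting point rather than a drop-in line:

        # Squid 2.6/2.7 style: refuse reply bodies over ~50 MB for this subnet
        acl ACCOUNTSDEPT src 192.168.5.0/24
        reply_body_max_size 52428800 allow ACCOUNTSDEPT

    Clients requesting a larger object get an error page instead of a slow or partial download, which avoids the side effects of the maxconn approach. One caveat: if the origin server sends no Content-Length header, Squid can only abort the transfer once the limit has been exceeded.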

  • Weird caching bug where old version of the same web page (same filename) is still called (Windows 2008 R2, Tomcat 5.5)

    - by user717236
    This is definitely one of the strangest errors I've seen, and it occurs intermittently. I am running Windows 2008 R2, IIS 7.5, and Apache Tomcat 5.5, by the way. Say I have two machines, A and B, both running Windows 2008 R2. I have a web page called login.jsp on machine A, and a newer, modified version of login.jsp on machine B. Now I copy the new login.jsp from machine B and paste it onto machine A, replacing the older version with the same filename. For whatever reason, when I hit the web page in a browser from a local machine (i.e. my laptop), it still serves the old version of the page, even though it has been replaced! I tried restarting IIS and Apache Tomcat; that didn't work. I tried restarting machine A; that didn't work. I tried a cold reboot of my local machine, and that didn't work either. So I spoke to someone I can confide in for help. He said to open the login.jsp page in Notepad, put a space in, save the file, and try again. Sure enough, it worked. He said he hasn't seen this on Windows 2003, but it is occurring on Windows 2008. What I don't understand is why that worked, what this error actually is, and how I can really diagnose and resolve it for good instead of using the hack my colleague proposed. Is this bug related to Windows 2008, Windows 2008 R2, Tomcat, or something else entirely? Anyone else have the same problem? Thank you for any help.
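    The Notepad trick is a clue: Tomcat decides whether to recompile a JSP by comparing the source file's modification time against the compiled servlet it keeps in its work directory, and a file copied from another machine can arrive with a timestamp that does not look newer, so the stale compiled version keeps being served. Saving in Notepad bumps the timestamp and forces a recompile. A minimal sketch of the blunter fix (Windows batch; the service name and install path are assumptions, so adjust them to your Tomcat 5.5 installation):

        REM stop Tomcat, throw away the compiled-JSP cache, start it again
        net stop "Apache Tomcat"
        rmdir /s /q "C:\Program Files\Apache Software Foundation\Tomcat 5.5\work\Catalina\localhost"
        net start "Apache Tomcat"

    Clearing the work directory makes Tomcat recompile every JSP on the next request, so the first hits after the restart will be slower.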

  • Can I make a TCP/IP session last less than 60 seconds?

    - by par
    Our server is overloaded with TCP/IP sessions: we have 1200-1500 of them, and most are hanging in the TIME_WAIT state. It turns out that a connection in TIME_WAIT occupies a socket until a 60-second timeout has elapsed. The problem is that the server becomes unresponsive and many clients are not getting served. I made a simple test: download an XML file from the server with Internet Explorer 8.0. The download finishes in a fraction of a second, but then I see the TCP/IP connection hanging in TIME_WAIT for 60 seconds. Is there any way to get rid of the TIME_WAIT delay, or make it shorter, to free the socket for new connections? I understand why a TCP/IP connection enters TIME_WAIT, but I don't understand why Internet Explorer does not close the connection once the XML file download is over. The details: our server runs a web service written in Perl (mod_perl). The service provides weather data to clients. The client is a Flash application (actually a Flash ActiveX control embedded in a Windows application).

        OS: Ubuntu
        Apache "Keep Alive" option is set to 0
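    TIME_WAIT is part of TCP itself: the side that closes the connection first keeps the socket pair around for a fixed period so that stray packets from the old connection cannot corrupt a new one, and on Linux that period is effectively hard-coded at 60 seconds. With Keep-Alive set to 0, every request costs a fresh connection that the server (as the side sending Connection: close) then parks in TIME_WAIT, which is why they pile up; turning KeepAlive back on with a short KeepAliveTimeout is usually the bigger win. The sysctl lines below are a hedged sketch of the usual mitigations for /etc/sysctl.conf on the Ubuntu box (values are examples; note the caveats in the comments):

        # tcp_tw_reuse only helps connections this host initiates (outgoing),
        # and tcp_fin_timeout governs FIN_WAIT_2, not TIME_WAIT, so treat these
        # as mitigations rather than a way to shorten TIME_WAIT itself
        net.ipv4.tcp_tw_reuse = 1
        net.ipv4.tcp_fin_timeout = 30

    Apply with sudo sysctl -p. Also worth noting: 1200-1500 sockets in TIME_WAIT is not in itself a heavy load, so if the server is unresponsive it is worth checking whether Apache's MaxClients or the mod_perl workers are the real bottleneck.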

  • Apache22 on FreeBSD - Starts, does not respond to requests

    - by NuclearDog
    Hey folks! I'm running Apache 2.2.17 with the peruser MPM on FreeBSD 8.2-RC1 on Amazon's EC2 (so it's Xen), installed from ports. My problem is that, although Apache is running, listening for, and accepting connections, it doesn't actually respond to any of them or show them in the log at all. If I telnet to the port it's listening on and type out an HTTP request:

        GET / HTTP/1.1
        Host: asdfasdf

    and hit enter a couple of times, it just sits there... nothing. No response when requesting with a browser either. There doesn't appear to be anything helpful in the error log:

        [Sun Jan 09 16:56:24 2011] [warn] Init: Session Cache is not configured [hint: SSLSessionCache]
        [Sun Jan 09 16:56:25 2011] [notice] Digest: generating secret for digest authentication ...
        [Sun Jan 09 16:56:25 2011] [notice] Digest: done
        [Sun Jan 09 16:56:25 2011] [notice] Apache/2.2.17 (FreeBSD) mod_ssl/2.2.17

    The access log stays empty:

        root:/var/log# wc httpd-access.log
        0 0 0 httpd-access.log
        root:/var/log#

    I've tried with accf_http and accf_data both enabled and disabled, and with both the stock configuration and my customized config. I also tried uninstalling apache22-peruser-mpm and just installing straight apache22... still no luck. I tried removing all of the LoadModule lines from httpd.conf and re-enabling only the ones necessary to parse the config. I ended up with only the following loaded:

        root:/usr/local/etc/apache22# /usr/local/sbin/apachectl -M
        Loaded Modules:
         core_module (static)
         mpm_peruser_module (static)
         http_module (static)
         so_module (static)
         authz_host_module (shared)
         log_config_module (shared)
         alias_module (shared)
        Syntax OK
        root:/usr/local/etc/apache22#

    Same results. Apache is definitely what's listening on port 80:

        root:/usr/local/etc/apache22# sockstat -4 | grep httpd
        root     httpd      43789 3  tcp4 6 *:80    *:*
        root     httpd      43789 4  tcp4   *:*     *:*
        root:/usr/local/etc/apache22#

    And I know it's not a firewall issue, as there is nothing running locally and connecting from the local box to 127.0.0.1:80 results in the same issue. Does anyone have any idea what's going on and why it would be doing this? I've exhausted all of my debugging expertise. :/ Thanks for any suggestions!

  • IIS 7.0 - responses throttled to 500ms blocks?

    - by Julia Hayward
    Scenario: an ASP.NET MVC web app sitting on my local machine (Vista Ultimate, IIS 7.0), nothing going on except one user (me) logged in and viewing an index page. The page includes 9 dynamic images drawn from the underlying DB and returned from a controller action. I have got the actual processing time for these images down to 15 ms each. Turn on Firebug and watch the page load. What I see is 9 requests for images firing off together – no surprise – but four come back to me almost immediately; two more after 0.5 s; another after 1 s; then one at 1.5 s and one at 2 s. Logging on the server side suggests the individual responses are still only taking 15 ms. So it appears IIS is queueing things up into 500 ms chunks. (Repeating the experiment produces different results, but each time the images return in similar blocks – you might get three in the first group, then three at 0.5 s, two at 1 s etc., for example – and it's always at 500 ms intervals, not anything else.) It's also repeatable cross-browser, and it's not repeatable with other forms of content. I haven't found any particular mention of this problem out there, so I'm sort of assuming it's not an IIS bug. So is it: (i) IIS on desktop OSs deliberately doing this, to push you toward server OSs in production? (ii) Some magical setting that has eluded me for as long as I've known IIS? (iii) Something peculiar to MVC or SQL Server 2008? Or something else?

  • APC fragmentation on EC2 Micro for Wordpress + W3TC

    - by Maarten Provo
    I'm trying to optimize APC for my Amazon EC2 Micro server running one WordPress site with W3TC. I started with the settings advised by TechZilla in another topic, but I keep getting high fragmentation with 50% of the space free. I've uploaded an image to http://www.maartenprovo.be/downloads/apc.jpg but I can't post it here since I need at least 10 reputation. What values can I optimize to prevent fragmentation?

        [apc]
        apc.enabled=1
        apc.shm_segments=1

        ;32M per WordPress install
        apc.shm_size=164M

        ;Leave at 2M or lower. WordPress doesn't have any file sizes close to 2M
        apc.max_file_size=2M

        ;Relative to the number of cached files
        apc.num_files_hint=1000

        ;Relative to the size of WordPress
        apc.user_entries_hint=4096

        ;The number of seconds a cache entry is allowed to idle in a slot before APC dumps the cache
        apc.ttl=7200
        apc.user_ttl=7200
        apc.gc_ttl=3600

        ;Auto update cache files on change in WP-ADMIN or W3TC
        apc.stat=1

        ;This MUST be 0, WP can have errors otherwise!
        apc.include_once_override=0

        ;Only set to 1 while debugging
        apc.enable_cli=0

        ;Allow 2 seconds after a file is created before it is cached to prevent users from seeing half-written/weird pages
        apc.file_update_protection=2

        ;Ignore files
        apc.filters
        apc.slam_defense = 0
        apc.write_lock = 1

        apc.cache_by_default=1
        apc.use_request_time=1
        apc.mmap_file_mask=/var/tmp/apc.XXXXXX
        apc.stat_ctime=0
        apc.canonicalize=1
        apc.write_lock=1
        apc.report_autofilter=0
        apc.rfc1867=0
        apc.rfc1867_prefix=upload_
        apc.rfc1867_name=APC_UPLOAD_PROGRESS
        apc.rfc1867_freq=0
        apc.rfc1867_ttl=3600
        apc.lazy_classes=0
        apc.lazy_functions=0

  • Website has become slower on a VPS, was much faster on a shared host. What's wrong?

    - by Arpit Tambi
    My shared host suspended my website citing system overload, so I moved the website to a VPS with 4 GB RAM. But for some reason the website has become very slow. This is the vmstat output:

        procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
         r  b   swpd    free  buff  cache   si   so    bi    bo   in   cs us sy id wa st
         1  0      0 3050500     0      0    0    0     0     1    0    0  0  0 100  0  0

    Here's the Apache Benchmark output for a STATIC html page I ran on the server itself:

        Benchmarking www.ask-oracle.com (be patient)...apr_poll: The timeout specified has expired (70007)
        Total of 20 requests completed

    Update: server config:

        CentOS 5.6
        4-core CPU
        4 GB RAM
        LAMP stack with APC
        WordPress
        Only one website

    It takes almost double the time to load now; the same website was much faster on shared hosting. I know I need to tweak some settings but have no clue where to start. I have already tried to optimize Apache, MySQL etc. Update 2: CPU usage is low, see uptime output:

        11:09:02 up 7 days, 21:26, 1 user, load average: 0.09, 0.11, 0.09

    Update 3: when I load any web page, the browser shows "Waiting" for a long time and then the page loads quickly. So I suspect the server can accept only a limited number of connections and holds extra connections in a waiting state. How do I check this? Update 4: following is the output of netperf:

        TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost.localdomain (127.0.0.1) port 0 AF_INET
        Recv   Send    Send
        Socket Socket  Message  Elapsed
        Size   Size    Size     Time     Throughput
        bytes  bytes   bytes    secs.    10^6bits/sec
        87380  16384  16384    10.00     9615.40
        [root@ip-118-139-177-244 j3ngn5ri6r01t3]#

    Here are the Apache MPM settings from httpd.conf; do they look okay?

        <IfModule worker.c>
            StartServers          5
            MaxClients          100
            MinSpareThreads      50
            MaxSpareThreads     250
            ThreadsPerChild     125
            MaxRequestsPerChild 10000
            ServerLimit         100
        </IfModule>

  • Add files/folders/Apps to a new User Account

    - by odeho19
    I have created a new user account for my roommate. We don't really care much for IE, so I'd like to add Google Chrome and Firefox for her and let her choose her own browser. I'd also like to add some other apps for her and transfer some of the personal files she's been storing in my administrative account (i.e. personal photos and such). So, like I said, the account is set up. But when I go to use Chrome, it won't open. It, along with the other apps, is listed in the list of "approved" applications under the parental controls (Application Restrictions list), but these apps either don't show up when searched for in her account, or won't open and run. One other example is FastStone Capture. And then I've got one application, Yahoo, that is unchecked, and yet there it is in her account, AND IT WORKS! I do not get it. The purpose of setting up an account for her was only to restrict her from deleting or uninstalling anything on the computer. I don't care (obviously within reason) what she installs, but NO TAKING AWAY! lol Here is a screenshot of the restrictions list: So, here I sit, wondering what I did wrong or forgot to do in order for these to work. I don't care if Yahoo works or not, but her photos being transferred, Chrome/FF working, and FastStone Capture working are a priority for me. If anyone has any ideas, I would appreciate any advice. Thanks! Edit: also, my account is an administrative one. The account I am creating for her is just a standard user account.

  • USPTO site asks for Quicktime Plug-in which I already have installed. Why?

    - by Kensai
    Whenever I try to view the images of a patent on the USPTO site (example) using Firefox, the browser asks me to download the latest QuickTime manually. This is totally strange because I already HAVE the latest plug-in (it even appears in my Firefox add-ons list). In the past I have only been able to see patent images using Safari, never with Firefox. Is it a USPTO problem or a Mozilla one? Is there a way to fix it? Edit: I can't see the TIFF images with Internet Explorer (both the 32-bit and 64-bit versions) or with Chrome either. None of these browsers know how to open the embedded TIFF images because they don't recognize the installed QuickTime plugin. A USPTO conspiracy to promote Safari? Come to think of it, I had this problem on my old computer as well. It had 32-bit Vista; now I have 64-bit Windows 7. I hate TIFF and can't find Mozilla-specific information anywhere... Argh, am I the only one here with this freak problem?!

  • Subdomains not working with virtual hosts on apache2 ubuntu

    - by cy834sh4rk
    I'm trying to set up a subdomain on my EC2 account but can't figure out what's going on. I've looked for a few hours and haven't been able to find an answer :-/ I'm trying to set up the subdomain using virtual hosts, but no matter what I try the browser can't find the subdomain :-( I have the following vhost files set up. apache2/sites-available/mysite (this site currently works):

        <VirtualHost *:80>
            ServerName mysite.com
            ServerAdmin webmaster@localhost
            DocumentRoot /home/sites/mysite
            <Directory /home/sites/mysite>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/mysite-error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/mysite-access.log combined
        </VirtualHost>

    apache2/sites-available/red (this is the subdomain I'm trying to set up):

        <VirtualHost *:80>
            ServerName red.mysite.com
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/red
            <Directory /var/www/red>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/red-error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/red-access.log combined
        </VirtualHost>

    Apache mod_rewrite is enabled. I've enabled both sites using a2ensite, and I make sure I restart Apache every time I make a change. /etc/hosts:

        127.0.0.1 localhost
        127.0.0.1 mysite.com
        127.0.0.1 red.mysite.com

    Any help would be appreciated. Thanks!
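    The vhost files only tell Apache what to do once a request for red.mysite.com reaches the box, and the /etc/hosts entries only affect name resolution on the server itself, not in the browser on your own machine. A quick hedged check, plus the kind of DNS record that is usually the missing piece (record values are examples for your DNS provider's control panel):

        # run this on the client machine you are browsing from
        dig +short red.mysite.com
        # no output means the name does not resolve publicly; add a record such as
        #   red.mysite.com.   300   IN   A       <elastic-IP-of-the-EC2-instance>
        # or
        #   red.mysite.com.   300   IN   CNAME   mysite.com.

    Once the name resolves to the instance's public IP, the red vhost above should be selected by its ServerName.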

  • Logitech QuickCam Pro 9000 & Windows 7 64-bit failing miserably

    - by Saxtus
    I am trying to install a Logitech QuickCam Pro 9000 webcam on Windows 7 64-bit. If I skip the Logitech drivers and use the Windows Update ones instead, the camera works, but with a low frame rate and without face tracking and all the other bells and whistles its full driver provides. The moment I install the latest official Logitech driver, the problems begin: the camera works fine until I go into the audio settings of the LWS panel or of Windows itself. Then LWS freezes, and with it everything that tries to output audio. I am not able to open the Playback/Recording devices window (it just doesn't appear), and the system gets unstable and slow, with the LWS.EXE process refusing to close even when forced. If I reboot and leave the camera connected, this situation continues and the system is unstable from the start. If I reboot without the camera connected, everything works fine until I connect it and try to do something with the audio settings of Windows or the LWS panel. I should note that until the freeze occurs, the camera works as expected, with full frame rate, face tracking and everything else it is supposed to do. The sound card is the ASUS SupremeFX II on the ASUS Striker II Extreme motherboard. Any ideas what is causing this, or what else to try so I can make it work as advertised? Thank you.

  • Problem with network after malware attack

    - by Cruelio
    I'm trying to help some friends with a Windows XP machine. I got rid of the malware using Malwarebytes and HijackThis, but now they (I) have another problem. When the computer boots into Windows it seems fine. When I start Internet Explorer the browser window opens just fine, but nothing happens for a minute or two. After the two minutes of waiting, the network icon appears in the taskbar next to the clock, and then everything works. The computer is connected to the internet using an Ethernet adapter. I have looked at the Event Log and found an error from PerfNet with event ID 2004:

        <Provider Name="PerfNet" />
        <EventID Qualifiers="49152">2004</EventID>
        <Level>2</Level>
        <Task>0</Task>
        <Keywords>0x80000000000000</Keywords>

    What I have tried so far: in Device Manager I uninstalled the Ethernet adapter and installed it again; I uninstalled and reinstalled the Windows File and Printer Sharing service; I verified that both the Server and Workstation services are started. What should I do next?
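    Malware frequently hooks itself into the Winsock LSP chain, and removal tools can leave that chain broken, which fits the symptom of networking coming up only after a long delay. A minimal sketch of the usual repair on Windows XP (run from a command prompt and reboot afterwards; creating a restore point first is a reasonable precaution):

        rem rebuild the Winsock catalog and reset the TCP/IP stack
        netsh winsock reset
        netsh int ip reset resetlog.txt
        ipconfig /flushdns

    If the delay persists after the reset, a tool such as Sysinternals Autoruns can show whether any leftover LSP or service entries are still being loaded at boot.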

  • How can I cause Task Scheduler to "fail" if a dialog box returns a certain result?

    - by Roger
    I'm working on a VBScript to do a weekly reboot of all machines on our network. I want to run this script via Task Scheduler. The script runs at 3:00 AM, but there is a small chance that users may still be on the network at that time, and I need to give them the option to terminate the reboot. If they do so, I would like the reboot to occur the next night at 3:00 AM. I've set Task Scheduler up to repeat in this way. So far, so good. The problem is that if the user selects "Cancel" in my script, Task Scheduler does not see my task as failed, and won't run it again the next night. Any ideas? Can I pass an error code to Task Scheduler or otherwise abort the task via VBScript? My code is below:

        Option Explicit

        Dim objShell, intShutdown
        Dim strShutdown, strAbort

        ' -r = restart, -t 600 = 10 minutes, -f = force programs to close
        strShutdown = "shutdown.exe -r -t 600 -f"
        Set objShell = CreateObject("WScript.Shell")
        objShell.Run strShutdown, 0, False

        ' go to sleep so message box appears on top
        WScript.Sleep 100

        ' Input Box to abort shutdown
        intShutdown = (MsgBox("Computer will restart in 10 minutes. Do you want to cancel computer restart?", _
            vbYesNo + vbExclamation + vbApplicationModal, "Cancel Restart"))

        If intShutdown = vbYes Then
            ' Abort Shutdown
            strAbort = "shutdown.exe -a"
            Set objShell = CreateObject("WScript.Shell")
            objShell.Run strAbort, 0, False
        End If

        WScript.Quit

    Appreciate any thoughts.
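    The script can signal the result back to Task Scheduler through its process exit code: WScript.Quit takes an optional integer, and a non-zero value shows up as the task's "Last Run Result". A minimal sketch of the end of the script under that assumption (whether the scheduler's "restart on failure" option then re-runs the task depends on how the task is configured, so it is worth testing on your setup):

        If intShutdown = vbYes Then
            ' Abort shutdown and report failure to the scheduler
            strAbort = "shutdown.exe -a"
            objShell.Run strAbort, 0, False
            WScript.Quit 1   ' non-zero exit code = the reboot did not happen
        End If

        WScript.Quit 0       ' zero = success, the reboot is going ahead

    An alternative that avoids relying on exit-code handling is to have the cancel branch register a one-off task for 3:00 AM the next day with schtasks.exe.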

  • Why is Apache htdigest authentication failing in IE10 on Windows 8?

    - by Kevin Fodness
    One of our developers reported that for the past week or two, the htdigest authentication we have set up on our test sites in Apache has not been working in IE10 on Windows 8. It's fine in IE10 on Windows 7, and it's fine in Chrome on Windows 8. The specific behavior is: navigate to a site with htdigest authentication enabled, the username and password prompt pops up, enter the correct username and password, and the prompt pops up again. Potentially useful information:

        All patches applied on the Windows 8 box
        No additional software on the Windows 8 box other than Outlook 2013 and a browser test suite (Chrome, Firefox, Opera, Chrome Canary, Opera Next)
        Win8 running in a virtual machine on Xen
        Same behavior can be replicated on Win8/IE10 on Browserstack.com
        Server running Ubuntu 10.10 with Apache 2.2.16

    This feels like a patch was applied to the Windows box that broke digest authentication for IE10 on Win8 (the box is configured for automatic updates). However, without knowing a specific date I can't really nail this down. Has anyone else experienced this problem? EDIT: this problem only happens in the "Metro" interface, not when running IE10 in desktop mode. As of a few weeks ago, it worked fine even in the "Metro" interface.

  • Windows 7 fails to connect to the internet a few minutes after startup

    - by SageTheGreat
    Problem: earlier today, when I turned on my desktop computer, my internet connection worked fine. Cryptocurrency miners were connecting and hashing as usual and I could browse websites. But after a few minutes my miner failed, indicating that there is something wrong with my internet connection. I tried refreshing my browser and it got stuck at "resolving host", then presented an error. After that, I can't browse sites any more. The weird thing is that the network icon in Windows 7 shows no sign of a problem. Solutions tried: restarted my computer without doing anything else (problem persists); ran the Windows network troubleshooter (reported no problems); stopped Bonjour (still no progress); booted Windows using Last Known Good Configuration (still no progress); restarted the modem (no change). Current status: I did a system restore to a point before installing the latest update from Microsoft, because earlier today I installed some updates and after that the problem started to appear. (After the system restore, same problem.) Latest programs installed before the problem: MS Visual Studio 2013 (but the internet still worked fine after the install). I hope someone can provide answers to this problem; it is my first time encountering it. EDIT: additional info: OS: Windows 7 SP1 64-bit; AV: Avast Free Antivirus; internet connection type: Ethernet. It appears that my laptop can't even connect to the machine through Remote Desktop, while my laptop and phone on WiFi work fine and can connect to the internet. EDIT 2: whenever I boot into Safe Mode, my internet is fine.

  • PHP sessions corrupt

    - by Baversjo
    Using the symfony framework 1.4 I have created a website, with sfGuard for authentication. Now, this works great on WAMP (Windows): I can log in to several accounts in different browsers and use the website. I have Ubuntu Server 9.10 running Apache (everything up to date and default configuration). On my server, when I log in to the website in one browser it works great. But when I log in on my other computer with another user account on the public website, the login is successful; yet when I refresh or go to another page, the first user is shown as logged in instead! Also, when I press logout, it does not show me as logged out after the page loads; when I press F5 again, I'm logged out. As mentioned, all this works as expected on my local installation. I'm thinking there is something wrong with my PHP session configuration on my Ubuntu server, but I've never touched it. Please help me. This is a school project and I'm presenting it today :(
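    Sessions apparently leaking between two different machines usually points at whole responses being cached and replayed rather than at PHP's session storage itself. In symfony 1.4 the first thing worth ruling out is the view cache for the application. A minimal sketch of the check, assuming the app is called frontend (file names follow the standard symfony 1.4 layout):

        # apps/frontend/config/settings.yml
        prod:
          .settings:
            # if this is on, pages rendered for one authenticated user can be
            # served to another; keep it off, or add cache.yml rules that
            # exclude authenticated pages
            cache: false

    After changing it, clear the cache with ./symfony cc. If the view cache is already off, the next suspects are any caching layer sitting in front of Apache on the Ubuntu box.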

  • Recommended open-source firmware for ASUS RT-N16

    - by MasterF
    I have recently acquired an ASUS RT-N16 router. My original plan was to install Tomato on it. However, after checking their website I found out that the firmware has not been updated in the last two years. There seem to be a few updated mods, but none of them really seemed mature/stable/well-documented. I would like to know what other people recommend as open-source firmware for this router. I know the answers will probably be subjective, so I will give a bit of background on my needs: for now I will only use the Wi-Fi on an Android phone; the connection will not be shared with anyone (so QoS is optional); I want a stable (wired) connection on my PC (for online gaming etc.); I want the (wired) download/upload speeds to be as close as possible to those achieved by plugging the Ethernet cable directly into the PC's network card (I have a 100 Mbps connection); my ISP uses PPPoE; my technical level: I am a software developer with good knowledge of bash scripting, but no experience with networking. Also, I know that I could probably just use the stock firmware (and maybe will for a while), but I'm interested in trying an open-source version (for more features, flexibility, as a learning exercise etc.).

  • .htaccess redirect to error page if port is not 80

    - by Momo
    I'm running a portable server from a USB stick. The thing is, I also have WAMP installed on my local machine, and its Apache somehow gets started on Windows startup (for some reason which I don't recall now and which can't be changed). I want to prepare my portable server for situations like this, so killing httpd.exe in the process list before starting my portable server is not an option. Anyway, because of the already-active httpd.exe, my portable server's WordPress site can only be accessed through localhost:81. This is a problem, as WordPress is very dependent on the URL and I don't want to store the URL with the port in the WordPress database. Here is what I want to do through .htaccess: on any path except the error.php file, check whether the port is not 80; if it is not port 80, redirect to /error.php?code=port. Is it possible for this to take priority over WordPress's redirection and URL handling? In error.php I provide info on how to manually close httpd.exe and so on, so my family and friends can access the portable site. It's sort of like a gallery and calendar application for events and other such stuff... Please help; I can't figure it out at all. I know others may not have Apache already running, but I want to prepare for such a situation. Something like the following, but the following doesn't work:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        <If "%{SERVER_PORT} = 80">
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </If>
        <Else>
        RewriteEngine On
        RewriteRule ^(error.php)($|/) - [L]
        RewriteRule ^(.*)$ /error.php?code=port [L]
        </Else>
        </IfModule>
        # END WordPress

    By the way, the portable server Server2Go automatically generates vhosts based on the hostname set in its config file, and changes ports if the port (e.g. 80) is already in use.
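    One likely reason the snippet above does nothing is that the <If>/<Else> directives only exist in Apache 2.4, while many portable WAMP-style bundles ship Apache 2.2, where they are simply invalid in .htaccess. A minimal sketch of the same idea using only mod_rewrite, which works on 2.2 as well (assuming error.php sits in the web root):

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /

        # anything not arriving on port 80 goes to the error page,
        # except requests for the error page itself
        RewriteCond %{SERVER_PORT} !^80$
        RewriteCond %{REQUEST_URI} !^/error\.php [NC]
        RewriteRule ^ /error.php?code=port [L]

        # standard WordPress front controller, only on port 80
        RewriteCond %{SERVER_PORT} ^80$
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{SERVER_PORT} ^80$
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

    Because these rules run before WordPress's own PHP-level redirects, they take priority over its URL handling for any request that did not arrive on port 80.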

  • How do I secure Sql Server 2008 R2

    - by Mark Tait
    I have both a dedicated server and a VPS (from Fasthosts); the web sites/applications I run on these access SQL Server instances stored on the same web server. Until now, I have logged on to SQL Server on both the dedicated and the VPS server from SQL Server Management Studio, until I noticed in my server application logs multiple attempts to log on to SQL Server using the 'sa' username with a failed password. So someone (or a bot) is trying hard, repeatedly every couple of hours with approximately 20 attempts each time, to log on... so obviously I have to lock down remote access to SQL Server. What I have done is gone into Configuration Manager, and under SQL Server Network Configuration > Protocols for SQL2008 and also under SQL Native Client 10.0 Configuration > Client Protocols, I have disabled Named Pipes and TCP/IP (VIA is disabled by default). I have left Shared Memory enabled. I also disabled the SQL Server Browser under SQL Server Services. Now the only way I can manage the databases on these servers is by logging on to them via Remote Desktop. Can anyone confirm whether this is the correct way of stopping anyone maliciously logging on to SQL Server? (I'm not a DBA or security expert, and there are hundreds of articles advising all different ways, but I was hoping for the experts here to confirm, or otherwise, whether what I've done is correct.) Thank you, Mark
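    Disabling the remote protocols does shut the door, but it is also worth making the 'sa' account itself a dead end, since that is the login the attempts are aimed at. A minimal hedged sketch in T-SQL, run from SSMS while connected over Remote Desktop (the new name is just an example; make sure another sysadmin login exists first):

        -- stop 'sa' from ever being usable, then rename it so automated
        -- attacks are guessing at a login that no longer exists by that name
        ALTER LOGIN sa DISABLE;
        ALTER LOGIN sa WITH NAME = [srv_admin_disabled];

    If Windows-only (integrated) authentication is workable for the applications, switching the instance out of mixed mode removes SQL logins like sa from the remote attack surface entirely.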

  • Can I run Excel 2010 on a server?

    - by Glen Little
    This question is not about a person using Excel on a computer that happens to have a Windows Server OS. And it is not about using any SharePoint Services features! The question is about automated processes that use code (Office Automation) to open Excel files, manipulate them, run calculations, read data, save copies of the files and close them... all in code. In previous versions of Excel the licensing agreement prevented use on a public server, notes from Microsoft warned about the problems of trying to use Office Automation in a server environment, and we were warned that Excel was single-threaded and not designed for use on a server. Most of the articles about this were written before Office 2010. But now, Excel 2010 is designed to work on a High Performance Computing server using HPC Services for Excel. One HPC document mentions "Windows HPC Server 2008 R2 includes a comprehensive pop-up manager that can handle occasional dialog boxes and pop-up messages". So my question is... is it now "safe" to run code that automates Excel 2010 on a "normal" server without using the HPC services? If not, can HPC Services for Excel work on a single server? I don't need the high-performance, distributed-computing aspect of HPC Services for Excel... just the ability to run Excel on a server. Can that now be done? Thanks, Glen

  • Adobe e-book reader ("Digital Editions") not downloading on Mac OS X

    - by doug
    I recently bought a couple of books from ebooks.com, which I thought would be ordinary PDF files. After paying for them and downloading them, you learn that while they are PDF files, they come with a lot of DRM baggage. The most conspicuous restriction is that you can only view these files using an Adobe ebook reader called Adobe Digital Editions. (Note: this is not the ordinary Adobe Acrobat Reader, or anything close; it's a dedicated app for reading DRM-laden files.) Fine, I'll know better next time. Still, I paid for these books and there's only one way I can actually read them, which happens to be an app that I seem to be unable to download. Here's the error message I get: "Couldn't write the application to the hard disk. Please verify the hard disk is available and try again." I've tried several different browsers. My rig is an MBP, OS X 10.6.2. I've also checked the Adobe boards and this doesn't appear to be a known issue, nor could I find anything on their discussion forums. And just to be sure, I've checked my hard disk: no problems, plenty of space, and I have never had any problem downloading other apps.

  • Write permissions LAMP (Debian Lenny)

    - by letseatfood
    I am working on a PHP script that transfers files using FTP functions. It has always worked on my production server (which is a hosting service). The development server I have just set up (I am a novice with servers) is Debian Lenny with Apache 2, PHP 5, and MySQL 5. The file transfer works correctly, but once the file has been written to the server it has permissions of 600. This makes it impossible for me to view the file (a JPEG) in the web browser, as permission is denied. I have scoured the internet and even broken my server installation and reinstalled it trying to figure this out (which has been fun, nonetheless!). I know it is unwise to set 777 permissions on publicly accessible files, but even that does not solve the problem. The only thing that works is if I chmod 777 thefile.jpg after it has been transferred, which is not a workable solution. I tried changing the owner of my site files to www-data per this post, but that also does not work. My user is mike, and it does not work whether the owner of the files is mike or root. Would somebody point me in the right direction? Thanks! And, of course, let me know if I can clarify anything.
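    The 600 permissions most likely come from the FTP daemon's umask on the new Debian box rather than from PHP itself, which would explain why the same script behaves differently on the hosted server. Two hedged options: loosen the FTP daemon's umask (e.g. a umask of 022 in its config so new files land as 644), or have the script fix the mode explicitly after the transfer. A minimal sketch of the latter, assuming the script uses PHP's ftp_* functions and $conn holds the existing connection (file paths are placeholders):

        <?php
        // upload, then make the file readable by Apache (www-data)
        $localFile  = '/tmp/thefile.jpg';               // placeholder
        $remoteFile = 'public_html/images/thefile.jpg'; // placeholder
        if (ftp_put($conn, $remoteFile, $localFile, FTP_BINARY)) {
            // 0644 = owner read/write, group and others read-only
            if (ftp_chmod($conn, 0644, $remoteFile) === false) {
                error_log("ftp_chmod failed for $remoteFile");
            }
        }

    ftp_chmod has been available since PHP 5, so it should work on the setup described; if the FTP daemon rejects SITE CHMOD, falling back to chmod() on the local path (since the FTP server and web server are the same machine) achieves the same thing.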
