Search Results

Search found 10445 results on 418 pages for 'basic'.


  • query keepalived

    - by tdimmig
    Note: I have trouble deciding what should go on Server Fault and what should go on Super User; if some kindly admin decides this is in the wrong place, please move it for me - many thanks. I am implementing a basic HA system with keepalived. I only want to be notified of the failover in the case of hardware failure. I do, however, have the servers switch roles periodically. I have a track_script running on the backup that will vary its return between 0 and 1 on an interval (once a week, once a month, whatever). Upon returning 0 the priority is raised above that of the master; upon returning 1 the priority is lowered again. This way they trade places on the configured interval. The question: what can I do to tell the difference between a switch caused by my script and a switch caused because one of the servers died? I certainly want to be notified when there is an actual problem, but not every time the servers change places because of the script. I see that version 1.2.7 has SNMP support and I may be able to use it to get some information that could tell me one way or another, but to be honest I've never used SNMP before and I don't know how to get the information I want with it (my Google-fu failed me). (A config sketch of the rotation setup follows this entry.)

    Read the article
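    For the scheduled role swap described above, here is a minimal keepalived sketch. The script paths, interval and weight are illustrative assumptions, not taken from the question; the idea is that the rotation script records whether the current transition is a planned one, so the notify script can suppress alerts for planned swaps and only send mail on a real failover.

        # Minimal sketch - hypothetical paths and values.
        # rotate_check.sh exits 0 when the backup should take over on schedule and 1
        # otherwise, and writes its last decision to /var/run/keepalived.rotation.
        vrrp_script chk_rotate {
            script "/usr/local/bin/rotate_check.sh"
            interval 60
            weight 60        # enough to swing the effective priority past the peer
        }

        vrrp_instance VI_1 {
            state BACKUP
            interface eth0
            virtual_router_id 51
            priority 100
            track_script {
                chk_rotate
            }
            # notify.sh checks /var/run/keepalived.rotation and only sends an alert
            # when the transition was not triggered by the rotation script.
            notify_master "/usr/local/bin/notify.sh master"
            notify_backup "/usr/local/bin/notify.sh backup"
        }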

  • Windows takes a very long time to shut down even in safe mode

    - by user1526247
    On Windows 7 the computer freezes for about 5 minutes once it gets to "Shutting down...". I can't remember when it started happening; I just lived with it for a while. The first thing I tried was a full scan using Microsoft Security Essentials. This did not solve the problem. I then went into msconfig and turned off everything I could get away with in the startup and services tabs. This did not solve the problem. I then uninstalled every program on this computer save the most basic programs. This did not solve the problem (I did not uninstall drivers or Catalyst). I then went through and turned off every single service and did a reboot. This did not solve the problem. I then booted into safe mode and just tried shutting it down. The problem even happens in safe mode. I have tried examining the event logs, but with no success. They just say things like "blah blah has entered the stopped state" with no real clues about what program is causing me all this grief. It may be worth noting that Ubuntu is installed on the same computer and the Ubuntu boot loader is the one being used.

    Read the article

  • Query specific nameserver for a particular domain upon VPN connect

    - by MT
    Some background: I have a work laptop with Ubuntu 9.10 on it. I have a small network at home where I've been running some basic services (for myself/my family) for some 10 years. In my home network there is a nameserver (Fedora) running BIND 9 with two "views". One view is the "outside" view, and it provides name resolution (to the Internet at large) for email, a wiki, and a couple of blogs. The "inside" view provides name resolution (to the internal RFC1918 addresses of these servers) as well as for all the inside hosts, network equipment, etc. I connect with an OpenVPN client to my home network from outside (such as work). What I'd like to be able to do is resolve names on my internal network across this VPN (so I get the RFC1918 "inside" responses) without fully changing my resolver to the DNS server at my house. For example, if I connect to the VPN from work, I can change my resolver (by editing resolv.conf) to the DNS server at my house (across the VPN) and then successfully resolve all of the inside DNS names on my home network. The issue I have with this is that now I'm no longer able to resolve "inside" names provided by my work's DNS servers (because I'm using my home DNS server). Alternatively, I can connect to the VPN and access my home servers via IP addresses directly, but this is inconvenient and causes issues with Apache name-based hosting (among other things). In the end, the effect I'm trying to achieve is as follows: when I connect to the VPN, I automatically start sending DNS requests for *.myhomedomain.com to my home nameserver, but any other requests continue to go to the nameserver I was using before (the one I received on my company LAN via DHCP). When I disconnect the VPN, requests for *.myhomedomain.com go back to the local LAN DNS server (i.e. all requests go there). I'm looking for suggestions as to how this can be accomplished. (A sketch of one possible approach follows this entry.)

    Read the article
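    One common way to get this kind of per-domain split (not something taken from the question, so treat it as an assumption) is to run a small local dnsmasq forwarder, point resolv.conf at 127.0.0.1, and let dnsmasq route just the home domain to the VPN-side nameserver. A minimal sketch with placeholder addresses:

        # /etc/dnsmasq.conf - sketch only, addresses are placeholders
        # Queries for the home domain go to the nameserver reached over the VPN...
        server=/myhomedomain.com/192.168.10.1
        # ...everything else goes to the nameserver learned from the work LAN (DHCP).
        server=10.0.0.53

    An OpenVPN "up" script could drop the first server= line into /etc/dnsmasq.d/ and a "down" script could remove it (reloading dnsmasq each time), so the split only exists while the tunnel is up.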

  • Why does loading an HTML page automatically open the printer dialog?

    - by Alex
    I have the problem that loading a certain webpage in Firefox automatically opens the printer dialog. How can that be? Additional information: this only happens with Firefox 24.0 on Windows 7. It happens neither in Windows Explorer on Windows 7 nor in Firefox 24 on a Linux system. It also happens when using Firefox safe mode, with all add-ons disabled. I cannot post the webpage, since it is not public and is restricted. Multiple JavaScript files are used, but none contains the expression print(). The page content does not contain the phrase 'printer'. It has nothing to do with printing something. This happens sometimes, but not always. I cannot say for sure that the URL and/or the content is always identical. I can answer any other questions, like 'Does the page contain this and that...', 'Does the URL contain this and that...'. Basic question: what could be the reason for the annoying printer pop-up?

    Read the article

  • Random HTTP 413 error on Apache2/PHP/Joomla site

    - by jfab
    I have a Joomla site, and every once in a while when I submit something via a form, I get an HTTP 413 error: Request Entity Too Large - "The requested resource /index.php does not allow request data with POST requests, or the amount of data provided in the request exceeds the capacity limit." In the error.log file I get: "Invalid Content-Length, referer: [site]/index.php". It doesn't seem this has anything to do with the actual size of the request, for the following reasons: a) I tinkered with the configuration of both Apache and PHP. In Apache I tried increasing LimitRequestBody, and in PHP post_max_size, max_input_vars, memory_limit, and even upload_max_filesize; every value is far beyond what is sent in a typical request that generates an error (the directives involved are sketched after this entry). b) The error pops up quite randomly, and often just hitting refresh allows me to get through. c) I checked the request in Fiddler to make sure everything is right with the Content-Length stated in the header, and with the content of the request itself. Everything appears to be in order. A curious thing is that when I resent the exact same request via Fiddler, I never got the error. It seems I can only recreate it through a browser. So I'm at my wit's end here. I don't even know where to look for the problem anymore. I don't know if it's Apache (though I can't find anything in the PHP error logs, so maybe that means Apache is the more likely culprit?), or PHP in general, or my Joomla site in particular (my bets were on Joomla until I recreated the error with a test script containing a very basic POST form, though it does pop up much more often on the Joomla site). If anyone can give any advice on where to even begin with this, I'll be very grateful!

    Read the article
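    For reference, these are the limits named in the question, collected in one place with illustrative values (the actual values used are not given in the question, so the numbers are placeholders):

        # Apache - in the vhost or a <Directory> block; 0 means "no limit"
        LimitRequestBody 0

        ; php.ini - placeholders set well above any form post the site makes
        post_max_size = 64M
        upload_max_filesize = 64M
        max_input_vars = 5000
        memory_limit = 256M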

  • allow public access to subfolder of protected folder on apache

    - by UnnamedMook
    I have password-protected the root folder of my website while I do maintenance, but I want to display a custom 401 error page to let people know the site is under construction. Unfortunately, my web host doesn't allow me write access to anything outside the root folder of my website, so this custom error page must be stored in the root folder or one of its subfolders. Instead of my custom error page I get the Apache default error page, and it also says "Additionally, a 401 Authorization Required error was encountered while trying to use an ErrorDocument to handle the request." I searched for ways to make a subfolder of a protected directory public, and all I could find was to use the "Satisfy any" directive, but this doesn't work for me. It doesn't work on a file-only basis either, as with the .htaccess file below. (A sketch of one working arrangement follows this entry.)

        #Authorization Restriction
        AuthType Basic
        AuthName "Access to root"
        AuthUserFile *********************************
        Require user ***********
        Order Allow,Deny
        Satisfy any

        #Error Documents
        ErrorDocument 401 Error-401.html

        #Allow access to error documents
        <Files Error-*,html>
            Order Deny,Allow
            Allow from all
            Satisfy any
        </Files>

    I can only use .htaccess files; I don't have access to httpd.conf.

    Read the article
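    A minimal sketch of one arrangement commonly used for this, in the same Apache 2.2 access-control style the question already uses. Note that the question's <Files Error-*,html> contains a comma where a dot was presumably intended; the sketch assumes the error page is literally named Error-401.html and sits in the protected root:

        # .htaccess in the protected root - sketch only, placeholder password file path
        AuthType Basic
        AuthName "Access to root"
        AuthUserFile /path/to/.htpasswd
        Require valid-user

        # The error document itself must be readable without credentials,
        # otherwise Apache reports the extra 401 mentioned in the question.
        ErrorDocument 401 /Error-401.html

        <Files "Error-401.html">
            Order Allow,Deny
            Allow from all
            Satisfy Any
        </Files>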

  • Advanced cell selection in Excel

    - by Supuhstar
    I am new to this flavor of Stack Exchange, so if this belongs elsewhere, please move it; I figured this would be the best place, though. I am making an Excel worksheet that simply stores basic financial data in 5 columns (Check Number, Date of Transaction, Description, Profit from Transaction, and Balance After Transaction) and indefinite rows. Each worksheet represents one month, and each workbook represents a year. As I make or receive a payment, I store it as a new row, which, inherently, makes the number of rows per month indefinite. Each transaction's Balance cell is the sum of the Balance cell of the row above it and the Profit cell of its own row. I want each month to start off with a special row (the first one after the column headers) that displays a summary of the last month's transactions. For instance, the Balance After Transaction cell would display the last row's balance, and the Profit from Transaction cell would display the overall profits of the month. I know that if I knew every month had exactly 100 expenses, I could achieve this for March with the following formulas for profit and balance, respectively: =February!E2 - February!E102 and =February!E102. However, I do NOT know how many rows will be in each month's table, and I'd like to automate this as much as possible (for instance, if I find a missed or duplicated expense in January, I don't want to have to update all the formulas that point to the ending January balance). How can I have Excel automatically use the last entered value in a column, in any given Excel spreadsheet, in a formula? (A formula sketch follows this entry.)

    Read the article
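    The question boils down to "give me the last entered value in a column". A sketch, assuming (as in the question) that the balances are numeric and live in column E of the February sheet:

        Last numeric value in February's column E, even with blank rows in between:
            =LOOKUP(9.99E+307, February!E:E)

        Equivalent, assuming the column has no gaps (COUNTA counts the header plus every filled row):
            =INDEX(February!E:E, COUNTA(February!E:E))

    Either expression can stand in for the hard-coded February!E102 in the original formulas.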

  • How can I tell System Restore in Windows 7 Recovery Console to use my recovered backup drive's restore point data?

    - by Rich Shealer
    My Windows 7 desktop PC failed to boot. It would get to a grayish screen with a mouse and would only respond to the power button. After much examination I found that the problem was not a failed drive, as running CHKDSK from the Recovery Console on my main drives passed without any errors. I had been installing various Java versions in the days before the failure, so I decided to use a restore point to roll backwards. I have an external SATA drive controller with two 2 TB drives mirrored using the Windows mirroring function. My system has been backing up to this drive regularly. The problem is that I accidentally broke the mirror when testing to see if this drive system might have been causing my boot issue. Connecting it to another machine showed two dynamic drives that were invalid. In the end I reformatted one as an NTFS basic disk and used recovery software on the other to copy all of the files to the reformatted drive. I had to copy the restore points into the new drive's System Volume Information folder by granting rights to that user. I moved the drive back to the original machine and rebooted. I can see my new drive; it even uses the same drive letter as it did in normal mode. Running System Restore, it lists a new automatic restore point created while sitting at the Recovery Console, along with all of my backups. Selecting the backup I want (or any other), I get a dialog: "The backup drive could not be found. System Restore is looking for restore points on your backup. Make sure the backup drive is on and connected to this computer and then click OK." What do I need to do to allow System Restore to see the restore points?

    Read the article

  • Windows 7 Icons, Buttons, and Tabs corrupted...Professional 32-bit

    - by xhyperx
    The other day, about two or three days ago, I was simply typing in a Microsoft Word document when my screen froze. After a few moments, it went black... I thought it was my video hardware (dual nVidia 9800 GTs). Anyway, I did a hard reboot and chose to Start Normally. The system blue screened, telling me there was a failure in the Memory Manager. So then I thought maybe a RAM failure or video memory failure. I attempted a reboot again; this time I was presented with the option to repair Windows, so I went with that. The repair app finished and did an auto reboot. This time I got all the way back to my desktop, where in a matter of about 30 seconds the system blue screened again and pointed to the Memory Manager as the area of cause. Again I rebooted, the repair tool came up again, and I allowed it to do its thing, deciding that if the same failure occurred I'd begin pulling hardware to see at what point I may have found the possibly defective party. However, this time it rebooted, I got back to the desktop and there was no crash. All looked well, until I looked at the balloon messages when hovering over the System Bar icons. Also, when I opened any of my browsers, the tabs had no text, and any window that pops up with regular buttons (OK, Cancel, etc.) looks weird. The buttons are really, really long and have no text. So it seems like the system is once again running smoothly, however something has gotten corrupted... something relating to drawing basic Windows user interface objects. Help... all ideas are respected and appreciated. Have a great day everyone!

    Read the article

  • Windows OS level file open event trigger

    - by john
    Colleagues, I need to run a script/program on certain basic OS-level events - in particular, when a file in Windows is opened. The open may be read-only or for editing, and may be initiated by a number of means: from Windows Explorer (open or ), selected from a viewing or editing application via the native file chooser, or dragged and dropped into an editing or viewing application. Further, I need the trigger to hold the event back from completing the action until the handler program has finished running. The event handler program may return a pass state or a fail state. If a fail state is returned, then the event must disallow the initially requested action. Lastly, I need to add to the file in question a property or attribute that will contain metadata that the event trigger handler program will use to determine the pass/fail condition, which ultimately decides whether the user is permitted to open the file. Please note that this is NOT a Windows event log situation, but an OS-level file-open event. Thanks very much for your help. Regards, j

    Read the article

  • How to create a WHM/cPanel account, without creating a new sub-domain?

    - by Cyclops
    I have a basic VPS (full root access) with WHM/cPanel, and am learning the ropes. I'm trying to create a new account for an existing domain (mysite.com), and so far WHM won't let me - it either wants a sub-domain or a fake domain, but won't allow two accounts for one domain. In the beginning, there was only the root account, and it wouldn't let me log in to cPanel - a quick chat with tech support, and I was informed that I needed to create a second account, which I did. So now I have an account, call it ns1me, for domain mysite.com. Now I want to create a django account. I go through the same process, but WHM won't allow me to use mysite.com as the domain for django. The docs recommend a sub-domain, so I fill the box in with django.mysite.com. I then realize that this has actually created a sub-domain - going to django.mysite.com shows me its home directory, along with helpful information about what version of Apache, Python, and other mods it's running (thanks, Apache). I really don't want a sub-domain, so that's out. Another chat with tech support, and they recommend a fake domain name, as it won't create anything. Sure enough, using a domain of djangomysite.com works, and WHM allows me to create a django account. But of course, I can't send email to [email protected] (where I could to [email protected]). What I want is to be able to create a second account associated with mysite.com (so I can run cPanel logged in as django, send email to [email protected], etc.) - without creating a whole new sub-domain or fake domain.

    Read the article

  • How to set up multi users on dev server with git and github

    - by Derek Organ
    I'm working on a LAMP application. We have two servers (Debian), Live and Dev. I constantly work on Dev, mainly to add new features and fix bugs. When I'm happy that all works well, I scp the relevant code to the Live system. The database (MySQL) is local to each machine. Now, this is a pretty basic setup really, and I want to improve the workflow a bit. I use Git and GitHub for version control. Admittedly I've only really used one branch. There can be three different developers who work on the code at different times. We all use the same Linux username to connect to the dev server and edit the code directly when needed. I usually then commit and push the code to GitHub at the end of the day. One thing to bear in mind is that it isn't easy to run this code on a local machine, as there are many Apache and subdomain configurations that wouldn't work locally, so it is important to work on the dev server rather than locally. I need to create a new process because we now need a main trunk plus a branch with a big code re-write. What is the best way to do this? Should I create different Unix logins for each developer and set up different working areas on the dev server for their changes? e.g. /var/www/mysite_derek /var/www/mysite_paul /var/www/mysite_mike My thinking is that they can do a pull from the main branch, then create their own branch and merge it back in. I'm not sure how this will work, though, with Git locally and with GitHub. Will I need to create different GitHub user accounts as well? I'd like to do this the 'right' way and future-proof it for having lots of potential developers, but I also don't want to over-complicate it. A simple and elegant solution is preferred. Any recommendations or suggestions? (A sketch of one possible workflow follows this entry.)

    Read the article
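    A sketch of one workflow that matches the per-developer direction suggested in the question: separate Unix accounts and working copies on the dev server, one shared GitHub repository. Paths, branch names and the remote URL are placeholders:

        # As developer "derek", in his own checkout on the dev server
        git clone git@github.com:example/mysite.git /var/www/mysite_derek
        cd /var/www/mysite_derek

        # Long-lived branches: master stays deployable, "rewrite" holds the big re-write
        git checkout -b rewrite origin/master     # create the re-write branch locally
        git push -u origin rewrite                # publish it so the others can pull it

        # Day-to-day work branches off whichever trunk it targets
        git checkout -b fix-login-bug origin/master
        # ... edit, test against the dev vhost pointed at this working copy ...
        git commit -am "Fix login bug"
        git push origin fix-login-bug             # merge into master once reviewed

    One GitHub account can hold several SSH keys, so separate accounts are not strictly required, though per-developer accounts give clearer commit attribution.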

  • Apache form authentication issues

    - by rfcoder89
    I am trying to authenticate users through Apache using the form authentication method, to restrict HTTPS requests to a certain folder. However, regardless of whether the correct login details are provided, it keeps reloading the same page, except that the URL has the form values embedded in it, instead of redirecting to the appropriate page. I need to use the form authentication type instead of basic so I can write my own HTML for the user to log in. I am using Apache 2.4.9 and this is our current configuration. (A sketch of the documented mod_auth_form arrangement follows this entry.) Apache config file:

        <Location C:/wamp/www/directory>
            SetHandler form-login-handler
            AuthFormLoginRequiredLocation https://localNetwork.com/username/TestBed/HTML/login.html
            AuthFormLoginSuccessLocation https://localNetwork.com/username/TestBed/HTML/test.html
            AuthFormProvider file
            AuthUserFile "C:/wamp/passwords"
            AuthType form
            AuthName realm
            Session On
            SessionCookieName session path=/
            SessionCryptoPassphrase secret
        </Location>

    And in the login HTML page I've added this for the user to log in:

        <form method="POST" action="/test.html">
            User: <input type="text" name="httpd_username" value="" />
            Pass: <input type="password" name="httpd_password" value="" />
            <input type="submit" name="login" value="Login" />
        </form>

    Read the article
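    For comparison, a minimal sketch of the layout shown in the mod_auth_form documentation: the protected area gets AuthType form plus the Session directives, while form-login-handler lives at a separate URL that the login form posts httpd_username/httpd_password to. The URLs and paths here are placeholders rather than the questioner's real layout:

        # Protected area: unauthenticated requests are bounced to the login page
        <Location "/protected">
            AuthType form
            AuthName realm
            AuthFormProvider file
            AuthUserFile "C:/wamp/passwords"
            AuthFormLoginRequiredLocation "/login.html"
            Session On
            SessionCookieName session path=/
            SessionCryptoPassphrase secret
            Require valid-user
        </Location>

        # Separate handler URL that processes the posted credentials
        <Location "/dologin">
            SetHandler form-login-handler
            AuthType form
            AuthName realm
            AuthFormProvider file
            AuthUserFile "C:/wamp/passwords"
            AuthFormLoginRequiredLocation "/login.html"
            AuthFormLoginSuccessLocation "/protected/test.html"
            Session On
            SessionCookieName session path=/
            SessionCryptoPassphrase secret
        </Location>

    login.html would then post to the handler, e.g. <form method="POST" action="/dologin">, rather than to the target page itself.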

  • SVN very slow over HTTP (seems auth related)

    - by Sydius
    I'm using SVN version 1.6.6 (r40053) via the command line in Ubuntu 10.04, connecting to a remote repository over HTTP that is on the local network. For a while it worked fine, but it has recently become very slow for any operation that requires communication with the repository; it does eventually work after several minutes (~3m for svn up). Looking at Wireshark, it appears to be taking a full minute between the HTTP auth denied and the subsequent request containing credentials. The issue is local to my machine, because other coworkers running Ubuntu are not having the issue, and I've tried using my credentials from another machine and it was very fast. I tried deleting the .subversion folder in my home directory and checking everything out fresh, but it didn't help. Update: I think it's auth related. When I check out SVN repositories off the Internet over HTTP (from Google Code, for example), everything is very fast until I do something that requires a password. Before prompting for the password for the first time, it stalls for at least a minute. Update 2: I set the neon-debug-mask in the SVN settings (in /etc/subversion/servers under [global]) to 138 and it seems to spend a lot of time on 'auth: Trying Basic challenge...'. (A client-config sketch follows this entry.)

    Read the article
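    The stall at 'Trying Basic challenge...' can be attacked from the client's servers file. A sketch of the relevant section of ~/.subversion/servers (or /etc/subversion/servers); restricting the negotiated auth types to Basic is a hypothesis worth testing, not something confirmed in the question:

        [global]
        # already set by the questioner to trace the HTTP auth exchange
        neon-debug-mask = 138
        # hypothesis: a slow Negotiate/Kerberos attempt runs before the Basic
        # fallback; limiting the client to Basic skips that round trip
        http-auth-types = basic
        # cache credentials so the prompt (and the stall) happens less often
        store-passwords = yes
        store-plaintext-passwords = yes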

  • How can I change the file name of the Program Files (x86) folder?

    - by Madrauk
    Let me stop you before the "you shouldn't do that" and "you will corrupt your file paths". I know what I'm doing, but I'll give you the story to convince you. Basically, my hard drive is failing to the point where programs installed on it are not responding consistently. So, in preparation for its replacement, I'm moving as many files as possible to my secondary drive, because the new drive will be smaller (it's an emergency; I can't buy a fancy, expensive new drive) and so I can actually use my computer until the new drive comes in. The basic idea is that I've copied the contents of the Program Files (x86) folder to another spot on my other drive, and I want to replace the original folder on the C drive with a symbolic link that points to the other drive, so the programs can run from that drive and be fine, but it will save space on my C drive and simplify the moving process when the new one comes in. To do that, I need to rename the Program Files folder, make the symbolic link, hopefully delete the Program Files folder, then restart my computer so all the programs are running properly and I can confirm it works. Now that I've told you why, can anyone help me out? (A command sketch follows this entry.)

    Read the article
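    A sketch of the usual sequence from an elevated command prompt, assuming the copy on the second drive lives at D:\Program Files (x86) (a placeholder). Renaming the live folder generally fails while anything inside it is running (and its ACLs restrict even administrators), so that step is typically done after killing the relevant processes or from a secondary boot:

        rem Copy everything, preserving attributes and ACLs (placeholder target path)
        robocopy "C:\Program Files (x86)" "D:\Program Files (x86)" /E /COPYALL /XJ

        rem Get the original folder out of the way (or rd /s /q it once nothing needs it)
        ren "C:\Program Files (x86)" "Program Files (x86).old"

        rem Recreate the original path as a junction pointing at the copy
        mklink /J "C:\Program Files (x86)" "D:\Program Files (x86)"

    A junction (/J) keeps the old path valid for every installed program while the data actually lives on the other drive.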

  • Subversion: Secure connection truncated

    - by Nick
    Hi, I'm trying to set up a Subversion server with apache2/WebDAV access. I've created the repository and configured Apache according to the official book, and I can see the repository in a web browser. The browser shows: conf/ db/ hooks/ locks/. Although clicking any of those links gives an empty XML document like:

        <D:error>
            <C:error/>
            <m:human-readable errcode="2">
                Could not open the requested SVN filesystem
            </m:human-readable>
        </D:error>

    I've never used Subversion before, so I assume this is correct? Anyway, when I try to connect via a command-line client, it asks for my password; I give it, then I get the (useless) error message:

        svn: OPTIONS of 'https://svn.mysite.com': Could not read status line: Secure connection truncated (https://svn.mysite.com)

    The command I'm using is: svn checkout https://svn.mysite.com/ svn.mysite.com. Subversion was installed using Ubuntu's package manager. It's version 1.6.6 on Ubuntu 10.04. My VirtualHost configuration is below; a checkout sketch follows this entry.

        <VirtualHost 123.123.12.12:443>
            ServerAdmin [email protected]
            ServerName svn.mysite.com

            <Location />
                DAV svn
                SVNParentPath /var/svn/repos
                SVNListParentPath On
                AuthType Basic
                AuthName "Subversion Repository"
                AuthUserFile /etc/subversion/passwd
                Require valid-user
            </Location>

            # Setup The SSL Certificate Paths
            SSLEngine On
            SSLCertificateFile /etc/ssl/certs/mysite.com.crt
            SSLCertificateKeyFile /etc/ssl/private/dmysite.com.key
        </VirtualHost>

    Read the article
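    One detail worth noting, based on general SVNParentPath behaviour rather than anything stated in the question: with SVNParentPath, the checkout URL has to name a repository underneath the parent directory; checking out the bare hostname points mod_dav_svn at the parent folder itself, which is not a repository. (The conf/ db/ hooks/ locks/ listing in the browser is the internal layout of a single repository, which hints that /var/svn/repos may itself be one repository rather than a directory of repositories.) A sketch with a placeholder repository name:

        # The parent path holds one directory per repository
        sudo svnadmin create /var/svn/repos/myproject
        sudo chown -R www-data:www-data /var/svn/repos/myproject

        # Check out the repository, not the parent path
        svn checkout https://svn.mysite.com/myproject myproject-wc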

  • Apache directory access with virtual host [SOLVED]

    - by alexeygaidamaka
    I have a virtual host with a configuration like the one below. When I try to get into foobar.com/dir and provide a valid username/password pair, I get a 403 Forbidden page instead of that directory's contents. www.foobar.com/dir has 777 rights, and .htpasswd is chmoded 644, but I can't figure out why I am still not seeing the contents. Please give me a hint.

        ServerAdmin webmaster@localhost
        ServerName www.foobar.com
        ServerAlias www.foobar.com
        DocumentRoot /var/www/foobar

        <Directory />
            Options FollowSymLinks
            AllowOverride All
        </Directory>

        <Directory /var/www/foobar>
            Options -Indexes FollowSymLinks
            AllowOverride All
            Order allow,deny
            allow from all
        </Directory>

        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory "/usr/lib/cgi-bin">
            AllowOverride None
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            Order allow,deny
            Allow from all
        </Directory>

        <Directory /var/www/foobar/dir>
            AllowOverride AuthConfig
            AuthName "Authorize yourself, please!"
            AuthType Basic
            AuthUserFile /etc/apache2/.htpasswd
            AuthGroupFile /dev/null
            Allow from All
            Order Allow,Deny
            Options +Indexes      <- that one should be added
            Require valid-user
        </Directory>

    Solution (as marked in the title): you have to add the line "Options +Indexes" to the /var/www/foobar/dir block to see the directory contents; listings were otherwise disabled by the "Options -Indexes" set on /var/www/foobar.

    Read the article

  • Raspberry Pi how to format HDD

    - by Speed
    Hi, I am very new to the Raspberry Pi environment, so I'm looking for a bit of help to format a USB hard disk drive. I ran lsblk and got:

        sda    8:0   0  37.3G  0  disk
        sda1   8:1   0  37.3G  0  part

    Looking on the web, I tried the following: sudo mkfs.ext4 /dev/sda1 -L USB40gb. It did something, but when I tried to mount the drive again, it still showed the files that were there before, and I cannot create a new file/folder ("Error creating directory: Permission denied"). I am writing this from my Windows 8.1 PC so I can't cut and paste from the Pi; trying to format its output here is a bit hard. Oh, there is nothing written after the word "part" above. There used to be /media/USB40gb, so I have done something, because this has disappeared. I am using PCManFM 0.9.10. It does not have a format option, which would make life a lot easier, but then it's not Windows. I think I am running the basic Linux OS for the Pi. It boots to a graphical environment, but I do not know how to tell you exactly what it is; I think it's Openbox 2.0.4. Thanks in advance, Speed. PS: I reran the format command above, but this time I changed the label to read USB37gb. I did this to confirm that I was in fact formatting the right drive. Lo and behold, it actually formatted the drive, wiping everything from it. Great... testing it by creating a new folder on the drive, I get the error message "Permission denied"! So I have fixed the formatting issue by trial and error, but I still can't use the drive... Suggestions, anyone? (A command sketch follows this entry.)

    Read the article
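    A freshly created ext4 filesystem is owned by root, which is the usual reason the desktop user gets "Permission denied" when creating folders on it. A sketch of the typical fix, assuming the default "pi" user and a placeholder mount point:

        # Mount the new filesystem (placeholder mount point)
        sudo mkdir -p /media/USB40gb
        sudo mount /dev/sda1 /media/USB40gb

        # Hand ownership of the filesystem root to the desktop user
        sudo chown -R pi:pi /media/USB40gb

        # Optional: mount it automatically at boot with a line like this in /etc/fstab
        # /dev/sda1  /media/USB40gb  ext4  defaults,noatime  0  2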

  • NginxHttpAuthBasicModule with Sinatra & Passenger

    - by scainey
    I'm serving static pages from a Sinatra application using Nginx. I've implemented Basic Authentication for one page on the site using NginxHttpAuthBasicModule; the authentication succeeds, but Nginx doesn't resolve the link. The error log gives:

        2010/03/22 12:15:19 [error] 7143#0: *2902 open() "/home/me/live/mysite_home/public/mypage" failed (2: No such file or directory), client: 82.71.18.122, server: mysite.com, request: "GET /mypage HTTP/1.1", host: "mysite.com"

    The actual file is found at /home/me/live/mysite_home/live/mypage.erb. The configuration file is:

        server {
            listen 80;
            server_name mysite.com;
            root /home/me/live/mysite_home/public;
            passenger_enabled on;

            location /mypage {
                auth_basic "Restricted";
                auth_basic_user_file htpasswd;
            }
        }

        server {
            listen 443;
            server_name mysite.com;
            root /home/me/live/mysite_home/public;
            passenger_enabled on;

            ssl on;
            ssl_certificate /etc/nginx/conf/certs/server.crt;
            ssl_certificate_key /etc/nginx/conf/certs/server.key;
            keepalive_timeout 70;

            location /mypage {
                auth_basic "Restricted";
                auth_basic_user_file htpasswd;
            }
        }

    Not sure if this is a Sinatra, Passenger or Nginx thing, or if I'm just missing something. (A sketch of one thing to try follows this entry.)

    Read the article
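    A guess worth testing, not something confirmed by the question: the error log shows nginx trying to open a static file at public/mypage, which means the /mypage location is being served as a plain file rather than handed to the Sinatra app (the page only exists as mypage.erb, rendered by Sinatra). Re-stating passenger_enabled inside the location keeps the Basic Auth while letting Passenger route the request to the app:

        location /mypage {
            auth_basic "Restricted";
            auth_basic_user_file htpasswd;
            # explicitly enable Passenger here as well; some Passenger versions
            # do not carry the server-level setting into a nested location
            passenger_enabled on;
        }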

  • Reasons for Ajax navigation breaking on Coldfusion/Apache when running an app in iOS fullscreen mode? [closed]

    - by frequent
    Not sure if this belongs on SO or here. I'm running a web app using jQuery, jQuery Mobile and RequireJS, with Apache, ColdFusion 8 and MySQL 5.0.88 on the server side. The app works fine until I try to run it in fullscreen mode on iOS (add the icon to the home screen, launch the app from there with <meta name="apple-mobile-web-app-capable" content="yes" /> specified). This meta tag will break the jQuery Mobile AJAX navigation: the AJAX request will fail and the requested page will be loaded as a new page, thereby restarting the app on every page change. I have chased this through the whole front end, starting from RequireJS through jQuery Mobile's AJAX navigation down to the AJAX request being made in jQuery: xhr.send( ( s.hasContent && s.data ) || null ); In a regular browser this works, no problem. In fullscreen mode, this fails (readyState=0, empty response). I have found an article which argues that fullscreen mode is like a separate browser instance with different HTTP strings. On ASP.NET this results in the browser not being identified by the server and only basic browser settings being assumed (e.g. no JavaScript). I'm a little lost as to where to start looking for possible causes server-side. I have not written any server code for handling AJAX page navigation, so this must be something that is handled out of the box by ColdFusion or Apache? Question: where could I start looking for probable causes of fullscreen mode breaking AJAX navigation if I assume ColdFusion or Apache are the culprits? Is there a setting I'm missing in httpd.conf? What else could be the problem? Thanks for any input!

    Read the article

  • Display on secondary video card (Nvidia 8400 GS): horrible refresh, bogs system

    - by minameismud
    This is my work computer, but it's a small shop. We do business software development; the most hardcore thing we create is some web animations with HTML5 and fancy JavaScript/CSS. The base machine is a Dell Precision T3500 - Xeon W3550 (3.07GHz quad), 6GB RAM, a pair of 500GB hard drives, and Windows 7 x64 Enterprise SP1. My primary video card is an ATI FirePro V4800 1GB in a PCIe slot of some speed, driving a pair of 23" monitors at 1920x1080 through DisplayPort-HDMI adapters. The secondary card is an NVidia GeForce 8400 GS in a PCI slot, driving a single 17" at 1280x1024 through DVI. On either of the 23" monitors, windows move smoothly, scroll quickly, and are generally very responsive. On the 17", it's slow and chunky, and when I'm trying to scroll a ton of content, Windows will occasionally suggest I drop to the Windows Basic theme. I've updated the drivers for both cards, and I've gotten every Windows update relating to video. Specifically: ATI FirePro - Provider: Advanced Micro Devices, Inc; Date: 6/22/2014; Version: 13.352.1014.0. NVidia 8400 GS - Provider: NVIDIA; Date: 7/2/2014; Version: 9.18.13.4052. Unfortunately, new hardware isn't really an option. Is there anything I can do software-wise to speed up the NVidia-driven monitor?

    Read the article

  • Allow more websocket connections

    - by Switz
    I want to load balance my Node.js (DerbyJS, to be specific) application on a basic Linode (512MB RAM). It can probably take more than one process running at once. The queries/database do not concern me, as I'm not doing anything intensive. The problem at the moment is that it can only handle up to ~40 WebSocket connections at once. I would love it if I could get that number into the few-hundred-plus range. I anticipate a lot of traffic at launch, due to the fact that it's a highly niche community with an engaged audience, but afterwards it should be fine with just ~20-40 connections at once, which it handles perfectly as of now. I don't mind spending a bit of money for a week or two's worth of running, but I also don't want to switch production environments. How can I test the process to see how many instances I am able to run on the box? Will increasing the number of processes increase the number of WebSockets I can handle, or is that a limitation of the server's network? I have an old MacBook Pro running Linux sitting next to me that has 2GB of RAM and a 2.8GHz dual-core processor. Could I use this to handle some of the extra load? I could probably load balance with nginx to its IP; I'm on a FiOS home network. If you have any suggestions, I'd really appreciate it. Thanks. (A load-balancing sketch follows this entry.)

    Read the article
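    If nginx ends up in front of the Node processes, WebSocket traffic needs the HTTP/1.1 Upgrade headers passed through (supported in nginx 1.3.13 and later). A sketch of an upstream pool spreading connections across two local Node processes plus the second machine; all addresses and ports are placeholders:

        upstream derby_app {
            ip_hash;                       # keep each client pinned to one process
            server 127.0.0.1:3000;         # node process 1 on the Linode
            server 127.0.0.1:3001;         # node process 2 on the Linode
            server 203.0.113.20:3000;      # the spare MacBook Pro, if it is reachable
        }

        server {
            listen 80;
            location / {
                proxy_pass http://derby_app;
                proxy_http_version 1.1;                    # required for WebSockets
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection "upgrade";
                proxy_set_header Host $host;
            }
        }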

  • Slow Routing Over LAN (Wired)

    - by reverendj1
    I'm having issues with my router (an Adtran NetVanta 3458) acting very slowly. We have two networks; let's call them A and B. When I run netperf between two servers on network A (no routing involved) I get speeds along the lines of 900 Mbps, which makes sense, since we have all 1Gbps switches. When testing A to B (or vice versa) I get speeds along the lines of 22 Mbps. I have also tested by connecting my laptop to the switchports on the router and testing two servers on network A (no routing), and got speeds around 90 Mbps, which makes sense since the switchports on the router are 100 Mbps. Does anyone have any idea why routing would be so slow? We bought the router over a year ago, and we think it has been doing this since then, but we never actually tested it before. (Network B isn't really used much, so we didn't notice.) We were implementing a site-to-site VPN and noticed it was ridiculously slow, so we started testing basic routing performance. I have ruled out cabling and router CPU/memory utilization. Adtran looked at my config but didn't see anything wrong with it. (A netperf sketch follows this entry.)

    Read the article
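    For anyone reproducing the measurement, a typical netperf pair looks like the sketch below; the question does not give the exact invocation, and the address and port are placeholders:

        # On the receiving server (network B side)
        netserver -p 12865

        # On the sending server (network A side): 30-second TCP stream test across the router
        netperf -H 192.168.2.10 -p 12865 -l 30 -t TCP_STREAM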
