Search Results

Search found 9011 results on 361 pages for 'common'.

Page 258/361 | < Previous Page | 254 255 256 257 258 259 260 261 262 263 264 265  | Next Page >

  • Windows 7 Taskbar resets on every login

    - by Arne Mertz
    I like to reorder my taskbar a bit differently from the Windows 7 default. I use two "rows"; the lower one is for quick launch and other toolbars. This works perfectly as long as I don't log off from the computer. Every time I log in, Windows 7 has messed up/reset the toolbar positions, so I have to drag them back into position again and again, every morning. Fixing the taskbar positions doesn't help, and I tried to Google the problem, but it does not seem to be very common. Does anyone recognize this problem and have a solution?

    Update: This is not the AutoLogon bug; AutoLogon is off. We have Novell installed at our company, and it does not matter whether I log directly onto the Novell network or only onto the computer first and into Novell later.

    Update 2: I get the same issue when I log on without Novell, i.e. when I log on only to the computer. When I boot in safe mode, the taskbar looks essentially the same.

    Update 3: KB979155 says it is "not applicable to my system". Creating a new user is not an option since I don't have the admin privileges to do that - I have almost all other local admin privileges, though.

    Read the article

  • HAProxy crashes on all requests in 1.5-dev12

    - by Daniel Hough
    I'm having an issue where HAProxy crashes with no explanation when I switch from 1.4.12 to 1.5-dev12. The reason I'm switching is the SSL offloading. My config file doesn't have any errors; it's quite simple and it works well with 1.4. But for some reason, when I run it with 1.5-dev12, I see the logs noting that my two backends have been set up, and then when I hit one of the frontends I get an HTTP 400 in the browser and HAProxy is suddenly no longer running when I check.

    I understand that a common workaround for the lack of SSL support in HAProxy is to use Stud, and I may go with that since I need an SSL solution for my service, but before I delve into that world I thought I'd see if anybody has experienced the same problem and might know how to fix it. The server is Ubuntu 10.04 and I followed the make instructions on the Exceliance blog here.

    EDIT: On the advice of Kyle Brandt, I did a bit more investigation. I attached gdb to the haproxy process, and when the crash occurred this is what I got:

        Program received signal SIGSEGV, Segmentation fault.
        0x0804e5c2 in dequeue_all_listeners (list=0x9e1a418) at src/protocols.c:184
        184     list_for_each_entry_safe(listener, l_back, list, wait_queue) {

    P.S. HAProxy is awesome, so thank you Exceliance for providing us with something so useful :)
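
    For reference, a rough sketch of how a backtrace like the one above can be captured, assuming a debug build of HAProxy and that gdb is installed; the build flags and binary path are illustrative, not details from the post:

        # build with symbols so the backtrace is readable (flags are an assumption)
        make TARGET=linux26 USE_OPENSSL=1 CFLAGS="-O0 -g"
        # attach to the running process, let it continue, then reproduce the crash
        gdb /usr/local/sbin/haproxy $(pidof haproxy)
        (gdb) continue
        (gdb) bt full      # after the SIGSEGV, print the full backtrace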

    Read the article

  • Setting up apache vhost for Icinga

    - by DKNUCKLES
    It's been a while since I've worked with Apache, so please be kind - I'm also aware of this question, but it hasn't been much help to me. I'd like to set up a simple vhost with Apache for my Icinga instance. Icinga is up and running and I can access it at x.x.x.x/icinga, but I would like to be able to access it externally as well as internally. I have set up the /etc/hosts file, and the following is my barebones vhost statement in httpd.conf:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /usr/share/icinga
            ServerName icinga.domain.com
            ErrorLog logs/icinga.com-error_log
            CustomLog logs/dummy-host.example.com-access_log common
        </VirtualHost>

    I also have the following in my .htaccess file:

        <Directory>
            Allow From All
            Satisfy Any
        </Directory>

    An entry has been made for the instance in the Windows DNS server on my network, but when I try to access the site by URL I am greeted with an Internal Server Error. Reviewing /var/log/icinga.com-error_log I see the following entry:

        [Thu Dec 13 16:04:39 2012] [alert] [client 10.0.0.1] /usr/share/icinga/.htaccess: <Directory not allowed here

    Can someone help me spot the error of my ways?
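
    A hedged guess at a fix, not something from the original post: <Directory> blocks are only valid in the main server configuration, not in .htaccess files, and they also need a path argument. One approach is to move the access rules out of .htaccess and into the vhost configuration instead; a sketch, with an assumed config file path:

        cat >> /etc/httpd/conf.d/icinga-vhost.conf <<'EOF'
        <Directory /usr/share/icinga>
            Order allow,deny
            Allow from all
            Satisfy Any
        </Directory>
        EOF
        apachectl configtest && apachectl graceful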

    Read the article

  • Windows 7 using LLT for IPv6

    - by Seoman
    The question asked below is based on the specific implementations of each OS, not the RFC. Looking for a way to assign a fixed IP address to a host before it boots, I found that CentOS 6 works fine with no modifications and Windows 7 does not work at all. As defined in the RFC (enter link description here), there are 3 valid ways to generate a DUID:

        1. Link-layer address plus time
        2. Vendor-assigned unique ID based on Enterprise Number
        3. Link-layer address

    Looking at the CentOS box, which works fine, I can see the following autogenerated DUID:

        option dhcp6.client-id 0:1:0:1:19:60:25:f1:52:54:0:6b:b9:9e;

    and the MAC address for this host is:

        ifconfig eth1 | grep HWaddr
        eth1      Link encap:Ethernet  HWaddr 52:54:00:6B:B9:9E

    As you can see, the DUID contains the MAC address. I can assign a fixed IP address to this host by including an entry on my DHCP server similar to:

        host vm {
            hardware ethernet 52:54:00:6B:B9:9E;
            fixed-address6 2001:db8:0:1::200;
            if packet(0,1) = 1 {
                log(debug, "VM Request match!");
            }
        }

    and the CentOS 6 host gets its IP. On the Windows side, I faced a common problem explained in this other link (enter link description here). In summary, Windows 7 uses option 2 of the DUID generation, or a variation of it. The link explains how to move it to an LLT (link-layer plus time) DUID, but that is not working for me. If I modify the DUID to one that looks like the one generated on CentOS (but with the right MAC), it works as expected.

    Question 1: How can I change the DUID generation for Windows 7 so that it is based on the MAC address, as CentOS 6 does? Thanks

    Read the article

  • Setting up Virtual Host in Fedora Core 15 using apache

    - by Roland
    I'm trying to set up a couple of virtual host files on my localhost PC running Fedora Core 15. I got this working, but only one virtual host site works, and if I type in 127.0.0.1/test/testApp.php, which is not related to the virtual host site, I get redirected to the virtual host site. Here's what I did. I created a new folder called virtualhosts in /etc/httpd/ where all my host files are stored in the format site.conf. In /etc/httpd/conf/httpd.conf I enabled NameVirtualHost *:80 and included the host files at the bottom of the config file like this:

        Include virtualhosts/*.conf

    In /etc/hosts I added the line:

        127.0.0.1 website

    Now when I run sudo httpd -t I get Syntax OK. I restart Apache and the virtual host works, but as soon as I add other hosts and only use 127.0.0.1 as above, it still links to the original host. Am I doing something wrong here, or have I left something out? An example of my virtual host file looks like this:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /var/www/html/website/
            ServerName website
            ServerAlias website
            ErrorLog logs/dev-error_log
            CustomLog logs/dev-access_log common
            Alias /blog /var/www/html/blog/
            <Directory /var/www/html/website/>
                Options FollowSymLinks
                Allow Override All
                Order allow,deny
                allow from all
            </Directory>
            #php_value error_reporting E_ALL & ~E_NOTICE & ~E_DEPRECATED
            php_flag display_errors On
            php_value date.timezone Europe/London
        </VirtualHost>
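
    A hedged guess at what may be missing, not something stated in the post: with NameVirtualHost, any request whose Host header matches no ServerName/ServerAlias is served by the first vhost that was loaded, so plain http://127.0.0.1/ requests fall into the "website" vhost. A sketch of a catch-all default vhost that preserves the old document root; the 000- prefix makes it load first under the wildcard Include, and the paths are assumptions:

        cat > /etc/httpd/virtualhosts/000-default.conf <<'EOF'
        <VirtualHost *:80>
            ServerName localhost
            DocumentRoot /var/www/html
        </VirtualHost>
        EOF
        sudo httpd -t && sudo service httpd restart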

    Read the article

  • download and process a file by ftp at set intervals, with error handling, rescheduling and status messages

    - by compound eye
    I want to download a data file from a remote FTP server to my machine at regular intervals. Once the file is downloaded I want to call another script which will process the file. My development machine is Mac OS X; the eventual deployment environment is Linux. What would be the stock-standard way to automate this?

    I know I can use cron to schedule curl to download the file and to run a script that processes it at regular intervals, and I know I could write a slightly more complex script or an application that would do this and add error handling, rescheduling and sending status emails. But one of my requirements for this project is to write as little custom code as possible; instead I should try to use standard, tried-and-true existing tools, and if I do have to write code, to write the most straightforward code possible. The reason for this is that the code will potentially be installed on a large number of machines, all of which will need to be tweaked, customised and maintained by different people long after I am gone from the project, so the intention is to use well-documented, well-supported tools as much as possible.

    This seems like such a common task that there must be tools and scripts all over the internet, written by people who have carefully considered everything that could possibly go wrong when you need to download and process a file from a remote server at regular intervals, with error handling, rescheduling and status messages. Is that what Expect is for? What would you recommend? (The system will be downloading weather prediction data every six hours, so that the system can prepare in the event of bad weather warnings.)
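
    A minimal sketch of the cron-plus-curl approach mentioned above; the host name, file paths, mail address and the processing script are all assumptions, not details from the post:

        #!/bin/sh
        # /usr/local/bin/fetch-forecast.sh - download one file, then hand it to the processor
        set -e
        URL="ftp://ftp.example.com/pub/forecast.grib"
        DEST="/var/data/forecast.grib"

        if curl --silent --show-error --fail --output "$DEST.tmp" "$URL"; then
            mv "$DEST.tmp" "$DEST"
            /usr/local/bin/process-forecast.sh "$DEST"
        else
            # crude status message; cron simply tries again on the next cycle
            echo "$(date): forecast download failed" | mail -s "forecast fetch failed" admin@example.com
            exit 1
        fi

    A crontab entry such as 0 */6 * * * /usr/local/bin/fetch-forecast.sh would then run it every six hours.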

    Read the article

  • Is Ubuntu a bad distro for a standalone MySQL database server?

    - by DhruvPathak
    I read an article here: http://www.mysqlperformanceblog.com/2011/12/08/which-linux-distribution-for-mysql-server/

        "On the other end there are Debian and Ubuntu. Both use tool called dpkg for package management. There isn’t a month that I log in to a system based on either distribution where there are no issues with packages consistency. Unfinished installations, unresolved conflicts are so common that it’s just beyond simple negligence. The packaging system is just not robust enough. Another problem is that one broken package may block you from installing or uninstalling anything else. Imagine that someone left system in such shape, you prepared for downtime, stopped MySQL and… error – text editor has not been properly installed, so you cannot upgrade MySQL either until the problem is fixed. In a stressful situation when downtime clock ticks – annoying at best"

    We prefer Ubuntu Server because of familiarity, and because Ubuntu is also our development environment. Questions: Is Ubuntu commonly used in production for a MySQL database server? Is it ever worth the trouble to have one distro, e.g. Ubuntu, on the web server and another, say Red Hat, on the database server? Or is a homogeneous server pool a better choice?

    Read the article

  • pfSense routing between two routers with shared network

    - by JohnCC
    I have a network set up using two pfSense routers arranged like this:

        DMZ1   WAN1        WAN2   DMZ2
         |      |            |     |
         |      |            |     |
         \____ PF1          PF2 ____/
                |            |
                |            |
                \___TRUSTED___/

    Each pfSense router has its own separate WAN connection and a separate DMZ network attached to it. They share a common TRUSTED LAN between them. The machines on the trusted network have PF1 as their default gateway. PF1 has a static route defined to DMZ2 via PF2, and PF2 has a static route to DMZ1 via PF1. There is NAT to the WAN, but the internal networks (DMZ1/2 and TRUSTED) use different RFC1918 subnets.

    I inherited this arrangement, and it all used to work fine. I made a config change to PF1 (relating to multicast), and machines on DMZ2 suddenly could not talk to TRUSTED. I rolled the change back, but the problem persisted. What I guess you'd hope would happen is that TCP packets would go DMZ2 - PF2 - TRUSTED and on return TRUSTED - PF1 - PF2 - DMZ2. That's the only way I can see it would have worked. However, PF1 drops the returning packets; I've verified this using tcpdump.

    I've worked around this by adding static routes to DMZ2 via PF2 on the servers on TRUSTED, but some devices on there do not support static routes, so this is not ideal. Is there a way to make this arrangement work decently, or is the design inherently flawed? Thanks!

    Read the article

  • Utilities for finding x/y screen coordinates

    - by slothbear
    I have an application that needs the user to enter the x and y location of a couple of items on the screen. [Yes, this is crap and I'll replace it, but for now...] On Mac OS X, I'm using Snapz Pro X. When I choose to take a snap of a selection, the interface displays the mouse cursor location. This is OK for my personal use, but I can't ask users to buy a $69 program for this function. I thought I had a solution with the built-in program "Grab", but it reports the coordinates from the bottom left, and I need it from the top left. I don't have access to Windows; not sure what to use there. I've not even been able to come up with good search terms since mouse and coordinates and x/y are so common. Ideas are welcome there. Extra credit: same x/y finder for Linux. A Java program would be OK too, since the main application is in Java.
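
    For the Linux "extra credit" part, a small sketch: xdotool (assuming it is installed) prints the pointer position measured from the top-left corner, which matches the coordinate origin asked for above:

        xdotool getmouselocation
        # prints something like: x:412 y:217 screen:0 window:58720263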

    Read the article

  • What to do with old laptop screens?

    - by Lord Torgamus
    This question is inspired by another SU question I came across earlier today: What to do with old hard drives? It made me think about two long-dead laptops I have with perfectly good screens still inside. One is a Dell Inspiron 5100 and the other is an Averatec E1200, but responses need not be geared towards those particular models' screens.

    Rules, based heavily on the original question's. Objectives and suggestions to keep in mind when you post an answer:

    - Should showcase your geekiness, be plain ol' fun, serve a social purpose or benefit the community.
    - Your answer need not be limited to only one screen. For a really good answer, I'll go out and buy additional leftover screens.
    - Your answer need not be limited to one project per screen.
    - If additional accessories need be purchased, make sure they are common. Don't tell me to get a moon rock or something.
    - The projects you suggest should serve a useful purpose; art is nice, but functional art is way better.

    Thanks in advance, folks.

    EDIT: Found another related question: Fun projects to do with an old 17" LCD monitor.

    EDIT 2: I, for one, am enjoying the new outpouring of creativity here. Best fifty bucks... I mean, rep points... I ever spent.

    EDIT 3: That does it. At the end of the week, there was a tie for most votes between the accepted answer and the game platform answer. The game platform answer was cooler, but less reasonable as a project to actually do; in other words, it was more moon rocky. Unfortunately, I think fencepost had the best comment on the topic, which is that displays on their own have no good interface. Thanks for playing, everyone!

    Read the article

  • MongoDB and GridFS: what are the best storage options in the range of 1 TB?

    - by Nerian
    We are going to launch a service that will require between 1 and 2 GB of file storage per paid user. I am going to use GridFS for storing files. I am pondering the different options for storing the database, but since I am inexperienced at deployment and this is my first time with MongoDB, I need your experience.

    Criteria:

    - I want to spend my time developing my core business, that is, my own application. I am a Ruby on Rails developer and I do not like to mess with server configuration. Hence, I would like a fully managed hosting solution, but I would like to know about any other option if you think it is worth it.
    - It should be able to scale. Cloud style. Pay as you go.
    - The lower the price, the better.

    So far I know of these services:

        https://mongohq.com/pricing
        https://mongomachine.com/pricing
        https://mongolab.com/about/pricing/
        http://cloudcontrol.com/add-ons/mongodb/

    They seem to be OK for common needs, that is, no file storage. But I am going to use GridFS, so the size matters. These services seem to scale, in price, quite poorly:

    - MongoHQ: The largest plan's max storage is 20 GB. That seems like very little storage for GridFS.
    - MongoMachine: Flat price, $2.5 per GB. I didn't find the limit. Seems like a good price compared to the others.
    - MongoLab: 3.984 GB max, which I don't think I will hit, so perfect. $8 per GB, quite costly.
    - CloudControl: The largest plan is 20 GB. The custom service starts at 250€ plus some unspecified charge per GB.

    What is your experience with these services? Any downtimes? Other possibilities?

    Read the article

  • Windows drive letters A: and B:

    - by Workshop Alex
    This is a question that just popped into my mind and I can't help but wonder why it's still common for a Windows installation to be installed on C: with all other drive letters going up from D: to Z:. In the early MS-DOS times, all we had were floppy disks and they were at A:. When the 3.5 inch floppy started to replace the 5.25 floppy, many people had an A: and B: drive. Then the hard disk became popular and the hard disk was at C: because A: and B: were taken. Then the 5.25 floppy disappeared and most computers had a gap between A: and C:. Nowadays, the 3.5 floppy is just too outdated so A: disappeared too. All disks now start at C:. Yeah, I know I can assign my own drive letters and I've done so with my data disks. My installation disk will just continue to be stuck at C: and I don't really mind. I have no problems with drive letters. But why do the new Windows versions just continue to install themselves by default on C: instead of assigning the letter A: to the boot hard disk?

    Read the article

  • Configure New Server for .htaccess

    - by Phil T
    I have a new LAMP CentOS 5 server I am setting up, trying to copy the configuration from another web server I have. I am stuck with what I think is a mod_rewrite problem. If I go to http://old-server.com/any_page_name.php it correctly routes through some handling code in index.php and shows me a graceful "Page Cannot Be Displayed" message. But if I go to http://new-server.com/any_page_name.php I get an ugly Apache 404 Not Found error message.

    I looked in both httpd.conf files and they both have only one reference to mod_rewrite:

        LoadModule rewrite_module modules/mod_rewrite.so

    So it seems like that should be fine. At the bottom of httpd.conf I have this code:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /var/www/html
            ServerName new-server.com
            ErrorLog logs/new-server.com-error_log
            CustomLog logs/new-server.com-access_log common
        </VirtualHost>

    Then in the root of /var/www/html I have the exact same .htaccess file, which looks like this:

        RewriteEngine on
        Options +FollowSymlinks
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . index.php [L]
        ErrorDocument 404 /page-unavailable/
        <files ~ "\.tpl$">
            order deny,allow
            allow from none
            deny from all
        </files>

    So I don't see why the page load at old-server.com works fine while new-server.com doesn't route through index.php like I want it to. Thanks.
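
    One common cause, offered here as an assumption rather than something confirmed in the post: the stock CentOS httpd.conf ships the /var/www/html <Directory> block with AllowOverride None, which makes Apache ignore the .htaccess entirely, so the RewriteRule never runs and the raw 404 is returned. A sketch of the change (drop-in file name is hypothetical):

        cat > /etc/httpd/conf.d/allow-htaccess.conf <<'EOF'
        <Directory "/var/www/html">
            AllowOverride All
        </Directory>
        EOF
        httpd -t && service httpd graceful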

    Read the article

  • Google Chrome won't install via IE9/Windows7

    - by purir
    Using IE9, I've tried installing Google Chrome on Windows 7 from this URL: http://www.google.com/chrome/eula.html?hl=en-GB&platform=win but I get the following error (apologies for uncouthly dumping this nonsense...). Any ideas dearly appreciated!

        PLATFORM VERSION INFO
            Windows : 6.1.7601.65536 (Win32NT)
            Common Language Runtime : 4.0.30319.235
            System.Deployment.dll : 4.0.30319.1 (RTMRel.030319-0100)
            clr.dll : 4.0.30319.235 (RTMGDR.030319-2300)
            dfdll.dll : 4.0.30319.1 (RTMRel.030319-0100)
            dfshim.dll : 4.0.31106.0 (Main.031106-0000)

        SOURCES
            Deployment url : _http://dl.google.com/update2/1.3.21.57/GoogleInstaller_en-GB.application?appguid%3D%7B8A69D345-D564-463C-AFF1-A69D9E530F96%7D%26iid%3D%7B26C55C3A-B26A-0484-FEDD-78443D269DA1%7D%26lang%3Den-GB%26browser%3D2%26usagestats%3D0%26appname%3DGoogle%2520Chrome%26needsadmin%3Dfalse%26installdataindex%3Ddefaultbrowser

        ERROR SUMMARY
            Below is a summary of the errors, details of these errors are listed later in the log.
            * Activation of _http://dl.google.com/update2/1.3.21.57/GoogleInstaller_en-GB.application?appguid%3D%7B8A69D345-D564-463C-AFF1-A69D9E530F96%7D%26iid%3D%7B26C55C3A-B26A-0484-FEDD-78443D269DA1%7D%26lang%3Den-GB%26browser%3D2%26usagestats%3D0%26appname%3DGoogle%2520Chrome%26needsadmin%3Dfalse%26installdataindex%3Ddefaultbrowser resulted in exception. Following failure messages were detected:
                + The system cannot find the file specified. (Exception from HRESULT: 0x80070002)

        COMPONENT STORE TRANSACTION FAILURE SUMMARY
            No transaction error was detected.

        WARNINGS
            There were no warnings during this operation.

        OPERATION PROGRESS STATUS
            * [25/06/2011 11:41:04] : Activation of _http://dl.google.com/update2/1.3.21.57/GoogleInstaller_en-GB.application?appguid%3D%7B8A69D345-D564-463C-AFF1-A69D9E530F96%7D%26iid%3D%7B26C55C3A-B26A-0484-FEDD-78443D269DA1%7D%26lang%3Den-GB%26browser%3D2%26usagestats%3D0%26appname%3DGoogle%2520Chrome%26needsadmin%3Dfalse%26installdataindex%3Ddefaultbrowser has started.

        ERROR DETAILS
            Following errors were detected during this operation.
            * [25/06/2011 11:41:04] System.IO.FileNotFoundException
                - The system cannot find the file specified. (Exception from HRESULT: 0x80070002)
                - Source: System.Deployment
                - Stack trace:
                    at System.Deployment.Internal.Isolation.IsolationInterop.GetUserStore(UInt32 Flags, IntPtr hToken, Guid& riid)
                    at System.Deployment.Application.ComponentStore..ctor(ComponentStoreType storeType, SubscriptionStore subStore)
                    at System.Deployment.Application.SubscriptionStore..ctor(String deployPath, String tempPath, ComponentStoreType storeType)
                    at System.Deployment.Application.SubscriptionStore.get_CurrentUser()
                    at System.Deployment.Application.ApplicationActivator.PerformDeploymentActivation(Uri activationUri, Boolean isShortcut, String textualSubId, String deploymentProviderUrlFromExtension, BrowserSettings browserSettings, String& errorPageUrl)
                    at System.Deployment.Application.ApplicationActivator.ActivateDeploymentWorker(Object state)

        COMPONENT STORE TRANSACTION DETAILS
            No transaction information is available.

    Read the article

  • How to make Firefox file associations consistent with Ubuntu file associations?

    - by wbharding
    This seems to be a pretty commonly Googled question, but one for which there are no answers:

        http://www.linuxquestions.org/questions/linux-software-2/firefox-download-mime-types-378902
        http://www.birkit.com/content/kubuntu-linux/internet/firefox/fix-file-associations-in-firefox.html

    those being some of the many links out there. The gist of what I want to accomplish is to have Firefox understand the file associations of what I download without me having to manually map all of them myself. GNOME knows the file extensions, so I would have expected that Firefox could just use the already-known file mappings there to open the right thing (as I presume Chrome does). But it doesn't - at least not for me, using Firefox 4, and not by default. When I click on a downloaded file right now, Firefox always has to ask me what application should be used to open it.

    A handful of Google results tell me that I can reassociate my file extensions by deleting ~/.mozilla/firefox/[profile name]/mimeTypes.rdf, but while deleting that file does in fact result in a new mimeTypes file being generated, the new mimeTypes is just as barren as the old one had been. Based on the number of unanswered questions on the Googlesphere, I know this is a very common problem for Ubuntu users, but it seems to be one for which nobody has chimed in with a good solution. Maybe Superuser can finally be the panacea for us all?

    Read the article

  • suPHP not working

    - by amarc
    OS: Ubuntu 10.04

    /etc/suphp/suphp.conf:

        [global]
        ;Path to logfile
        logfile=/var/log/suphp/suphp.log
        ;Loglevel
        loglevel=info
        ;User Apache is running as
        webserver_user=www-data
        ;Path all scripts have to be in
        docroot=/home
        ;Path to chroot() to before executing script
        ;chroot=/mychroot
        ; Security options
        allow_file_group_writeable=false
        allow_file_others_writeable=false
        allow_directory_group_writeable=false
        allow_directory_others_writeable=false
        ;Check wheter script is within DOCUMENT_ROOT
        check_vhost_docroot=true
        ;Send minor error messages to browser
        errors_to_browser=false
        ;PATH environment variable
        env_path=/bin:/usr/bin
        ;Umask to set, specify in octal notation
        umask=0077
        ; Minimum UID
        min_uid=100
        ; Minimum GID
        min_gid=100

        [handlers]
        ;Handler for php-scripts
        application/x-httpd-suphp="php:/usr/bin/php-cgi"
        ;Handler for CGI-scripts
        x-suphp-cgi="execute:!self"

    A vhost in sites-enabled:

        NameVirtualHost *:8080
        <VirtualHost *:8080>
            ServerAdmin ...
            ServerName ...
            ServerAlias ...
            AddType application/x-httpd-php .php
            AddHandler application/x-httpd-php .php
            suPHP_Engine on
            suPHP_UserGroup user user
            suPHP_ConfigPath "/home/user/etc"
            suPHP_PHPPath /usr/bin
            DocumentRoot /home/user/web/site.com/
            ErrorLog /var/log/apache2/site.com-error_log
            CustomLog /var/log/apache2/site.com-access_log common
            <Directory /home/user/web/site.com/>
                Order Deny,Allow
                Allow from all
                Options +Indexes
            </Directory>
        </VirtualHost>

    But when I created /home/user/web/id.php with nano and pasted <?php system('id'); ?> into it, the result I get is:

        uid=33(www-data) gid=33(www-data) groups=33(www-data)

    I have no idea what to do, so I was hoping the community could help. Thank you.
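
    A guess at the cause, not a confirmed fix: uid 33 (www-data) means mod_php is still executing the script, so the suPHP handler never runs. A sketch of handing .php over to suPHP on Ubuntu 10.04; the directive names follow mod_suphp, and the exact vhost lines are assumptions:

        sudo a2dismod php5        # stop mod_php from claiming .php files
        sudo a2enmod suphp        # make sure mod_suphp is loaded
        sudo service apache2 restart
        # and in the vhost, map .php to the suPHP type instead of mod_php:
        #   AddType application/x-httpd-suphp .php
        #   suPHP_AddHandler application/x-httpd-suphp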

    Read the article

  • Finding the current user authenticated by basic auth (Apache)

    - by jtd
    When you log in through a basic auth page, is the username you authenticated as stored anywhere (on the server or the client machine), maybe in an environment variable?

    Background: I have a common web administration page for an e-mail server and I'd like to know who is doing what. When a user successfully logs in via basic auth, I want to be able to identify them and log their actions, so that each time a request is submitted I can write to a log file. The basic format would be:

        $username ran a $function against $useraccount

    So if a user changed someone's permissions, e.g.:

        Admin-Bob ran a permission change against User-Scott

    That way, if errors occur, I can easily trace back in the log file which actions led to the cause. I tried checking the %ENV hash to no avail. Any ideas? I don't really want to get into PHP-like sessions, because that would mean scrapping my basic auth, which gives me a fine degree of control already. If I had to code something with sessions, I'd need to implement a system to block users after a maximum number of tries and so on, which I don't really want to code. I think this is better geared towards Server Fault because it pertains to Apache more so than to any programming language; sessions can be done in a myriad of languages.
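
    One avenue worth sketching here as an addition, not something from the post: for CGI programs Apache exposes the basic-auth login as the REMOTE_USER environment variable, and the same value is what the %u token in a LogFormat/CustomLog line writes to the access log. A tiny test script, with an assumed cgi-bin path:

        cat > /var/www/cgi-bin/whoami.cgi <<'EOF'
        #!/bin/sh
        echo "Content-Type: text/plain"
        echo
        echo "Authenticated as: $REMOTE_USER"
        EOF
        chmod +x /var/www/cgi-bin/whoami.cgi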

    Read the article

  • Separate domains vs. one domain with alias-domains

    - by Quasdunk
    I have tried to ask this question a few days ago, but I'm afraid it was not clear enough, so here's another try. I have set up a LAMP server using ISPConfig 3 for the administration. PHP is running over FastCGI. I have several domains, like my_site.com, my_site.net and my_site.org, but they all point to the same application/website. Each domain has its own web root folder and runs under its own user. The application itself is in a common directory which is owned by another user, like so:

        # path to my_application (owned by web1)
        /var/www/clients/client1/web1/web/my_application/

        # sym-link to my_application from the my_site.com web root (owned by web5)
        /var/www/my_site.com/web -> /var/www/clients/client1/web1/web/

        # sym-link to my_application from my_site.net (owned by web4)
        /var/www/my_site.net/web -> /var/www/clients/client1/web1/web/

    With a setup like this I have encountered a few problems concerning permissions when performing filesystem operations from PHP. For instance, if the application is called via my_site.com, the user web5 tries to write something to the application folder. But the application folder is owned by the user web1, so web5 is not allowed to write there. As far as I understand, this is how FastCGI works.

    After some research and asking a few people, the solution seems to be to break it all down to one domain (e.g. my_site.com) and define the other domains (my_site.org, my_site.net) as aliases for this one domain. That way, there would be only one user who has all the necessary permissions. However, this would mean we'd have to buy a multi-domain SSL certificate - but we already have an SSL certificate for each domain. We were able to use them with our previous provider (managed hosting), where we also had only one web directory and multiple domains. So if this was possible, I wonder: is putting all the domains together into one vhost with one main domain and several alias domains the right approach in this case? Or have I misunderstood something?

    Read the article

  • Redirection of outbound UDP port.

    - by pboin
    For my residential service, I changed ISPs to Zoom/Armstrong. Just after that, my NTP daemons stopped working. I dug deep and diagnosed the problem: only unprivileged ports are getting out. When I run ntpdate, for example, I go out on a high, unprivileged port and get a response on UDP 123. That's fine. The ntpd daemon, though, expects to go out on 123 and get its reply there as well. This must be a common problem, because it's directly addressed in the NTP troubleshooting guide. Just to see what would happen, I wrote a detailed email to the general support address at Armstrong. They replied almost immediately with a complete technical answer! They have everything <1024 blocked, except for a few ports to support outbound VPN.

    So, the question: can I use iptables to essentially rewrite my outbound UDP 123 up to 2123 or something like that? If I do, does there need to be a corresponding 2123-to-123 rule to translate the reply? This seems like NAT, but with ports, not addresses. I tried, but can't seem to get iptables to do what I want. I'm not sure if it's my lack of skill, or if I'm trying the wrong solution. True, I could run ntpdate from cron, but that loses all of the adjustment smarts of NTP.
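
    A sketch of the port-rewriting idea, with the outbound interface name as an assumption: source-NAT locally generated NTP packets so they leave on an unprivileged port. Connection tracking translates the replies back to port 123 automatically, so no second rule should be needed:

        iptables -t nat -A POSTROUTING -o eth0 -p udp --sport 123 -j MASQUERADE --to-ports 2123
        # or, with a static public address:
        # iptables -t nat -A POSTROUTING -o eth0 -p udp --sport 123 -j SNAT --to-source 203.0.113.10:2123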

    Read the article

  • Why does pulling the power cord then pressing the power button fix a non-booting PC?

    - by sidewaysmilk
    I've been working at this institution for about 6 years. One thing that I've always found curious is that sometimes - especially after a power outage - we find a PC that won't boot when the power button is pressed. Usually the fans will spin up, but it won't POST. Our solution is to pull the power cord, press the power button with the computer unplugged, then plug it in and turn it on. It seems more common with Gateway brand PCs than with the Dells or HPs that we have around.

    Does anybody know what pressing the power button does when the computer is unplugged? I have some vague notion that closing the power button circuit allows some capacitors to discharge or something, but I'd like a firmer answer to offer my users when they ask me what I'm doing. My best guess as to why the fans can spin but it can't POST is that the BIOS is in some non-functional state. I don't know how the BIOS stores state, but my best guess is that there is some residual garbage in its registers or something, like the stack pointer isn't starting at 0, maybe?

    Read the article

  • How can I create a separate toolbar from the Task Bar?

    - by Iszi
    In Windows XP, you could separate toolbars from the taskbar by dragging them to the desktop. They could then be left lying about anywhere on your screen or, my preferred option, docked to any side of the screen. I found this particularly useful for keeping a handy list of common phone numbers quickly accessible: I'd create a new toolbar pointing to a custom folder and put a bunch of dead shortcuts in the folder that had names and numbers as their file names. I'd then dock the toolbar to the left side, set it to auto-hide and always on top (options which could be set separately from the taskbar as well), and it would be readily available no matter what else I was doing on my system.

    However, on my Windows 7 system, I seem unable to perform the crucial step of pulling the new toolbar off of the taskbar. This is of course with the taskbar "unlocked" so that I can move all my toolbars around. Is there something I'm missing here, or is this a feature that's been disabled in Windows 7? Is there any way to re-enable it, or otherwise achieve similar functionality? I'd rather be able to do this without additional software, if possible.

    Read the article

  • NFS on top of GFS2 - does it work?

    - by Matthew
    We're currently using a NoSQL derivative called Splunk to receive our data. The software supports something called "search head pooling", in which the job-dispatching engine is housed on several servers which share a common storage point. Originally our intention was to use a clustered filesystem like GFS2 because of its low latency, stability, and ease of setup. We set up GFS2 and it's working with no issues. However, when trying to run the software, it tries to create lock files and do a bunch of other things that their support team can't quite explain. The ultimate feedback from them was that they only support NFS.

    Our network administration team heavily frowns on NFS (lack of stability, file-locking issues, etc.). So I was thinking about the possibility of setting up NFS on each server in the cluster to act as a wedge layer between the GFS2 filesystem and the software. Basically, configure each server to export the GFS2 filesystem's mountpoint via NFS, and then tell each server to connect to that NFS share. That way we aren't introducing any single point of failure should a dedicated NFS server go down, but the vendor gets their "required" NFS share. I'm just brainstorming ways around this, so please tear the idea apart :)
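
    A rough sketch of the wedge layer described above, with the mountpoints as assumptions; whether file locking behaves sanely when a cluster filesystem is re-exported this way is exactly the part to test:

        # on each cluster node: export the GFS2 mountpoint and mount it back over NFS
        echo "/mnt/gfs2/splunk  localhost(rw,sync,no_root_squash)" >> /etc/exports
        exportfs -ra
        mount -t nfs localhost:/mnt/gfs2/splunk /opt/splunk-share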

    Read the article

  • What is the risk of introducing non standard image machines to a corporate environment

    - by Troy Hunt
    I’m after some feedback from those in the managed desktop or network security space on the risks of introducing machines that are not built on a standard desktop image into a large corporate environment. This particular context relates to the standard corporate image (32 bit Win XP) in a large multi-national not being suitable for a particular segment of users. In short, I’m looking at what hurdles we might come across by proposing the introduction of machines which are built and maintained by a handful of software developers and not based on the corporate desktop image (proposing 64 bit Win 7). I suspect the barriers are primarily around virus definition updates, the rollout of service packs and patches and the compatibility of existing applications with the newer OS. In terms of viruses and software updates, if machines were using common virus protection software with automated updates and using Windows Update for service packs and patches, is there still a viable risk to the corporate environment? For that matter, are large corporate environments normally vulnerable to the introduction of a machine not based on a standard image? I’m trying to get my head around how real the risk of infection and other adverse events are from machines being plugged into the network. There are multiple scenarios outside of just the example above where this might happen (i.e. a vendor plugging in a machine for internet access during a presentation). Would a large corporate network normally be sufficiently hardened against such innocuous activity? I appreciate the theory as to why policies such as standard desktop images exist, I’m just interested in the actual, practical risk and how much a network should be protected by means other than what is managed on individual PCs.

    Read the article

  • INSERT DELAYED on locked tables blocks PHP processes from continuing

    - by sw0x2A
    Our webservers write some tracking information into a MySQL database (using INSERT DELAYED into a MyISAM table). When a huge SELECT query is executed on this table, or when it is locked for another reason, the webserver processes (doing the INSERT DELAYED) end up waiting for the database, and in some cases the MaxServer limit is reached in Apache, so it stops serving requests. We use INSERT DELAYED because:

        "The DELAYED option for the INSERT statement is a MySQL extension to standard SQL that is very useful if you have clients that cannot or need not wait for the INSERT to complete. This is a common situation when you use MySQL for logging and you also periodically run SELECT and UPDATE statements that take a long time to complete."

    (Quote from the MySQL documentation.)

    I am wondering why the Apache processes are waiting for the INSERT DELAYED to finish, and what I can do to just send the data and forget about it. (Since this is logging data, I do not care if we lose some entries.) Even when the table is locked, the PHP script should just go on and not wait for an answer from MySQL. (We do not want to set up master-slave replication for this table, and we are thinking about moving this data to some NoSQL database. But for now I would like to know why INSERT DELAYED is not working as expected.)

    Read the article

  • Windows XP usb drivers reinstalling upon reboot

    - by iWerner
    We have a Windows XP SP3 laptop (an Acer TravelMate 7320) to which we connect a variety of astronomy equipment (a telescope, its mount, some cameras and others), all of which connects through USB. When we plug in these devices, Windows tells us that it detects the hardware and installs the driver. All of these devices then function correctly using the software that came from the vendor (unfortunately, one of the vendors does not support Vista 64, which is why we're using our XP laptop).

    However, when we reboot the computer we experience a variety of symptoms: Windows reports that it found new hardware for some of the devices and tries to reinstall their drivers, and some of the other devices need to be unplugged and plugged in again before they are detected by the operating system, in which case Windows still tries to reinstall their drivers. It is as if Windows does not remember that it has already installed the drivers. Is this a common problem on Windows XP? If so, what can be done about it? Should we rather be looking at the laptop's firmware and drivers? We've looked into updating the drivers for the chipset, but this did not solve the problem. Thank you in advance.

    Read the article
