Search Results

Search found 54052 results on 2163 pages for 'run configuration'.


  • How can I check the location of perl and CPAN files?

    - by Rob
    I constantly have to set up new servers for my employer, all for the same purpose, so they all have to be set up in exactly the same way. I've created a PHP script that I run from my own box to automatically send over all the relevant files, compile everything, run updates, and everything else. However, these brand new servers come with perl preinstalled, which is fine, but they have it installed in different locations. This makes it a pain to copy over Config.pm for CPAN without going in and finding the location manually. Is there some command I'm unaware of that will hunt down the precise location? If it helps, the servers are usually CentOS 5.
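
    A few standard commands will report where the interpreter and its module directories live, which should make the Config.pm step scriptable; a minimal sketch, assuming nothing beyond a stock perl install:

        # Location of the perl binary itself
        which perl

        # Install prefix and module directories, from perl's own configuration
        perl -MConfig -e 'print "$Config{prefix}\n$Config{privlib}\n$Config{sitelib}\n"'

        # Full path of Config.pm exactly as this perl loads it
        perl -MConfig -e 'print "$INC{q(Config.pm)}\n"'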


  • Doesn't DNS diversity negatively affect performance? Why/how?

    - by cnst
    If you look at the press releases of the various orgs that run the internet, you can see them praise the fact that they now run root server X in city Y, as if that magically makes everyone in city Y get all the relevant resolutions from the local server X, instead of going 200ms across oceans and continents for resolutions. Similarly, the zones of some geographical domain names, like .ru, are mirrored not just within Europe but also, for example, in Hong Kong, which is about 300ms away from central Europe, since the traffic often crosses two oceans each way. Doesn't all of this negatively affect DNS performance? Isn't it more of a liability to have a diverse pool of geodispersed authoritative servers, especially if your target audience is geographically concentrated? Perhaps a better question is: are there any DNS resolvers that use something better than naive round-robin for choosing which authoritative server to contact?
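
    One way to see the effect from any given vantage point is to time a query against each authoritative server individually; a rough sketch with dig, using the .ru zone from the question as the example:

        # List the zone's authoritative servers, then time a query to each;
        # the "Query time" line shows the round trip per server
        for ns in $(dig +short NS ru.); do
            echo "== $ns"
            dig @"$ns" ru. SOA +noall +stats | grep "Query time"
        done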


  • nVidia performance with newer X and newer driver abysmal with Compiz

    - by Nakedible
    I recently upgraded Debian to Xorg 2.9.4 and installed nvidia-glx from experimental, version 260.19.21. This was somewhat of an uphill battle, as the dependencies for the experimental nvidia-glx package are still somewhat broken, but I got it to work without forcing the installation of any packages and without modifying them. However, since the upgrade, Compiz performance has been abysmal. I am using the desktop wall plugin, and switching viewports is really slow - it takes a few seconds for each switch. In addition, every effect Compiz performs, such as zoom animations for icons when launching applications, takes seconds. The viewport switching speed varies with the number of windows on that virtual screen - empty screens switch at almost normal speed, single browser windows work almost decently, but just 4 rxvt terminals slow the switches down to a crawl. My Compiz configuration should be pretty basic. Xorg is likewise configured without anything special - the only "custom" configuration is forcing the driver name to "nvidia". I've fiddled with nvidia-settings and compizconfig, trying different VSync settings, but none of those helped. My graphics card is: NVIDIA GPU NVS 3100M (GT218) at PCI:1:0:0 (GPU-0). This is a laptop GPU from the GeForce GTX 200 series. Graphics card performance should naturally be no problem. EDIT: In the end, nothing really worked, and I got really annoyed with the state of Compiz and its support in Debian. Many nVidia driver revisions have passed and I am using Gnome 3 now, so I am accepting the best answer to this question even though the issue was not resolved.


  • Commands not working in Windows 7 32-bit command prompt

    - by Precious Tijesunimi
    I have an HP laptop running 32-bit Windows 7 Home Premium. My command prompt doesn't run lots of commands like help, shutdown, ipconfig, ping, etc. I get a message like: 'help' is not recognized as an internal or external command, operable program or batch file. Only built-in commands like cd and dir work. I noticed that whenever I navigate to C:\Windows\System32 first, the commands work. But I need to run commands like java on a file that is on the desktop, not in the System32 folder. How can I fix this?
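
    Those symptoms point at a broken PATH variable: cd and dir are built into cmd.exe itself, while help, ipconfig and friends are separate programs in System32, so they are only found when that directory is on the search path. A sketch of checking and repairing it (the directories listed are the usual Windows 7 defaults; append to, rather than replace, whatever else is in your PATH):

        rem Show what the current search path actually contains
        echo %PATH%

        rem Restore the system directories for this session only
        set PATH=%PATH%;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem

    For a permanent fix, add the same directories under Control Panel > System > Advanced system settings > Environment Variables.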


  • Why do we still have to use drive letters to identify file systems?

    - by Charles E. Grant
    A friend has run into a problem where they installed Windows 7 from an external drive, and the internal boot drive is now assigned to H:. Theoretically this shouldn't cause problems, because there are programming interfaces for getting the drive letter of the system drive. In practice, though, there are quite a few programs that assume C: is the only possible location for the system directories, and they refuse to run with the system directories on H:. That's not Microsoft's fault, but it's a pain nonetheless. The general consensus seems to be that a reinstall, setting the internal boot drive to C:, is the only way to fix these problems. UNIX-like systems display all file systems in a single unified directory tree and mostly seem to avoid problems like this. Is it possible to configure a Windows system without reference to drive letters, or does the importance of backwards compatibility mean that Windows will be working with drive letters from now until doomsday?
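
    Partly, yes: NTFS has supported mounting a volume into an empty folder instead of (or in addition to) a drive letter since Windows 2000, much like a UNIX mount point. A minimal sketch with the built-in mountvol tool (the folder and volume GUID are placeholders):

        rem List volume GUIDs and their current mount points
        mountvol

        rem Mount a volume into an empty NTFS folder instead of a letter
        mountvol C:\mounts\data \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\

    The catch is the one the question runs into: the system volume itself still carries a letter, which is why the C: assumption persists.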


  • Clustered MSDTC

    - by niel
    Hi, I'm setting up a SQL cluster (SQL 2008) on Windows 2008 R2. I enable network access on the local DTC and then create a DTC resource in my cluster. The problem is that when I start up the resource, it does not pull through my settings to enable network access. The log shows this:

        MSDTC started with the following settings:
        Security Configuration (OFF = 0 and ON = 1):
        Allow Remote Administrator = 0, Network Clients = 0,
        Transaction Manager Communication:
        Allow Inbound Transactions = 0, Allow Outbound Transactions = 0,
        Transaction Internet Protocol (TIP) = 0, Enable XA Transactions = 0,
        Enable SNA LU 6.2 Transactions = 1,
        MSDTC Communications Security = Mutual Authentication Required,
        Account = NT AUTHORITY\NetworkService, Firewall Exclusion Detected = 0,
        Transaction Bridge Installed = 0, Filtering Duplicate Events = 1

    whereas when I restart the local DTC service it says this:

        Security Configuration (OFF = 0 and ON = 1):
        Allow Remote Administrator = 0, Network Clients = 1,
        Transaction Manager Communication:
        Allow Inbound Transactions = 1, Allow Outbound Transactions = 1,
        Transaction Internet Protocol (TIP) = 0, Enable XA Transactions = 1,
        Enable SNA LU 6.2 Transactions = 1,
        MSDTC Communications Security = No Authentication Required,
        Account = NT AUTHORITY\NetworkService, Firewall Exclusion Detected = 0,
        Transaction Bridge Installed = 0, Filtering Duplicate Events = 1

    The settings on both nodes in the cluster are the same. I have reinstalled and restarted too many times to mention. Any ideas?


  • Route web traffic through a separate interface

    - by tkane
    I'd like to route web traffic through the wlan0 interface and the rest through eth1. Can you please help me with the iptables commands to achieve this? My configuration is below. Thank you :) Edit: This is about a desktop configuration, not a web server setup. Basically I want to use one of my connections to browse the web and the other one for everything else.

    ifconfig:

        eth1      Link encap:Ethernet  HWaddr 00:1d:09:59:80:70
                  inet addr:192.168.2.164  Bcast:192.168.2.255  Mask:255.255.255.0
                  inet6 addr: fe80::21d:9ff:fe59:8070/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:33 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:41 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:4771 (4.7 KB)  TX bytes:7081 (7.0 KB)
                  Interrupt:17

        wlan0     Link encap:Ethernet  HWaddr 00:1c:bf:90:8a:6d
                  inet addr:192.168.1.70  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::21c:bfff:fe90:8a6d/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:77 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:102 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:14256 (14.2 KB)  TX bytes:14764 (14.7 KB)

    route:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.2.0     *               255.255.255.0   U     1      0        0 eth1
        192.168.1.0     *               255.255.255.0   U     2      0        0 wlan0
        link-local      *               255.255.0.0     U     1000   0        0 wlan0
        default         adsl            0.0.0.0         UG    0      0        0 eth1
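
    For what it's worth, iptables alone cannot choose an outgoing interface; the usual pattern is to mark web traffic in the mangle table and steer marked packets with a policy-routing rule. A minimal sketch, assuming the wlan0 gateway is 192.168.1.1 (it isn't shown in the route output, so adjust to match your network):

        # Mark locally generated web traffic
        iptables -t mangle -A OUTPUT -p tcp --dport 80  -j MARK --set-mark 1
        iptables -t mangle -A OUTPUT -p tcp --dport 443 -j MARK --set-mark 1

        # Send marked packets to a routing table that defaults to wlan0
        ip rule add fwmark 1 table 100
        ip route add default via 192.168.1.1 dev wlan0 table 100
        ip route flush cache

    You may also need to SNAT marked traffic to wlan0's address (192.168.1.70) so replies come back on the right interface.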


  • How to clear the REPLACEBATT state from APC UPS after hot-swapping the battery

    - by Hubert Kario
    I replaced a battery in an APC Smart-UPS by hot-swapping it. While the swap went fine and didn't disturb the connected computers, the "Battery failed" LED on the front panel kept glowing. I tried apctest from the apcupsd package: I turned off the daemon and ran a self-test (after first changing BATTDATE in the EEPROM). The test ran fine but didn't clear the REPLACEBATT status or the glowing LED. How do I clear the failed-battery state without powering down the UPS and the connected equipment?


  • XAMPP server giving 404 error when requested by ipv4 connection

    - by boyb
    This is in reference to a previous question of mine that womble answered: http://serverfault.com/a/406280/127729

    So, now we have the real DNS records, we can do some diagnosis. dig for both A and AAAA on akosiboybastos.broker.freenet6.net gives a valid response, with an appropriate address. Good. dig for both A and AAAA on bastosforum.strangled.net gives the same responses (with a CNAME response thrown in). Also good. This means that the problem is not DNS-related, as those records are in order. wget -6 bastosforum.strangled.net/ gives a 200 OK response. wget -4 bastosforum.strangled.net/ gives a 404 Not Found response. This means that your webserver is misconfigured so that it's not serving the response you desire on IPv4. Given that the initial DNS problem asked in this question has been solved, I would recommend posting a new question with the relevant webserver-related configuration, if you can't determine the configuration error yourself.

    I am using XAMPP (latest version) running phpBB 3.0.10 via an IPv6 tunnel from freenet6, and my domain is akosiboybastos.broker.freenet6.com; nothing fancy with the installation, just an out-of-the-box install (with a few cosmetic mods). Both IPv4 and IPv6 traffic can connect using that URL, but when I put a CNAME record on my test domain, bastosforum.strangled.net, pointing it to akosiboybastos.broker.freenet6.com, only IPv6 can connect. As womble suggested, this is a misconfigured webserver. To be honest, I don't know where to start checking on the server, as it works fully if you use the domain given by freenet6 (akosiboybastos.broker.freenet6.com). Any info on how to go about this server issue is welcome, as I'm really a noob when it comes to computers. Regards, boyb
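
    One common cause of exactly this split is name-based virtual hosting that only matches the freenet6 hostname, so requests arriving under the CNAME'd name can fall into a different vhost over IPv4 than over IPv6. A sketch of an httpd-vhosts.conf entry that answers for both names (the DocumentRoot is an assumption for a stock XAMPP layout):

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName akosiboybastos.broker.freenet6.com
            # also answer for the CNAME'd test domain
            ServerAlias bastosforum.strangled.net
            DocumentRoot "C:/xampp/htdocs"
        </VirtualHost>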


  • Windows 7 Permissions

    - by Scott
    I have an odd problem with a Windows 7 laptop. It's currently a single-user installation, a fresh install on an Asus laptop. I have an svn repo checked out on my second partition, with a directory that I have added to the svn:ignore list because it is for tmp files. This specific directory shows as read-only, but I need write access on it for my project to function properly. If I right-click, change the directory to not be read-only, and apply this recursively, it is immediately reverted back to a read-only directory. I have also modified Apache's service to run as myself, to no avail. I'm stumped... Any ideas?
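
    One thing worth knowing: Explorer reports most folders as "read-only" because NTFS reuses that bit as a folder-customization flag, so the checkbox reverting doesn't necessarily mean the files are write-protected; the real blockers are attributes on the files themselves or a deny ACL. A sketch of clearing both from an elevated prompt (the path is a placeholder for your tmp directory):

        rem Clear the read-only attribute on everything under the directory
        attrib -r "D:\repo\tmp\*" /s /d

        rem Grant yourself modify rights on the whole tree, in case an ACL is the blocker
        icacls "D:\repo\tmp" /grant "%USERNAME%":(OI)(CI)M /T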


  • Can't change read-only folder in Windows 7

    - by James Drinkard
    I'm trying to run a Spring MVC 2.5 tutorial, and when I run the ant script for a deploy, I get this error:

        deploy:
        [copy] Copying 2 files to C:\apache-tomcat-7.0.8\webapps\c:\projects\workspace\springapp
        BUILD FAILED
        C:\projects\workspace\springapp\build.xml:46: Failed to copy
        C:\projects\workspace\springapp\war\WEB-INF\web.xml to
        C:\apache-tomcat-7.0.8\webapps\c:\projects\workspace\springapp\WEB-INF\web.xml
        due to failed to create the parent directory for
        C:\apache-tomcat-7.0.8\webapps\c:\projects\workspace\springapp\WEB-INF\web.xml

    After reviewing the springapp directory, I saw it was marked read-only. No problem, I thought, as I'm logged in as administrator. However, changing the UAC settings, going to a command prompt as admin and trying to change the folder's attributes with attrib, making myself the owner of the folder, changing the security settings, etc. did nothing. I can't seem to change this folder at all. So my question is: how do I change the settings on that folder so Ant can make changes to it?
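
    Note that the failing destination, C:\apache-tomcat-7.0.8\webapps\c:\projects\workspace\springapp, is two absolute paths glued together, which points at the Ant deploy property holding an absolute path where the build expects a bare webapp name, rather than at folder permissions. A hypothetical correction to the tutorial's build.properties (the property names here are illustrative; check what build.xml:46 actually references):

        # illustrative property names, not necessarily the tutorial's
        appserver.home=C:/apache-tomcat-7.0.8
        deploy.path=${appserver.home}/webapps
        # a bare name, not a filesystem path
        name=springapp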


  • Outdoor WiFi Mesh Topology vs. Repeaters

    - by IronJaxor
    Here's the current configuration in our organization (which I believe is incorrect): We have a number of Cisco 1500 series APs (22 in total) mounted outdoors to provide seamless WiFi coverage over a large area. Each AP, however, has its own physical ethernet connection back to the WLC (all the APs are marked as Root APs), and they all broadcast the same SSID. We have tried to stagger the channel selection, but because there are only three non-overlapping channels to choose from, and in some areas the density of APs is quite high, there are multiple places with channel interference. With this configuration we experience 100-150 client disconnects every day. (Our clients are mobile, so they move throughout the coverage area constantly.) My idea is to switch the APs to the same channel, thereby forming a wireless mesh: use the built-in functionality of the 1500 series to use 802.11a as the backhaul, designate one or two APs as Root APs, and wire those back to the WLC - forming a WiFi mesh, which, if I'm not mistaken, is the point of the 1500 series in the first place! I am however completely new to WiFi networks and wonder if I am simply mistaken about what my proposed changes will achieve, or if there is a better way to tackle the WiFi topology.


  • Where does a QuickTime-powered application store changes to the user interface?

    - by Luke
    I have downloaded an application written with the QuickTime library (for Windows 7). The application needs no installation: just unzip it into a directory and run the program. It works, but I have a problem: the program allows the user to change a lot of values through its interface, but has no option to reset them to their defaults. Worse, when I exit the program and run it again, the interface still has the changed values. In the program directory there is no file that stores the changes made to the UI. I suspect that QuickTime records these changes somewhere, but I can't find the right file. I have even deleted the application and re-unzipped it to another location - but the UI still shows the values I changed!
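
    On Windows, a portable-looking program that remembers settings with no file in its own directory is almost always writing to the registry under HKCU or to a folder under %APPDATA%. A sketch of how to hunt for either (AppName is a placeholder for the program's actual name):

        rem Search per-user registry keys for the program's name
        reg query HKCU\Software /s /f "AppName" /k

        rem Look for a settings folder in the roaming and local profiles
        dir /s /b "%APPDATA%" "%LOCALAPPDATA%" | findstr /i "AppName"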


  • Nagios remote monitoring: NRPE Vs. SSH

    - by sam
    We use Nagios to monitor quite a few (~130) servers. We monitor CPU, disk, RAM and a few other things on each server. I've always used SSH to run the remote commands, purely because it requires little to no additional config on the remote server: just install nagios-plugins, create the nagios user and add the SSH key, all of which I've automated in a shell script. I've never actually considered the performance implications of using SSH over NRPE. I'm not too bothered about the load on the Nagios server (it's probably over-specced for what it does; it's never been over 10% CPU), but we run each remote check every 30 seconds and each server has 5 different checks performed. I assume SSH requires more resources per check, but is there a huge difference? (i.e., enough of a difference to warrant the switch to NRPE?) If it's any help, we monitor a mix of physical servers (normally with 8, 12 or 16 physical cores) and Amazon EC2 medium/large instances.
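
    For comparison, the two transports boil down to one-line plugin invocations on the Nagios server; a sketch with illustrative paths and thresholds:

        # SSH transport: a full ssh handshake (fork, key exchange, auth) per check
        /usr/lib/nagios/plugins/check_by_ssh -H remote-host \
            -C "/usr/lib/nagios/plugins/check_disk -w 20% -c 10%"

        # NRPE transport: a lightweight daemon on the remote host runs a pre-defined command
        /usr/lib/nagios/plugins/check_nrpe -H remote-host -c check_disk

    Most of the per-check cost difference is the SSH key exchange on every connection, which NRPE's lighter protocol avoids.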


  • Strange issue with 64 bit OS

    - by Sherwin Flight
    I own two versions of Windows 7: one 32-bit, the other 64-bit. The 64-bit version came with my new desktop, and the 32-bit version came with my laptop. I was doing a clean install on my laptop, and the install went smoothly; Windows is up and running! However, after installing it I realized that I had accidentally used the 64-bit installation disk instead of the 32-bit one. I confirmed this in the System Information screen, which says: System type: 64-bit Operating System. As far as I knew, this laptop was only a 32-bit machine. My understanding is that a 64-bit OS would NOT run on 32-bit architecture. Am I correct in this assumption? If this were a 32-bit laptop, would a 64-bit OS even run on it at all?
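
    As a quick sanity check from inside Windows (the fact that the 64-bit installer booted and runs already implies a 64-bit-capable CPU):

        rem "x64-based PC" here means the processor itself is 64-bit capable
        systeminfo | findstr /c:"System Type"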


  • Windows 2008 R2 TS printer security - can't take ownership

    - by Ian
    I have a Windows 2008 R2 server with the Terminal Server role installed. I'm seeing a problem with an ordinary user who is a member of the local Printer Operators group on the server. If the user opens a cmd window using 'run as administrator', they can run printmanager.msc without needing to enter their password again, and there they can change the ownership of redirected (Easy Print) printers without problems. If, from the same cmd window, they use subinacl to try to change the ownership of the queue to themselves, they get access denied:

        >subinacl.exe /printer "_#MyPrinter (2 redirected)" /setowner="MyDom\MyUsr"
        Elapsed Time: 00 00:00:00
        Done: 1, Modified 0, Failed 1, Syntax errors 0
        Last Done   : _#MyPrinter (2 redirected)
        Last Failed : _#MyPrinter (2 redirected) - OpenPrinter Error : 5 Access denied

    So: same context, same action, but one works and one doesn't. Any ideas about this odd behaviour? I'm using the x86 subinacl on an x64 server, as I can't find anything more up to date. I've tried icacls and others, but couldn't get them to do anything with printers.


  • Refreshing Windows Media library by command line

    - by dangowans
    Many file download managers allow you to run a command after a download finishes. Is there a command line to trigger a Windows Media Player 12 library refresh? Videos don't show up in the available list on my PS3 until the library is refreshed. Right now, I manually open Windows Media Player after the downloads finish, watch the bottom-right corner for the refresh to complete (i.e. "Update Complete"), then close the player. This works, but there has to be a better way. Yes, I know PS3 Media Server would do the trick, and I do use it when I need to transcode something, but WMP is running all the time, so I'd like to take advantage of it.


  • No rule to make target `libmysql.c', needed by `libmysql.lo'. Stop

    - by user1711008
    I'm installing mysql-5.1.53. Running ./configure works fine, but make fails with the error below. My system is CentOS 5.8, gcc version 4.1.2 20080704 (Red Hat 4.1.2-52).

        make[2]: Leaving directory `/root/soft/mysql-5.1.53/libmysql'
        make[1]: Leaving directory `/root/soft/mysql-5.1.53/libmysql'
        Making all in libmysql_r
        make[1]: Entering directory `/root/soft/mysql-5.1.53/libmysql_r'
        make  all-am
        make[2]: Entering directory `/root/soft/mysql-5.1.53/libmysql_r'
        make[2]: *** No rule to make target `libmysql.c', needed by `libmysql.lo'.  Stop.
        make[2]: Leaving directory `/root/soft/mysql-5.1.53/libmysql_r'
        make[1]: *** [all] Error 2
        make[1]: Leaving directory `/root/soft/mysql-5.1.53/libmysql_r'
        make: *** [all-recursive] Error 1
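
    The libmysql_r directory builds from the same sources as libmysql, so a missing libmysql.c there usually means a damaged or stale source tree rather than a real dependency problem. A common first step (a sketch, not a guaranteed fix) is to build in a freshly unpacked tree:

        tar xzf mysql-5.1.53.tar.gz
        cd mysql-5.1.53
        ./configure --prefix=/usr/local/mysql
        make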


  • WordPress 3 multi-site install

    - by mike
    Hello, trying to figure out if this is possible... My company has a CMS product written in Java, and we decided to use WordPress to run blogs for our clients. Obviously, WordPress does not run on Tomcat (at least not by default), so we installed Pound (http://www.apsis.ch/pound/) on our server and set up Apache and Tomcat on different ports. When "/blog/" is requested, the request is directed to Apache. This works fine, but we would like to use WordPress multisite so that we can manage all the blogs from a single interface. We would also like the URL for every site to be "/blog/", for example: http://www.site1.com/blog/ and http://www.site2.com/blog/. I'm thinking it would have to be done with Apache??? Is it even possible? Thanks!
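
    Serving several domains from one network is normally approached with WordPress's multisite constants plus a domain-mapping plugin, with Pound and Apache left as they are. A sketch of the wp-config.php constants involved (values are illustrative; whether mapped domains can keep the /blog/ prefix depends on the mapping plugin):

        /* enable the network setup screens, then the network itself */
        define('WP_ALLOW_MULTISITE', true);
        define('MULTISITE', true);

        /* path-based network rather than subdomains */
        define('SUBDOMAIN_INSTALL', false);
        define('DOMAIN_CURRENT_SITE', 'www.site1.com');
        define('PATH_CURRENT_SITE', '/blog/');
        define('SITE_ID_CURRENT_SITE', 1);
        define('BLOG_ID_CURRENT_SITE', 1);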


  • Install Rails on Linux [migrated]

    - by Jseb
    I am trying to install Ruby on Rails, and I followed this guide for the install: https://www.digitalocean.com/community/articles/how-to-install-ruby-on-rails-on-ubuntu-12-04-lts-precise-pangolin-with-rvm. The guide was helpful and I was able to run everything. However, as soon as I restarted the computer and went back into my app to run it again with rails s, this message appeared: The program 'rails' is currently not installed. You can install it by typing: sudo apt-get install rails. It worked before, so what should I do? Do I need to set a path of some sort? I am not that great with Linux, so bear with me. I am using Ubuntu 12.04 desktop and my user is john. Thanks in advance.
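
    With an RVM-based install like the one in that guide, this usually means the new shell session didn't load RVM, so the rails binary isn't on the PATH. A sketch of the standard fix (assumes RVM was installed per-user into ~/.rvm):

        # load RVM into the current shell session
        source ~/.rvm/scripts/rvm

        # make future terminals load it automatically
        echo 'source ~/.rvm/scripts/rvm' >> ~/.bashrc

        # pick a default ruby so 'rails' resolves after a reboot
        rvm list
        rvm use ruby-1.9.3 --default   # use whichever ruby 'rvm list' shows

    On Ubuntu's default terminal it can also help to enable "Run command as a login shell" in the profile preferences, since RVM hooks into login shells.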


  • One bigger Virtual Machine distributed across many Nodes [on hold]

    - by flyer
    I just set up virtual machines on one piece of hardware with Vagrant (this is just a test environment, not production!). I want to use Puppet to configure them, and next try to set up OpenStack. I am not sure I understand how this should look in the end. Is it possible to have the architecture below with OpenStack, where I run one virtual machine with Linux?

        -------------------------------
        |              VM             |
        -------------------------------
        |  NOVA   |  NOVA   |  NOVA   |
        -------------------------------
        |          OpenStack          |
        -------------------------------
        |  Node   |  Node   |  Node   |
        -------------------------------

    (In my environment the nodes are just virtual machines, but my question concerns separate hardware nodes.) After some comments... Is it a language barrier, or? This is only my 'virtual environment'. If we imagine these virtual machines are separate nodes (e.g. each has 4 cores), the OpenStack picture is still the same, right? Can I run one virtual machine across many nodes with OpenStack? Is it possible to aggregate the computation power of separate machines into one virtual distributed operating system?


  • Ubuntu: encrypt user's home directory and protect it from admin?

    - by Luc
    I have the following problem: I need to run some scripts on an Ubuntu machine, but I do not want those scripts to be visible to anybody. What would be the best way to do that? I was thinking of the following: create a dedicated user, add the scripts to that user's home directory, and protect and encrypt the home directory. Can I run the scripts from outside if the directory is encrypted? Can the superuser see the contents of the home dir? Is there a right way to do this? UPDATE: I think the best way would be for root to own those scripts. In that case I would need to allow another user to modify the network configuration. Is it possible to grant ONLY network rights to a user (via sudo or otherwise)?
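
    On the sudo question: yes, sudoers can grant a user specific commands only. A minimal sketch (edit with visudo; which binaries count as "network configuration" is an assumption to adjust):

        # /etc/sudoers.d/network-admin
        # allow 'luc' to run only these network tools as root, nothing else
        luc ALL=(root) NOPASSWD: /sbin/ip, /sbin/ifconfig, /sbin/route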


  • Apache Redirect is redirecting all HTTP instead of just one subdomain

    - by David Kaczynski
    All HTTP requests, such as http://example.com, are getting redirected to https://redmine.example.com, but I only want http://redmine.example.com to be redirected. I have the following in my 000-default configuration:

        <VirtualHost *:80>
            ServerName redmine.example.com
            DocumentRoot /usr/share/redmine/public
            Redirect permanent / https://redmine.example.com
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            . . .
        </VirtualHost>

    Here is my default-ssl configuration:

        <VirtualHost *:443>
            ServerName redmine.example.com
            DocumentRoot /usr/share/redmine/public
            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
            SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
            BrowserMatch "MSIE [2-6]" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
            BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown
            <Directory /usr/share/redmine/public>
                Options FollowSymLinks
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            LogLevel info
            ErrorLog /var/log/apache2/redmine-error.log
            CustomLog /var/log/apache2/redmine-access.log combined
        </VirtualHost>

        <VirtualHost *:443>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            . . .
        </VirtualHost>

    Is there anything here that is causing all HTTP requests to be redirected to https://redmine.example.com?
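
    A likely culprit: with Apache 2.2, if NameVirtualHost isn't enabled for *:80 (or no ServerName matches the request), the first <VirtualHost *:80> block acts as the default, so requests for plain example.com fall into the redmine vhost and hit its Redirect. A sketch of the usual fix (the directive is needed on 2.2; it is implicit in 2.4):

        # enable name-based matching on port 80
        NameVirtualHost *:80

        # optionally declare the generic vhost *first*, so unmatched
        # hostnames land there instead of in the redmine block
        <VirtualHost *:80>
            ServerName example.com
            DocumentRoot /var/www
        </VirtualHost>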


  • Auto-scaling EC2 Servers and Updating Code

    - by jstats
    We've come to the point where we need to set up autoscaling for our web servers, and I'm unsure how to go about scaling servers and updating the existing code without building a new AMI and changing the autoscale config to use it. I've read a bit about people bundling the new code, uploading it to S3, and having new servers grab the bundle on boot, but that doesn't seem all that pleasant either. Currently the web app's files live in a git repo; when we update the code, we push it to GitHub, ssh into the web app server, and run a hook to bring down the latest code. So I was thinking another option could be to run that hook on an hourly or daily cron task. Unfortunately that doesn't cover every type of update (for example, new blog posts' images and such, which aren't included in the git repo), but it's something. Could anyone offer some advice on what a common solution is, or on why my proposed solution is a bad idea? Thanks all.
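
    A common middle ground is to keep the AMI generic and let each new instance pull the latest code at boot through its user-data script, so autoscaled servers come up current without re-baking the AMI; a minimal sketch (paths, branch and hook name are assumptions):

        #!/bin/bash
        # EC2 user-data: runs at first boot of each autoscaled instance
        cd /var/www/app
        git pull origin master
        # reuse the same hook that manual deploys run
        ./deploy-hook.sh

    Assets that live outside the repo (such as uploaded blog images) are better moved to shared storage like S3, so the instances themselves stay stateless.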

