Search Results

Search found 12497 results on 500 pages for 'linked servers'.


  • Transferring Postfix install to new computer

    - by mlissner
    I have postfix installed on one computer, with DKIM and SPF working properly. What I'd like to do is start using a different computer instead, with the minimal amount of fuss. Mail servers have a way of baffling me, but I know there are things with cryptography going on here that I don't fully understand (and I don't really care to - I figured it out when I set up the last computer about a year ago, and am happy not to delve into it again). Right now, I'm working on the early steps of this process -- installing postfix on the new machine, and getting it going. Are there specific steps I could take to move the correct configs and key files and such to the new computer?
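
    A minimal sketch of the file-side move, assuming a Debian/Ubuntu box with OpenDKIM (the DKIM filter and its paths are assumptions; older setups use dkim-filter instead). SPF lives entirely in DNS, so nothing on disk needs to move for it, and the DKIM keys must move unchanged so they keep matching the selector record already published in DNS:

        # on the old machine: bundle the postfix config and DKIM keys
        tar czf mail-config.tar.gz /etc/postfix /etc/opendkim /etc/opendkim.conf
        scp mail-config.tar.gz newhost:

        # on the new machine: unpack, sanity-check, restart
        tar xzf mail-config.tar.gz -C /
        postfix check && service postfix restart

    If the new machine's hostname or IP differs, myhostname in /etc/postfix/main.cf and the DNS A/MX records need updating to match.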

    Read the article

  • Fix Fatal Error Condition showing system path

    - by JMC
    I've noticed there are a large number of servers running Magento Commerce that will return a fatal error revealing the system path:

        Fatal error: Uncaught exception 'Exception' with message 'File '/usr/local/www/magento/data1702/media/css' does not exists.' in /usr/local/www/magento/data1702/lib/Varien/File/Transfer/Adapter/Http.php:96
        Stack trace:
        #0 /usr/local/www/magento/data1702/get.php(205): Varien_File_Transfer_Adapter_Http->send('/usr/local/www/...')
        #1 /usr/local/www/magento/data1702/get.php(165): sendFile('/usr/local/www/...')
        #2 {main}
          thrown in /usr/local/www/magento/data1702/lib/Varien/File/Transfer/Adapter/Http.php on line 96

    Magento as an application is generally good about suppressing error messages. How can a Linux server running Apache be configured to avoid returning this error message, given that the app has trouble suppressing it?
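
    A hedged sketch of the usual server-side fix: disable display_errors so uncaught exceptions are logged rather than sent to the client (the file paths are assumptions for a typical PHP 5 / mod_php install of that era):

        # per-vhost with mod_php:
        #   php_admin_flag display_errors off
        #   php_admin_flag log_errors on
        # or globally, then reload Apache:
        sed -i 's/^display_errors = On/display_errors = Off/' /etc/php5/apache2/php.ini
        apachectl graceful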

    Read the article

  • Why isn't VIM storing macros across sessions?

    - by dotancohen
    In VIM 7.3 on Ubuntu Server 12.04.1, VIM forgets macros and registers after closing. I do have set nocompatible in .vimrc, and the command :set viminfo? gives this result: viminfo='100,<50,s10,h. What might be preventing the macros and registers from being stored across close / open? Note that I am not interested in storing mappings for long-term use in .vimrc. Rather, sometimes (such as during refactoring) I need to perform a simple operation on a few files, and I find it easier to do in VIM than with Perl. I just need the macros and registers stored across open / close, which I do have working on other servers. Thanks.
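
    One common culprit, offered as a guess rather than a diagnosis: ~/.viminfo exists but is not writable by the current user (typically left behind by a sudo vim session), so Vim silently fails to save registers on exit. A quick check:

        # if the file is owned by root, Vim cannot update it on exit
        ls -l ~/.viminfo
        sudo chown "$USER": ~/.viminfo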

    Read the article

  • 8 Character Device names

    - by Lee Harrison
    Is there any reason to still use only 8 characters in a device name? My boss still uses this rule for printers, computers, routers, servers... basically any device connected to our network. This leads to massive confusion among users, especially when it comes to printers. It also leads to confusion from an administration standpoint, because every device is named vaguely and similarly (it's only 8 characters!). I understand the history behind this and compatibility with older systems, but none of our legacy systems will ever make use of PS-printers and Wi-Fi networks. Is there any reason to still do this, and what is everyone else doing when it comes to naming network devices at an enterprise level?

    Read the article

  • CoovaAP router does not get DHCP settings from modem

    - by Dan Sosedoff
    I'm having some serious trouble making a CoovaAP-powered router get its network settings from the modem. The wireless router works in captive portal mode (so it handles user login on the same device), but the DNS settings and the router's connection IP are not set via DHCP from the modem the way they're supposed to be. This makes installation at other locations really complicated, because you have to go and set everything manually. For now I use Google's DNS servers. What's the way to make the router self-configure via the modem's DHCP?

    Read the article

  • Setting up IIS7 to mimic a GoDaddy shared hosting plan

    - by NerdFury
    I host multiple domains on a GoDaddy shared hosting account. I would like to set up a website locally in IIS 7 that mimics the setup of my hosted account, so that I can test and debug applications locally before deploying; debugging after deploying, or discovering issues only after deploying, is frustrating. I have created a folder WebRoot and put my main application in that folder. I created a website in IIS 7 and pointed it at that folder. I set up bindings with a fake domain, and created a matching entry in my hosts file to make the fake domain point at 127.0.0.1. I then created a folder www.otherdomain.com under WebRoot, created an application underneath my website, and pointed it at this folder. I can't find how to add bindings to the web application so it is referenced as a different fake domain rather than as a subdirectory under my root domain. What would be the proper way to set up IIS to best simulate the environment on the GoDaddy servers?
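
    In IIS, host-header bindings belong to sites, not to applications, so one reading of this setup (a sketch, with names and paths as assumptions) is to create a second site pointing at the subfolder, each site carrying its own binding; from an elevated command prompt:

        %windir%\system32\inetsrv\appcmd add site /name:"otherdomain" ^
            /bindings:http/*:80:www.otherdomain.com ^
            /physicalPath:"C:\WebRoot\www.otherdomain.com"

    Together with a hosts-file entry mapping www.otherdomain.com to 127.0.0.1, this mirrors how a shared host routes each domain to its own folder.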

    Read the article

  • What's a good secure Windows FTP server?

    - by Keith Nicholas
    What's a good FTP server? I have been running FileZilla, which seems OK-ish, but I've noticed that a lot of people try to hack FTP servers, and FileZilla only has very basic controls to prevent that. (So far no one has actually managed to get in, so that's good!) I was wondering if there were better options out there? I'm especially interested in recommendations from people who know they get targeted by hackers.

    Read the article

  • nginx errors: upstream timed out (110: Connection timed out)

    - by Sparsh Gupta
    Hi, I have an nginx server with 5 backend servers, serving around 400-500 requests/second. I have started getting a large number of upstream timed out errors (110: Connection timed out). The error string in error.log looks like:

        2011/01/10 21:59:46 [error] 1153#0: *1699246778 upstream timed out (110: Connection timed out) while reading response header from upstream, client: {IP}, server: {domain}, request: "GET {URL} HTTP/1.1", upstream: "http://{backend_server}:80/{url}", host: "{domain}", referrer: "{referrer}"

    Any suggestions on how to debug such errors? I am unable to find a munin plugin to keep a check on the number of upstream errors. Some days the number of errors is way too high, and some days it's a more decent 3-digit number; a munin graph would probably help us find a pattern or a correlation with something else. How can we bring the number of such errors down to zero?
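
    One hedged starting point: nginx's default proxy_read_timeout is 60s, so these entries mean a backend took longer than that to return headers. Raising the timeout and letting nginx fail over to the next upstream on timeout can be expressed roughly as follows (values are illustrative assumptions, and retrying timeouts is only safe for idempotent requests):

        cat >> /etc/nginx/conf.d/upstream-timeouts.conf <<'EOF'
        proxy_connect_timeout 5s;
        proxy_read_timeout 90s;
        proxy_next_upstream error timeout;
        EOF
        nginx -s reload

    For graphing, a count as simple as grep -c 'upstream timed out' /var/log/nginx/error.log run from cron is something a small custom munin plugin could wrap.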

    Read the article

  • Setting Rails up on a Linode - Nginx Issue

    - by rctneil
    I am extremely new to this, so please don't shoot me down: I have set up a Linode running Ubuntu, and it is all sort of working except nginx. I am following this guide: http://rubysource.com/deploying-a-rails-application/ and this one for nginx: http://library.linode.com/web-servers/nginx/installation/ubuntu-10.04-lucid When I go to my IP, I get a 500 internal server error. I have tried starting nginx and it looks like it starts fine. I run ps awx | grep nginx and I get:

         308 ?      Ss   0:00 nginx: master process /usr/sbin/nginx
        2309 ?      S    0:00 nginx: worker process
        2311 ?      S    0:00 nginx: worker process
        2312 ?      S    0:00 nginx: worker process
        2313 ?      S    0:00 nginx: worker process
        2850 pts/0  S+   0:00 grep --color=auto nginx

    I really am not sure what else to do to get it running. Any help? Neil
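
    Since the master and worker processes are clearly up, the 500 is coming from nginx (or the app behind it) rather than from a failed start, and the error log usually names the cause. A first step, assuming the default Ubuntu log path:

        tail -n 50 /var/log/nginx/error.log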

    Read the article

  • DD-WRT: What firmware and what webserver will fit on my 8MB of flash?

    - by Jeshii
    I'm attempting to make a portable WiFi webserver with PHP support on an old WRT54GS (v1.0) with DD-WRT. I have 8MB of flash on there. I know, it's a tall order. I tried the combination of dd-wrt.v24-13064_VINT_openvpn_jffs_small.bin, Optware, and lighttpd, but didn't have enough space. Now I'm going to try dd-wrt.v24-13064_VINT_mini.bin, but that only saves 300KB, and I don't think it is going to make the difference. Are there any other small HTTP servers with PHP support? Heck, I didn't even get to the point where I could add PHP! Maybe what I'm really looking for is a way to calculate the size and dependencies of Optware packages BEFORE trying to install them. Any ideas?
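
    On the sizing question, a sketch under the assumption that the Optware ipkg on DD-WRT exposes package metadata the way opkg does: the package index carries Size and Depends fields that can be inspected before committing flash space:

        ipkg update
        ipkg info lighttpd | grep -i -E 'size|depends'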

    Read the article

  • nginx static file buffer

    - by Philip
    I have an NFS share to which several frontend servers are connected, making the files stored on it available for HTTP download. It looks like I have problems with the way Apache is serving the files; there seems to be a very small buffer, or no buffer at all, which results in a lot of disk seeks. I did some testing with loading the whole requested file into memory at once and serving it to the client from memory, and with this technique I need fewer disk seeks per download stream. Since I don't want to implement this myself for production use, I thought I could maybe use nginx for it, because the documentation says it uses buffers for static file serving. Is it possible to increase the buffer size to a few MB, and if so, which config parameter do I have to change? Does anyone have experience with large buffers for static file serving? Is there a better way to reduce disk seeks?
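
    A sketch of the relevant knob, with values as assumptions: nginx's output_buffers directive (count and size) controls how much of a static file is read per cycle when sendfile is off:

        cat >> /etc/nginx/conf.d/static-buffers.conf <<'EOF'
        sendfile off;            # read through userspace buffers instead of the kernel sendfile path
        output_buffers 1 4m;     # one 4 MB buffer per connection (illustrative)
        EOF
        nginx -s reload

    Note the memory trade-off: every concurrent download can hold a full buffer, so the size must be weighed against expected concurrency.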

    Read the article

  • Updating WordPress in a multi-node environment

    - by Peter
    I'm finding this very tricky in a multi-node environment with code under revision control, i.e. multiple frontends and a single database. I have a deployment process that pushes a git repo to the servers, but obviously if I update WordPress from within the admin panel, it only updates the files on one frontend, and I would then need to copy the new files over to the other frontend nodes. Worse, when WordPress writes its updates on a node, those changes land in the git working copy, which breaks the automatic deploys that perform git pulls: the untracked changes make git refuse to pull in new deploys without manual intervention. How does one easily keep WordPress updated in a multi-node (load-balanced) environment?
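
    One hedged approach: treat the repo as the single source of truth and never update through the admin panel; run updates in one staging checkout, commit, and let the existing deploy fan the result out to every node. Assuming wp-cli is available:

        # in a staging checkout, not on a live frontend
        wp core update
        wp plugin update --all
        git add -A && git commit -m "WordPress core and plugin updates"
        git push    # the normal git-pull deploy distributes it to all nodes

    Setting the DISALLOW_FILE_MODS constant in wp-config.php then keeps individual nodes from drifting, by blocking panel-driven updates entirely.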

    Read the article

  • Windows 2008 Group Policy Setting? - Migration Headache

    - by DevNULL
    I have a small domain of users that I just migrated from a Linux domain running OpenLDAP. Our new servers are running Windows 2008 Standard. I've installed Active Directory and everything is working perfectly... except that the initial user privileges are pretty restrictive and I need to loosen them up a bit. For example, once users log in to their workstations, they can create new files and folders but cannot modify existing files or start. I basically want to open it all up except for software installations. Can someone please help with this migration headache?

    Read the article

  • Need to move a Debian server from i686 to x86_64 architecture

    - by user64204
    I have a Debian server that I need to move from one hosting provider to another. I don't really know how the old server was set up; all I know is that it's running a Ruby on Rails application with a lot of custom libraries installed, and that I should prepare myself for a painful migration.

    Old server:
    - OS: Debian 5.0.9
    - used disk space: 3.2GB
    - architecture: i686

    New server:
    - OS: Debian 5.0.9
    - free disk space: 10GB
    - architecture: x86_64

    As you can see, the problem is that the servers are running different architectures. Q: Is there any way I could migrate the old server to the new one in a few steps (or am I just dreaming that I could)? I was thinking maybe I could:
    - get the list of packages and gems installed on the old server and use a for loop to install them all on the new one (see the sketch below)
    - copy the disk content from the old to the new server while excluding whatever is architecture-specific (the problem is that I don't really know what to exclude)
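
    A sketch of the package-replay half of that idea, with the usual caveat that any i686-only packages will simply fail to resolve on the x86_64 box and will need manual attention:

        # old server: capture what is installed
        dpkg --get-selections > packages.txt
        gem list > gems.txt

        # new server: replay the selection
        dpkg --set-selections < packages.txt
        apt-get dselect-upgrade

    Config under /etc and the application tree can then be copied over with rsync, leaving all binaries to the package manager.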

    Read the article

  • VirtualBox cloned Ubuntu Server network error

    - by Luke
    I run a number of virtual servers on my network and I want to be able to easily clone base installations of Ubuntu Server. I use the VBoxManage command to clone the actual hard disk, then create a new profile for my VM and copy over the settings of the original VM. However, when I boot into the cloned VM, there seems to be a network problem: when I issue a ping I get the message "Network unreachable". I traced it down to the fact that the virtual network card of the cloned VM has a different MAC address than the original VM. When I copy the MAC address over, the clone seems to work fine. How can I have the cloned VM keep its own MAC address?
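
    A likely explanation, offered as an assumption about the guest rather than a certainty: Ubuntu pins the original MAC to eth0 in its persistent-net udev rule, so the clone's NIC (with its new MAC) comes up as eth1 while /etc/network/interfaces still configures eth0. Deleting the rule inside the clone lets it be regenerated for the new MAC:

        sudo rm /etc/udev/rules.d/70-persistent-net.rules
        sudo reboot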

    Read the article

  • Doesn't DNS diversity negatively affect performance? Why/how?

    - by cnst
    If you look at the press releases of various orgs that run the internet, you can see them praise the fact that they now run root server X in city Y, as if that magically makes everyone in city Y get all the relevant resolutions from the local server X instead of going 200ms across oceans and continents. Similarly, the zones of some geographical domain names, like .ru, are mirrored not just within Europe but also, for example, in Hong Kong, which is about 300ms away from central Europe, since the traffic often crosses two oceans each way. Doesn't all of this negatively affect DNS performance? Isn't it more of a liability to have a diverse pool of geodispersed authoritative servers, especially if your target audience is geographically concentrated? Perhaps a better question is: are there any DNS resolvers that use something better than naive round-robin for choosing which authoritative server to contact?
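
    The per-server latency is easy to measure directly; a small sketch using dig's reported query time (the server names are just examples):

        for ns in a.root-servers.net k.root-servers.net; do
            dig @$ns . NS | grep 'Query time'
        done

    For what it's worth, mainstream resolvers such as BIND keep a smoothed round-trip time per authoritative server and prefer the fastest, rather than rotating blindly, which is a large part of why a geodispersed pool does not hurt nearby clients.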

    Read the article

  • Connecting to Aerohive AP's from Laptops running Win. 7 using authentication from a Windows 2008 domain server

    - by user264116
    I have deployed a wireless network using Aerohive access points, two of which are set up as RADIUS servers. I want my users to be able to use the same username and password they use when they log onto our domain. They are able to do this from Android devices and computers running Windows 8, but it will not work on Windows 7 machines. How do I remedy this, keeping in mind that the machines are personal, not company-owned, and I have no way to change their hardware or software?

    Read the article

  • Upgrading a non-RAID server to RAID

    - by AZee
    I have just learned that our PDC has a single drive with 2 partitions. I also know that this drive has bad blocks, as recorded in the event log. What I would like to do is convert this to a RAID solution with a nice balance between economy and performance. I will admit that I have only configured servers with RAID from scratch, and have no experience upgrading an existing system to RAID; in fact, I'm not sure it is even possible. Since this is the PDC for 350+ workstations, downtime is important. I'd like to hear from other system administrators how they would tackle this, and their recommendations for all devices. At this point it seems to me that I can either replace the existing drive and restore from backup, or install a controller and drives, configure the RAID, and basically start from scratch. Thank you for taking your time. ~AZee

    Read the article

  • How do I configure namecheap for "arbitrarily-nested" wildcard subdomains?

    - by rabidsnail
    I'm trying to set up something like nyud.net, where any arbitrary chain of subdomains resolves to the same CNAME record (which in my case points to an Amazon Elastic Load Balancer). For example: www.gogle.com.nyud.net:8080 points to one of their cache servers, which looks at the Host header and returns www.google.com. I'm using Namecheap as my DNS host. Adding a CNAME record for *.mydomain.com doesn't seem to do anything (nslookup gives NXDOMAIN for all subdomains). What do I have to do to set this up? Do I have to use something fancier than Namecheap (like Route 53)?
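
    Worth knowing when testing: a DNS wildcard matches any depth of labels beneath it (RFC 4592), so a single *.mydomain.com record should already cover a.b.mydomain.com once it is live. Querying the zone's own nameserver sidesteps propagation delays (the nameserver name below is an assumption; use whatever NS the domain actually delegates to):

        dig +short a.b.mydomain.com @dns1.registrar-servers.com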

    Read the article

  • Distributed website server redundancy

    - by Keith Lion
    Assume a website's infrastructure is very complicated and fully distributed (probably like most large web companies). Am I right in thinking that although there are all these extra web servers to handle multiple client requests, there is still a single "machine" through which users must enter, presumably the one physically associated with the IP address? I ask because I need to know whether, in places where distributed systems exist, there is still a single point of failure: usually the control node or, in this example, the machine connected to the public internet. Surely there cannot be two machines connected to the internet, as they would have to have different IP addresses? This "machine" may not be a server per se; maybe it is a piece of Cisco equipment. I just need to know whether, in the real world, these distributed systems still have a particular section where they depend on the integrity of one electronic device.
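
    On the "two machines would need different IP addresses" point, it may help to see that one hostname routinely maps to several addresses (DNS round-robin), and with anycast a single address can be announced from many machines at once:

        # frequently prints more than one address for large sites
        dig +short google.com A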

    Read the article

  • Why do I get this mail server configuration error?

    - by Francesco
    "The configuration of your mail servers and your DNS is not OK! The report of the test is:

        mail.mydomain.com. -> mydomain.com -> 78.47.63.148 -> static.148.63.47.78.clients.your-server.de

    Spam recognition software and RFC 821 4.3 (also RFC 2821 4.3.1) state that the hostname given in the SMTP greeting MUST have an A record pointing back to the same server."

    I have an A record that points mail.mydomain.com to 78.47.63.148 (which is the IP address assigned to my VPS). All other records are fine, so what's wrong, and what record should I create to make it right? Thanks
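
    Reading the test output, the chain ends at the provider's generic reverse name, which suggests the missing piece is the PTR (reverse DNS) record, something the VPS provider usually sets on request. A hedged way to confirm the mismatch:

        dig +short mail.mydomain.com A     # forward: should print 78.47.63.148
        dig +short -x 78.47.63.148         # reverse: currently the your-server.de name

    The greeting name in Postfix (myhostname in main.cf) should match whatever the PTR is changed to.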

    Read the article

  • USB 3 adapter for a Dell 2850 with PCI (or PCI-X) slots

    - by Don Dickinson
    Does anyone know if there is a plain PCI (or PCI-X) USB 3 adapter? I understand that PCI bandwidth is less than USB 3's, but it still beats the heck out of USB 2. I have some older Dell 2850s that do not have the PCIe slots that most USB 3 adapters require, and I'd really like to get USB 3 into those servers. I searched the internet but didn't see any, and the local computer store said they only had PCIe adapters. TIA, Don

    Read the article

  • Network Block Device (NBD) clients for Windows or similar solutions

    - by przemoc
    Are there any NBD clients for Windows? Strangely, I cannot find any, or I am searching for them the wrong way. Such a client should presumably be a driver with a front-end tool (perhaps command-line) allowing one to create virtual drives and associate them with given hosts (or simply localhost) and the ports where NBD servers are listening. From the user's perspective the virtual drive should be close to what a physical drive is, so it should be accessible as something like \\.\PhysicalDriveX (maybe \\.\VirtualDriveX?), be visible in Disk Management (diskmgmt.msc), and work with tools like mountvol, at least. (The only thing I found remotely close to NBD on Windows is ImDisk's proxy mode and its companion tool devio, but AFAIK ImDisk only works at the partition level (so no virtual drive) and devio uses a different protocol.) A secondary question: are there any (preferably simple) Windows-specific solutions allowing creation of a virtual drive that delegates read/write requests to user space via some explicit channel (TCP, IPC, a DLL implementing a given API, etc.)?
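
    For reference, the Linux counterpart of the association being asked for is a single command (host and port here are examples; 10809 is the registered NBD port), which shows how little interface a Windows client would actually need:

        nbd-client nbd.example.com 10809 /dev/nbd0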

    Read the article

  • Migrating storage to a different controller

    - by bellocarico
    Hello, I've just purchased a couple of Adaptec controllers (2405/5405) for my ESXi 4.0 U1 servers. Currently ESXi and a couple of VMs are hosted on a single SATA boot disk connected to an onboard nvidia non-RAID controller. I know that it's possible to migrate from a single disk to RAID 1 with Adaptec, and I'm pleased with that, but I'm not sure whether ESXi already has the right drivers installed/loaded for this controller. Is there any way I can check this? Is ESXi clever enough to recognize the new hardware and load the right module? Thanks
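
    One way to check after fitting the card, sketched under the assumption that the ESXi shell (Tech Support Mode) is enabled:

        # lists the storage adapters ESXi has recognized, with their bound drivers
        esxcfg-scsidevs -a

    If the Adaptec card appears here with an aacraid-family driver bound, the module loaded; the VMware HCL entry for the 2405/5405 is the authoritative confirmation.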

    Read the article

  • Best design for a data server serving small (40 KB) pictures

    - by Nicolas Manzini
    I'm designing the server structure for my application in case things go well. I have one DB server connected to multiple servers that process connections, all with lots of RAM and fast processors. (I'm still looking for a way to use multithreading, because right now it's plain Apache + PHP, so lots of RAM is needed.) Upon an answer from those servers, the client can then connect to another server to retrieve pictures, using the address it previously got from the DB. Is it a good idea to have one file server with, let's say, nginx and an SSD disk sending all the pictures to everybody? Or should I have multiple servers accessing a shared SSD drive, or multiple disks updating each other? Also, should I put a lot of RAM in the database server, given that probably no picture will be much more popular than another?

    Read the article
