Search Results

Search found 90811 results on 3633 pages for 'hyper v server 2012 r2'.

  • Perl module error on solaris-10

    - by ramesh.mimit
    I have installed Perl and the pm_dbdmysql Perl module on Solaris 10. I have a Perl script which makes a MySQL DB connection to a different server, runs some queries and returns the results. It works fine on Linux (Red Hat), but when I run the script on Solaris 10 it gives me the error below: 2010-12-14 00:00:00 and 2010-12-14 23:59:59DAILY INSIDE : 2010-12-14 00:00:00 -- 2010-12-14 23:59:59 install_driver(mysql) failed: Can't locate DBD/mysql.pm in @INC (@INC contains: /usr/local/lib/perl5/5.10.1/i86pc-solaris /usr/local/lib/perl5/5.10.1 /usr/local/lib/perl5/site_perl/5.10.1/i86pc-solaris /usr/local/lib/perl5/site_perl/5.10.1 .) at (eval 15) line 3. Perhaps the DBD::mysql perl module hasn't been fully installed, or perhaps the capitalisation of 'mysql' isn't right. Available drivers: DBM, ExampleP, File, Gofer, Multiplex, Proxy, Sponge, Sybase. at cerberus_report.pl line 114 Yet the dbd-mysql Perl module is already installed: PKGINST: CSWpmdbdmysql NAME: pm_dbdmysql - MySQL driver for the Perl5 Database Interface (DBI) Is this something related to the path variables, or do I need some other Perl module dependency?

  • Why is my FTP output file blank?

    - by Nathan Long
    From the Windows command prompt, I have FTPd to a Windows web server. I can get a file, and I can see a directory listing with dir, but I want to save that list locally. I tried dir > c:\somefile.txt, and the file is created, but it's blank. Same thing if I do ls > c:\somefile.txt. The result is the same when I FTP from a Linux box. FTP sends back the following: 200 PORT command successful 150 Opening ASCII mode data connection for /bin/ls 226 Transfer complete
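
    Command-line FTP clients generally expect the local file name as an argument to dir/ls rather than a shell-style redirection handled inside the client. Another option is to script the listing; a minimal sketch with Python's ftplib (the host, credentials and local path are placeholders):

        from ftplib import FTP

        # Connect and log in; replace with the real host and credentials.
        ftp = FTP("ftp.example.com")
        ftp.login("username", "password")

        # Retrieve the directory listing line by line and write it to a local file.
        with open(r"c:\somefile.txt", "w") as listing:
            ftp.retrlines("LIST", lambda line: listing.write(line + "\n"))

        ftp.quit()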

  • ssh_exchange_identification: Connection closed by remote host?

    - by user51684
    debug1: Connection established. debug1: identity file /home/DAMS/.ssh/id_rsa type 1 debug1: identity file /home/DAMS/.ssh/id_rsa-cert type -1 debug1: identity file /home/DAMS/.ssh/id_dsa type -1 debug1: identity file /home/DAMS/.ssh/id_dsa-cert type -1 ssh_exchange_identification: Connection closed by remote host Hello, this one is different: nothing is missing. I'm using Cygwin, and the connection just stops when I do git push production to my server. Usually it's fine, but I don't know why it stops connecting now; I wonder what's wrong.

  • Access Amazon Linux EC2 over VNC using Guacamole

    - by Neon Flash
    I have a t1.micro Amazon Linux AMI instance running, and I want to access it over VNC so that I get the GUI. I came across Guacamole and the installation instructions for the server-side configuration. So I get that we need to set up Apache Tomcat on the Linux machine, install all the required dependencies, and edit the configuration files for Tomcat. But how do I access it from Windows? What is the client-side configuration? From what I understand so far, instead of using a VNC client like TightVNC or VNC Viewer, we can use the web browser to access the Amazon EC2 instance. I am using Windows 7 as the client.

  • IIS6 host multiple websites under same sub-domain (or something similar)

    - by user28502
    I'm trying to figure out a structure for a hosted application that I'm working on. I've got a domain, let's call it app.company.com (a sub-domain of company.com, of course), that is set up to redirect to my IIS 6 web server. I would like to set up one website in IIS for each client that will use this application, and have the URL scheme be like this: app.company.com/clientA -- would point to the ClientA website in IIS app.company.com/clientB -- would point to the ClientB website in IIS Do you guys have any pointers or best practices for my scenario?

  • Calculating Cloud Service Costs [closed]

    - by capdragon
    Possible Duplicate: Can you help me with my capacity planning? I would like to scale a web application to the cloud and wanted to know if anyone has experience calculating costs and could tell me how I would go about that. I have NO experience with cloud services at all. Currently my production environment consists of two web servers and one database server. If the application continues on a linear growth path, I eventually want to scale to the cloud to avoid any more long-term commitments to extra hardware. I want to recreate an environment similar to what I have now as a baseline, and treat that as the fixed cost I will always have. I also want to calculate the variable costs that will increase with more users or bandwidth. I don't have a preferred cloud vendor; Amazon, Rackspace, Terremark or any other is fine as long as I understand how to calculate my fixed and variable costs.
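
    One way to approach such an estimate is to separate an always-on baseline from usage-driven charges. A minimal sketch of that split in Python follows; every instance name, price and usage figure in it is a placeholder assumption, not any vendor's actual rate.

        # Rough monthly cost model: fixed baseline (always-on instances) plus
        # variable charges that scale with bandwidth and storage.
        # All prices below are hypothetical placeholders.

        FIXED_INSTANCES = {"web-1": 0.10, "web-2": 0.10, "db-1": 0.25}  # assumed $/hour
        HOURS_PER_MONTH = 730

        BANDWIDTH_PRICE_PER_GB = 0.09       # assumed egress price, $/GB
        STORAGE_PRICE_PER_GB_MONTH = 0.10   # assumed block-storage price, $/GB-month

        def monthly_cost(egress_gb, storage_gb):
            fixed = sum(FIXED_INSTANCES.values()) * HOURS_PER_MONTH
            variable = (egress_gb * BANDWIDTH_PRICE_PER_GB
                        + storage_gb * STORAGE_PRICE_PER_GB_MONTH)
            return fixed, variable, fixed + variable

        if __name__ == "__main__":
            fixed, variable, total = monthly_cost(egress_gb=500, storage_gb=200)
            print(f"fixed ${fixed:.2f}  variable ${variable:.2f}  total ${total:.2f}")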

  • SASL + postfixadmin - SMTP authentication with hashed password

    - by mateo
    Hi all, I'm trying to set up a mail server and I have a problem with SMTP authentication using SASL. I'm using postfixadmin to create my mailboxes, and the password is stored as some kind of MD5 hash; postfixadmin config.inc.php: $CONF['encrypt'] = 'md5crypt'; $CONF['authlib_default_flavor'] = 'md5raw'; SASL is configured like this (/etc/postfix/sasl/smtpd.conf): pwcheck_method: auxprop auxprop_plugin: sql sql_engine: mysql mech_list: plain login cram-md5 digest-md5 sql_hostnames: 127.0.0.1 sql_user: postfix sql_passwd: **** sql_database: postfix sql_select: SELECT password FROM mailbox WHERE username = '%u@%r' log_level: 7 If I try to authenticate (say, from Thunderbird) with my password, I can't. If I use the hashed password from MySQL, I can authenticate and send an email. So I think the problem is the hash algorithm. Do you know how to set up SASL (or postfixadmin) so they work together? I don't want to store my passwords in plain text...
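
    Part of what is going on is that the mailbox table holds a salted md5crypt string rather than the plaintext password, and challenge-response mechanisms such as CRAM-MD5 or DIGEST-MD5 cannot be verified from a crypt-style hash. As a minimal sketch of how md5crypt verification itself works (assuming a Unix build of Python that still ships the standard crypt module; the password is hypothetical):

        import crypt
        from hmac import compare_digest

        # A hypothetical mailbox password, hashed the way postfixadmin's md5crypt
        # setting stores it: a salted "$1$..." crypt string.
        plaintext = "s3cret"
        stored_hash = crypt.crypt(plaintext, crypt.mksalt(crypt.METHOD_MD5))

        def check_password(candidate, stored):
            # crypt() re-hashes the candidate with the salt embedded in the stored
            # string, so only a plaintext candidate can be checked this way.
            return compare_digest(crypt.crypt(candidate, stored), stored)

        print(stored_hash)                            # e.g. $1$<salt>$<digest>
        print(check_password("s3cret", stored_hash))  # True
        print(check_password("wrong", stored_hash))   # False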

  • rsync error unexplained error (code 255) at io.c

    - by kabeer
    I was using a script to perform rsync from sudo crontab. The script does a two-way rsync (from serverA to serverB and back). After I rebooted both server machines, the rsync no longer works from sudo crontab. I also set up a new cron job and it fails. The error is: rsync error: unexplained error (code 255) at io.c(600) [sender=3.0.6] rsync: connection unexpectedly closed (0 bytes received so far) [receiver] However, when run from a terminal, the rsync script works as expected without issues. Please help.
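
    Exit code 255 with "connection unexpectedly closed" usually means the underlying ssh transport failed, and cron runs jobs with a much sparser environment (no ssh-agent, different HOME and PATH) than an interactive shell, which is a common reason something works in a terminal but not from crontab. A small diagnostic wrapper along these lines (the log path and rsync command line are hypothetical) can record what the job actually sees when cron runs it:

        #!/usr/bin/env python3
        # Run the same rsync from cron via this wrapper so the environment,
        # output and exit code are captured for comparison with a manual run.
        import datetime
        import os
        import subprocess

        LOG = "/var/log/rsync-cron-debug.log"            # hypothetical log path
        CMD = ["/usr/bin/rsync", "-az", "-e", "ssh",     # hypothetical command line
               "/data/", "serverB:/data/"]

        with open(LOG, "a") as log:
            log.write(f"--- {datetime.datetime.now()} ---\n")
            log.write(f"PATH={os.environ.get('PATH')}\n")
            log.write(f"HOME={os.environ.get('HOME')}\n")
            result = subprocess.run(CMD, capture_output=True, text=True)
            log.write(result.stdout)
            log.write(result.stderr)
            log.write(f"exit code: {result.returncode}\n")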

  • (simple) linux HA with vmware vsphere?

    - by derhelge
    I hope my upcoming question is specific enough, and you are able and willing to support :-) We have several openSUSE VMs in an ESX cluster (three ESX servers) with an attached iSCSI SAN. All of those Linux VMs are configured as a single point of failure, which means in the case of a web server: LAMP, storage, etc. everything on that one machine. This was very simple, and in case of a failure (in the last years: kernel panics or Apache crashes) a simple reboot triggered by a script did it. But the problem is: how to upgrade/maintain the w(eb-)application or the underlying OS without downtime? This wasn't really manageable, and I did it in the early morning ;) How can I achieve a "simple" high-availability cluster now? I thought of: DRBD with heartbeat on 2 VMs, and for the storage an RDM (raw device mapped) LUN, changing the read-write permissions for both VMs. Is this a good idea? Does anyone have a better solution?

  • When DNS doesn't cache

    - by John Francis
    We've had some odd DNS problems over the past couple of days that I don't fully understand. Some of our DNS names stopped resolving for some of our customers due to some 'unknown' server reconfiguration at our DNS provider. The problem seemed to be intermittent, i.e. it stopped working and started working again within a few minutes, over a couple of days. I'm no expert on DNS, but I'd have expected DNS caches to prevent this sort of thing from happening - when we need to change an IP address for a DNS record, it can take 24 hours to propagate, so how can our DNS provider break name resolution for our customers so easily and so intermittently? Shouldn't the DNS caches kick in here? We had a similar problem about a month ago when one of their nameservers 'decided to reload the DNS database from scratch' - this broke our name resolution too. Again, why didn't the caches satisfy the name resolution requests? Any guesses would be appreciated. John
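
    Caching only helps while a record's TTL has not expired; once a resolver has to go back to the authoritative servers and they misbehave, the failure shows through, and failed (SERVFAIL) lookups are typically cached only briefly. A quick way to see the TTL your provider is handing out is the sketch below, which uses the third-party dnspython package (2.x); the domain is a placeholder:

        import dns.resolver

        # Ask the local resolver for the A records and show the remaining TTL,
        # which is what downstream caches use to decide how long to keep them.
        answer = dns.resolver.resolve("www.example.com", "A")
        for record in answer:
            print("address:", record.address)
        print("TTL (seconds):", answer.rrset.ttl)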

  • Can't get PostgreSQL to run in Debian Lenny

    - by nuqqsa
    I'm trying to install PostgreSQL 8.3 on Debian Lenny (kernel 2.6.26-2-686) using the package manager: sudo apt-get install postgresql-8.3 postgresql-client-8.3 postgresql-contrib-8.3 postgresql-common When deploying the package postgresql-common, the following warning is displayed: supported_versions: WARNING: Unknown Debian release: 5.0.4 I wonder if this is related to the problem I describe below. Next I try to launch the database server, but it has no effect and no logs are generated: sudo /etc/init.d/postgresql-8.3 start Does anybody have a clue what might be going on?

  • What does it mean to setup Postfix as "SMTP only"? [closed]

    - by BryanWheelock
    Possible Duplicate: What does it mean to setup Postfix as “SMTP only”? I am trying to set up Postfix for a few different domains on a virtual host. I need email only to send out registration confirmations and new password requests; no one will have a mailbox on this server. It seems this means that I want to set up Postfix as SMTP-only. I've also read about configuring Postfix null clients for similar needs. What is the difference between a Postfix null client and SMTP-only?
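
    Whatever the Postfix configuration ends up looking like, the application side of a send-only setup is just mail submission to the local MTA. A minimal sketch with Python's smtplib, assuming Postfix is listening on localhost port 25 (the addresses are placeholders):

        import smtplib
        from email.message import EmailMessage

        # Build a registration-confirmation message; addresses are placeholders.
        msg = EmailMessage()
        msg["From"] = "noreply@example.com"
        msg["To"] = "newuser@example.org"
        msg["Subject"] = "Please confirm your registration"
        msg.set_content("Click the link below to confirm your account.")

        # Hand it to the local Postfix instance for outbound delivery.
        with smtplib.SMTP("localhost") as smtp:
            smtp.send_message(msg)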

  • How do web hosting companies host end users' domains and give so many public IPs?

    - by Registered User
    Hi, I am a computer science guy who understands networking very well, but when it comes to web hosting companies I am clueless. I want to know how web hosting companies give so many public IPs to so many users, each of whom also has a root login. How this is done technically is what I am interested in knowing; I do not know how you people configure it. In my case, if I had to do it, I would buy a public IP from someone, connect my server to it, and at most give some people SSH access to it. How is it done in the case of web hosting companies?

  • Why does StackExchange store images in imgur rather than its own servers? [migrated]

    - by martin's
    I am trying to understand the technical (and business) logic behind taking such an approach. Certainly SE isn't short of server or bandwidth resources. I don't think imgur is a CDN, so that can't be the reason. On the one hand one is giving up local control (meaning your files, your hardware) of the content. On the other, you don't have to use your own bandwidth, storage and resources. Then again, you depend on someone else for the reliability and up-time of your service.

  • Generate a self-signed certificate correctly in Zimbra

    - by rkmax
    I have a single mail server with Zimbra 8.0.0. To generate the certificate I'm following these steps. Generate the cert: ORG=MyOrganization CN=mail.mydomain.com COUNTRY=myCountry CITY=myCity /opt/zimbra/bin/zmcertmgr createcrt -new -days 365 -subject "/C=$COUNTRY/ST=N/A/L=$CITY/O=$ORG/OU=ZCS/CN=$CN" /opt/zimbra/bin/zmcertmgr deploycrt self -allserver su - zimbra "zmcontrol restart" Verify with /opt/zimbra/bin/zmcertmgr viewdeployedcrt; I can see the new cert. In Chrome, go to https://mail.mydomain.com and export the .cer, then test on a Windows client: certutil.exe -addstore root \path\to\exported.cert root "Root Certification Authorities trusted" You can add a root certificate to the root store CertUtil:-addstore command error: 0x8007000d (WIN32: 13) CertUtil: Invalid data. Even from Chrome I've tried to add the cert, without successful results. Can anyone help me with this problem?
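
    certutil's 0x8007000d "Invalid data" error often points at the encoding of the exported file rather than at the certificate itself. One way to fetch the deployed certificate straight from the server as Base64/PEM text, which can then be converted or imported on the Windows side, is the sketch below (host, port and output file are placeholders):

        import ssl

        # Fetch the certificate the server presents on HTTPS and save it as PEM.
        pem = ssl.get_server_certificate(("mail.mydomain.com", 443))
        with open("zimbra.pem", "w") as f:
            f.write(pem)
        print(pem.splitlines()[0])   # should print "-----BEGIN CERTIFICATE-----"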

  • Is there any way to isolate the python2.7, mod_wsgi installation from the main environment?

    - by user31
    I have many local virtual machines for building Django websites. I find it very hard to configure all the machines with mod_wsgi, Python and all the related installation issues. Is there any way I can install even Python 2.7, mod_wsgi, etc. inside the virtual environment folder, so that I can just copy that folder to my live server and not have to mess with mod_wsgi, Python 2.7 and other issues? Is this, or any close variation of it, possible, so that putting the site onto live servers is very easy and everything the site needs is included locally? I also face many problems when I need to move Django sites across servers.
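
    However the environment ends up being packaged, it helps to be able to confirm which interpreter and module search path mod_wsgi is actually using once the folder has been copied over. A tiny throwaway WSGI app like the sketch below (hypothetical, for verification only) reports exactly that:

        import sys

        def application(environ, start_response):
            # Report the interpreter and search path the WSGI process is using,
            # so a copied virtualenv can be verified from a browser.
            body = "executable: {}\nprefix: {}\npath:\n  {}\n".format(
                sys.executable, sys.prefix, "\n  ".join(sys.path)).encode("utf-8")
            start_response("200 OK", [("Content-Type", "text/plain"),
                                      ("Content-Length", str(len(body)))])
            return [body]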

  • How do I minor upgrade MySQL on Windows?

    - by TruMan1
    I currently have MySQL 5.1.35 installed on a Windows 2008 server via the MSI installer. I need to upgrade to the latest 5.1.44 to fix a bug, but the docs were not clear on how to do this. I ran the MSI installer, but it did not give me any upgrade option, so I quit it. I am wary because it's a production machine with many PHP websites running on it. Also, my data directory is not the default one; it's kept on another partition. How can I upgrade it? Thanks for any help.

  • Skipping hardlinks when using TSM Backup

    - by Lars Haugseth
    We need to back up a filesystem with lots of hardlinks. Since there are several hardlinks for each "true" file, we would like to skip all the hardlinks when backing up the filesystem, to avoid n exact copies of each file. The backup is done using Tivoli Storage Manager Backup, and we've been unable to get it to treat hardlinks as anything other than separate files to be backed up alongside each other. In case it's relevant for possible solutions, I'd like to note that it's possible to tell a hardlink from a proper file by the filename: foobarbaz-123.ext # file foobarbaz-123-1.ext # hardlink foobarbaz-123-2.ext # hardlink barbazfoo-456.ext # file barbazfoo-456-1.ext # hardlink barbazfoo-456-2.ext # hardlink barbazfoo-456-3.ext # hardlink That is, all hardlinks have two hyphens in the filename, whereas proper files have just the one. The server is running Ubuntu Linux, and the files are situated on a gfs volume on our SAN.
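
    Given that naming convention, one option is to generate an exclude list outside the backup tool and feed it in. The sketch below (Python, not TSM syntax; the root path is a placeholder) lists paths whose basename has two hyphens and whose link count confirms they are hardlinks:

        import os
        import re

        # Basenames like "foobarbaz-123-1.ext" (two hyphens) are hardlinks per the
        # naming convention above; st_nlink > 1 is used as a sanity check.
        HARDLINK_NAME = re.compile(r"^[^-]+-[^-]+-[^-]+\.[^.]+$")

        def hardlink_paths(root):
            for dirpath, _dirnames, filenames in os.walk(root):
                for name in filenames:
                    full = os.path.join(dirpath, name)
                    if HARDLINK_NAME.match(name) and os.stat(full).st_nlink > 1:
                        yield full

        if __name__ == "__main__":
            for path in hardlink_paths("/mnt/gfs_volume"):   # placeholder root
                print(path)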

  • Mac OS X: pushing all traffic through a VMWare VM [closed]

    - by bj99
    I want to set up an Astaro (Sophos) UTM in a virtual machine. In the end, the setup should be the following:

        Cable Modem (one IP address)
          | [Ethernet]
        Sophos UTM (running as a VM [VMWare Fusion 5] on the MacMini)
          | [WIFI]
        Airport Express v2 (for sharing the local network to wireless and wired clients)
          1) | [WIFI] Clients
          2) | [Ethernet over Thunderbolt Ethernet Adapter]* MacMini (Local File Server)

        * to have the Mini also protected behind the UTM

    The setup process for the UTM works fine, but then the problems start: I have just one external IP (from my cable modem provider). So if I put the VM in bridged mode, my Internet connection drops, because the MacMini also has its own IP address. If I put the VM in NAT mode, the Mini itself is not protected by the UTM. So: is there a way to hide the en0 interface (Ethernet) and the en1 interface (Wifi) from the MacMini, so that they don't even appear in the System Preferences Network section but are available to the VM? That way the Mini would have to connect via the en2 interface (Thunderbolt adapter) to make any Internet/LAN connection, and I would just use the single IP given by the cable modem. Thanks for any suggestions... Sebastian

  • Website requests not reaching IIS?

    - by pete the pagan-gerbil
    To start off with a confession, I am not a server admin - just a developer tasked with getting to the root of a problem. Please be gentle! I have an intranet ASP.NET website running in IIS on a virtual machine. The website is not accessed very often (the last IIS log file was modified nearly six months ago). Both the IP address and Host header value are now failing to return the website, and the IIS log still doesn't show any more recent activity. The virtual machine was moved to a different physical location a few months ago, and the IP address for it has changed. Could this be what has broken access to the site? What else should I be checking to solve this? I don't have totally unrestricted access to the building's network settings, structures, etc. I would be grateful for any advice, even if I can't use it myself it'll improve my knowledge of what's going on behind the scenes!
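
    Before digging into IIS itself, it can help to establish from a client whether the host name still resolves, whether anything answers on port 80, and what HTTP status comes back. A small check along these lines (the host name is a placeholder for the site's host header value):

        import http.client
        import socket

        HOST = "intranet-site.example.local"   # placeholder for the site's host header

        try:
            print("resolves to:", socket.gethostbyname(HOST))
        except socket.gaierror as exc:
            print("DNS lookup failed:", exc)
        else:
            conn = http.client.HTTPConnection(HOST, 80, timeout=5)
            try:
                conn.request("GET", "/", headers={"Host": HOST})
                response = conn.getresponse()
                print("HTTP status:", response.status, response.reason)
            except OSError as exc:
                print("TCP/HTTP request failed:", exc)
            finally:
                conn.close()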

  • QNAP ftp: hide Mac OSX hidden files .DS_Store and others

    - by tobia.zanarella
    I am using a QNAP as an FTP server. Users access this NAS from their Mac OS X Lion clients and put files and folders inside the QNAP shares, which are later accessed via FTP from external points. The problem is that inside the folders uploaded to the QNAP, Mac OS leaves its .DS_Store and .* hidden files. I know that for a Samba share, for example, I need to add two lines to the /etc/samba/smb.conf file: veto files = /._*/.DS_Store/ delete veto files = yes But what can I do on the QNAP FTP service? Thank you.

  • Setting up a static IP address (public) in Ubuntu

    - by ycseattle
    I have a business-class internet connection and need to set up a static IP address for a machine. I searched online and only found how to set up static local IP addresses (like 192.168..). I tried the same technique and only set the IP address and netmask, but after restarting networking the computer could not connect to the outside world. This is what I did: 1) edit /etc/network/interfaces iface eth0 inet static address 173.10.xxx.xx netmask 255.255.255.252 2) edit /etc/resolv.conf search wp.comcast.net nameserver xx.xx.xx.xxx nameserver xx.xx.xx.xxx 3) restart networking sudo /etc/init.d/networking restart The last step didn't report an error, and ifconfig shows the IP address was set, but this server cannot connect to the outside world; pinging google.com reports "unknown host google.com". Any ideas?

  • Puppet nodes cant' find master, ec2 public versus internal ip addresses and hosts files

    - by Blankman
    If I set up my hosts files such that they reference all other EC2 nodes using the internal IP addresses, will this work, or do I have to use the external IP addresses? Do I need to specify anything in my security group to get internal IP addresses to work? e.g. /etc/hosts ip-10-11-12-13.internal some_node_name If I do this, can I reference some_node_name anywhere in my scripts where I would have used the IP address previously? On my puppet agent servers, I have a reference to my puppet master like: public-ip-here puppet When I reboot my puppet agents, syslog shows they couldn't find the master, with the message: getaddinfo : name or service not known I did get it to work by updating /etc/default/puppet and adding to the options: --server=public-ip-here From what I read, puppet will by default try using 'puppet', and I set this in my hosts file, so why wouldn't it be picking this up?
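
    A quick sanity check from an agent node is to see what the relevant names actually resolve to there, since "name or service not known" comes straight out of getaddrinfo. A small sketch (the host names are the ones used above, standing in for real entries):

        import socket

        # Resolve each name the way the agent would and print the addresses,
        # or the lookup error if resolution fails.
        for name in ("puppet", "ip-10-11-12-13.internal", "some_node_name"):
            try:
                addrs = sorted({info[4][0] for info in socket.getaddrinfo(name, None)})
                print(f"{name}: {', '.join(addrs)}")
            except socket.gaierror as exc:
                print(f"{name}: lookup failed ({exc})")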

  • In Nginx can I set Keep-Alive dynamically depending on ssl connection?

    - by ck_
    I would like to avoid having to repeat all the virtual-host server {} blocks in nginx just to have custom SSL settings that vary slightly from plain HTTP requests. Most SSL directives can be placed right in the main block, except for one hurdle I cannot find a workaround for: a different keep-alive for HTTPS vs HTTP. Is there any way I can use $scheme to dynamically change the keepalive_timeout? I've even considered using more_set_input_headers -r 'Keep-Alive: timeout=60'; to conditionally replace the keep-alive timeout only if it already exists, but the problem is that $scheme cannot be used in location, i.e. this is invalid: location ^https {}

  • Changing Network Path of Offline Files

    - by Adam
    Many of our users have their Home folder set as Available Offline. Their Windows 7 laptops will not be back on our network for a few weeks. In the mean time, we're setting up new servers and reorganizing our files, so the network path to the Home folder is going to be completely different. Based on some testing I did, when the users return, any files they've created or modified while offline will be gone, and the new Home folder will be there and not set to sync. The offline cache of the old Home folder is still accessible through the Sync Center, but they're not going to want to dig through that and try to find what's missing. Avoiding this would involve keeping the old server around and moving everyone to the new location in person, so we know for sure they're synced first. Is there any way to avoid this that isn't as tedious, like a quick registry edit or something that will point the old offline cache to the new location?
