
  • VMware - Windows XP guest licensing

    - by jcooper
    Hi, if I have a VMware ESXi server with 4 Windows XP guests running on it, I understand that I need a separate license for each guest. Is there a way to simplify license compliance on these VMs? For example, I want to create a master VM image and boot it 4 times. By default those 4 VMs will have the same Windows activation key installed. Is there a simple solution for this? Or is it OK to do what I've described above, provided I have 4 unique license keys on hand in case of an audit? Thanks!


  • Fresh install of CentOS 6.4 64-bit with DirectAdmin slowly consumes all memory and crashes

    - by Coen Ponsen
    Dear Server Fault community, this is my first question on Server Fault. I'm new to server (mis)configuration, so please forgive me for asking something stupid :) I'm running DirectAdmin on a CentOS 6.4 64-bit virtual machine with 4GB memory and over 10000Gh. I migrated my websites because my former VPS couldn't keep up anymore. Only half of the websites from this 1GB machine have been migrated yet, so the migration is still in progress, and already my server crashes every day. Server performance up until that moment is perfect. The DirectAdmin log files show nothing out of the ordinary. Yesterday only the MySQL server crashed, but it has also crashed the entire machine before. The memory usage in DA seems to be normal:

        directadmin  (pid 3923 22158 22159 22160 22161 22162)  8.75 MB
        dovecot      (pid 3851)                                47.8 MB
        exim         (pid 1350)                                1.29 MB
        httpd        (pid 21525 21528 21529 21530 21531 21532 21546 21571 21742 21743 21744)  490.4 MB
        mysqld       (pid 1299)                                287.8 MB
        named        (pid 3807)                                16.3 MB
        proftpd      (pid 1481)                                1.91 MB
        sshd         (pid 1173 21494)                          5.16 MB

    Restarting services immediately frees up memory, but over time memory usage slowly increases (it takes about 24 hours to crash). The commands:

        # sync
        # echo 3 > /proc/sys/vm/drop_caches

    will free all memory correctly. I could just create a cronjob, but that seems the wrong way around to me. I can't seem to pinpoint the cause. Any advice, references or tips are highly appreciated! Greetings, Coen

    Edit: free -m after drop_caches:

                         total       used       free     shared    buffers     cached
        Mem:              3830        735       3095          0          0         21
        -/+ buffers/cache:            712       3117
        Swap:              991          0        991

    I'll post another one this evening.
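    For reference, a minimal sketch of the stopgap cronjob the poster mentions (the schedule and file name are assumptions - this papers over the symptom, it does not fix whatever is leaking):

        # /etc/cron.d/drop-caches -- hypothetical stopgap, not a fix
        # Flush dirty pages, then drop page cache, dentries and inodes every 4 hours
        0 */4 * * * root /bin/sync && /bin/echo 3 > /proc/sys/vm/drop_caches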


  • Simple Windows+Linux server provisioning? Chef/Puppet/Ansible etc

    - by Andrew
    I'm primarily a developer, part-time devops, and I manage servers here and there for my projects. I want to automate provisioning of web/app/database servers going forward. I manage a mixture of both Windows and Linux servers (VPS, cloud and dedicated). I've briefly investigated Chef/Puppet/Ansible, and I want to find something that:

    - Is easy to learn and understand. I don't want to invest weeks into understanding a complicated piece of tech.
    - Ideally does not require a server ("master server") to hold the configurations
    - Supports provisioning of Windows and Linux servers
    - Comes with suitable documentation to get started

    Does anyone have any advice on what tool is best suited? Thanks
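    For illustration only: of the three, Ansible comes closest to the "no master server" requirement, since it pushes over SSH (or WinRM for Windows hosts) from wherever you run it. A minimal sketch with hypothetical host names and packages:

        # inventory.ini (hypothetical hosts)
        [web]
        web1.example.com

        # site.yml -- minimal playbook sketch
        - hosts: web
          become: yes
          tasks:
            - name: Install nginx
              apt:
                name: nginx
                state: present

    This would be run as "ansible-playbook -i inventory.ini site.yml"; nothing needs to be installed on the target beyond Python and SSH.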


  • Control cell reference increment when dragging a formula in LibreOffice Calc (3.5)

    - by Chuck
    Using LibreOffice Calc (3.5) and have a question. When copying a formula that references cells into multiple empty cells, the default is to increment each cell reference by one column or row, depending on the direction the formula is being dragged. A formula '= 1 + A1' dragged horizontally changes to '= 1 + B1' when pulled one cell to the right, and to '= 1 + A2' when pulled one cell down. Is there a way to control the increment of the referenced cell? Is it possible to have a formula '= 1 + A1' that effectively changes to '= 1 + A3' when dragged down one cell, '= 1 + A5' when dragged down two cells, etc.? If it matters, I am trying to take a constantly updating master list of data that is organized by dates (Wednesdays and Saturdays) and create separate spreadsheets for each day of the week that can be updated by only pulling the formula down into the next cell. My attempts at using the 'lookup' function, the 'offset' function, and creating a sort column in LibreOffice Calc are thwarted by my inability to figure out how to get around the single-step increment when pulling a formula down into the next cell.  Thanks
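    A sketch of the usual workaround, assuming the formula starts in row 1 of some helper column: compute the reference from the formula's own row with OFFSET instead of letting Calc increment it:

        =1 + OFFSET($A$1, (ROW()-1)*2, 0)

    In row 1 this reads A1; dragged down one cell it reads A3, two cells A5, and so on, because (ROW()-1)*2 steps the row offset by two per cell. The anchor $A$1 and the *2 stride are assumptions to adapt to the actual layout.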


  • Wifi connection turns off all the time

    - by er-v
    Hello! I really need help with one strange problem. I have a wifi network in my apartment with a Trendnet TEW-652BRP wireless N home router. Everything works fine for three of my laptops, but I have one PC with a D-Link DWA-140 adapter. It loses its connection 2-3 times in 5 minutes. The following messages appear in my system log when it does so, in order of appearance:

        The browser has forced an election on network \Device\NetBT_Tcpip_{9537A5C1-3B43-4C56-B94C-CE69A257C3AD} because a master browser was stopped.

        The TCP/IP NetBIOS Helper service was successfully sent a stop control. The reason specified was: 0x40030011 [Operating System: Network Connectivity (Planned)] Comment: None

        The TCP/IP NetBIOS Helper service entered the stopped state.

    How can I stop it? I have the latest driver installed.
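    One aside: browser-election entries like these usually follow the drop rather than cause it. If the goal is just to silence them while chasing the real fault, a sketch (assuming legacy NetBIOS browsing isn't needed on this PC; run in an elevated prompt):

        rem Stop and disable the Computer Browser service
        sc stop Browser
        sc config Browser start= disabled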


  • How to use postfix header_checks with zarafa outgoing mail

    - by olvrlrnz
    I'm using Zarafa as MDA with Postfix. For privacy reasons I want to filter out client-internal IP addresses and the like. To do so I've added the following to master.cf:

        submission inet n - - - - smtpd
          [...]
          -o cleanup_service_name=subcleanup
          [...]

    and further down the file:

        subcleanup unix n - - - 0 cleanup
          -o header_checks=pcre:/etc/postfix/smtp_header_checks

    This works perfectly for clients delivering their mail through the submission port. But Zarafa is of course not using the submission port to send mail, so it doesn't hit the subcleanup routine, and outgoing mails contain a very nice X-Mailer: Zarafa-exact_version header, which is rather unsatisfying. Is there any way to make Zarafa use the subcleanup routine? Any help is much appreciated.
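    A sketch of one alternative, rather than steering Zarafa onto the submission port: apply the checks globally in main.cf, so every path through the default cleanup service gets them (this assumes the existing pcre file is safe to apply to all mail):

        # main.cf -- header_checks on the default cleanup service, i.e. all mail
        header_checks = pcre:/etc/postfix/smtp_header_checks

        # /etc/postfix/smtp_header_checks -- e.g. strip the Zarafa banner
        /^X-Mailer:\s+Zarafa/    IGNORE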


  • Updating ATI HD 5970 Graphics card - version errors?

    - by user55406
    I'm having an issue. My system specs are:

    - Intel i7 960
    - 6GB Corsair XMS RAM
    - ATI HD5970 graphics card
    - Intel DX58SO motherboard
    - Cooler Master HAF 922 case
    - 1.5TB Seagate hard drive
    - Windows Vista x86 (32-bit)

    Here is my issue: when I go to the AMD/ATI website to update my graphics card, it doesn't update. When I run DxDiag and click on Display, it tells me my version is 8.17.0, while the latest version is 10.10.0. How can I get from 8.17.0 to 10.10.0? I figured it would have done that after I updated the driver for my graphics card. Thanks.


  • Can expire_logs_days be less than 1 day in MySQL?

    - by Scott
    So... yesterday I received an after-the-fact email about a campaign that has started for one of the services that I run. Now the DB server is getting hammered, hard, to the tune of about 300MB/min in binary logging for the replica. As you can imagine, this is chewing up space at a fairly tremendous rate. My normal 7-day expiry of binary logs just isn't cutting it. I've resorted to truncating the logs to just the last 4 hours with (I'm verifying that replication is up to date with mk-heartbeat):

        PURGE MASTER LOGS BEFORE DATE_SUB( NOW(), INTERVAL 4 HOUR);

    I'm just running that from cron every few hours to weather the storm, but it made me question the minimum value for expire_logs_days. I haven't come across a value that is less than 1, but that doesn't mean it isn't possible. http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_expire_logs_days gives the type as numeric, but doesn't indicate whether it expects integers.
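    For what it's worth, expire_logs_days takes whole days as far as I know, so sub-day expiry has to be done externally, much as described above. A sketch of that cron stopgap (file name and credentials are hypothetical):

        # /etc/cron.d/trim-binlogs -- hypothetical stopgap
        # Every 2 hours, drop binary logs older than 4 hours
        0 */2 * * * root mysql -u root -pSECRET -e "PURGE MASTER LOGS BEFORE DATE_SUB(NOW(), INTERVAL 4 HOUR);"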


  • How can I set up an FTP user with a home directory inside another user's home folder?

    - by simon180
    Hi, I have an Ubuntu (Hardy) server which I am using to host multiple websites. All of the sites are stored in subfolders of a public_html folder for my main login to the server and accessed via a single SSH account. I now have a website user who wants FTP (or similar) access so they can upload files to the directory where their website is situated; however, I still need the SSH account to have access to this directory, as I may need to make changes using my master account. Basically I want to create an FTP account (I have VSFTPD installed) for a user with a home directory inside my own user account's, but they should only be able to read/write to this folder or its subfolders, not go further up the directory tree. How can I achieve this? Thanks
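    A sketch of how this is commonly done with vsftpd - user name, paths and group are all assumptions; the fiddly part is keeping the master account writable too, via a shared group:

        # Create the FTP user with a home inside the master account's tree, no shell
        useradd -d /home/simon/public_html/clientsite -s /usr/sbin/nologin clientftp
        passwd clientftp

        # Shared group plus setgid dirs so both accounts can write
        groupadd webclient
        usermod -aG webclient clientftp
        usermod -aG webclient simon
        chgrp -R webclient /home/simon/public_html/clientsite
        chmod -R g+rwX /home/simon/public_html/clientsite
        find /home/simon/public_html/clientsite -type d -exec chmod g+s {} \;

        # /etc/vsftpd.conf -- jail local users to their home directory
        local_enable=YES
        write_enable=YES
        chroot_local_user=YES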


  • Can't Read/Write the Hard disk used in NAS

    - by mgpyone
    I recently purchased a Synology DS212j and intended to use my two 3.5" HDs in it. One of them had been in use as an external HD. When I installed these two units in the NAS, it asked me to format them in order to use them with its own format (I think it's ext3?). I installed the disks and omitted the formatting option. I've just got another 3.5" hard disk and installed it in the NAS; everything's fine. However, when I take the (used) HD out of the NAS and install it back in the standalone casing, I find that it can't be read from either OS X or Windows 7. I've tried with ext2sd and I only found a 2GB portion of the whole 1.5TB hard disk. Here's another reference from EASEUS Partition Master


  • How to resolve 'No internet connectivity' issues with a virtualised 2008 R2 server using Forefront UAG

    - by user684589
    I have spent considerable time reading as many blogs and articles as I can to help me work out why my VM (running on Hyper-V) for DirectAccess has suddenly stopped being able to access the internet. The VM setup shares the same internet connection on which I have written and submitted this question, so I know that the underlying internet connection is fully functional. Until last week DirectAccess was fully functional and had no issues. This is a recent problem, which was preceded by a number of consistent crashes on the DA machine when access was attempted. Upon reboot all seemed well, until recently.

    I am not certain whether it is relevant, but previous to this I had a number of power issues where the entire VM host shut down unexpectedly, leaving around 8 VMs in a bad way. Upon restart, the UAG DirectAccess machine was unable to access its configuration service (although the service was started), but this seemed to relate to the Active Directory Lightweight Directory Services (AD LDS) instance, which had a corrupted database. Having repaired this database, I restarted the service and could subsequently reconnect to the configuration service again. For good measure I re-bound the network adapters (virtualised through Hyper-V) and DirectAccess claimed to be all happy again.

    However, as it stands my machine is still unable to access the internet, showing the "No internet connectivity" exclamation mark for the external-facing NIC. I have also tried removing the adapters, then disabling and re-enabling them, and the problem persists. The intranet part of the VM (CorpNet) seems to be fully functional as before, and I'm running out of ideas. Any input would be greatly appreciated. I am not an advanced domain administrator, so please be gentle.


  • Get percentage free space on database volumes w/ SQL Server 2005?

    - by Allen
    I am currently using SQL Server 2005 and (undocumented, I believe) master..xp_fixeddrives to get free space on my database volumes as part of my monitoring. However, this only gives me an absolute number of MB free. What I really need is percentage free. Is there another way in SQL Server 2005 to get this? If not, is there some other lightweight way to get it? If I can, I want to avoid installing a Java JRE, Perl or Python on my database server. Perhaps VBScript, or a small Windows executable on the file system? Yes, I know I can Google this, and I have. It looks like there are a few ways to accomplish it, and I'm curious how my DBA brethren have handled this.
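    Since VBScript is mentioned, a minimal sketch using WMI's Win32_LogicalDisk, which exposes both Size and FreeSpace so the percentage falls out directly (filtering to fixed disks is an assumption):

        ' freepct.vbs -- print percent free per fixed disk (DriveType 3)
        Dim wmi, disk
        Set wmi = GetObject("winmgmts:\\.\root\cimv2")
        For Each disk In wmi.ExecQuery("SELECT DeviceID, Size, FreeSpace FROM Win32_LogicalDisk WHERE DriveType = 3")
            WScript.Echo disk.DeviceID & " " & FormatNumber(100 * disk.FreeSpace / disk.Size, 1) & "% free"
        Next

    Run it with "cscript //nologo freepct.vbs"; the output could be scraped from a scheduled task or shelled out to via xp_cmdshell if that is enabled.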


  • When to use MySQL replication or DRBD for HA on Xen VM?

    - by user62513
    I'm setting up a database which needs to provide high availability. My primary concern is high performance and robustness (I don't want something that will fail fast and badly). The database is accessed by the application at an average of 300 qps. It will run on Xen VMs, and it has some InnoDB tables as well as MyISAM tables. The VMs are connected via 100 Mbit/s Ethernet. Which of the two - MySQL replication or DRBD - would you recommend in such a situation? Or should I use DRBD to make the master database highly available and use MySQL replication on the slaves? I'm a developer, so these things are all not so easy for me to make a sound judgement on.


  • Will installing an Ultra ATA cable backwards affect performance?

    - by GMMan
    I've recently purchased a hard drive upgrade for my Xbox: a 320GB WD Caviar Blue WD3200AAJB and a StarTech.com Ultra ATA/66/100/133 cable (IDE66) - yes, I'm crazy. When it came to installing the cable, it was too short (my fault), and there wasn't enough space between the master and slave ends to reach both the DVD drive and the hard drive. The only thing I could do was install the cable backwards, twisting it quite a bit to make it fit. The upgrade works, but reading the manual for the hard drive I replaced (a 10GB Seagate U Series 5), apparently there is a specific way you have to connect the cable. I don't have that option, so the question comes down to: will my drive performance be at Ultra ATA levels, or is it still performing at original ATA speeds? Is there any way I can test this (benchmarking software for Xbox)?


  • SSL issues with puppet agent at openSUSE

    - by Roman Grazhdan
    I have a master running on my VPS, and it has a simple helloworld manifest which works fine with any Ubuntu machine I have. It connects, exchanges keys and creates the test file all right, so I'm sure it's not a server issue. The agent, which is running on a virtual machine with openSUSE, says:

        err: Could not request certificate: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed. This is often because the time is out of sync on the server or client

    I believe it's probably a broken or missing lib, since the package is not built very carefully - it wouldn't start out of the box because of a wrong path to the lockfile, for example. So how do I figure out what exactly is wrong here? The time is all right, I've checked it. I could probably do without SSL if that's possible, since those SUSE machines are just for training, but only as a last resort.
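    A sketch of the usual blunt fix when verification fails but the clocks really are in sync: throw away the agent's SSL state and request a fresh certificate (the ssldir path and hostname below are assumptions; the first command prints the real path):

        # On the agent: find and wipe the local SSL state, then re-request
        puppet agent --configprint ssldir
        rm -rf /var/lib/puppet/ssl
        puppet agent --test

        # On the master: if an old cert for this host lingers, clean it first
        puppet cert clean agent.example.com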


  • Duplicating KeePass files instead of creating a new file

    - by BlakBat
    I'm currently using KeePass 2 and syncing my files via Dropbox. I have a few KeePass files (one for websites, one to store software licenses, etc.). Every time I need a new KeePass file, I just create a copy of the .kdbx file, open it, remove all existing entries, and change the key transformation rounds to another pseudo-random value. I do not change the master password. I want to know whether this is unsafe practice or a security risk, compared to just creating a new KeePass file via the "File - New" menu. The reason I don't use the menu: I'm lazy enough to not want to reconfigure "database settings" every time.


  • How to sync apps with one Mac and 3 devices?

    - by openfrog
    We have 1 Mac, with 1 iTunes account, and 3 iOS devices. Every time we sync one device, iTunes either spams it with ALL apps from ALL devices, OR it removes all of the apps from the device, including all their data - which is very annoying. Someone told me there is a way to tell iTunes to keep separate track of the devices. How can I set up iTunes such that it will not transfer all my iPhone-only apps to my iPad every time I sync? Basically I want the devices to be the "master": they dictate which apps should be on the device, not the other way around.


  • How to let Linux Python application handle termination on user logout correctly?

    - by tuxpoldo
    I have written a Linux GUI application in Python that needs to do some cleanup tasks before being terminated when the user logs out. Unfortunately it seems that on logout, all applications are simply killed. I tried both handling POSIX signals and DBus notifications, but nothing worked. Any idea what I could have done wrong? On application startup I register some termination handlers:

        # create graceful shutdown mechanisms
        signal.signal(signal.SIGTERM, self.on_signal_term)
        self.bus = dbus.SessionBus()
        self.bus.call_on_disconnection(self.on_session_disconnect)

    When the user logs out, neither self.on_signal_term nor self.on_session_disconnect is called. The problem occurs in several scenarios: Ubuntu 14.04 with Unity, Debian Wheezy with GNOME. Full code: https://github.com/tuxpoldo/btsync-deb/tree/master/btsync-gui
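    One detail that can bite here: inside a GLib/GTK main loop, a plain signal.signal handler only runs when the Python interpreter next gets control, which may never happen before a follow-up SIGKILL lands. A sketch of wiring signals into the main loop instead - a guess at one contributing cause, not a confirmed fix, and it assumes a GLib-based app:

        import signal
        from gi.repository import GLib

        def on_terminate(*args):
            # do cleanup here, then leave the main loop
            print("terminating, cleaning up...")
            loop.quit()
            return GLib.SOURCE_REMOVE

        loop = GLib.MainLoop()
        # Dispatch SIGTERM/SIGHUP inside the main loop, not asynchronously
        GLib.unix_signal_add(GLib.PRIORITY_DEFAULT, signal.SIGTERM, on_terminate)
        GLib.unix_signal_add(GLib.PRIORITY_DEFAULT, signal.SIGHUP, on_terminate)
        loop.run()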


  • Converting massive images to PDF, without crashing applications

    - by BloodyIron
    I'm trying to work with a large-format scanner, and we are scanning very long documents. For example, we cut one of our documents into two pieces, and one of those pieces is 3633x82486 pixels. My application, Scanning Master 21+, which comes with the device (a Graphtec CSX300-09), can output PDF; however, when I try to save to PDF it complains about the file being too large. I can successfully output to BMP, and GIMP can even open this BMP after taking a while to load it. The resulting files range from 200MB to 1.2GB in size. Acrobat refuses to open the BMP, saying the format isn't supported or the file is damaged (which I know is not true). As I mentioned, the PDF plugin for GIMP crashes when I try to export to PDF. I'm really not sure what is the best tool for this job. So what is the best tool to produce PDF documents from very large images?
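    One command-line route worth trying, assuming ImageMagick is acceptable: its resource limits make it spill pixel data to disk instead of exhausting RAM on images this size. A sketch (the limit values are guesses to tune for the machine):

        # Convert a huge BMP to PDF, capping RAM so pixels spill to disk
        convert -limit memory 1GiB -limit map 2GiB scan.bmp scan.pdf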


  • Can GnomeKeyring store passwords unencrypted?

    - by antimeme
    I have a Fedora 15 laptop with the root and home partitions encrypted using LUKS. When it boots I have to enter a pass phrase to unlock the master key, so I have it configured to automatically log me in to my account. However, GnomeKeyring remains locked, so I have to enter another pass phrase for that. This is unpleasant and completely pointless since the entire disk is encrypted. I've not been able to find a way to configure GnomeKeyring to store its pass phrases without encryption. For example, I was not able to find an answer here: http://library.gnome.org/users/seahorse-plugins/stable/index.html.en Is there a solution? If not, is there a mailing list where it would be appropriate to plead my case?


  • Options to efficiently synchronize 1 million files with remote servers?

    - by Zilvinas
    At the company I work for we have things called "playlists", which are small files of ~100-300 bytes each. There are about a million of them, and about 100,000 of them change every hour. These playlists need to be uploaded to 10 other remote servers on different continents every hour, and it needs to happen quickly - ideally in under 2 minutes. It's very important that files deleted on the master are also deleted on all the replicas. We currently use Linux for our infrastructure. I was thinking about trying rsync with the -W option to copy whole files without comparing contents. I haven't tried it yet, but maybe people who have more experience with rsync could tell me whether it's a viable option? What other options are worth considering?
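    For reference, a sketch of the invocation being considered - destination host and paths are made up; --delete covers the requirement that removals propagate:

        # Push whole files (-W skips the delta algorithm, a win for tiny files)
        # and delete remotely whatever is gone locally
        rsync -aW --delete /var/playlists/ sync@replica1.example.com:/var/playlists/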


  • Reducing volume of an audio device on windows 7

    - by bdonlan
    I have a USB headset with a very loud amplifier, but low granularity in its gain control. In order to get comfortable audio, I have to reduce the individual application levels in the mixer to '1', and the master mixer to around '10'. Of course, new applications start out at '10', and immediately blast out my ears. Is there a way to add a filter to cut down the volume some so I can get better control of it? That is, reduce the volume of '100' so I can work within a reasonable range.


  • Tool or website or process to display previews of website templates residing in archive files?

    - by Tony_Henrich
    I have hundreds of website templates in rar or zip files. To view any of them I have to extract the archive to a temporary folder and then view the template in there - a time-consuming manual process for each template. Is there a tool which enables me to quickly preview the templates inside the files? Or (if I extract each template into a separate folder off a master folder) a web app which can enable previewing of each template by automatically creating a link, or a preview image of the home page (similar to template sites)? Or any method to preview the templates in the fastest, most convenient way possible? A sketch of the do-it-yourself version of the second idea follows below.
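    Absent a ready-made tool, one could unpack everything once and generate a crude index page of links (paths are assumptions; rar extraction needs unrar installed):

        #!/bin/bash
        # Unpack every archive into its own folder and build an index of links
        cd /srv/templates || exit 1
        for f in *.zip *.rar; do
            [ -e "$f" ] || continue
            d="${f%.*}"
            mkdir -p "$d"
            case "$f" in
                *.zip) unzip -oq "$f" -d "$d" ;;
                *.rar) unrar x -o+ "$f" "$d/" ;;
            esac
        done
        {
            echo "<html><body><ul>"
            for d in */; do
                echo "<li><a href=\"$d\">${d%/}</a></li>"
            done
            echo "</ul></body></html>"
        } > index.html

    Serving /srv/templates from any web server then gives click-through previews of each template's home page.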


  • puppet onlyif specified nodes

    - by Valintinr
    I'm trying to write a Puppet template. I have a puppet master and a few puppet agents, and they must all be treated differently. I think it's good to do this by the node's hostname, but when I tried it I encountered an error:

        puppet-agent[169037]: (/Stage[main]//Exec[adduser]) Could not evaluate: Could not find command 'ru1'

    See the code below:

        exec { 'adduser':
          command => 'sudo adduser -m -p pawSfQewWrUAA test -G wheel',
          path    => [ '/bin', '/usr/bin' ],
          onlyif  => "$hostname == ru1"
        }

    I need this task to run only on the node with the hostname ru1. How can I do this? Thanks.
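    The error itself points at the cause: onlyif takes a shell command to execute, not a Puppet expression, so "$hostname == ru1" is being run as a program. A sketch of the usual pattern using a Puppet conditional instead (the unless guard is an added assumption to keep the exec idempotent, and sudo is dropped since the agent normally runs as root):

        if $hostname == 'ru1' {
          exec { 'adduser':
            command => 'adduser -m -p pawSfQewWrUAA test -G wheel',
            path    => [ '/bin', '/usr/bin', '/usr/sbin' ],
            unless  => 'id test',
          }
        }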


  • What could be causing LVM errors on first boot after install in Debian?

    - by ianfuture
    Hi, I've installed Debian (Lenny) on a machine at home. It was set up during the install to have a /boot partition, with the rest encrypted, LVM on top of that, and all the other partitions inside the LVM. After the install completed, on first boot it asked for a password to decrypt the drives (the same password for both), then it showed an error which said LVM could not find a physical device with a particular UUID, or something similar. The LVM install spans two HDs: one 120GB and one 40GB. The 120GB is master on its IDE cable and has /boot on it; the 40GB is slave on the other IDE cable. Is there anything that could be done to rescue this install, or to diagnose the problem? It took ages to install due to the time spent encrypting the drives and I'd rather not go through that again. :( Thanks.. Ian
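    A sketch of the usual diagnosis path from a live CD or rescue shell - device names are assumptions; the point is to unlock both encrypted drives first, then see whether LVM can find all of its physical volumes:

        # Unlock each encrypted container (the names after luksOpen are arbitrary)
        cryptsetup luksOpen /dev/hda2 crypt0
        cryptsetup luksOpen /dev/hdb1 crypt1

        # Now ask LVM what it can see
        pvs            # physical volumes and their UUIDs
        vgscan         # rescan for volume groups
        vgchange -ay   # activate whatever was found
        lvs            # list logical volumes

    If pvs shows only one of the two physical volumes after both containers are open, the missing-UUID message likely means the second drive was never unlocked at boot (e.g. a missing /etc/crypttab entry), which is fixable without reinstalling.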

