Search Results

Search found 3056 results on 123 pages for 'ben daniel'.

Page 39 of 123

  • ESXi add datastore without partitioning

    - by Daniel
    Hi all, I've recently started playing with ESXi and now want to move all my current data (movies, etc.) to an OpenFiler VM. Currently I have ESXi 4.1 running off a Patriot XT memory stick and a 500 GB HDD for the VM datastore, which is where OpenFiler will go. How do I go about adding my other hard drives so they are available to the OpenFiler machine without losing the data on them? They are currently formatted as NTFS.
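
    One hedged approach, assuming the disks are attached to the ESXi host, is a raw device mapping (RDM), which hands the whole physical disk to the VM without touching its contents; the vml identifier and datastore path below are placeholders:

        # List the physical disks the host can see (from an ESXi shell):
        ls -l /vmfs/devices/disks/

        # Create a physical-mode RDM pointer file on the existing datastore;
        # vml.XXXX stands in for the real disk identifier:
        vmkfstools -z /vmfs/devices/disks/vml.XXXX \
            /vmfs/volumes/datastore1/openfiler/ntfs-disk.vmdk

    The resulting .vmdk can then be attached to the OpenFiler VM as an existing disk. This is a sketch, not a tested recipe: whether the NTFS data is usable afterwards depends on the guest OS being able to read NTFS.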

  • memcached and PHP ... massive lag with sessions

    - by Ben Dauphinee
    I'm working on a new server built with Ubuntu 10.04, running php-fastcgi, nginx, and memcached. A phpinfo() script loads and works great, as does a test memcached script. For any script using sessions, though, page load time rockets through the roof. Here is memcached.ini:

        extension=memcached.so
        memcache.hash_strategy = "consistent"
        memcache.max_failover_attempts = 100
        memcache.allow_failover = 1
        session.save_handler = memcached
        session.save_path = "tcp://127.0.0.1:11211?persistent=1&weight=1&timeout=1&retry_interval=15"

    Let me know if you need to see any other configs.
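
    A hedged reading of that config: it mixes the two PHP memcache extensions. The tcp://host:port?options save_path syntax belongs to the older memcache extension, while the memcached extension being loaded here expects a bare host:port, so the session handler may be stalling while it tries to reach a badly parsed server address. A minimal corrected sketch:

        ; memcached.ini -- assumes the php-memcached (libmemcached) extension
        extension = memcached.so
        session.save_handler = memcached
        ; no tcp:// prefix and no query-string options for this extension:
        session.save_path = "127.0.0.1:11211"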

  • Can a Double-Density Floppy drive be swapped out for a High-Density drive?

    - by Ben
    I'm trying to fix the floppy drive in my Roland MC-500. The drive inside is a Matsushita DDF3-1, which doesn't seem to be in production anymore. The DDF3-2 is still available; however, it is an HD drive. Does anyone know offhand whether there is any issue with simply swapping the two drives? Could the power usage differ between the drives? Since the machine expects a 720k DD drive, are there any jumper settings to make the HD drive function as a DD drive?

  • How to configure Dovecot to not serve large emails to high-latency clients?

    - by Daniel Quinn
    I have a Dovecot mail server running at home on a flaky cable connection. For the most part, the IMAP functionality works beautifully, but I'd like to add one feature if I can: I want Dovecot not to serve large messages to high-latency clients. That is to say, if someone decides it's a good idea to send me a 9.3 MB email, I don't want to receive it unless I'm on my LAN at home. This can't be an uncommon request, but I'm having trouble finding the configuration option in the documentation. Any ideas and/or good keywords to use when Googling would be awesome.

  • Problems getting Cron to run processes tagged @reboot for LDAP users

    - by Ben Torell
    I have a lab of computers running Ubuntu 9.10. Most of the people who log on to these computers are users from an LDAP server, not local users. We discovered that if an LDAP user has a crontab with an entry marked @reboot, the command will not actually run when the machine reboots. I'm pretty sure this is because the cron daemon starts before networking is fully up, so the crontabs of LDAP users aren't loaded, and their @reboot entries are never checked. In fact, cron will ignore LDAP users' crontabs entirely after a reboot until that user runs crontab -e again and saves, or until the cron daemon is restarted. We were able to fix one part of this problem by adding the following line to /etc/crontab:

        @reboot root /bin/sleep 45 && /etc/init.d/cron restart

    Thus, when cron starts up after a reboot, it waits for networking to come up, then restarts the cron daemon. That fixes the problem of LDAP users' crontabs not being read at all. However, since it's the cron daemon being restarted and not the computer, @reboot entries are still ignored. Is there a way for a user to make a command run when the daemon restarts, rather than on reboot? Or is there a better solution to this overall problem? Thanks.
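
    One hedged workaround (my own suggestion, not something from the cron documentation): replace @reboot with an every-minute job that fires only once per boot, by comparing a marker file against the kernel's boot time. Restarts of the cron daemon then no longer matter:

        # In the LDAP user's crontab:
        # * * * * * $HOME/bin/run-once-per-boot.sh

        #!/bin/sh
        # run-once-per-boot.sh: run the payload once per boot, even if the
        # cron daemon itself was restarted after networking came up.
        MARKER="$HOME/.last-boot-run"
        BOOT=$(awk '/btime/ {print $2}' /proc/stat)   # epoch time of this boot
        if [ ! -f "$MARKER" ] || [ "$(cat "$MARKER")" != "$BOOT" ]; then
            echo "$BOOT" > "$MARKER"
            /path/to/the/real/reboot-command   # hypothetical payload
        fi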

  • Run QuickTime from command line to export video

    - by Daniel Huckstep
    I have a bunch of AVIs I am converting to M4V. I can do this in QuickTime by opening a video, choosing 'Save As', selecting a folder, selecting the type (iPhone, Movie, etc.), and so on, but I have around 100 videos I want to do this with. Are there command-line options? Or batch-processing options in the GUI? Enlighten me, please. This is QuickTime X on Snow Leopard.
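
    QuickTime X itself exposes almost nothing on the command line, so a common swap-in (named plainly: this is ffmpeg, not QuickTime) is a shell loop. The sketch assumes H.264/AAC output is an acceptable stand-in for the 'iPhone' preset:

        # Batch-convert every .avi in the current directory to .m4v:
        for f in *.avi; do
            ffmpeg -i "$f" -c:v libx264 -c:a aac "${f%.avi}.m4v"
        done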

  • Calculate minimum ext3 partition size for certain amount of data

    - by Daniel Beck
    The following ext3 partitions contain identical data. As you can see, the larger the partition, the more space the same files require:

        Filesystem     1K-blocks    Used  Available  Use%  Mounted on
        /dev/loop11      3965777  561064    3199964   15%  [...]
        /dev/loop19       573029  543843      29186   95%  [...]

        Filesystem     Size  Used  Avail  Use%  Mounted on
        /dev/loop11    3.8G  548M   3.1G   15%  [...]
        /dev/loop19    560M  532M    29M   95%  [...]

        Filesystem      Inodes  IUsed    IFree  IUse%  Mounted on
        /dev/loop11    1024000   1656  1022344     1%  [...]
        /dev/loop19    1024000   1656  1022344     1%  [...]

    I start with a partition of fixed size that possibly wastes a lot of space, and I want to create a partition that can hold the same data at (almost) minimal size. How can I reliably calculate the minimal ext3 partition size needed to store a given amount of data? The amount of data changes over time, and I need to automate the calculation.
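
    A hedged, automatable approach, assuming it is acceptable to build the filesystem twice: create a deliberately oversized ext3 image, copy the data in, and let resize2fs shrink it to its minimum; the resulting block count is the number you need:

        # probe.img is a file-backed ext3 filesystem used purely for sizing.
        IMG=probe.img
        dd if=/dev/zero of="$IMG" bs=1M count=4096   # oversized container
        mkfs.ext3 -F "$IMG"
        mkdir -p /mnt/probe && mount -o loop "$IMG" /mnt/probe
        cp -a /path/to/data/. /mnt/probe/            # copy the real data
        umount /mnt/probe

        # Shrink to the smallest size resize2fs allows...
        e2fsck -f "$IMG"
        resize2fs -M "$IMG"

        # ...then read back the resulting geometry:
        dumpe2fs -h "$IMG" | grep -E 'Block (count|size)'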

  • WAMP server won't run with PHP 5.3.4 but will with PHP 5.2.11

    - by Ben Williams
    I have a 64-bit Windows 7 Professional machine running WampServer 2.1 with Apache 2.2.4. It was installed on a clean machine, and I'm using the default ini/conf files as they come. WAMP is installed in C:\wamp\, with PHP 5.2 at C:\wamp\bin\php\php5.2.11 and PHP 5.3 at C:\wamp\bin\php\php5.3.4. Both folders have the same permissions. When I run WAMP with 5.2.11 selected, it starts fine. When I run it with 5.3.4 selected, there are no errors in the Apache or PHP error logs, but the following appears in my system application event log:

        The Apache service named reported the following error:
        httpd.exe: Syntax error on line 115 of C:/wamp/bin/apache/apache2.2.4/conf/httpd.conf:
        Cannot load C:/wamp/bin/php/php5.3.4/php5apache2_2.dll into server:
        The Apache service named is not a valid Win32 application.

    With 5.2.11 selected, Apache loads C:/wamp/bin/php/php5.2.11/php5apache2_2.dll without throwing an error. What am I doing wrong?

  • Getting SMB file shares working over a PPTP VPN

    - by Ben Scott
    I'm having issues getting SMB file shares working over a PPTP VPN. The server setup consists of a security device (a DrayTek V3300) that passes PPTP authentication to an SBS 2003 server running RRAS. The server is the DC and provides DNS and WINS, the single NIC's name server is set to 127.0.0.1, and DHCP on the DrayTek hands out the server's IP as the DNS server. If I create a new VPN connection in Windows 7, leaving everything as default apart from the server, username, password, and domain, I can:

      - ping everything by IP address
      - resolve IPs with nslookup using a fully-qualified name, as in nslookup fileserver.mydomain.local
      - ping machines by fully-qualified name, as in ping fileserver.mydomain.local

    However, if I try to access a file share:

      - within Explorer, I get "Windows cannot access ..." with "Error code: 0x80004005 Unspecified Error"
      - using net use z: \\fileserver.mydomain.local\share, I get "System error 53 has occurred. The network path was not found."

    If I add the machine name to my HOSTS file I can use the file share, which is my last-ditch workaround, but I have a number of VPN users and would rather have a solution that doesn't involve hand-editing system files on computers half a country away. If I set the WINS server explicitly in the connection's IPv4 settings I don't have to use the FQN to ping the machine, but that doesn't change anything else.

  • GitLab post-receive hook not firing

    - by Ben Graham
    Apologies if this isn't the right Stack Exchange site. I have a GitLab install. It was installed over the top of a gitolite install that was only a few days old, and I assume this non-standard setup is at the root of my problem, but I cannot pin it down. The problem is straightforward: post-receive hooks are not fired, which prevents 'project activity' from appearing in GitLab. It looks like this:

        $ git push
        # ...
        error: cannot run hooks/post-receive: No such file or directory

    The hook exists. The post-receive hook/symlink is present and executable:

        -rwxr-xr-x 1 git git 470 Oct  3  2012 .gitolite/hooks/common/post-receive
        lrwxrwxrwx 1 git git  45 Oct  3  2012 repositories/project.git/hooks/post-receive -> /home/git/.gitolite/hooks/common/post-receive

    It's executable by GitLab. The gitlab user can execute the script (I removed the /dev/null redirect and fed in blank input to get an 'OK' as output):

        $ sudo su - gitlab -c /home/git/.gitolite/hooks/common/post-receive
        OK

    GitLab can find it. GitLab is looking for hooks in the correct location:

        $ grep hooks /srv/gitlab/gitlab/config/gitlab.yml
          hooks_path: /home/git/.gitolite/hooks/

    and

        $ bundle exec rake gitlab:app:status RAILS_ENV=production
        # ...
        /home/git/.gitolite/hooks/common/post-receive exists? ............YES

    Environment: the env -i line in the hook is commonly cited as an issue. I think that would occur after this problem, but for completeness, redis-cli is found OK:

        $ env -i redis-cli
        redis>

    I've run out of debugging ideas on this one. Does anybody have any suggestions?
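
    A hedged debugging note: when exec reports "No such file or directory" for a script that plainly exists, the missing file is often the interpreter in the script's shebang line rather than the script itself. It may also matter that a push runs the hook as the git user, not as gitlab:

        # Check which interpreter the hook asks for:
        head -1 /home/git/.gitolite/hooks/common/post-receive

        # Run the hook as the user who actually executes it during a push:
        sudo su - git -c /home/git/.gitolite/hooks/common/post-receive

        # Confirm the symlink resolves from the repository side:
        readlink -f /home/git/repositories/project.git/hooks/post-receive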

  • How to Eliminate Tape Backup and Off-site Storage Service?

    - by Daniel Lucas
    PLEASE READ THE UPDATES AT THE BOTTOM. THANKS! ;)

    Environment info (all Windows):

      - 2 sites
      - 30 servers at site #1 (3 TB of backup data)
      - 5 servers at site #2 (1 TB of backup data)
      - MPLS backbone tunnel connecting site #1 and site #2

    Current backup process:

      - Online backup (disk-to-disk): Site #1 has a server running Symantec Backup Exec 12.5 with four 1 TB USB 2.0 disks. BE jobs for full backups run nightly on all servers in site #1 to these disks. Site #2 backs up to a central file server there using software they already had when we purchased them. A BE job pulls that data nightly to site #1 and stores it on said disks.
      - Off-site backup (tape): Connected to our backup server is a tape drive. BE backs up the external disks to tape once a week, and the tapes get picked up by our off-site storage company. We rotate two tape libraries: one is always here and one is always there.

    Requirements:

      - Eliminate the need for tape and the off-site storage service by doing disk-to-disk at each site and replicating site #1 to site #2 and vice versa.
      - A software-based solution, as hardware options have been too pricey (e.g., SonicWall, Arkeia).
      - Agents for Exchange, SharePoint, and SQL.

    Some ideas so far:

      - Storage: a DroboPro at each site with an initial 8 TB of storage (expandable up to 16 TB at present). I like these because they are rackmountable, allow disparate drives, and have iSCSI interfaces. They are relatively cheap too.
      - Software: Symantec Backup Exec 12.5 already has all the agents and licenses we need. I'd like to keep using it unless there is a better, similarly priced solution that does everything BE does plus deduplication and replication.
      - Server: because there is no more need for a SCSI adapter (for the tape drive), we are going to virtualize our backup server, as it is currently the only physical machine save for the SQL boxes.

    Problems: when replicating between sites we want as little data as possible to go across the pipe, and there is no deduplication or compression in what I have laid out so far. The files being replicated are BE's virtual tape libraries from our disk-to-disk backup, so each of those huge files will go across the wire every week, because they change every day.

    And finally, the question: is there any software out there that does deduplication, or at least compression, to handle just our site-to-site replication? Or, looking at our setup, is there any other solution I am missing that might be cheaper, faster, or better? Thanks. Sorry so long.

    UPDATE 1: I've set a bounty on this question to get it more attention. I'm looking for software that will handle replication of data between two sites using the least amount of data possible (either compression, deduplication, or some other method). Something similar to rsync would work, but it needs to be native to Windows and not a port involving shenanigans to get up and running. I prefer a GUI-based product, and I don't mind shelling out a few bones if it works. Please, only answers that meet the above criteria. If you don't think one exists, or if you think I'm being too restrictive, keep it to yourself. If after seven days there is no answer at all, so be it.

    UPDATE 2: I really appreciate everyone coming forward with suggestions. There is no way for me to try all of these before the bounty expires. For now I'm going to let this bounty run out, and whoever has the most votes will get the 100 rep points. Thanks again!
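
    Not an answer to the deduplication requirement, but as a baseline it may help to see the simplest native-Windows mirror; robocopy (built into Vista/2008, a free Resource Kit download for 2003) recopies whole changed files, which is exactly the cross-pipe cost described above. Paths and the log file are placeholders:

        REM Scheduled nightly: mirror site #1's backup volume to site #2.
        REM /MIR mirrors including deletions; /Z restarts interrupted copies.
        robocopy D:\Backups \\site2-server\Backups /MIR /Z /R:3 /W:10 ^
            /LOG:C:\Logs\site-replication.log

    Something with block-level differencing, such as DFS Replication's remote differential compression, seems closer to the stated requirement than whole-file copying.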

  • Using FastCGI for PHP on Mac OS X

    - by DanieL
    I have Apache 2 running on a Mac OS X (10.6) machine, and it is currently serving PHP pages fine using php5_module, but I would like to configure fastcgi_module to handle the PHP pages instead. I have tried the configuration found on www.fastcgi.com, but I get the following errors:

        [warn] FastCGI: (dynamic) server "/Path/to/script.php" has failed to remain running for 30 seconds given 3 attempts, its restart interval has been backed off to 600 seconds
        [warn] FastCGI: server "/usr/bin/php" has failed to remain running for 30 seconds given 3 attempts, its restart interval has been backed off to 600 seconds

    I'm thinking this is because PHP has not been compiled with FastCGI support, but seeing as it came with Mac OS X, I'm not sure how to recompile it. Is this the problem? And if so, how do I recompile PHP with FastCGI?
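
    Before recompiling anything, it may be worth checking which PHP SAPIs are already on the machine: /usr/bin/php is the CLI binary, which exits immediately when mod_fastcgi launches it, and that alone would produce the "failed to remain running" warnings. A hedged check (php-cgi may or may not ship with 10.6):

        # The CLI binary reports "(cli)" here and cannot serve FastCGI:
        /usr/bin/php -v

        # Look for a CGI/FastCGI build of PHP anywhere on the system:
        ls /usr/bin/php* /usr/local/bin/php*
        php-cgi -v 2>/dev/null   # reports "(cgi-fcgi)" if present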

  • Can't send using Postfix from an external IP address

    - by daniel
    I have Postfix set up as a satellite to listen on port 587. I can send email outside fine through the Postfix (Ubuntu) box from the local network with no problems. When I try to connect to the Postfix box from an external IP and send mail, it spits back a "554 5.7.1 Relay access denied" error. I can telnet to it fine; I just can't send mail. This is my main.cf:

        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no
        append_dot_mydomain = no
        readme_directory = no
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options =
        smtp_use_tls = no
        myhostname = cotiso-desktop
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = mydomainname.com, cotiso-desktop, localhost.localdomain, localhost
        relayhost = smtp.mydomainname.com
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        inet_protocols = all

    There is no security set up yet; I'm just trying to get it working first. Any ideas? Thanks in advance.
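
    A hedged reading of the config: the smtp_sasl_* lines only authenticate this box to its own relayhost; nothing allows external clients to authenticate to it, so Postfix correctly refuses to relay for them. A minimal sketch of the server-side half, assuming a SASL backend (Dovecot or Cyrus) is configured separately:

        # main.cf additions -- sketch only; the SASL backend itself
        # (smtpd_sasl_type, smtpd_sasl_path) still has to be set up.
        smtpd_sasl_auth_enable = yes
        smtpd_recipient_restrictions =
            permit_mynetworks,
            permit_sasl_authenticated,
            reject_unauth_destination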

  • Why is there a separate "unicorn_rails" for Rails apps?

    - by Ben Lee
    According to the Unicorn docs, there are different binaries for Rails apps and other Rack apps:

        non-Rails Rack applications
          In APP_ROOT, run: unicorn

        for Rails applications (should work for all 1.2 or later versions)
          In RAILS_ROOT, run: unicorn_rails

    They also seem to take the same command-line parameters. But Rails is built on top of Rack, so I don't understand why this dichotomy is required. Is there any reason you can't just use unicorn for Rails apps?
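
    For what it's worth, Rails 3 and later ship a config.ru, which is all plain unicorn needs; unicorn_rails mainly papers over older Rails versions that predate that Rack entry point. A hedged sketch of running a Rails 3+ app with the plain binary:

        # From the application root, where config.ru lives:
        unicorn -c config/unicorn.rb -E production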

  • CFEngine: perform an action based on a variable value

    - by Daniel
    In CFEngine, I have a variable that is set to the output of a command. Let's say the variable myoutput is set to "hi world". How can I execute a command based on the contents of myoutput? I would like to do something like this (pseudo-CFEngine code):

        bundle agent test {
        vars:
            "myoutput" string => execresult("echo 'hi world';","noshell");

        commands:
            myoutput=="hi world"::
                "/usr/bin/php myaction.php";
        }
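
    Commands in CFEngine can only be guarded by classes, not by inline comparisons, so one hedged way to express this is to turn the comparison into a class with strcmp(). A sketch in CFEngine 3 syntax (note that "noshell" requires a full path to the executable):

        bundle agent test
        {
        vars:
            "myoutput" string => execresult("/bin/echo 'hi world'", "noshell");

        classes:
            # true when the captured output is exactly "hi world"
            "output_matches" expression => strcmp("$(myoutput)", "hi world");

        commands:
            output_matches::
                "/usr/bin/php myaction.php";
        }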

  • HTC Android fails to mount - mount from computer?

    - by Ben Franchuk
    I have an HTC Incredible S (S-Off, rooted, ViperVIVO 1.3.0 ICS) that has seemingly lost the ability to mount its SD storage on my computer. Whenever I plug the device in to transfer files, the computer cannot actually access the phone. I get prompted with a window on the phone asking which mode to put the device into (charge mode, tether mode, etc.), and even if I select "Disk Drive", the phone still does not mount successfully. The phone itself unmounts the SD card and says the computer is connected, but again, it doesn't work. Is there any way to force-mount the device from my computer, either via a command or otherwise? If I unmount the SD from the phone, I should be able to mount it to my computer, from my computer, correct?
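
    Since the ROM is rooted, one hedged fallback that bypasses USB mass storage entirely is transferring files over adb, assuming USB debugging is enabled and the Android SDK platform tools are installed:

        # Confirm the phone is visible over the debug bridge:
        adb devices

        # Copy files without mounting the SD card as a disk drive:
        adb pull /sdcard/DCIM ./phone-backup
        adb push ./music /sdcard/music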

  • Why does Microsoft Windows' performance appear to degrade over time?

    - by Ben Aston
    Windows XP/2k3 and earlier (I can't attest to Vista, but I suspect it's the same) all appear to become more sluggish over time as applications are installed and uninstalled. This is not a scientifically tested observation, but more a piece of wisdom learned through experience. (I've always suspected the registry of being behind the issue.) Does anyone have any concrete evidence of this degradation occurring, or is it just an invalid perception of mine?

  • Arrow keys don't work in htop on OS X in Terminal

    - by Daniel Huckstep
    I'm not sure when it happened, or what I did (if anything), but my arrow keys no longer work in htop for scrolling around. You should be able to press up and down to scroll through the process list, but they don't work. Some keys act as the equivalent of 'go back' or something: if I'm on the settings screen, left, up, and down all go back to the main screen. htop seems to be the only affected program.
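
    A hedged first check, since arrow keys acting like 'go back' usually point at a terminal-type mismatch: compare what TERM is set to against what Terminal is actually emulating, and try forcing a common value for one run:

        # See what terminal type htop will be told about:
        echo $TERM

        # Try a known-good value temporarily:
        TERM=xterm htop
        TERM=xterm-color htop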

  • SpeedTracer NETWORK_RESOURCE_RESPONSE vs NETWORK_RESOURCE_FINISH

    - by Ben Flynn
    I'm using SpeedTracer with Google Chrome to measure the load times of requested resources. The SpeedTracer site says:

        NETWORK_RESOURCE_RESPONSE: "Indicates that the renderer has started receiving bits from the resource loader"
        NETWORK_RESOURCE_FINISH: "Indicates a resource load is successful and complete."

    In my mind that means we would always see a network resource response (bytes are arriving) before we see a finish (all bytes received). This doesn't seem to be the case at all. Here is a sample:

        Request Timing @33519ms for 926ms
        Response Timing @34445ms for -847ms
        Total Timing @33519ms for 78ms

    I'm guessing response time isn't supposed to be negative. Can someone explain this, or is this a bug? I'm using Chrome 10.0.612.3 dev with a SpeedTracer I downloaded today.

  • Windows repair console, impossible?

    - by Daniel
    I found an old Windows XP SP2 disc in my 'trash' CD can and tried installing it on a 30 GB FAT32 partition. Installation went fine until the file-copying operation completed and XP asked for a reboot. After that, it either starts the installation over again or throws an 'invalid disk' error. Starting over is an infinite loop; the only way out I see is to choose the Recovery Console, but I'm not used to a DOS box. Can anyone help me through this troublesome installation?
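
    Assuming the goal is to rescue the half-finished install from the Recovery Console, these are the standard XP recovery commands to try (a sketch; which one helps depends on what actually broke, and fixboot/fixmbr rewrite boot structures, so use them deliberately):

        chkdsk c: /r     (check the disk and recover readable sectors)
        fixboot c:       (rewrite the partition boot sector)
        fixmbr           (rewrite the master boot record)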

  • Recommendations for VMware web server environment with load balancer

    - by Ben
    We run IIS websites on a VMware production server that pull image and video content from a separate IIS instance on another server (the media server). The media calls are straight HTTP requests, not a streaming application. During peak traffic periods, we clone the production server five times and have a load balancer distribute traffic across all five production servers. The media server does not get scaled up, and we noticed that its CPU and other resources get very taxed during these periods. Would it make sense to run the media server's IIS instance locally on the production server and have it cloned along with the production servers, with a rule on the load balancer directing the media calls from the website? Or would it be better to allocate more resources (memory and CPUs) to the media server VM and not clone it with the production servers? Recommendations are sincerely appreciated.

  • Dangers of Running Computers w/o Air Conditioning

    - by Daniel Bingham
    I recently moved into an apartment without air conditioning. This is fine most of the time, as I am in upstate New York: it only gets above the high 70s (Fahrenheit) during the hottest summer months, and when it does, I'm stubborn enough to just deal with wearing minimal clothing around the house. However, I'm worried about my computers. I'm a software developer and gamer, so many of my machines are very high powered, and at least one of them is a server that must be left on 24/7 (not just a game server; it also serves multiple websites). I've never had to worry about heat before, as I always lived in buildings with central air where the indoor temperature rarely got much above 70F, and all of the machines I built had good enough air cooling that I never saw a problem. Now the indoor temperature is pushing 100F and I'm worried the machines will not be able to keep themselves cool enough by simply blowing already-hot air over themselves. The hottest of them I've turned off; however, I cannot turn off the server. It's an old Dell (not a custom build) running a Pentium 4 (2.2 GHz), with a single hard drive and integrated video, and it's not running any processor-intensive services, just a basic LAMP stack. It used to run a MUD server, but that's off for now, so it should be idling most of the time. I haven't been able to find any built-in temperature sensors in the hardware, at least not any that the programs I've found in the Debian repository can read, and it's an inherited machine for which I don't have the full specs, so I don't know the tolerances anyway. How worried should I be about it melting down on me? How worried should I be about the hard drive overheating or becoming corrupted? To generalize the question for other people: what are the safe temperature tolerances for most machines, how widely do they vary, and how does one determine when a machine is running too hot and needs to be shut down?
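
    For the sensor-reading part, a hedged sketch with the standard Debian tools; older Dells do not always expose motherboard sensors, but the drive's SMART temperature usually works:

        # Install the usual temperature tools:
        apt-get install lm-sensors hddtemp smartmontools

        # Probe for motherboard/CPU sensor chips, then read them:
        sensors-detect
        sensors

        # Read the hard drive's temperature via SMART:
        hddtemp /dev/sda
        smartctl -A /dev/sda | grep -i temperature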
