Search Results

Search found 46487 results on 1860 pages for 'reading files'.

Page 1626/1860

  • Windows Image Backup - renamed folder now restore cannot find any backups

    - by Schneider
    A while back I decided to create a couple of Windows Image Backups of my workstation at various points during a clean installation. While doing this I renamed the folders containing the VHDs from 'Backup <Date>' to something else of my choosing, and I didn't bother testing at the time that the restore still worked. Now I have come to use these backups for a bare metal restore to a different computer, and the problem is that restore cannot 'see' any of the backups. I have deduced that I probably need to rename the folders back to the 'Backup <Date>' pattern; unfortunately I cannot determine the exact values that would originally have been used. I have tried my best guess, but the images still cannot be found. I have tried both a network and a USB HDD restore, with no luck on either. P.S. I know I can retrieve files from within the VHDs; the problem is that I am trying to save myself the time of reinstalling lots of big applications, not trying to recover data.
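
    One hedged approach, assuming the restore wizard really does look for the original 'Backup <Date>' naming (the exact pattern below is an assumption, not something confirmed here): regenerate candidate names from each VHD's last-write time and rename the folders back, for example with PowerShell:

      Get-ChildItem 'E:\WindowsImageBackup\MYPC' | Where-Object { $_.PSIsContainer } | ForEach-Object {
          # take the newest VHD in the folder as the timestamp source
          $vhd = Get-ChildItem $_.FullName -Filter *.vhd | Sort-Object LastWriteTimeUtc | Select-Object -Last 1
          if ($vhd) {
              # assumed pattern: 'Backup YYYY-MM-DD HHMMSS' - verify against a freshly created test backup first
              Rename-Item $_.FullName ('Backup ' + $vhd.LastWriteTimeUtc.ToString('yyyy-MM-dd HHmmss'))
          }
      }

    Creating one fresh image backup on any machine and copying the folder-name format it produces would take the guesswork out of the pattern.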

    Read the article

  • Ubuntu in VirtualBox File Modified Time in Future and PHP slow file operations

    - by user1750
    For some reason, some of my files have a last modified date in the future. In addition to this, file operations in PHP are SUPER slow. For example, rebuilding the Symfony2 cache can take over 40 seconds (it takes 1-2 seconds on my MacBook Pro). The timestamp for ListingsCRUDController.php just says "2012"; in order to see the date more clearly I ran ls --time-style="full-iso" -l, and for some reason it shows that this file's last modified date is ~5 hours into the future. To make things more confusing, the system will intermittently speed up: suddenly my app will start serving requests in 1-2 seconds (down from 40 seconds) for no apparent reason. I don't do anything to my code or system config - it just changes. Also, during a slow PHP request, the php5-fpm process (nginx) uses 100% of the CPU for the duration of the request. This is the second VM this has happened on, and I need to know why it's doing this; it has become unusable. Information about my setup: VirtualBox 4.2.0; Host: MacBook Pro; Guest: Ubuntu Server 12.04; the dkms package is installed; timezones match for Ubuntu and PHP. Things I've tried: both Apache and Nginx; APC enabled and disabled; Xdebug enabled and disabled; 1 processor up to 4 processors; 1 GB memory up to 4 GB memory; installing Ubuntu with both the regular kernel and the VM kernel.
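
    This looks like guest clock drift under VirtualBox; a minimal sketch for confirming it (assuming GNU find, and that ntpdate is installed - neither is guaranteed above):

      # run `date -u` on both the host and the guest and compare the offset
      date -u
      # list files whose mtime is ahead of the current clock (GNU find)
      find . -newermt "now" -printf '%T+ %p\n'
      # force an immediate resync in the guest, then reinstall the Guest Additions so time sync stays active
      sudo ntpdate -u pool.ntp.org

    Future mtimes would also fit the intermittent speed-ups: caches that compare timestamps keep treating files as changed until the clock catches up with them.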

    Read the article

  • How to rescue data from an SD (SDHC) card that I can't reformat (possible hardware failure)

    - by sbwoodside
    I have a Transcend 16GB SDHC card and a lot of photos on it that I'd like to recover. When I plug it into the SD card reader, it takes a while for the Mac to even recognize that there's a disk present, and it shows up as 1.07GB with geometry 520/64/63 (according to fdisk). First I tried file recovery: PhotoRec finds no files (the images are in CR2 format and I'm using testdisk-6.14-WIP, which claims to recognize that format under TIF); dd / ddrescue create a 1.07GB image, same problem as above; TestDisk doesn't find any partitions to recover. I found a source saying that the correct geometry for this type of SD card is Heads 255, Sectors/Track 63, Cylinders 1953, so I tried manually setting that geometry in PhotoRec/TestDisk. No improvement. Next I tried formatting the disk with fdisk. After writing and quitting, I ran fdisk again and it reported that the new format hadn't been saved on the disk. I also tried resetting the format/partitions with TestDisk and that failed as well. The fdisk log is below. I don't really care about the card, I've already ordered a new SanDisk card. But I'd like to get the data off. Is there any way to force dd or some other tool to create an image of the disk based on the original geometry and not on what the card "thinks" its geometry is? Or am I missing something?
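
    A sketch of the usual image-first approach, with one caveat: if the card's controller now only exposes 1.07GB, no geometry override will bring back sectors it no longer reports, so imaging once and carving from the image is the realistic best effort (the device name and flags below are assumptions for a Mac with GNU ddrescue installed):

      # image the raw device once, retrying bad areas, then leave the card alone
      sudo ddrescue -d -r3 /dev/rdisk2 sdcard.img sdcard.log
      # carve CR2/TIF files out of the image rather than the card
      photorec sdcard.img
      # TestDisk can be pointed at the image too; the 255/63 geometry is set in its Geometry menu
      testdisk sdcard.img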

    Read the article

  • WIM2VHD failing with "Cannot derive Volume GUID from mount point."

    - by Jacob
    I'm trying to use WIM2VHD according to the instructions on Scott Hanselman's blog post to create a Sysprepped VHD image to boot from. I've installed the WAIK, and I have my Windows 7 sources mounted as a virtual drive. When I try to run WIM2VHD like this: cscript WIM2VHD.wsf /wim:F:\sources\install.wim /sku:Ultimate /vhd:E:\WindowsSeven.vhd /size:30721 I get the following log: Log for WIM2VHD 6.1.7600.0 on 11/2/2009 at 10:51:18.16 Copyright (C) Microsoft Corporation. All rights reserved. MACHINE INFO: Build=7600 Platform=x86fre OS=Windows 7 Ultimate ServicePack= Version=6.1 BuildLab=win7_rtm BuildDate=090713-1255 Language=en-ZA INFO: Looking for IMAGEX.EXE... INFO: Looking for BCDBOOT.EXE... INFO: Looking for BCDEDIT.EXE... INFO: Looking for REG.EXE... INFO: Looking for DISKPART.EXE... INFO: Session key is E01E1ED7-C197-4814-BDE4-43B73E14FCC4 INFO: Inspecting the WIM... INFO: Configuring and formatting the VHD... ******************************************************************************* Error: 0: Cannot derive Volume GUID from mount point. ******************************************************************************* INFO: Unmounting the VHD due to error... WARNING: In order to help resolve the issue, temporary files have not been deleted. They are in: C:\Users\Jacob\AppData\Local\Temp\WIM2VHD.WSF\E01E1ED7-C197-4814-BDE4-43B73E14FCC4 Summary: Errors: 1, Warnings: 1, Successes: 0 INFO: Done. Any ideas?
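
    If WIM2VHD keeps failing at the VHD-preparation step, the same result can be produced by hand with the stock Windows 7 / WAIK tools; a hedged sketch (drive letters are taken from the command above, and the image index is an assumption - check it with imagex /info):

      rem inside diskpart: create, attach and format the VHD, then mark it active
      diskpart
        create vdisk file="E:\WindowsSeven.vhd" maximum=30721 type=expandable
        select vdisk file="E:\WindowsSeven.vhd"
        attach vdisk
        create partition primary
        format fs=ntfs quick
        assign letter=V
        active
        exit
      rem apply the Ultimate image and put boot files on the new volume
      imagex /apply F:\sources\install.wim 5 V:\
      bcdboot V:\Windows /s V: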

    Read the article

  • Puppet and Vim fighting over Ruby version

    - by devians
    I have installed puppet from the .dmg from puppetlabs. If I remove ruby 1.9.3, puppet works, but other things like my vim install (dependent plugins) do not. According to http://docs.puppetlabs.com/guides/platforms.html#ruby-versions 1.9.3 is supported. So what's going wrong with puppet? % uname -a Darwin Kusanagi.local 11.4.2 Darwin Kernel Version 11.4.2: Thu Aug 23 16:25:48 PDT 2012; root:xnu-1699.32.7~1/RELEASE_X86_64 x86_64 % which ruby /usr/local/bin/ruby % ruby --version ruby 1.9.3p327 (2012-11-10 revision 37606) [x86_64-darwin11.4.2] % /usr/bin/ruby --version ruby 1.8.7 (2012-02-08 patchlevel 358) [universal-darwin11.0] % brew info ruby 1 ? ruby: stable 1.9.3-p327, HEAD http://www.ruby-lang.org/en/ Depends on: pkg-config, readline, gdbm, libyaml /usr/local/Cellar/ruby/1.9.3-p327 (796 files, 17M) * https://github.com/mxcl/homebrew/commits/master/Library/Formula/ruby.rb ==> Options --with-tcltk Install with Tcl/Tk support --with-suffix Suffix commands with "19" --universal Build a universal binary --with-doc Install documentation ==> Caveats NOTE: By default, gem installed binaries will be placed into: /usr/local/Cellar/ruby/1.9.3-p327/bin You may want to add this to your PATH. % puppet /usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require': cannot load such file -- puppet/util/command_line (LoadError) from /usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require' from /usr/bin/puppet:3:in `<main>'
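
    The traceback shows the Homebrew 1.9.3 interpreter running /usr/bin/puppet, while the .dmg presumably installed puppet's libraries for the Apple system Ruby (1.8.7). A low-risk sketch to confirm and work around that (the shebang assumption is worth checking first):

      # see which interpreter the wrapper asks for (likely `#!/usr/bin/env ruby`, which picks up Homebrew's ruby)
      head -1 /usr/bin/puppet
      # run puppet explicitly under the system ruby
      /usr/bin/ruby /usr/bin/puppet --version
      # or shadow the Homebrew ruby just for puppet runs
      PATH=/usr/bin:$PATH puppet --version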

    Read the article

  • RSA keys - virtual hosts

    - by Bosworth99
    Pardon my noobness, but I just got started with VPS (Linux) hosting; setting up passwordless SSH for multiple users has proved to be kind of a pain. Currently I'm the single user of this Ubuntu 10.04 LTS VPS (linode.com). I was able to establish a single RSA key under my /home/user/.ssh/authorized_keys location. Fine. PuTTY works as expected, and FileZilla (SFTP) links up as required. I've been working on a single site that this user owns, and that's not been a problem. Now I want to set up some other sites, and I've chosen Webmin with the Virtualmin plugin to make this work. I made another user (or, rather, Virtualmin did), but I've been unable to get FileZilla to link up as this new user. Could anyone with experience here explain what the setup is supposed to look like? I.e., can I use a single RSA key pair for all accounts (if, for example, I give ownership of files to the original user)? Or is it standard practice to create a separate key pair for each user, and establish a separate PuTTY/FileZilla login for each? I've spent enough time dinking around with this to be frustrated. The "Server rejected the provided key" error sucks after the fifth hour. I'm about to set up an FTP server and call it a day. Any thoughts would be most welcome.
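
    For what it's worth, the common pattern is one key pair per person, reused across as many server accounts as needed: the same public key is simply appended to each account's authorized_keys. A minimal sketch for the new Virtualmin user (the username and paths are assumptions):

      # on the VPS, as root
      mkdir -p /home/newsite/.ssh
      cat /home/originaluser/.ssh/authorized_keys >> /home/newsite/.ssh/authorized_keys
      chown -R newsite:newsite /home/newsite/.ssh
      chmod 700 /home/newsite/.ssh
      chmod 600 /home/newsite/.ssh/authorized_keys

    "Server rejected the provided key" is very often just these ownership/permission bits, or the key landing under the wrong home directory.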

    Read the article

  • Apache+PHP on Windows Server 2008

    - by Álvaro G. Vicario
    I've installed Apache/2.2 and PHP/5.3 lots of times under Windows XP, Windows Vista and Windows Server 2003. The official *.msi installers work fine and configure everything. Now I need to install them on a Windows Server 2008 R2 Standard 64-bit box and I'm facing nothing but problems: There are no official 64-bit binaries for Apache and no 64-bit binaries at all for PHP (official or third-party). It's alright, I'll make do with 32 bits, but it's kind of surprising. Official documentation is vague, generic and completely unaware of UAC or any recent Windows security feature. The PHP installer is unable to configure mod_php and the Apache installer is unable to configure... well, Apache. After three hours I've finally reached the point where I'm installing everything in the root folder and assigning full control access to all users on all files and directories, and all I've got is a PHP-less Apache server that's able to serve static pages. So I guess it's time to stop and think. My question is: Has anyone installed an Apache+PHP production server under Windows Server 2008 in a serious, secure and reliable way and documented the whole process? Or should I just find a bundle like XAMPP and the like that requires no installation? === EDIT === I've installed XAMPP Lite 1.7.3 and everything was working in 5 minutes. I'd still like to find some documentation about installing the original packages: XAMPP installs tons of stuff I don't need and offers no tool to enable and disable PHP extensions.
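
    For reference, wiring PHP into Apache by hand on Windows is only a few httpd.conf lines once the thread-safe 32-bit PHP zip is unpacked; a sketch assuming PHP lives in C:\php (the paths are assumptions):

      LoadModule php5_module "C:/php/php5apache2_2.dll"
      AddHandler application/x-httpd-php .php
      PHPIniDir "C:/php"
      DirectoryIndex index.php index.html

    Running the Apache service under a dedicated account and granting that account read access to C:\php and the document root avoids the blanket full-control ACLs described above.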

    Read the article

  • Exim 4 Virtual Domains and Catchall on Debian (Squeeze)

    - by parazuce
    Hello, I've been at it for about 4 hours now, searching as well as trying different tutorials. Here's my setup: I have 2 domains, both under my own DNS server (MX records set up as well). I have exim4 successfully running, and it is able to send messages from both of those domains. I have tested this using sendmail, and manually setting the "From" attribute. Exim successfully delivers mail to users no matter which domain was specified. I'm fine with that, but I'm having an issue editing virtual domains and adding custom delivery options (such as a catch-all). I've been searching for about 4 hours, and I can't find any up-to-date documentation on how to do this. The old method would be to add a line such as: domainlist local_domains = @:localhost:dsearch;/etc/exim4/virtual Once that line was added, I made a directory at /etc/exim4/virtual, then created files inside such as example.com which would then contain rules for delivery under that domain. This did not work, however. Searching further, I've found that exim no longer supports dsearch (I guess because they claim it never has?) This is where I'm stuck. I'm on a "split" configuration as well.
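
    One hedged sketch of the router-based approach for a split config (syntax taken from the classic exim4 virtual-domain write-ups, not verified against this exact Debian build; `exim4 -bV` lists which lookup types, including dsearch, were compiled in):

      # /etc/exim4/conf.d/router/350_exim4-config_vdom_aliases
      vdom_aliases:
        driver = redirect
        allow_defer
        allow_fail
        domains = dsearch;/etc/exim4/virtual
        data = ${lookup{$local_part}lsearch*{/etc/exim4/virtual/$domain}}
        no_more

      # /etc/exim4/virtual/example.com  -- per-domain aliases, '*' acts as the catch-all
      info: someone@example.com
      *:    catchall@example.com

    After adding the router, regenerate and restart with update-exim4.conf && service exim4 restart.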

    Read the article

  • Postfix: change sender in queued messages

    - by ring0
    Following a complete re-installation we got a problem with the configuration: the sender address was wrong and some recipients (mail servers) rejected them. So there is a bunch of mails stuck in the Postfix queue. Ideally, a change of the sender address directly in the queued mails, and then flushing the queue would be optimal. I tried this answer that addresses this very problem. But messages don't seem to be easily modifiable in the version I have (2.11.0). For instance there is no /var/spool/mqueue dir, but, instead, /var/spool/postfix/... active bounce corrupt defer deferred dev etc flush hold incoming lib maildrop pid private public saved trace usr and the dir of interest is deferred. I tried to modify a few files there changing the wrong domain with the correct one (and was careful to ensure only those were changed). But then, those mails were moved to corrupt, meaning that a simple text change doesn't seem to work (done with vi). Any other cleaner way to change the sender in queued mails?
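
    Rather than editing queue files by hand (they are record-structured rather than plain text, which is likely why the vi edits landed the messages in corrupt), one approach is to let cleanup rewrite the sender when the mail is requeued, via sender_canonical_maps; a sketch with hypothetical domain names:

      # /etc/postfix/sender_canonical  (regexp table)
      /^(.*)@wrong\.example$/    ${1}@correct.example

      # main.cf
      sender_canonical_maps = regexp:/etc/postfix/sender_canonical

      # apply the change and push the deferred mail back through cleanup
      postfix reload
      postsuper -r ALL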

    Read the article

  • Can I have a single solid state drive and a RAID array on the same machine?

    - by jaminto
    Hi. To summarize, I'm looking to use a single solid state drive as my primary drive, and two conventional SATA drives in a RAID 1 configuration for data. I am trying to install 64-bit Windows 7 onto this configuration. Is this possible? Here are the details: I built a desktop that has been running 64-bit Vista on two 500GB drives in a RAID 1 array for a few years. I just purchased an Intel X25-M 80GB SATA solid-state drive, and was planning on using this as my primary drive and keeping the RAID 1 array as my data drive. I added the SSD and, in the RAID setup, configured it as a RAID 0 array of only one disk. Then I tried to do a clean install of Windows 7 64-bit, but got stuck in the "Missing driver for CD/DVD drive" black hole of selecting driver files and Windows telling me that I don't have the appropriate driver for my hardware. The missing hardware is NOT a CD/DVD drive, since I'm installing off of my only CD/DVD drive. Plus, at one point I was able to point it at a driver for my RAID controller, and then my hard drives magically showed up as browsable sources for finding drivers for some other unnamed device that setup couldn't recognize. After a few hours of trying drivers (this was a very slow process) I decided to reboot and look at the BIOS settings. I'm using an ASUS M2A-VM motherboard which has an ATI SB600 RAID controller on board. I switched the "On board SATA Type" setting from "SATA" to "AHCI", thinking that since AHCI is an Intel thing, this would help. Unfortunately, this abandoned my RAID configuration, and my previously mirrored drives are showing up as separate drives when I boot into my current Windows installation. Am I trying to do the impossible here? Should I just buy a separate SATA/RAID PCI card and plug the SSD into that? Any help would be greatly appreciated.

    Read the article

  • Can I make two wireless routers communicate using the wireless?

    - by Dana Robinson
    I want to make a setup like this: cable modem <-cable- wireless router 1 <-wireless- wireless router 2 in another room <-cables- PCs in another room Basically, I want to extend my network access across the house and then have a bunch of network jacks available for my office PCs. Right now, I have a cable modem going to a wireless router in one room and a PC with a wireless PCI card in it in the office on the other side of the house. I use internet connection sharing with the other PCs in the office. The problem is that ICS is flaky, especially when I switch to VPN on the Windows box to access files at work. I picked up a wireless USB adapter that I thought I could share among the PCs I work on but I'm not very happy with it so I'm going to return it (NDISwrapper support for it is poor). Is this possible? My wireless experience so far has been pretty straightforward so I have no idea what kind of hardware is available. I've looked at network extenders but those just look like repeaters for signal strength. I want wired network jacks in my office.

    Read the article

  • Google Play Music Not Adding MP3s On-Demand

    - by J0e3gan
    My recent attempts to add music on-demand to Google Play Music have yielded nothing - no "Processing music..." or "Added __ of __" messages, just nothing. Previously I could add music on-demand; and nothing has changed on the machine from which I successfully added music previously, from which I have tried to add music on-demand recently. What could be hampering my ability to add music on-demand? WHAT I'VE TRIED: Right after I started using GPM, I briefly found that I could not add music (on-demand), but the problem went away after a logout/login. This time a logout/login has not helped. Dragging & dropping or browsing to folders or files to add has made no difference either. Nor has waiting ridiculously long for GPM to show signs of life after adding music on-demand seemed to work. Digging deeper, I read a related Google Play Help article and followed its suggestions... ran the Google Play Music Manager troubleshooter = no errors or warnings double checked my available storage = 8 GB free double checked supported file types = MP3 is still supported (of course) ..., but the problem remains. UPDATE: I found that if I configure GPM to automatically upload music added to specific folders, it strangely does add automatically what it will not add on-demand.

    Read the article

  • WGet or cURL: Mirror Site from http://site.com And No Internal Access

    - by alharaka
    I have tried wget -m, wget -r, and a whole bunch of variations. I am getting some of the images on http://site.com, one of the scripts, and none of the CSS, even with the fscking -p parameter. The only HTML page is index.html and there are several more referenced, so I am at a loss. curlmirror.pl on the cURL developers' website does not seem to get the job done either. Is there something I am missing? I have tried different levels of recursion with only this URL, but I get the feeling I am missing something. Long story short, some school allows its students to submit web projects, but they want to know how they can collect everything for the instructor who will grade it, instead of him going to all the externally hosted sites. UPDATE: I think I figured out the issue. I thought the links to the other pages were in the index.html page that downloaded. I was way off. It turns out the footer of the page, which has all the navigation links, is handled by a JavaScript file, Include.js, which reads JLSSiteMap.js and some other JS files to do page navigation and the like. As a result, wget does not pick up any of the other dependencies, because a lot of this crap is not handled in the web pages themselves. How can I handle such a website? This is one of several problem cases. I assume little can be done if wget cannot parse JavaScript.
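
    For the record, a fuller mirroring invocation than -m or -r alone looks something like the sketch below (the flag set is a suggestion; it still will not discover links that only exist after JavaScript runs):

      wget --mirror --page-requisites --convert-links -E \
           --span-hosts --domains=site.com -e robots=off http://site.com/
      # JS-generated navigation (Include.js / JLSSiteMap.js) has to be handled separately:
      # either feed wget an explicit URL list (-i urls.txt) or render the pages with a headless browser first.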

    Read the article

  • ghettoVCB issue

    - by romgo75
    I have setup a ghettoVCB script in order to backup three VM. I put it in a crontab but I have an issue. In my backup folder I have 3 different folders, one for each VM. In each folder I have the following files: -rw-r--r-- 1 root root 1263 Mar 17 01:51 vm1-2010-03-16--2.gz -rw-r--r-- 1 root root 1263 Mar 17 00:41 vm1-2010-03-16--3.gz -rw-r--r-- 1 root root 1261 Mar 18 01:22 vm1-2010-03-17--1.gz drwxr-xr-x 1 root root 980 Mar 19 23:39 vm1-2010-03-19 The problem is the last folder. It seems that a backup didn't finish the process. When I read the logs concerning this folder I get: 2010-03-19 23:00:01 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/datastore1/backup/ 2010-03-19 23:00:01 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 3 2010-03-19 23:00:01 -- info: CONFIG - DISK_BACKUP_FORMAT = zeroedthick 2010-03-19 23:00:01 -- info: CONFIG - ADAPTER_FORMAT = buslogic 2010-03-19 23:00:01 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0 2010-03-19 23:00:01 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0 2010-03-19 23:00:01 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 3 2010-03-19 23:00:01 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5 2010-03-19 23:00:01 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15 2010-03-19 23:00:01 -- info: CONFIG - LOG_LEVEL = info 2010-03-19 23:00:01 -- info: CONFIG - BACKUP_LOG_OUTPUT = stdout 2010-03-19 23:00:01 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0 2010-03-19 23:00:01 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0 2010-03-19 23:00:01 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all http://... 2010-03-19 23:39:35 -- info: Initiate backup for vm1 2010-03-19 23:39:35 -- info: Creating Snapshot "ghettoVCB-snapshot-2010-03-19" for vm1 Destination disk format: VMFS zeroedthick Cloning disk '/vmfs/volumes/datastore1/vm1/vm1_1.vmdk'... ^MClone: 0% done.^MClone: 1% done.^MClone: 2% done.^MClone: 3% done.^MClone: 4% done.^MClone: 5% done.^MClone: 6% done.^MClone: 7% done.^MClone: 8% done.^MClone: 9% done.^MClone Failed to clone disk : The file already exists (39). Destination disk format: VMFS zeroedthick Cloning disk '/vmfs/volumes/datastore1/vm1/vm1.vmdk'... 2010-03-20 00:46:20 -- info: Removing snapshot from vm1 ... one: 7% done.^MClone: 8% done.^MClone: 9% done.^MClone: 10% done.^MClone: 11% done.^MClone: 12% done.^MClone: 13% done.^MClone: 14% done.^MClone: 15% done.^MClone: 16% done.^MCl 2010-03-19 23:51:19 -- info: Removing snapshot from vm1 ... I can't run ghettoVCB anymore because the VM has a snapshot which has not been deleted. I know how to delete the snapshot, but I don't know why the VCB script is not able to handle rotation of the VM backups? Any ideas? Thanks!
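
    To get back to a clean state before the next run, the stale ghettoVCB snapshot and the half-written destination folder both need to go; a sketch for the ESXi console (vim-cmd availability and the exact paths are assumptions based on the log above):

      vim-cmd vmsvc/getallvms                      # find vm1's numeric Vmid
      vim-cmd vmsvc/snapshot.removeall <Vmid>      # drop the leftover ghettoVCB-snapshot
      # remove the partial backup directory so the clone no longer hits "file already exists"
      rm -rf /vmfs/volumes/datastore1/backup/vm1/vm1-2010-03-19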

    Read the article

  • Hyper-V Guests Dying

    - by Jon Rauschenberger
    I just hit my THIRD instance of a Hyper-V guest machine dying with the exact same behavior. In all three instances we are hosting WS2008 guests on a WS2008 host. After a config change, we reboot the guest and the guest OS comes up, but in a very crippled state. Specifically, we are able to log into the guest, but can't launch any apps, and the guest never becomes active on the network. I opened a support ticket with MS the second time this happened and they focused on the DCOM subsystem not coming up... the best explanation they could provide was that permissions on key system files got corrupted. I eventually gave up on the ticket after close to 10 hours on the phone trying different things that were going nowhere. What really concerns me is that we have now seen the exact same thing happen to a guest hosted on a completely different host machine. There is zero hardware overlap between the two. Has anyone seen this before? It's really odd behavior, but it also seems like there's a pattern here that's concerning me. Thanks, Jon

    Read the article

  • Stream video file in Debian?

    - by Rob
    I've tried ffserver with ffmpeg, I've tried VLC, and I'm not sure what else to try or what I've done wrong. With VLC I've gone through and tried everything I could in the streaming section, but I can't get the stream to actually work. For reference: +-[ robert@s10 ]--[ ~ ] +[#!]¬ vlc --version VLC media player 2.0.0 Twoflower (revision 2.0.0-0-g421a4fc) VLC version 2.0.0 Twoflower (2.0.0-0-g421a4fc) Compiled by buildd on biber.debian.org (Mar 1 2012 22:21:37) Compiler: gcc version 4.6.2 (Debian 4.6.2-14) This program comes with NO WARRANTY, to the extent permitted by law. You may redistribute it under the terms of the GNU General Public License; see the file named COPYING for details. Written by the VideoLAN team; see the AUTHORS file. Looking around, apparently Debian strips the encoders from the package? I want to share some videos I've made with friends on IRC, and it would be easiest if I could just stream them so we can all watch at the same time and critique parts in real time. Has anyone done something similar? Linux s10 3.2.0-2-686-pae #1 SMP Tue Mar 20 19:48:26 UTC 2012 i686 GNU/Linux Basic home network: I am behind a NAT (192.168.1.*) and have dynamic DNS set up. That doesn't really matter too much, I can figure that out, but it's not even working locally. I have a file server set up and could just share the files that way, but I'd rather have everyone watching at the same time (or just about). Not worried about installing new packages or building something from source, that's not a big issue, I just want to get it working. Big plus if I can do it from the command line.
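
    A minimal HTTP-streaming sketch with VLC, assuming the Debian build still ships the needed encoder and muxer modules (if transcoding fails for that reason, dropping the transcode stage or installing a fuller VLC build are the usual ways out):

      # serve the file over HTTP on port 8080; viewers open http://your.dyndns.host:8080 in VLC
      cvlc myvideo.mp4 --sout '#transcode{vcodec=h264,vb=1200,acodec=mp3,ab=128}:standard{access=http,mux=ts,dst=:8080}'
      # without re-encoding (works when the source codecs already fit in an MPEG-TS mux):
      cvlc myvideo.mp4 --sout '#standard{access=http,mux=ts,dst=:8080}'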

    Read the article

  • Serving images from another hostname vs Apache overload for the rewrites

    - by luison
    We are trying to further improve the speed of some sites with older HTML, as well as to obtain better SEO results. We have now applied some minify measures, combined HTML, CSS, etc. We use a small virtualized infrastructure and we've always wanted to use a light + standard HTTP server configuration, so the first one can serve images and static content while the other one handles PHP, rewrites, etc. We can easily do that now with a VM using the same files and vhost configuration (bind mounts) on Apache, but with hardly any modules loaded. This means the light httpd will have a smaller footprint, which would allow us to serve more and quicker, have more MinSpareServers running, etc. So, as browsers also benefit from loading static content from different hostnames, we've thought about building a rewrite rule on our main server (main.com) to "redirect" all images and CSS (*.jpg, *.gif, *.css, etc.) to the same files at, say, cdn.main.com, thus allowing the browser to open more connections. The question is, assuming we have a very complex rewrite ruleset already (we manually manipulate many old URLs for SEO), will it be worth it? I mean, will the additional load on main's Apache of having to redirect main.com/image.jpg (I understand we'll have to do a 301) to cdn.main.com/image.jpg, plus cdn.main.com then having to serve it, be larger than the gain we would be achieving in the browser? Could the excess of 301s for all the images on a page be penalized by Google? How do large companies work this out; does the original code already include images linked from the CDN with absolute paths? EDIT: Just to clarify, our concern is not so much with server performance or bandwidth. We could obviously employ an external CDN server, but we have plenty of CPU and bandwidth. Our concern is with how to have "old" sites with plenty of semi-static HTML content benefit from splitting connections for images and static content via Apache, without having to change the HTML to absolute paths (i.e. image.jpg to cdn.main.com/image.jpg happening on the server, not in the code).
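
    A sketch of the redirect variant being described, with mod_rewrite on main.com (the hostnames are the ones used above):

      RewriteEngine On
      RewriteCond %{HTTP_HOST} ^main\.com$ [NC]
      RewriteRule \.(jpe?g|gif|png|css|js)$ http://cdn.main.com%{REQUEST_URI} [R=301,L]

    Worth weighing: every asset then costs an extra round trip for its 301, which tends to cancel out the extra-connections benefit. Serving cdn.main.com directly from the lightweight vhost (same DocumentRoot via the bind mounts) and rewriting the URLs in the outgoing HTML once, for example with mod_substitute, avoids the per-request redirect without touching the stored code.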

    Read the article

  • Update all the servers through one virtual server using a storage area network virtual machine

    - by Mr.Calm
    I am using Ubuntu and VirtualBox by Oracle, and I use this script to start nginx in a virtual machine, placed inside ~/init.d: #!/bin/bash ### BEGIN INIT INFO # Provides: Testinit # Required-Start: # Required-Stop: # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: Start daemon at boot time # Description: Enable service provided by daemon. ### END INIT INFO # RETVAL=0; start() { CurrentTime=$(date +%d/%m/%Y"-"%I:%M:%S) /usr/local/nginx/sbin/nginx echo "Current Time:"$CurrentTime>>/home/server/Desktop/NginxLogs.txt echo "!Starting nginx!" >>/home/server/Desktop/NginxLogs.txt Like this I want to write an auto-setup script (setup.sh) and place it in all the virtual machines on my system - for example 8 virtual machines, all with nginx installed. The problem is that when I want to change something in setup.sh, I have to go to each and every virtual machine, or connect to each virtual machine over SSH from my main machine. I am thinking of writing another script (e.g. Update.sh) which is given the path of a file that is saved and recently edited on the main machine (e.g. DummySetup.sh); as soon as I run that script, every setup.sh stored in each virtual machine should be updated or replaced with DummySetup.sh's contents. I hope this is possible. Help would be appreciated. Thank you.
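
    A small push-style sketch for the Update.sh idea (the IP addresses, user and destination path are assumptions; key-based SSH from the main machine to each guest makes it non-interactive):

      #!/bin/bash
      # Update.sh - copy the master DummySetup.sh over every guest's setup.sh
      SRC=/home/server/DummySetup.sh
      HOSTS="192.168.56.101 192.168.56.102 192.168.56.103"   # the 8 guest IPs go here
      for h in $HOSTS; do
          scp "$SRC" root@"$h":~/init.d/setup.sh && ssh root@"$h" "chmod +x ~/init.d/setup.sh"
      done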

    Read the article

  • A complicated nginx/php-fpm chroot setup

    - by Rsaesha
    I'm running nginx and php-fpm, and I want to set up jails for each host. My setup is a little complicated, so following tutorials on the web gets me nowhere. Each site has a directory /var/www/domain.name/. Inside that directory there will be a public/ directory which will be the website root, a logs/ directory which will store nginx logs for that site specifically, and the chroot filesystem (etc/, usr/, etc.). The first problem I've run into is that no matter how I configure it, PHP-FPM cannot find the files that are passed to it via nginx. They result in a "Primary script unknown" error, and to make matters worse, the error messages from PHP-FPM are no more verbose than that, so I can't figure out what path is being passed by nginx. A php-fpm pool configuration for a host looks like this: [host] user = host group = www-data chroot = /var/www/domain.name chdir = /public listen = 127.0.0.1:900x 'x' is incremented for each pool. The nginx config for this host looks like this: server { listen 80; server_name domain.name *.domain.name; root /var/www/domain.name/public; index index.php index.html; location ~ \.php$ { expires epoch; fastcgi_split_path_info ^(.+\.php)(/.+)$; include fastcgi_params; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_pass 127.0.0.1:9001; } } I'm guessing that the problem is the SCRIPT_FILENAME parameter, but I've changed it to just $fastcgi_script_name, and various other combinations, but to no avail. Can anyone help?
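
    One detail that commonly causes exactly this symptom: with chroot = /var/www/domain.name, the path PHP-FPM receives must be relative to the chroot, while $document_root is the full host-side path. A sketch of the adjusted location block (everything else as in the question):

      location ~ \.php$ {
          include fastcgi_params;
          fastcgi_index index.php;
          # inside the jail the site root is /public, so strip the /var/www/domain.name prefix
          fastcgi_param SCRIPT_FILENAME /public$fastcgi_script_name;
          fastcgi_pass 127.0.0.1:9001;
      }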

    Read the article

  • MySQL fails to start

    - by John Naegle
    I'm running an Ubuntu 12.04 LTS virtual machine. Last week the VM stopped unexpectedly, and now MySQL will not start on it. These two events may be related, or they may not be. When I try to connect: $ mysql ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) Then: $ sudo service mysql start start: Job failed to start And $ dmesg [ 1838.218400] type=1400 audit(1374633238.253:50): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/mysqld" pid=18473 comm="apparmor_parser" [ 1838.358656] init: mysql main process (18477) terminated with status 1 [ 1838.358695] init: mysql main process ended, respawning [ 1839.269303] init: mysql post-start process (18478) terminated with status 1 And $ service mysql status mysql stop/waiting I think this means mysql is crashing when it starts: $ sudo mysqld start 130723 21:51:24 InnoDB: Assertion failure in thread 3064211200 in file fut0lst.ic line 83 InnoDB: Failing assertion: addr.page == FIL_NULL || addr.boffset >= FIL_PAGE_DATA InnoDB: We intentionally generate a memory trap. InnoDB: Submit a detailed bug report to http://bugs.mysql.com. InnoDB: If you get repeated assertion failures or crashes, even InnoDB: immediately after the mysqld startup, there may be InnoDB: corruption in the InnoDB tablespace. Please refer to InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html InnoDB: about forcing recovery. 02:51:24 UTC - mysqld got signal 6 ; Per the manual, I went to the data directory (/var/lib/mysql) and ran this: myisamchk --silent --force */*.MYI Then: $ sudo mysqld ... InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html InnoDB: for more information. ... Is my database corrupt? What can I do to recover? Re-install mysql? Something less drastic? I'm fine with losing the database, I just want a working system.
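
    The page the error points to boils down to starting mysqld with InnoDB forced recovery, dumping what can be read, and rebuilding the tablespace; a sketch (the option-file location is an assumption for Ubuntu 12.04):

      # /etc/mysql/conf.d/force_recovery.cnf  (temporary)
      [mysqld]
      innodb_force_recovery = 1    # raise one step at a time, up to 6, until mysqld stays up

      sudo service mysql start
      mysqldump --all-databases > /root/all-databases.sql
      # then remove the option, move ibdata1 and ib_logfile* aside, reinitialise MySQL and restore the dump

    If the data really is expendable, removing /var/lib/mysql and reinstalling the mysql-server package is the blunt but quick alternative.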

    Read the article

  • Recommendations for secure business collaboration tools

    - by Michael Prescott
    I'm searching for a secure and easy way for business partners to collaboratively edit and exchange documents, share calendars, create schedules, and assign tasks. I speculate that the ideal collaboration environment or work-flow would actually involve several technologies and services. My co-workers and I have tried a variety of things from Google Apps to Wiki's, but nothing feels very fluid or complete. I suppose defining what we need and our constraints is probably in order: collaboratively edit basic text documents and spreadsheets exchange documents like flow-charts, graphs, and files generated by our other desktop applications, but not source code assign tasks to each other and ourselves and track the history of those tasks easily see when relevant documents have been modified since last viewing and ability to easily push notifications to relevant workers (a clean front page that shows updates would probably suffice) provide limited access to contract workers and guests users if a remote user system is compromised (keystroke logger or other spyware) we don't want the criminal to be able to gain access to all business documents (processes, trade-secrets, customer lists, etc.) simply because they gained access to a single Google account (or whatever web service) Cannot be a difficult to administer VPN infrastructure Cannot cost more than $100 per month (yeah, money is tight) Needs to support up to 25 users We can host our own web applications, but it must be low maintenance solution

    Read the article

  • PXE boot -- kernel not found on TFTP server

    - by user70523
    I followed the following link for PXE boot, http://www.howtoforge.com/setting-up-a-pxe-install-server-on-ubuntu-9.10-p3 and I was able to ping the client from the server and also when I booted up the client It is getting the IP address from the server. But later,I got this error PXELinux 3.82 2009-06-09 . . . [other informations] !PXE Entry point found (we hope) at 9D3B:0109 via plan A UNDI code segment at 9D3B len 16C2 UNDI data segment at 933B len A000 Getting cached packet 01 02 03 . . . [other informations] TFTP prefix: Trying to load: pxelinux.cfg/ec5db4c0-74fe-d511-b9e7-3d9235afe5a1 Trying to load: pxelinux.cfg/01-00-17-31-b6-5e-a8 Trying to load: pxelinux.cfg/0A64491E Trying to load: pxelinux.cfg/0A64491 Trying to load: pxelinux.cfg/0A6449 Trying to load: pxelinux.cfg/0A644 Trying to load: pxelinux.cfg/0A64 Trying to load: pxelinux.cfg/0A6 Trying to load: pxelinux.cfg/0A Trying to load: pxelinux.cfg/0 Trying to load: pxelinux.cfg/default Unable to locate configuration file Boot failed: press a key to retry or wait for reset I have put all the files mentioned in the link in tftpboot. Can anyone explain what could be the problem. Thanks in advance
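
    The fallback list ends at pxelinux.cfg/default, so that file is what is missing (or unreadable) under the TFTP root. A minimal sketch of it, with paths matching the Ubuntu netboot layout only as an assumption:

      # <tftp-root>/pxelinux.cfg/default
      DEFAULT install
      LABEL install
          KERNEL ubuntu-installer/i386/linux
          APPEND initrd=ubuntu-installer/i386/initrd.gz

      # quick test from any client on the LAN:
      tftp <server-ip> -c get pxelinux.cfg/default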

    Read the article

  • Convert raw IMAP server data into local folders, then upload partial dataset to new IMAP server?

    - by Manca Weeks
    I am transitioning a company with about 30 IMAP accounts, loaded with data (about 77GB total), to a new email host. The majority of the data will be converted into a local archive and distributed to the company computers as a static reference data set. The server-side folders that the users absolutely cannot do without having on the server will be uploaded back to the new server. I used Mac OS X Mail (Snow Leopard 10.6.6) to download the content. I notice some messages have the name [xxx].partial.emlx, which leads me to believe they have not been downloaded all the way. I have root access to the mail server data and could download the IMAP server data via FTP. I am not sure what utility to use to convert that data to local Mail.app mailboxes. Furthermore, I would appreciate any input on the best way to upload a portion of the data to the new server (GoDaddy), preserving the original dates of the messages. EDIT: OK - forget the raw server data. I found a script that apparently does a pretty good job of archiving IMAP folders to local mbx files. My main quest now is to batch upload a mailbox hierarchy to the new IMAP server without having to start and stop and deal with similar issues. Anyone know of a utility (hopefully for OS X, but if not, I'll fire up my XP virtual system...) that would be capable of this? Thanks, M
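
    For the selective upload with original dates preserved, imapsync is the tool most often reached for; a hedged sketch (hostnames and folder names are placeholders, and per-account credentials would come from a simple loop or CSV):

      imapsync --host1 imap.oldhost.example --user1 jsmith --password1 'secret1' \
               --host2 imap.newhost.example --user2 jsmith@company.example --password2 'secret2' \
               --folder INBOX --folder 'Clients/Active'

    imapsync keeps each message's internal date by default, and as a Perl script it runs on OS X without a Windows VM.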

    Read the article

  • Windows Server 2008 RAID10

    - by JT
    Hello all, I am building a storage system for myself. I have a 16-bay SATA chassis, and right now I have 1 x 500GB SATA drive for booting and 8 x 1.5TB drives for data, connected to a 3Ware 9500S-8 RAID card. I am used to Linux, but not in the RAID department. I have Windows experience too. What I am looking for is something that I can just let sit, that is reliable, and that I can use for other items as well (like running test websites, Apache, MySQL, etc.). This box is private on a Class C subnet. My thought is to at least consider Windows Server 2008; I especially like the potential for non-GUI mode. Can Windows Server 2008 do software RAID 10 out of the box? Does software RAID give better performance, and is it better in case the RAID needs to be moved to another machine? I just want to SCP files, so can I have OpenSSH running on it? Can one install the GUI, but not use it unless they get in a bind? Is Windows a good idea, or should I stick to a Linux software RAID or FreeBSD + ZFS?

    Read the article

  • How do I host multiple independent, secured SharePoint sites (WSS 3.0) without using Active Directory on the same server?

    - by Kyle Noland
    I have a SharePoint site set up on one of my networks to service Active Directory users. To be clear, this is a Windows SharePoint Services 3.0 installation running on Windows Server 2003 Standard. It is not an option to upgrade the server or SharePoint version. Management would like to create several new sites, one for each of a handful of clients. These sites will be used like "dropboxes" or FTP sites so that my company can make large files available to outside contacts, and vice versa. Here are my requirements: I do not want to have to create Active Directory accounts for each external contact. If possible, I would like to store the external usernames and passwords in a database that I can write a small GUI for so that management can handle adding their own external contacts. Each client site must be sandboxed from each other and from my main company SharePoint site. I would like to keep everything running on port 80 and be able to access the sites as either clientname.mycompany.com or www.mycompany.com/clientname If anybody has ever done this I would really appreciate hearing about any lessons you learned and suggestions for how to set this up. Kyle

    Read the article
