Search Results

Search found 29043 results on 1162 pages for 'jquery tools'.


  • Dedicated virtual setup is slow with WordPress

    - by kovshenin
    Hey. I'm running a Fedora Linux server on the Amazon EC2 platform, and I'm pretty sure there's something wrong with my configuration, as everything seems very slow. SSH sometimes takes over 30 seconds to connect, and a WordPress-generated web page can take anywhere from 5 to 20 seconds to load, which is pretty awkward. MySQL queries all execute in under a second, so I don't think the database is the problem. I'm not really sure where the issue lies: a simple page written in plain PHP loads instantly, but a fresh WordPress installation starts lagging. The same site works perfectly on grid hosting at MediaTemple, for instance, so I'm pretty sure I missed something. Could you please direct me to the right tools and articles that would help me track this down? Thanks so much! Fedora Core 8, PHP 5.2.6, MySQL 5.0.45, OpenSSH 4.7p1, OpenSSL 0.9.8b. PHP is configured as a module for Apache 2.2.9, and all websites are based on virtual hosts. I have some ongoing PHP scripts running from time to time in the background via cron. Thanks.
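
    A quick way to narrow this down is to time the DNS lookup, the TCP connect, and the full page fetch separately; if the connect phase is slow for SSH and HTTP alike, the problem is below WordPress. A minimal sketch (Python 2 era; the hostname is a placeholder):

        import socket, time, urllib2

        host = "example.com"  # placeholder: your EC2 host

        t0 = time.time()
        ip = socket.gethostbyname(host)              # DNS lookup
        t1 = time.time()
        s = socket.create_connection((ip, 80))       # TCP connect
        t2 = time.time()
        s.close()
        urllib2.urlopen("http://%s/" % host).read()  # full page fetch
        t3 = time.time()

        print "dns=%.2fs connect=%.2fs fetch=%.2fs" % (t1 - t0, t2 - t1, t3 - t2)

    If SSH alone stalls for ~30 seconds, reverse DNS on the server (sshd's UseDNS option) is a common culprit; that is only an assumption, but one worth ruling out first.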


  • Multiple *NIX Accounts with Identical UID

    - by Tim
    I am curious whether there is a standard expected behavior, and whether it is considered bad practice, when creating more than one account on Linux/Unix that have the same UID. I've done some testing on RHEL5 with this and it behaved as I expected, but I don't know if I'm tempting fate using this trick. As an example, let's say I have two accounts with the same IDs:

        a1:$1$4zIl1:5000:5000::/home/a1:/bin/bash
        a2:$1$bmh92:5000:5000::/home/a2:/bin/bash

    What this means is:

    - I can log in to each account using its own password.
    - Files I create will have the same UID.
    - Tools such as "ls -l" will list the UID as the first entry in the file (a1 in this case).
    - I avoid any permissions or ownership problems between the two accounts because they are really the same user.
    - I get login auditing for each account, so I have better granularity into tracking what is happening on the system.

    So my questions are:

    - Is this ability designed, or is it just the way it happens to work?
    - Is this going to be consistent across *nix variants?
    - Is this accepted practice?
    - Are there unintended consequences to this practice?

    Note, the idea here is to use this for system accounts and not normal user accounts.
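
    The "first entry wins" behavior described above can be demonstrated with a passwd lookup; a minimal sketch (Python, assuming the two accounts above exist):

        import pwd

        # both names resolve to the same numeric UID
        print pwd.getpwnam("a1").pw_uid, pwd.getpwnam("a2").pw_uid  # 5000 5000

        # a reverse lookup returns only the first matching passwd entry,
        # which is why "ls -l" shows a1 as the owner of both users' files
        print pwd.getpwuid(5000).pw_name  # a1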


  • Network speed between a VM and another machine which is not residing on the same host, is 11MB/s at most

    - by Henno
    Problem: network speed between a VM and another machine that is not residing on the same host is 11MB/s at most.

    Topology facts:

    - ESXi5 version is 5.0.0.504890
    - The VM has the latest VMware Tools installed and uses the E1000 network driver
    - The physical box runs Windows Server 2008 R2 as the OS
    - CrystalDiskMark says the drive on the physical box can read/write 100MB/s
    - vCenter is another VM on the ESX host
    - Both the VM and the physical box show a 1Gbps link speed, and Configuration > Networking shows vmnic0 as 1000 Full
    - NTttcp is a client/server tool from Microsoft for measuring pure network throughput

    Here's what I've done so far.

    Test 1: the VM runs FileZilla FTP Server (default settings, one user account created) and the physical box runs the FileZilla FTP client (default settings). Uploading a big file from the physical box to the FTP server: ~11MB/s, as observed in Windows Task Manager on both machines (bad). Downloading that file from the FTP server: still ~11MB/s (bad). Could it be a disk performance issue?

    Test 2: the physical box runs "ntttcpr.exe -a 6 -m 6,0,VM_IP_ADDRESS" and the VM runs "ntttcps.exe -a 6 -m 6,0,PHY_BOX_IP_ADDRESS". Transfer speed, as observed in Task Manager on both machines: ~11MB/s (bad). Could it be a switch performance issue?

    Test 3: the physical box runs the vSphere Client; I open Summary > Storage > datastore > Browse Datastore... and upload a file to the datastore. Transfer speed, as observed in Task Manager on the physical box: ~26-36MB/s (good). Could it be a VM-specific issue?

    Test 4: installed NTttcp on another VM on the same ESX server and measured network performance between VMs on the same host with NTttcp: ~90-120MB/s (excellent :)

    Test 5: I have another ESX server on the same site, connecting to the same datastore and the same switch. The two ESX servers each have 2 NICs; one NIC goes to the switch while the other goes directly to the other ESX server. I vMotioned one of the test VMs off to the other ESX host and measured network performance between VMs on different ESX servers with NTttcp: ~11MB/s (bad).

    I'm aware of these threads: "ESXi 4.1 slow file transfer", "ESXi 5 network performance is slow", "Debian Etch and ESXi slow network speeds", and "VMWare ESXi slow file copy to guest", but they did not help (or I must have missed something).
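
    For a cross-check that takes both disks and FTP out of the picture, a raw socket push measures pure network throughput much like NTttcp does; a minimal sketch (Python 2; run "recv" on one machine and "send <receiver_ip>" on the other; the port and duration are arbitrary):

        import socket, sys, time

        CHUNK, PORT = 64 * 1024, 5001

        if sys.argv[1] == "recv":
            srv = socket.socket()
            srv.bind(("", PORT))
            srv.listen(1)
            conn, _ = srv.accept()
            total, t0 = 0, time.time()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                total += len(data)
            print "%.1f MB/s" % (total / (time.time() - t0) / 1e6)
        else:
            s = socket.create_connection((sys.argv[2], PORT))
            buf = "x" * CHUNK
            t0 = time.time()
            while time.time() - t0 < 10:  # push data for ten seconds
                s.sendall(buf)
            s.close()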


  • mshtml.dll latest version for Internet Explorer 8, Windows XP Service Pack 3

    - by AllSolutions
    Many applications on my system (Internet Explorer 8, Yahoo Messenger, Skype 10) are crashing, and the error details show the module name mshtml.dll. I checked the version of mshtml.dll in the system32 folder: it is 8.0.6001.19170. My Internet Explorer version is 8.0.6001.18702. I am not concerned about the IE crashes, because I generally use Firefox, but how do I solve the crashes in the other applications, which are also due to mshtml.dll? I have moved to Windows XP Service Pack 3 (32-bit). I have tried to update Internet Explorer 8 (from Tools > Windows Update), but again it crashes. I cannot migrate to IE9, as it requires Vista or Windows 7. I have applied the cumulative security update for IE8, which has this file name: IE8-WindowsXP-KB2618444-x86-ENU.exe. I could not get much info from Microsoft sites or Google. I do not want to use the Automatic Updates feature of Windows. Can somebody give the download links for mshtml.dll and any associated files, which I can replace in the system32 folder? Thanks.


  • Windows Server 2012 Hyper-V very slow

    - by Matt Taylor
    I have been running several Hyper-V VMs on Windows Server 2008 R2 for the past couple of years and enjoying perfectly adequate performance for my testing/development/R&D environments. I'm a software developer, so my hardware knowledge is basic; however, I built the rig using:

    - Gigabyte GA-X58A-UD3R Intel X58 (Socket 1366) DDR3 motherboard
    - Intel Core i7 960 3.20GHz (Bloomfield) (Socket LGA1366)
    - 24GB triple-channel RAM

    The host OS runs on an OCZ SSD, and all the VMs run on a 2TB Marvell SATA3 RAID 0 array consisting of two Western Digital Caviar Black 7,200rpm drives. I have tested the speed of the 2TB array and appear to be getting less than 3Mbs, but it can adequately run a 4-VM farm including a DC, a (SQL) database server, and IIS application servers. I recently upgraded the SSD on which the host runs to a 256GB OCZ Vertex 4, took the opportunity to upgrade to Windows Server 2012, and installed the Hyper-V role. I tried importing one of my existing Windows Server 2008 R2 VMs (converted to .vhdx), and I have also tried creating a brand new Windows Server 2008 R2 VM, but both run extremely slowly, and I can see nothing obvious using the host and guest Task Manager/Resource Monitor tools. In both cases the VM has 8GB RAM (fixed), 4 CPUs, a fixed-size (not expanding) HD, and uses an external virtual network running on a separate NIC from the host's. I have upgraded the BIOS to the latest available version and checked the virtualization settings. I have run out of "obvious" (to a developer) things to check/configure, and my next option will be to reinstall the host OS, but before I do I would very much appreciate any advice from the experts out there. Thanks


  • Creating a pseudoterminal to make sudo happy

    - by larsks
    I need to automate the provisioning of a cloud instance (running Fedora 17) for which the following initial facts are true:

    - I have ssh-key based access to a remote user (cloud).
    - That user has password-free root access via sudo.

    Manual configuration is as simple as logging in, running sudo su -, and having at it, but I would like to fully automate this process. The trick is that the system defaults to having the requiretty option enabled for sudo, which means that an attempt to do something like this:

        ssh remotehost sudo yum -y install puppet

    will fail:

        sudo: sorry, you must have a tty to run sudo

    I am working around this right now by first pushing over a small Python script that will run a command on a pseudoterminal:

        import os
        import sys
        import errno
        import subprocess

        pid, master_fd = os.forkpty()
        if pid == 0:
            # child process: now that we're attached to a
            # pty, run the given command.
            os.execvp(sys.argv[1], sys.argv[1:])
        else:
            while True:
                try:
                    data = os.read(master_fd, 1024)
                except OSError, detail:
                    if detail.errno == errno.EIO:
                        break
                    raise  # any other error is unexpected
                if not data:
                    break
                sys.stdout.write(data)
            os.wait()

    Assuming that this is named pty, I can then run:

        ssh remotehost ./pty sudo yum -y install puppet

    This works fine, but I'm wondering if there are solutions already available that I haven't considered. I would normally think about expect, but it's not installed by default on this system. screen can do this in a pinch, but the best I came up with was:

        screen -dmS sudo somecommand

    ...which does work but eats the output. Are there any other tools available that will allocate a pseudoterminal for me and that are going to be generally available?
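
    For comparison, the standard-library pty module can express the same workaround in a couple of lines; a minimal sketch of the same idea (not part of the original script):

        import sys
        import pty

        # run argv[1:] attached to a fresh pseudoterminal, copying its
        # output to our stdout until the child exits
        pty.spawn(sys.argv[1:])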


  • How to set up mysql storage for certain rsyslog input matches?

    - by ylluminate
    I'm draining various logs from Heroku to an rsyslog Linux (Ubuntu) server and am starting to have a little more than I can chew on in terms of working with my log histories. I need to be able to drill back in time based on more flexible details, and with more flexible access, than the standard syslog file(s) provide. I'm thinking that logging to MySQL may be the correct approach, but how do I set this up so that it pulls only certain log entries into a table, based on an identifier? For example, I see a long hex string identifying each log entry from a certain Heroku app instance. I assume that I can pipe just those into the MySQL socket, versus ALL rsyslog input going into MySQL... Could someone please direct me to a resource that walks through the process of setting something like this up, or simply provide some details that can help? I have 15+ years of Unix experience, so I just need some nudging in the right direction, as I've not really done a tremendous amount of work with syslog daemons, particularly in terms of pooling various servers into one. Additionally, I'd be interested in any log review tools that could make drilling through log arrangements like this handier for developers.
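
    For reference, the usual rsyslog building blocks for this are the ommysql output module plus a property-based filter, so that only matching entries reach the table; a rough sketch of an /etc/rsyslog.d/ snippet (legacy syntax; the hex tag, database name, and credentials are placeholders, and the table schema comes from the createDB.sql shipped with rsyslog-mysql):

        $ModLoad ommysql

        # route only messages containing this app instance's hex identifier
        # to MySQL (server,database,user,password); everything else is untouched
        :msg, contains, "deadbeefcafe" :ommysql:localhost,Syslog,rsyslog,password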


  • Disk wipe preferences

    - by hmvm123
    I manage a pool of systems that are loaded with software and sent to potential customers for evaluations, which often lands sensitive information on the drives. Before shipping the systems back, customers typically like a standard wipe to be run to clean out the drives. Most are familiar with DBAN, so I try to make sure it can work on my systems. Unfortunately, this means I'm usually in RAID driver hell, trying to make sure that the versions out there support the controllers my systems are shipping with; these are various kinds of 3ware and LSI ones. Consequently, I have DBAN 1.0.7 working on some, a beta version of 2.0 on others, and 2.2.6 on some of the latest SSD-based ones. Now, with the LSI controllers (1064/1068) on my IBM x3550 M3s, I'm getting no love at all. Is there a way out? Do you buildroot with DBAN and try to piece the drivers together? Are there any other tools, free or commercial, that stay updated? I'm trying to walk people of varying technical proficiency through this, so a boot disk with simple choices is preferable.


  • On Windows 2008 R2, how do I back up DHCP if the DHCP .mdb database is always busy?

    - by johnny
    I get this from my backup software:

        C:\WINDOWS\system32\dhcp\dhcp.mdb : The process cannot access the file because it is being used by another process.
        C:\WINDOWS\system32\dhcp\j50.log : The process cannot access the file because it is being used by another process.
        C:\WINDOWS\system32\dhcp\j50tmp.log : The process cannot access the file because it is being used by another process.
        C:\WINDOWS\system32\dhcp\tmp.edb : The process cannot access the file because it is being used by another process.

    My questions: Should I be doing a manual backup of DHCP via command-line tools, or maybe with MMC (Action > Backup), before I run my backup? Is the %SystemRoot%\System32\DHCP\Backup directory always kept up to date? (That directory does get backed up by the backup software.) I'm answering my own question here, but I believe the registry key is set to 0x3c, i.e. 60 minutes:

        HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\DHCPServer\Parameters\BackupInterval

    This is not the backup software included with Windows; it is another product, but I have seen this with every backup software I've ever used.
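
    On the first question: one option is to have the backup job trigger a fresh offline copy beforehand, which on Server 2008 R2 can be scripted; a sketch, with the target path as a placeholder (syntax to the best of my knowledge):

        netsh dhcp server backup C:\DhcpManualBackup

    The resulting directory is then a consistent copy that the file-level backup can pick up without touching the live .mdb.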


  • Monitoring AWS Systems Behind ElasticBeanStalk

    - by A. Avadis
    So I'm getting a company set up in the Amazon cloud: creating IaaS protocols/solutions/standardized implementations, etc., while also being the sysadmin for individual systems, app environments, and day-to-day uptime. One of the biggest issues I'm having is tracking various system/application logs, as well as logging/monitoring/archiving system metrics like memory usage, CPU usage, etc., in a centralized fashion, e.g. Nagios + Urchin. The BIGGEST impediment to my endeavors is the following: the company application is deployed in the form of a Java *.WAR file uploaded to an Elastic Beanstalk application environment, load balancing and auto-scaling between 3 (min) and 10 (max) servers, and the EC2s that run the application are fired up and disposed of ad hoc. That is to say, I can't monitor the individual EC2s for very long, because so many are being terminated and then auto-provisioned/auto-scaled on the fly; I'd constantly have to "monitor what I'm monitoring" and continuously remove/add EC2 machine addresses to my monitoring lists. Is there some way to use monitoring tools like Zabbix or Nagios to monitor the Elastic Beanstalk environment, and have it automatically add new EC2s to, and remove terminated/failed EC2s from, its monitoring list? Furthermore, is there anything I can do with Graylog to achieve similar results with the aggregation/centralization of my application logs from multiple EC2 instances into ONE consolidated set of logs/events? If not Graylog, is there ANYTHING LIKE Graylog that can automatically detect which EC2 members are being added/removed from the environment and collect the logs from them automatically? Any and all advice or direction is appreciated. Thanks much, and cheers!!
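
    One building block for keeping a Nagios/Zabbix host list in sync is to poll EC2 for the environment's current members and regenerate the monitoring config from the result; a minimal sketch with boto (the region and environment name are placeholders; Elastic Beanstalk tags its instances with elasticbeanstalk:environment-name, as far as I know):

        import boto.ec2

        conn = boto.ec2.connect_to_region("us-east-1")  # placeholder region
        reservations = conn.get_all_instances(filters={
            "tag:elasticbeanstalk:environment-name": "my-env",  # placeholder
            "instance-state-name": "running"})

        current = set()
        for res in reservations:
            for inst in res.instances:
                current.add(inst.private_ip_address)

        # diff `current` against the hosts the monitoring server already
        # knows about, then add/remove host definitions and reload
        print "\n".join(sorted(current))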


  • Internet Explorer will not open

    - by KCotreau
    I recently migrated a company to a Microsoft domain environment, logged the users in under their new domain accounts, and then copied their old profiles over the new ones. I am not sure if that is related, since the users did not complain about it right away, and it may have been a subsequent patch or something, but I have two XP computers that will not open IE8. You click on it and nothing happens graphically, but you can see a process in Task Manager. If you click many times, you get multiple instances; often TWO processes appear per click. It still works in the old profile, so the problem is specific to the new profile, and I would like to fix it rather than blow the profile away. Here is what I have done, without success:

    - Tried opening IE without add-ons (the shortcut in System Tools)
    - Reinstalled IE8
    - Ran SFC /SCANNOW
    - Found a script that was supposed to repair any related registry entries, and ran it
    - Exported the whole HKCU\Software\Microsoft\Internet Explorer key and deleted it, hoping that restarting IE would recreate it... no joy, so I restored it

    Any ideas?


  • Ubuntu 12.04/12.10 can't detect windows or any other partitions(Asus z77 UEFI BIOS)

    - by user971155
    I've recently finished tinkering with my new PC (an ASUS z77 motherboard with UEFI BIOS), and unfortunately not everything works quite well. After installing Windows 7 Ultimate on a single primary partition (SATA drive), I decided to allocate one more logical partition for additional needs. When I tried doing so with the disk manager, it said that it couldn't allocate the requested size, even though I asked for much less than was available. I thought that it might be a Windows issue and proceeded to installing Ubuntu 12.10 x64. When the graphical installer loaded, it showed me a message stating that it couldn't find any other operating system on the drive. When I used the custom partitioning option, it showed none of my current partitions (including the one with Windows). However, when I boot with the "Try Ubuntu" feature, it does find them! I find that weird. Here's what the console presents me with:

        ubuntu@ubuntu:~$ sudo os-prober
        /dev/sda1:Windows 7 (loader):Windows:chain
        ubuntu@ubuntu:~$ sudo fdisk -l

        Disk /dev/sda: 640.1 GB, 640135028736 bytes
        255 heads, 63 sectors/track, 77825 cylinders, total 1250263728 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00072b98

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048      206847      102400    7  HPFS/NTFS/exFAT
        /dev/sda2          206848   100020223    49906688    7  HPFS/NTFS/exFAT
        /dev/sda3       100022270  1250263039   575120385    5  Extended
        /dev/sda4       566669312  1250263039   341796864   83  Linux

    I also tried creating partitions from Disk Utility, which results in this error:

        Error creating partition: helper exited with exit code 1: In part_add_partition: device_file=/dev/sda, start=51211402240, size=1923000000, type=0x83
        Entering MS-DOS parser (offset=0, size=640135028736)
        MSDOS_MAGIC found
        looking at part 0 (offset 1048576, size 104857600, type 0x07)
        new part entry
        looking at part 1 (offset 105906176, size 51104448512, type 0x07)
        new part entry
        looking at part 2 (offset 51211402240, size 588923274240, type 0x05)
        Entering MS-DOS extended parser (offset=51211402240, size=588923274240)
        readfrom = 51211402240
        MSDOS_MAGIC found
        Exiting MS-DOS extended parser
        looking at part 3 (offset 290134687744, size 349999988736, type 0x83)
        new part entry
        Exiting MS-DOS parser
        MSDOS partition table detected
        containing partition table scheme = 1
        got it
        Error: Can't have overlapping partitions.
        ped_disk_new() failed

    Here's what I get when I try to install the system: i.stack.imgur.com/pjlb9.png, i.stack.imgur.com/g1lXN.png

    P.S. It's strange that I can't create any more partitions with either Disk Utility or Windows 7's native tools.


  • Edubuntu video playback and apt-get

    - by asdasd
    They installed some modified Edubuntus at school... so I have some questions about setting some things up:

    1. How can we play HD videos? They were made for Windows machines and are in .wmv format, but we need to play them in our multimedia class and don't know how: which player, which codecs, etc.

    2. How do I properly edit the /etc/apt/sources.list file? Anything we try to install via apt-get just fails with an "E:" error saying the package is not available. Please tell me which repositories to put in there so we can install some tools; see the sketch after this list.

    3. Where are viruses/trojans usually hidden in Ubuntu? I mean, in which directories? Our computers are behaving really slowly and we need to check for malware manually; we are not even allowed to install any kind of AV software. So tell me the usual directories and places for hiding such files, how they are hidden, how to recognize them, etc.

    4. Any other nice tricks/tips that we should know?

    Thank you very much in advance.
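
    For question 2, a stock sources.list carries one line per repository pocket; a rough sketch for a 12.x-era release (the release name "precise" is a guess; match it to the installed version, then run "sudo apt-get update"):

        deb http://archive.ubuntu.com/ubuntu precise main restricted universe multiverse
        deb http://archive.ubuntu.com/ubuntu precise-updates main restricted universe multiverse
        deb http://security.ubuntu.com/ubuntu precise-security main restricted universe multiverse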


  • Issues regarding internet connectivity

    - by andySF
    Hello. My problem started when Yahoo Messenger stopped connecting. I tried to see if Internet Explorer was working, but it would not load any page. Internet Explorer's diagnostics say that something is wrong with my DNS (though using just the IP of google or yahoo or my local web server did not work either). I use Windows 7, and at the time I had Internet Explorer 8; after a lot of failed updates to IE9 I successfully installed the Romanian version of IE9 (now I have IE8 again, after a system restore). Then I installed Service Pack 1. I've done a lot of things, which I will try to enumerate, but my problem persists. The settings in Yahoo Messenger and Internet Explorer are OK. I tried resetting winsock and ip from netsh. I scanned my PC with Spybot, Malwarebytes, Trojan Remover (simplysup), Loaris Trojan Remover, Avast, NOD32, Kaspersky, Bitdefender, a lot of registry cleaners including CCleaner, and maybe others that I cannot remember now. I reset the registry permissions using subinacl. At one point my file permissions were set to just "TrustedInstaller", and I put the permissions back on files and folders using another Windows 7 machine as a model. I have tried so many things that now I'm stuck in a loop, using different security tools to check for problems. Oh, and my virtual machines work just fine (I'm using VirtualBox). Please help. PS: Reinstalling Windows is not an option. Thank you!


  • unable to recover data from failed hdd

    - by Eslam Elyamany
    My HDD is failing (or maybe totally dead). I've connected it via USB, but it doesn't appear in fdisk:

        Disk /dev/sda: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0xe9fb38fb

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048      206847      102400    7  HPFS/NTFS/exFAT
        /dev/sda2          206848    40959999    20376576    7  HPFS/NTFS/exFAT
        /dev/sda4        40962046   976771071   467904513    5  Extended
        Partition 4 does not start on physical sector boundary.
        /dev/sda5        82913280    86910975     1998848   82  Linux swap / Solaris
        /dev/sda6        86913024   394113023   153600000    7  HPFS/NTFS/exFAT
        /dev/sda7        40962048    82913279    20975616   83  Linux
        /dev/sda8       394122708   976768064   291322678+   7  HPFS/NTFS/exFAT
        Partition 8 does not start on physical sector boundary.

    No sdc appears here, BUT it does appear in /dev/:

        root@ghost-lap:/home/ghost# ls /dev/sd*
        /dev/sda   /dev/sda2  /dev/sda5  /dev/sda8  /dev/sdb   /dev/sdc1   /dev/sdc2  /dev/sdc6  /dev/sdc8
        /dev/sda1  /dev/sda4  /dev/sda6  /dev/sda9  /dev/sdc   /dev/sdc10  /dev/sdc5  /dev/sdc7  /dev/sdc9

    It also appears in /proc:

        root@ghost-lap:/home/ghost# cat /proc/partitions
        major minor  #blocks  name

           8        0  488386584 sda
           8        1     102400 sda1
           8        2   20376576 sda2
           8        4          1 sda4
           8        5    1998848 sda5
           8        6  153600000 sda6
           8        8  291322678 sda8
           8        9   20975616 sda9
          11        0    1048575 sr0
          11        1      99136 sr1
           8       32  244198583 sdc
           8       33   14651248 sdc1
           8       34          1 sdc2
           8       37   15380480 sdc5
           8       38    4153344 sdc6
           8       39   48829536 sdc7
           8       40   48829536 sdc8
           8       41  110374551 sdc9
           8       42    1975963 sdc10

    And dmesg shows:

        [10604.777168] end_request: I/O error, dev sdc, sector 1
        [10604.817238] sd 26:0:0:0: [sdc] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        [10604.817243] sd 26:0:0:0: [sdc] Sense Key : Aborted Command [current]
        [10604.817248] sd 26:0:0:0: [sdc] Add. Sense: No additional sense information
        [10604.817253] sd 26:0:0:0: [sdc] CDB: Read(10): 28 00 00 00 00 02 00 00 06 00

    OK, now let's see what I've tried:

    - testdisk to check for partitions: failed.
    - dd to copy data from /dev/sdcX: reports a strange output size; for example, /dev/sdc1 is about 15G, but the dd output was 62G+, so I had to cancel it.
    - safecopy successfully made an image of the partitions, but I can't fix the images, can't mount them, can't do anything with them.

    Some other tools I've tried all failed as well, so... any ideas?


  • Simple way to set up port knocking on Linux?

    - by Ace Paus
    There are well-known benefits to port-knocking utilities when used in combination with firewall iptables modification. Port knocking is best used to provide an additional layer of security over other tools such as the OpenSSH server. I would like some help setting it up on an Ubuntu server. I looked at the port knocking implementations listed here: http://www.portknocking.org/view/implementations. fwknop looked good, I found an Android client for it, and fwknop (both client and server) is in the Ubuntu repos. Unfortunately, setting it up (on the server) looks difficult. I do not have iptables set up, and my proficiency with iptables is limited (though I understand the basics). I'm looking for a series of simple steps to set it up. I only want the SSH port opened in response to a valid knock. Alternatively, I would consider other port knocking implementations if they are much simpler to set up and the desired Linux and Android clients are available.
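
    On the "much simpler to set up" alternative: knockd (also packaged for Ubuntu) needs only a short /etc/knockd.conf; a sketch along the lines of its stock example (the knock sequences are placeholders you should change):

        [options]
            logfile = /var/log/knockd.log

        [openSSH]
            sequence    = 7000,8000,9000
            seq_timeout = 5
            tcpflags    = syn
            command     = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT

        [closeSSH]
            sequence    = 9000,8000,7000
            seq_timeout = 5
            tcpflags    = syn
            command     = /sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT

    This assumes a default-deny policy (or an existing DROP rule) for port 22, so that the inserted ACCEPT rule is what opens it.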


  • How to clone a HDD and then use the clone with VMware (so that Windows works!)?

    - by Ahmad
    I have a system on which Windows 7 is installed, and I am trying to make a clone of its HDD image, which I then want to use on my main PC with VMware, so that I can boot Windows 7 off the cloned HDD. I booted the source system with Ultimate Boot CD v5.1.1 and cloned its HDD using EaseUs Disk Copy, which comes with Ultimate Boot CD. The source HDD was 250 GB in size and had 3 partitions, while the USB HDD I attached as the destination/clone was 320 GB. I chose to create an exact replica, so 250 GB worth of data (partitions, etc.) was copied exactly and the rest of the space was left unallocated. I then connected this USB HDD to my main PC, fired up VMware Workstation 8, defined a new virtual machine, and chose to boot off the USB HDD. The result is that while Windows is booting (from the cloned HDD inside VMware), I get a blue screen error before I reach the login screen. How can I change my methodology so that Windows actually boots from the clone? I can change any of the tools I use, etc.


  • How do I change the Dropbox directory on a headless GNU/Linux server?

    - by DrTwox
    I have installed Dropbox 2.0.0 via the command line on my home server (Ubuntu Server 12.04) to use for off-site automated backups, but I can't change the directory that the Dropbox daemon keeps synced. I've tried the following:

    - The official docs say to use the desktop application, which is not applicable in my situation. However, I installed the desktop app on my desktop machine and changed the default folder location there, but I can't find where this change is stored in the ~/.dropbox/ directory, so I can't make the same change on the server.
    - This page (and several others) recommends a Python script to do the job. Looking at the script, it opens a SQLite database called ~/.dropbox/dropbox.db, which does not exist in my Dropbox install, leading me to believe the script is out of date.
    - This forum thread suggests manually inserting the required row in the config.db database, which I did, but it made no difference. I checked the same database file on my desktop machine, and it does not have the dropbox_path key, so I'm presuming the information in that thread is also out of date for version 2.0.
    - I have tried to launch the Dropbox GUI configuration wizard over SSH with X11 forwarding, as suggested in one of the answers, but the binary must detect the absence of a local X11 install, and it starts a command-line daemon instead, which provides no means to change the option I need.

    I am currently using a symlink, as suggested in another answer, but this is a kludge. I would like to know the correct way to make the change. How do I change the Dropbox directory on a headless GNU/Linux server?

    Update: I've ditched Dropbox and started using Copy. Their Linux tools and support are far superior to Dropbox's. I leave this question here in case someone, someday, can answer it.


  • Production deployment to EC2 with minimal downtime

    - by jensendarren
    I have a simple web application deployed on a large instance with EC2. I now want to deploy the latest code to this server, but I want to do it in a way that minimizes downtime and is as smooth as possible for the end user. Here is my plan:

    1. Fire up another large instance
    2. Install all the software layers on that instance
    3. Restore and attach an EBS drive to the instance
    4. Deploy our latest production-ready code on the new instance
    5. Run all tests (including manual testing of the application)
    6. (If tests pass) Put a "Site Under Maintenance" notice on the live site
    7. Back up the EBS volume on the live site
    8. Detach the EBS volume from the new server and replace it with the latest backup
    9. Use ec2-associate-address to move the IP address to the new instance
    10. Sit back and wait for traffic to start flowing through the new instance
    11. Terminate the old instance

    Does this seem like a good strategy? Are there any tutorials or books that might cover this topic? I have already read Cloud Application Architectures by George Reese, which is an excellent book, but it does not cover deployment. Additionally, I know there are tools that can help with this, like RightScale or enStratus, which I will use when I start using more than one instance.
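
    Step 9 is the actual cut-over; besides the ec2-associate-address command-line tool, it can be scripted, for example with boto (IDs and addresses are placeholders; this assumes a classic Elastic IP rather than a VPC allocation):

        import boto.ec2

        conn = boto.ec2.connect_to_region("us-east-1")  # placeholder region

        # repoint the Elastic IP at the new instance; in-flight connections
        # drop, and new traffic flows to the new instance within moments
        conn.associate_address(instance_id="i-0new0000",   # placeholder
                               public_ip="203.0.113.10")   # placeholder EIP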


  • Reduce "Metafile" memory usage?

    - by Jay Conrod
    My work computer (Windows 7 64-bit) spends a lot of time swapping memory when I switch between programs. This surprises me, since I have 4 GB of RAM and the programs I use aren't particularly RAM-hungry (Outlook, Emacs, p4win, Firefox, various build tools). I downloaded RAMMap, and it shows over a gigabyte of memory used by "Metafile". From the Sysinternals blog:

        Metafile is part of the system cache and consists of NTFS metadata. NTFS metadata includes the MFT as well as the other various NTFS metadata files. ... In the MFT each file attribute record takes 1k and each file has at least one attribute record. Add to this the other NTFS metadata files and you can see why the Metafile category can grow quite large on servers with lots of files.

    So I understand what the "Metafile" data is... I work on large builds comprising hundreds of thousands of files (none of them big, but they add up to several gigabytes). My question is: how can I reduce the amount of memory used by "Metafile"? I'm not actively using all those files at once, so why does Windows need to keep the info in RAM? Restarting my machine every time I sync a new build is really annoying.


  • Desktop Provisioning for a Small Linux Software Development Team

    - by deakblue
    Goal: get a small team using a standard development image rather than four software devs setting up their own environments.

    Why: it takes a day or more to install a distro, build-specific libraries, tools like editors and IDEs, mysql, couchdb, java, maven, python, android-sdk, etc. It's a giant PITA that, when repeated by four developers (not sysadmins), wastes time and generates annoying divergences that crop up later (it-builds-on-my-box syndrome). There's no sharing of productivity, settings, tricks, scripts, or set-ups. Some of this is helped by segregating the build systems into headless VirtualBox images, but that doesn't really address tooling or the GUI desktop development that needs doing. So I see three basic strategies: ghosting, virtualization, and finally creating a kind of in-house Linux distro (I guess Google does something like this). The target dev environment is based on Debian with Openbox and must allow a mix of 3rd-gen Core i7 notebooks with 8GB minimum to work both single- and multi-head. Importantly, the laptops are not all the same, but a mix of 2012 MacBooks and PCs. So:

    - Virtualization: is doing all of your work within a VM, like VirtualBox, practical on this hardware, or annoying?
    - Ghosting: will laptops from different manufacturers make this impractical?
    - DIY distro: short of scripting a bunch of package installs, I don't know if there's any "distro-maker" that could keep this from being an epic project of scripting package installs.

    Any advice?


  • Dropped WD External Harddisk, now it's shown as "Not initialized"

    - by Phelios
    So, my WD My Passport external hard disk was dropped, and after that the computer is unable to read it anymore. I was hoping I could just find another enclosure to try, to see if the hard disk itself is still readable, but it looks like the drive inside is not a normal SATA or PATA drive; I think it's modified, so I can't find another enclosure to try it in. On the computer, I can still see the drive in Disk Management, but it's shown as uninitialized, with no size and no drive letter. I've also tried a couple of recovery tools. Some can't detect it at all; one (the Find and Mount software) can detect it but shows 0 size. None of them can recover the data. WD is willing to replace the drive with a new one, but I still need to recover the data. Is there any way I can recover it?

    UPDATE: I tried initializing it from the Windows Disk Management, but it gives the error "The request could not be performed because of an I/O device error."


  • Automating first time login process in Windows Server 2008 R2 SP1 virtual machine

    - by George Durzi
    I have a set of Windows Server 2008 R2 SP1 Enterprise Edition virtual machines running in Hyper-V. The host server has 64GB of RAM and two SSD drives (one for the host OS, and the second for the VMs). The virtual machines are as follows:

    - Domain controller: 4GB RAM
    - Exchange server: 4GB RAM
    - Terminal Services: 50GB RAM

    We use this setup for a travelling training class where users remote-desktop to one of the VMs (let's call it the Terminal Services or "TS" VM), where tools such as Visual Studio are installed, and the students go through labs in Visual Studio. Overall, this setup works great. However, when users collectively log in for the first time, the VM really struggles to keep up while all the user profiles are created; it can take some users up to 10 minutes to log in. The number of students varies from 30 to 40. A workaround would be to manually remote-desktop to the TS virtual machine with every account in advance, to ensure that the local profiles are created. I'm looking for a way to automate this first-time login process on the TS virtual machine. I am envisioning iterating through the accounts in a certain Active Directory OU and then somehow initiating a remote desktop session to the TS VM to log each one in for the first time. Are there ways to do this? Thanks
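
    One way to script the warm-up from a Windows workstation is to loop over the accounts, seed credentials with cmdkey, and open a short-lived mstsc session for each; a rough sketch (Python; the naming scheme, host, password, and the two-minute profile-build wait are all assumptions):

        import subprocess
        import time

        TS_HOST = "ts-vm.example.local"                             # placeholder host
        USERS = ["DOMAIN\\student%02d" % i for i in range(1, 41)]   # placeholder accounts
        PASSWORD = "TrainingPass!"                                  # assumed shared lab password

        for user in USERS:
            # store credentials so mstsc logs in without prompting
            subprocess.call(["cmdkey", "/generic:TERMSRV/" + TS_HOST,
                             "/user:" + user, "/pass:" + PASSWORD])
            session = subprocess.Popen(["mstsc", "/v:" + TS_HOST])
            time.sleep(120)   # let the first-time profile finish building
            session.terminate()
            subprocess.call(["cmdkey", "/delete:TERMSRV/" + TS_HOST])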


  • Backup solution to backup terabytes and lots of static files on linux server?

    - by user28679
    Which backup tool or solution would you use to back up terabytes and lots of files on a production Linux server? Note that the files are all different and almost never modified, and usage mostly consists of adding files, so the data volume is today 3TB, growing all the time at around +15GB/day. Please do not reply rsync: basic Unix tools are not enough; rsync does not keep history, and rdiff-backup miserably fails from time to time and screws up the history. Moreover, these are all file-based backups, which generate a lot of iowait just to browse directories and query stat(). But I guess, except for R1Soft CDP, there is no way around that. We tried R1Soft CDP backup, which is block-level backup, and it proved good and efficient for all our other servers, but it systematically fails on the server with 3 terabytes and gazillions of files. The engineers of R1Soft and the datacenter have been playing a hot-potato game for more than two months now... and there is still no backup except regular rsync. We have never tried the big commercial solutions, except R1Soft CDP, since it was provided as an optional service by the datacenter hosting our servers.


  • Emails not sending from outlook / OWA - Not even hitting the mail queue in exchange

    - by webnoob
    We are having an issue this morning where we can receive external emails but cannot send internal or external ones from Outlook or OWA. If I use:

        Send-MailMessage -From <[email protected]> -To <[email protected]> -Subject "Test #01" -Body "Just a test message." -SmtpServer <Server-Name> -Credential <domain\user>

    the email is sent correctly, which makes me think there is a connection issue with OWA and Outlook. However, Outlook reports itself as connected to Exchange. I have checked message tracking in the Exchange tools, and emails sent via Outlook and OWA do not appear. Nothing changed on the server over the weekend, so I don't really know where to start debugging this issue. We are using Windows SBS 2011. We only have one send connector, which isn't using smart hosts and is set to use DNS MX records. "Use external DNS" is not checked, and I can ping google.com etc., so it doesn't appear to be a DNS issue (plus the email sends from the console anyway).

    EDIT: It appears that users on IMAP can send emails correctly; it's only the accounts using the normal Exchange connection type that don't work.

    EDIT: Emails from IMAP are hitting the email queues, whereas emails from the normal Exchange accounts aren't.

    EDIT: It seems that some of the emails we tried to send yesterday went out at about 1am, but now it won't work again..

