Search Results

Search found 17054 results on 683 pages for 'wordpress plugin dev'.

Page 424/683

  • My php homepage downloads index.php instead of being processed on Gandi.net

    - by alekone
    If I go to the homepage of my website http://www.website.com (on a brand new server), index.php gets downloaded instead of being processed. I don't have the same problem in other folders. My .htaccess reads: AddHandler php5-script .php What could this be? I suspect it's something with the PHP config or the .htaccess, but I'm not able to figure it out. Help please! Edit: I don't know if this helps: it's a WordPress installation, and I have this problem only on the public part of the website, not in the admin (which renders correctly).
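
    When the browser downloads raw PHP source, Apache is serving .php files as plain content instead of handing them to PHP. A minimal .htaccess sketch, assuming Apache with mod_php on this host (the exact handler name depends on how the host wires up PHP, so treat this as a starting point, not Gandi's documented config):

        # .htaccess -- hypothetical sketch, assuming mod_php is available
        # Map .php files to the PHP module instead of serving them raw:
        AddType application/x-httpd-php .php
        # If the host runs PHP as CGI/FastCGI, it usually ships its own
        # handler directive; check the hosting docs before overriding it.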

    Read the article

  • Subversion causes network problems

    - by richard
    I often use TortoiseSVN to check out or update a working copy on a development server. Whenever I do this, it seems to slow down the network, and other users complain that browsing websites and accessing files on the dev server is slow. Is this a common problem with Subversion, or has anyone else come across something similar?

    Read the article

  • After Redmine install I see only the filesystem

    - by derty
    After installing Redmine, I can only access the filesystem! I reinstalled Redmine 2-3 times in different ways, using these how-tos: http://www.redmine.org/projects/redmine/wiki/HowTo_Install_Redmine_using_Debian_package http://www.redmine.org/projects/redmine/wiki/HowTo_Install_Redmine_210_on_Debian_Squeeze_with_Apache_Passenger http://beeznest.wordpress.com/2012/09/20/installing-redmine-2-1-on-debian-squeeze-with-apache-modpassenger/ The web server at 10.0.0.14 is going to be behind a reverse Apache proxy, but for now I'm working directly on the system. That change shouldn't be a problem; I use this setup for a bunch of other services. The database does exist and I can enter it, and the configuration file config/database.yml is set up correctly, with the credentials I use to connect as the redmine user. So, does anyone have an idea why it is not working as I'd like?
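
    If Apache is showing a directory listing of the Redmine source instead of the application, the usual cause is that DocumentRoot doesn't point at Redmine's public/ directory, or Passenger isn't active for that vhost. A minimal vhost sketch, assuming the Debian package layout under /usr/share/redmine and mod_passenger enabled; all paths and the ServerName are assumptions:

        <VirtualHost *:80>
            ServerName redmine.example.com
            # Passenger serves a Rails app from its public/ directory
            DocumentRoot /usr/share/redmine/public
            <Directory /usr/share/redmine/public>
                AllowOverride all
                Options -MultiViews
                Allow from all
            </Directory>
        </VirtualHost>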

    Read the article

  • Why is APC (PHP Accelerator) Only caching apc.php?

    - by amvx
    I installed PHP5, APC and Xdebug with the install script found here: Dreamhost PHP+APC Install Instructions/Script. All systems seem to be working... except... I added apc.php to see APC's caching stats (the GUI interface), and it showed that only the apc.php file was being cached, despite my having installed WordPress and PrestaShop under the same domain and running those PHP scripts. I wonder if I did something wrong... but my PHP5 is running as CGI/FastCGI. I think I might have read somewhere about there being some issue with this. Not sure. Any help is of course appreciated.
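
    This is very likely the CGI/FastCGI issue you half-remember: APC's cache lives in the memory of each PHP process, so short-lived CGI processes discard it on exit and each FastCGI child keeps its own private cache — apc.php then only shows what its own process happened to cache. The cache is only really shared when PHP runs as an Apache module or under a persistent process manager. A hedged apc.ini sketch for a persistent-process setup; the path and sizes are assumptions:

        ; /etc/php5/conf.d/apc.ini -- hypothetical path
        extension=apc.so
        apc.enabled=1
        apc.shm_size=64M    ; shared-memory cache size (per process group)
        apc.stat=1          ; re-check file mtimes; safer on shared hosts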

    Read the article

  • How to set the font with mailx?

    - by Bob
    Solaris, Korn shell. I am writing SQL reports against an Oracle database, spooling them to a file and emailing them with mailx, using the syntax below. The reports do not format properly unless viewed in the Courier New font. How do I set this with mailx? mailx -s "MY REPORT `date +'%D %r'`" -r "REPORTING SYSTEM" [email protected] < /mydir/mysql.log > /dev/null
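
    mailx sends plain text and has no notion of fonts; the font is chosen by the recipient's mail client. The usual workaround is to send the report as HTML wrapped in a monospace <pre> block so clients render it fixed-width. A sketch piping headers and body to sendmail directly, since header options in mailx vary by platform; the recipient address is a placeholder and the sendmail path may differ (e.g. /usr/lib/sendmail on Solaris):

        # build a MIME message whose body renders in a fixed-width font
        {
          printf 'To: [email protected]\n'
          printf 'Subject: MY REPORT %s\n' "`date +'%D %r'`"
          printf 'MIME-Version: 1.0\n'
          printf 'Content-Type: text/html\n\n'
          printf '<html><body><pre style="font-family: Courier New, monospace">\n'
          cat /mydir/mysql.log
          printf '</pre></body></html>\n'
        } | /usr/lib/sendmail -t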

    Read the article

  • Qemu not installed on Ubuntu with Xen 4.0.1?

    - by Disco
    I'm kind of confused. I've installed Xen on a fresh/clean Ubuntu 10.10 server (x64) using:

        hg clone xen-4.0-testing.hg
        make world
        make install-kernels

    Trying to launch qemu first gives "not found". Then I installed it from source (from the tarballs at qemu.org), but then qemu complains about not finding /dev/kvm. Am I missing something here? I thought qemu was part of the Xen install (at least it was in the 3.x series). All I have is qemu-img-xen and qemu-nbd-xen.

    Read the article

  • Coldfusion 8 Application Crashes Under Heavy Load

    - by KM01
    Hello, we have a CF8 app that runs for 20-25 minutes before crashing under heavy load (~1200 users). This load is generated by our load-testing tool: 1200 users ramped up in 5 minutes (the approximate behavior of our users), running for an hour. We have this app on Solaris 10, Apache 2, JRun 4 and Oracle 10g; the Java version is 1.6. During the initial load tests, the thread dumps pointed to monitor deadlocks around sessions:

        "jrpp-173": waiting to lock monitor 0x019fdc60 (object 0x6b893530, a java.util.Hashtable), which is held by "scheduler-1"
        "scheduler-1": waiting to lock monitor 0x026c3ce0 (object 0x6abe2f20, a coldfusion.monitor.memory.SessionMemoryMonitor$TopMemoryUsedSessions), which is held by "jrpp-167"
        "jrpp-167": waiting to lock monitor 0x019fdc60 (object 0x6b893530, a java.util.Hashtable), which is held by "scheduler-1"

    We increased the number of sessions relative to the number of CPUs (48 simultaneous threads against 32 CPUs), and the deadlock went away. While varying the simultaneous threads helped a little in terms of response time, the CF server still tanked in 20-25 minutes during all of these tests. We ran more thread dumps and saw a thread locking a monitor, e.g.:

        "jrpp-475" prio=3 tid=0x02230800 nid=0x2c5 runnable [0x4397d000]
           java.lang.Thread.State: RUNNABLE
            at java.util.HashMap.getEntry(HashMap.java:347)
            at java.util.HashMap.containsKey(HashMap.java:335)
            at java.util.HashSet.contains(HashSet.java:184)
            at coldfusion.monitor.memory.MemoryTracker.onAddObject(MemoryTracker.java:124)
            at coldfusion.monitor.memory.MemoryTrackerProxy.onReplaceValue(MemoryTrackerProxy.java:598)
            at coldfusion.monitor.memory.MemoryTrackerProxy.onPut(MemoryTrackerProxy.java:510)
            at coldfusion.util.CaseInsensitiveMap.put(CaseInsensitiveMap.java:250)
            at coldfusion.util.FastHashtable.put(FastHashtable.java:43)
            - locked <0x6f7e1a78> (a coldfusion.runtime.Struct)
            at coldfusion.runtime.CfJspPage._arrayset(CfJspPage.java:1027)
            at coldfusion.runtime.CfJspPage._arraySetAt(CfJspPage.java:2117)
            at cfvalidation2ecfc1052964961$funcSETUSERAUDITDATA.runFunction(/app/docs/apply/cfcs/validation.cfc:377)

    As you can see in the last line above, there were several references to CFMs and CFCs, and those lines have cflock tags, which were scoped to "application". We (the dev team) then changed them to be scoped to a "name". After more load tests there is no locking going on and there are no deadlocks, but now the application tanks in 7-10 minutes. We've gotten system, network and DB reports from the respective admins, and none of those are being taxed; we've even watched the server stats with server monitor, top, prstat, ran sar reports, etc. So we believe it is an issue with the CF server or maybe the JVM. I am running out of ideas as to what else we can try. Disclaimer: I am not a CF developer or admin; I am just running the load test, analyzing the reports, threads, etc., sharing the results with the dev and admin teams, and trying the next change, and so on. So far no dice. Has anyone run into something similar? How did you go about diagnosing and troubleshooting? All thoughts and pointers welcome. Thank you for your time! KM

    Read the article

  • Encoding multiple video streams with a single avconv invocation

    - by automatthias
    I played with avconv on Ubuntu and I'm now able to, e.g., record the desktop with sound from a sound card. One thing I wanted to do was to record two video inputs at the same time, for instance the desktop and the webcam. I thought about doing something like this:

        avconv \
          -f alsa -i default -acodec flac \
          -f video4linux2 -r 6 -i /dev/video0 \
          -f x11grab -i :0.0 \
          out.mkv

    My thinking was that if you define multiple video inputs, and the .mkv format can handle multiple video streams, avconv will encode 2 video streams and 1 audio stream into one file. But this isn't what happens:

        avconv version 0.8.4-6:0.8.4-0ubuntu0.12.10.1, Copyright (c) 2000-2012 the Libav developers
          built on Nov 6 2012 16:51:11 with gcc 4.7.2
        [alsa @ 0x1091bc0] capture with some ALSA plugins, especially dsnoop, may hang.
        [alsa @ 0x1091bc0] Estimating duration from bitrate, this may be inaccurate
        Input #0, alsa, from 'default':
          Duration: N/A, start: 1354364317.020350, bitrate: N/A
            Stream #0.0: Audio: pcm_s16le, 48000 Hz, 2 channels, s16, 1536 kb/s
        [video4linux2 @ 0x10923e0] Estimating duration from bitrate, this may be inaccurate
        Input #1, video4linux2, from '/dev/video0':
          Duration: N/A, start: 100607.724745, bitrate: 29491 kb/s
            Stream #1.0: Video: rawvideo, yuyv422, 640x480, 29491 kb/s, 6 tbr, 1000k tbn, 6 tbc
        [x11grab @ 0x107b2a0] device: :0.0+83,87 -> display: :0.0 x: 83 y: 87 width: 854 height: 480
        [x11grab @ 0x107b2a0] shared memory extension found
        [x11grab @ 0x107b2a0] Estimating duration from bitrate, this may be inaccurate
        Input #2, x11grab, from ':0.0+83,87':
          Duration: N/A, start: 1354364318.488382, bitrate: 196761 kb/s
            Stream #2.0: Video: rawvideo, bgra, 854x480, 196761 kb/s, 15 tbr, 1000k tbn, 15 tbc
        Incompatible pixel format 'bgra' for codec 'mpeg4', auto-selecting format 'yuv420p'
        [buffer @ 0x107fcc0] w:854 h:480 pixfmt:bgra
        [avsink @ 0x10bdf00] auto-inserting filter 'auto-inserted scaler 0' between the filter 'src' and the filter 'out'
        [scale @ 0x10dc680] w:854 h:480 fmt:bgra -> w:854 h:480 fmt:yuv420p flags:0x4
        Output #0, matroska, to '.../out.mkv':
          Metadata:
            encoder : Lavf53.21.0
            Stream #0.0: Video: mpeg4, yuv420p, 854x480, q=2-31, 4000 kb/s, 1k tbn, 15 tbc
            Stream #0.1: Audio: libvorbis, 48000 Hz, 2 channels, s16
        Stream mapping:
          Stream #2:0 -> #0:0 (rawvideo -> mpeg4)
          Stream #0:0 -> #0:1 (pcm_s16le -> libvorbis)
        Press ctrl-c to stop encoding
        [mpeg4 @ 0x10bd800] rc buffer underflow
        ^Cframe= 160 fps= 15 q=2.0 Lsize= 3414kB time=10.66 bitrate=2623.0kbits/s
        video:3273kB audio:131kB global headers:4kB muxing overhead 0.165600%
        Received signal 2: terminating.

    I'm not sure if it's a question of mapping (some -map options to add?) or whether avconv just can't encode more than one video stream at a time. So is it an actual avconv limitation, a limitation of the available containers, or me simply not finding the right combination of command-line options?
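
    The "Stream mapping" section of that log shows what happened: without explicit mapping, avconv picks a single video stream per output file (here the x11grab one) and drops the rest. To keep both video inputs you have to map every stream into the output yourself. A hedged sketch, assuming the same input order as above (0 = ALSA audio, 1 = webcam, 2 = X11 grab); Matroska can hold multiple video tracks, though how players expose them varies:

        avconv \
          -f alsa -i default \
          -f video4linux2 -r 6 -i /dev/video0 \
          -f x11grab -i :0.0 \
          -map 0:0 -map 1:0 -map 2:0 \
          out.mkv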

    Read the article

  • Slow ASP.NET site using IIS6

    - by lars
    Hi, I have two servers, one virtual and one physical, running exactly the same site on both machines. Both machines run at ~2% CPU load with plenty of RAM available. Somehow the site, with the cache turned off of course, loads in ~500 ms on the virtual machine (which is a dev server, by the way) but takes almost 3 full seconds on the physical machine. They're both running Server 2003, IIS6 and ASP.NET 2.0. Any ideas where I can start troubleshooting this? Best regards, LP

    Read the article

  • Scanning for new disks attached using virtio?

    - by larsks
    I can successfully attach disks to a running KVM instance using virsh attach-disk...

        virsh attach-disk node-1 /dev/vg_lunsr/lun1 vdb
        Disk attached successfully

    ...but these new devices aren't seen by the guest without a reboot, which almost defeats the purpose of dynamic attachment. If these were SCSI devices, I would use e.g. /sys/class/scsi_host/host0/scan to ask the SCSI drivers to scan for new devices. Is there an equivalent capability for the virtio block driver?
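
    Virtio disks appear inside the guest as PCI devices, so the rough equivalent of a SCSI host scan is rescanning the guest's PCI bus. A hedged sketch, run inside the guest as root (assumes a kernel new enough to expose /sys/bus/pci/rescan; the vdb name depends on attach order):

        # inside the guest: ask the kernel to rescan the PCI bus
        echo 1 | sudo tee /sys/bus/pci/rescan
        # the newly attached disk should then show up, e.g.:
        ls -l /dev/vdb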

    Read the article

  • How to mount a HFS partition in Ubuntu as Read/Write?

    - by GiH
    I plugged my external hard drive (which was formatted on my Mac as HFS+ journaled) into my Ubuntu 9.04 desktop (64-bit). I am not able to get the drive to mount with write capability; how do I do that? Right now all I'm getting is read access. I tried sudo mount -t hfsplus /dev/sdf2 /media/"Portable HD", but that still gave me only read access... ideas?
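
    The Linux hfsplus driver mounts journaled HFS+ volumes read-only by default because it cannot replay the journal. You can either disable journaling on the Mac side (Disk Utility) or force read/write on Linux, accepting some risk since the journal is then ignored. A sketch of the force option, using the same device and mount point as above:

        # force read/write on a journaled HFS+ volume (journal is bypassed -- risky)
        sudo mount -t hfsplus -o force,rw /dev/sdf2 "/media/Portable HD"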

    Read the article

  • Server load is spiking from 2 to 250

    - by Hakzona
    Hello, I'm using WordPress 2.9 on an 8 GB web server from HostGator. I've been fighting this problem for a long time but still cannot find the solution. PHP was switched to run as an Apache module (PHP 5 as DSO) with Apache suEXEC, and eAccelerator was installed, but this configuration started producing a huge load on the server. The server load spikes from 1 to 250 (4 CPUs) and the server stops; after a period of time it comes back, then stops again in about 10 minutes. It started happening when the HostGator support team installed eAccelerator on the server. What can cause this problem, and how can I fix it?

    Read the article

  • Basic clarification about Limited FTP/sFTP users

    - by mattewre
    I would like to get some clarification about the correct way to create limited users with access to my VPS, which runs as a web server with Nginx. I usually do NOT install FTP and allow access via SFTP only. Is that OK for every setup? This is what I usually do to create a limited user called "admin" that should be able to access the folder with the website data via SFTP:

        mkdir -p /var/www/mysite.com/
        adduser admin
        adduser admin www-data
        chown -R root:root /var/www
        chmod -R 755 /var/www
        chmod -R 755 /var/www/mysite.com
        chown -R admin:www-data /var/www/mysite.com/

    It seems not to be the correct way; I always have permission problems when I upload files (for example with WordPress in general). I would like to create a user that works exactly like the one providers give their clients when they buy a hosting service (that is FTP, though I would prefer SFTP access). It is for personal use, but I think a limited user is a lot safer to use than root over SFTP.
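
    A common pattern for provider-style, SFTP-only accounts is OpenSSH's internal-sftp with a chroot. A sketch for sshd_config; the sftponly group name and paths are assumptions. Note that the chroot directory itself must be owned by root and not group-writable, so uploads go into a writable subdirectory beneath it:

        # /etc/ssh/sshd_config -- sketch
        Subsystem sftp internal-sftp
        Match Group sftponly
            ChrootDirectory /var/www/mysite.com
            ForceCommand internal-sftp
            AllowTcpForwarding no

    The user is then added to the sftponly group (usermod -aG sftponly admin) and given ownership of, say, /var/www/mysite.com/htdocs, which keeps web uploads and WordPress writes inside a directory the account can actually modify.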

    Read the article

  • Make a Phone ring from BASH through a HUAWEI e220

    - by microspino
    Hello, I have a Debian Linux system with a HUAWEI E220 on /dev/ttyUSB0. I'd like to make it ring a generic GSM phone just once. I'm doing this because I'd like to build an embedded system that triggers some special behavior on a single GSM phone ring. How can I do it? I've tried wvdial, but I always receive a "NO CARRIER" answer when I try to send an "ATDT XXX-phone-number-to-dial-XXX" command.
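
    ATDT without a trailing semicolon attempts a data call, which a GSM phone typically rejects with NO CARRIER; ending the dial string with ";" requests a voice call instead, which is what makes the phone ring. A hedged sketch talking to the modem directly (the number is a placeholder, and the E220 often exposes two ttyUSB ports, so the right one may vary):

        # voice-dial (note the trailing ';'), let it ring, then hang up
        stty -F /dev/ttyUSB0 115200 raw
        printf 'ATD0123456789;\r' > /dev/ttyUSB0
        sleep 5
        printf 'ATH\r' > /dev/ttyUSB0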

    Read the article

  • Why is mount -a not mounting fuse drive properly when executed remotely (via Fabric)?

    - by Jim D
    This is a weird bug and I'm not sure where it's coming from. Here's a quick rundown of what I'm doing. I'm trying to mount a FUSE drive on an Amazon EC2 instance running Ubuntu 10.10 using s3fs (FUSE over Amazon S3). s3fs is compiled from source according to the instructions, etc. I've also added an entry to /etc/fstab so that the drive mounts on boot. Here's what /etc/fstab looks like:

        # /etc/fstab: static file system information.
        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc nodev,noexec,nosuid 0 0
        LABEL=uec-rootfs / ext4 defaults 0 0
        /dev/sda2 /mnt auto defaults,nobootwait,comment=cloudconfig 0 2
        /dev/sda3 none swap sw,comment=cloudconfig 0 0
        s3fs#mybucket /mnt/s3/mybucket fuse default_acl=public-read,use_cache=/tmp,allow_other 0 0

    So the good news is that this works fine. On reboot the connection mounts correctly. I can also do:

        $ sudo umount /mnt/s3/mybucket
        $ sudo mount -a
        $ mountpoint /mnt/s3/mybucket
        /mnt/s3/mybucket is a mountpoint

    Great, right? Well, here's the problem. I'm using Fabric to automate the process of building and managing this instance. I noticed I was getting this error message when using Fabric to build s3fs and set up the mount process:

        mountpoint: /mnt/s3/mybucket: Transport endpoint is not connected

    I isolated the problem and built a Fabric task that reproduces it:

        def remount_s3fs():
            sudo("mount -a")

    Which does:

        [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] Executing task 'remount_s3fs'
        [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] sudo: mount -a

    (And yes, I was sure to unmount it before running this task.) When I check the mount using mountpoint, I get:

        $ mountpoint /mnt/s3/mybucket
        mountpoint: /mnt/s3/mybucket: Transport endpoint is not connected

    Done. But if I run sudo mount -a at the command line, it works. Hrm. Here is that fab task output again, this time in full debug mode:

        [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] Executing task 'remount_s3fs'
        [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] sudo: sudo -S -p 'sudo password:' /bin/bash -l -c "mount -a"

    Again, I get that "transport endpoint not connected" error. I've also tried copying and pasting the exact command into my ssh session (i.e. sudo -S -p 'sudo password:' /bin/bash -l -c "mount -a") and it works fine. So... that's my problem. Any ideas?

    Read the article

  • Cannot find USB flash drive in Ubuntu

    - by user23950
    I did a little searching before coming here to ask, and I found this code, but I don't understand it:

        sudo mkdir /mnt/usbdrv
        sudo mount -t vfat /dev/sda1 /mnt/usbdrv

    What is vfat? What is sda1, and what is -t? How do I adapt this so it works with my flash drive?
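
    Briefly: -t names the filesystem type to mount, vfat is the FAT filesystem most flash drives ship with, and /dev/sda1 is the first partition on the first detected disk, which may well not be your flash drive. A sketch for finding the right device first; the sdb1 name below is an assumption that depends on what else is plugged in:

        # list attached disks and partitions to identify the flash drive
        sudo fdisk -l
        # then mount the partition it reports for the drive, e.g.:
        sudo mkdir -p /mnt/usbdrv
        sudo mount -t vfat /dev/sdb1 /mnt/usbdrv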

    Read the article

  • pptpd not working externally on Ubuntu Server 11.10

    - by Brendan
    I am trying to set up a pptpd VPN on our newly installed Ubuntu 11.10 64-bit server, but have not succeeded in getting a client (an iPhone) to connect to the VPN. Note that no clients have been able to connect to this VPN from outside the network. The system is up to date with patches. Here is the output of /var/log/syslog; please note that 222.153.x.y is my remote IP address.

        Mar 30 22:07:47 server pptpd[9546]: CTRL: Client 222.153.x.y control connection started
        Mar 30 22:07:47 server pptpd[9546]: CTRL: Starting call (launching pppd, opening GRE)
        Mar 30 22:07:47 server pppd[9555]: Plugin /usr/lib/pptpd/pptpd-logwtmp.so loaded.
        Mar 30 22:07:47 server pppd[9555]: pppd 2.4.5 started by root, uid 0
        Mar 30 22:07:47 server pppd[9555]: Using interface ppp0
        Mar 30 22:07:47 server pppd[9555]: Connect: ppp0 <--> /dev/pts/3
        Mar 30 22:07:47 server pptpd[9546]: GRE: Bad checksum from pppd.
        Mar 30 22:08:17 server pppd[9555]: LCP: timeout sending Config-Requests
        Mar 30 22:08:17 server pppd[9555]: Connection terminated.
        Mar 30 22:08:17 server pppd[9555]: Modem hangup
        Mar 30 22:08:17 server pppd[9555]: Exit.
        Mar 30 22:08:17 server pptpd[9546]: GRE: read(fd=6,buffer=6075a0,len=8196) from PTY failed: status = -1 error = Input/output error, usually caused by unexpected termination of pppd, check option syntax and pppd logs
        Mar 30 22:08:17 server pptpd[9546]: CTRL: PTY read or GRE write failed (pty,gre)=(6,7)
        Mar 30 22:08:17 server pptpd[9546]: CTRL: Reaping child PPP[9555]
        Mar 30 22:08:17 server pptpd[9546]: CTRL: Client 222.153.x.y control connection finished

    As you can see, the problem seems to be the connection timing out after 30 seconds ("LCP: timeout sending Config-Requests"). Over WiFi (inside the local network), however, there are no issues:

        Mar 30 22:12:33 unreal-server pptpd[12406]: CTRL: Client 192.168.0.100 control connection started
        Mar 30 22:12:33 unreal-server pptpd[12406]: CTRL: Starting call (launching pppd, opening GRE)
        Mar 30 22:12:33 unreal-server pppd[12407]: Plugin /usr/lib/pptpd/pptpd-logwtmp.so loaded.
        Mar 30 22:12:33 unreal-server pppd[12407]: pppd 2.4.5 started by root, uid 0
        Mar 30 22:12:33 unreal-server pppd[12407]: Using interface ppp0
        Mar 30 22:12:33 unreal-server pppd[12407]: Connect: ppp0 <--> /dev/pts/3
        Mar 30 22:12:33 unreal-server pptpd[12406]: GRE: Bad checksum from pppd.
        Mar 30 22:12:36 unreal-server pppd[12407]: peer from calling number 192.168.0.100 authorized
        Mar 30 22:12:36 unreal-server pppd[12407]: MPPE 128-bit stateless compression enabled
        Mar 30 22:12:36 unreal-server pppd[12407]: Cannot determine ethernet address for proxy ARP
        Mar 30 22:12:36 unreal-server pppd[12407]: local IP address 192.168.0.10
        Mar 30 22:12:36 unreal-server pppd[12407]: remote IP address 192.168.1.1

    I have set up an iptables config for the server; to check that this isn't the problem, I temporarily allowed all traffic, but this does NOT change the symptoms in the first example. Here is the output from /etc/iptables.rules.save:

        *filter
        :FORWARD ACCEPT [0:0]
        :INPUT ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        COMMIT

    Even with these rules applied, the output in /var/log/syslog was line for line what I saw in the first block above. Please note that before this Ubuntu server, an old SME Server box ran in its place with a pptpd server just like the one we are using, and we experienced no issues.
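
    A 30-second LCP timeout while the control connection succeeds is the classic signature of GRE (IP protocol 47) not getting through: PPTP needs TCP port 1723 plus GRE, and routers and firewalls between the client and server often drop GRE unless a PPTP helper/ALG is involved. That would also explain why it works on the LAN but not from outside. A hedged iptables sketch for the server side; the eth0 interface name is an assumption:

        # allow the PPTP control channel and the GRE data channel
        iptables -A INPUT -i eth0 -p tcp --dport 1723 -j ACCEPT
        iptables -A INPUT -i eth0 -p gre -j ACCEPT
        # if the server sits behind NAT, the router must also pass GRE
        # (or load its PPTP ALG); forwarding port 1723 alone is not enough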

    Read the article

  • Linux filesystem with inodes close on the disk

    - by pts
    I'd like to make ls -laR /media/myfs on Linux as fast as possible. I'll have 1 million files on the filesystem, 2 TB of total file size, and some directories containing as many as 10000 files. Which filesystem should I use, and how should I configure it? As far as I understand, the reason ls -laR is slow is that it has to stat(2) each inode (i.e. 1 million stat(2)s), and since inodes are distributed randomly on the disk, each stat(2) needs one disk seek. Here are some solutions I had in mind, none of which I am satisfied with:

    1. Create the filesystem on an SSD, because seek operations on SSDs are fast. This wouldn't work, because a 2 TB SSD doesn't exist, or it's prohibitively expensive.
    2. Create a filesystem which spans two block devices: an SSD and a disk; the disk contains file data, and the SSD contains all the metadata (including directory entries, inodes and POSIX extended attributes). Is there a filesystem which supports this? Would it survive a system crash (power outage)?
    3. Use find /media/myfs on ext2, ext3 or ext4 instead of ls -laR /media/myfs, because the former can take advantage of the d_type field (see the getdents(2) man page), so it doesn't have to stat. Unfortunately, this doesn't meet my requirements, because I need all file sizes as well, which find /media/myfs doesn't print.
    4. Use a filesystem, such as VFAT, which stores inodes in the directory entries. I'd love this one, but VFAT is not reliable and flexible enough for me, and I don't know of any other filesystem which does that. Do you? Of course, storing inodes in the directory entries wouldn't work for files with a link count of more than 1, but that's not a problem, since I have only a few dozen such files in my use case.
    5. Adjust some settings in /proc or sysctl so that inodes are locked into system memory forever. This would not speed up the first ls -laR /media/myfs, but it would make all subsequent invocations amazingly fast. How can I do this? I don't like this idea, because it doesn't speed up the first invocation, which currently takes 30 minutes. Also, I'd like to lock the POSIX extended attributes in memory as well. What do I have to do for that?
    6. Use a filesystem which has an online defragmentation tool that can be instructed to relocate inodes to the beginning of the block device. Once the relocation is done, I can run dd if=/dev/sdb of=/dev/null bs=1M count=256 to get the beginning of the block device fetched into the kernel's in-memory cache without seeking, and then the stat(2) operations would be fast, because they read from the cache. Is there a way to lock those inodes and/or blocks into memory once they have been read? Which filesystem has such a defragmentation tool?

    Read the article

  • "Error 1067: The process terminated unexpectedly" when trying to install MySQL on Win7 x64.

    - by Gravitas
    Hi, I've run into a brick wall trying to install MySQL v5.5 on my machine. My PC runs Windows 7 x64, Enterprise edition. MySQL installs fine, but when I run the "MySQL Instance Configuration Wizard", it pauses forever on the "Start Service" step (I can let it run for 30 minutes with no response). If I go into Services, I see that the "MySQL" service hasn't started, and if I try to start it, it says: "Windows could not start MySQL Service on Local Computer. Error 1067: The process terminated unexpectedly." I've tried the following:

    - Turning off the firewall.
    - Uninstalling all antivirus software.
    - Installing / reinstalling the 32-bit version of MySQL.
    - Installing / reinstalling the 64-bit version of MySQL.
    - Uninstalling, deleting the contents of "C:\program files\MySQL" and "C:\program files (x86)\MySQL", and reinstalling.
    - Checking that there is no rogue service named MySQL???? (from a previous install).
    - Checking that port 3306 is not used by another program.
    - Changing the default port that MySQL uses.
    - Checking for "my.ini" and "my.ini.cnf" in "C:\windows" (nothing there, but their presence can cause a problem).
    - Running both the MySQL installer and the configuration wizard in "Administrator mode".
    - Turning off UAC.
    - Installing with defaults, not changing anything.
    - Rebooting my machine (about 6 reboots so far).
    - Opening up port 3306 in the firewall (both TCP and UDP, inbound and outbound).
    - Swearing at the klutz of a programmer who designed MySQL so you can't even install it (as if that would help!).

    My machine is working 100% in every other way. InfiniDB (a MySQL-compatible database) installs 100%, as do Visual Studio 2010, Microsoft SQL Server, etc. Your advice on how to work around this? p.s. Here is the screen it got stuck on for 15 minutes until I killed the process:

    Update 2010-12-20: Tried MySQL v5.1; it didn't work either. It's amazing: if you type "mysqld /?" or "mysqld -help", it doesn't give you any help. And if you try to restart the service manually, it doesn't display any error messages. Could it be any more unhelpful?

    Update 2010-12-21: Installed MySQL 6.0 alpha, and it worked. However, I'd rather not use an alpha release, given that the "stable" release is anything but :(

    Update 2010-12-21: Found http://dev.mysql.com/doc/refman/5.1/en/windows-troubleshooting.html, dealing with troubleshooting under Windows. Discovered that you can generate an error log if the service doesn't start; see here: http://dev.mysql.com/doc/refman/5.1/en/error-log.html
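
    Following the error-log documentation linked in the last update: once an error log is configured, the service writes its startup failure there, which usually reveals the actual cause behind the generic error 1067 (often a stale/incompatible data directory or a bad path). A minimal my.ini sketch; the paths are assumptions that need to match the actual install and data directories:

        [mysqld]
        # hypothetical paths -- adjust to the real install/data directories
        basedir="C:/Program Files/MySQL/MySQL Server 5.5"
        datadir="C:/ProgramData/MySQL/MySQL Server 5.5/Data"
        log-error="C:/mysql-error.log"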

    Read the article

  • Window 7 + SVN permission problem

    - by acidzombie24
    In TortoiseSVN I get the error: Can't create directory 'C:\dev\repo\subfolder\db\transactions\49-1g.txn': ... I used robocopy (I can't remember if I used /copyall) to copy the files onto an external drive, then copied them back after a format. Now I cannot commit. How do I fix this?
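
    robocopy /copyall carries the old NTFS ACLs along with the files, and after a reformat those security entries may no longer grant your current user write access to the repository's db directory. A hedged sketch that resets the whole tree to the permissions inherited from its parent (run from an elevated command prompt; the path is taken from the error message):

        icacls "C:\dev\repo" /reset /T /C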

    Read the article

  • Unencrypted Image of Truecrypt-Encrypted System Partition

    - by Dexter
    The general tenor around the internet seems to be that you can't create images of system partitions that have been encrypted (with truecrypt) other than with dd or similar sector-by-sector copy tools. These files however are very impractical given their size (and are obviously incompressible) which makes keeping multiple states/backups of your system partition rather expensive (..especially considering current hdd prices). The problem is that backup tools (like Acronis True Image, Clonezilla, etc.) won't give you the option to create an image of (mounted/opened) Truecrypt partitions, or that there is no recovery environment for restoring the backup, that would allow to run truecrypt before doing any actual restoring. After some trial and error however, I believe I have found a very simple way. Since Truecrypt (running in Linux) creates a virtual block device, that it uses for mounting the unencrypted partitions into the file system, partclone can be used for creating/restoring images. What I did: boot up a linux live disk mount/open the drive/device/partition in truecrypt unmount the filesystem mount point again, like so: umount /media/truecryptX ("X" being the partition number assigend by truecrypt) use partclone (this is what clonezilla would do too, except that clonezilla only offers you to back up real drive partitions, not virtual block devices): partclone.ntfs -c -s /dev/mapper/truecryptX -o nameOfBackupFile for restoring steps 1-3 remain the same, and step 4 is partclone.ntfs -r -s nameOfBackupFile -o /dev/mapper/truecryptX A backup and test-restore of the system (with this method) seems to have worked fine (and the changed settings were reverted to the backup-state). The backup file is ~40 GB (and compressible down to <8GB with 7zip/LZMA2 on the "fast" setting). I can't quite believe that I'm the only one that wants to create images of encrypted drives, but doesn't want to waste 100GB on the backup of one single system state. So my question now is, given how simple this was, and that no one seems to mention anywhere that this is possible - did I miss something? or did I do something wrong? Is there any situation that I didn't think of where this method will fail? Obviously, the backup file needs to be stored in some other encrypted place in order to still remain confidential, since it is unencrypted. Also, in order to do a full "bare metal" restore, one would have to actually first (re-)install Windows, encrypt it, and only then restore the backup file. The funny thing however is that you won't need to backup any partition tables, etc. since the reinstall will effectively take care of that. Is there anything else? This is imho still a lot better than having sector-by-sector images..

    Read the article

  • installing libraries for cygwin

    - by Hoang
    I want to install these libraries in Cygwin; how do I do it? Are all of them available in the Cygwin environment, or only on Linux?

    - g++ version 4.4
    - graphviz
    - gnuplot
    - plotdrop
    - libboost version 1.38
    - libgsl0-dbg
    - libgsl0-dev
    - libgsl0ldbl
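
    Several of those are Debian package names (libgsl0-dev, libgsl0-dbg, libgsl0ldbl) that won't exist under those names in Cygwin; Cygwin packages its GSL, Boost, and compiler differently, and plotdrop may have no Cygwin package at all. A hedged sketch using Cygwin's setup.exe in unattended mode; the package names after -P are best-effort guesses, so verify them in the setup GUI's package browser first:

        rem from a Windows command prompt, in the folder holding setup.exe
        setup.exe -q -P gcc-g++,graphviz,gnuplot,libboost-devel,gsl,gsl-devel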

    Read the article
