Search Results

Search found 40479 results on 1620 pages for 'binary files'.


  • PHP Script causing high CPU

    - by user20996
    I have a website which is causing me a major headache. There is one PHP script which is using far too much CPU; this only seems to happen when bots hit the site, and I don't want to block all the bots because we need them. I have the process manager output: Pid Owner Priority CPU % Memory % Command 16943 (Trace) (Kill) specialone 0 99.4 1.0 /usr/bin/php /var/www/specialone/page.php I ran strace -p 16943 on the process but it comes up with nothing. We have 2GB of RAM and the PHP memory_limit is set to 128M, which should be enough. The trouble is that the PHP code is a framework, and the culprit page.php pulls in a lot of other PHP files, so I can't debug the PHP. Is there any way of finding out what the script is doing when it's using so much CPU, which would help me solve this?
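
    One way to dig further (a sketch; the PID is taken from the process listing above): an empty strace usually means the process is spinning in userspace rather than making syscalls, so a syscall summary plus a few native stack snapshots can narrow things down.

        # summarise syscalls instead of streaming them; -f follows child processes
        strace -c -f -p 16943
        # if strace stays quiet, the time is being burned in userspace;
        # a stack snapshot shows roughly where the interpreter is spinning
        gdb -p 16943 -batch -ex 'bt' -ex 'detach'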

    Read the article

  • Video converters don't work anymore after reinstalling Windows

    - by tassiekev
    A few days ago, I decided to reinstall Windows 7 as my HD partition seemed to be nearly full and things were slowing down. I'd been using Handbrake almost exclusively to convert TV recordings and used Freemake on occasion. Following the reinstall, I can't get either to work: Handbrake says it's encoding for about 2 seconds and then says it's finished, but there are no converted files of any size. Freemake just says 'Conversion Error' and won't go any further. As an experiment I tried two programs that I don't normally use, VideoReDo & Any Video Converter. Both worked fine. Anyone got any clues?

    Read the article

  • What hash should be used to ensure file integrity?

    - by Corey Ogburn
    It's no secret that large files offered up for download are often coupled with their MD5 or SHA-1 hash so that after you download you can verify the file's integrity. Are these still the best algorithms to use for this? Obviously these are very popular hashes that potential downloaders would have easy access to. Ignoring that factor, which hashes have the best properties for this use? For example, bcrypt would be horrible for it: bcrypt is designed to be slow, which would suck on the 7.4 GB dual-layer OS ISO you just downloaded, when hashing a mere 12-letter password can take up to a second with the right parameters.
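
    For scale, a fast cryptographic hash such as SHA-256 digests large files at roughly disk speed; a quick sketch with the coreutils tools (filenames are hypothetical):

        # hash one large image
        sha256sum ubuntu-10.04-dvd-amd64.iso
        # verify downloaded files against a published checksum list
        sha256sum -c SHA256SUMS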

    Read the article

  • Encrypt tar file asymmetrically

    - by DerMike
    I want to achieve something like tar -c directory | openssl foo > encrypted_tarfile.dat I need the openssl tool to use public-key encryption. I found an earlier question about symmetric encryption at the command promt (sic!), which does not suffice. I did take a look in the openssl(1) man page and only found symmetric encryption. Does openssl really not support asymmetric encryption? Basically, many users are supposed to create their encrypted tar files and store them in a central location, but only a few are allowed to read them.
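
    openssl does support this through its S/MIME mode, which performs hybrid encryption (a random symmetric key wrapped with the recipient's public key); the caveat is that it expects an X.509 certificate rather than a bare public key. A sketch with hypothetical file names:

        # encrypt: anyone holding the recipient's certificate can create archives
        tar -c directory | openssl smime -encrypt -binary -aes256 \
            -outform DER -out encrypted_tarfile.dat recipient-cert.pem
        # decrypt: only the private-key holder can unpack them
        openssl smime -decrypt -inform DER -in encrypted_tarfile.dat \
            -inkey recipient-key.pem | tar -x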

    Read the article

  • How to get a list of defined shortcut keys in the Start menu?

    - by Peter Mortensen
    How can I find out which keyboard shortcuts are defined inside the Start menu, and which shortcuts they are assigned to? Platform: Windows XP SP2 64-bit. Example: I open my main Visual Studio solution with a shortcut key, Ctrl+Alt+M. This is set up by having a shortcut inside the Start menu with: Target: "D:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe" D:\dproj\MSQall\MSQuant\MSQuant.sln Shortcut key: Ctrl+Alt+M If a new shortcut is added and its shortcut key is also set to Ctrl+Alt+M, there are now two shortcuts with the same shortcut key (a conflict). To prevent this, it would be nice to know which shortcut keys are already assigned, and to which shortcuts.

    Read the article

  • Windows 7: "localhost name resolution is handled within DNS itself". Why?

    - by Portman
    After 18 years of hosts files on Windows, I was surprised to see this in Windows 7 build 7100: # localhost name resolution is handled within DNS itself. # 127.0.0.1 localhost # ::1 localhost Does anyone know why this change was introduced? I'm sure there has to be some kind of reasoning. And, perhaps more relevantly, are there any other important DNS-related changes in Windows 7? It scares me a little to think that something as fundamental as localhost name resolution has changed... it makes me think there are other subtle but important changes to the DNS stack in Win7.

    Read the article

  • NGINX downloads text file instead of displaying it

    - by Hoang Lam
    I have Nginx installed with the following nginx.conf: user nginx; worker_processes 1; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /vagrant/access.log main; sendfile on; keepalive_timeout 65; include /etc/nginx/conf.d/*.conf; } Every time I try the URL of a text file (not PHP - that works fine), the browser asks to download it. Common measures such as disabling the default type application/octet-stream have failed.
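
    A few checks that can narrow this down (the host and paths are illustrative): confirm which Content-Type the server actually sends, and whether one of the included conf.d files overrides the types mapping:

        # the header the browser actually receives
        curl -sI http://localhost/test.txt | grep -i '^Content-Type'
        # is .txt mapped to text/plain at all?
        grep -n 'txt' /etc/nginx/mime.types
        # does anything in conf.d override default_type or the types block?
        grep -rn 'default_type\|types' /etc/nginx/conf.d/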

    Read the article

  • fsck: FILE SYSTEM WAS MODIFIED after each check with -c, why?

    - by Chris
    I use a script to partition and format CF cards (connected with a USB card writer) in an automated way. After the main process I check the card again with fsck. To check for bad blocks I also tried the '-c' switch, but I always get a return value != 0 and the message "FILE SYSTEM WAS MODIFIED" (see below). I get the same result when checking the very same card several times... Does anyone know a) why the file system is modified at all, and b) why this seems to happen every time I check, and not only in case of an error (like bad blocks)? Here's the output: linux-box# fsck.ext3 -c /dev/sdx1 e2fsck 1.40.2 (12-Jul-2007) Checking for bad blocks (read-only test): done Pass 1: Checking inodes, blocks, and sizes Pass 2: Checking directory structure Pass 3: Checking directory connectivity Pass 4: Checking reference counts Pass 5: Checking group summary information Volume (/dev/sdx1): ***** FILE SYSTEM WAS MODIFIED ***** Volume (/dev/sdx1): 5132/245760 files (1.2% non-contiguous), 178910/1959896 blocks
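
    A likely explanation (worth verifying): with -c, e2fsck runs badblocks and then rewrites the bad-blocks inode even when the scan finds nothing, and that write by itself is reported as a modification. A sketch that separates the surface scan from the consistency check, using the same device name as above:

        # read-only surface scan, without involving e2fsck
        badblocks -sv /dev/sdx1
        # then a plain forced check, opened read-only; exit code 0 means clean
        fsck.ext3 -f -n /dev/sdx1; echo "exit code: $?"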

    Read the article

  • How to recover a broken file?

    - by Earlz
    Hello, I have a broken file. It was encrypted (by me) and I did get it decrypted, though I cannot remember whether it is tar or tar.gz. Whichever one it is, though, I believe it is corrupted. I've tried both methods, and everything else I could think of for what archive format it could be (7z, zip, etc.), and each errors out. What is a possible course of action to take here? The files in it are not extremely important, though they are irreplaceable. Are there any tools out there for "fixing" tar or tar.gz archives?
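
    Before reaching for repair tools, it may help to let the magic bytes identify the container (the filename here is hypothetical):

        # guess the format from the file's magic bytes
        file mystery.dat
        # test for a gzip wrapper without extracting anything
        gzip -t mystery.dat && echo "gzip layer looks intact"
        # plain tar has no whole-archive checksum, so listing often works past damage
        tar -tvf mystery.dat | head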

    Read the article

  • Formatted D partition by mistake

    - by duncan-benoit
    Hi there, I just made a huge mistake. Yesterday a friend of mine asked me to install Windows XP for him. I have done this task hundreds of times, but yesterday I was tired when I started the install procedure, and I formatted the D partition and installed Windows on it. Now I'm running PhotoRec on that partition, but it recovers the files in a weird way (filenames and directory structure are lost). My questions are: 1) How can I recover as much as possible of the previous data on that partition? 2) How do I tell my friend what I've done?

    Read the article

  • I want to build a Debian apt repository for local LAN updates

    - by user73504
    Hi, I have downloaded all of Debian's DVD images and set up the Apache httpd service. I combined the files from all the DVDs, but the .gpg file I need is missing and I can't create it; it looks like the repository's signature file. When I set my /etc/apt/sources.list file as follows: deb http://192.168.1.102/apt/debian squeeze main contrib it tells me that GPG verification failed. So I want to know: how do I create the GPG file, and is there other work I need to do besides putting the DVDs' files into Apache's htdocs path?
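
    The missing .gpg file is a detached signature over the repository's Release file, made with a key you generate yourself - it is not something that can be copied off the DVDs. A rough sketch (run in the directory Apache serves; the key file name is hypothetical):

        # build the package index and the Release file
        apt-ftparchive packages . > Packages
        gzip -9c Packages > Packages.gz
        apt-ftparchive release . > Release
        # sign Release with your own GnuPG key, producing Release.gpg
        gpg --armor --detach-sign --output Release.gpg Release
        # each client then imports the public half of that key
        wget -qO - http://192.168.1.102/apt/mykey.asc | apt-key add -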

    Read the article

  • i[Pod|Phone|Pad|*] backups in iTunes

    - by Maroloccio
    iTunes <- iPhone. At sync time, a backup is performed. Which data is included, and which is not? I.e., are songs (potentially redundant) backed up, so that a computer ends up having both the source file on the filesystem and the copy within the device backup? Is anything on the iPhone filesystem not backed up? (E.g., on a Mac using Time Machine, some files are excluded from the backup even if not all of them can be recreated upon restore - I lost my postfix config this way.)

    Read the article

  • Nagios contact groups in check_mk

    - by Skiaddict
    I have Nagios installed with traditional configuration files. I have created some contact groups and assigned them to hosts. For the web UI I'm using check_mk. And here's the question: check_mk supports showing hosts/services based on contact group membership, but I can't use the Nagios contact groups in check_mk. (The result should be that if person XYZ is logged in, he sees only the hosts and services assigned to him.) My users are in LDAP (I'm using the check_mk login form, not Apache authorisation). I can't find any information about this in the documentation, so if someone has experience, please tell me how this works. The problem is that I cannot let everybody be admin and receive all alerts...

    Read the article

  • VMware Server guest systems are extremely slow with IO load on host (Ubuntu 8.04)

    - by Dennis G.
    We are experiencing performance issues with a VMware Server 2.x installation on an Ubuntu 8.04 host. When the host system is generating IO load (for example, copying large files as part of a backup operation), the guests (also Ubuntu 8.04) become extremely unresponsive and slow (simple Apache HTTP requests taking 5 seconds instead of the usual 200ms). We have tried optimizing various aspects of the VMs, but the issue remains. Is there a known bug with VMware performance under Linux when host IO load is high? Is there a way to fix this? Is this only an issue with Ubuntu systems, or have you seen it on other systems before? Thanks!
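
    One commonly suggested mitigation (a sketch, not a confirmed fix for this setup) is to shrink the host's writeback cache so bulk copies flush early and steadily instead of building a huge dirty-page backlog that starves the guests:

        # current writeback thresholds
        sysctl vm.dirty_ratio vm.dirty_background_ratio
        # smaller values force earlier, smoother flushing; revert if it doesn't help
        sysctl -w vm.dirty_background_ratio=5
        sysctl -w vm.dirty_ratio=10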

    Read the article

  • Transfer many Gigabytes between two servers

    - by Bernhard
    Hello, I have a big problem. I have to move data from an old webspace which is only accessible by FTP. The new root server is accessible by SSH, of course :-) I need to move all the data from the old space, but the amount is just huge. Is there a way to move all the files directly from the old FTP host to the new storage, and not via a third station (my local machine)? I've tried it with ftp but it didn't work; I think I used the wrong commands. Is there a way to do this? Thank you in advance Bernhard
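
    This can be done by running the transfer on the new server itself and pulling straight from the old host over FTP (credentials, hostname and target path are illustrative):

        # lftp mirrors a whole remote tree recursively
        lftp -u olduser,oldpass ftp.oldhost.example \
            -e 'mirror --verbose / /srv/storage; quit'
        # alternative if only wget is available
        wget -m ftp://olduser:oldpass@ftp.oldhost.example/ -P /srv/storage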

    Read the article

  • Upgrading from MySQL Server to MariaDB

    - by Korrupzion
    I've heard that MariaDB has better performance than MySQL Server. I'm running software that makes intensive use of MySQL, which is why I want to try upgrading to MariaDB. Please tell me your experiences with this conversion, and any instructions or tips. Also, which files should I take care of when backing up MySQL Server, so that if something goes wrong with MariaDB I can roll back to MySQL without issues? I would use the following, but I'm not sure it's enough for a full backup of MySQL Server's configuration and databases: mysqldump --all-databases, plus a backup of /etc/mysql My Environment: uname -a (Debian Lenny) Linux charizard 2.6.26-2-amd64 #1 SMP Thu Sep 16 15:56:38 UTC 2010 x86_64 GNU/Linux MySQL Server Version: Server version 5.0.51a-24+lenny4 MySQL Client: 5.0.51a Statistics: Threads: 25 Questions: 14690861 Slow queries: 9 Opens: 21428 Flush tables: 1 Open tables: 128 Queries per second avg: 162.666 Uptime: 1 day 1 hour 5 min 13 sec Thanks! PS: Rate my english :D
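
    A dump covers the databases; the server configuration lives in /etc/mysql, and a cold copy of the datadir gives a byte-identical fallback. A sketch using the Debian default paths:

        # logical backup of every database
        mysqldump --all-databases > /root/all-databases.sql
        # configuration files
        cp -a /etc/mysql /root/mysql-conf-backup
        # optional belt-and-braces: raw datadir copy while mysqld is stopped
        /etc/init.d/mysql stop
        cp -a /var/lib/mysql /root/mysql-datadir-backup
        /etc/init.d/mysql start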

    Read the article

  • How to tell if Linux disk IO is causing excessive (> 1 second) application stalls

    - by noahz
    I have a Java application performing a large volume (hundreds of MB) of continuous output (streaming plain text) to about a dozen files on a SAN filesystem. Occasionally, this application pauses for several seconds at a time. I suspect that something in the filesystem layer (vxfs, the Veritas File System) and/or how it interacts with the OS is the culprit. What steps can I take to confirm or refute this theory? I am aware of iostat and /proc/diskstats as starting points. (Revised title to de-emphasize journaling and emphasize "stalls".) I have done some googling and found at least one article that seems to describe behavior like I am observing: Solving the ext3 latency problem. Additional information: Red Hat Enterprise Linux Server release 5.3 (Tikanga) Kernel: 2.6.18-194.32.1.el5 Primary application disk is fibre-channel SAN: lspci | grep -i fibre 14:00.0 Fibre Channel: Emulex Corporation Saturn-X: LightPulse Fibre Channel Host Adapter (rev 03) Mount info: type vxfs (rw,tmplog,largefiles,mincache=tmpcache,ioerror=mwdisable) 0 0 cat /sys/block/VxVM123456/queue/scheduler noop anticipatory [deadline]
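
    Two starting points beyond plain iostat (a sketch; sysrq must be enabled, e.g. sysctl kernel.sysrq=1):

        # per-device service time, queue size and utilisation, sampled every second
        iostat -x 1
        # during a stall: dump tasks in uninterruptible (D-state) sleep to the kernel log
        echo w > /proc/sysrq-trigger
        dmesg | tail -n 50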

    Read the article

  • Can I import contacts from an Exchange 2003 EDB file for a single user?

    - by Drarok
    We recently had to reinstall a very unhappy Exchange 2003 server for a client, and whilst rebuilding their server we moved them onto a temporary one. During the course of all this, ExMerge was used because Windows was so broken that none of the backup software could run. As a number of the user mailboxes were way over 2GB, we had to export date ranges of a few months at a time to avoid ExMerge's 2GB limit. We went back as far as 2006, as the files produced by ExMerge seemed empty that far back. Unfortunately, one of the users has reported that around two-thirds of their contacts are missing, and pre-2006 sounds about right for the missing items. Is there any way I can mount the old EDB file into Exchange, or otherwise read their contacts into a usable format? The server is running Windows Server 2003 SBS R2 (SP2) and Exchange 2003 (SP2, I think).

    Read the article

  • Issues with MongoDB install on Ubuntu 8.04 LTS

    - by Tom
    I am installing MongoDB (1.4.1) on Ubuntu (8.04 LTS) and I continuously have a problem where I can be in /usr/local/mongodb/bin, run ./mongo or ./mongod, and get "No such file or directory" back. Let me be very clear here... the files ARE there! The obvious go-to explanation is permission issues, but the permissions are fine. I've even tried others out, still without any luck. I'm really at a dead end here and any help would be MUCH appreciated. Thank you!
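
    "No such file or directory" for a binary that plainly exists is the classic symptom of a missing ELF interpreter - typically a 64-bit binary on a 32-bit userland, or vice versa. A quick check (run from the same directory):

        # compare the binary's architecture with the system's
        file ./mongod
        uname -m
        # a missing loader shows up here as "not a dynamic executable" or similar
        ldd ./mongod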

    Read the article

  • Methodologies for performance-testing a WAN link

    - by Chopper3
    We have a pair of new diversely-routed 1Gbps Ethernet links between locations about 200 miles apart. The 'client' is a new reasonably-powerful machine (HP DL380 G6, dual E56xx Xeons, 48GB DDR3, R1 pair of 300GB 10krpm SAS disks, W2K8R2-x64) and the 'server' is a decent enough machine too (HP BL460c G6, dual E55xx Xeons, 72GB, R1 pair of 146GB 10krpm SAS disks, dual-port Emulex 4Gbps FC HBA linked to dual Cisco MDS9509s then onto dedicated HP EVA 8400 with 128 x 450GB 15krpm FC disks, RHEL 5.3-x64). Using SFTP from the client we're only seeing about 40Kbps of throughput using large (2GB) files. We've performed server to 'other local server' tests and see around 500Mbps through the local switches (Cat 6509s), we're going to do the same on the client side but that's a day or so away. What other testing methods would you use to prove to the link providers that the problem is theirs?
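
    A raw-TCP baseline with iperf takes SFTP's cipher and windowing overhead out of the picture, which matters on a 200-mile link where latency alone can throttle a single stream (hostname illustrative):

        # on the far end
        iperf -s
        # on the near end: a 30-second test, 4 parallel streams, enlarged TCP window
        iperf -c server.example -t 30 -P 4 -w 512K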

    Read the article

  • OSS router firmware

    - by Cherian
    DD-WRT, OpenWrt, Tomato, or other third-party firmware projects? What are the compelling reasons to choose between these? I used to be a great DD-WRT fan until I realized that the author was deceiving users by publishing it as OSS while making it very cumbersome to download the source and change it (it requires you to download GBs of source files). Also, their bandwidth monitoring feature was part of the paid version, which IMHO is a killer. Having said that, DD-WRT just worked, and I think that's great.

    Read the article

  • Any application to bind various documents?

    - by Codeslayer
    I communicate with clients using various tools: MS Outlook, webmail through Google/Yahoo accounts, and Word or Excel documents sent as attachments through this mail. What I am looking for is a tool that will help me virtually bind together all the documents belonging to a particular client. For example, all these documents were sent to Client A: 2 Outlook mails without attachments, 2 webmails with MS Word attachments, 1 webmail with an Excel attachment. Now I wish I had a document which would bind together the Outlook mail bodies as text files, the MS Word documents, and the Excel document. Previous versions of MS Office had Office Binder. Is there something similar to this? Thanks.

    Read the article

  • Remote symbolic link / junction

    - by Blueberry
    Might be a pretty obvious one, but I have had some trouble finding solid answers. I have a directory on a Windows network share containing different versions of an application. I would like to have a link called 'current' sitting beside all the other versions and pointing to one of them. Creating this link seems to be more of an issue than I would have thought: it looks like a symlink only shows the link on the same machine where it was created (which is not going to work, for obvious reasons), and junction needs to be run on the server, which is practically impossible due to various restrictions. What would be the best way to go about this? Would I just need to copy the files twice, or can I have a symbolic link which can be created and accessed remotely?

    Read the article

  • Start a ZFS RAIDZ zpool with two discs then add a third?

    - by Doug S.
    Let's say I have two 2TB HDDs and I want to start my first ZFS zpool. Is it possible to create a RAIDZ with just those two disks, giving me 2TB of usable storage (if I understand it right), and then later add another 2TB HDD, bringing the total to 4TB of usable storage? Am I correct, or do there need to be three HDDs to start with? The reason I ask is that I already have one 2TB drive in use that's full of files. I want to transition to a zpool, but I'd rather only buy two more 2TB drives if I can. From what I understand, RAIDZ behaves similarly to RAID5 (with some major differences, I know, but in terms of capacity). However, RAID5 requires 3+ drives, and I was wondering if RAIDZ has the same requirement. If I have to, I can buy the three drives and just start there, later adding the fourth, but if I could start with two and move to three that would save me $80.
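
    For reference, zpool does accept a two-disk raidz (one disk's worth of parity, so ~2TB usable), but a raidz vdev cannot have disks added to it later; a sketch of what each command actually does (device names are illustrative):

        # legal: a two-disk raidz1, ~2TB usable
        zpool create tank raidz sdb sdc
        # later: this does NOT widen the raidz; it adds a separate single-disk
        # vdev with no redundancy (zpool refuses without -f for that reason)
        zpool add tank sdd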

    Read the article
