Search Results

Search found 1657 results on 67 pages for 'writes on'.

Page 23 of 67

  • Dualbooting Win7 and Gentoo, error

    - by Tommy Jakobsen
    Hello, I'm trying to set up a dual boot with Gentoo Linux and Windows 7. Here are my partitions:

        /dev/sda1   /boot partition, ext2
        /dev/sda2   Windows 7 partition, NTFS
        /dev/sda3   swap partition, Linux swap
        /dev/sda4   root partition, btrfs

    Using GRUB, I can boot into Gentoo, but when I choose to boot Windows 7, nothing happens: it just prints the GRUB options for that entry and then hangs. grub.conf:

        default 0
        timeout 30

        title Gentoo
        root (hd0,0)
        kernel /boot/kernel-x86_64-2.6.31 root=/dev/sda4

        title Windows
        rootnoverify (hd0,1)
        makeactive
        chainloader +1

    Any ideas? Help will be much appreciated!

  • Is it a good idea to have the operating system on a solid state drive?

    - by Kenji Kina
    There is something I don't quite understand. I know an SSD helps with OS load times, but I'm not sure whether the boost is only noticeable when booting, or whether it gives a considerably better all-around experience afterwards. I am interested in a quick and responsive environment after booting, which leads me to think it might be better to spend the SSD capacity on my most-used apps (and the page file? Another side question) rather than on the OS itself. This, of course, means that I don't know just how much the OS reads/writes its files during normal usage. So, how good an idea is it to dump the whole 20GB+ of the Windows 7 OS onto the SSD (considering the hefty price per GB of SSD capacity) if I can put up with the usual hard disk boot times? Would I be missing out on a lot if I didn't?
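
    As an aside on the "how much does the OS read/write" question: it can be measured directly by sampling disk counters over a normal session. Below is a minimal sketch in Python using the third-party psutil package (psutil and the ten-minute window are my assumptions; any disk I/O monitor would do).

        import time
        import psutil  # third-party package: pip install psutil

        # Sample the system-wide disk counters across a normal work session
        # to see how much is actually read and written after boot.
        before = psutil.disk_io_counters()
        time.sleep(600)  # use the machine normally for ten minutes
        after = psutil.disk_io_counters()

        mb = 1024 ** 2
        print(f"read:    {(after.read_bytes - before.read_bytes) / mb:.1f} MB")
        print(f"written: {(after.write_bytes - before.write_bytes) / mb:.1f} MB")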

  • Does Ubuntu Server have any sort of cron job to automatically clear /tmp?

    - by DWilliams
    I know it clears out /tmp on reboots, but I haven't been able to find any sort of cron job on my server that clears /tmp. I recently set up a script that writes lots of files to /tmp and my server usually goes several months between reboots so I'm concerned about it being cluttered. I've seen several other distros that have a tmpwatch script installed by default. Ubuntu's repository seems to have replaced tmpwatch with tmpreaper. Is there any mechanism in place on Ubuntu (8.04 currently, soon to be upgraded to 10.04 when I get around to it) to clean up temp files on a server that doesn't regularly reboot or do I need to install tmpreaper?
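
    For what it's worth, a tmpreaper-like sweep is easy to approximate from cron. Here is a minimal sketch in Python (the 7-day cutoff and the files-only policy are assumptions to adjust; tmpreaper itself handles races and edge cases more carefully):

        #!/usr/bin/env python3
        # Sketch of a tmpreaper-like sweep: delete files under /tmp whose
        # mtime is older than the cutoff. Directories are left in place.
        # Run from cron, e.g. daily.
        import os
        import time

        MAX_AGE_DAYS = 7
        cutoff = time.time() - MAX_AGE_DAYS * 86400

        for root, dirs, files in os.walk("/tmp", topdown=False):
            for name in files:
                path = os.path.join(root, name)
                try:
                    if os.lstat(path).st_mtime < cutoff:
                        os.remove(path)
                except OSError:
                    pass  # file vanished or no permission; skip it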

  • Segmentation Fault with mod_include

    - by Benedikt Eger
    Hi, I'm using a rather complex structure with multiple SSI includes and set and echo commands. The first document writes a lot of set commands, then includes another document, which in turn includes a third document. In the last included document the variable values are printed using the echo command. I noticed that as the number of variables increases, the probability of a segmentation fault rises. Has anyone experienced something similar? How would I go about debugging such a problem? I'm using IBM_HTTP_Server/2.0.47.1-PK65782 Apache/2.0.47

  • How do I protect large file downloads through PHP and/or Apache?

    - by Eric
    We have some large files (1-8GB) that are not publicly accessible. Currently we're serving them through a PHP script that buffers the files in 1MB chunks and writes them to the output. It's incredibly CPU-intensive and slows the server down when only a few downloads are active. We want to move the file-transfer work to Apache or some more efficient method. We are using cookie authentication, so FTP downloads are out unless there's some way to authenticate FTP sessions through the existing PHP session cookie. Ideally we'd like something where we can use PHP to hide the link to the file while it hands the transfer itself off to Apache, which is no doubt far more efficient at HTTP file transfers than PHP. We want to be able to resume downloads as well. Any help is appreciated.

  • MySQL Windows vs. Linux: performance, caveats, pros and cons?

    - by gravyface
    Looking for (preferably) some hard data, or at least some experienced anecdotal responses, with regard to hosting a MySQL database (roughly 5k transactions a day, 60-70% more reads than writes, < 100k of data per transaction, i.e. no large binary objects like images) on Windows 2003/2008 vs. a Debian-based derivative (Ubuntu/Debian, etc.). This server will function only as a database server, with a separate web server on another physical box; it will require remote access for management (SSH for Linux, RDP for Windows). I suspect that the Linux kernel/OS will compete less with MySQL for resources than Windows Server would, but of this I can't be certain. There's also the security footprint: even with Windows 2008, I'm thinking the Linux box can be locked down more easily than the Windows Server. Anyone have any experience with both configurations?

  • How to scale out OpenStreetMap data efficiently

    - by Pierre
    For over a year now I've been running an in-house PostGIS server filled with OSM data, used for both Mapnik-based tile generation and Nominatim-based geocoding, and updated with daily replicates. This works pretty well. However, as usage is growing exponentially, I would like to achieve better reliability and performance by adding additional PostgreSQL servers. And I'm kind of lost. Since PostgreSQL doesn't seem to handle replication by itself, I would think about using a piece of middleware like PgPool-II to keep the servers in sync. But I'm afraid that would be overkill for this usage pattern: a very high read-to-write ratio, where all writes happen at the same exact time every day. My questions are simple: what would you do to keep these servers in sync? And what is done for this at the OpenStreetMap Foundation, MapQuest, Mapbox or CloudMade? Thanks.

  • Using PowerShell to call a native command-line app and capture STDERR

    - by crtracy
    I'm using a port of a cygwin tool on Windows which writes normal status messages to STDERR. This produces ugly output when run from PowerShell:

        PS> dos2unix.exe -n StartApp.sh StartApp_fixed.sh
        dos2unix.exe : dos2unix: converting file StartEC3.sh to file StartEC3_fixed.sh in UNIX format ...
        At line:1 char:13
        + dos2unix.exe <<<< -n StartApp.sh StartApp_fixed.sh
            + CategoryInfo          : NotSpecified: (dos2unix: conve...UNIX format ...:String) [], RemoteException
            + FullyQualifiedErrorId : NativeCommandError

    Is there a better way? P.S. I intend to post one solution I've found and compare it to answers from others.

  • What hardware makes a good MongoDB server? Where to get it?

    - by João Pinto Jerónimo
    Suppose you're on dell.com right now, buying a server to run your MongoDB database for your small startup. You will have to handle literally tens of thousands of writes and reads per minute (but small objects). Would you go for two processors? Invest more in RAM? I've heard (correct me if I'm wrong) that MongoDB keeps as much as it can in RAM and then flushes everything to disk; in that case I should invest in a CPU with a large L2 cache, probably 40GB of RAM, and a solid state drive... right? Would I be better off with one high-end server (~$11,309, 2 expensive processors, 96GB of RAM) or 2x (~$6,419, 2 expensive processors, 12GB of RAM) servers? Is Dell OK or do you have better suggestions? (I'm outside the US, in Portugal.)

  • Data Protector red tapes

    - by Caesar
    I am using HP Data Protector A.06.11 in my organization, with an HP EML E-Series library with 4 drives using LTO-4 tapes, and I am having some problems. Yesterday I put 5 new tapes in the robot and formatted them. At that point the robot had just those 5 empty tapes with free space (all the rest of the tapes are red, or protected). This morning, after the nightly run (1 backup runs at night), 2 of the new tapes are red. Their properties are:

        Writes     : 2
        Overwrites : 1
        Errors     : 9

    I formatted one of them and checked, drive by drive, whether the tape turns red; none of the drives does it. In the main pool properties, the media condition is:

        Valid for          : 36 (months)
        Maximum overwrites : 250

  • Optimal Disk Setup for OLTP SQL Server

    - by Chris
    We have a high-transaction (lots of reads and writes) database server (running SQL 2005) that is currently set up with a RAID 1 OS partition (C:) and a RAID 5 data/log/tempdb partition (D:). The C: has 2 drives and the D: has 4 drives. The server hosts around 300 databases ranging from 10MB to 2GB in size. I have been reading up on best practices for partitioning the disks, but would like some opinions on our setup since we are so limited in the number of disks. It seems like RAID 10 is popular, but I don't think we could use it with only 6 total disks to work with. Thanks.

    Update: I went with 3 RAID 1 partitions (2 disks each):

        Partition 1: OS, TempDB, backups
        Partition 2: logs
        Partition 3: data

  • Calculating IOPS for a single HDD - what am I doing wrong?

    - by red888
    So I know there is no standardized way of calculating IOPS for a HDD, but from everything I have read it appears one of the most accurate formulas is the following:

        Disk service time = {seek time} + {rotational latency} + ({block size} / {data transfer rate})

    Which is IOs per millisecond, or what the book I've been reading calls "Disk Service Time". Also, rotational latency is calculated as half of one rotation in milliseconds. This was taken from the EMC book "Information Storage and Management", arguably a pretty reliable source, right/wrong?

    Putting this formula into practice, consider this Seagate data sheet. I am going to calculate IOPS for the ST3000DM001 model for a block size of 4 KB:

        Seek Average (Write) = 9.5 ms   (I'll be measuring IOPS for writes)
        Spindle speed        = 7200 rpm
        Average Data Rate    = 156 MB/s

    So my variables are:

        Seek time          = 9.5 ms
        Rotational latency = (.5 / (7200 rpm / 60)) = 0.004 s = 4 ms
        Data rate          = 156 MB/s = (0.156 MB/ms / 0.004 MB) = 39

        9.5 ms + 4 ms + 39 = 52.5 IO/ms
        1 / (52.5 * 0.001) = 19 IOPS

    19 IOPS for this drive clearly is not right, so what am I doing wrong?
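
    As a sanity check, here is the same formula evaluated with consistent units in Python: the transfer term should come out as a small fraction of a millisecond (not 39), and IOPS is 1000 divided by the per-IO service time in milliseconds.

        seek_ms = 9.5                              # average write seek from the data sheet
        rotational_ms = 0.5 / (7200 / 60) * 1000   # half a revolution at 7,200 rpm
        transfer_ms = (4 / 1024) / (156 / 1000)    # 4 KB block at 156 MB/s, in ms

        service_ms = seek_ms + rotational_ms + transfer_ms
        iops = 1000 / service_ms                   # IOs per second from ms per IO
        print(f"{service_ms:.2f} ms per IO -> about {iops:.0f} IOPS")
        # -> 13.69 ms per IO -> about 73 IOPS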

  • Why is Nginx ignoring the access_log directive when post_action is specified?

    - by Chris
    Hi, in the location below nginx writes a custom download log. Everything works fine except when there is a post_action directive; it seems that nginx then skips the access_log directive. Here is the config:

        location /download_intern/ {
            internal;

            if ($uri ~* ^/download_intern/([0-9]+)/) {
                set $transferID $1;
                set $server $arg_ip;
                set $url $arg_url;
                proxy_pass http://$server:80/$url;
                break;
            }

            log_format download '$remote_addr [$time_local] $upstream_cache_status "$scheme://$host$request_uri" $status [$transferID] $body_bytes_sent';
            access_log /opt/nginx/logs/server.download_log download;

            # without this line the download log file is being written
            post_action /done;
        }

        location /done {
            internal;
            # log the transfer on the main server
            proxy_pass http://xxx.xxx.xxx.xxx:80/download_end/?tid=$transferID;
        }

  • I want Lotus Domino to send only one email to users that are both recipients and members of a cc'ed Lotus group

    - by Marcus
    Lotus Domino 7, and now Lotus Domino 8.5. The scenario: a@mycompany writes an email to b@internet and cc's it to group@mycompany. a@mycompany is a member of group@mycompany. With the initial email, Domino is intelligent enough not to send the email a@mycompany just wrote back to a@mycompany. But when b@internet replies to all (a@mycompany + group@mycompany), a@mycompany gets the email twice, because he is not only the author but also a member of group@mycompany. During the SMTP session the email is sent once, with the recipients set to a@mycompany and group@mycompany and a single ESMTP ID, so Domino should well be able to see that the mail should only be delivered to a@mycompany once. Can I make Lotus Domino behave in this sane fashion?

  • How to put 1000 lightweight server applications in the cloud

    - by Dan Bird
    The company I work for sells a commercial desktop/server app that runs on any non-dedicated Windows PC or server and uses Tomcat for all interactions with the application. Customers are asking that we host their instance of the application so they don't have to run it locally on their own servers. The app is lightweight, and an average server could in theory handle 25-50 instances before users would notice a slowdown. However, only 1 instance can run per Windows instance (because the application writes to a common registry branch), so we'd need something like VMware to create 25-50 Windows instances. We know we eventually need to reprogram it to make it truly cloud-worthy, but what would you recommend for a server farm or the like for this? We don't have the setup to purchase our own servers, so we must use a third party. We have budgeted $500-$1000 per year per customer for this service. Thanks in advance for your suggestions, experiences and guidance.

  • securing unpatched websites

    - by neuron
    I have a client with a lot (read: several thousand) of websites on several old CMS solutions that are no longer maintained. Moving all of them to a maintained solution isn't really an option at this point, so I'm thinking about ways to secure the solutions without patching them. They are mostly Joomla 1.0/1.5 and WordPress. What I'm thinking is something like this:

    - mod_suexec to lock everyone into their own home directory
    - AppArmor to deny any and all file writes by default (deny by default, whitelist things like "images" directories)
    - .htaccess to prevent anything in writable directories from being executed (i.e. disable php_engine for the images/ directory)
    - MySQL triggers that check the "users" tables to prevent adding new admins/superadmins

    Does this make sense? Is it viable? Am I missing something obvious?

  • Insert Hyperlink via VBA

    - by Martin
    I have a Word VBA macro that loops through a directory and writes the paths of files matching certain criteria into a new Word document. This works well as plain text (as part of a loop):

        wdDocResults.Content.InsertAfter objFile.Path & Chr(13)

    However, I'd like them to be hyperlinks. The following works as a single macro, but when called from within another script it does nothing at all (no matter whether the path is provided as a variable or a string, or as H:... or \\MyServernameAsNetDrive...):

        ActiveDocument.Hyperlinks.Add Anchor:=Selection.Range, Address:=objFile.Path, _
            SubAddress:="", ScreenTip:="", TextToDisplay:=objFile.Path

    If I try to select the current line, in order to make sure something is selected at the right place, I get "error: out of memory":

        wrdDocResults.Content.InsertAfter objFil.Path
        Selection.Expand wdLine
        ActiveDocument.Hyperlinks.Add Anchor:=Selection.Range, Address:=objFile.Path, _
            SubAddress:="", ScreenTip:="", TextToDisplay:=objFile.Path

    I also tried inserting a string resembling the hyperlink field code ({ Hyperlink "..." }), which is of course not recognized... Any help is appreciated. Thanks in advance!

  • VMware ESX installation on a SATA disk

    - by ilansch
    I have a PC with a Gigabyte H77 motherboard, an Intel i5-3550 CPU, 8 GB of 1600 MHz RAM, and a 500 GB WD SATA III hard disk (7200 rpm). I wish to install ESX on it and run some virtual machines; not a lot, something like 2-3 VMs. My hard disk is SATA: is it possible to install ESX Server on it? I am not worried about loading issues. When I try loading the installation it says it cannot detect my disk (since it's not a SCSI disk). How can I bypass this, or find a solution? Thanks.

  • MySQL Locking Up

    - by Ian
    I've got an InnoDB table that gets a lot of reads and almost no writes (roughly 1 write for every 400,000 reads). I'm running into a pretty big problem, though, when I do an INSERT into the table: MySQL completely locks up. It uses 100% CPU, and every single other table (even in other databases) has its status set to "Locked" until the INSERT is done. This is a big problem because MySQL stays locked up for up to 4 minutes. I'm using version 5.1.47 (rpm from mysql.com). Any ideas?

  • Is Software RAID1 Using mdadm with a Local Hard Disk and GNBD Possible?

    - by Travis
    I have multiple webservers which use many small files to create dynamic web pages. Caching the web pages isn't an option. The webserver also performs writes, so I need a synchronous filesystem. I'm looking to maximise performance, as it's my understanding that small files are the weakness (to varying degrees) of a cluster filesystem over Ethernet. Currently I'm using CentOS 5.5, 64-bit. Since it's only about 300MB of data, I'm looking at mdadm RAID-1 over a GNBD device and a local hard disk, using the "--write-mostly" option so that reads are done from the local hard disk. Is this possible? If so, is there any advantage to making it a tmpfs disk instead of a local hard disk? Or will the files on the local hard disk just get cached in RAM anyway, so that I won't see a performance gain by using tmpfs, assuming there's enough RAM available?

  • What does dd conv=sync,noerror do?

    - by dding
    So in what case does adding conv=sync,noerror make a difference when backing up an entire hard disk onto an image file? Is conv=sync,noerror a requirement when doing forensic work? If so, why, with reference to Linux Fedora?

    Edit: OK, so if I do dd without conv=sync,noerror and dd encounters a read error while reading a block (let's say of size 100M), does dd just skip the 100M block and read the next block without writing anything (dd with conv=sync,noerror writes zeros for the 100M of output; so what about this case)? And is the hash of the original hard disk different from that of the output file if done without conv=sync,noerror? Or is that only when a read error occurred?
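
    To make the semantics concrete, here is a rough Python model of the read loop (a sketch under simplifying assumptions, not a forensic tool): noerror means a failed read is skipped rather than aborting the copy, and sync means the missing or short block is zero-padded so the output stays block-aligned. That padding is exactly why an image made with conv=sync,noerror can hash differently from the source.

        import os

        def dd_like_copy(src_path, dst_path, bs=4096):
            # Rough model of "dd conv=sync,noerror" with an assumed 4 KiB
            # block size (real dd defaults to 512 bytes). getsize() works
            # for a regular source file; a raw device would need a probe.
            size = os.path.getsize(src_path)
            with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
                offset = 0
                while offset < size:
                    src.seek(offset)
                    try:
                        block = src.read(bs)
                    except OSError:   # unreadable sector(s) in this block
                        block = b""   # noerror: skip instead of aborting
                    if len(block) < bs:
                        # sync: zero-fill so the output keeps its block
                        # alignment; this padding changes the image hash.
                        block = block.ljust(bs, b"\x00")
                    dst.write(block)
                    offset += bs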

  • Why is music tagging software so inconsistent?

    - by Billy ONeal
    Hello :) A few years ago I spent an insane amount of time using the excellent Tag&Rename program. However, I find that for random, inexplicable reasons, some music tools simply disregard my tags, and drop or destroy the album art, or have strange handling around some characters. For example, "AC/DC" is poorly handled by most music players when I use Tag&Rename to write the tags. Is there a piece of software that works like Tag&Rename but is more compatible, or is there a way to ensure Tag&Rename writes more compatible tags?

  • Symlink across local volumes in webroot?

    - by geerlingguy
    I am looking for a good short-term solution to storage-space concerns on my website. Currently, I have all uploaded files (flash video, images, etc.) inside the 'files' directory in my web root (/home/account/public_html/files). That directory is located on my high-speed main hard drive (a 15k SCSI drive). I have another drive with much more capacity, but spinning at 10k rpm (so still fast, but not as good for random reads/writes as the main drive). The second drive is mounted at /backup; right now I'm just using it as a backup volume. I would like to create a symlink from my /home/account/public_html/files folder to /backup/files, and have all files reside on the second drive. However, if someone accesses a file at http://www.example.com/files/filename.jpg, would it still work if I symlinked to the second drive? (Basically, would Apache/PHP automatically know to follow the symlink for that directory?)
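
    For what it's worth, symlinks are resolved by the kernel, so PHP's filesystem calls follow them transparently; Apache serves through them as long as Options FollowSymLinks (or SymLinksIfOwnerMatch) is enabled for the directory, which it often is by default. A minimal sketch of the link setup in Python (paths as in the question; it assumes the data has already been moved to /backup/files and the old directory removed):

        import os

        src = "/backup/files"                         # big, slower volume
        link = "/home/account/public_html/files"      # web root location

        # Create the link where the old directory used to live, then
        # confirm it resolves where we expect.
        os.symlink(src, link)
        print(os.path.realpath(link))   # -> /backup/files
        print(os.listdir(link)[:5])     # reads pass through the link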

  • What is the effect on LVM snapshot size when a file block is rewritten with its original contents?

    - by NevilleDNZ
    I'm exploring using LVM snapshots to take off-site incremental archives from a snapshotted "master" file system. In essence: simply copy across only the files on the "master" that have changed since the last incremental copy to the "archive", then snapshot the "archive" to retain the incremental. I am a bit puzzled by the block-usage behaviour of the archive's own incremental snapshot. I'm expecting that LVM is not smart enough to know that the "file block" is actually unchanged, and that a new copy will be allocated and written for the fresh "archive" file system. Can anyone confirm this, or point me to a document/page that gives some hints? BTW: the OS hard disk cache, the hard disk's physical cache and the hard disk itself likewise don't need to do any actual "disk writes", as the "disk block" is unchanged. Any pointers to discussion of this style of optimisation would also be interesting.

  • Multiple users writing to one Samba mount point in OSX

    - by Sam
    I have an OSX box containing a script which writes a unique file to a Samba share; the first part of the script mounts the share. There are 2 users on the machine, UserA and UserB. Each needs to be able to run this script at any given time, but only the user who mounted the share is able to write to it. I really need both users to have rwx access. Here is what I have tried:

    - mounting, then chmod'ing the mount point (no effect; overruled by the Samba server?)
    - chmod'ing the mount point, then mounting (same as above)
    - sudo mount_smbfs

    Both users have admin privileges. Ideally a solution would be executable by one of the users (contained in the script) and not rely on mounting at machine boot time. Any ideas appreciated, thanks!
