Search Results

Search found 30085 results on 1204 pages for 'read'.


  • How to create a read-only root Linux: can it be mounted as writeable for persistent changes?

    - by Mr Anderson
    I'd like a read-only file system that runs almost entirely in RAM, but where the compact flash or hard drive can be mounted and made writeable to make persistent changes. How do I do this on Linux? I've looked at several tutorials, but none really explain how to create such a system with the option of being able to mount the storage device and make persistent changes. I looked at this so far: http://chschneider.eu/linux/thin_client/ I also looked on the old Gentoo wiki, but the article was very specific to Gentoo. I'll be using a Debian-based Linux, but it would be nice if someone could explain how to do this with pretty generic instructions that would work on any Linux distro. Thanks.
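
    A rough sketch of the usual approach, for anyone landing here: keep the real root device mounted read-only, stack a RAM-backed overlay on top of it, and remount the device read-write only when a persistent change is needed. The device names and mount points below are made up for illustration, and this assumes a kernel with overlayfs (older Debian kernels used aufs for the same job).

        # real root stays read-only on the flash/hard drive
        mount -o ro /dev/sda1 /mnt/ro
        # all runtime writes land in RAM
        mount -t tmpfs tmpfs /mnt/rw
        mkdir -p /mnt/rw/upper /mnt/rw/work
        # merged view: reads fall through to /mnt/ro, writes stay in /mnt/rw
        mount -t overlay overlay \
            -o lowerdir=/mnt/ro,upperdir=/mnt/rw/upper,workdir=/mnt/rw/work /mnt/root
        # for a persistent change: remount the device read-write, apply the
        # change to it directly, then remount read-only again
        mount -o remount,rw /dev/sda1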

    Read the article

  • Read video from dvd in RAW filesystem

    - by kpinhack
    Hello, I've got some DVDs that were recorded on a DVD recorder. I cannot read those DVDs on Windows PCs (I tried multiple PCs). The properties of the discs say that the filesystem of the disc is RAW, and I think that is the reason why Windows does not read the disc. If that is true, what can I do to read the DVDs? Or what else could be the problem? EDIT: The Windows PCs I tried play back regular DVDs without any problem, and the "RAW" DVDs recorded with the DVD recorder will not play on a standalone DVD player. Thank you!

    Read the article

  • Here's what I want... what do I tell my IT guy I need?

    - by Jason
    I work in the graphics department for a real estate brokerage, and we deal with a lot of photos. Agents take the photos, upload them to me, I touch up and standardize the photos, then I add them to an in-house server for future use by the graphics dept. I'd like to make the "sanitized" photo files available to the agents to use when they want, but I don't want the agents poking around the graphics department's files (things get misplaced, renamed and messed up in a hurry). What would be perfect is if we could create a read-only "mirror" (correct term?) of that server that could be accessed by the agents as needed, but which wouldn't feed back into our "sanitized" file system. What do I tell the office IT guy I need (platform-agnostically, because I don't know what he's running)? I never get to see him face-to-face, so I've got to word this as carefully and explicitly as possible. Thoughts? Thanks in advance for your feedback
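
    If it helps to frame the request: one low-tech way to get that "mirror" is a one-way scheduled copy from the graphics share into a second share that is exported to the agents read-only, so nothing the agents do can flow back into the sanitized files. A sketch with made-up paths (the read-only export itself would be set up on whatever platform the server runs):

        # one-way sync; --delete keeps the mirror identical to the source
        rsync -a --delete /srv/graphics/sanitized/ /srv/agent-mirror/
        # run it nightly from cron, e.g.
        # 0 2 * * * rsync -a --delete /srv/graphics/sanitized/ /srv/agent-mirror/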

    Read the article

  • Microsoft Office "Read-only" warning not appearing on Samba shares on Mac OS X Server

    - by bongo
    Hi, some of my users don't get the "read-only" warning (but "read-only" does appear in the title bar and the document is indeed opened read-only) when opening Office 2007 documents already opened by another user. We run the Samba share off an XSan volume under Mac OS X 10.5.8 Server. Strict locking is on but oplocks are off (from Server Admin). At home, with a simple Samba share on a 10.6.3 server, it works correctly. Any ideas? Or is this a 10.5.8 behavior?

    Read the article

  • XP can't read data from transferred HD

    - by Alexander Miller
    Computer A, running XP, died. XP was installed on a fresh HD in computer B. The slaved data-backup HD from A was installed as a slave on B. B will not read it; it shows only two folders, Recycler and System Volume Info. All of these are older machines with IDE drives. What's going on, and how can I read/transfer the data from the transferred drive? This was only a trial run. Actually I will need to transfer the master HD from A - which has XP on it - and read from its data partition, because (blush) the backup drive was not up to date.

    Read the article

  • Error mounting samba share, cannot mount block device xxxx read-only

    - by Jeff Ward
    After installing Ubuntu 12.04, I'm trying to mount a samba share from Windows under Linux, using a scripted command that's always worked, and the server hasn't changed. The error is as follows: $ mount -t cifs //<host>/<share> /media/<share> -o username=<user>,password=<pass> mount: block device //<host>/<share> is write-protected, mounting read-only mount: cannot mount block device //<host>/<share> read-only $ I've read a lot of discussions about permissions, but unfortunately, that wasn't the issue. I'm submitting my own answer below for reference, hope this helps someone else.
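
    A hedged note for anyone who hits the same message: on a fresh 12.04 install this error often just means the cifs mount helper is not installed, so mount falls back to treating //host/share as a local block device. Checking for the helper costs nothing (package name assumes stock Ubuntu):

        # if mount.cifs is missing, install it and retry the mount
        which mount.cifs || sudo apt-get install cifs-utils
        sudo mount -t cifs //<host>/<share> /media/<share> -o username=<user>,password=<pass>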

    Read the article

  • External hard disk's sector could not be read?

    - by mgpyone
    I have a 500 GB Seagate external hard disk (NTFS). Currently, I can't open it in Windows. I've tried the chkdsk command, but it stopped and can't continue checking the disk. I've also tried fsck on a Mac. It shows me this error: /Volumes/<HD Name>/ is not a character device CONTINUE? yes /Volumes/<HD Name>/ (NO WRITE) CANNOT READ: BLK 16 CONTINUE? yes THE FOLLOWING DISK SECTORS COULD NOT BE READ: 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, ioctl (GCINFO): Inappropriate ioctl for device fsck: /Volumes/<HD Name>/: can't read disk label About 300 GB of the volume is in use, so it's hard to back everything up and format again. Any helpful suggestions and solutions will be appreciated.
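
    With unreadable sectors, the usual advice is to stop running filesystem repair tools against the failing disk and instead image it onto healthy storage with GNU ddrescue, then repair and recover from the image. A sketch, assuming the disk shows up as /dev/sdb on a Linux machine and there is enough free space for the image:

        # first pass: grab everything that reads cleanly, skip the bad areas
        sudo ddrescue -n /dev/sdb /mnt/backup/rescue.img /mnt/backup/rescue.log
        # second pass: retry just the bad sectors a few times
        sudo ddrescue -r3 /dev/sdb /mnt/backup/rescue.img /mnt/backup/rescue.log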

    Read the article

  • RAID setup for maximizing data retention and read speed

    - by cat pants
    My goals are simple: maximize data retention safety, and maximize read speeds. My first instinct is to do a three-drive software RAID 1. I have only used fakeraid RAID 1 in the past and it was terrible (it would actually have led to data loss if it weren't for backups). Would you say software RAID 1 or a cheap actual hardware RAID card? The OS will be Linux. Could I start with a two-drive RAID 1 and add a third drive on the fly? Can I hot swap? Can I pull one of the drives, throw it into a new machine, and be able to read all the data? I do not want a situation where I have a RAID card fail and have to try and find the same chipset in order to read my data (which I am assuming can happen). Please clarify any points on which it sounds like I have no idea what I am talking about, as I am admittedly inexperienced here. (My hardest lesson was fakeraid, lol) Thanks!
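
    On the "add a third drive on the fly" question, Linux md does let you grow a two-disk mirror to three copies without downtime; a sketch with made-up partition names:

        # initial two-disk mirror
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
        # later: attach a third disk and tell md the mirror now has three members
        mdadm --manage /dev/md0 --add /dev/sdc1
        mdadm --grow /dev/md0 --raid-devices=3

    Each member of an md RAID 1 carries a complete copy of the data, so a pulled drive can be assembled (even alone, as a degraded array) on another Linux box and read, with no special controller required.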

    Read the article

  • Why can't I set Windows 7 folder to Writeable?

    - by Clay Nichols
    Moved a SATA HD from one PC to another. Copied almost all the files from the old drive to the new, except for one folder ("oldFolder") and its subfolders and files. I tried to copy just a single file (to simplify things): I can't copy that file to the same directory (I get the above error). I CAN copy the file to the Desktop. I cannot copy the file's container folder to the Desktop. Under Properties for that OldFolder: Read Only is checked. Security: Everyone is set to Allow everything except "special permissions". All users are set to allow WRITE.

    Read the article

  • Read binary file into a struct C#

    - by Robert Höglund
    I'm trying to read binary data using C#. I have all information about the layout of the data in the files I want to read. I'm able to read the data "chunk by chunk", i.e. getting the first 40 bytes of data, converting it to a string, getting the next 40 bytes, ... Since there are at least three slightly different versions of the data, I would like to read the data directly into a struct. It just feels so much more right than reading it "line by line". I have tried the following approach, but to no avail: StructType aStruct; int count = Marshal.SizeOf(typeof(StructType)); byte[] readBuffer = new byte[count]; BinaryReader reader = new BinaryReader(stream); readBuffer = reader.ReadBytes(count); GCHandle handle = GCHandle.Alloc(readBuffer, GCHandleType.Pinned); aStruct = (StructType) Marshal.PtrToStructure(handle.AddrOfPinnedObject(), typeof(StructType)); handle.Free(); The stream is an opened FileStream from which I have begun to read. I get an AccessViolationException when using Marshal.PtrToStructure. The stream contains more information than I'm trying to read, since I'm not interested in data at the end of the file. The struct is defined like: [StructLayout(LayoutKind.Explicit)] struct StructType { [FieldOffset(0)] public string FileDate; [FieldOffset(8)] public string FileTime; [FieldOffset(16)] public int Id1; [FieldOffset(20)] public string Id2; } The example code is changed from the original to make this question shorter. How would I read binary data from a file into a struct?

    Read the article

  • NetworkStream.Read delay .Net

    - by Gilbes
    I have a class that inherits from TcpClient. In that class I have a method to process responses. In that method I get the NetworkStream with MyBase.GetStream and call Read on it. This works fine, except the first call to Read blocks too long. And by too long I mean that the socket has received plenty of data, but won't read it until some arbitrary limit is reached. I can see that it has received plenty of data using the packet sniffer Wireshark. I have set the receive buffer to small amounts, and very small amounts (like just a few bytes), to no avail. I have done the same with the buffer byte array I pass to the Read method, and it still delays. Or to put it another way: I am downloading 600k. The download takes 5 seconds (at a little over 100k/second connection to the server, which makes sense). The initial Read call takes 2-3 seconds and tells me only 256 bytes are available (256 is the receive buffer size and the size of the array I read into). Then, magically, the other few hundred thousand bytes can be read in 256-byte chunks in only a few process ticks each. Using a packet sniffer, I know that during those initial 2-3 seconds the socket has received much more than just 256 bytes. My connection wasn't .25k/second for 3 seconds and then 400k for 2 seconds. How do I get the bytes from a socket as they come in?

    Read the article

  • MySQL my.cnf file not being read, Ubuntu 10.04 64bit

    - by reallyordinary
    I've been researching this for a few hours with no luck. Basically it looks like my server's my.cnf file isn't being read at all. I've searched my server, and there's only one my.cnf file on it, located at /etc/mysql/my.cnf. Its ownership is root:root. I'm running Ubuntu 10.04 64bit on a Linode.com server. I have the latest versions of MySQL and PHP installed. I've edited the my.cnf file, commented out "skip-innodb", and have set innodb to be the default storage engine using default-storage-engine = innodb, and then restarted mysql. But when I do show engines, MyISAM is still coming up as the default engine. Also - none of the innodb settings I've added to the my.cnf file are being read. For example, I have this in my.cnf: innodb_buffer_pool_size=4G But in phpmyadmin, InnoDB is showing as having a buffer pool size of 8,192 KiB. Similarly, I have this in the my.cnf: innodb_data-file_path = ibdata1:500M:autoextend But in phpmyadmin, it's reading as ibdata1:10M:autoextend. It doesn't look like MyISAM info is being read from the my.cnf file either. The my.cnf file has skip-external-locking commented out, but it's showing as "on" in phpmyadmin. So - yeah, it looks like nothing in the my.cnf file is being read at all. But the server still works. I'm running a Drupal site on it and it seems to operate fine. So mysql seems to be drawing default settings from... some mysterious secret location. Any idea how I can make mysql see and use this my.cnf file? Actually, wait - it looks like it may be being read, not sure. I checked the error.log and found this: 101128 4:28:52 [ERROR] Cannot find or open table databasename/cache_apachesolr from the internal data dictionary of InnoDB though the .frm file for the table exists. Maybe you have deleted and recreated InnoDB data files but have forgotten to delete the corresponding .frm files of InnoDB tables, or you have moved .frm files to another database? or, the table contains indexes that this version of the engine doesn't support. See http://dev.mysql.com/doc/refman/5.1/en/innodb-troubleshooting.html how you can resolve the problem. InnoDB: Error: auto-extending data file ./ibdata1 is of a different size InnoDB: 640 pages (rounded down to MB) than specified in the .cnf file: InnoDB: initial 32000 pages, max 0 (relevant if non-zero) pages! InnoDB: Could not open or create data files. InnoDB: If you tried to add new data files, and it failed here, InnoDB: you should now edit innodb_data_file_path in my.cnf back InnoDB: to what it was, and remove the new ibdata files InnoDB created InnoDB: in this failed attempt. InnoDB only wrote those files full of InnoDB: zeros, but did not yet use them in any way. But be careful: do not InnoDB: remove old data files which contain your precious data! 101128 4:28:52 [ERROR] Plugin 'InnoDB' init function returned error. 101128 4:28:52 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed. 101128 4:28:52 [ERROR] /usr/sbin/mysqld: unknown variable 'innodb_lock_wait_timout=50' 101128 4:28:52 [ERROR] Aborting 101128 4:28:52 [Note] /usr/sbin/mysqld: Shutdown complete
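
    For what it's worth, the quoted error log suggests the file is being read after all: the 32000-page (roughly 500 MB) figure InnoDB complains about matches the ibdata1:500M line, and the misspelled innodb_lock_wait_timout entry makes that startup attempt abort, so whatever mysqld instance is still serving Drupal presumably never picked up the edited file. A quick, non-destructive way to confirm which option files a given mysqld build reads and what it would pick up from them:

        # list the option files this mysqld build looks at, in order
        mysqld --verbose --help 2>/dev/null | grep -A1 'Default options'
        # show what it would actually read from them
        my_print_defaults mysqld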

    Read the article

  • Image Management System with Different Levels of Access

    - by Jason
    I work in the graphics department for a real estate brokerage, and we deal with a lot of photos. Agents take the photos, upload them to me, I touch up and standardize the photos, then I add them to an in-house server for future use by the graphics dept. I'd like to make the "sanitized" photo files available to the agents to use when they want, but I don't want the agents poking around the graphics department's files (things get misplaced, renamed and messed up in a hurry). What would be perfect is if we could create a read-only "mirror" (correct term?) of that server that could be accessed by the agents as needed, but which wouldn't feed back into our "sanitized" file system. Edit: I'm looking for an automatic solution... manually posting the files to two separate locations seems like an inelegant solution when working digitally. Edit: I'm trying to avoid any access barriers to the public (dirty) file system (however it's finally implemented). There are 40-50 real estate agents who need to access these files, half of whom can't reliably download an email attachment.

    Read the article

  • How to check the read/write status of storage media in Python

    - by mukul sharma
    Hi all, how can I check the read/write permission of the storage media a file lives on? I.e., assume I have to write some file inside a directory, and that directory may be on read-only media (like a CD or DVD). So how can I check whether that storage media (CD, hard disk) is read-only or read-write? I am using the Windows XP OS. Thanks.

    Read the article

  • How to identify the process in a kernel read function without using current->pid

    - by yoavstr
    My lecturer wants us to build a module where we need to identify each reading process, and when the same process calls read twice on the same writer message we should insert it into a queue that we wake up when all readers have read. I achieved this goal by using a list of PIDs and a read/not_read boolean inside each node, but he decided to be nasty and requires us to do it with some argument from the file struct. Can you please help me?...

    Read the article

  • skipping a variable using while read in bash

    - by Aleksandar Ivanisevic
    I'm reading a few variables from a file using while read a b c; do (something); done < filename. Is there an elegant way to skip a variable (read in an empty value), i.e. if I want a=1 b= c=3, what should I write in the file? Right now I'm putting 1 "" 3 and then using b=$(echo $b | tr -d \") but this is pretty cumbersome, IMHO. Any ideas?
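
    One sketch that avoids the quoting entirely, assuming the file format is yours to choose: switch read to a delimiter other than whitespace, so an empty field stays empty.

        # file contains lines like:  1::3
        while IFS=: read a b c; do
            echo "a=$a b=$b c=$c"   # prints a=1 b= c=3
        done < filename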

    Read the article

  • MySQL - "FLUSH TABLES WITH READ LOCK" started automatically

    - by mingyeow
    I would like to understand how this happened. I was running a query that would take a long time, but should not lock up any table. However, my DBs were practically down - it seems like it was being locked up by "FLUSH TABLES WITH READ LOCK" 03:21:31 select type_id, count(*) from guid_target_infos group by type_id 02:38:11 select type_id, count(*) from guid_infos group by type_id 02:24:29 FLUSH TABLES WITH READ LOCK But I did not start this command. Can someone tell me why it was started automatically?
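
    FLUSH TABLES WITH READ LOCK is the global lock that backup tooling typically takes (mysqldump with --master-data or --lock-all-tables, LVM/snapshot backup scripts, some replication setups), so a scheduled backup kicking off around 02:24 is the usual suspect. Two hedged ways to hunt for the origin:

        # who is connected and what they are running right now
        mysql -e "SHOW FULL PROCESSLIST"
        # any scheduled job that dumps or snapshots the databases?
        grep -r mysqldump /etc/cron* /var/spool/cron 2>/dev/null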

    Read the article

  • CentOS 6.3 Perl CGI SELinux file read access

    - by Steed
    I have a CGI script called index.cgi. It is trying to read a log file called 10.128.0.242.2012.sep.20.downloaded.txt under the path /var/log/trafcount/. It appears that it is being blocked by SELinux. The audit log shows something like type=AVC msg=audit(1348158321.873:1472116): avc: denied { read } for pid=11620 comm="index.cgi" name="10.128.0.242.2012.sep.20.downloaded.txt" dev=dm-0 ino=395264 scontext=unconfined_u:system_r:httpd_sys_script_t:s0 tcontext=unconfined_u:object_r:var_log_t:s0 tclass=file How can I allow this script full access to all files under /var/log/trafcount?
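
    Two common ways out, sketched below: either relabel the log directory with a type the CGI domain is allowed to read, or build a small local policy module from the recorded denial. The type name is a guess for a stock CentOS 6 targeted policy; adjust if yours differs.

        # option 1: relabel the directory (semanage needs policycoreutils-python)
        semanage fcontext -a -t httpd_sys_content_t "/var/log/trafcount(/.*)?"
        restorecon -Rv /var/log/trafcount
        # option 2: generate and load a policy module from the AVC denial
        grep index.cgi /var/log/audit/audit.log | audit2allow -M trafcountlocal
        semodule -i trafcountlocal.pp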

    Read the article

  • Very poor read performance compared to write performance on md(raid1) / crypt(luks) / lvm

    - by Android5360
    I'm experiencing very poor read performance over raid1/crypt/lvm. In the same time, write speeds are about 2x+ faster on the same setup. On another raid1 setup on the same machine I get normal read speeds (maybe because I'm not using cryptsetup). OS related disks: sda + sdb. I have raid1 configuration with two disks, both are in place. I'm using LVM over the RAID. No encryption. Both disks are WD Green, 5400 rpm. IO test results on this raid1: dd if=/dev/zero of=/tmp/output.img3 bs=8k count=256k conv=fsync - 2147483648 bytes (2.1 GB) copied, 22.3392 s, 96.1 MB/s sync echo 3 > /proc/sys/vm/drop_caches dd if=/tmp/output.img3 of=/dev/null bs=8k - 2147483648 bytes (2.1 GB) copied, 15.9 s, 135 MB/s And here is the problematic setup (on the same machine). Currently I have only one sdc (WD Green, 5400rpm) configured in software raid1 + crypt (luks, serpent-xts-plain) + lvm. Tomorrow I will attach another disk (sdd) to complete this two-disk raid1 setup. IO tests results on this raid1: dd if=/dev/zero of=output.img3 bs=8k count=256k conv=fsync 2147483648 bytes (2.1 GB) copied, 17.7235 s, 121 MB/s sync echo 3 > /proc/sys/vm/drop_caches dd if=output.img3 of=/dev/null bs=8k 2147483648 bytes (2.1 GB) copied, 36.2454 s, 59.2 MB/s We can see that the read performance is very very bad (59MB/s compared to 135MB/s when using no encryption). Nothing is using the disks during benchmark. I can confirm this because I checked with iostat and dstat. Details on the hardware: disks: all are WD green, 5400rpm, 64mb cache. cpu: FX-8350 at stock speed ram: 4x4GB at 1066Mhz. Details on the software: OS: Debian Wheezy 7, amd64 mdadm: v3.2.5 - 18th May 2012 LVM version: 2.02.95(2) (2012-03-06) LVM Library version: 1.02.74 (2012-03-06) LVM Driver version: 4.22.0 cryptsetup: 1.4.3 Here is how I configured the slow raid1+crypt+lvm setup: parted /dev/sdc mklabel gpt type: ext4 start: 2048s end: -1 Now the raid, crypt and the lvm configuration: mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdc cryptsetup --cipher serpent-xts-plain luksFormat /dev/md1 cryptsetup luksOpen /dev/md1 md1_crypt vgcreate vg_sql /dev/mapper/md1_crypt lvcreate -l 100%VG vg_sql -n lv_sql mkfs.ext4 /dev/mapper/vg_sql-lv-sql mount /dev/mapper/vg_sql-lv_sql /sql So guys, can you help me identify the reason and fix it? It has to be something with the cryptsetup as there is no such read slowdown on the other setup (sda+sdb) where no encryption is present. But I have no idea what to do. Thanks!
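
    A hedged suggestion for narrowing this down: benchmark each layer of the stack read-only with the cache dropped, so it becomes obvious whether the raw md device, the dm-crypt mapping, or the LV is where the throughput collapses. The serpent cipher is also worth suspecting on its own: unlike AES it gets no hardware acceleration, so reads that must decrypt at disk speed can end up CPU-bound (newer cryptsetup releases ship a cryptsetup benchmark command, though 1.4.3 may not have it). Device names below are the ones from the question.

        sync; echo 3 > /proc/sys/vm/drop_caches
        # raw mirror
        dd if=/dev/md1 of=/dev/null bs=1M count=2048 iflag=direct
        # decrypted mapping
        dd if=/dev/mapper/md1_crypt of=/dev/null bs=1M count=2048 iflag=direct
        # logical volume
        dd if=/dev/mapper/vg_sql-lv_sql of=/dev/null bs=1M count=2048 iflag=direct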

    Read the article

  • SQL Server 2012 Read-Only Replica physical requirements

    - by Ddono25
    We are going to be setting up two replicas of our DataMart and related databases, and our plan includes using Hyper-V VM's to handle the load. When creating the VM's, we cannot find specific requirements or recommendations for RAM/CPU power for read-only replicas. Our current Primary DataMart setup has 64GB RAM and Two Quad-Core procs, which so far has been adequate for our usage. What should be the server setup for replicas and SQL Server to adequately support the read-only usage?

    Read the article

  • Slow SSD Read on Server 2008 SP2

    - by Ruben L.
    I'm using an Intel X25-M 80GB with the latest firmware. On XP using IDE, and on Win7 using AHCI, I get read speeds up to 250 MB/s. But when running it with Server 2008 SP1 or SP2 on AHCI, I get read speeds around 180 MB/s. I've updated drivers for 2008 and tested with the write cache on/off. Any input would be appreciated. Thanks!

    Read the article

  • How to clear Windows disk read cache?

    - by Sebastiaan Megens
    For performance testing I need to clear Windows' disk read cache. I tried googling but I couldn't find anything other than rebooting or other manual stuff. Before I give in and do that, I'd like to know if anyone knows of a way to clear Windows disk read cache. I'm testing on Windows 7, but I'm also interested in Windows XP solutions.

    Read the article

  • Rsync over ssh: "ERROR: module is read only" suddenly appeared

    - by user978548
    I've used rsync over ssh for some time to back up my shared host contents to my personal Synology NAS (a 212j, for that matter), and it worked quite well. For information, I use a password-less ssh connection. Three days ago I updated my NAS software, and since then (or at least I believe it's since then) the backup won't work anymore. I get the following error on the host: rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]: Broken pipe (32) ERROR: module is read only ..which I do not understand. Besides that, nothing changed that I know of on either the source or the destination that could be related to rsync or ssh. I did check a few things and all seems to be alright: I can still connect through ssh from the host to my NAS with the right user, so ssh stuff like keys hasn't changed. I also have the correct file permissions on the NAS (I checked, and also tried to create files, directories, .. with the user used by rsync through ssh). I read here and there that the error means I have to ensure that my rsyncd.conf has the right read only = no in it, but as far as I know I never used rsyncd, I never configured anything for it, and until now it worked like a charm. I use the following command to do the backup: rsync -ab --recursive \ --files-from="$FILES_FROM" \ --backup-dir=backup_$SUFFIX \ --delete \ --filter='protect backup_*' \ $WDIRECTORY/ \ remote_backup:$REMOTE_BACKUP/ So I'm stuck and really can't figure out what happened. Edit: As suggested in comments, I also tried passing commands to ssh (but not from inside an ssh session), which worked as expected, and also tried a single rsync command, which didn't work, failing just like the complete backup command. (sharedHost):hostuser:~ > touch test.txt (sharedHost):hostuser:~ > rsync test.txt remote_backup:backups/test.txt ERROR: module is read only rsync error: syntax or usage error (code 1) at main.c(1034) [Receiver=3.0.8] rsync: connection unexpectedly closed (9 bytes received so far) [sender] rsync error: error in rsync protocol data stream (code 12) at io.c(601) [sender=3.0.7] and (sharedHost):hostuser:~ > ssh remote_backup 'touch /abs_path_to_backups/backups/test2.txt && echo "ProoF" > /abs_path_to_backups/backups/test2.txt' (sharedHost):hostuser:~ > ssh remote_backup 'cat /abs_path_to_backups/backups/test2.txt' ProoF
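
    For what it's worth, "module is read only" is an rsync daemon error rather than an ssh or filesystem one, which suggests the DSM update left the NAS side answering as an rsync daemon with a read-only module. That points at the NAS's rsync/network-backup service settings rather than anything on the host. A hedged place to look (the config path is a guess and may differ on DSM):

        # on the NAS: is there a daemon config, and is the module read-only?
        ssh remote_backup 'cat /etc/rsyncd.conf'
        # a module meant to accept backups would need something like:
        #   [backups]
        #   path = /abs_path_to_backups
        #   read only = no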

    Read the article

  • Konsole Read/Write Access to PTY Device

    - by Carmen
    When I run something like: $ konsole -e "sleep 30" I get this message Konsole is unable to open a PTY (pseudo teletype). It is likely that this is due to an incorrect configuration of the PTY devices. Konsole needs to have read/write access to the PTY devices. But if I try to run this: $ gnome-terminal -e "sleep 30" Then it is fine, and it does not throw any error. How can I change the read/write access of the Konsole to the PTY devices?
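
    A few hedged things to check, since this message usually comes down to the devpts filesystem rather than Konsole itself: make sure /dev/pts is mounted and that /dev/ptmx is world readable/writeable.

        mount | grep devpts          # should show devpts mounted on /dev/pts
        ls -l /dev/ptmx              # usually crw-rw-rw-
        # if the permissions are off, these are the usual fixes
        sudo chmod 666 /dev/ptmx
        sudo mount -o remount,gid=5,mode=620 /dev/pts   # gid=5 is the tty group on most distros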

    Read the article
