Search Results

Search found 24117 results on 965 pages for 'write'.

Page 735/965 | < Previous Page | 731 732 733 734 735 736 737 738 739 740 741 742  | Next Page >

  • Tridion 2011 SP1 Core Service - expose to live server within PROD env

    - by Neil
    We have a requirement to allow our users to submit information about their "projects" - a small piece of text and a single image they upload. Ultimately we'll have a listing page of user-contributed projects that others can comment on and rate. We've decided to use Tridion's UGC for rating & comments site-wide for this first phase, which has got me thinking - UGC is tied to Tridion published pages & components, so if we want UGC on our user-submitted projects, they'll have to be created within Tridion as components themselves, rather than sitting in some custom db table? Is this where the Core Service could come in? My understanding is that the CD Web Service is for retrieval, not for interacting with the Content Manager. Is it OK (!) architecturally to expose the Core Service only to our live application servers so our backend .NET code can create "project components" that can then be published by editors, allowing them to be commented on? Everything sounds pretty neat and tidy apart from the "exposing Core Service to live servers" bit. Without this, though, I'd have to write a custom way to "transfer" it back over to the Content Manager - maybe like Audience Manager Sync works? Anyone done this before?

    Read the article

  • What is the ideal way to set up multiple FTP enabled web accounts on Fedora?

    - by Nicholas Flynt
    I'm setting up a test server for use as a web development platform, and I'd like to mimic as closely as I can a typical shared hosting setup. That is, I'd like my server to have multiple user FTP accounts, each of which links to a directory containing the webroot of the site, and I'd like Apache to be able to easily see and manipulate these files. I'll admit: I'm not as familiar with Fedora as I'd like, I run Ubuntu on my home box, and SELinux is giving me some grief. My initial plan was to have each user FTP into their home directory and put the web directory there as well, but SELinux throws a hissy fit when Apache tries to access anything outside of its web directory, so that plan was a no go. Would it be wise to continue this route, and perhaps mount web directories in user home folders so that FTP could still be used to access them, even though Apache saw them in /var/www like it expects? Would it make more sense to set up custom FTP accounts and use a single FTP user on the server box? What's the general course of action on something like this? I'm using vsftpd right now to host web directories, which is why I'm liking the home directory approach (it's simple and secure), but of course there's bound to be a better way to go about it. Thanks. (I'll leave other things, like restricted DB access and such, to another post. I'm interested right now in just getting FTP and Apache to play nice in a multi-user environment.) PS: For the record, an issue I ran into when doing all of this was that if Apache isn't running as the same user the FTP account is saving as, there are permissions errors when FTP creates files, requiring the remote user to chmod the files to fix it. A logical fix would be to run Apache in a special group, put all web users in this group, and have FTP access default to giving this group read/write access to everything like Apache would expect, but I never could figure out how to accomplish this. Bonus points and cake if you know a solution.
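
    The PS describes the classic shared-group setup; a rough sketch follows (the group, user and path names are examples, and the last three lines only matter if the webroots end up under the users' home directories):

        groupadd webdev                           # shared group for Apache and the FTP users
        usermod -a -G webdev apache
        usermod -a -G webdev alice                # repeat for each FTP user
        chgrp -R webdev /var/www/site1
        find /var/www/site1 -type d -exec chmod 2775 {} +   # setgid keeps new files in the group
        find /var/www/site1 -type f -exec chmod 0664 {} +
        echo 'local_umask=002' >> /etc/vsftpd/vsftpd.conf   # vsftpd then creates group-writable files (config path varies by release)

        # only needed if the webroots live under /home: label them so Apache may read them
        setsebool -P httpd_enable_homedirs on
        semanage fcontext -a -t httpd_sys_content_t "/home/alice/web(/.*)?"
        restorecon -R /home/alice/web

    This is a sketch, not a hardened recipe; in particular, the SELinux labelling is what usually explains the "hissy fit" described above, independent of the classic permissions.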

    Read the article

  • issue in installing postgresql 9.3.4 on Windows server 2003 x64

    - by randydom
    Hello, I really did everything I know to install PostgreSQL 9.3.4 on my Windows 2003 Server x64, but I'm always stopped with this error: please see the error: http://oi57.tinypic.com/s4tb8i.jpg I really don't know what to do; if I click OK and then go to the Windows services list, I don't find the PostgreSQL service, so I can't start the service. Can anyone please help me to install it correctly? PS: I've followed all the steps in wiki.postgresql.org/wiki/Troubleshooting_Installation. Many thanks. Here's the installer log, where I get "Failed to initialise the database cluster with initdb": Called IsVistaOrNewer()... 'winmgmts' object initialized... Version:5.2 MajorVersion:5 Ensuring we can write to the data directory (using cacls): Executing batch file 'rad22ADE.bat'... processed dir: C:\Program Files\PostgreSQL\9.2\data Executing batch file 'rad22ADE.bat'... The files belonging to this database system will be owned by user "Administrator". This user must also own the server process. The database cluster will be initialized with locale "English_United States.1252". The default text search configuration will be set to "english". fixing permissions on existing directory C:/Program Files/PostgreSQL/9.2/data ... initdb: could not change permissions of directory "C:/Program Files/PostgreSQL/9.2/data": Permission denied Called Die(Failed to initialise the database cluster with initdb)... Failed to initialise the database cluster with initdb Script stderr: Program ended with an error exit code Error running cscript //NoLogo "C:\Program Files\PostgreSQL\9.2/installer/server/initcluster.vbs" "NT AUTHORITY\NetworkService" "postgres" "****" "C:\Program Files\PostgreSQL\9.2" "C:\Program Files\PostgreSQL\9.2\data" 5432 "DEFAULT" 0 : Program ended with an error exit code Problem running post-install step. Installation may not complete correctly The database cluster initialisation failed. Creating Uninstaller Creating uninstaller 25% Creating uninstaller 50% Creating uninstaller 75% Creating uninstaller 100% Installation completed Log finished 05/02/2014 at 04:04:04

    Read the article

  • Using NFS for scalable PHP/MySQL web application

    - by Jeroen Moons
    Here's the situation: I have a PHP/MySQL web application that accepts user uploads (PDF files). From these PDF files' pages a preview image is made on the fly and presented to the web app's users. Some PDFs might be on the large side; most will be under 50 MB, but some extreme cases could be as large as a few hundred MB. A little waiting for the preview image of large PDF files is acceptable, but no more than a minute, let's say. Everything is running on one server for now, but soon the app will hit the server's limit on both storage and processing power. My idea to solve the problem: To deal with this situation I had the idea of having one or more PDF processing servers as needed, and one or more file storage servers. These two types of servers are mounted to the server on which the actual app runs using NFS. The app could then use Gearman to delegate PDF processing tasks to these processing servers. The processing server can mount the storage server and read the file stored there, process it and write its output to that server. The servers I'm talking about will be Amazon EC2 instances. The web app returns a link to the resulting PDF preview image on the storage server that was used, which can then be used on the front end to show the image to the user. My question: I have zero experience with apps that use multiple servers. Is this idea viable, or is there a better way to do it? Is an NFS setup fast and reliable enough for this situation?

    Read the article

  • Why does running "$ sudo chmod -R 664 . " cause me to get access denied on all affected directories?

    - by Codemonkey
    I have a project folder which has messy permissions on all files. I've had the bad tendency of setting everything to octal permissions 777 because it solved all non-security-related issues. Then FTP uploads, files created by text editors, etc. have their own sets of permissions, making everything a mess. I've decided to pull myself together and start using permissions the way they were meant to be used. I figured 664 was a good default for all my files and folders, and I'd just remove permissions for others on private files and add +x for executable files. The second I changed my project folder to 664, however: $ sudo chmod -R 664 . $ ls ls: cannot open directory .: Permission denied Which makes no sense to me. I have read/write permissions, and I'm the owner of the project folder. The leftmost part of ls -l in my project folder looks like this: -rw-rw-r-- 1 codemonkey codemonkey ... drw-rw-r-- 5 codemonkey codemonkey ... -rw-rw-r-- 1 codemonkey codemonkey ... -rw-rw-r-- 1 codemonkey codemonkey ... drw-rw-r-- 3 codemonkey codemonkey ... -rw-rw-r-- 1 codemonkey codemonkey ... -rw-rw-r-- 1 codemonkey codemonkey ... -rw-rw-r-- 1 codemonkey codemonkey ... drw-rw-r-- 4 codemonkey codemonkey ... drw-rw-r-- 5 codemonkey codemonkey ... I assume this has something to do with the permissions on the directories, but what?
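
    For what it's worth, directories need the execute (search) bit to be entered or listed, and 664 strips it, which is exactly the symptom above. A sketch for putting things right from the project root:

        # treat directories and files separately
        find . -type d -exec chmod 775 {} +
        find . -type f -exec chmod 664 {} +

        # or in one pass: a capital X adds execute only to directories
        # (and to files that are already executable)
        chmod -R u+rwX,g+rwX,o+rX .

    Note the one-pass form only adds bits, so it will not strip a stray o+w left over from the old 777; the find variant resets the modes exactly.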

    Read the article

  • correct file permissions for trac and git user to access gitolite server repos

    - by klemens
    Hi, this sounds like a stupid question (to me), but I couldn't find any info. On my server I host some git repositories via gitolite, and have a Trac instance for every repository. I have a user called git to push/pull from the server (git clone git@server:repo), and Trac is an Apache vhost with mod_wsgi; this runs as the www-data user. So what riddles me (maybe because I don't have much of a clue about file permissions at all) is: what's the best permissions setup (chown, chmod) for the git repositories (/home/git/repositories/...)? www-data (or Trac) needs at least read permissions (I think), and git (or gitolite) obviously needs read/write permissions to push changesets. I tried a few things (i.e. adding www-data and/or git to the www-data/git group), but didn't get it right; at least one of the two doesn't work (git or Trac). Any suggestions are highly appreciated. Regards, klemens
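
    A common arrangement, sketched under the assumption that the repositories stay owned by git: give git's group read access and put www-data in that group.

        usermod -a -G git www-data                 # let the Trac/Apache user read via git's group
        chmod g+rx /home/git                       # the home directory must be group-searchable too
        chmod -R g+rX /home/git/repositories       # group read, plus search on directories
        find /home/git/repositories -type d -exec chmod g+s {} +   # new objects inherit the group

    gitolite also applies its own umask to freshly pushed objects, so it is worth making that group-friendly as well (for example $REPO_UMASK = 0027; in ~/.gitolite.rc on gitolite v2, if your version has the setting); and the group change only takes effect for Apache after it is restarted.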

    Read the article

  • Fix bad superblock on logical partition

    - by Chris
    I was following http://www.howtoforge.com/linux_resi...xt3_partitions and when I reboot and run: root@Microknoppix:/home/knoppix# fsck -n /dev/sda7 fsck from util-linux-ng 2.17.2 e2fsck 1.41.12 (17-May-2010) fsck.ext2: Superblock invalid, trying backup blocks... fsck.ext2: Bad magic number in super-block while trying to open /dev/sda7 The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 <device> So I ran e2fsck with all the block numbers that you need (I forget exactly what tool I used to find where the superblocks are hidden); no dice. Then I ran testdisk and had it look for the superblock; no results. Anyone have any ideas? fdisk -l for reference: root@Microknoppix:/home/knoppix# fdisk -l Disk /dev/sda: 320.1 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x97646c29 Device Boot Start End Blocks Id System /dev/sda1 1 64 512000 83 Linux Partition 1 does not end on cylinder boundary. /dev/sda2 64 38912 312046593 f W95 Ext'd (LBA) /dev/sda5 64 326 2104320 82 Linux swap / Solaris /dev/sda6 * 327 2938 20972544 83 Linux /dev/sda7 2938 38912 288968672+ 83 Linux To be honest it looks like I lost it... The next step if that happens is to dump the partition to an image file and hope I can find or write some software to parse through the data looking for known file headers, I think.
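
    For reference, the usual way to get the list of backup superblock locations is a dry run of mke2fs (a sketch; the -b 4096 assumes the filesystem was created with 4k blocks, which is the default for a partition this size):

        # -n only prints what mke2fs *would* do, including where the backup superblocks sit
        mke2fs -n -b 4096 /dev/sda7

        # then try each reported backup in turn, e.g.
        e2fsck -b 32768 /dev/sda7
        e2fsck -b 98304 /dev/sda7

    If none of the backups work either, imaging the partition with dd first and pointing photorec at the image is essentially the "scan for known file headers" plan already described, without risking further damage to the original.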

    Read the article

  • Raid-z inaccessible after putting one disk offline

    - by varesa
    I have installed FreeNAS on a test server with 3x 1 TB drives. They are set up in raidz. I tried to offline one of the disks (from the FreeNAS web UI), and the array became degraded, as I think it should. The problem is the array becoming inaccessible after that. I thought a raid like that should be able to run fine with one of the disks missing. At least, very soon after I offlined and pulled out the disk, the iSCSI share disappeared from an ESXi host's datastores. I also ssh'd into the FreeNAS server and tried just executing ls /mnt/raid (/mnt/raid/ being the mount point). The whole terminal froze, not accepting ^C or anything. # zpool status -v pool: raid state: DEGRADED status: One or more devices are faulted in response to IO failures. action: Make sure the affected devices are connected, then run 'zpool clear'. see: http://www.sun.com/msg/ZFS-8000-HC scrub: none requested config: NAME STATE READ WRITE CKSUM raid DEGRADED 1 30 0 raidz1 DEGRADED 4 56 0 gptid/c8c9e44c-08e1-11e2-9ba6-001b212a83ea ONLINE 3 60 0 gptid/c96f32d5-08e1-11e2-9ba6-001b212a83ea ONLINE 3 63 0 gptid/ca208205-08e1-11e2-9ba6-001b212a83ea OFFLINE 0 0 0 errors: Permanent errors have been detected in the following files: /mnt/raid/ raid/iscsivol:<0x0> raid/iscsivol:<0x1> Have I understood the workings of a raidz wrong, or is there something else going on? It would not be nice to have the same thing happen on a production system...
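
    Worth noting: the zpool output shows non-zero READ/WRITE error counters on the two disks that stayed ONLINE, so the pool appears to have lost more than the one intentionally offlined device, which would explain why a single-parity raidz became unavailable. A recovery sketch using the names from the output above:

        zpool online raid gptid/ca208205-08e1-11e2-9ba6-001b212a83ea   # bring the offlined disk back
        zpool clear raid                                               # reset the error counters
        zpool status -v raid
        zpool scrub raid                                               # then verify what is actually intact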

    Read the article

  • How to convert Markdown files to Dokuwiki, on a PC

    - by Clare Macrae
    I'm looking for a tool or script to convert Markdown files to Dokuwiki format, that will run on a PC. This is so that I can use MarkdownPad on a PC to create initial drafts of documents, and then convert them to Dokuwiki format, to upload to a Dokuwiki installation that I have no control over. (This means that the Markdown plugin is no use to me.) I could spend time writing a Python script to do the conversion myself, but I'd like to avoid spending time on this, if such a thing exists already. The Markdown tags I'd like to have supported/converted are: Heading levels 1 - 5 Bold, italic, underline, fixed width font Numbered and unnumbered lists Hyperlinks Horizontal rules Does such a tool exist, or is there a good starting point available? Things I've found and considered I initially thought that txt2tags would be helpful, but although it can write both markdown and Dokuwiki, it is very tied to its own specific input format I've also seen Markdown2Dokuwiki, and although I'd certainly be willing to use a sed script, even on a PC, this only supports a tiny, tiny part of Markdown's syntax. python-markdown2 also sounded promising, but it only writes out HTML. pandoc - but it doesn't support Dokuwiki output MultiMarkdown - does not appear to support Dokuwiki output
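
    In the absence of a ready-made tool, a sed pass gets surprisingly far for the subset listed above (a rough sketch, not a full converter: bold ** happens to be identical in both syntaxes, while single-asterisk italics, underline and fixed-width still need hand-tuning):

        sed -E \
            -e 's/^# (.+)$/====== \1 ======/' \
            -e 's/^## (.+)$/===== \1 =====/' \
            -e 's/^### (.+)$/==== \1 ====/' \
            -e 's/^#### (.+)$/=== \1 ===/' \
            -e 's/^##### (.+)$/== \1 ==/' \
            -e 's/^(---+|\*\*\*+)$/----/' \
            -e 's/^[-*] /  * /' \
            -e 's/^[0-9]+\. /  - /' \
            -e 's/\[([^]]+)\]\(([^)]+)\)/[[\2|\1]]/g' \
            draft.md > draft.dokuwiki.txt

    Running it on a PC is just a matter of having a sed available (Git for Windows, Cygwin or GnuWin32 all ship one).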

    Read the article

  • CDN Rerouting on 404 (file not yet in synch with original storage)

    - by Alan Ristic
    Here is the problem. I've set up my app (on EC2) to store uploaded images directly on Amazon S3. I'd like to be able to serve static files (CDN) from my 'home' server, so I wrote a script that syncs from S3. But there is a window of (at least) one minute before things are in sync. Now I see two solutions to the problem of pics not being available on the 'home' server: 1. I write a script on EC2 (where the app resides) to fetch from the DB the pics that have a status of "not-yet-synch", which is the default state when a user uploads a picture. The script then pings each picture and, if it gets an OK response, updates the DB from "not-yet-synch" to "synch". 2. The preferred solution would be to let Apache (in this case) redirect a request for an image to S3 if it sees a 404 (i.e. it doesn't find the requested image). This way I wouldn't need the script from solution 1. So what approach do you suggest I take in solving this redundancy problem? Or what is the practice in production environments? To further clarify: I'd like to serve images first from the 'home' server, and if that fails, serve them from S3. Tnx, Alan
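
    For the preferred solution 2, mod_rewrite's file-exists test does exactly this; a sketch, assuming mod_rewrite is enabled and .htaccess overrides are allowed (the path and bucket URL are placeholders):

        {
            echo 'RewriteEngine On'
            echo 'RewriteCond %{REQUEST_FILENAME} !-f'
            echo 'RewriteRule ^(.*)$ http://your-bucket.s3.amazonaws.com/$1 [R=302,L]'
        } >> /var/www/static/.htaccess

    A 302 keeps clients coming back to the 'home' server first, so once the sync script catches up the image is served locally again.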

    Read the article

  • Slow filesystem access

    - by danneh3826
    I'm trying to diagnose a slow filesystem issue on a server I look after. It's been ongoing for quite some time, and I've run out of ideas as to what I can try. Here's the thick of it. The server itself is a Dell Poweredge T310. It has 4 SAS hard drives in it, configured at RAID5, and is running Citrix XenServer 5.6. The VM is a (relatively) old Debian 5.0.6 installation. It's given 4 cores, and 4Gb's of RAM. It has 3 volumes. A 10Gb volume (ext3) for the system, 980Gb volume (xfs) for data (~94% full), and another 200Gb volume (xfs) for data (~13% full). Now here's the weird thing. Read/write access to the 980Gb volume is really slow. I might get 5Mb/s out of it if I'm lucky. At first I figured it was actually disk access in the system, or at a hypervisor level, but ruled that out entirely as other VMs on the same host are running perfectly fine (a good couple hundred Mb/s disk r/w access). So then I started to target this particular VM. I started thinking it was XFS, but to prove it I wasn't going to attempt to change the filesystem on the 980Gb drive, with years and years of billions of files on there. So I provisioned the 200Gb drive, and did the same read/write test (basically dd), and got a good couple hundred Mb/s r/w access to it. So that ruled out the VM, the hardware, and the filesystem type. There's also a lot of these in /var/log/kern.log; (sorry, this is quite long) Sep 4 10:16:59 uriel kernel: [32571790.564689] httpd: page allocation failure. order:5, mode:0x4020 Sep 4 10:16:59 uriel kernel: [32571790.564693] Pid: 7318, comm: httpd Not tainted 2.6.32-4-686-bigmem #1 Sep 4 10:16:59 uriel kernel: [32571790.564696] Call Trace: Sep 4 10:16:59 uriel kernel: [32571790.564705] [<c1092a4d>] ? __alloc_pages_nodemask+0x476/0x4e0 Sep 4 10:16:59 uriel kernel: [32571790.564711] [<c1092ac3>] ? __get_free_pages+0xc/0x17 Sep 4 10:16:59 uriel kernel: [32571790.564716] [<c10b632e>] ? __kmalloc+0x30/0x128 Sep 4 10:16:59 uriel kernel: [32571790.564722] [<c11dd774>] ? pskb_expand_head+0x4f/0x157 Sep 4 10:16:59 uriel kernel: [32571790.564727] [<c11ddbbf>] ? __pskb_pull_tail+0x41/0x1fb Sep 4 10:16:59 uriel kernel: [32571790.564732] [<c11e4882>] ? dev_queue_xmit+0xe4/0x38e Sep 4 10:16:59 uriel kernel: [32571790.564738] [<c1205902>] ? ip_finish_output+0x0/0x5c Sep 4 10:16:59 uriel kernel: [32571790.564742] [<c12058c7>] ? ip_finish_output2+0x187/0x1c2 Sep 4 10:16:59 uriel kernel: [32571790.564747] [<c1204dc8>] ? ip_local_out+0x15/0x17 Sep 4 10:16:59 uriel kernel: [32571790.564751] [<c12055a9>] ? ip_queue_xmit+0x31e/0x379 Sep 4 10:16:59 uriel kernel: [32571790.564758] [<c1279a90>] ? _spin_lock_bh+0x8/0x1e Sep 4 10:16:59 uriel kernel: [32571790.564767] [<eda15a8d>] ? __nf_ct_refresh_acct+0x66/0xa4 [nf_conntrack] Sep 4 10:16:59 uriel kernel: [32571790.564773] [<c103bf42>] ? _local_bh_enable_ip+0x16/0x6e Sep 4 10:16:59 uriel kernel: [32571790.564779] [<c1214593>] ? tcp_transmit_skb+0x595/0x5cc Sep 4 10:16:59 uriel kernel: [32571790.564785] [<c1005c4f>] ? xen_restore_fl_direct_end+0x0/0x1 Sep 4 10:16:59 uriel kernel: [32571790.564791] [<c12165ea>] ? tcp_write_xmit+0x7a3/0x874 Sep 4 10:16:59 uriel kernel: [32571790.564796] [<c121203a>] ? tcp_ack+0x1611/0x1802 Sep 4 10:16:59 uriel kernel: [32571790.564801] [<c10055ec>] ? xen_force_evtchn_callback+0xc/0x10 Sep 4 10:16:59 uriel kernel: [32571790.564806] [<c121392f>] ? tcp_established_options+0x1d/0x8b Sep 4 10:16:59 uriel kernel: [32571790.564811] [<c1213be4>] ? tcp_current_mss+0x38/0x53 Sep 4 10:16:59 uriel kernel: [32571790.564816] [<c1216701>] ? 
__tcp_push_pending_frames+0x1e/0x50 Sep 4 10:16:59 uriel kernel: [32571790.564821] [<c1212246>] ? tcp_data_snd_check+0x1b/0xd2 Sep 4 10:16:59 uriel kernel: [32571790.564825] [<c1212de3>] ? tcp_rcv_established+0x5d0/0x626 Sep 4 10:16:59 uriel kernel: [32571790.564831] [<c121902c>] ? tcp_v4_do_rcv+0x15f/0x2cf Sep 4 10:16:59 uriel kernel: [32571790.564835] [<c1219561>] ? tcp_v4_rcv+0x3c5/0x5c0 Sep 4 10:16:59 uriel kernel: [32571790.564841] [<c120197e>] ? ip_local_deliver_finish+0x10c/0x18c Sep 4 10:16:59 uriel kernel: [32571790.564846] [<c12015a4>] ? ip_rcv_finish+0x2c4/0x2d8 Sep 4 10:16:59 uriel kernel: [32571790.564852] [<c11e3b71>] ? netif_receive_skb+0x3bb/0x3d6 Sep 4 10:16:59 uriel kernel: [32571790.564864] [<ed823efc>] ? xennet_poll+0x9b8/0xafc [xen_netfront] Sep 4 10:16:59 uriel kernel: [32571790.564869] [<c11e40ee>] ? net_rx_action+0x96/0x194 Sep 4 10:16:59 uriel kernel: [32571790.564874] [<c103bd4c>] ? __do_softirq+0xaa/0x151 Sep 4 10:16:59 uriel kernel: [32571790.564878] [<c103be24>] ? do_softirq+0x31/0x3c Sep 4 10:16:59 uriel kernel: [32571790.564883] [<c103befa>] ? irq_exit+0x26/0x58 Sep 4 10:16:59 uriel kernel: [32571790.564890] [<c118ff9f>] ? xen_evtchn_do_upcall+0x12c/0x13e Sep 4 10:16:59 uriel kernel: [32571790.564896] [<c1008c3f>] ? xen_do_upcall+0x7/0xc Sep 4 10:16:59 uriel kernel: [32571790.564899] Mem-Info: Sep 4 10:16:59 uriel kernel: [32571790.564902] DMA per-cpu: Sep 4 10:16:59 uriel kernel: [32571790.564905] CPU 0: hi: 0, btch: 1 usd: 0 Sep 4 10:16:59 uriel kernel: [32571790.564908] CPU 1: hi: 0, btch: 1 usd: 0 Sep 4 10:16:59 uriel kernel: [32571790.564911] CPU 2: hi: 0, btch: 1 usd: 0 Sep 4 10:16:59 uriel kernel: [32571790.564914] CPU 3: hi: 0, btch: 1 usd: 0 Sep 4 10:16:59 uriel kernel: [32571790.564916] Normal per-cpu: Sep 4 10:16:59 uriel kernel: [32571790.564919] CPU 0: hi: 186, btch: 31 usd: 175 Sep 4 10:16:59 uriel kernel: [32571790.564922] CPU 1: hi: 186, btch: 31 usd: 165 Sep 4 10:16:59 uriel kernel: [32571790.564925] CPU 2: hi: 186, btch: 31 usd: 30 Sep 4 10:16:59 uriel kernel: [32571790.564928] CPU 3: hi: 186, btch: 31 usd: 140 Sep 4 10:16:59 uriel kernel: [32571790.564931] HighMem per-cpu: Sep 4 10:16:59 uriel kernel: [32571790.564933] CPU 0: hi: 186, btch: 31 usd: 159 Sep 4 10:16:59 uriel kernel: [32571790.564936] CPU 1: hi: 186, btch: 31 usd: 22 Sep 4 10:16:59 uriel kernel: [32571790.564939] CPU 2: hi: 186, btch: 31 usd: 24 Sep 4 10:16:59 uriel kernel: [32571790.564942] CPU 3: hi: 186, btch: 31 usd: 13 Sep 4 10:16:59 uriel kernel: [32571790.564947] active_anon:485974 inactive_anon:121138 isolated_anon:0 Sep 4 10:16:59 uriel kernel: [32571790.564948] active_file:75215 inactive_file:79510 isolated_file:0 Sep 4 10:16:59 uriel kernel: [32571790.564949] unevictable:0 dirty:516 writeback:15 unstable:0 Sep 4 10:16:59 uriel kernel: [32571790.564950] free:230770 slab_reclaimable:36661 slab_unreclaimable:21249 Sep 4 10:16:59 uriel kernel: [32571790.564952] mapped:20016 shmem:29450 pagetables:5600 bounce:0 Sep 4 10:16:59 uriel kernel: [32571790.564958] DMA free:2884kB min:72kB low:88kB high:108kB active_anon:0kB inactive_anon:0kB active_file:5692kB inactive_file:724kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15872kB mlocked:0kB dirty:8kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:5112kB slab_unreclaimable:156kB kernel_stack:56kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? 
no Sep 4 10:16:59 uriel kernel: [32571790.564964] lowmem_reserve[]: 0 698 4143 4143 Sep 4 10:16:59 uriel kernel: [32571790.564977] Normal free:143468kB min:3344kB low:4180kB high:5016kB active_anon:56kB inactive_anon:2068kB active_file:131812kB inactive_file:131728kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:715256kB mlocked:0kB dirty:156kB writeback:0kB mapped:308kB shmem:4kB slab_reclaimable:141532kB slab_unreclaimable:84840kB kernel_stack:1928kB pagetables:22400kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no Sep 4 10:16:59 uriel kernel: [32571790.564983] lowmem_reserve[]: 0 0 27559 27559 Sep 4 10:16:59 uriel kernel: [32571790.564995] HighMem free:776728kB min:512kB low:4636kB high:8760kB active_anon:1943840kB inactive_anon:482484kB active_file:163356kB inactive_file:185588kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3527556kB mlocked:0kB dirty:1900kB writeback:60kB mapped:79756kB shmem:117796kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no Sep 4 10:16:59 uriel kernel: [32571790.565001] lowmem_reserve[]: 0 0 0 0 Sep 4 10:16:59 uriel kernel: [32571790.565011] DMA: 385*4kB 16*8kB 3*16kB 9*32kB 6*64kB 2*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 2900kB Sep 4 10:16:59 uriel kernel: [32571790.565032] Normal: 21505*4kB 6508*8kB 273*16kB 24*32kB 3*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 143412kB Sep 4 10:16:59 uriel kernel: [32571790.565054] HighMem: 949*4kB 8859*8kB 7063*16kB 6186*32kB 4631*64kB 727*128kB 6*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 776604kB Sep 4 10:16:59 uriel kernel: [32571790.565076] 198980 total pagecache pages Sep 4 10:16:59 uriel kernel: [32571790.565079] 14850 pages in swap cache Sep 4 10:16:59 uriel kernel: [32571790.565082] Swap cache stats: add 2556273, delete 2541423, find 82961339/83153719 Sep 4 10:16:59 uriel kernel: [32571790.565085] Free swap = 250592kB Sep 4 10:16:59 uriel kernel: [32571790.565087] Total swap = 385520kB Sep 4 10:16:59 uriel kernel: [32571790.575454] 1073152 pages RAM Sep 4 10:16:59 uriel kernel: [32571790.575458] 888834 pages HighMem Sep 4 10:16:59 uriel kernel: [32571790.575461] 11344 pages reserved Sep 4 10:16:59 uriel kernel: [32571790.575463] 1090481 pages shared Sep 4 10:16:59 uriel kernel: [32571790.575465] 737188 pages non-shared Now, I've no idea what this means. There's plenty of free memory; total used free shared buffers cached Mem: 4247232 3455904 791328 0 5348 736412 -/+ buffers/cache: 2714144 1533088 Swap: 385520 131004 254516 Though now I see the swap is relatively low in size, but would that matter? I've been starting to think about fragmentation, or inode usage on that large partition, but a recent fsck on it showed is as only like 0.5% fragmented. Which leaves me with inode usage, but how much of an effect really would a large inode table or filesystem TOC have? I've love to hear people's opinions on this. It's driving me potty! df -h output; Filesystem Size Used Avail Use% Mounted on /dev/xvda1 9.5G 6.6G 2.4G 74% / tmpfs 2.1G 0 2.1G 0% /lib/init/rw udev 10M 520K 9.5M 6% /dev tmpfs 2.1G 0 2.1G 0% /dev/shm /dev/xvdb 980G 921G 59G 94% /data

    Read the article

  • nginx: dump HTTP requests for debugging

    - by Alexander Gladysh
    Ubuntu 10.04.2, nginx 0.7.65. I see some weird HTTP requests coming to my nginx server. To better understand what is going on, I want to dump the whole HTTP request data for such queries. (I.e. dump all request headers and the body somewhere I can read them.) Can I do this with nginx? Alternatively, is there some HTTP server that allows me to do this out of the box, to which I can proxy these requests by means of nginx? Update: Note that this box has a bunch of normal traffic, and I would like to avoid capturing all of it at a low level (say, with tcpdump) and filtering it out later. I think it would be much easier to filter the good traffic first in a rewrite rule (fortunately I can write one quite easily in this case), and then deal with the bogus traffic only. And I do not want to channel bogus traffic to another box just to be able to capture it there with tcpdump. Update 2: To give a bit more detail, bogus requests have a parameter named (say) foo in their GET query (the value of the parameter can differ). Good requests are guaranteed never to have this parameter. If I can filter by this in tcpdump or ngrep somehow, no problem, I'll use these.
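
    Since filtering on the query string is acceptable, ngrep can do the whole job on the nginx box itself; a sketch (the interface, the port and the parameter name foo are placeholders for the real values):

        # print full headers and body, broken into lines, only for requests carrying ?foo=
        ngrep -q -d any -W byline '[?&]foo=' tcp dst port 80

        # or keep the matching packets in a pcap file for later inspection
        ngrep -q -d any -O bogus.pcap '[?&]foo=' tcp dst port 80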

    Read the article

  • How to get rid of "Maxback Engine" for good?

    - by Jonik
    I used to have a Maxtor Shared Storage II network drive; it broke down long ago already. (Later I tried to recover some data from it, and partially succeeded, but haven't yet fully documented it on that question.) Anyway, I just noticed there are still some lingering bits remaining of the (thoroughly crappy) software that came with the Maxtor device: a background process called "MaxBack Engine". I googled around a bit and found something related but not very useful: http://www.straitmac.com/jforum/posts/list/600.page http://discussions.apple.com/thread.jspa?threadID=725692 Under /Applications I found "Maxtor EasyManage.app", which I used to use for controlling the drive, and showed it some "rm -rf". Before deleting, I noted that the bundle did contain "MaxBack Engine.app" under Contents/Resources. But still, after a reboot, the "MaxBack Engine" process is back. I did notice, though, that it only appears when logging in with my usual user account; with another account it isn't launched. So, dear Mac gurus, what can I do about this pest? I guess I could fall back to some Unix hackery and write a cronjob that kills any process with that name, but obviously it'd be nicer to be able to clean everything left behind by Maxtor's piece of software off my computer.
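
    Since it only comes back for one account, the launcher is most likely a per-user login item or launchd job rather than anything system-wide; a sketch of where to look (the plist name at the end is hypothetical, use whatever actually turns up):

        launchctl list | grep -i max                  # is launchd managing it for this user?
        ls ~/Library/LaunchAgents /Library/LaunchAgents /Library/LaunchDaemons /Library/StartupItems 2>/dev/null | grep -i max

        # also check System Preferences -> Accounts -> Login Items for the affected user

        # once the culprit is found, unload it and delete the file, e.g.
        launchctl unload -w ~/Library/LaunchAgents/com.maxtor.maxbackengine.plist
        rm ~/Library/LaunchAgents/com.maxtor.maxbackengine.plist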

    Read the article

  • nfs mount with nfs 3

    - by rahrahruby
    I am running CentOS 6.4 Kernel version 2.6.32-358.23.2.el6.x86_64 #1 SMP and have the following nfs info: nfs-utils-lib-1.1.5-6.el6.x86_64 nfs4-acl-tools-0.3.3-6.el6.x86_64 nfs-utils-1.2.3-36.el6.x86_64 and am trying to mount an nfs volume with nfs3. I have the following line in my fstab: 172.16.11.87:/volume1/web /home/nas nfsver=3 rsize=8192,wsize=8192,timeo=14,intr(no_root_squach) When I run nfsstat it still shows the client as nfs4 Server rpc stats: calls badcalls badauth badclnt xdrcall 0 0 0 0 0 Client rpc stats: calls retrans authrefrsh 1988817 6 1988818 Client nfs v4: null read write commit open open_conf 0 0% 36943 1% 21606 1% 401 0% 392369 19% 375986 18% open_noat open_dgrd close setattr fsinfo renew 0 0% 0 0% 387945 19% 22904 1% 3 0% 2914 0% setclntid confirm lock lockt locku access 1 0% 1 0% 0 0% 0 0% 0 0% 97856 4% getattr lookup lookup_root remove rename link 613996 30% 29888 1% 1 0% 1248 0% 253 0% 414 0% symlink create pathconf statfs readlink readdir 26 0% 226 0% 2 0% 3 0% 0 0% 3825 0% server_caps delegreturn getacl setacl fs_locations rel_lkowner 5 0% 0 0% 0 0% 0 0% 0 0% 0 0% exchange_id create_ses destroy_ses sequence get_lease_t reclaim_comp 0 0% 0 0% 0 0% 0 0% 0 0% 0 0% layoutget layoutcommit layoutreturn getdevlist getdevinfo ds_write 0 0% 0 0% 0 0% 0 0% 0 0% 0 0% ds_commit 0 0%
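
    For what it's worth, the fstab line as quoted is missing the filesystem-type field, and the client-side option is spelled vers (or nfsvers) rather than nfsver; no_root_squash is a server-side export option, not a mount option. A sketch of what was probably intended:

        # one-off test from the command line
        mount -t nfs -o vers=3,rsize=8192,wsize=8192,timeo=14,intr 172.16.11.87:/volume1/web /home/nas

        # fstab equivalent:
        # 172.16.11.87:/volume1/web  /home/nas  nfs  vers=3,rsize=8192,wsize=8192,timeo=14,intr  0 0

        # afterwards this shows the version actually negotiated for each mount
        nfsstat -m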

    Read the article

  • mkfs Operation Takes Very Long on Linux Software Raid 5

    - by Elmar Weber
    I've set up a Linux software RAID level 5 consisting of 4 * 2 TB disks. The disk array was created with a 64k stripe size and no other configuration parameters. After the initial rebuild I tried to create a filesystem, and this step takes very long (about half an hour or more). I tried to create an xfs and an ext3 filesystem; both took a long time. With mkfs.ext3 I observed the following behaviour, which might be helpful: writing the inode tables runs fast until it reaches 1053 (~1 second), then it writes about 50, waits for two seconds, and then the next 50 are written (according to the console display); when I try to cancel the operation with Control+C it hangs for half a minute before it is really cancelled. The performance of the disks individually is very good; I've run bonnie++ on each one separately with write/read values of around 95/110 MB/s. Even when I run bonnie++ on every drive in parallel the values are only reduced by about 10 MB/s. So I'm excluding hardware / I/O scheduling in general as a problem source. I tried different configuration parameters for stripe_cache_size and readahead size without success, but I don't think they are that relevant for the filesystem creation operation. The server details: Linux server 2.6.35-27-generic #48-Ubuntu SMP x86_64 GNU/Linux, mdadm - v2.6.7.1. Does anyone have a suggestion on how to further debug this?
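
    One thing worth trying (a sketch; it assumes the md device is /dev/md0 and the default 4 KiB filesystem block size): tell mke2fs about the RAID geometry so the inode tables are written as aligned full stripes instead of read-modify-write cycles. With a 64 KiB chunk, stride = 64/4 = 16 blocks, and with 3 data disks in a 4-disk RAID 5, stripe width = 16 * 3 = 48 blocks.

        # some e2fsprogs versions spell the second option stripe_width rather than stripe-width
        mkfs.ext3 -E stride=16,stripe-width=48 /dev/md0

        # while it runs, watch whether the array is doing small scattered writes
        iostat -x 5
        cat /proc/mdstat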

    Read the article

  • apache address-based access control

    - by stijn
    I have an Apache instance serving different locations, e.g. https://host.com/jira https://host.com/svn https://host.com/websvn https://host.com/phpmyadmin Each of these has access control rules based on IP address/hostname. Some of them use the same configuration though, so I have to repeat the same rules each time: Order Deny,Allow Deny from All Allow from 10.35 myhome.com mycollegueshome.com Is there a way to make these reusable so that I don't have to change each instance every time something changes? I.e., can I write this once, then use it for a couple of locations? Using SetEnvIf maybe? It would be nice if I could do something like this pseudo-config: <myaccessrule> Order Deny,Allow Deny from All Allow from 10.35 myhome.com mycollegueshome.com </myaccessrule> <Proxy /jira*> AccessRule = myaccessrule </Proxy> <Location /svn> AccessRule = myaccessrule </Location> <Directory /websvn> AccessRule = myaccessrule </Directory>
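
    This is essentially what mod_macro provides (bundled with Apache 2.4, a separate LoadModule on 2.2); a sketch, written as a command that drops the macro into a config fragment (the path is an example, and the same rules could instead live in a plain file pulled in with Include inside each block):

        printf '%s\n' \
            '<Macro restricted_access>' \
            '    Order Deny,Allow' \
            '    Deny from All' \
            '    Allow from 10.35 myhome.com mycollegueshome.com' \
            '</Macro>' \
            '<Location /svn>' \
            '    Use restricted_access' \
            '</Location>' \
            '<Directory /websvn>' \
            '    Use restricted_access' \
            '</Directory>' \
            > /etc/apache2/conf.d/access-macro.conf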

    Read the article

  • Move the uploads folder in Wordpress

    - by Victor Hurdugaci
    Currently, my WordPress uploads folder is located in \wp-content\uploads. Initially there was no structure, so all files were put there. After a while it was changed to upload the files into \wp-content\uploads\YEAR\MONTH. Now that folder contains a mix of files (those starting with + are folders) like:

    +wp-content
    | +2010
    | | +02
    | | | File-1
    | | | File-2
    | | | ..
    | | | File-n
    | | +01
    | | | File-1
    | | | File-2
    | | | ..
    | | | File-n
    | +2009
    | | +12
    | | | File-1
    | | | File-2
    | | | ..
    | | | File-n
    | | +11
    | | | File-1
    | | | File-2
    | | | ..
    | | | File-n
    | +..
    | | | ..
    | Unstructured-file-1
    | Unstructured-file-2
    | ...
    | Unstructured-file-n

    Based on the dates of the unstructured files, I would like to move them into a structured hierarchy (based on date, move each one to a folder \wp-content\uploads\YEAR\MONTH). Now, my questions are: Where do I write and execute a script to do the move (I don't have full access to the server, just cPanel and the WordPress admin page)? What must be updated so that the links in posts that reference the unstructured files point to the new location of those files? Not fully related to the previous description: is it alright to move the whole uploads folder to another location, like \uploads? PS: Moving the files/updating the database manually is not an option :)
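
    Despite the PS, the file move itself is small enough to script; a sketch that could run from a cPanel cron job or terminal, assuming GNU coreutils and with the path adjusted to the real installation:

        cd /home/ACCOUNT/public_html/wp-content/uploads   # hypothetical path

        for f in *; do
            [ -f "$f" ] || continue          # skip the existing YEAR directories
            d=$(date -r "$f" +%Y/%m)         # file modification time -> YEAR/MONTH
            mkdir -p "$d"
            mv -n "$f" "$d/"
        done

    The attachment URLs stored in wp_posts (guid and post_content) will still point at the old flat location afterwards, so they need a search-and-replace in the database or a rewrite rule; and relocating the whole uploads folder is fine as long as the upload-path setting and those stored URLs are updated to match.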

    Read the article

  • Creating a pseudoterminal to make sudo happy

    - by larsks
    I need to automate the provisioning of a cloud instance (running Fedora 17) for which the following initial facts are true: I have ssh-key based access to a remote user (cloud), and that user has password-free root access via sudo. Manual configuration is as simple as logging in and running sudo su - and having at it, but I would like to fully automate this process. The trick is that the system defaults to having the requiretty option enabled for sudo, which means that an attempt to do something like this:

        ssh remotehost sudo yum -y install puppet

    will fail:

        sudo: sorry, you must have a tty to run sudo

    I am working around this right now by first pushing over a small Python script that will run a command on a pseudoterminal:

        import os
        import sys
        import errno
        import subprocess

        pid, master_fd = os.forkpty()
        if pid == 0:
            # child process: now that we're attached to a
            # pty, run the given command.
            os.execvp(sys.argv[1], sys.argv[1:])
        else:
            while True:
                try:
                    data = os.read(master_fd, 1024)
                except OSError, detail:
                    if detail.errno == errno.EIO:
                        break
                if not data:
                    break
                sys.stdout.write(data)
            os.wait()

    Assuming that this is named pty, I can then run:

        ssh remotehost ./pty sudo yum -y install puppet

    This works fine, but I'm wondering if there are solutions already available that I haven't considered. I would normally think about expect, but it's not installed by default on this system. screen can do this in a pinch, but the best I came up with was: screen -dmS sudo somecommand ...which does work but eats the output. Are there any other tools available that will allocate a pseudoterminal for me that are going to be generally available?
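
    For the record, ssh itself can allocate the pseudoterminal, which avoids shipping the helper script at all; a sketch (-tt rather than -t because stdin is usually not a terminal when this runs from automation):

        ssh -tt cloud@remotehost sudo yum -y install puppet

    The other common route, if the image can be touched once (by hand or via cloud-init), is a sudoers drop-in along the lines of Defaults:cloud !requiretty, after which a plain ssh remotehost sudo ... works with no tty at all.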

    Read the article

  • Network speed between a VM and another machine not residing on the same host is 11 MB/s at most

    - by Henno
    Problem Network speed between a VM and another machine which is not residing on the same host, is 11MB/s at most. Topology Facts ESXi5 version is 5.0.0.504890 VM has the latest Vmware Tools installed VM is using E1000 network driver Physical box has Win Srv 2008 R2 as the OS CrystalDiskMark says the drive on physical box can read/write 100MB/s vCenter is another vm on esx both vm and physical box are showing 1Gbps link speed Configuration Networking shows vmnic0 as 1000 Full NTttcp is a client/server tool from Microsoft for measuring pure network throughput Here's what I've done so far: Test1: VM is running Filezilla FTP Server (default settings, one user account made) Physical box is running Filezilla FTP Client (default settings) Physical box is uploading a big file to FTP server Transfer speed (as observed by Windows Task Manager on both machines): ~11MB/s (bad) Physical box is downloading that file from FTP server Transfer speed (as observed by Windows Task Manager on both machines): still ~11MB/s (bad) Could it be disk performance issue? Test2: Physical box is running ntttcpr.exe -a 6 -m 6,0,VM_IP_ADDRESS VM is running ntttcps.exe -a 6 -m 6,0,PHY_BOX_IP_ADDRESS Transfer speed (as observed by Windows Task Manager on both machines): ~11MB/s (bad) Could it be switch performance issue? Test3: physical box is running vSphere Client I open Summary Storage datastore Browse Datastore... from physical box and upload a file to datastore Transfer speed (as observed by Windows Task Manager on physical box): ~26-36MB/s (good) Could it be a vm specific issue? Test4: Installed ntttcp to another vm on the same esx server Measured network performance between vms on the same esx server with NTttcp Transfer speed (as observed by Windows Task Manager on physical box): ~90-120MB/s (excellent :) Test5: I have another esx server on the same site, connecting to the same datastore and same switch. Those two ESX servers have both 2 NICs. One NIC goes to switch while the other goes directly to the other ESX server. vMotioned one of the testing vms off to the other ESX host Measured network performance between vms on different esx servers with NTttcp Transfer speed (as observed by Windows Task Manager on physical box): ~11MB/s (bad) While I'm aware of these: ESXi 4.1 slow file transfer ESXi 5 network performance is slow Debian Etch and ESXi slow network speeds VMWare ESXi slow file copy to guest they did not help (or I must have been missed something)

    Read the article

  • SSH attack CentOS Amazon EC2

    - by user37143
    Hi, I run a few RightScale CentOS AMI based instances on Amazon EC2. Two months back I found that our SSHD security was compromised (I had added host.allow and host.deny for ssh). So I created new instances, set up IP-based ssh access that allows only our IPs through the AWS firewall (ec2-authorize), and changed the default ssh port 22 to some other port. But two days back I found I was not able to log in to the server, and when I tried on port 22 the ssh got connected, and I found that sshd_config had been changed; when I tried to edit sshd_config I found root had no write permission on the file. So I tried a chmod and it said access denied for the 'root' user. This is very strange. I checked the secure log and history and found nothing informative. I have PHP, Ruby on Rails, Java and Wordpress apps running on these servers. This time I did a chkrootkit scan and found nothing. I renamed the /etc/ssh folder and reinstalled openssh through yum. I have faced this on 3 instances on CentOS (5.2, 5.4); I have instances on Debian as well, and those are working fine. Is this a CentOS/RightScale issue? Guys, what security measures should I take to prevent this? Please support me, this is very critical. Thanks
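
    One note on the "root cannot chmod its own file" symptom: on ext3 that is the classic behaviour of the immutable attribute, which intruders often set on sshd_config. A sketch for checking it (keeping in mind that a box compromised this deeply is generally safer to rebuild than to clean):

        lsattr /etc/ssh/sshd_config      # look for an 'i' (immutable) or 'a' (append-only) flag
        chattr -ia /etc/ssh/sshd_config  # clear them so root can edit the file again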

    Read the article

  • Are flash drives and hard drives thought of as "an ocean of bytes"?

    - by Jian Lin
    Why can a USB flash drive be formatted as NTFS or FAT32? Are a USB flash drive and a hard drive just to be thought of as "an ocean of bytes"? I'm very used to hearing about formatting a hard drive as FAT32 or NTFS, but we can also format a USB flash drive as NTFS or FAT32? Is it because both a hard drive and a flash drive can be thought of as "an ocean of bits" or "an ocean of bytes"? I remember RAM as: it takes 16 or 32 bits as an address signal (the 16 or 32 copper traces on the circuit board), and gives out 8 bits of data (the other 8 copper traces on the circuit board). So can a hard drive be thought of as working that way too? And is that why a flash drive can be the same? Just an "ocean of bytes". But is it true that a hard drive's hardware makes it an ocean of sectors or something else, that is, the smallest unit of read/write is not a byte but something else? So with this "ocean of bytes", NTFS has the format that says, "if the first byte is __, then it means __ (it is a file or folder, and links to which sector, indicated by bytes 2 and 3, etc., etc.)"

    Read the article

  • How do I increase the buffer size for domain sockets in OS X 10.6

    - by Chas. Owens
    In Linux I have no problem dumping tons of data into a domain socket, but the same code on OS X 10.6.2 blows up after about 65 records. The socket reader code looks like #!/usr/bin/perl use strict; use warnings; use IO::Socket; unlink "foo"; my $sock = IO::Socket::UNIX->new ( Local => 'foo', Type => SOCK_DGRAM, Timeout => 600, ) or die "Could not create socket: $!\n"; while (<$sock>) { chomp; print "[$_]\n"; } And the client code looks like #!/usr/bin/perl use strict; use warnings; use IO::Socket; my $sock = IO::Socket::UNIX->new ( Peer => 'foo', Type => SOCK_DGRAM, Timeout => 600, ) or die "Could not create socket: $!\n"; for my $i (1 .. 1_000_000) { print $sock "$i\n" or die $!; } close $sock; The error message I get is No buffer space available at write.pl line 15.. It seems fairly obvious that there is a difference in the buffer size between Linux and OS X, but I don't know how to set it on OS X (or what the possible negative side effects might be).
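
    A hedged pointer on the tuning side: on OS X the limits for local (unix-domain) datagram sockets are sysctls, and the defaults are small compared to Linux, which fits the ENOBUFS after a few dozen records. The names below are what 10.6-era systems expose; verify with sysctl net.local before relying on them, and the values are examples only:

        sysctl net.local.dgram.maxdgram net.local.dgram.recvspace   # current limits

        sudo sysctl -w net.local.dgram.maxdgram=16384
        sudo sysctl -w net.local.dgram.recvspace=262144             # lost at reboot unless added to /etc/sysctl.conf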

    Read the article

  • What can cause a kernel hang on redhat 4?

    - by Ivan Buttinoni
    I have to solve a nasty problem on a ten-machine "cluster": randomly one of these machines hangs during a heavy computation; sometimes it still pings, sometimes not. The problem was described to me over the phone; I have not yet touched or seen these machines, so I can't be more precise. It seems there is no (real) keyboard or monitor attached to them, so I have nothing from keyboard LEDs or messages on a monitor. Don't worry, what I really need is some suggestion of where to look for the problem, some suggestions on what can cause a kernel hang on a working machine. My ideas so far:
    - HW problem (RAM, CPU, fan, etc.)
    - bad autofs configuration
    - bad nfs(?) configuration
    - presence of a trojan/hacker/etc.
    - /dev/"swap" linked to /dev/zero
    - kernel out of memory(??)
    - a kernel bug
    In other words, I try to imagine what kind of event can occur that crashes the kernel instead of the application that generates the event. What hangs have YOU experienced before? Write to me! TIA
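
    Before guessing at causes, it usually pays to make the machines able to say something when they die; a sketch for a RHEL 4-era kernel (the IPs, MAC and interface are placeholders, and the receiving side is any UDP listener such as netcat, whose exact flags vary by flavor):

        # stream kernel messages to another box on the LAN
        modprobe netconsole netconsole=6665@192.168.0.5/eth0,6666@192.168.0.10/00:11:22:33:44:55

        # enable magic SysRq so a half-hung box can still be asked to dump task states
        echo 1 > /proc/sys/kernel/sysrq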

    Read the article

  • Calling Excel from PHP 5 through COM fails on Windows 7 when Apache is started through Task Planner

    - by Stefan Pantke
    I am currently writing an application which controls Excel through COM: the app creates a COM-based Excel instance, opens some XLS files and reads their contents. Scenario I: On Windows 7, I start Apache and MySQL using xampp-control with system administrator rights. All works as expected; the PHP-based controller script interacts with Excel as expected. Scenario II: A problem appears if I start Apache and MySQL as 'background jobs'. Here is how: I created two jobs using the Windows 7 Task Planner. One runs apache_start.bat, the other runs mysql_start.bat. Both tasks run as SYSTEM with elevated privileges when Windows 7 boots. Apache and MySQL work as expected. Specifically, Apache serves HTTP requests from clients and PHP is able to talk to MySQL. When I call the PHP controller, which calls and interacts with Excel using COM, I receive an error. The error seems to come from Excel [not COM itself] and reads like this: Excel can't read the XLS file; Excel failed to save the file due to an ill-named worksheet. Interestingly, during the first run of the PHP-based controller script it takes a few seconds to render the error message; each subsequent run renders the error message immediately. The Windows system logs didn't show a single problem report entry. Note that the PHP program and the Apache instance didn't change - except for the way Apache was started. At least the PHP controller script is perfectly able to read the filesystem, since it provides the paths to the XLS files through scandir() of a certain directory. Concurrency issues can't be the cause of the problem: a single instance of the specific PHP controller interacts with Excel. Question: Could someone provide details on why this happens?

    Read the article
