Search Results

Search found 82005 results on 3281 pages for 'cost based data structure'.

  • What tools can be used to monitor a web application? Beyond "doesn't 404"

    - by Freiheit
    I have an internal web application that has recently gone through a major version upgrade. I would like to monitor this application over the weekend and look for 'soft' errors. I will still need to spot-check things by hand, but there are some common failure patterns that I think I can automate. Examples include data with bad formatting, blank rows in tables (indicating missing non-critical data), patterns in identifiers ("TEST" means one of my devs left a testing feed on), etc. I think there are applications out there that can be scripted to do things like:
    1. Log in
    2. Go to $URL
    3. Select the 3rd link in $LIST or $PATTERN
    4. Check the HTML from that link for $PATTERNS
    5. Email a report
    Are these goals sane? What applications/tools can help with this?
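
    These goals are sane, and steps 1-5 are well within reach of a scripted HTTP client (Selenium or a similar browser driver works for heavier, JavaScript-dependent flows). As a minimal sketch of the idea - the URL, form fields, and patterns below are placeholders, not details from the question:

        #!/bin/bash
        # Hypothetical smoke test: log in, fetch a page, scan it for failure patterns.
        BASE="http://intranet.example.com"   # placeholder URL
        JAR=$(mktemp)                        # cookie jar to hold the session

        # 1. Log in (form field names are assumptions)
        curl -s -c "$JAR" -d "user=monitor&pass=secret" "$BASE/login" > /dev/null

        # 2-4. Fetch a page and check its HTML for known bad patterns
        curl -s -b "$JAR" "$BASE/report/latest" |
            grep -E 'TEST|<td>[[:space:]]*</td>' > findings.txt

        # 5. Email the report if anything matched
        [ -s findings.txt ] && mail -s "Soft-error report" you@example.com < findings.txt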

  • Use mod_rewrite or RedirectMatch to redirect oldfile.aspx?p=blah to newfile.php, ignoring ?p=blah

    - by Dan
    I've got a site with many incoming links to the old structure (gone for years), with tonnes of URL vars that are no longer relevant, as the database mappings were changed. So, I'd like to redirect:
        http://www.mysite.com/oldfile.aspx?p=1&c=2
    to:
        http://www.mysite.com/newfile.php
    without the query string at the end. The actual query string varies - there are hundreds of them, but since they don't match up to a particular case anymore, I want to take people to the new index page for the content they're looking for, so they can find it from there. I currently use:
        RedirectMatch 301 ^/oldfile\.aspx$ /newfile.php
    This puts the query back on the end, though. Can someone let me know the voodoo recipe I need?
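
    The standard recipe: a trailing ? on the substitution tells mod_rewrite to discard the incoming query string (Apache 2.4 also has the QSD flag for this). A sketch:

        RewriteEngine On
        # The bare "?" at the end drops the old query string from the redirect
        RewriteRule ^oldfile\.aspx$ /newfile.php? [R=301,L]

        # Equivalent on Apache 2.4+ using the query-string-discard flag:
        # RewriteRule ^oldfile\.aspx$ /newfile.php [QSD,R=301,L]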

  • Can't deploy "war" file from virtual hosts, see a directory listing.

    - by Kaustubh P
    This is my httpd.conf, configured with virtual hosts:
        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName http://foo.baz.in
            DocumentRoot /var/www/foo/
        </VirtualHost>
        <VirtualHost *:80>
            ServerName http://bar.baz.in
            DocumentRoot /var/www/
        </VirtualHost>
    The second virtual host is a WordPress blog, configured with .htaccess and index.php in the root (i.e. /var/www), with the rest of the files in WordPress's own folder. However, the first virtual host is a "war" file, and when I go to foo.baz.in, I see the directory listing containing the war. I also tried changing the DocumentRoot to /var/www/foo/foo.war but I get an error:
        Restarting web server: apache2Warning: DocumentRoot [/var/www/foo/foo.war] does not exist
    I also changed the owner of the war to www-data:www-data and its permissions to 755, but to no avail. How do I make Apache deploy my "war"? Thanks.
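
    A note for readers: Apache httpd cannot run a WAR file by itself; a WAR is a Java web application and needs a servlet container such as Tomcat or Jetty, with Apache optionally proxying to it. A hedged sketch of that arrangement (the port and context path are assumptions):

        <VirtualHost *:80>
            ServerName foo.baz.in
            # Requires mod_proxy and mod_proxy_http; foo.war is deployed in
            # Tomcat's webapps/ directory, and Tomcat actually runs it.
            ProxyPass        / http://localhost:8080/foo/
            ProxyPassReverse / http://localhost:8080/foo/
        </VirtualHost>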

  • Ubuntu 10.04 recognizing USB 2.0 external HD as USB 1.1

    - by btucker
    When I connect the USB 2.0 drive I see this:
        usb 1-4.3: new full speed USB device using ohci_hcd and address 5
    so I know it's being seen as USB 1.1. usb-devices shows that it really is USB 2.0 and connected to a USB 2.0 hub:
        T: Bus=01 Lev=01 Prnt=01 Port=03 Cnt=01 Dev#= 2 Spd=12 MxCh= 4
        D: Ver= 2.00 Cls=09(hub ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1
        P: Vendor=05e3 ProdID=0608 Rev=77.61
        S: Product=USB2.0 Hub
        C: #Ifs= 1 Cfg#= 1 Atr=e0 MxPwr=100mA
        I: If#= 0 Alt= 0 #EPs= 1 Cls=09(hub ) Sub=00 Prot=00 Driver=hub

        T: Bus=01 Lev=02 Prnt=02 Port=01 Cnt=01 Dev#= 4 Spd=12 MxCh= 0
        D: Ver= 2.00 Cls=00(>ifc ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1
        P: Vendor=13fd ProdID=1340 Rev=02.10
        S: Manufacturer=Generic
        S: Product=External
        C: #Ifs= 1 Cfg#= 1 Atr=c0 MxPwr=2mA
        I: If#= 0 Alt= 0 #EPs= 2 Cls=08(stor.) Sub=06 Prot=50 Driver=usb-storage
    It seems the problem is that the root hub is:
        T: Bus=01 Lev=00 Prnt=00 Port=00 Cnt=00 Dev#= 1 Spd=12 MxCh=10
        D: Ver= 1.10 Cls=09(hub ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1
        P: Vendor=1d6b ProdID=0001 Rev=02.06
        S: Manufacturer=Linux 2.6.32-25-server ohci_hcd
        S: Product=OHCI Host Controller
        S: SerialNumber=0000:00:02.0
        C: #Ifs= 1 Cfg#= 1 Atr=e0 MxPwr=0mA
        I: If#= 0 Alt= 0 #EPs= 1 Cls=09(hub ) Sub=00 Prot=00 Driver=hub
    and there's no mention of ehci_hcd. lsusb -t gives me:
        /: Bus 01.Port 1: Dev 1, Class=root_hub, Driver=ohci_hcd/10p, 12M
            |__ Port 4: Dev 2, If 0, Class=hub, Driver=hub/4p, 12M
                |__ Port 2: Dev 4, If 0, Class=stor., Driver=usb-storage, 12M
                |__ Port 3: Dev 5, If 0, Class=stor., Driver=usb-storage, 12M
            |__ Port 6: Dev 3, If 0, Class=stor., Driver=usb-storage, 12M
    It seems like I'm missing something which would allow the OS to see USB 2.0 devices. Can anyone point me in the right direction?

    EDIT: Full lsusb -v output:
        Bus 001 Device 005: ID 13fd:1340 Initio Corporation
        Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 idVendor 0x13fd Initio Corporation idProduct 0x1340 bcdDevice 2.10 iManufacturer 1 Generic iProduct 2 External iSerial 3 57442D574341595930323337 bNumConfigurations 1
        Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 32 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xc0 Self Powered MaxPower 2mA
        Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 2 bInterfaceClass 8 Mass Storage bInterfaceSubClass 6 SCSI bInterfaceProtocol 80 Bulk (Zip) iInterface 0
        Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0040 1x 64 bytes bInterval 0
        Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x02 EP 2 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0040 1x 64 bytes bInterval 0
        Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 bNumConfigurations 1
        Device Status: 0x0001 Self Powered

        Bus 001 Device 002: ID 05e3:0608 Genesys Logic, Inc. USB-2.0 4-Port HUB
        Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x05e3 Genesys Logic, Inc. idProduct 0x0608 USB-2.0 4-Port HUB bcdDevice 77.61 iManufacturer 0 iProduct 1 USB2.0 Hub iSerial 0 bNumConfigurations 1
        Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 100mA
        Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0
        Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0001 1x 1 bytes bInterval 255
        Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 4 wHubCharacteristic 0x00e0 Ganged power switching Ganged overcurrent protection Port indicators bPwrOn2PwrGood 50 * 2 milli seconds bHubContrCurrent 100 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff
        Hub Port Status: Port 1: 0000.0100 power / Port 2: 0000.0103 power enable connect / Port 3: 0000.0103 power enable connect / Port 4: 0000.0100 power
        Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.00 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 1 Single TT bMaxPacketSize0 64 bNumConfigurations 1
        Device Status: 0x0001 Self Powered

        Bus 001 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 1.10 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 0x0001 1.1 root hub bcdDevice 2.06 iManufacturer 3 Linux 2.6.32-25-server ohci_hcd iProduct 2 OHCI Host Controller iSerial 1 0000:00:02.0 bNumConfigurations 1
        Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA
        Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0
        Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0002 1x 2 bytes bInterval 255
        Hub Descriptor: bLength 11 bDescriptorType 41 nNbrPorts 10 wHubCharacteristic 0x0002 No power switching (usb 1.0) Ganged overcurrent protection bPwrOn2PwrGood 1 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 0x00 PortPwrCtrlMask 0xff 0xff
        Hub Port Status: Port 1: 0000.0100 power / Port 2: 0000.0100 power / Port 3: 0000.0100 power / Port 4: 0000.0103 power enable connect / Port 5: 0000.0100 power / Port 6: 0000.0103 power enable connect / Port 7: 0000.0100 power / Port 8: 0000.0100 power / Port 9: 0000.0100 power / Port 10: 0000.0100 power
        Device Status: 0x0003 Self Powered Remote Wakeup Enabled
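
    Since ehci_hcd (the EHCI driver that provides USB 2.0 high speed) never appears in this output, one avenue worth checking is whether an EHCI controller is present at all and whether its driver is loaded - a sketch, assuming the driver ships as a module on this kernel:

        # Does the chipset expose an EHCI (USB 2.0) controller at all?
        lspci | grep -i usb

        # Is the ehci_hcd driver loaded?
        lsmod | grep ehci

        # If the controller exists but the module isn't loaded, try loading it
        sudo modprobe ehci_hcd

    If lspci shows only an OHCI controller, the port itself is USB 1.1-only and no driver will change that.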

  • Nginx + PHP FastCGI fails - how to debug?

    - by Niro
    I have a server on Amazon EC2 running Nginx + PHP, with PHP FastCGI via port 9000. The server runs fine for a few minutes, then after a while (several thousand hits in this case) FastCGI dies and Nginx returns a 502 error. The Nginx log shows:
        2010/01/12 16:49:24 [error] 1093#0: *9965 connect() failed (111: Connection refused) while connecting to upstream, client: 79.180.27.241, server: localhost, request: "GET /data.php?data=7781 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "site1.mysite.com", referrer: "http://www.othersite.com/subc.asp?t=10"
    How can I debug what is causing FastCGI to die?
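
    One common culprit - an assumption here, since the log doesn't confirm it - is that php-cgi exits by design after PHP_FCGI_MAX_REQUESTS requests (default 500), and with nothing respawning it, nginx gets "connection refused" exactly as shown. A debugging/workaround sketch:

        # When the 502s start, is anything still listening on 9000?
        ps aux | grep php-cgi
        netstat -lnp | grep 9000

        # Check for crashes or the OOM killer
        dmesg | tail
        grep -i php /var/log/syslog | tail

        # Raising the request cap helps; a supervisor (or php-fpm) that
        # respawns the listener is the durable fix.
        PHP_FCGI_MAX_REQUESTS=10000 PHP_FCGI_CHILDREN=4 php-cgi -b 127.0.0.1:9000 &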

  • Compare the contents of two folders that are replicated by DFS

    - by Funky Si
    I have a large folder that I am replicating by DFS and I want to check that all files have been replicated correctly. Currently I am running the following script at both ends:
        cd e:\data\shared\
        dir /a:-h /b /s > e:\data\shared\result.txt
    and then using a text editor to tidy the file before using a diff tool to compare them. Does anyone know a better way of doing this? Failing that, does anyone know how to adapt my script to ignore all the files in the DfsrPrivate folders?
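
    For the DfsrPrivate part, filtering the listing through findstr before it hits the file should do it - a sketch of the adapted script:

        cd /d e:\data\shared
        rem /v = print lines NOT matching; /i = case-insensitive
        dir /a:-h /b /s | findstr /i /v "\\DfsrPrivate\\" > e:\data\shared\result.txt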

  • Is my HD broken? Can I use it for anything?

    - by acidzombie24
    Someone suggested formatting my HD so I won't try to read bad data. From what I understand, the OS marks which clusters are bad and skips them. So after copying all my data I did a quick format to NTFS. I copied files with an error, then I right-clicked and tried to create a new folder, and was greeted by this message. I have 926 of 931 GB free. Is my external HD broken?

  • Open and scroll through 42 GB text file in Mac OS X

    - by Django Johnson
    I am running Mac OS X 10.8.4 (Mountain Lion) and I am trying to open and scroll through a 42 GB .XML file. I plan on using an XML parser to parse through it and delete parts, but first I need to know how the document is structured so I can know what parts to save. How can I open this text/XML file and scroll through it so I can get a glimpse of its structure? I tried my default text editor, TextMate, and that couldn't open it. I tried gedit, and that shows the first 10 or so lines but then quits after trying to load the rest. I would greatly appreciate any and all suggestions!
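
    From Terminal this is straightforward, since less and head stream the file rather than loading it into memory - a quick sketch (the filename is a placeholder):

        # Page and search through the file without loading all 42 GB
        less huge.xml

        # Or grab just the first few hundred lines to inspect the structure
        head -n 200 huge.xml > sample.xml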

  • Mail Merge in Microsoft Word with images from Sharepoint

    - by Ian Turner
    Is there any way of doing a mail merge in Microsoft Word 2007 taking data, including images, from a SharePoint site? It's a bit crude, but I've managed to merge text by taking the data off the SharePoint site as an Excel sheet and then merging that. My problem is what to do with the images. I can set references to the images up in the SharePoint site; however, all I can find is a way of mail merging when the images are in the same folder as the document you are trying to merge, and I can't find a sensible automated way to pull these images together into one single folder.
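
    For the images, the usual Word technique (hedged here: it assumes your merge data can carry an image URL or path column, e.g. pointing back at SharePoint) is an INCLUDEPICTURE field wrapping a MERGEFIELD, so each record's picture is fetched from wherever its path points rather than from one local folder:

        { INCLUDEPICTURE "{ MERGEFIELD ImagePath }" \d }

    Insert the braces with Ctrl+F9 rather than typing them ("ImagePath" is a placeholder column name), and after merging select all and press F9 to refresh the pictures.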

  • Shell Script, iterating over a folder

    - by Martin
    I have very basic shell scripting knowledge. I have photos under an original folder in many different folders, like this:
        folder
        + folder1
          + original
        + folder2
          + original
        + folder3
          + original
        + folder4
          + original
    Using mogrify, I'm trying to create thumbs under a thumb folder following a structure like this:
        folder
        + folder1
          + original
          + thumb
        + folder2
          + original
          + thumb
        + folder3
          + original
          + thumb
        + folder4
          + original
          + thumb
    I'm a little lost on how to write the shell script that may iterate through it. I'm OK giving mogrify its settings, but I don't completely understand how to tell the script to iterate through each folder to run the mogrify command.
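
    A minimal sketch of that loop (the 200x200 size and the .jpg extension are assumptions): for each original directory, create a sibling thumb directory and let mogrify write the resized copies there with -path.

        #!/bin/bash
        # For every .../original folder, build a sibling .../thumb of thumbnails
        for orig in folder/*/original; do
            thumb="$(dirname "$orig")/thumb"
            mkdir -p "$thumb"
            # -path writes results elsewhere instead of overwriting the originals
            mogrify -path "$thumb" -thumbnail 200x200 "$orig"/*.jpg
        done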

  • Built and migrated to software RAID (mdadm) on GPT disk, now can't assemble array

    - by John H
    mdadm, GPT issues, unrecognized partitions. Simplified question: how do I get mdadm to recognize GPT partitions?

    I have been attempting to convert/copy my Ubuntu 11.10 OS from a single drive to software RAID 1. I have done similar in the past, but in this case I was adding in a drive that had been configured for GPT, and I tried to work with that without fully looking into the implications. Currently, I have a non-booting mdadm RAID 1 array of /dev/md127 (the OS assigned that name and keeps picking it up). I am booting off of live USB keys, currently System Rescue CD from sysresccd. While gdisk and parted can see all the partitions, most of the OS utilities do not, including mdadm. My main goal is just to make the raid array accessible so I can pull the data off and start fresh (without using GPT).
        /dev/md127
        /dev/sda
          /dev/sda1 <- GPT type partition
            /dev/sda1 <- exists within the GPT part, member of md127
            /dev/sda2 <- exists within the GPT part, empty
        /dev/sdb
          /dev/sdb1 <- GPT type partition
            /dev/sdb1 <- exists within the GPT part, member of md127

    History:

    POINT A: The original OS was installed on sda (actually /dev/sda6). I used the Ubuntu live USB to add sdb. I got a warning from fdisk about GPT, so I used gdisk to create a raid partition (sdb1) and mdadm to create a raid1 mirror with a missing drive. I had many issues getting this working (including being unable to get grub to install), but I eventually got it to boot using grub on sda and /dev/md127 off of sdb. So at point A, I had copied my OS from sda6 to md127 on sdb. I then booted into a rescue mode and attempted to get a bootloader onto sdb, which failed. I then discovered my mistake: I had installed the raid onto sdb instead of sdb1, essentially overwriting the sdb1 partition.

    POINT B: I now had two copies of my data - one on md127/sdb, and one on sda. I destroyed the data on sda and created a new GPT table on sda. I then created sda1 for the raid array, and sda2 for a scratch partition. I added sda1 into the raid array and let it rebuild. md127 now covered /dev/sdb and /dev/sda1 as fully active and synced.

    POINT C: I rebooted onto linux rescue again and was still able to access the raid array. I then removed /dev/sdb from the array and created /dev/sdb1 for the raid. I added sdb1 to the array and let it sync. I was able to mount and access /dev/md127 without issues. Once it completed, both /dev/sda1 and /dev/sdb1 were GPT partitions and actively syncing.

    POINT D (current): I rebooted again to test whether the array would boot, and grub failed to load. I booted off of my live thumb drive and found that I can no longer assemble the raid array. mdadm doesn't see the required partitions.
    --
        root@freshdesk /root % uname -a
        Linux freshdesk 3.0.24-std251-amd64 #2 SMP Sat Mar 17 12:08:55 UTC 2012 x86_64 AMD Athlon(tm) II X4 645 Processor AuthenticAMD GNU/Linux

    === /proc/partitions and parted look good:
        root@freshdesk /root % cat /proc/partitions
        major minor  #blocks  name
           7     0     301788  loop0
           8     0  976762584  sda
           8     1  732579840  sda1
           8     2  244181703  sda2
           8    16  732574584  sdb
           8    17  732573543  sdb1
           8    32    7876607  sdc
           8    33    7873349  sdc1

        (parted) print all
        Model: ATA ST31000528AS (scsi)
        Disk /dev/sda: 1000GB
        Sector size (logical/physical): 512B/512B
        Partition Table: gpt
        Number  Start   End     Size    File system  Name                Flags
         1      1049kB  750GB   750GB   ext4
         2      750GB   1000GB  250GB                Linux/Windows data

        Model: ATA SAMSUNG HD753LJ (scsi)
        Disk /dev/sdb: 750GB
        Sector size (logical/physical): 512B/512B
        Partition Table: gpt
        Number  Start   End     Size    File system  Name        Flags
         1      1049kB  750GB   750GB   ext4         Linux RAID  raid

        Model: SanDisk SanDisk Cruzer (scsi)
        Disk /dev/sdc: 8066MB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos
        Number  Start   End     Size    Type     File system  Flags
         1      31.7kB  8062MB  8062MB  primary  fat32        boot, lba

    === # no sda2, and I doubt the sdb1 is the one shown in parted
        root@freshdesk /root % blkid
        /dev/loop0: TYPE="squashfs"
        /dev/sda1: UUID="75dd6c2d-f0a8-4302-9da4-792cc7d72355" TYPE="ext4"
        /dev/sdc1: LABEL="PENDRIVE" UUID="1102-3720" TYPE="vfat"
        /dev/sdb1: UUID="2dd89f15-65bb-ff88-e368-bf24bd0fce41" TYPE="linux_raid_member"

        root@freshdesk /root % mdadm -E /dev/sda1
        mdadm: No md superblock detected on /dev/sda1.
        # this is probably a result of me attempting to force the array up, putting superblocks on the GPT partition

        root@freshdesk /root % mdadm -E /dev/sdb1
        /dev/sdb1:
                  Magic : a92b4efc
                Version : 0.90.00
                   UUID : 2dd89f15:65bbff88:e368bf24:bd0fce41
          Creation Time : Fri Mar 30 19:25:30 2012
             Raid Level : raid1
          Used Dev Size : 732568320 (698.63 GiB 750.15 GB)
             Array Size : 732568320 (698.63 GiB 750.15 GB)
           Raid Devices : 2
          Total Devices : 2
        Preferred Minor : 127
            Update Time : Sat Mar 31 12:39:38 2012
                  State : clean
         Active Devices : 1
        Working Devices : 2
         Failed Devices : 1
          Spare Devices : 1
               Checksum : a7d038b3 - correct
                 Events : 20195

              Number   Major   Minor   RaidDevice State
        this     2       8       17        2      spare   /dev/sdb1
           0     0       8        1        0      active sync   /dev/sda1
           1     1       0        0        1      faulty removed
           2     2       8       17        2      spare   /dev/sdb1

    ===
        root@freshdesk /root % mdadm -A /dev/md127 /dev/sda1 /dev/sdb1
        mdadm: no recogniseable superblock on /dev/sda1
        mdadm: /dev/sda1 has no superblock - assembly aborted

        root@freshdesk /root % mdadm -A /dev/md127 /dev/sdb1
        mdadm: cannot open device /dev/sdb1: Device or resource busy
        mdadm: /dev/sdb1 has no superblock - assembly aborted
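
    Two hedged observations from that output rather than a definitive fix: sdb1's superblock is v0.90 metadata (stored at the end of the partition), which is why it survived the repartitioning while sda1's did not; and "Device or resource busy" usually means a half-assembled array is already holding sdb1. A cautious sketch:

        # See whether an md device has already grabbed sdb1, then release it
        cat /proc/mdstat
        mdadm --stop /dev/md127

        # Try assembling degraded from the one member with an intact superblock
        mdadm --assemble --run /dev/md127 /dev/sdb1

        # If it comes up, mount read-only and copy the data off before anything else
        mount -o ro /dev/md127 /mnt

    If mdadm still refuses because it records sdb1 as a spare, stop there; recreating the array with --assume-clean can recover it but destroys data if any parameter differs, so research that step carefully first.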

  • OS X server large scale storage and backup

    - by user135217
    I really hope this question doesn't come across as trolling or asking for buying advice. It's not intended. I've just started working for a small ad agency (40 employees). I actually quit being a system administrator a few years ago (too stressful!), but the company we're currently outsourcing our IT stuff to is doing such a bad job that I've felt compelled to get involved and do what I can to improve things.

    At the moment, all the company's data is stored on an 8TB external FireWire drive attached to a Mac Mini running OS X Server 10.6, which provides file sharing (using AFP) for the whole company. There is a single backup drive, which is actually a caddy containing two 3TB hard drives arranged in RAID 0 (arrggghhhh!), which someone brings in as and when, and copies over all the data using Carbon Copy Cloner. That's the entirety of the infrastructure, and the whole backup and restore strategy. I've been having sleepless nights.

    I've just started augmenting the backup process with FreeBSD, ZFS, sparse bundles and snapshot sends to get everything offsite. I think this is a workable behind-the-scenes solution, but for people's day-to-day use I'm struggling. Given the quantity and importance of the data, I think we should really be looking towards enterprise-level storage solutions, high availability and so on, but the whole company is all Mac all the time, and I cannot find equipment that will do what we need. No more Xserve; no rack storage; no large-scale storage at all apart from that Pegasus R6 that doesn't seem all that great; the Mac Pro has Fibre Channel, but it's not a real server and it's ludicrously expensive; Xsan looks like it's on the way out; things like heartbeatd and failoverd have apparently been removed from Lion Server; the new Mac Mini only has Thunderbolt, which severely limits our choices; the list goes on and on.

    I'm really, really not trying to troll here. I love Macs, but I just genuinely don't know where I'm supposed to look for server stuff. I have considered Linux or FreeBSD and netatalk for serving files with all the server-y goodness those OSes bring, but some of the things I've read make me wonder if it's really the way to go. Also, in my own (admittedly quite cursory) experiments with it, I've struggled to get decent transfer speeds. I guess there's also the possibility of switching everyone off AFP and making them use SMB or NFS, but I understand that this can cause big problems with resource forks and file locks. I figure there must be plenty of all-Mac companies out there. If you're the sysadmin at one, what do you use? Any suggestions very gratefully received.

  • MySQL Windows vs. Linux: performance, caveats, pros and cons?

    - by gravyface
    Looking for (preferably) some hard data, or at least some experienced anecdotal responses, with regards to hosting a MySQL database (roughly 5k transactions a day, 60-70% more reads than writes, < 100k of data per transaction, i.e. no large binary objects like images, etc.) on Windows 2003/2008 vs. a Debian-based derivative (Ubuntu/Debian, etc.). This server will function only as a database server, with a separate Web server on another physical box; this server will require remote access for management (SSH for Linux, RDP for Windows). I suspect that the Linux kernel/OS will compete less than Windows Server for resources, but of this I can't be certain. There's also the security footprint: even with Windows 2008, I'm thinking that the Linux box can be locked down more easily than the Windows Server. Anyone have any experience with both configurations?

  • Is a larger hard drive with the same cache, rpm, and bus type faster?

    - by Joel Coehoorn
    I recently heard that, all else being equal, larger hard drives are faster than smaller ones. It has to do with more bits passing under the read head as the drive spins - since a large drive packs the bits more tightly, the same amount of spin/time presents more data to the read head. I had not heard this before, and was inclined to believe that the read heads expected bits at a specific rate and the drives would instead stagger the data, so that the two drives would be the same speed. I now find myself looking at purchasing one of two computer models for the school where I work. One model has an 80GB drive, the other a 400GB (for ~$13 more). The size of the drive is immaterial, since users will keep their files on a file server where they can be backed up. But if the 400GB drive will really deliver a performance boost, the extra money is probably worth it. Thoughts?
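
    The density argument can be made concrete with rough, illustrative numbers (assumptions, not specs for these two models): both drives spin at 7200 rpm, i.e. 120 revolutions per second. If an outer track on the 80GB drive holds 0.5 MB while the denser 400GB drive fits 1.0 MB, sequential throughput is roughly 120 x 0.5 = 60 MB/s versus 120 x 1.0 = 120 MB/s - same rotation speed, nearly double the transfer rate, because the read channel runs at whatever bit rate the platter presents rather than throttling to a fixed one. Random-access work, dominated by seek time and rotational latency, sees little of this benefit.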

  • OpenSSH SFTP: chrooted user with access to other chrooted users' files

    - by HannesFostie
    Decided to re-phrase the question entirely in order to not have to make a new one. I currently have an SFTP server set up using OpenSSH's SFTP functionality. All my users are chrooted, and everything works. What I need most right now is for one user, which is not root (because this user can't have any real SSH powers!), to have access to all the other users' chrooted dirs. This user's job is to fetch all uploaded documents every once in a while. The directory structure as of now is:
        /home
          |_ /home/user1
          |_ /home/user2
          |_ /home/user3
    with ChrootDirectory set as /home/%u. User "adminuser" should have access to user1, user2 and user3's directories without having access to /home, or at the very least not to anything but /home. Bonus points for whoever can tell me how to let users write inside /home/%u without having to make a new directory inside that dir which they own themselves, rather than root as is the case with /home/%u (an OpenSSH chroot prerequisite).
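
    One hedged sshd_config sketch (the group and user names are placeholders): keep ordinary users jailed to their own directories, but jail adminuser one level up so every user's tree is visible inside its chroot. adminuser still needs filesystem read permission on the upload directories, e.g. via a shared group.

        # Ordinary users: jailed to their own directory
        Match Group sftponly
            ChrootDirectory /home/%u
            ForceCommand internal-sftp

        # The collector account: jailed at /home, sees every user's dir
        Match User adminuser
            ChrootDirectory /home
            ForceCommand internal-sftp

    For the bonus question: OpenSSH insists the chroot target be root-owned and not group-writable, so the usual pattern really is a root-owned /home/%u with a user-owned subdirectory inside; newer OpenSSH versions can at least drop users straight into it with internal-sftp -d /upload.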

  • USB Adapter for Memory Cards

    - by ktm5124
    I am looking for something like a USB adapter for memory cards. It is a cable that on one end hooks into a USB port on a computer, and on the other accepts any one of a variety of memory cards; essentially an "all-in-one" USB adapter. I'm told that they are sold all over... does anyone know what it is called or where to find them? I should clarify, by the way, that I mean the kind of memory card used by a video camera, and that the goal is to read the video data onto a computer, from a variety of card types, through a USB port. That way if you go to your friend's house and bring your computer, you can transfer the video data from his video camera to your computer, trusting that the adapter will have a slot for his kind of card.

  • Gentoo Linux -> Ubuntu: Can I Preserve My LVM/RAID Devices, Or Do I Need To Reformat?

    - by Eddie Parker
    Hello: I've got a Gentoo box that I'm interested in switching over to Ubuntu. I currently have the partitions laid out using a mixture of RAID (mdadm) and LVM2, as specified in this document [1]. Ideally I'd like to just wipe out everything except the /home partition, as it's got data I'd like to keep. Is it possible to reuse the current setup, or do I need to restart? vgdisplay, vgchange -a y, etc. don't yield any results from the Ubuntu LiveCD, and I'm wary of running any commands that might wipe my data. Your help would be appreciated.

    [1] http://www.gentoo.org/doc/en/gentoo-x86+raid+lvm2-quickinstall.xml
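
    The layout itself should be reusable - mdadm and LVM2 metadata live on the disks, not in the distro. A likely reason vgdisplay shows nothing (an assumption, but a common one) is that the Ubuntu desktop LiveCD ships without mdadm, so volume groups sitting on top of md devices never appear. A sketch that stays read-only with respect to your data:

        # Add the missing tools to the live session
        sudo apt-get update && sudo apt-get install mdadm lvm2

        # Assemble the arrays, then let LVM rescan and activate
        sudo mdadm --assemble --scan
        sudo vgscan
        sudo vgchange -ay
        sudo lvdisplay   # verify the logical volumes before touching the installer

    From there the installer's manual partitioning should be able to reuse the existing volumes, reformatting all of them except /home.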

  • Varnish doesn't seem to be caching

    - by Charlie Somerville
    I've set up a Varnish cache mirror to sit in front of a file server, but it seems to be endlessly re-downloading data from my file server. There's about 100GB of data in total, but so far Varnish has downloaded 800GB from my file server. I'm using the default VCL file that comes with Varnish, and the response headers for files served by the file server are similar to the following:
        HTTP/1.1 200 OK
        Cache-Control: max-age=290304000, public
        Content-Type: image/jpeg
        Expires: Wed, 29 Dec 2010 21:38:33 GMT
        Server: Microsoft-IIS/7.0
        E-Tag: "8b4723296ab697530768f18b1378b269"
        Content-Disposition: inline; filename=image046.jpg;
        X-AspNet-Version: 4.0.30319
        X-Powered-By: ASP.NET
        Date: Thu, 23 Dec 2010 05:38:33 GMT
        Content-Length: 100592
    I'm starting varnishd with the following options:
        varnish/sbin/varnishd -a 0.0.0.0:80 -f varnish/etc/varnish/default.vcl -s file,varnish/var/lib/varnish/varnish_storage.bin,100G
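
    Before changing anything, it's worth asking Varnish what it is doing (varnishstat's hit/miss counters, varnishlog per request). One frequent cause - an assumption here, since the request headers aren't shown - is that clients send a Cookie header, which the default VCL treats as "do not cache, pass to the backend". A sketch in Varnish 2.x VCL for static files:

        sub vcl_recv {
            # Client cookies force a pass in the default VCL; drop them for images
            if (req.url ~ "\.(jpg|jpeg|png|gif)$") {
                unset req.http.Cookie;
            }
        }

        sub vcl_fetch {
            # Likewise ignore backend Set-Cookie on the same objects
            if (req.url ~ "\.(jpg|jpeg|png|gif)$") {
                unset beresp.http.Set-Cookie;
            }
        }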

  • How to upgrade a single instance's size without downtime

    - by Justin Meltzer
    I'm afraid there may not be a way to do this since we're not load balancing, but I'd like to know if there is any way to upgrade an EC2 EBS-backed instance to a larger size without downtime. First of all, we have everything on one instance: both our app and our database (MongoDB). This is along the lines I'm thinking: I know you can create snapshots of your EBS volumes and an AMI of your instance. We already have an AMI and we create hourly snapshots. If I spin up a new, separate instance of a larger size and then restore (not sure what the right term is here) the snapshots so that our database is up to date, then I could switch the A record of our domain from the old IP address to the new one. However, I'm afraid that after copying over the data from the snapshot, in the time it takes to change the A record and have that change propagate, the data could potentially become stale. Is there a way to prevent this, and is there a better way to do this than I am suggesting?
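
    Since the database is MongoDB, replication can close most of that staleness window - a sketch of the idea (hostnames are placeholders), not a tested runbook: make the new, larger instance a replica of the old one, let it catch up continuously, then cut over.

        # Restart mongod on both instances as replica set members
        mongod --replSet rs0 ...

        # From the mongo shell on the old instance:
        rs.initiate()
        rs.add("new-big-instance.example.com:27017")

        # Once the new member is caught up, demote the old primary,
        # then switch the A record
        rs.stepDown()

    Lowering the DNS TTL on the A record ahead of time shrinks the propagation window further; the app still sees a brief blip at cutover, but the data stays in sync the whole time.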

  • Load Balancing a UDP server

    - by Hellfrost
    Hello StackOverflow, I have a UDP server; it is a central part of my business process. In order to handle the loads I'm expecting in the production environment, I'll probably need 2 or 3 instances of the server. The server is almost entirely stateless; it mostly collects data, and the layer above it knows how to handle the minimal amount of stale data that can arise from the multiple server instances. My question is, how can I implement load balancing between the servers? I would prefer to distribute the requests as evenly as possible between the servers. I would also like to have some affinity; I mean, if client X was routed to server Y, then I want all of X's subsequent requests to go to server Y, as long as it is sensible and doesn't overload Y. By the way, it is a .NET system... what would you recommend?

  • Robocopy hiding folders on backup drives

    - by Neil Barnwell
    I have a backup batch file that uses Robocopy to back up my files:
        robocopy "C:\" "G:\Default\RoboCopyBackup\C" /XF Pagefile.sys /XD "System Volume Information" "Recycler" "Temporary Internet Files" "Installer Cache" "Temp" /E /R:1 /W:0 /TEE /XJ
    This should create a folder structure on the external backup drive like so:
        G:\Default\RoboCopyBackup\C\...
    However, G: appears totally empty. What is weird is that the folders and files are there! If I type the above path into the address bar, I see all the files and folders! Can anyone help me work out why? I think it might be some NTFS-based ownership/permissions thing, but I'm not sure.
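
    This is the classic root-of-drive symptom: C:\ itself carries the hidden and system attributes, robocopy's /E copies directory attributes, and the destination folder inherits them - so Explorer hides it while a typed path still works. A one-line fix to run after each backup:

        rem Strip the inherited system+hidden attributes from the backup root
        attrib -s -h "G:\Default\RoboCopyBackup\C"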

  • Security of BitLocker with no PIN from WinPE?

    - by Scott Bussinger
    Say you have a computer with the system drive encrypted by BitLocker and you're not using a PIN so the computer will boot up unattended. What happens if an attacker boots the system up into the Windows Preinstallation Environment? Will they have access to the encrypted drive? Does it change if you have a TPM vs. using only a USB startup key? What I'm trying to determine is whether the TPM / USB startup key is usable without booting from the original operating system. In other words, if you're using a USB startup key and the machine is rebooted normally then the data would still be protected unless an attacker was able to log in. But what if the hacker just boots the server into a Windows Preinstallation Environment with the USB startup key plugged in? Would they then have access to the data? Or would that require the recovery key? Ideally the recovery key would be required when booted like this, but I haven't seen this documented anywhere.
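
    The scenario is easy to probe from WinPE itself, assuming your PE image includes the BitLocker tooling (the commands are standard manage-bde usage; the recovery password below is a placeholder):

        rem Is the OS volume visible but locked once PE is booted?
        manage-bde -status C:

        rem Unlocking from PE requires explicit key material, e.g.:
        manage-bde -unlock C: -RecoveryPassword 111111-222222-333333-444444-555555-666666-777777-888888
        rem ...or a saved external key file:
        manage-bde -unlock C: -RecoveryKey X:\keyfile.bek

    Note that a USB startup key is itself a .bek external key file, so anyone holding that USB stick has the key material regardless of which OS is booted; a TPM-only setup, by contrast, won't release the key to a foreign boot path, which is what forces the recovery key.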

  • How can I make the results of a formula into values that can be filtered or used with VLOOKUP in Excel

    - by Burt
    I am having an issue: I am using various formulas to move and split data, etc., from various sources. The problem is that when my final results post to the final destination I want, I still need to either run advanced filters or a VLOOKUP on the results. I can’t do this because, as an example, if cell A1 shows a value of A127, the actual cell content is:
        =RIGHT(A2,FIND(" ",A2&" ")-2)
    Everything I read said to copy and paste special values, but this doesn’t work for me, as the idea is to have the formulas/macros run everything and eliminate cutting and pasting. In the case above I have a formula that pulls that info from a spreadsheet that is saved every week. Once it is pulled, part of it is cut out into another column. I then need to run a VLOOKUP on those results for data already contained on another tab.
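
    If the goal is to keep everything hands-off, a small macro can freeze the formulas into values as the last step of the refresh (the sheet and range names are placeholders):

        Sub FreezeResults()
            ' Replace the formulas with their current results so filters
            ' and VLOOKUP work against plain values
            With Worksheets("Output").Range("A1:A1000")
                .Value = .Value
            End With
        End Sub

    Strictly speaking, filters and VLOOKUP already operate on a formula's result rather than its text, so if lookups are failing, stray spaces or text-versus-number mismatches in the results are the usual suspects.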

  • How to repair unbootable Fedora install

    - by Cerin
    How do you repair/reinstall Fedora without deleting any existing partitions or data? I was attempting to upgrade some old Fedora 13 servers to 17, following the instructions in the wiki. After the 14-to-15 upgrade, rebooting resulted in the output:
        Dropping to debug shell.
        sh: can't access tty; job control turned off
        dracut:/#
    Running dmesg also shows:
        dracut Warning: No root device "block:/dev/mapper/VolGroup-lv_root" found
    Googling shows this error is typically related to some weird RAID issues, but my server is a virtual machine not using any RAID. Using a rescue CD, I can chroot /mnt/sysimage, and all packages and data still seem to be there. How do I make the system bootable again?
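
    The usual repair for this particular dracut error - hedged, since it assumes the initramfs simply lost the LVM pieces during the interrupted chain of upgrades - is to rebuild the initramfs (and the bootloader config) from the rescue chroot; the kernel version string below is a placeholder:

        chroot /mnt/sysimage

        # Rebuild the initramfs for the installed kernel so it can find the LVM root
        ls /boot/vmlinuz-*    # note the installed version
        dracut --force /boot/initramfs-3.3.4-5.fc17.x86_64.img 3.3.4-5.fc17.x86_64

        # Refresh the bootloader configuration while still in the chroot
        grub2-mkconfig -o /boot/grub2/grub.cfg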
