Search Results

Search found 2668 results on 107 pages for 'tom tap'.


  • Any Recommendations for a Web Based Large File Transfer System?

    - by Glen Richards
    I'm looking for a server software product that:
    - allows my users to share large files with the general public, securely, with 1 or more people (notification via email, optionally with a token that gives them x period of time to download);
    - allows anyone in the general public to share files with my users, perhaps by invitation;
    - is user friendly enough that my users can use it without having to bug me as the admin;
    - can be installed on our own server (we don't want shared data sitting on anyone else's server);
    - is a web based solution; using some kind of secure comms channel would be good too, e.g. ssh;
    - handles files that could be over 1 GB.
    I found the question below, but WebDAV does not sound user friendly enough: http://serverfault.com/questions/86878/recommendations-for-a-secure-and-simple-dropbox-system I've done a lot of searching, but I can't get the search terms right. There are too many services that provide this, but I want something we can install on our own server. A last resort would be to roll my own. Any ideas appreciated. Glen EDIT: Sorry Tom and Jeff, but Glen specifically says that he's looking for a 'product', so given that I specialise in this field I thought my expertise in this area might be of use to him. I don't see how writing his own services is going to be easy for him to maintain going forward (a large IT admin overhead) or simple for his users and the general public to work with.

  • Why is SMF manifest losing configuration data when exported on SmartOS?

    - by Scott Lowe
    I'm running a server process under SMF (Service Management Facility) on Joyent's Base64 1.8.1 SmartOS image. For those not acquainted with SmartOS, it is a cloud-based distribution of IllumOS with KVM. But essentially it is like Solaris and inherits from OpenSolaris, so even if you've not used SmartOS, I'm hoping to tap into some Solaris knowledge on ServerFault. My issue is that I want an unprivileged user to be allowed to restart a service that they own. I have worked out how to do that by using RBAC, adding an authorisation to /etc/security/auth_attr and associating that authorisation with my user. I then added the following to my SMF manifest for the service:

      <property_group name='general' type='framework'>
        <!-- Allow to be restarted -->
        <propval name='action_authorization' type='astring'
                 value='solaris.smf.manage.my-server-process' />
        <!-- Allow to be started and stopped -->
        <propval name='value_authorization' type='astring'
                 value='solaris.smf.manage.my-server-process' />
      </property_group>

    And this works well when imported: my unprivileged user is allowed to restart, start and stop its own server process (this is for automated code deployments). However, if I export the SMF manifest, this configuration data is gone; all I see in that section is this:

      <property_group name='general' type='framework'>
        <property name='action_authorization' type='astring'/>
        <property name='value_authorization' type='astring'/>
      </property_group>

    Does anybody know why this is happening? Is my syntax wrong, or am I simply using SMF incorrectly?
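
    For what it's worth, a hedged way to narrow this down (the FMRI below is an assumption based on the service name above): compare the values in the live repository against both export forms, since on IllumOS-derived systems a plain svccfg export can omit administrative customizations that the -a form keeps.

      # Inspect the live values in the repository (FMRI assumed):
      svcprop -p general svc:/site/my-server-process:default
      # Compare the two export forms, if your svccfg supports -a:
      svccfg export my-server-process
      svccfg export -a my-server-process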

  • X.org - mouse gets stuck on press

    - by grawity
    I'm using Arch Linux. Very often, when I click on something, the OS thing sees the mouse press but not the release. If it was a link or file I clicked, moving the cursor would drag it too. Hammering the same mouse button again gives no effect. Usually, if I tap the touchpad (ALPS), the system finally sees both press and release of that, and I can continue working. (This might be because it uses a different driver - synaptics instead of evdev.) As you can imagine, this is quite annoying even for someone who spends 70% of his life in front of a terminal app. This is not a mouse issue - I'm on a laptop, and this affects both the Trackpoint thing and an external USB mouse. This is not a DE or window manager issue - I have used GNOME (with Metacity, Compiz and Xfwm4), Xfce (with Metacity and Xfwm4), mwm, twm, awesome, and wmii. Doesn't seem to be a hardware thing - after rebooting into Windows XP, everything works fine. hal is used for the auto-configuration of devices (as I have to disconnect the USB mouse often), so Xorg.conf really has nothing of relevance. Xorg -version shows: X.Org X Server 1.6.3.901 (1.6.4 RC 1) Release Date: 2009-8-25 X Protocol Version 11, Revision 0 If it changes anything, the laptop is stone-age Dell Latitude C840. I kinda suspect either hal or the evdev thing to cause it, but I really have no ideas on what to check further. In other words, HALP!#$ This thing is driving me nuts.
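
    A couple of low-level checks worth trying (a sketch; xev ships with X.org, xinput may need installing): a stuck click should show up as a ButtonPress event with no matching ButtonRelease.

      # Watch raw pointer events in a test window:
      xev | grep -i button
      # See which driver is handling each input device:
      xinput list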

  • Libvirt / QEmu Machine Fails and Refuses Restart Because of Memory Allocation Errors

    - by Elmar Weber
    I'm having a problem with libvirt. On a system restart all virtual machines (VMs) are started without a problem and keep running. Then at some point in time a set of machines shuts down, according to their logs. When I try to restart a machine, I get an error that the memory allocation failed, although more than enough memory is free:

      server ~ # free
                   total       used       free     shared    buffers     cached
      Mem:      16176648   16025476     151172          0     285432     950300
      -/+ buffers/cache:   14789744    1386904
      Swap:            0          0          0

      server ~ # virsh start zimbra
      error: Failed to start domain zimbra
      error: Unable to read from monitor: Connection reset by peer

      server ~ # tail -n 4 /var/log/libvirt/qemu/zimbra.log
      LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 3072 -smp 2,sockets=2,cores=1,threads=1 -name zimbra -uuid d05ddb7a-83c4-a77b-d8bc-a322648520cf -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/zimbra.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -drive file=/var/lib/libvirt/images/zimbra.img,if=none,id=drive-ide0-0-0,format=raw -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -netdev tap,fd=19,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:21:a9:ad,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -usb -vnc 192.168.1.2:25 -k de -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4
      char device redirected to /dev/pts/2
      Failed to allocate 3221225472 B: Cannot allocate memory
      2012-07-06 08:42:56.076+0000: shutting down

      server ~ # uname -a
      Linux server 3.2.0-26-generic #41-Ubuntu SMP Thu Jun 14 17:49:24 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    The system is an Ubuntu 12.04 server. The problem seems to have started with the last restart, which was done for a number of package upgrades and a kernel upgrade. I tried booting with the previous kernel; the problem persists. I have not been able to pinpoint an exact event when the machines fail; they do it at nearly the same time. The last time a duplicity backup job was running, but this was not always the case. Any suggestions on how to debug this? Best regards, elm
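
    For what it's worth, the free output above shows only about 1.4 GB free once buffers/cache are subtracted, which by itself could explain a failed 3 GB allocation. A few hedged checks (standard Linux interfaces, nothing libvirt-specific):

      # How strict is the kernel about overcommit? (0 = heuristic, 2 = strict)
      cat /proc/sys/vm/overcommit_memory /proc/sys/vm/overcommit_ratio
      # Any per-process virtual memory cap in the environment that starts qemu?
      ulimit -v
      # Reclaim page cache before retrying the VM start:
      sync && echo 3 > /proc/sys/vm/drop_caches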

  • RDP problem with Vista and Windows 7 destination

    - by MadBison
    I use a server at home to host a bunch of concurrently running Hyper-V VMs with different OSes and software for testing. I have Vista on the laptop, all latest SPs and patches. The server is Server 2008 R2, fully patched. The guests are a mix of XP, Vista, Server 2008 and Windows 7. If I connect to the Win XP or Server 2008 guests using RDP, it is always good: very quick, no speed issues. If I connect to the Vista or Win 7 guests, the response time is so slow it is unusable. Usually 6 or 8 seconds, and at times it is too long to measure! This happens from both the laptop running Vista and the server running Server 2008 R2. Does anyone know what the issue is with RDP to Vista and Windows 7 destinations? I did read this: http://blog.tmcnet.com/blog/tom-keating/microsoft/remote-desktop-slow-problem-solved.asp but that is not the problem I have; I have already applied that change to all PCs.
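
    One commonly suggested, reversible client-side test for exactly this symptom is disabling the Vista/7 TCP receive-window autotuning (run from an elevated prompt; a guess worth trying, not a confirmed fix):

      netsh interface tcp set global autotuninglevel=disabled
      rem and to restore the default afterwards:
      netsh interface tcp set global autotuninglevel=normal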

  • Laptop accessories for mobile warrior (light power adapter & case/bag)

    - by wonsungi
    Lugging my X301 between work and home, I realized my laptop's accessories weigh more than the laptop itself! I'm ordering a 2nd AC power adapter so I don't have to carry one at all, but I may as well get the lightest one possible. My X301 came with a pretty svelte 65W power adapter, but can anyone suggest a lighter power adapter, or confirm the weights I've found below?

      mass   vol       dimensions    W    Model
      ----   -------   -----------   ---  -------------------
      210g   149cm^3   108x46x30mm   65W  Coolermaster [NA 65]
      244g   189cm^3   140x75x18mm   65W  ThermalTake [ADP65W0001]
      260g   130cm^3   104x43x29mm   65W  Lenovo (came with X301)
      326g   198cm^3   145x76x18mm   95W  Coolermaster [SNA 95]
      330g   180cm^3   150x60x20mm   90W  Kensington USB [K38030US]

    Apple's 60W power adapter seems much smaller/lighter than the PC products listed above, so I think a better PC power adapter could exist. There are much smaller 45W "netbook" adapters, but are these too weak for my X301? I would not mind if it just meant the battery couldn't charge while the laptop was on, but I am afraid there could be worse consequences. Also, I have decided to swap my Logitech Kinetik briefcase for a Tom Bihn Ristretto: less protection, but much lighter, less bulky, and easier to carry. Any suggestions for better laptop cases/bags?

  • maximum number of connections Squid

    - by Isaac
    I have a Squid proxy server that controls all internet traffic for my network. I need a way to stop users from downloading big files (say 50MB) in my network. I banned some famous ports (e.g. torrent), but some downloads are still possible over the HTTP port, and obviously I cannot ban port 80! A simple solution is limiting the maximum number of simultaneous connections for each IP (e.g. 3 connections). It's possible in Squid with this config:

      acl ACCOUNTSDEPT src 192.168.5.0/24
      acl limitusercon maxconn 3
      http_access deny ACCOUNTSDEPT limitusercon

    But this solution has a really bad impact on web browsing, because any smart browser fetches different parts of a website over several simultaneous connections to speed up browsing. With a cap on the number of connections, the browser will fail to get some parts, and the website will be shown partially; some parts/images/frames will not be shown. So, can we limit the maximum number of persistent connections instead? I think this policy would work: specify a maximum number of connections that stay alive for more than 10 seconds, but leave the number of simultaneous connections for each IP unlimited. How can we implement this policy in Squid? With which config? UPDATE: artifex and Tom Newton offered a bandwidth-limiting approach to fight downloaders. But bandwidth-limiting in Squid has a shortcoming: it's static and cannot change dynamically, so a person has limited bandwidth no matter how many people are using the internet (maybe nobody!). Also, this solution doesn't stop people from downloading; they can still download, just at a lower speed. But if we find a way to terminate persistent connections (or any connection that stays alive more than a specific time), downloading big files will be almost impossible (there is always some way!).
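
    If the real goal is blocking big downloads rather than rationing connections, a hedged alternative is Squid's reply_body_max_size, which refuses any single reply above a size cap (syntax varies between Squid versions; the form below is the 3.x one, and it cannot catch replies sent without a Content-Length):

      acl ACCOUNTSDEPT src 192.168.5.0/24
      reply_body_max_size 50 MB ACCOUNTSDEPT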

  • NFS Datastore Appears Empty!

    - by daemonchild
    Hi guys, I've got an NFS server problem. The datastore is connected and appears to be a valid datastore in both the vSphere client and under /vmfs/volumes. The issue is that it appears to be empty! I can create files (e.g. touch /vmfs/volumes/nfs_common/thefile) and they are correctly written to the NFS store; I can verify this by looking on the NFS server itself. But the vmkernel only sees an empty datastore; the file disappears. Another FreeBSD box can mount the same NFS share and see the files correctly. Some useful data: ESXi 4.0.0 Build 208167; NFS is unfsd running on a Buffalo Linkstation Pro Duo (a bit hacky, I know); the share has file system permissions set to 777 at the moment. My /etc/exports is as follows, and as I say it connects fine:

      /mnt/array1/ESX_Shared 192.168.16.0/255.255.255.0(insecure,rw,sync,no_root_squash,no_subtree_check)

    The ESXi servers can also successfully mount NFS shares from other NFS servers. Any ideas guys? Thanks, Tom
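
    Two hedged checks from the ESXi side, since the FreeBSD box sees the files but the vmkernel does not (both tools ship with ESXi 4's tech support mode):

      # How the vmkernel itself sees the NFS mounts:
      esxcfg-nas -l
      # Query the volume through the VMkernel storage stack:
      vmkfstools -Ph /vmfs/volumes/nfs_common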

  • What steps to take when CPAN installation fails?

    - by pythonic metaphor
    I have used CPAN to install Perl modules on quite a few occasions, but I've been lucky enough to just have it work. Unfortunately, I was trying to install Thread::Pool today, and one of the required dependencies, Thread::Conveyor::Monitored, failed its tests:

      Test Summary Report
      -------------------
      t/Conveyor-Monitored02.t (Wstat: 65280 Tests: 89 Failed: 0)
        Non-zero exit status: 255
        Parse errors: Tests out of sequence.  Found (2) but expected (4)
                      Tests out of sequence.  Found (4) but expected (5)
                      Tests out of sequence.  Found (5) but expected (6)
                      Tests out of sequence.  Found (3) but expected (7)
                      Tests out of sequence.  Found (6) but expected (8)
                      Displayed the first 5 of 86 TAP syntax errors.
                      Re-run prove with the -p option to see them all.
      Files=3, Tests=258, 6 wallclock secs ( 0.07 usr 0.03 sys + 4.04 cusr 1.25 csys = 5.39 CPU)
      Result: FAIL
      Failed 1/3 test programs. 0/258 subtests failed.
      make: *** [test_dynamic] Error 255
      ELIZABETH/Thread-Conveyor-Monitored-0.12.tar.gz
      /usr/bin/make test -- NOT OK
      //hint// to see the cpan-testers results for installing this module, try:
        reports ELIZABETH/Thread-Conveyor-Monitored-0.12.tar.gz
      Running make install
        make test had returned bad status, won't install without force
      Failed during this command:
        ELIZABETH/Thread-Conveyor-Monitored-0.12.tar.gz: make_test NO

    What steps do you take to start seeing why an installation failed? I'm not even sure how to begin tracking down what's wrong.
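
    A reasonable starting point (a sketch; ~/.cpan/build is the usual default location) is to re-run the failing test file verbosely from the build directory CPAN leaves behind, and only force the install if the failure looks harmless:

      cd ~/.cpan/build/Thread-Conveyor-Monitored-0.12*/
      perl Makefile.PL && make
      prove -bv t/Conveyor-Monitored02.t   # -b adds blib, -v prints every test
      # if you decide to proceed anyway, in the cpan shell:
      #   force install Thread::Pool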

  • How important is dual-gigabit lan for a super user's home NAS?

    - by Andrew
    Long story short: I'm building my own home server based on Ubuntu with 4 drives in RAID 10. Its primary purpose will be NAS and backup. Would I be making a terrible mistake by building a NAS Server with a single Gigabit NIC? Long story long: I know the absolute max I can get out of a single Gigabit port is 125MB/s, and I want this NAS to be able to handle up to 6 computers accessing files simultaneously, with up to two of them streaming video. With Ubuntu NIC-bonding and the performance of RAID 10, I can theoretically double my throughput and achieve 250MB/s (ok, not really, but it would be faster). The drives have an average read throughput of 83.87MB/s according to Tom's Hardware. The unit itself will be based on the Chenbro ES34069-BK-180 case. With my current hardware choices, it'll have this motherboard with a Core i3 CPU and 8GB of RAM. Overkill, I know, but this server will be doing other things as well (like transcoding video). Unfortunately, the only Mini-ITX boards I can find with dual-gigabit and 6 SATA ports are Intel Atom-based, and I need more processing power than an Atom has to offer. I would love to find a board with 6 SATA ports and two Gigabit LAN ports that supports a Core i3 CPU. So far, my search has come up empty. Thus, my dilemma. Should I hold out for such a board, go with an Atom-based solution, or stick with my current single-gigabit configuration? I know there are consumer NAS units with just one gigabit interface (probably most of them), but I think I will demand a lot more from my server than the average home user. Any advice is appreciated. Thanks.
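
    For reference, the NIC bonding mentioned above is only a few lines in /etc/network/interfaces on Ubuntu (a sketch; assumes the ifenslave package, and the addresses are picked for illustration). Note that balance-rr helps aggregate throughput across clients, but a single TCP stream still tops out near one link:

      auto bond0
      iface bond0 inet static
          address 192.168.1.10
          netmask 255.255.255.0
          bond-mode balance-rr
          bond-miimon 100
          bond-slaves eth0 eth1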

  • Could local ISP capture my location whenever i launch a VPN to a VPN server?

    - by Ozgun Sunal
    I am extremely concerned about what information my ISP can collect once I am connected to a VPN server. For instance, as far as I know, when I start a connection to a HotSpotShield VPN server, an IP address is assigned to me just before a successful connection. Besides, I also get an extra IP address on the TAP adapter. An encryption tunnel is set up between me and the VPN server. Whenever my request for a website reaches the VPN server, it decrypts the data, and it then encrypts the reply that returns from the target web server. So the ISP cannot see what I am watching, displaying and writing, because the connection is encrypted. The target websites, on the other hand, see and record all my actions, but they cannot identify my real IP address. What I'm really concerned about is whether the ISP can see "my location". OK, the websites see an IP address from another country rather than my real one, but what can my ISP learn from the traffic going through them? Can they find out who I am? Won't they say "Hey, there is traffic here, but who is it and what is he doing right now?", since I get my Internet from them?

  • How to place a virtual machine in DMZ?

    - by Giordano
    I have an Ubuntu 12.04 server running a few virtual machines with KVM. I would like to expose some of these virtual machines on the internet, to make it possible for customers to test the products we're developing, and to make other products available for demo purposes. One of the server NICs is configured with a public IP. However, before exposing anything on the web, I would like to be sure that if one of the virtual machines gets compromised, the attacker cannot reach the rest of the hosts. What I would like to do is to put these virtual machines into a DMZ. These are the steps I'm planning:
    1. Create a tap interface on the virtualization host (let's say tap1).
    2. Create a bridge using tap1 and give it an IP in a subnet separate from the other hosts, let's say 10.0.0.1.
    3. Attach the DMZ virtual machines to the bridge and configure their IPs statically (10.0.0.2, 10.0.0.3, etc...).
    4. Using UFW, forbid any traffic from 10.0.0.0/24 to any of the internal hosts, allow traffic from the internal hosts towards 10.0.0.0/24, and expose the virtual machines on the web using port forwarding (see the sketch below).
    Do you think this setup is safe? Can you suggest any improvement or a better/safer approach? Thanks in advance!
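
    A sketch of step 4 (the 192.168.0.0/24 LAN subnet and the eth0 public NIC are assumptions; shown as plain iptables because these flows are forwarded rather than local, and assuming a default-ACCEPT FORWARD policy):

      # DMZ guests must not initiate connections into the internal LAN:
      iptables -A FORWARD -s 10.0.0.0/24 -d 192.168.0.0/24 \
               -m state --state NEW -j DROP
      # return traffic for LAN-initiated connections stays allowed:
      iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
      # expose a DMZ web server on the public interface:
      iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
               -j DNAT --to-destination 10.0.0.2:80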

  • Managing persistent data on an Amazon EC2 web server

    - by Derek
    I've just started trying out Amazon's EC2 service for running an ASP.NET web app which uses a SQL Server 2005 Express database. I have some questions about how to configure and operate it best for reliability, and I'm hoping to tap into some collective wisdom here, as this is my first foray into EC2. Here's how I have it configured currently:
    - OS: Windows 2003
    - SQL Server Express 2005
    - Web content stored on an EBS volume (E drive)
    - Database data stored on an EBS volume (E drive)
    - Database backups to the C drive, then copied off to S3
    - Elastic IP address attached to the production instance
    Now when I make a change to the OS configuration, I make a new AMI using the bundle feature. Unfortunately, I found that this results in significant downtime while the bundle is created and the new instance is started. It seems that when I'm ready to make a new AMI, I should:
    1. Start up a new temporary instance.
    2. Detach the EBS volume from the production instance.
    3. Detach the IP address from the production instance.
    4. Attach the IP address to the temporary instance.
    5. Attach the EBS volume to the temporary instance.
    6. Create an AMI from the production instance.
    7. After the production instance restarts, reverse the attach/detach steps to put it back in production.
    Is this the right order of events to prevent any chance of corrupting the EBS volume? Will the EBS volume become corrupt if I detach it while a database write is taking place? Should I snapshot the EBS volume of the production instance and attach it to the temporary instance instead (a rough sketch of that variant follows)? Or could taking a snapshot of the EBS volume while it's in use cause corruption? Any suggestions to improve the reliability and operations?
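
    The snapshot variant, in ec2-api-tools terms, looks roughly like this (the IDs, zone and device name are placeholders; note a snapshot of an in-use volume is only crash-consistent, so quiesce or back up SQL Server first):

      ec2-create-snapshot vol-12345678
      ec2-create-volume --snapshot snap-87654321 -z us-east-1a
      ec2-attach-volume vol-abcdef12 -i i-deadbeef -d xvdf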

  • Only tunnel certain applications via OpenVPN

    - by jinjin
    Hi, I've purchased a VPN solution. It works correctly when I have "redirect-gateway def1" in the configuration file (routing all traffic through the VPN). However, when I remove that line from the configuration file, I am still able to ping out of the machine (ping -I tap0), but I cannot ping the IP assigned to the machine (it's a public IP); I get the error: Destination Host Unreachable. I only want certain applications to send traffic through the VPN tunnel (e.g. ZNC, irssi), all of which let me select which IP they use. However, they can't receive any data, making the tunnel essentially useless to me when redirect-gateway is disabled. Any ideas on how to let specific applications use the tunnel, without forcing everything to go through it? My configuration file is as follows:

      dev tap
      remote #.#.#.#
      float #.#.#.#
      port 5129
      comp-lzo
      ifconfig #.#.#.# 255.255.255.128
      route-gateway #.#.#.#
      #redirect-gateway def1
      secret key.txt
      cipher AES-128-CBC

    The output of ifconfig -a when the tunnel is connected:

      tap0  Link encap:Ethernet  HWaddr 00:ff:47:d3:6d:f3
            inet addr:#.#.#.#  Bcast:#.#.#.#  Mask:255.255.255.255
            inet6 addr: <snip> Scope:Link
            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
            RX packets:612 errors:0 dropped:0 overruns:0 frame:0
            TX packets:35 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:100
            RX bytes:25704 (25.1 KiB)  TX bytes:6427 (6.2 KiB)

    EDIT: the Bcast:#.#.#.# (ifconfig) is different from route-gateway #.#.#.# (openvpn), if that makes any difference.
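
    The usual technique for this is source-policy routing: bind each application (ZNC's vhost, irssi's hostname setting) to the tunnel's IP, then route only packets with that source address via tap0. A sketch, using the placeholders from the config above:

      # one-time: name an extra routing table
      echo "100 vpn" >> /etc/iproute2/rt_tables
      # route traffic that originates from the tunnel IP via the tunnel:
      ip rule add from <tunnel-ip> table vpn
      ip route add default via <route-gateway> dev tap0 table vpn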

  • Boot drive not found issue after cloning using Apricorn EZgig

    - by TomWilsonFL
    A couple days ago I cloned a drive for someone using the EZgig software. Usually this goes without a hitch, but this particular drive I was cloning is quite old. When I restarted with the new drive I received the typical bootable disk not found message, so I turned it off, messed with the BIOS, restarted and it came up fine. That night I was working remotely on the computer and had to restart it. It didn't come back up; not a good sign. When the user came to the computer in the morning it was giving the same message. I have found that to make the computer boot, all I have to do is go into the BIOS and "Load Defaults", then restart. It will boot and runs great. Any thoughts on what is causing this situation? Is it MBR corruption? Are some settings being saved in the CMOS? A couple points of mention: I have already attempted looking for a BIOS update for the computer, but the newest is already installed (from 2003). When the computer reboots it either shows "None" for Primary Master, or sometimes it will just not show anything. Thanks, Tom

  • iTunes copy just metadata (song and album ratings, playlists) from iPod

    - by Jared Updike
    I have an iPod touch that I synced with my Windows computer (iTunes 9.0, I think) until my hard drive failed and I lost my entire library. I rebuilt the library (songs) from a year-old backup (and various other sources for songs), but my playlists and ratings are of course a year old. My iPod itself has most of the playlists and ratings I care most about (favorite songs and albums, rated 4 and 5, for example). I have a catch-22 situation where I feel nervous that I haven't backed up my iPod in around 4 months (when my drive failed), so I'd like to back it up as soon as possible... but if I back it up I have to clear all the songs and playlists and copy them back, which I can't really do since I need to rebuild my playlists on my computer first (using the data only available on my iPod!). The question: is there a better way to READ the information off my iPod than doing it manually, song by song, album by album and playlist by playlist (XML, text dump, database, spreadsheet, anything)? In other words, I mostly want the information (metadata like ratings and playlists, not songs) copied off the iPod so I can more quickly rebuild my iTunes library ratings and playlists (manually), so I can finally wipe the music and back up my apps, etc. Then I'd like to copy the music back immediately. The part I'd like to avoid is manually navigating everything on my iPod to read through all the playlists and ratings (50 GB, 6,000+ songs) as I re-enter all of that data by hand. I've done a few dozen albums and it's pretty time consuming having to tap around on the iPod. Reading from a spreadsheet (for example, or XML, which I could write a script to get into spreadsheet form) would probably help tremendously, plus then I'd have a backup of that information somewhere besides just my iPod.

  • Managing SharePoint permissions via Active Directory?

    - by rgmatthes
    My company has thousands of employees organized thoroughly via Active Directory. I have confidence in the accuracy of the Department and Title information displayed in the user profiles. I'm helping to put up a brand new SharePoint 2007 site, and I contacted IT about managing the site's permissions through AD Groups. The goal is to have the site automatically assign read/write/contribute/whatever permissions based on the information in AD. For example, we could create an AD Group called "Managers" that would contain anyone with the "Manager" title in their AD user profile. I would have SharePoint tap into this AD Group to mass assign permissions if I knew all managers would need a certain level of access (read/write/contribute/whatever). Then if a manager joins the company or leaves it, the group is automatically updated (provided AD gets updated, of course). My IT rep called back and said it couldn't be done. This seems like a pretty straightforward business requirement, and one of the huge benefits of having Active Directory, but maybe I'm mistaken. Could anyone shed some light on this? A) Is it possible to use dynamically-updated AD Groups when assigning permissions via SharePoint? (Does anyone know of a guide I could show my doubtful IT rep?) B) Is there a "best practice" way to go about this? I've read some debate on whether SharePoint Groups or AD Groups are the way to go. My main concern is dynamic updating. C) If this isn't available out of the box, can someone recommend third-party software that will provide the functionality I'm looking for? A big thanks to anyone who can help me out!!

  • DRBD as a block device for XEN VM (Centos 5.3)

    - by SaberTooth
    Hi all, I have set up a DRBD resource between two server nodes; everything works correctly when doing sync tests between the two. (I want to create an HA cluster using DRBD, Xen and Heartbeat.) However, when I try to create a Xen VM with CentOS as the guest operating system, I get through to the partitioning screen on the install, but when I select a partitioning type the next screen gives me the following error: "An error has occurred - no valid devices were found on which to create new file systems. Please check your hardware for the cause of this problem." This is the first time attempting a setup like this, and searching Google does not help much... My config files for DRBD and Xen:

    DRBD (just the section that is pertinent):

      on xennode0 {
        device /dev/drbd0;
        disk /dev/sda5;
        address X.X.X.X:7788;
        flexible-meta-disk internal;
      }
      on xennode1 {
        device /dev/drbd0;
        disk /dev/sda5;
        address X.X.X.X:7788;
        meta-disk internal;
      }

    Xen:

      kernel = "/boot/xeninstall/vmlinuz"
      ramdisk = "/boot/xeninstall/initrd.img"
      extra = "text"
      name = "VM"
      maxmem = 3000
      memory = 3000
      vcpus = 4
      on_poweroff = "destroy"
      on_reboot = "restart"
      on_crash = "restart"
      vfb = [ ]
      disk = [ "phy:/dev/drbd0,sda1,w", "tap:aio:/srv/xen/xenswap.img,sda2,w" ]
      vif = [ "mac=00:16:3e:11:67:ae,bridge=xenbr0" ]
      root = "/dev/sda1 ro"

    Thanks in advance!
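
    Two hedged things to check: the node must be primary on the DRBD resource before the guest starts, and the CentOS installer generally wants a whole disk it can partition, whereas "sda1" presents the DRBD device to the guest as a single partition. A sketch (the resource name r0 is an assumption):

      # on the VM host, before starting the guest:
      drbdadm primary r0
      # in the Xen config, present whole-disk names instead of partitions:
      disk = [ "phy:/dev/drbd0,xvda,w", "tap:aio:/srv/xen/xenswap.img,xvdb,w" ]
      root = "/dev/xvda1 ro"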

  • Network Sniffing and Hubs

    - by Chris_K
    This will likely seem naive to the experts... but it has been on my mind lately. For years I've been using ntop and a cheap 4 port hub to sniff client networks to determine who's doing what -- and how much. Great way to see what's going on when they call and say "Geeze, the network seems really slow today." No need to bring in a managed switch (or access the existing one) and no need to configure spanning or mirroring. I just drop in the hub inline where I want to measure. Lately I noticed it is just about impossible to buy a real honest-to-goodness hub anymore. While looking for a new one, I had someone tell me that I should be sure to get a full-duplex hub or I'd only be seeing half the traffic when I monitor. Really? I've been using a crusty old Netgear DS104 all this time. No clue if it is half or FD. Have I really been understating my measurements? I'm just not bright enough about the physical layer to really know... Side note: Just ordered a Dualcomm Ethernet Switch TAP as a hub replacement. Seems like a nifty gadget. Any notes or tips about it would be welcome in the comments :-)

  • How can I generate filesystem images that are usable on many different virtualization systems?

    - by Mark Longair
    I have written a script that generates a root filesystem image (based on Debian lenny) suitable for User-Mode Linux. (Essentially this script creates a filesystem image, mounts it with a loop device, uses debootstrap to create a lenny install, sets up a static IP for TUN/TAP networking, adds public keys for login by SSH and installs a web application.) These filesystem images work pretty well with UML, but it would be nice to be able to generate similar images that people can use on alternative virtualization software, and I'm not familiar with these options at all. In particular, since the idea is to use this image as a standalone server for testing the web application, it's important that the networking works. I wonder if anyone can suggest what would be involved in customizing such root filesystem images such that they could be used with other virtualization software, such as VMware, Xen or as an Amazon EC2 instance? Two particular concerns are: If such systems don't use a raw filesystem image (e.g. they need headers with metadata or are compressed in some particular way) do there exist tools to convert between the different formats? I assume that in the filesystem, at least /etc/network/interfaces will have to be customized, but are more involved changes likely to be necessary? Many thanks for any suggestions...
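
    On the format-conversion half of the question, qemu-img covers most of these targets from a raw image (a sketch; the tool is in Debian's qemu-utils package, and EC2 additionally needs its own AMI bundling step on top of the raw file):

      qemu-img convert -f raw -O vmdk  rootfs.img rootfs.vmdk    # VMware
      qemu-img convert -f raw -O qcow2 rootfs.img rootfs.qcow2   # QEMU/KVM
      qemu-img convert -f raw -O vpc   rootfs.img rootfs.vhd     # VirtualPC/Hyper-V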

  • Configuring iPad Mail app & Gmail app with different accounts? [migrated]

    - by Steve Crane
    I prefer to use the Gmail app over the standard Mail app on my iPad for reading my personal Gmail (I delete a lot of mails, newsletters, etc., after reading and this is one tap in Gmail and several in Mail). I have them set up so my personal Gmail uses the Gmail app and my work email is set up to use the standard Mail app. This all works fine except for one problem. If I'm in Gmail or Mail and send an email it sends from the relevant email address as expected. My problem is that when I share something via email from Safari or another app it sends from the email address configured in Settings for Mail (the work one) and I would prefer to do such sharing from my personal email address. Does anyone know if there is a way to achieve this? I could switch the addresses to use the other app but as I never delete work email and delete personal mail at least 50% of the time, the behaviour of the apps is perfect the way I have them set up; if only I could solve that one little problem of controlling where shared items are sent from. I am using an iPad 2 with iOS 5.1 should that be relevant.

  • How to delete row in TTTableViewController (Three20)

    - by Dmitry
    Hi, Three20 is a great library: it makes working with tables very easy. But one gap I've noticed is deleting rows with animation. I've spent many hours trying to do this, and this is what I'm doing:

      // get the current index path
      UITableView *table = [super tableView];
      NSIndexPath *indexPath = [table indexPathForSelectedRow];
      // delete the row in the data source
      TTListDataSource *source = (TTListDataSource *)self.dataSource;
      [source.items removeObjectAtIndex:indexPath.row];
      // delete the row from the table, with animation
      [table deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath]
                   withRowAnimation:UITableViewRowAnimationFade];

    After this code runs, the error appears: [NSCFArray objectAtIndex:]: index (0) beyond bounds (0). I'm using a class inherited from TTTableViewController, and I don't implement the DataSourceDelegate or TableViewDelegate because I'm using Three20 as much as possible. So, this is how I create the data source:

      for (NSDictionary *album in (NSArray *)albums) {
          TTTableRightImageItem *item =
              [TTTableRightImageItem itemWithText:@"name"
                                         imageURL:@"image url"
                                     defaultImage:nil
                                       imageStyle:TTSTYLE(rounded)
                                              URL:@"url of selector where I actually catch the tap on a row"];
          [tableItems addObject:item];
      }
      self.dataSource = [TTListDataSource dataSourceWithItems:tableItems];

    I read this article (article) but it doesn't help :( So, I know that this is very simple for the gurus on this site. Can you just give me an advanced sample of how to do this? Thanks

  • ExtJS: Combobox in EditorGridPanel not selecting the desired item (with test case)

    - by TomH
    I'm using ExtJS to create an EditorGridPanel with a combobox as the editor for one cell, and the combobox is not working as I'd expect it to. When the user types the first letter of an item in the drop-down list, the combobox seems to ignore it and selects the first item in the list. I can reproduce the error consistently and have put together a test case here: http://cluebucket.com/dev/testcase/testcase.html Load the page and reproduce the behavior as follows; note that this is all done using the keyboard, no mouse clicks:
    1. Click 'Add Record' (a new row is added to the grid) and enter text in the text field.
    2. TAB to the Priority field without selecting anything (None remains selected), then TAB out of the Priority field (a new row is added to the grid).
    3. Enter text and TAB to the Priority field. Type v (Very High is selected), then TAB out of the Priority field (a new row is added to the grid).
    4. Enter text and TAB to the Priority field. Type v (None is selected, but Very High should have been), then TAB out of the Priority field.
    5. Enter text and TAB to the Priority field. Type l ('el') (Low is selected). TAB out, enter text, TAB to Priority, and type l (None is selected).
    It appears that whenever the user attempts to select the same value that was selected in the previous row, the combobox selects None. Any ideas? The code is available at cluebucket.com/dev/testcase/js/testcase.js Thoughts/pointers/corrections are appreciated!! thanks tom
