Search Results

Search found 15798 results on 632 pages for 'authentication required'.


  • Setting up a home server - what to use? (ZFS vs btrfs, BSD vs Linux, misc other requirements)

    - by monch1962
    I need to get all our home content off individual machines and onto a central server. What I'd like to have is the metaphorical "server under the stairs".

    Stuff we need:

    - Expandable storage. I want to be able to add extra discs as we go along, with minimal maintenance required. Currently we've got about 3 TB of files we need to host, and that's likely to grow by another TB every 6-12 months based on recent history. I need to be able to add additional discs with minimal pain.
    - Needs to store all the media (i.e. photos, video, music) we have, and run services to serve the various devices we have in the house for playback (e.g. DAAP so we can play stuff through iTunes, ccxstream so we can play stuff over XBMC). DAAP and ccxstream are needed now, but we also need to support new standards as they emerge (so a closed-box solution isn't going to work).
    - RAID 5, or something broadly equivalent (e.g. RAID-Z).
    - BitTorrent client.
    - ssh, NFS, Samba access.
    - Snapshot capability (as in ZFS), so we can snapshot individual file systems regularly and roll back when my kids delete their school assignments the day before they're due...
    - Ability to recover quickly from power outages (it's not unusual for us to have power outages that last longer than our UPS's batteries).
    - FOSS software.
    - A modern distributed version control system running on the box, such as Mercurial.

    Stuff I'd like to have on the server, but can live without:

    - PVR capability, so I could record TV to the box.
    - Web server. We currently run a small Web server on a very old box, and I'd ideally like to turn the old box off and move the content to the new server, just to save some electricity.
    - Nagios + mrtg.

    I've been looking at using an EEE Box as the server, primarily because I can get them cheap and they don't consume much power. The choice of OS and file system is more difficult. From what I've found:

    - I've got most experience with various Linux distros, but am happy to use another Unix.
    - FreeBSD and OpenSolaris seem to be the best choices for hosting ZFS.
    - OpenSolaris' hardware support is nowhere near as good as e.g. Ubuntu's.
    - btrfs, while looking very good, doesn't seem ready for prime time yet.
    - ZFS doesn't let you (easily?) add new discs to a RAID 5 or RAID-Z.
    - Reading around, it seems that ZFS is a bit short of tools for recovering lost data.

    At the moment, I'm leaning towards running FreeNAS+ZFS, but I'm concerned about the requirement to be able to add new discs on a fairly regular basis to an existing RAID-Z. Can anyone provide some recommendations, or share experiences? Thanks in advance
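
    A minimal sketch of the snapshot and rollback workflow described above, assuming a hypothetical pool named tank with a dataset per family member:

        # take a dated snapshot of one file system
        zfs snapshot tank/home/kids@2010-06-01
        # list the snapshots kept for it
        zfs list -t snapshot -r tank/home/kids
        # roll the file system back (discards changes made since the snapshot)
        zfs rollback tank/home/kids@2010-06-01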

    Read the article

  • hostapd running on Ubuntu Server 13.04 only allows a single station to connect when using WPA

    - by user450688
    Problem: Only a single station can connect to hostapd at a time. Any single station can connect (W8, OSX, iOS, Nexus), but when two or more hosts are connected at the same time, the first client loses its connectivity. However, there are no connectivity issues when WPA is not used.

    Setup: Linux (Ubuntu Server 13.04) wireless router, with separate networks for wired WAN, wired LAN, and wireless LAN.

    iptables-save output:

        *nat
        :PREROUTING ACCEPT [0:0]
        :INPUT ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        :POSTROUTING ACCEPT [0:0]
        -A POSTROUTING -s 10.0.0.0/24 -o p4p1 -j MASQUERADE
        -A POSTROUTING -s 10.0.1.0/24 -o p4p1 -j MASQUERADE
        COMMIT
        *mangle
        :PREROUTING ACCEPT [13:916]
        :INPUT ACCEPT [9:708]
        :FORWARD ACCEPT [4:208]
        :OUTPUT ACCEPT [9:3492]
        :POSTROUTING ACCEPT [13:3700]
        COMMIT
        *filter
        :INPUT DROP [0:0]
        :FORWARD DROP [0:0]
        :OUTPUT ACCEPT [9:3492]
        -A INPUT -i p4p1 -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -i p4p1 -p tcp -m tcp --dport 22 -m state --state NEW -j ACCEPT
        -A INPUT -i eth0 -j ACCEPT
        -A INPUT -i wlan0 -j ACCEPT
        -A INPUT -i lo -j ACCEPT
        -A FORWARD -i p4p1 -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A FORWARD -i eth0 -j ACCEPT
        -A FORWARD -i wlan0 -j ACCEPT
        -A FORWARD -i lo -j ACCEPT
        COMMIT

    /etc/hostapd/hostapd.conf:

        #Wireless Interface
        interface=wlan0
        driver=nl80211
        ssid=<removed>
        hw_mode=g
        channel=6
        max_num_sta=15
        auth_algs=3
        ieee80211n=1
        wmm_enabled=1
        wme_enabled=1
        #Configure Hardware Capabilities of Interface
        ht_capab=[HT40+][SMPS-STATIC][GF][SHORT-GI-20][SHORT-GI-40][RX-STBC12]
        #Accept all MAC address
        macaddr_acl=0
        #Shared Key Authentication
        wpa=1
        wpa_passphrase=<removed>
        wpa_key_mgmt=WPA-PSK
        wpa_pairwise=CCMP
        rsn_pairwise=CCMP
        ###IPad Connectivevity Repair
        ieee8021x=0
        eap_server=0

    Wireless card (lshw output):

        product: RT2790 Wireless 802.11n 1T/2R PCIe
        vendor: Ralink corp.
        physical id: 0
        bus info: pci@0000:03:00.0
        logical name: mon.wlan0
        version: 00
        serial: <removed>
        width: 32 bits
        clock: 33MHz
        capabilities: pm msi pciexpress bus_master cap_list logical wireless ethernet physical
        configuration: broadcast=yes driver=rt2800pci driverversion=3.8.0-25-generic firmware=0.34 ip=10.0.1.254 latency=0 link=yes multicast=yes wireless=IEEE 802.11bgn

    iw list output:

        Band 1:
        Capabilities: 0x272
            HT20/HT40
            Static SM Power Save
            RX Greenfield
            RX HT20 SGI
            RX HT40 SGI
            RX STBC 2-streams
            Max AMSDU length: 3839 bytes
            No DSSS/CCK HT40
        Maximum RX AMPDU length 65535 bytes (exponent: 0x003)
        Minimum RX AMPDU time spacing: 2 usec (0x04)
        HT RX MCS rate indexes supported: 0-15, 32
        TX unequal modulation not supported
        HT TX Max spatial streams: 1
        HT TX MCS rate indexes supported may differ
        Frequencies:
            * 2412 MHz [1] (27.0 dBm)   * 2417 MHz [2] (27.0 dBm)   * 2422 MHz [3] (27.0 dBm)
            * 2427 MHz [4] (27.0 dBm)   * 2432 MHz [5] (27.0 dBm)   * 2437 MHz [6] (27.0 dBm)
            * 2442 MHz [7] (27.0 dBm)   * 2447 MHz [8] (27.0 dBm)   * 2452 MHz [9] (27.0 dBm)
            * 2457 MHz [10] (27.0 dBm)  * 2462 MHz [11] (27.0 dBm)
            * 2467 MHz [12] (disabled)  * 2472 MHz [13] (disabled)  * 2484 MHz [14] (disabled)
        Bitrates (non-HT):
            * 1.0 Mbps  * 2.0 Mbps (short preamble supported)
            * 5.5 Mbps (short preamble supported)  * 11.0 Mbps (short preamble supported)
            * 6.0 Mbps  * 9.0 Mbps  * 12.0 Mbps  * 18.0 Mbps
            * 24.0 Mbps  * 36.0 Mbps  * 48.0 Mbps  * 54.0 Mbps
        max # scan SSIDs: 4
        max scan IEs length: 2257 bytes
        Coverage class: 0 (up to 0m)
        Supported Ciphers:
            * WEP40 (00-0f-ac:1)  * WEP104 (00-0f-ac:5)  * TKIP (00-0f-ac:2)  * CCMP (00-0f-ac:4)
        Available Antennas: TX 0 RX 0
        Supported interface modes:
            * IBSS  * managed  * AP  * AP/VLAN  * WDS  * monitor  * mesh point
        software interface modes (can always be added):
            * AP/VLAN  * monitor
        valid interface combinations:
            * #{ AP } <= 8, total <= 8, #channels <= 1
        Supported commands:
            * new_interface  * set_interface  * new_key  * new_beacon  * new_station
            * new_mpath  * set_mesh_params  * set_bss  * authenticate  * associate
            * deauthenticate  * disassociate  * join_ibss  * join_mesh
            * set_tx_bitrate_mask  * set_tx_bitrate_mask  * action  * frame_wait_cancel
            * set_wiphy_netns  * set_channel  * set_wds_peer
            * Unknown command (84)  * Unknown command (87)  * Unknown command (85)
            * Unknown command (89)  * Unknown command (92)
            * testmode  * connect  * disconnect
        Supported TX frame types:
            * IBSS: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0
            * managed: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0
            * AP: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0
            * AP/VLAN: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0
            * mesh point: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0
            * P2P-client: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0
            * P2P-GO: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0
            * Unknown mode (10): 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0
        Supported RX frame types:
            * IBSS: 0x40 0xb0 0xc0 0xd0
            * managed: 0x40 0xd0
            * AP: 0x00 0x20 0x40 0xa0 0xb0 0xc0 0xd0
            * AP/VLAN: 0x00 0x20 0x40 0xa0 0xb0 0xc0 0xd0
            * mesh point: 0xb0 0xc0 0xd0
            * P2P-client: 0x40 0xd0
            * P2P-GO: 0x00 0x20 0x40 0xa0 0xb0 0xc0 0xd0
            * Unknown mode (10): 0x40 0xd0
        Device supports RSN-IBSS.
        HT Capability overrides:
            * MCS: ff ff ff ff ff ff ff ff ff ff
            * maximum A-MSDU length
            * supported channel width
            * short GI for 40 MHz
            * max A-MPDU length exponent
            * min MPDU start spacing
        Device supports TX status socket option.
        Device supports HT-IBSS.
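
    A commonly suggested workaround for rt2800-family APs that drop the first WPA station when a second associates is to turn off hardware encryption in the driver and let hostapd do the crypto in software; a hedged sketch, assuming the rt2800pci module exposes the nohwcrypt option (check with modinfo rt2800pci first):

        # reload the driver with hardware crypto disabled
        modprobe -r rt2800pci
        modprobe rt2800pci nohwcrypt=1
        # make the setting persistent across reboots
        echo "options rt2800pci nohwcrypt=1" > /etc/modprobe.d/rt2800pci.conf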

    Read the article

  • How to move Mailboxes over from old Exchange 2007 to new EBS 2008 network?

    - by Qwerty
    This q is similar to: http://serverfault.com/questions/39070/how-to-move-exchange-2003-mailbox-or-store-from-2003-to-2007-on-separate-networks

    Basically I am trying to move our Exchange mailboxes over to a test domain that is hosting EBS 2008 with Exchange 2007. We plan to move as soon as we can when we have our Exchange data over. I have tried moving a DB with mailboxes over, but cannot get it to mount in the new Exchange in any way possible, including mounting it onto a recovery store.

    From my understanding, the ONLY prerequisite for moving Exchange DBs across is that it must have the same organizational name (unlike previous versions of Exchange). If anyone has any insight as to why I cannot mount and simply reattach the mailboxes, please give me an idea as to what could be wrong. It should be as simple as this. Note that the DBs I have are in a clean state.

    I cannot use ExMerge because I am not running any mailboxes on 2003. I have also tried using a 32-bit Vista machine with the Export-Mailbox cmdlet to extract mailboxes, but anything I do to it results in permission errors. I have tried to troubleshoot these with no success. I am running as full admin with proper Exchange roles, and yet it still gives me access denied errors:

        Export-Mailbox : MapiExceptionNetworkError: Unable to make admin interface
        connection to server. (hr=0x80040115, ec=-2147221227)

    Also some errors show in the management console:

        get-MailboxDatabase Completed
        Warning: ERROR: Could not connect to the Microsoft Exchange Information Store
        service on server TATOOINE.baytech.local. One of the following problems may be
        occurring:
        1- The Microsoft Exchange Information Store service is not running.
        2- There is no network connectivity to server TATOOINE.baytech.local.
        3- You do not have sufficient permissions to perform this command. The following
           permissions are required to perform this command: Exchange View-Only
           Administrator and local administrators group for the target server.
        4- Credentials have been cached for an unpriviledged user. Try removing the
           entry for this server from Stored User Names and Passwords.

    Why I have to use a 32-bit machine to export a simple .pst file is beyond me... So yeah, I am now out of ideas and any help would be great! Thanks in advance.
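
    For reference, a minimal sketch of the Export-Mailbox call being attempted, with a placeholder mailbox identity and export path (Exchange 2007 SP1 requires the 32-bit management tools plus Outlook for PST export):

        # from the 32-bit Exchange Management Shell; identity and path are placeholders
        Export-Mailbox -Identity "jsmith" -PSTFolderPath "C:\PSTExport"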

    Read the article

  • Can't connect to shared folders anymore?

    - by HuskyHuskie
    My home server is running Windows Server 2008 R2. I've had it running for almost a year now without any issues with shared folders. This past week I had an issue with my modem which required it to be power cycled, and with that I power cycled my router. After that, I haven't been able to connect to my shared network folders. I have no idea why that would even cause an issue, as I've power cycled my networking equipment in the past without issues, and none of my settings appear to have been lost.

    I am mapping these drives on my Windows 7 Ultimate machine using "Map Network Drive"; from there I enter \\SERVER\Storage, as I'm trying to connect to my shared folder named Storage. I receive the following error every time I try mapping the drive:

        Windows cannot access \\Server\Storage
        Check the spelling of the name. Otherwise there might be a problem with your
        network. To try to identify and resolve network problems, click Diagnose.
        Details: Error code: 0x80070035
        The network path was not found.

    When I click Diagnose I get the following:

        Problems found
        file and print sharing resource (SERVER) is online but isn't responding to
        connection attempts.
        The remote computer isn't responding to connection on port 445, possibly due
        to firewall or security policy settings, or because it might be temporarily
        unavailable. Windows couldn't find any problems with the firewall on your
        computer.

    I've tried this from multiple computers with the same issue too. To resolve the problem, so far I've tried:

    - Disabling the firewall on SERVER
    - Reinstalling File Services
    - Modifying NetBT\Parameters registry values
    - Adding a custom inbound rule for port 445
    - Adding port forwarding on my router for port 445
    - Recreating the shared folders
    - Checking and rechecking the shared folder permissions
    - Resetting my user account password on the server used to access the shared folder

    I'm pulling my hair out with this problem, mainly because it came out of nowhere. It was working fine the night before, and the next day it just stopped working. Any ideas of what I could try next are much appreciated. It should also be noted that this server is used as a web server too, and that functionality still works correctly.
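
    A few quick checks that narrow down whether port 445 is reachable at all; a hedged sketch from the Windows 7 client (the server IP is a placeholder, and the telnet client may need to be enabled under Windows Features first):

        rem is anything answering on the SMB port?
        telnet SERVER 445
        rem does the share work by IP, ruling out name resolution?
        net use Z: \\192.168.1.10\Storage
        rem on SERVER itself: is it actually listening on 445?
        netstat -an | findstr :445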

    Read the article

  • Snow Leopard can see Windows shares in Finder but can't connect

    - by Randy Miller
    I have an iMac with the latest version of Snow Leopard on it. I have a NAS drive and a Windows machine that both show up in the Finder's 'Shared' section. However, if I click on them, Finder says "Connection Failed". Clicking on 'Connect As...' gives an error dialog that says "The server 'blah' may not exist or it is unavailable at this time."

    Points of interest:

    - All machines are receiving their IP/DNS info from the router using DHCP.
    - I have a Mac Mini on the same network that connects to the NAS drive and Windows machine perfectly with no config (i.e. worked out of the box).
    - Both Macs are on the same version of Snow Leopard.
    - There is no password required to access the NAS share.
    - I've never set up a WINS server on any machine, and all machines are using 'workgroup' by default. I've tried putting "workgroup" in the Mac's workgroup entry and have tried leaving it blank; neither solves the problem.

    Here are some things I have tried:

    - Finder > Connect To Server: smb:///share. This works, but by name it does not.
    - Terminal: mount_smbfs //@/share share. This also works by IP, but not by name, resulting in "mount_smbfs: server connection failed: No route to host".
    - If I put the IP address of the NAS in the WINS server entry in the Mac's network setup, I can connect by name.

    It obviously seems to be a name resolution error, but I can't figure out why. The only thing that has changed since it used to work is that I got a new router that now gives out DHCP addresses of 192.168.x.x (all machines are DHCP clients), but it used to be 10.0.x.x. I've grep'd through everything that might have saved that old address, but can't find anything. It's also worth noting that the second Mac (the one that connects successfully) was added to the network after the router change. Please let me know if there are additional points of information needed to troubleshoot this further. Thanks, Randy
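
    Since this smells like NetBIOS name resolution, a couple of hedged checks from the Mac's Terminal using the smbutil tool that ships with OS X (the NAS name and IP are placeholders):

        # resolve the NetBIOS name directly, bypassing DNS/mDNS
        smbutil lookup NASBOX
        # query the SMB server by IP for the names it advertises
        smbutil status 192.168.1.50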

    Read the article

  • Windows 8 with LiveID login authenticates as Guest to remote SQL Server

    - by Tim Long
    I have a network where several users are using Office Accounting 2009 in multi-user client/server mode. OA is built on SQL Server. One PC acts as the 'server' and has the SQL Server instance; the others have only the application installed and no SQL instance, and all of the apps connect remotely to the SQL instance on the 'server'. I'm using the term 'server' loosely here; it is just a normal workstation that happens to be designated as the server and runs the SQL instance. There is no NT domain; all user accounts are local accounts.

    The way that OA works in multi-user mode is that each user is required to have a local account with the same username and password on both the client and 'server' PCs. This has been working well. Now along comes Windows 8. I use my 'Microsoft Account' aka LiveID to log into Windows 8. Office Accounting runs fine and attempts to connect to the database, but fails: 'you do not have permission to perform this operation'. In the SQL logs, I get this error:

        2012-10-28 17:54:01.32 Logon Error: 18456, Severity: 14, State: 11.
        2012-10-28 17:54:01.32 Logon Login failed for user 'SERVER\Guest'.
        Reason: Token-based server access validation failed with an infrastructure

    SERVER is the hostname of the server. So it seems to be authenticating as 'Guest'?? To verify this, I enabled the Guest account on the 'server' PC and then added Guest as an allowed user within Office Accounting (this simply creates the user in SQL and gives it an appropriate database role). Sure enough, my Windows 8 PC was then able to connect to the database when using Office Accounting.

    Clearly, having users authenticate as 'Guest' stinks from a security and auditing standpoint. So what I need are some ideas for how to work around this. I've tried switching the Windows 8 PC to a 'local account' and that works too, but requires giving up significant functionality on the Windows 8 PC. What I really need is a way to force the Windows 8 PC to use a specific set of credentials when connecting to the remote SQL instance. Office Accounting takes the logged-in username, which is my LiveID and doesn't correspond to any Windows user name. Anyone solved this issue?
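
    One hedged avenue: Windows can pin stored credentials for a specific host with cmdkey, so connections to the 'server' authenticate with the matching local account instead of falling back to Guest; whether Office Accounting's SQL connection honours the stored entry would need testing. The account name and password below are placeholders:

        rem store matching local-account credentials for the server PC
        cmdkey /add:SERVER /user:SERVER\tim /pass:P@ssw0rd
        rem confirm the entry was stored
        cmdkey /list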

    Read the article

  • configuring slime in emacs

    - by CodeKingPlusPlus
    I am in the process of configuring SLIME for Emacs. So far I have read about basic functionality for Common Lisp, such as C-c C-q, which invokes the command slime-close-parens-at-point, which places the proper number of parens at point. Another command that seemed cool was invoked by C-c C-c: it would pass the code you are editing in a buffer to the REPL and "compile" it. Why won't these commands work for me?

    Anyway, I have downloaded SLIME via M-x list-packages and do not seem to have this functionality (C-h w and then any of these commands tells me that these commands do not exist). So, I saw a bunch of other SLIME extensions such as 'slime-repl', 'slime-fuzzy' and 'hippie-expand-slime'. So I again used M-x list-packages and downloaded them. Still I did not have these commands.

    Here is the content of my emacs file relevant to SLIME:

        ;;;Common Lisp and Slime
        (add-to-list 'load-path "/home/s2s2/.emacs.d/elpa/slime-20130626.1151")
        (add-to-list 'load-path "/home/s2s2/.emacs.d/elpa/slime-repl-201000404")
        (add-to-list 'load-path "/home/s2s2/.emacs.d/elpa/hippie-expand-slime-20130226.1656")
        (add-to-list 'load-path "/home/s2s2/.emacs.d/elpa/slime-fuzzy-20100404")
        (require 'slime)
        (setq slime-lisp-implementations
              `((sbcl ("/usr/bin/sbcl"))
                (ecl ("/usr/bin/ecl"))
                (clisp ("/usr/bin/clisp" "-q -I"))))
        (require 'slime-repl)
        (require 'slime-fuzzy)
        (require 'hippie-expand-slime)

    When I execute M-x slime I get the following message in the inferior-lisp buffer, where I can execute Common Lisp code (however, shouldn't this be the slime-repl, since I required it?):

        STYLE-WARNING: redefining EMACS-INSPECT (#<BUILT-IN-CLASS T>) in DEFMETHOD
        STYLE-WARNING: Implicitly creating new generic function STREAM-READ-CHAR-WILL-HANG-P.
        WARNING: These Swank interfaces are unimplemented:
          (DISASSEMBLE-FRAME SLDB-BREAK-AT-START SLDB-BREAK-ON-RETURN)
        ;; Swank started at port: 46533.

    Then a slime-error buffer is created with the contents:

        Invalid protocol message:
        Symbol "CREATE-REPL" not found in the SWANK package.
        Line: 1, Column: 28, File-Position: 28
        Stream: #<SB-IMPL::STRING-INPUT-STREAM {10056B9C33}>
        (:emacs-rex (swank:create-repl nil) "COMMON-LISP-USER" t 5)

    How should I modify my emacs file to give me the functionality of those commands? In my emacs file, am I not loading the necessary files? Do I need to install an additional package? If you need more information, let me know! All help is much appreciated!
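
    A hedged alternative configuration, assuming a recent MELPA SLIME where the contribs ship inside the main package and are loaded through slime-setup rather than individual requires (the "CREATE-REPL not found" error is typical of an Emacs-side/Swank version mismatch, which mixing separately packaged contribs can cause):

        ;; minimal sketch: let slime-setup load the contribs
        ;; (the load-path and sbcl path are taken from the post above)
        (add-to-list 'load-path "/home/s2s2/.emacs.d/elpa/slime-20130626.1151")
        (require 'slime-autoloads)
        (setq inferior-lisp-program "/usr/bin/sbcl")
        (slime-setup '(slime-repl slime-fuzzy))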

    Read the article

  • Hyper-V causes boot loop/failure on a non-Gigabyte Win8 Pro system

    - by Nick
    Hardware:

    - Intel i7 2600K (not overclocked, SLAT compatible, virtualization features enabled in BIOS)
    - Asus Maximus IV Extreme-Z (Z68)
    - 16Gb RAM
    - 256Gb SSD
    - Other non-trivial working parts

    Adding Hyper-V is causing a boot loop, resulting in an attempt at automatic repair by Windows 8 after the second or third loop.

    I'm trying to get the Windows Phone 8 SDK installed and I've narrowed down my troubles to the Hyper-V feature in Win8. This is required to run the WP8 emulator, and there are no install options to omit this feature. My first attempt completely borked the OS, as I did not have a recent restore point or system image, so I did a completely clean install and made plenty of backups/restore points. I skipped the SDK install and went straight for the Windows feature add-on for Hyper-V. This confirmed that Hyper-V is the issue, as the same behavior resulted. I cannot find any hint in the Event Logs. Cancelling automatic recovery causes the same behavior to repeat. I don't have any other VM products installed. My only recourse is to use a restore point, try something else, install it again, and see what happens. No luck so far. I'm on my 10th attempt here. Any help would be much appreciated.

    EDIT: I found a collection of tips here: http://social.msdn.microsoft.com/Forums/en-US/wptools/thread/b06cc9f2-aa5e-4cb3-9df1-0c273e1dfd68

    So I've been attempting various BIOS settings to resolve this issue, with no luck. I've tried setting 'CPUID Limit' to disabled. This seems to work partly, as Win8 boots but no USB devices work at all. I also attempted disabling the USB 3.0 controller, as the MSDN topic lists an issue with USB controllers on Gigabyte boards. This also doesn't work. The USB devices light up, but no input is received by the OS. All of my other BIOS CPU settings are in line with the info in the post. I'm totally stumped.

    BIOS screenshots:

    - http://i.imgur.com/yKN5u.png
    - http://i.imgur.com/Y9wI4.png
    - http://i.imgur.com/F6EuO.png
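
    One hedged isolation test (my suggestion, not from the post): the hypervisor launch can be toggled from an elevated command prompt without uninstalling the Hyper-V feature, which tells you whether the loop happens only when the hypervisor actually starts:

        rem boot without launching the hypervisor (test)
        bcdedit /set hypervisorlaunchtype off
        rem revert once the test is done
        bcdedit /set hypervisorlaunchtype auto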

    Read the article

  • mounting ext4 fs with block size of 65536

    - by seaquest
    I am doing some benchmarking of ext4 performance on Compact Flash media. I have created an ext4 fs with a block size of 65536; however, I cannot mount it on ubuntu-10.10-netbook-i386 (it is already mounting ext4 filesystems with 4096-byte block sizes). According to my readings on ext4, it should allow such big block-sized filesystems. I want to hear your comments.

        root@ubuntu:~# mkfs.ext4 -b 65536 /dev/sda3
        Warning: blocksize 65536 not usable on most systems.
        mke2fs 1.41.12 (17-May-2010)
        mkfs.ext4: 65536-byte blocks too big for system (max 4096)
        Proceed anyway? (y,n) y
        Warning: 65536-byte blocks too big for system (max 4096), forced to continue
        Filesystem label=
        OS type: Linux
        Block size=65536 (log=6)
        Fragment size=65536 (log=6)
        Stride=0 blocks, Stripe width=0 blocks
        19968 inodes, 19830 blocks
        991 blocks (5.00%) reserved for the super user
        First data block=0
        1 block group
        65528 blocks per group, 65528 fragments per group
        19968 inodes per group
        Writing inode tables: done
        Creating journal (1024 blocks): done
        Writing superblocks and filesystem accounting information: done

        This filesystem will be automatically checked every 37 mounts or
        180 days, whichever comes first.  Use tune2fs -c or -i to override.

        root@ubuntu:~# tune2fs -l /dev/sda3
        tune2fs 1.41.12 (17-May-2010)
        Filesystem volume name:   <none>
        Last mounted on:          <not available>
        Filesystem UUID:          4cf3f507-e7b4-463c-be11-5b408097099b
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              19968
        Block count:              19830
        Reserved block count:     991
        Free blocks:              18720
        Free inodes:              19957
        First block:              0
        Block size:               65536
        Fragment size:            65536
        Blocks per group:         65528
        Fragments per group:      65528
        Inodes per group:         19968
        Inode blocks per group:   78
        Flex block group size:    16
        Filesystem created:       Sat Feb  5 14:39:55 2011
        Last mount time:          n/a
        Last write time:          Sat Feb  5 14:40:02 2011
        Mount count:              0
        Maximum mount count:      37
        Last checked:             Sat Feb  5 14:39:55 2011
        Check interval:           15552000 (6 months)
        Next check after:         Thu Aug  4 14:39:55 2011
        Lifetime writes:          70 MB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        Default directory hash:   half_md4
        Directory Hash Seed:      afb5b570-9d47-4786-bad2-4aacb3b73516
        Journal backup:           inode blocks

        root@ubuntu:~# mount -t ext4 /dev/sda3 /mnt/
        mount: wrong fs type, bad option, bad superblock on /dev/sda3,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so
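
    For what it's worth, Linux can only mount a filesystem whose block size is no larger than the CPU page size, which is 4 KiB on i386/amd64, so the mkfs warning is accurate; larger block sizes are only mountable on architectures with bigger pages. The page size can be confirmed with:

        # the mountable block-size ceiling equals the page size
        getconf PAGE_SIZE    # prints 4096 on i386/amd64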

    Read the article

  • How to configure Transparent IP Address Sharing (TAS) on a Mediatrix 4102 with DGW 2.0 firmware?

    - by Pascal Bourque
    I am making the switch to VoIP. I chose voip.ms as my service provider and the Mediatrix 4102 as my ATA. One reason why I chose the Mediatrix over other popular consumer ATAs is that it's supposed to be easy to place it in front of the router, so it can give priority to its own upstream traffic over the home network's upstream traffic. This is supposed to work transparently, with the ATA and router sharing the same public IP address (the one obtained from the modem). They call this feature Transparent IP Address Sharing, or TAS. Their promotional brochure describes it like this:

        The Mediatrix 4102 also uses its innovative TAS (Transparent IP Address
        Sharing) technology and an embedded PPPoE client to allow the PC (or router)
        connected to the second Ethernet port to have the same public IP address,
        eliminating the need for private IP addresses or address translations.

    I am interested in this feature because my router, an Apple Time Capsule, doesn't support QoS and cannot give priority to the voice packets if the ATA is behind the router. However, after hours of searching the web, reading the documentation, and good ol' trial and error, I haven't been able to configure the Mediatrix to run in this mode.

    Then I found a version of the manual that looks like it was for a previous version of the firmware (SIP), where there is an entire section dedicated to configuring TAS (starting at page 209). But my Mediatrix comes with the DGW 2.0 firmware, whose documentation does not mention TAS at all. So I tried to follow the TAS setup instructions from the SIP documentation and apply them to my DGW firmware, using the Variable Mapping Between SIP v5.0 and DGW v2.0 document as a reference, but no success. Some required SIP variables don't have an equivalent in DGW. So it looks like the DGW firmware does not support TAS at all, or if it does, they are not doing anything to help us set it up.

    So right now, the Mediatrix is behind the router, and VoIP works perfectly except when my upstream bandwidth is saturated. My questions are:

    1. Is downgrading to the SIP firmware the only way to have my Mediatrix 4102 run in TAS mode?
    2. If not, does anybody know how to set up TAS on the DGW firmware?
    3. Is TAS mode the only way to give priority to the voice packets if I want to keep my current router (Apple Time Capsule)?

    Thanks!

    Read the article

  • Trying to install Rmagick on Debian

    - by Janak
    One of the things you need to do to get Rmagick installed is:

        apt-get install libmagick9-dev

    When I try that, I get the following errors:

        Reading package lists... Done
        Building dependency tree... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.

        Since you only requested a single operation it is extremely likely that
        the package is simply not installable and a bug report against that
        package should be filed.

        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
          libmagick9-dev: Depends: libjpeg62-dev but it is not going to be installed
                          Depends: libbz2-dev but it is not going to be installed
                          Depends: libtiff4-dev but it is not going to be installed
                          Depends: libwmf-dev (>= 0.2.7-1) but it is not going to be installed
                          Depends: libz-dev
                          Depends: libpng12-dev but it is not going to be installed
                          Depends: libfreetype6-dev but it is not going to be installed
                          Depends: libexif-dev but it is not going to be installed
                          Depends: libdjvulibre-dev but it is not going to be installed
                          Depends: librsvg2-dev but it is not going to be installed
                          Depends: libgraphviz-dev but it is not going to be installed
        E: Broken packages

    I don't know what to do to fix this.

    EDIT 1: I tried sudo apt-get install librmagick-ruby and it worked fine, but then I needed to install the fleximage gem:

        gem1.8 install fleximage

    and I got the following error message:

        Building native extensions.  This could take a while...
        ERROR:  Error installing fleximage:
            ERROR: Failed to build gem native extension.

        /usr/bin/ruby1.8 extconf.rb
        checking for Ruby version >= 1.8.5... yes
        checking for cc... yes
        checking for Magick-config... no
        Can't install RMagick 2.12.2. Can't find Magick-config in
        /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/gems/1.8/bin

        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of necessary
        libraries and/or headers. Check the mkmf.log file for more details. You may
        need configuration options.

        Provided configuration options:
            --with-opt-dir
            --without-opt-dir
            --with-opt-include
            --without-opt-include=${opt-dir}/include
            --with-opt-lib
            --without-opt-lib=${opt-dir}/lib
            --with-make-prog
            --without-make-prog
            --srcdir=.
            --curdir
            --ruby=/usr/bin/ruby1.8

        Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/rmagick-2.12.2 for inspection.
        Results logged to /usr/lib/ruby/gems/1.8/gems/rmagick-2.12.2/ext/RMagick/gem_make.out
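
    A hedged sequence that often untangles this kind of dependency knot on Debian, assuming aptitude is available (aptitude proposes resolutions for broken chains, and libmagick9-dev is what provides the Magick-config script the gem build later complains about):

        # let aptitude propose a resolution for the unmet dependencies
        sudo aptitude install libmagick9-dev
        # the gem build needs this script on the PATH afterwards
        which Magick-config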

    Read the article

  • Missing startup screens and slow bootup/login after using WinClone to expand Bootcamp partition

    - by user26453
    I used WinClone to back up my Bootcamp partition, which was a Windows 7 Ultimate install, on my late 2006 MacBook Pro. I wanted to expand the Bootcamp partition's size. It worked reasonably well, with some hiccups along the way and some remaining issues.

    The first issue I ran into was the Boot Camp Assistant utility: it would not recreate the partition. This was due to a lack of contiguous space, which is required for the Bootcamp partition. As a result, I wiped the whole drive and reinstalled Snow Leopard, did the minimum amount of system updates, and created and formatted a new Bootcamp partition. WinClone restored the image without complaint, and the image was automatically resized to the new partition's size.

    The second issue I ran into was after the first boot into Windows. The first thing I noticed was that instead of the newer "slick" startup screen (4 colors wisping around, a Windows 7 title), there was more of an old-school startup screen (a progress bar with block increments, yellow/greenish color, nothing else really). The initial bootup to a login screen was slow, perhaps as Windows dealt with the partition changes. After logging in, the screen goes blank and the computer seems to hang for a minute before completing the login. After subsequent restarts, the slick screen is still missing, boot to login screen is normal, but the time from login to desktop active is still very slow.

    As a side note, I've previously only seen this behavior (a long time from login to the desktop finally being loaded) when the computer would try to hibernate and fail (the battery is really bad). On the next startup, I would see this behavior, but not subsequently. So, a potential cause: I imaged the partition after hibernating out of Windows. From reading some posts/guides on the subject, this was not recommended, and perhaps shouldn't even have worked? Could the partition be stuck in some weird mode as a result that makes the boot issues appear?

    I've attempted to disable hibernation and restart, trying to delete the .sys file that hibernation uses. Other fixes I'm thinking of attempting are booting a Win7 disc and repairing the install/partition. I can't shake the nagging feeling that something isn't right, as a result of the modified boot screens and the slow login process.
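
    If the hibernation theory is worth testing, Windows can disable hibernation and remove hiberfil.sys cleanly in one step, rather than deleting the file by hand; from an elevated command prompt in the restored install:

        rem disable hibernation and delete hiberfil.sys together
        powercfg /hibernate off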

    Read the article

  • Yahoo is sending our server's transactional email to the Spam folder, even though we have set up SPF and DKIM

    - by Derrick Miller
    Yahoo Mail is sending our server's transactional emails to the Spam folder, even though we have taken quite a few anti-spam steps. By contrast, Gmail allows the messages through to the inbox just fine. Here are the things which are in place:

    - SPF is set up for the domain holsteinplaza.com. Yahoo reports spf=pass in the message headers.
    - DKIM is set up for the domain holsteinplaza.com. Yahoo reports dkim=pass in the message headers.
    - We have a proper reverse DNS entry for the sending mail server. Name - IP matches IP - Name.
    - Neither DomainKeys nor SenderID are set up. From what I can tell, DKIM is the way of the future, and there is not much to be gained from adding DomainKeys or SenderID.

    Following are the headers. Any ideas what more I should do to get Yahoo to stop flagging the emails as spam?

        From Holstein Plaza Auctions Sat Jun 25 18:30:08 2011
        X-Apparently-To: [email protected] via 98.138.90.132; Sat, 25 Jun 2011 18:30:11 -0700
        Return-Path: <[email protected]>
        X-YahooFilteredBulk: 70.32.113.42
        Received-SPF: pass (domain of holsteinplaza.com designates 70.32.113.42 as permitted sender)
        X-YMailISG: i_vaA_QWLDuLOmXhDjUv3aBKJl5Un6EiP6Yk2m4yn3jeEuYK
          MkhpqIt9zDUbHARCwXrhl9pqjTANurGVca7gytSs.mryWVQcbWBx.DaItWRb
          VcyrIzwMzXKCSeu06H2a.cJ7HG5vJLJaKmHUUI_1ttXKn_Aegiu5yHvFX83R
          Lpth0witO9zfaKvOMaJV3LAxpIpFOydwvq1cqjZ8nURxQbxM3Cl.QW7MxxrC
          09qLVn_D_xSdU94QdU22IsVmlaRHv.uU5dnIazu.KSkhKpYykDoZA2SH0SY4
          JmTZj3LP8N926xXVDzYQ5K6QvKuJL5g0d9pYZx3KC59sgIu5oHlJ3Q15RdKb
          f3OJw0PR6oIyJ2yStVr8vfbDgOfj3qig03.Tw6g6MMNpv1G7Cuol4oJeUaYP
          xELxX6dHgBgCSuWMcbsrxbK4BIXcS2qhpMqYQ4Isk.XXyA8uvmFXyvgc1ds5
          8jo0rW.Wsw.55Z.KTPaQ0gHXj0T3OGppYMELSJv1iuhPyyAnZpmq01CU0Qd5
          CcRgdyW3HaqhmpXqJCS0Clo16zXA4HmAjR0tgIQrHRLc3D9N02AOzvmDgCb1
          vCh0p00QeKVq8UNkcShPRxZFKi9khtkLhPBlXEKkhJ76zyDmHUxTY.dQHVVD
          8D2hx7BxbqI9DINI8x5oR5Q8hYkZqHYQsmGNkaU77O2BnsEv5WxMEmzrBJ4Z
          h8zGCidgYPiZycZfnfaBp0Xb4tya2WMTN45W02JFcO1qq_UMJ9xPeqZhPEj.
          j9YvBAC8324GGF.c8eWcNB2VB34QHgTcVUl3.c0XUCuncls9Cyg4L7AoIdCi
          HvAklSzDDu9nW6732VEipV9FJ_JkDupDNQU2hfiPG.3OeF8GwTnVYnEn0EiZ
          aO0NCnZhXuLDcN3K7ml3846yRdASvzPFs9s4aJkzR0FkhVvptiMBEOdRkKdG
          wHWmvWpK4GTZpW4yU7CnKpW2MiWWn1MP0h_CCZFKs5.3mfmfPjPVIABN_RuU
          Q8ex5hdKnKlQiqK56LzcPRnYmNtrwdsUX9CYn9d6cPpXR_Bi5jrNJMNzdFvq
          lGO0CBT4QPe2V45U8PtpMitttuDA1cCvmyBPFswxNlL0jyX0a_W.vl0YW5.d
          HhDItpHhDxKRUscM28IR.exetq4QCzyM
        X-Originating-IP: [70.32.113.42]
        Authentication-Results: mta1267.mail.ac4.yahoo.com from=holsteinplaza.com;
          domainkeys=neutral (no sig); from=holsteinplaza.com; dkim=pass (ok)
        Received: from 127.0.0.1 (EHLO predator.axis80.com) (70.32.113.42)
          by mta1267.mail.ac4.yahoo.com with SMTP; Sat, 25 Jun 2011 18:30:11 -0700
        Received: (qmail 1440 invoked by uid 48); 25 Jun 2011 21:30:09 -0400
        To: [email protected]
        Subject: this is a test
        X-PHPMAILER-DKIM: phpmailer.worxware.com
        DKIM-Signature: v=1; a=rsa-sha1; q=dns/txt; l=203; s=auction; t=1309051808;
          c=relaxed/simple; h=From:To:Subject; d=holsteinplaza.com;
          [email protected];
          z=From:=20Holstein=20Plaza=20Auctions=20<[email protected]>
          |To:[email protected]
          |Subject:=20this=20is=20a=20test;
          bh=B3Tw5AQb1va627KEoazuFEBZ0fg=;
          b=oQ5uFq+oekPTGhszyIritjuuIAi3qPNyeitu+aWMhdx3oC6O2j5hJsDFpK0sS5fms7QdnBkBcEzT0iekEvn9EfAdCkGZ2KrtEC0yv7QKQcrjXxy07GJpj9nq0LYbgOuPdw8mGvKxlRZ+jFBX0DRJm0xXFLkr+MEaILw7adHTCCM=
        Date: Sat, 25 Jun 2011 21:30:08 -0400
        From: Holstein Plaza Auctions <[email protected]>
        Reply-to: Holstein Plaza Auctions <[email protected]>
        Message-ID: <[email protected]>
        X-Priority: 3
        X-Mailer: PHPMailer 5.1 (phpmailer.sourceforge.net)
        MIME-Version: 1.0
        Content-Transfer-Encoding: 8bit
        Content-Type: text/plain; charset="iso-8859-1"
        Content-Length: 195
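
    For completeness, the records Yahoo evaluates can be re-checked from any machine; the auction selector comes from the s= tag in the DKIM-Signature above:

        # SPF record
        dig +short TXT holsteinplaza.com
        # DKIM public key for the selector used in the signature
        dig +short TXT auction._domainkey.holsteinplaza.com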

    Read the article

  • Rsync: windows 7, synology: login error and permission denied error

    - by loonboon
    Good day to all of you. I'm running into strange/stupid errors, and I hope somebody will be so kind as to help me out. I have to admit, I am by no means a guru, so please bear with me :-)

    Situation:

    - Synology NAS (runs Linux), and a Windows 7 desktop (1 normal/restricted user Lisa and 1 admin user).
    - Data from the W7 desktop is to be rsynced to the Synology: /volume1/home/Lisa/Backup
    - Rsync command: c:\cygwin\bin\rsync -avz /cygdrive/e/Lisa/ [email protected]:/volume1/homes/Lisa/Backup
    - I've set up ssh per these two threads:
      a. http://www.cesareriva.com/archives/102
      b. http://www.cesareriva.com/archives/112

    Now the horrors begin:

    - root is allowed to run the rsync successfully; however, root doesn't log in automatically (so I cannot use rsync in W7 batch scripts, which is of course required).
    - Lisa is allowed to log in automatically, but cannot successfully finish the rsync command because of permission errors: "rsync change dir /volume1/homes/Lisa/Backup failed: permissions denied". This happens for each and every file and subdir rsync tries to create. However, the main directory (Backup) is created.

    When I try to copy files from Windows Explorer to the directory 'Backup' using the very same user Lisa, everything goes smoothly. So, obviously, there is a permission problem somewhere; either my rsync command isn't correct, or the folder permissions for homes/Lisa aren't correct (but, then again, Windows 7 copies files to that folder without any problems, so that does make me believe the homes/Lisa permissions don't appear to be the problem).

    I also tried adding --chmod=Dugo+x --chmod=ugo+r, which I found somewhere on the web, to the rsync command, but this didn't solve any problem and gave the exact same errors.

    Would anybody please help me figure out how to fix this? I am utterly frustrated about this, because I have been trying for 1 month to get everything to work and it simply doesn't work. I bought the big Synology to end the horrors of 20 external USB disks once and for all (we have many pictures and home vids of our deceased dogs and want to watch these, the horrors being 'what material is on what disk'). I'll gladly return the favour of somebody helping me out by buying you a nice beer (paypal), if you could end my misery. I am not extremely skilled on Linux (not at all :-( ), so if you could give an extra word where possible so I understand what to do, I'd be very grateful. I really hope somebody can help me out. Thank you in advance, Lisa
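
    A hedged first thing to try, using names from the post: if root-run rsyncs created the tree, it is likely owned by root, and resetting ownership on the NAS often clears every "permission denied" at once (users is the usual default group on Synology, but verify with ls -l):

        # on the Synology, as root, over ssh
        chown -R Lisa:users /volume1/homes/Lisa/Backup
        chmod -R u+rwX /volume1/homes/Lisa/Backup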

    Read the article

  • How to change the Nginx default folder?

    - by Ido Bukin
    I set up a server with nginx and I set my public_html to:

        /home/user/public_html/website.com/public

    And it always redirects to:

        /usr/local/nginx/html/

    How can I change this?

    nginx.conf:

        user www-data www-data;
        worker_processes 4;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay off;
            keepalive_timeout 5;
            gzip on;
            gzip_comp_level 2;
            gzip_proxied any;
            gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
            include /usr/local/nginx/sites-enabled/*;
        }

    /usr/local/nginx/sites-enabled/default:

        server {
            listen 80;
            server_name localhost;

            location / {
                root html;
                index index.php index.html index.htm;
            }

            # redirect server error pages to the static page /50x.html
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root html;
            }
        }

    /usr/local/nginx/sites-available/website.com:

        server {
            listen 80;
            server_name website.com;
            rewrite ^/(.*) http://www.website.com/$1 permanent;
        }

        server {
            listen 80;
            server_name www.website.com;

            access_log /home/user/public_html/website.com/log/access.log;
            error_log /home/user/public_html/website.com/log/error.log;

            location / {
                root /home/user/public_html/website.com/public/;
                index index.php index.html;
            }

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include /usr/local/nginx/conf/fastcgi_params;
                fastcgi_param SCRIPT_FILENAME /home/user/public_html/website.com/public/$fastcgi_script_name;
            }
        }

    The error message I get is:

        Fatal error: require_once() [function.require]: Failed opening required '/usr/local/nginx/html/202-config/functions.php'

    The server tries to find the file in the nginx folder and not in my public_html.
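
    A hedged observation from the configs above: nginx.conf only includes /usr/local/nginx/sites-enabled/*, and the website.com server block lives in sites-available, so every request falls through to the default server whose root is html (hence /usr/local/nginx/html/). A sketch of the usual fix:

        # publish the vhost, then reload nginx
        ln -s /usr/local/nginx/sites-available/website.com /usr/local/nginx/sites-enabled/website.com
        nginx -s reload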

    Read the article

  • PCI-DSS compliance for business with only swipe terminals [migrated]

    - by rowatt
    I support the IT infrastructure for a small retail business which is now required to undergo a PCI-DSS assessment. The payment service and terminal provider (Streamline) has asked that we use Trustwave to do the PCI-DSS certification. The problem I face is that if I answer all questions and follow Trustwave's requirements to the letter, we will have to invest significantly in networking equipment to segment LANs and/or do internal vulnerability scanning, while at the same time Streamline assures me that the terminals we have (Verifone VX670-B and MagIC3 X-8) are secure, don't store any credit card information and are PCI-DSS compliant, so by implication we don't need to take any action to ensure their network security. I'm looking for any suggestions as to how we can most easily meet the networking requirements for PCI-DSS.

    Some background on our current network setup:

    - Single wired LAN, also with WiFi turned on (though if this creates any PCI-DSS complexities, we can turn it off).
    - Single Netgear ADSL router. This is the only firewall we have in place, and the firewall is the out-of-the-box configuration (i.e. no DMZ, SNMP etc). Passwords have been changed though :-)
    - A few Windows PCs and 2 Windows-based tills, none of which ever see any credit card information at all.
    - Two swipe terminals. Until a few months ago (before we were told we had to be PCI-DSS certified), these terminals did auth/capture over the phone. Streamline suggested we move to their IP Broadband service, which instead uses an SSL-encrypted channel over the internet to do auth/capture, so we now use that service.
    - We don't do any ecommerce or receive payments over the internet. All transactions are either cardholder present, or MOTO with details given over the phone and typed directly into the terminal.
    - We're based in the UK.

    As I currently understand it, we have three options in order to get PCI-DSS certification:

    1. Segment our network so the POS terminals are isolated from all PCs, and set up internal vulnerability scanning on that network.
    2. Don't segment the network, and have to do more internal scanning and more onerous management of PCs than I think we need (for example, though the tills are Windows-based, they are fully managed, so I have no control over software update policies, anti-virus etc). All PCs have anti-virus (MSE) and Windows updates automatically applied, but we don't have any centralised...
    3. Go back to auth/capture over phone lines.

    I can't imagine we are the first merchant to be in this situation. I'm looking for any recommendations for a simple, cost-effective way to be PCI-DSS compliant - either by doing 1 or 2 above with (hopefully) simple and inexpensive equipment/software, or any other ways if there's a better way to do this. Or... should we just go back to the digital stone age and do auth/capture over the phone, which means we don't need to do anything on our network to be PCI-DSS certified?

    Read the article

  • I can't ping my DMZ zone from a local inside PC

    - by Big Denzel
    Hi everybody. Can anyone please help me with the following issue? I've got a Cisco ASA 5520 configured on my network. I can't ping my DMZ interface from a PC on the local inside network, so the only place I can ping the DMZ from is the Cisco ASA firewall itself; from there, I can ping all 3 interfaces: inside, outside and DMZ. But no PC on the inside network can access the DMZ. Can anyone please help? I thank you all in advance. Below is my Cisco ASA 5520 firewall's show run:

        ASA-FW# sh run
        : Saved
        :
        ASA Version 7.0(8)
        !
        hostname ASA-FW
        enable password encrypted
        passwd encrypted
        names
        dns-guard
        !
        interface GigabitEthernet0/0
         description "Link-To-GW-Router"
         nameif outside
         security-level 0
         ip address 41.223.156.109 255.255.255.248
        !
        interface GigabitEthernet0/1
         description "Link-To-Local-LAN"
         nameif inside
         security-level 100
         ip address 10.1.4.1 255.255.252.0
        !
        interface GigabitEthernet0/2
         description "Link-To-DMZ"
         nameif dmz
         security-level 50
         ip address 172.16.16.1 255.255.255.0
        !
        interface GigabitEthernet0/3
         shutdown
         no nameif
         no security-level
         no ip address
        !
        interface Management0/0
         description "Local-Management-Interface"
         no nameif
         no security-level
         ip address 192.168.192.1 255.255.255.0
        !
        ftp mode passive
        access-list OUT-TO-DMZ extended permit tcp any host 41.223.156.107 eq smtp
        access-list OUT-TO-DMZ extended permit tcp any host 41.223.156.106 eq www
        access-list OUT-TO-DMZ extended permit icmp any any log
        access-list OUT-TO-DMZ extended deny ip any any
        access-list inside extended permit tcp any any eq pop3
        access-list inside extended permit tcp any any eq smtp
        access-list inside extended permit tcp any any eq ssh
        access-list inside extended permit tcp any any eq telnet
        access-list inside extended permit tcp any any eq https
        access-list inside extended permit udp any any eq domain
        access-list inside extended permit tcp any any eq domain
        access-list inside extended permit tcp any any eq www
        access-list inside extended permit ip any any
        access-list inside extended permit icmp any any
        access-list dmz extended permit ip any any
        access-list dmz extended permit icmp any any
        access-list cap extended permit ip 10.1.4.0 255.255.252.0 172.16.16.0 255.255.255.0
        access-list cap extended permit ip 172.16.16.0 255.255.255.0 10.1.4.0 255.255.252.0
        no pager
        logging enable
        logging buffer-size 5000
        logging monitor warnings
        logging trap warnings
        mtu outside 1500
        mtu inside 1500
        mtu dmz 1500
        no failover
        asdm image disk0:/asdm-508.bin
        no asdm history enable
        arp timeout 14400
        global (outside) 1 interface
        nat (inside) 1 0.0.0.0 0.0.0.0
        static (dmz,outside) tcp 41.223.156.106 www 172.16.16.80 www netmask 255.255.255.255
        static (dmz,outside) tcp 41.223.156.107 smtp 172.16.16.25 smtp netmask 255.255.255.255
        static (inside,dmz) 10.1.0.0 10.1.16.0 netmask 255.255.252.0
        access-group OUT-TO-DMZ in interface outside
        access-group inside in interface inside
        access-group dmz in interface dmz
        route outside 0.0.0.0 0.0.0.0 41.223.156.108 1
        timeout xlate 3:00:00
        timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
        timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00
        timeout mgcp-pat 0:05:00 sip 0:30:00 sip_media 0:02:00
        timeout uauth 0:05:00 absolute
        http server enable
        http 10.1.4.0 255.255.252.0 inside
        no snmp-server location
        no snmp-server contact
        snmp-server enable traps snmp authentication linkup linkdown coldstart
        crypto ipsec security-association lifetime seconds 28800
        crypto ipsec security-association lifetime kilobytes 4608000
        telnet timeout 5
        ssh timeout 5
        console timeout 0
        management-access inside
        !
        class-map inspection_default
         match default-inspection-traffic
        !
        !
        policy-map global_policy
         class inspection_default
          inspect dns maximum-length 512
          inspect ftp
          inspect h323 h225
          inspect h323 ras
          inspect netbios
          inspect rsh
          inspect rtsp
          inspect skinny
          inspect esmtp
          inspect sqlnet
          inspect sunrpc
          inspect tftp
          inspect sip
          inspect xdmcp
        !
        service-policy global_policy global
        Cryptochecksum:
        : end
        ASA-FW#

    Please help. Big Denzel
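
    A hedged sketch of the usual fix (not in the post): the ASA does not inspect ICMP by default, so echo replies coming back from the DMZ are dropped; adding ICMP inspection to the existing global policy makes ping stateful in both directions:

        policy-map global_policy
         class inspection_default
          inspect icmp
        ! the policy is already applied via: service-policy global_policy global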

    Read the article

  • ruby on rails gitorious setup on ubuntu

    - by dogmatic69
    I've been trying to install Gitorious for a while now, which requires Ruby and Rails etc. I've finally got Rails pages serving, but I can't finish the installation of Gitorious because the gem version is too new. The error logs say please run 'rake ultrasphinx:configure', and that gives:

        rake ultrasphinx:configure
        (in /var/www/apps/gitorious)
        rake aborted!
        uninitialized constant ActiveSupport::Dependencies::Mutex
        /var/www/apps/gitorious/Rakefile:10:in `require'
        (See full trace by running task with --trace)

    From searching Google, this is because of the wrong gem version. I can't find a way to downgrade it. Apparently

        sudo gem update --system 1.4.2

    should do the trick, but Ubuntu 10.10 does not like this:

        ERROR: While executing gem ... (RuntimeError)
        gem update --system is disabled on Debian, because it will overwrite the content
        of the rubygems Debian package, and might break your Debian system in subtle ways.
        The Debian-supported way to update rubygems is through apt-get, using Debian
        official repositories.
        If you really know what you are doing, you can still update rubygems by setting
        the REALLY_GEM_UPDATE_SYSTEM environment variable, but please remember that this
        is completely unsupported by Debian.

    So I added export REALLY_GEM_UPDATE_SYSTEM=1 to .bashrc and reloaded it with . ~/.bashrc, and still the same. I've tried various forms of setting this environment variable, with no luck.

    I've also been told on the #gitorious IRC channel to add the file config/initializers/rubygems.rb with the line require "thread" in it. This has done nothing.

    EDIT: Just found another way, which was rvm install rubygems 1.4.2, and it gave:

        Removing old Rubygems files...
        Installing rubygems dedicated to ruby-1.8.7-p334...
        Retrieving rubygems-1.4.2
          % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                         Dload  Upload   Total   Spent    Left  Speed
        100  288k  100  288k    0     0   282k      0  0:00:01  0:00:01 --:--:--  414k
        Extracting rubygems-1.4.2 ...
        Installing rubygems for /home/ubuntu/.rvm/rubies/ruby-1.8.7-p334/bin/ruby
        ERROR: Error running 'GEM_PATH="/home/ubuntu/.rvm/gems/ruby-1.8.7-p334:/home/ubuntu/.rvm/gems/ruby-1.8.7-p334@global:/home/ubuntu/.rvm/gems/ruby-1.8.7-p334@global" GEM_HOME="/home/ubuntu/.rvm/gems/ruby-1.8.7-p334" "/home/ubuntu/.rvm/rubies/ruby-1.8.7-p334/bin/ruby" "/home/ubuntu/.rvm/src/rubygems-1.4.2/setup.rb"', please read /home/ubuntu/.rvm/log/ruby-1.8.7-p334/rubygems.install.log
        WARN: Installation of rubygems did not complete successfully.

    TL;DR: please tell me how to downgrade rubygems on Ubuntu 10.10, or how to upgrade Gitorious to work with 1.6.2 gems.
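
    One hedged detail worth checking: an export in ~/.bashrc does not survive sudo, which strips the environment by default, so the override has to ride on the sudo command line itself:

        # pass the override through sudo instead of relying on .bashrc
        sudo REALLY_GEM_UPDATE_SYSTEM=1 gem update --system 1.4.2
        # equivalent, via env
        sudo env REALLY_GEM_UPDATE_SYSTEM=1 gem update --system 1.4.2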

    Read the article

  • How to get ISA 2006 Web Proxy to work with the Single Network Adapter template

    - by tronda
    I need to test an issue with running our application behind a proxy server with different types of configuration, so I installed ISA 2006 Enterprise on a desktop computer. Since this computer only has a single network card and I want to start out easy, I chose the "Single Network Adapter" template. We have an internal NAT'ed network which is in the 10 range. I have defined the internal network on the ISA server to be 10.XXX.YY.1 - 10.XXX.YY.255.

    I also have the Default rule, which denies all traffic, but I've added the following rule:

        Policy   Protocols     From         To
        Accept   HTTP          Internal     External
                 HTTPS         Local Host   Internal
                 HTTS Server   Localhost

    Then I configured Internet Explorer on a virtual machine running XP within VirtualBox with bridged networking (it gets the same network address range as regular computers on our network) similar to this; instead of the server name, I used the IP address. When I try to access a web page, the request doesn't go through and I get the following log messages on the proxy server:

        Log fields: Original Client IP, Client Agent, Authenticated Client, Service,
        Referring Server, Destination Host Name, Transport, HTTP Method, MIME Type,
        Object Source, Source Proxy, Destination Proxy, Bidirectional, Client Host Name,
        Filter Information, Network Interface, Raw IP Header, Raw Payload, GMT Log Time,
        Source Port, Processing Time, Bytes Sent, Bytes Received, Cache Information,
        Error Information, Authentication Server, Log Time, Client IP, Destination IP,
        Destination Port, Protocol, Action, Rule, Result Code, HTTP Status Code,
        Client Username, Source Network, Destination Network, URL, Server Name,
        Log Record Type

        10.XXX.YY.174 - TCP - - - 24.08.2010 13:25:24 1080 0 0 0 0x0 0x0 - 24.08.2010 06:25:24 10.XXX.YY.174 10.XXX.YY.175 80 HTTP Initiated Connection MyHTTPAccess 0x0 ERROR_SUCCESS Internal Local Host - PROXYTEST Firewall
        10.XXX.YY.175 - TCP - - - 24.08.2010 13:25:24 2275 0 0 0 0x0 0x0 - 24.08.2010 06:25:24 10.XXX.YY.175 10.XXX.YY.175 80 HTTP Initiated Connection 0x0 ERROR_SUCCESS Local Host Local Host - PROXYTEST Firewall
        10.XXX.YY.175 - TCP - - - 24.08.2010 13:25:25 2275 0 0 0 0x0 0x0 - 24.08.2010 06:25:25 10.XXX.YY.175 10.XXX.YY.175 80 HTTP Closed Connection 0x80074e20 FWX_E_GRACEFUL_SHUTDOWN Local Host Local Host - PROXYTEST Firewall
        10.XXX.YY.175 - TCP - - - 24.08.2010 13:25:25 2276 0 0 0 0x0 0x0 - 24.08.2010 06:25:25 10.XXX.YY.175 10.XXX.YY.175 80 HTTP Initiated Connection 0x0 ERROR_SUCCESS Local Host Local Host - PROXYTEST Firewall
        10.XXX.YY.175 - TCP - - - 24.08.2010 13:25:26 2276 0 0 0 0x0 0x0 - 24.08.2010 06:25:26 10.XXX.YY.175 10.XXX.YY.175 80 HTTP Closed Connection 0x80074e20 FWX_E_GRACEFUL_SHUTDOWN Local Host Local Host - PROXYTEST Firewall
        10.XXX.YY.175 - TCP - - - 24.08.2010 13:25:26 2277 0 0 0 0x0 0x0 - 24.08.2010 06:25:26 10.XXX.YY.175 10.XXX.YY.175 80 HTTP Initiated Connection 0x0 ERROR_SUCCESS Local Host Local Host - PROXYTEST Firewall
        10.XXX.YY.159 - UDP - - - 24.08.2010 13:25:26 68 0 0 0 0x0 0x0 - 24.08.2010 06:25:26 10.XXX.YY.159 255.255.255.255 67 DHCP (request) Denied Connection [Enterprise] Default rule 0xc004000d FWX_E_POLICY_RULES_DENIED Internal Local Host - PROXYTEST Firewall
        10.XXX.YY.166 - UDP - - - 24.08.2010 13:25:26 68 0 0 0 0x0 0x0 - 24.08.2010 06:25:26 10.XXX.YY.166 255.255.255.255 67 DHCP (request) Denied Connection [Enterprise] Default rule 0xc004000d FWX_E_POLICY_RULES_DENIED Internal Local Host - PROXYTEST Firewall
        0.0.0.0 Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729) Yes Proxy 10.XXX.YY.175 TCP GET Internet - - - Req ID: 096c76ae; Compression: client=No, server=No, compress rate=0% decompress rate=0% - - - 24.08.2010 13:25:27 0 2945 2581 446 0x0 0x40 24.08.2010 06:25:27 10.XXX.YY.174 10.XXX.YY.175 80 http Failed Connection Attempt MyHTTPAccess 10061 anonymous Internal Local Host http://www.vg.no/ PROXYTEST Web Proxy Filter
        10.XXX.YY.175 - TCP - - - 24.08.2010 13:25:27 2277 0 0 0 0x0 0x0 - 24.08.2010 06:25:27 10.XXX.YY.175 10.XXX.YY.175 80 HTTP Closed Connection 0x80074e20 FWX_E_GRACEFUL_SHUTDOWN Local Host Local Host - PROXYTEST Firewall
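
    Reading the last Web Proxy Filter row, the client appears to be sending its requests to port 80 on the ISA box, while the web proxy listener in a Single Network Adapter setup conventionally listens on 8080 (check the Internal network's Web Proxy tab); a hedged test from the XP client using the WinHTTP proxy setting (IP from the post, port assumed):

        rem point WinHTTP at the ISA web proxy for command-line testing
        proxycfg -p 10.XXX.YY.175:8080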

    Read the article

• ASA and Cisco vs SonicWall NSA firewall

    - by Lbaker101
Currently I'm trying to structure our network to fully support and be redundant with BGP/multihoming. Our current company size is 40 employees, but the major part of that is our development department. We are a software company, and continued connection to the Internet is a requirement, as 90% of work stops when the net goes down. The only thing hosted on site (that needs to remain up) is our Exchange server.

Right now I'm faced with two different directions and was wondering if I could get your opinions on this. We will have two ISPs, both 20 meg up/down on dedicated fiber (so 40 megs combined), handed off as an Ethernet cable into our server room: ISP#1 is First Digital and ISP#2 is CenturyLink.

We currently have 2x ASA 5505s, but the second one is not in use. It was there to be a failover, and it just needs the Security Plus license to be matched with the primary device. But this depends on the network structure. I have been looking into the hardware that would be required to be fully redundant, and I found that we will need either of the following.

2x Cisco 2921+ series routers with failover licenses. They will go in front of the ASAs and either connect in a failover state, or one ISP goes into each of the 2921 series routers and then one line into each of the ASAs (thus all four hardware components will be used actively). So:
2x Cisco 2921+ series routers
2x Cisco ASA 5505 firewalls

The other route: 2x SonicWall NSA 2400MX series, one primary and the secondary in a failover state. This removes the ASAs from the network and is about 2k cheaper than the Cisco route. It also brings down the points of failure, because it's just the two SonicWalls, and it will allow us to scale all the way up to 200-400 users (depending on their configuration).

So the real question is: with the added functionality etc. of the SonicWall, is there a point in paying so much more to stay the Cisco route? Thanks!

    Read the article

  • VBA + Polymorphism: Override worksheet functions from 3rd party

    - by phi
My company makes extensive use of a data provider via a (closed-source) VBA plugin. In principle, every query follows a certain structure:

Fill one cell with a formula, where the arguments to the formula specify the query.
The range of that formula is extended (not an array formula!) and the cells below/right are filled with data.

For this to work, however, a user has to have a terminal program installed on the machine, as well as a COM plugin referenced in VBA/Excel.

My problem: these Excel sheets are used and extended by multiple users, and not all of them have access to the data provider. While they can open the sheet, it will recalculate and the data will be gone. However, frequent recalculation is required. I would like every user to be able to use the sheets, without executing a very specific set of formulas.

Attempts:

Remove the reference on those computers where I do not have terminal access. This generates a NAME error in the cell containing the query (acceptable), but this query overrides parts of the data (not acceptable). If you allow the program to refresh, all data will be gone after a failed query.

Replace all formulas with the plain-text result in the respective cells (press a button and loop over every cell...). Obviously this destroys any refresh capability the queries offer for all subsequent users, so pretty bad, too.

A theoretical idea, and I'm not sure how to implement it: replace the functions offered by the plugin with something that will be called either first (and relay the query through to the original function, if that's available) or instead of the original function (by only deploying the solution on non-terminal machines), and which just returns the original value. More specifically, if my query function is used like this:

=GETALLDATA(Startdate, Enddate, Stockticker, etc)

I would like to transparently swap the function behind the call, as sketched below. Do you see any hope, or am I lost? I appreciate your help.

PS: Of course I'm talking about Bloomberg...

Some additional points to clarify issues raised by Frank:

The formula in the sheets may not be changed. This is mission-critical software, and it's way too complex for any sane person to try and touch it.
Only Excel and VBA may be used (which is the reason for the previous point...).
It would be sufficient to prevent execution of these few specific formulas/functions on a specific machine for all Excel sheets to come.

This looks more and more like a problem for Stack Overflow ;-)
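
A minimal VBA sketch of that last idea, for deployment only on machines without the terminal software. It assumes the plugin exposes GETALLDATA as a worksheet-callable function and that, once the COM reference is removed, Excel resolves the name to a same-named public function in a standard module instead; the Application.Caller.Text trick for returning the previously displayed value is known to be fragile and is an assumption, not a tested solution:

' Hypothetical shadow function: same name as the plugin UDF, loose signature
' (name and arguments are assumptions, not the plugin's documented API).
Public Function GETALLDATA(ParamArray Args() As Variant) As Variant
    ' Hand back whatever the calling cell currently displays, so a
    ' recalculation does not overwrite data fetched on a terminal machine.
    GETALLDATA = Application.Caller.Text
End Function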

    Read the article

  • Ubuntu 14.04, OpenLDAP TLS problems

    - by larsemil
So I have set up an OpenLDAP server using this guide here. It worked fine, but as I want to use sssd I also need TLS to be working for LDAP. So I looked into and followed the TLS part of the guide, and I never got any errors and slapd started fine again. BUT: it does not seem to work when I try to use LDAP over TLS.

root@server:~# ldapsearch -x -ZZ -H ldap://83.209.243.253 -b dc=daladevelop,dc=se
ldap_start_tls: Protocol error (2)
        additional info: unsupported extended operation

Cranking the debug level up some notches returns some more information:

root@server:~# ldapsearch -x -ZZ -H ldap://83.209.243.253 -b dc=daladevelop,dc=se -d 5
ldap_url_parse_ext(ldap://83.209.243.253)
ldap_create
ldap_url_parse_ext(ldap://83.209.243.253:389/??base)
ldap_extended_operation_s
ldap_extended_operation
ldap_send_initial_request
ldap_new_connection 1 1 0
ldap_int_open_connection
ldap_connect_to_host: TCP 83.209.243.253:389
ldap_new_socket: 3
ldap_prepare_socket: 3
ldap_connect_to_host: Trying 83.209.243.253:389
ldap_pvt_connect: fd: 3 tm: -1 async: 0
ldap_open_defconn: successful
ldap_send_server_request
ber_scanf fmt ({it) ber:
ber_scanf fmt ({) ber:
ber_flush2: 31 bytes to sd 3
ldap_result ld 0x7f25df51e220 msgid 1
wait4msg ld 0x7f25df51e220 msgid 1 (infinite timeout)
wait4msg continue ld 0x7f25df51e220 msgid 1 all 1
** ld 0x7f25df51e220 Connections:
* host: 83.209.243.253  port: 389  (default)
  refcnt: 2  status: Connected
  last used: Fri Jun  6 08:52:16 2014
** ld 0x7f25df51e220 Outstanding Requests:
 * msgid 1,  origid 1, status InProgress
   outstanding referrals 0, parent count 0
  ld 0x7f25df51e220 request count 1 (abandoned 0)
** ld 0x7f25df51e220 Response Queue:
   Empty
  ld 0x7f25df51e220 response count 0
ldap_chkResponseList ld 0x7f25df51e220 msgid 1 all 1
ldap_chkResponseList returns ld 0x7f25df51e220 NULL
ldap_int_select
read1msg: ld 0x7f25df51e220 msgid 1 all 1
ber_get_next
ber_get_next: tag 0x30 len 42 contents:
read1msg: ld 0x7f25df51e220 msgid 1 message type extended-result
ber_scanf fmt ({eAA) ber:
read1msg: ld 0x7f25df51e220 0 new referrals
read1msg: mark request completed, ld 0x7f25df51e220 msgid 1
request done: ld 0x7f25df51e220 msgid 1
res_errno: 2, res_error: <unsupported extended operation>, res_matched: <>
ldap_free_request (origid 1, msgid 1)
ldap_parse_extended_result
ber_scanf fmt ({eAA) ber:
ldap_parse_result
ber_scanf fmt ({iAA) ber:
ber_scanf fmt (}) ber:
ldap_msgfree
ldap_err2string
ldap_start_tls: Protocol error (2)
        additional info: unsupported extended operation
ldap_free_connection 1 1
ldap_send_unbind
ber_flush2: 7 bytes to sd 3
ldap_free_connection: actually freed

So no good information there either. In /var/log/syslog I get:

Jun 6 08:55:42 master slapd[21383]: conn=1008 fd=23 ACCEPT from IP=83.209.243.253:56440 (IP=0.0.0.0:389)
Jun 6 08:55:42 master slapd[21383]: conn=1008 op=0 EXT oid=1.3.6.1.4.1.1466.20037
Jun 6 08:55:42 master slapd[21383]: conn=1008 op=0 do_extended: unsupported operation "1.3.6.1.4.1.1466.20037"
Jun 6 08:55:42 master slapd[21383]: conn=1008 op=0 RESULT tag=120 err=2 text=unsupported extended operation
Jun 6 08:55:42 master slapd[21383]: conn=1008 op=1 UNBIND
Jun 6 08:55:42 master slapd[21383]: conn=1008 fd=23 closed

If I portscan the host I get the following:

Starting Nmap 6.40 ( http://nmap.org ) at 2014-06-06 08:56 CEST
Nmap scan report for h83-209-243-253.static.se.alltele.net (83.209.243.253)
Host is up (0.0072s latency).
Not shown: 996 closed ports
PORT    STATE SERVICE
22/tcp  open  ssh
80/tcp  open  http
389/tcp open  ldap
636/tcp open  ldapssl

But when I check certs:

root@master:~# openssl s_client -connect daladevelop.se:636 -showcerts -state
CONNECTED(00000003)
SSL_connect:before/connect initialization
SSL_connect:unknown state
140244859233952:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:177:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 317 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
---

And I feel like I am clearly out in deep water, not knowing at all where to go from here. Any hints appreciated on what to do or how to get better debug logging...

EDIT: This is my config, slapcat'ed from cn=config, and it does not mention TLS at all. I have inserted my certinfo.ldif:

root@master:~# cat certinfo.ldif
dn: cn=config
add: olcTLSCACertificateFile
olcTLSCACertificateFile: /etc/ssl/certs/cacert.pem
-
add: olcTLSCertificateFile
olcTLSCertificateFile: /etc/ssl/certs/daladevelop_slapd_cert.pem
-
add: olcTLSCertificateKeyFile
olcTLSCertificateKeyFile: /etc/ssl/private/daladevelop_slapd_key.pem

and when doing that I only got this as an answer:

root@master:~# sudo ldapmodify -Y EXTERNAL -H ldapi:/// -f certinfo.ldif
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
modifying entry "cn=config"

So still no wiser.
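
Not from the original post, but since slapd reports the StartTLS extended operation (OID 1.3.6.1.4.1.1466.20037) as unsupported, it is worth confirming the olcTLS* attributes actually landed in cn=config; a sketch of the check, run on the server itself:

root@master:~# sudo ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config -LLL \
    olcTLSCACertificateFile olcTLSCertificateFile olcTLSCertificateKeyFile

# If the three attributes are missing, re-run the ldapmodify and watch for an
# error; if they are present, check that the slapd user (openldap on Ubuntu)
# can read the key file under /etc/ssl/private. Both are assumptions about
# this setup, not a diagnosis.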

    Read the article

  • UCARP: prevent the original master from taking over the VIP when it comes back after failure?

    - by quanta
Keepalived can do this by combining the nopreempt option and the BACKUP state on both nodes:

Prevent VRRP Master from becoming Master once it has failed
Prevent master to fall back to master after failure

How about UCARP?

Name        : ucarp
Arch        : x86_64
Version     : 1.5.2
Release     : 1.el5.rf
Size        : 81 k
Repo        : installed
Summary     : Common Address Redundancy Protocol (CARP) for Unix
URL         : http://www.ucarp.org/
License     : BSD
Description : UCARP allows a couple of hosts to share common virtual IP addresses in order
            : to provide automatic failover. It is a portable userland implementation of the
            : secure and patent-free Common Address Redundancy Protocol (CARP, OpenBSD's
            : alternative to the patents-bloated VRRP).
            : Strong points of the CARP protocol are: very low overhead, cryptographically
            : signed messages, interoperability between different operating systems and no
            : need for any dedicated extra network link between redundant hosts.

If I don't use the --preempt option and set the --advskew to the same value, both nodes become master.

/etc/sysconfig/carp/vip-010.conf:

# Virtual IP configuration file for UCARP
# The number (from 001 to 255) in the name of the file is the identifier
# $Id: vip-001.conf.example 1527 2004-07-09 15:23:54Z dude $
# Set the same password on all machines sharing the same virtual IP
PASSWORD="pa$$w0rd"
# You are required to have an IPADDR= line in the configuration file for
# this interface (so no DHCP allowed)
BIND_INTERFACE="eth0"
# Do *NOT* use a main interface for the virtual IP, use an ethX:Y alias
# with the corresponding /etc/sysconfig/network-scripts/ifcfg-ethX:Y file
# already configured and with ONBOOT=no
VIP_INTERFACE="eth0:0"
# If you have extra options to add, see "ucarp --help" output
# (the lower the "-k <val>" the higher priority and "-P" to become master ASAP)
OPTIONS="-z -k 255"

/etc/sysconfig/network-scripts/ifcfg-eth0:0:

DEVICE=eth0:0
ONBOOT=no
BOOTPROTO=
IPADDR=192.168.6.8
NETMASK=255.255.255.0
USERCTL=yes
IPV6INIT=no

node 1:

eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether c6:9b:8e:af:a7:69 brd ff:ff:ff:ff:ff:ff
    inet 192.168.6.192/24 brd 192.168.6.255 scope global eth0
    inet 192.168.6.8/24 brd 192.168.6.255 scope global secondary eth0:0
    inet6 fe80::c49b:8eff:feaf:a769/64 scope link
       valid_lft forever preferred_lft forever

node 2:

eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:30:48:f7:0f:81 brd ff:ff:ff:ff:ff:ff
    inet 192.168.6.38/24 brd 192.168.6.255 scope global eth1
    inet 192.168.6.8/24 brd 192.168.6.255 scope global secondary eth1:0
    inet6 fe80::230:48ff:fef7:f81/64 scope link
       valid_lft forever preferred_lft forever
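
Not part of the original question, but one commonly suggested UCARP combination (an assumption here, not verified against this setup) is to omit -P/--preempt on both nodes and give them different --advskew values, so the nodes never tie and neither hands the VIP back once it holds it:

# node 1 vip-010.conf (sketch): lower advskew, preferred at a cold start
OPTIONS="-z -k 50"
# node 2 vip-010.conf (sketch): higher advskew; with no -P on either side,
# a recovered node stays backup instead of reclaiming the VIP
OPTIONS="-z -k 100"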

    Read the article

• Windows 2008 R2 TS printer security - can't take ownership

    - by Ian
I have a Windows 2008 R2 server with the Terminal Server role installed. I'm seeing a problem with an ordinary user who is a member of the local Print Operators group on the server.

If the user opens a cmd window using 'run as administrator', they can run printmanager.msc without needing to enter their password again. In the print manager they can change the ownership of redirected (Easy Print) printers without problems. If, from the same cmd window, they use subinacl to try and change the ownership of the queue to themselves, they get access denied:

>subinacl.exe /printer "_#MyPrinter (2 redirected)" /setowner="MyDom\MyUsr"
Elapsed Time: 00 00:00:00
Done: 1, Modified 0, Failed 1, Syntax errors 0
Last Done  : _#MyPrinter (2 redirected)
Last Failed: _#MyPrinter (2 redirected) - OpenPrinter Error : 5 Access denied

So: same context, same action, but one works and one doesn't. Any ideas for this odd behaviour? I'm using subinacl x86 on an x64 server as I can't find anything more up to date. I've tried icacls and others but couldn't get them to do anything with printers.

EDIT: added after Greg's comments regarding SetACL below.

If I log into the TS server as Testusr and open Admin Tools, Printer Admin (as administrator), and then type mydomain\testusr and testusr's password, I can change the ownership of the printer queue and set testusr as the owner. However, if I open cmd as administrator and, again, type mydomain\testusr and the user's password, when I try to change the ownership of my redirected printer I get the following:

C:\>setacl -on "Bullzip PDF Printer (12 redireccionado)" -ot prn -actn setowner -ownr n:mydom\testusr
WARNING: Privilege 'Back up files and directories' could not be enabled. SetACL's powers are restricted.
WARNING: Privilege 'Restore files and directories' could not be enabled. SetACL's powers are restricted.
INFORMATION: Processing ACL of: <Bullzip PDF Printer (12 redireccionado)>
ERROR: Enabling the privilege SeTakeOwnershipPrivilege failed with: No todos los privilegios o grupos a los que se hace referencia son asignados al llamador. [meaning: not all referenced privileges or groups are assigned to the caller]
SetACL finished with error(s):
SetACL error message: A privilege could not be enabled

Maybe I'm getting something wrong, but if the built-in Windows tool can do it with just membership of the Print Operators group, then SetACL should be able to as well, no? However, SetACL seems to depend on other privileges, which in reality are not required to do this.
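
Not from the original post, but a quick way to see which privileges the elevated token actually holds (and therefore whether SeTakeOwnershipPrivilege is even present to be enabled) is the stock whoami tool, run from the same cmd window:

C:\>rem whoami ships with Windows 2008 R2; findstr matches any of the listed names
C:\>whoami /priv | findstr /i "SeTakeOwnership SeBackup SeRestore"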

    Read the article

< Previous Page | 557 558 559 560 561 562 563 564 565 566 567 568  | Next Page >