Search Results

Search found 28350 results on 1134 pages for 'command switch'.


  • Xen Vif creation xl vs xm

    - by exaju
    Hi everyone, I switched my server from a xend/xm Xen install to a 4.1 xl Xen install. Since the switch, Xen does not create the vif network interface when I launch

        xl create /etc/xen/my_server.cfg

    but it does create the vif network interface with the command

        xm create /etc/xen/my_server.cfg

    Here are the relevant configuration files:

        /etc/xen/xl.conf:
        vifscript="vif-bridge"

        /etc/xen/xend-config.sxp:
        (network-script network-bridge)
        (vif-script vif-bridge)

        /etc/default/xen:
        TOOLSTACK=xl

    Any idea? I'm lost :-( Best Regards.
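
    For what it's worth, xl neither reads xend-config.sxp nor runs the network-script that used to create the bridge under xend, so after the switch the bridge itself may simply not exist. A minimal sketch of the xl-style alternative (the bridge name xenbr0 is an assumption; it has to be created in the distro's network configuration, not by Xen):

        # in /etc/xen/my_server.cfg -- point the vif at an existing bridge
        vif = [ 'bridge=xenbr0,script=vif-bridge' ]

    Checking brctl show before launching the domain will confirm whether any bridge exists at all.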


  • Postfix + procmail - delivery fails because "can't create user output file" - on CentOS 6.2

    - by jshin47
    I verified that my postfix installation / relaying setup worked. Now I am having trouble with procmail. I have it wired to postfix with the following setting:

        mailbox_command = /usr/bin/procmail -f -a "$USER"

    I have nothing in my procmail config but the following:

        LOGFILE=/var/procmailrc/log

    When I send an email to a recipient that previously worked (before I attached procmail), it now fails with this error:

        Apr 6 14:07:05 localhost postfix/qmgr[15194]: D0C3DFF6E1: from=<[email protected]>, size=938, nrcpt=1 (queue active)
        Apr 6 14:07:05 localhost postfix/local[1953]: D0C3DFF6E1: to=<[email protected]>, orig_to=<postmaster>, relay=local, delay=0.05, delays=0.02/0.01/0/0.02, dsn=5.2.0, status=bounced (can't create user output file. Command output: procmail: Couldn't create "/var/spool/mail/nobody" procmail: Couldn't read "//root" )
        Apr 6 14:07:05 localhost postfix/bounce[1955]: warning: D0C3DFF6E1: undeliverable postmaster notification discarded
        Apr 6 14:07:05 localhost postfix/qmgr[15194]: D0C3DFF6E1: removed

    It seems like there is some sort of permissions issue, but I do not know what the problem is, nor do I understand how I would go about diagnosing it further. The logfile that I specified is empty, by the way. How can I make procmail+postfix work?
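
    The bounce suggests procmail is running as "nobody" (it tried to write /var/spool/mail/nobody and to read //root). One way to narrow it down is to run procmail by hand as the intended recipient with verbose logging; a rough diagnostic sketch (jdoe is a hypothetical local user):

        printf 'From: test@example.com\nSubject: test\n\ntest body\n' | \
          sudo -u jdoe /usr/bin/procmail VERBOSE=on LOGFILE=/tmp/procmail.log
        cat /tmp/procmail.log

    If the manual run delivers fine, the problem is in how postfix invokes mailbox_command (which user it runs as) rather than in procmail itself.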


  • AWS Linux EC2: yum won't run with plugins

    - by Patrick
    Short Version: yum commands on my Amazon Linux EC2 AMI only work with --noplugins.

    Long Version: A couple of days ago, I ran yum update at the behest of the SSH login MOTD telling me I had updates to install. About midway through the update (specifically while updating the kernel), the update abruptly ended (79 of 138 items completed). The website I host on EC2 got weird for a few minutes, but eventually seemed to stabilize back out (maybe EC2 restarted itself?), and I didn't have further issues (other than MySQL starting to run out of memory, but I think that's probably unrelated to this). Today, I went to install gcc-c++ (with yum install gcc-c++). When I did, I got the following message:

        Loaded plugins: priorities, security, update-motd, upgrade-helper
        Config error: Command "updateinfo" already defined

    and I get that for any command I can think to run using yum. However, if I throw in the --noplugins flag, then magically it seems to work. To be clear, when I installed a different package a week ago, it worked totally correctly, so the yum update is the only thing I can think of that changed. I could find nothing on Google with regard to "updateinfo" already defined (with and without quotes). I tried running yum update --noplugins, which spit out a message telling me that I should have run yum-complete-transaction instead, but proceeded to try to update something on its own. When that completed, I tried yum-complete-transaction, but that gave me a message about the transactions not lining up correctly, so it removed the old transaction (probably since I should have completed the first transaction before trying to update again, if I had known). Based on the SF question "Linux EC2 Broken Yum", I've also tried yum clean all --noplugins (it fails the same way with plugins), which just gives me:

        Cleaning repos: amzn-main amzn-updates rpmforge
        Cleaning up everything

    I also tried package-cleanup --problems:

        Loaded plugins: priorities, update-motd, upgrade-helper
        No Problems Found

    and package-cleanup --dupes gives a lot of dupes, so I pasted them here: http://pastebin.com/VVFQEkTT instead of inline. At this point, I'm not sure what else there even is to check.
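
    A "Command X already defined" config error usually means two things are registering the same yum command, typically a plugin colliding with a newer yum that provides the command natively. Given that the plugin list includes "security", one hedged guess is a stale yum-plugin-security left over from the interrupted update; a sketch of how to check and remove it, assuming that plugin really is the culprit:

        # list installed yum plugin packages without invoking yum's plugin machinery
        rpm -qa 'yum-plugin*'
        # if yum-plugin-security is present, remove it and retry
        rpm -e yum-plugin-security
        yum check-update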


  • VBScript Capture StdOut from ShellExecute

    - by Joe
    I am trying to run the following code snippet as part of a tool to gather and log some pertinent system diagnostics. The purpose of this snippet is to gather the result of running the command:

        vssadmin list writers

    The snippet is as follows:

        ' Set WshShell = CreateObject("WScript.Shell")
        ' WScript.Echo sCurPath & "\vsswritercheck.bat"
        ' Set WshShellExec = WshShell.Exec("elevate.cmd cmd.exe /c " & sCurPath & "\vsswritercheck.bat")

        Set oShell = CreateObject("Shell.Application")
        oShell.ShellExecute "cmd.exe", sCurPath & "\vsswritercheck.bat", , "runas", 1

        vsswriter = VSSWriterCheck

        Select Case oShell.Status
            Case WshFinished
                strOutput = oShell.StdOut.ReadAll
            Case WshFailed
                strOutput = oShell.StdErr.ReadAll
        End Select

        WScript.Echo strOutPut
        vsswriter = strOutPut

    With the first code snippet (commented out), I can run the command and capture stdout from the batch file. With the second code snippet, I cannot capture stdout. I need to be able to run the batch script with elevated permissions, so I am looking for a compromise between the functionality of the two. I cannot run the entire calling script in elevated mode due to restrictions from other pieces of functionality. I am looking for any ideas on how to add this output to my log, as I am running out of options that are within the scope of basic scripts.
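
    Shell.Application's ShellExecute returns no process object, so there is no StdOut to read from it; a common workaround is to have the elevated side write its own output to a file that the non-elevated script reads back. A rough sketch of the batch side (the %TEMP% file name is made up for illustration):

        rem inside vsswritercheck.bat: capture everything the command prints
        vssadmin list writers > "%TEMP%\vsswriter_out.txt" 2>&1

    The calling VBScript would then wait for the file to appear and read it with a FileSystemObject, instead of reading StdOut from the child process.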


  • Issues while installing NoMachine (NX) setup

    - by TopCoder
    I am trying to connect to an NX server from a Windows client, but it reports the following exception:

        NX 203 NXSSH running with pid: 5404
        NX 285 Enabling check on switch command
        NX 285 Enabling skip of SSH config files
        NX 285 Setting the preferred NX options
        NX 200 Connected to address: 10.43.51.77 on port: 22
        NX 202 Authenticating user: nx
        NX 208 Using auth method: publickey
        NX 204 Authentication failed.

    I have regenerated the default_dsa.key on the server and imported the same for the client, but it is still not working. Any solutions?
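
    Since NX tunnels over SSH as the "nx" system user, one way to take the NX client out of the picture is to try the same key with a plain ssh client; a hedged diagnostic sketch, assuming the client-side copy of the key was saved as default_dsa.key:

        # should log in as "nx" without a password prompt if the key pair matches
        chmod 600 default_dsa.key
        ssh -i default_dsa.key -p 22 nx@10.43.51.77

    If this also fails, the server-side authorized_keys for the nx account doesn't contain the matching public key.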


  • Securely executing system commands as sudo from PHP

    - by Aydin Hassan
    Is it possible? I have written a command-line tool in PHP for creating new environments for our company. It creates system users, directories, databases, VHosts and restarts Apache, amongst other things. These commands require sudo privileges. I thought it might be a nice idea to have a web interface for it, to make it easier for other non-developers to use. The web app would be behind authentication. When running from the command line I just run sudo tool.php; obviously I can't do this from a web app. How could I do this securely? Giving the apache user sudo access seems silly, as this would mean all sites hosted on the box (e.g. all our environments) would have sudo access. Is it possible to make this tool run under a different user? This user could have sudo privileges for only the commands I need. How do things like Plesk and cPanel do this? Any thoughts?
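
    sudoers can scope privileges that tightly: a minimal sketch, assuming a dedicated account named envtool (hypothetical) that the web app invokes, with the exact commands the tool needs whitelisted:

        # /etc/sudoers.d/envtool -- edit with visudo; names are illustrative
        envtool ALL=(root) NOPASSWD: /usr/sbin/useradd, /usr/bin/mysql, /usr/sbin/apachectl graceful

    The web front end would then shell out via sudo -u envtool, or hand jobs to a queue that a daemon running as envtool consumes, so the apache user itself never gains sudo rights.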


  • Upstart: cannot run as root

    - by Ronni Egeriis
    I have made this upstart script, which starts a Node.js service. But all of a sudden the service has stopped, and upstart has failed to restart it. Now that I am trying to start it manually, it fails to recognize my service:

        start: Unknown job: queue

    The script is properly placed in /etc/init, and should have the correct rights:

        -rw-r--r-- 1 root root 200 Aug 7 13:30 queue.conf

    When I check the config file with init-checkconf, however, it says that it is not able to run as root:

        root@production1:~# init-checkconf /etc/init/queue.conf
        ERROR: cannot run as root

    What causes this error and how do I solve it? Debug info:

        Ubuntu 12.04.3 LTS
        root@production1:~# service --version
        service ver. 0.91-ubuntu1

    Edit: Here's queue.conf:

        description "Echo.it command queue"
        author "Ronni Egeriis Persson <[email protected]>"
        stop on shutdown
        respawn
        respawn 20 5
        exec sudo -u beanstalk /usr/bin/node /var/www/queue/index.js >> /var/log/queue.log 2>&1

    The command sudo -u beanstalk /usr/bin/node /var/www/queue/index.js >> /var/log/queue.log 2>&1 works fine when run manually.
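
    For what it's worth, init-checkconf refuses to run as root by design (it spawns a private session-level init to test the job), so running it as an ordinary user is the expected usage; a quick sketch (someuser is any non-root account):

        sudo -u someuser init-checkconf /etc/init/queue.conf

    Also, if the "respawn 20 5" line above is verbatim, upstart expects "respawn limit 20 5", and a stanza it can't parse is enough to make the job disappear from start's view. Separately, the upstart shipped with Ubuntu 12.04 can drop privileges itself instead of wrapping node in sudo, e.g.:

        # in queue.conf -- a sketch replacing the "exec sudo -u beanstalk ..." line
        setuid beanstalk
        exec /usr/bin/node /var/www/queue/index.js >> /var/log/queue.log 2>&1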


  • Good Linux disaster-ready filesystem?

    - by Felipe Solís
    I'm working on an emergency open wi-fi network project that includes a local website (nginx + MySQL). In order to eliminate SPOFs, we're going to set up at least two of everything (server, switch, router, etc.). This network is meant to keep working when an earthquake strikes, and in that situation it's very likely for a server to go down; if one does, we need to be able to boot it back up and be operating as soon as possible. Do any of you know if any Linux filesystem would work better than the others in this scenario?


  • Are there any scripts to synchronize sites?

    - by Matrym
    I've just set up fail-over DNS to switch the site to a second host if the first is down. This is great for showing an old / archived version of the site, but I suspect maintenance is going to be a real pain. I moved the files over with rsync in the first place. Is this the kind of thing that could be run as a cron job, automatically moving over newer files?
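
    rsync is well suited to exactly that; a minimal sketch of a cron entry (host names and paths are made up for illustration):

        # /etc/crontab-style entry: every 15 minutes, push newer files to the standby
        */15 * * * * www-data rsync -az --delete /var/www/ standby.example.com:/var/www/

    The --delete flag keeps removals in sync too; drop it if the standby should keep files the primary has deleted.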


  • Dynamic forwarding with SOCKS5 proxy [on hold]

    - by bh3244
    I'm building my own SOCKS5 client and HTTP library and am having trouble figuring out how things work with dynamic port forwarding. So far I can connect successfully with my SOCKS5 client, but from there on I am stuck. I am using the ssh -D command.

    Considering I have my local machine "home" and my server "server", and I want to use "server" as the proxy for all connections, I understand I would type ssh -D "localport" "serverhostname" on my local machine "home". This command, as I understand it, has ssh accept connections using the SOCKS5 protocol. So now if I want to connect to google.com (74.125.224.72:80) and issue a GET for the front page, I assume I would send the SOCKS5 client request, the server would respond with a 0x00 "succeeded" reply, and from then on I am connected; I would send the HTTP GET request and the server would respond accordingly with the data.

    Now if I want to navigate to a different website, must I issue another SOCKS5 connection request for that site's IP/hostname? I'm confused whether this is how it is done, or if there is a program listening on the local port of the "server" and handling outgoing and incoming data.

    To reiterate: do SOCKS5 proxies work by sending repeated SOCKS5 connection requests for different addresses, or is there just one connection to a local port on "server" while another program on "server" handles the outgoing connection to the internet, using that local port to send and receive data to/from "home"?
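
    For what it's worth, SOCKS5 (RFC 1928) works the first way: each outgoing TCP connection makes its own connection to the local forwarded port and performs its own method negotiation and CONNECT request; nothing is registered per-site on the server. A sketch of one CONNECT exchange on the wire, using the post's own google.com address (bytes in hex):

        # client -> proxy: version 5, 1 auth method offered, method 00 = "no auth"
        05 01 00
        # proxy -> client: version 5, chosen method 00
        05 00
        # client -> proxy: CONNECT (01) to an IPv4 address (01) 74.125.224.72, port 80
        05 01 00 01 4a 7d e0 48 00 50
        # proxy -> client: version 5, reply 00 = succeeded; raw application data follows
        05 00 00 01 ...

    You can observe the same behavior with an existing client, e.g. curl --socks5-hostname 127.0.0.1:1080 http://www.google.com/ ; each request to a new host simply opens a fresh SOCKS connection through the same local port.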


  • How can I resolve this one application coming up with an "You don't have permission to use the application" error?

    - by morgant
    I've got a Mac OS X 10.6 Snow Leopard Server Open Directory Master with a user who's getting Mobility & Application managed preferences from a group (the only group they're a member of). The workstation is also running Mac OS X 10.6 Snow Leopard. When the user logs in and tries to run our primary application, which they're explicitly allowed to run (via the group's preferences), it says "You don't have permission to use the application 'Blah'". Now, the application is added to the group's list of always-allowed applications, unsigned (so a minor difference in application version or file contents shouldn't disallow it). It even lives in a subdirectory of /Applications, which is in the list of folders to allow applications from.

    I've run into this when logging this user into new workstations, and the following usually works:

    1. Log them out.
    2. Remove the following files from their mobile home folder on the workstation: /Library/Managed\ Preferences/, ~/.FileSync, ~/Library/Preferences/com.apple.finder.plist, and ~/Library/Preferences/com.apple.MCX.plist.
    3. Remove the following files from their network home folder on the server: ~/.FileSync, ~/Library/Preferences/com.apple.finder.plist, and ~/Library/Preferences/com.apple.MCX.plist.
    4. Log them back in on the workstation.

    However, this no longer resolves the issue. Their Home Sync preferences are set (on the group) to sync ~, but not the following files (manually, at login, and at logout... no background sync here):

        ~/.SymAVQSFile
        ~/NAVMac800QSFile
        ~/Library
        ~/.FileSync
        ~/.account

    Their Preferences Sync preferences are set (also on the group) to sync ~/Library & ~/Documents/Microsoft User Data, but not the following files (also manually, at login, and at logout... no background sync):

        ~/.SymAVQSFile
        ~/.Trash
        ~/.Trashes
        ~/Documents/Microsoft User Data/Entourage Temp
        ~/Library/Application Support/SyncServices
        ~/Library/Application Support/MobileSync
        ~/Library/Caches
        ~/Library/Calendars/Calendar Cache
        ~/Library/Logs
        ~/Library/Mail/AvailableFeeds
        ~/Library/Mail/Envelope Index
        ~/Library/Preferences/Macromedia/
        ~/Library/Printers
        ~/Library/PubSub/Database
        ~/Library/PubSub/Downloads
        ~/Library/PubSub/Feeds
        ~/Library/Safari/Icons.db
        ~/Library/Safari/HistoryIndex.sk
        ~/Library/iTunes/iPhone Software Updates
        IMAP-*
        Exchange-*
        EWS-*
        Mac-*
        ~/Library/Preferences/ByHost
        ~/Library/Preferences/com.apple.dock.plist
        ~/Library/Preferences/com.apple.sitebarlists.plist
        ~/Library/Application Support/4D
        ~/Library/Preferences/com.apple.MCX.plist
        ~/.FileSync
        ~/.account

    Even with ~/Library/Preferences/com.apple.MCX.plist prevented from syncing during a Preferences Sync, it still seems to show up in the network home on the server frequently. Are there any other files besides ~/Library/Preferences/com.apple.MCX.plist that contain application Managed Preferences and might be causing this one app to show up as not allowed? Any ideas on how ~/Library/Preferences/com.apple.MCX.plist keeps getting synced back up to the network home folder on the server?

    Update: I thought I had found a workaround this morning, but it also seemed to be extremely temporary. Basically, looking at /Library/Managed\ Preferences/[shortname]/com.apple.applicationaccess.new.plist I discovered that it didn't have an entry for the application in question, but /Library/Managed\ Preferences/[shortname]/complete.plist did. Naturally, I deleted com.apple.applicationaccess.new.plist, logged in again, and it worked... on one workstation. It failed on others, and after logging out & back in a couple more times it started failing on all of them again, even after further deletions of com.apple.applicationaccess.new.plist. Oddly, com.apple.applicationaccess.new.plist & complete.plist do both contain an entry for the application in question now, but it still says it's not allowed.

    Further Update: Okay, so I now have a reproducible workaround which seems to be required after every reboot of the workstation:

    1. Log in as the user (you'll discover you cannot launch the application in question).
    2. Fast User Switch to the local admin account on the workstation (we always have one on every machine).
    3. From that local admin account, run sudo mcxrefresh -n 'shortname' (logging out and back in as the user in question will not work).
    4. Fast User Switch back to the user (you'll still not be allowed to run the application).
    5. Log the user out and back in (you'll now be able to run the application in question).
    6. Fast User Switch back to the local admin account, log it out, and log back in as the user in question.

    If you do all that exactly as described, it'll keep working through log out & log back in, but NOT through a reboot. If, after a reboot, you try something like logging in as the local admin account, running sudo mcxrefresh -n 'shortname', logging out, then logging in as the user in question, it will NOT work.

    Yet Another Update: We don't have any computer groups in our Open Directory, so it shouldn't be getting any conflicting settings from there. I ran sudo mcxquery -format xml -user shortname -group groupname before & after performing the aforementioned process to allow the application in question to be run, and the results were identical (saved the results to files & diff'd... I'm not just guessing here).

    One Step Forward, Half a Step Back: When the Mac OS X 10.6.5 Server update was released, we upgraded our Open Directory Master to it, as the changes included the following managed preferences fixes which I hoped might address this issue:

        Addresses an issue that could prevent managed preferences from being applied when a user logs in on a workstation that has been idle.
        Fixes an issue that could prevent administrators from bypassing client management settings on a workstation.

    This seemed to improve the situation slightly. The application in question now usually launches without error. If and when it does launch with the "You don't have permission to use the application" error, logging the user out and back in seems to correct it. That said, we've since had to add a couple of applications to the user's ~/Applications/ directory, and those are still prevented from launching. The workstations are running Mac OS X 10.6.4, the OD Master (which the workstations are bound to) is running Mac OS X 10.6.5 Server (although there are two OD Replicas still running 10.6.4 Server), and we're using Workgroup Manager 10.6.3 (which is included with the Server Admin Tools 10.6.5 upgrade) to add the applications (unsigned, as always).

    This time, I've caught the following in /var/log/system.log when attempting to launch one of the allowed applications from ~/Applications:

        Dec 22 17:36:24 hostname parentalcontrolsd[43221]: -[ActivityTracker checkApp:csFlags:] [954:username] -- *** Incoming app appears to be masquerading as white listed app and failed signature validation: /Users/username/Applications/FileMaker Pro 5.5/FileMaker Pro.app/Contents/MacOS/FileMaker Pro. Note: This may be a valid app of a different version than what was whitelisted (on a different volume?)
        Dec 22 17:36:24 hostname [0x0-0xa42a42].com.filemaker.filemakerpro[43304]: launch of /Users/username/Applications/FileMaker Pro 5.5/FileMaker Pro.app/Contents/MacOS/FileMaker Pro was blocked
        Dec 22 17:36:24 hostname com.apple.launchd.peruser.1340[6375] ([0x0-0xa42a42].com.filemaker.filemakerpro[43304]): Exited with exit code: 255
        Dec 22 17:36:24 hostname parentalcontrolsd[43221]: -[ActivityTracker(Private) _removeAppFromWhiteList:] [1362:username] -- *** Couldn't find local user record

    Running sudo mcxquery -format xml -user username -group groupname includes the following entry for FileMaker Pro 5.5 (and appears to include a full integration of the user's application whitelist & group's application whitelist):

        <dict>
            <key>bundleID</key>
            <string>com.filemaker.filemakerpro</string>
            <key>displayName</key>
            <string>FileMaker Pro</string>
        </dict>

    Note the lack of <key>appID</key><data> ... </data>, which seems to specify a signed application. While whitelisted directories also appear to be correctly listed in the results, they too do not actually allow the applications to be run. What is going on here?! Where else should I be looking?


  • Copy from CDROM is very slow in Ubuntu

    - by ???
    I'm using this command to copy a CD-ROM image:

        # dd if=/dev/sr0 of=./maverick.iso

    But it's very slow, at about 350k bytes/sec. I've searched Google and tried the command:

        # hdparm -vi /dev/sr0

        /dev/sr0:
         HDIO_DRIVE_CMD(identify) failed: Bad address
         IO_support    =  1 (32-bit)
         readonly      =  0 (off)
         readahead     = 256 (on)
         HDIO_GETGEO failed: Inappropriate ioctl for device
         Model=DVD-ROM UJDA775, FwRev=DA03, SerialNo=
         Config={ Fixed Removeable DTR<=5Mbs DTR>10Mbs nonMagnetic }
         RawCHS=0/0/0, TrkSize=0, SectSize=0, ECCbytes=0
         BuffType=unknown, BuffSize=unknown, MaxMultSect=0
         (maybe): CurCHS=0/0/0, CurSects=0, LBA=yes, LBAsects=0
         IORDY=yes, tPIO={min:180,w/IORDY:120}, tDMA={min:120,rec:120}
         PIO modes: pio0 pio1 pio2 pio3 pio4
         DMA modes: sdma0 sdma1 sdma2 mdma0 mdma1 mdma2
         UDMA modes: udma0 udma1 *udma2
         AdvancedPM=no
         Drive conforms to: ATA/ATAPI-5 T13 1321D revision 3: ATA/ATAPI-1,2,3,4,5
         * signifies the current active mode

    Seems like DMA is already on. And a device test gives:

        # hdparm -t /dev/sr0

        /dev/sr0:
         Timing buffered disk reads: 2 MB in 6.58 seconds = 311.10 kB/sec

        # sudo hdparm -tT /dev/sr0

        /dev/sr0:
         Timing cached reads: 2 MB in 2.69 seconds = 760.96 kB/sec
         Timing buffered disk reads: 4 MB in 5.19 seconds = 789.09 kB/sec

    The CD-ROM device and disc should be okay, because I can copy the disc very fast in Windows using the UltraISO utility. So I guess there is something not configured right in Ubuntu, is there?
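
    One knob worth ruling out is the read block size: dd's default is 512 bytes, while CD sectors are 2048 bytes, and some drives handle small reads poorly. A hedged sketch using whole sectors and a larger buffer, plus a check for ATAPI errors:

        # read whole 2048-byte sectors, 16 at a time
        dd if=/dev/sr0 of=./maverick.iso bs=32k
        # look for ATAPI/DMA error messages that would explain retries
        dmesg | tail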


  • Trouble with wireless driver on a Dell Latitude D830

    - by Kevin
    After uninstalling Dell's wireless utility, I get a new-hardware-found dialog that cannot find any driver for my wifi card on its own. I'm running Windows XP Professional Service Pack 3, and I would like to use the default Windows wifi handler, since Dell's utility does not work with my company's wireless switch. I did try downloading the recommended driver from the Dell support site:

        Network Adapter 2
        Model:       Intel(R) PRO/Wireless 3945ABG Network Connection
        Description: [12] Intel(R) PRO/Wireless 3945ABG Network Connection
        Status:      Connected


  • gpg symmetric encryption using pipes

    - by Thomas
    I'm trying to generate keys to lock my drive (using dm-crypt with LUKS) by pulling data from /dev/random and then encrypting that using GPG. The guide I'm using suggests the following command:

        dd if=/dev/random count=1 | gpg --symmetric -a >./[drive]_key.gpg

    If you do it without a pipe and feed it a file, it will pop up an (n?)curses prompt for you to type in a password. However, when I pipe in the data, it repeats the following message four times and sits there frozen:

        pinentry-curses: no LC_CTYPE known assuming UTF-8

    It also says:

        can't connect to '/root/.gnupg/S.gpg-agent': File or directory doesn't exist

    however, I am assuming that this doesn't have anything to do with it, since it shows up even when the input is from a file. So I guess my question boils down to this: is there a way to force gpg to accept the passphrase from the command line, or in some other way get this to work, or will I have to write the data from /dev/random to a temporary file and then encrypt that file? (Which as far as I know should be alright, due to the fact that I'm doing this on the LiveCD and haven't yet created the swap, so there should be no way for it to be written to disk.)
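
    gpg can read the passphrase from an arbitrary file descriptor, which leaves stdin free for the piped data; a minimal sketch (file names are illustrative), assuming a gpg 1.x-style setup like the LiveCD's:

        # passphrase arrives on fd 3, random data on stdin
        dd if=/dev/random count=1 | \
          gpg --batch --passphrase-fd 3 --symmetric -a > ./drive_key.gpg 3<passphrase.txt

    Keeping the passphrase file on a tmpfs, or feeding it via 3< <(echo "$PASS") in bash, avoids writing a temporary copy of the key material itself.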


  • How to get a new-pssession in PowerShell to talk to my ICS-connected laptop for Remoting

    - by Scott Bilas
    If I have my laptop on the LAN, then PowerShell remoting works fine from my workstation to the laptop. However, the LAN is wireless, and so sometimes I will connect on a wire to my workstation. It has two ethernet ports, so I have the secondary port wired up to share to the laptop using Win7's Internet Connection Sharing. (Btw, I know that avoiding ICS would solve the problem, but that's not an option right now.) So my question is: what magic registry bits or command-line options do I need to flip to get remoting to work to my laptop through ICS? Here's what happens when I try it:

        new-pssession -computername 192.168.137.161

        [192.168.137.161] Connecting to remote server failed with the following error message : The WinRM client cannot process the request. Default authentication may be used with an IP address under the following conditions: the transport is HTTPS or the destination is in the TrustedHosts list, and explicit credentials are provided. Use winrm.cmd to configure TrustedHosts. Note that computers in the TrustedHosts list might not be authenticated. For more information on how to set TrustedHosts run the following command: winrm help config. For more information, see the about_Remote_Troubleshooting Help topic.
            + CategoryInfo          : OpenError: (System.Manageme....RemoteRunspace:RemoteRunspace) [], PSRemotingTransportException
            + FullyQualifiedErrorId : PSSessionOpenFailed

    I'm having a hard time understanding the documentation for PowerShell and WinRM. I've tried messing with allowing ports in the firewall and setting TrustedHosts to * on my workstation (I don't think this is a good idea on the laptop). I have no idea where to go from here; I would appreciate any help.
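
    Per the error text, connecting by IP requires both a TrustedHosts entry and explicit credentials on the client side; a sketch of what that looks like, run in an elevated PowerShell on the workstation (the IP is the laptop's ICS address from the post):

        Set-Item WSMan:\localhost\Client\TrustedHosts -Value '192.168.137.161' -Force
        New-PSSession -ComputerName 192.168.137.161 -Credential (Get-Credential)

    TrustedHosts only relaxes the client's authentication check; the laptop still has to accept the credentials you supply.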


  • can't back up to a NAS drive as an offline scheduled task

    - by imageng
    I have seen this problem discussed in several forums, including this one, but could not find a solution. On MS Server 2003 I configured a backup task with the target on a NAS disc (Seagate BlackArmor NAS 110). The backup task works well as a scheduled task, or by a direct command, when I am logged on. It does not work when the user is offline (in this case, Administrator). I have already tried the following:

    1. Addressing the target as a network drive (Y:location..)
    2. Using a UNC path instead
    3. Making the drive a domain member (the NAS admin software allows it to define itself as a domain member)

    The result log message for 1 and 2 is: "The operation was not performed because the specified media cannot be found." The result log for 3 is an empty file. The scheduled task "Run" command is:

        C:\WINDOWS\system32\ntbackup.exe backup "@C:\Documents and Settings\Administrator\Local Settings\Application Data\Microsoft\Windows NT\NTBackup\data\de-board.bks" /a /d "Set created 2/14/2010 at 5:10 PM" /v:yes /r:no /rs:no /hc:off /m incremental /j "de-board" /l:s /f "\\10.0.0.8\public\Backups\IBMServer\de-board.bkf"

    10.0.0.8 is the static IP of the NAS. "Run only if logged on" is NOT marked. The password of the administrator user is set. It is obvious that there is no access to the NAS when the user is logged out. Do you have any idea how I can solve it? Thanks
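
    One common way around the logged-out credential problem is to wrap ntbackup in a small batch file that authenticates to the share first, and point the scheduled task at the wrapper; a sketch (BACKUPUSER/SECRET are placeholders for an account the NAS accepts):

        @echo off
        rem authenticate to the NAS first, so the share is reachable with no one logged on
        net use \\10.0.0.8\public SECRET /user:BACKUPUSER
        C:\WINDOWS\system32\ntbackup.exe backup "@C:\Documents and Settings\Administrator\Local Settings\Application Data\Microsoft\Windows NT\NTBackup\data\de-board.bks" /a /d "Set created 2/14/2010 at 5:10 PM" /v:yes /r:no /rs:no /hc:off /m incremental /j "de-board" /l:s /f "\\10.0.0.8\public\Backups\IBMServer\de-board.bkf"
        net use \\10.0.0.8\public /delete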


  • TCP/IP & throughput between FreeNAS (BSD) server & other LAN machines

    - by Tim Dickerson
    I have a question for someone who knows BSD a bit better than me, in regards to my LAN setup at home/work here outside Chicago. I can't seem to fully optimize my network's (LAN) throughput via my FreeNAS (BSD-based) file server. It runs the latest FreeBSD release, modified to support several protocols for file transfers and more. Every machine behind my Smoothwall (Linux-based) router is on the usual 192.168.0.x subnet and for the most part works just fine. Behind the Smoothwall box, all machines are connected to a GB HP unmanaged switch. I host a large WISP here and have an OC-3 connection here at home/work, and have no issues with downloading/uploading from/to the net. My problem is with throughput. When I try to transfer large files (really any, for that matter) between any of the machines and the FreeNAS server via FTP, the max throughput I can achieve, say from a Win 7 or a Linux box, is ~65 Mbit/sec. All machines are running Intel Pro 1000 GB NICs and all cable is CAT6. Each is set to auto-negotiation and each shows 1500 MTU, full duplex at 1 GB, so I know the hardware is okay. I have not adjusted the MTU on any machine, as I understand it to be pointless unless certain configurations are used (I assume I am not one of those). My settings for the FreeNAS machine are the following:

        # FreeNAS /etc/sysctl.conf - pertinent settings shown
        kern.ipc.maxsockbuf=262144
        kern.ipc.nmbclusters=32768
        kern.ipc.somaxconn=8192
        kern.maxfiles=65536
        kern.maxfilesperproc=32768
        net.inet.tcp.delayed_ack=0
        net.inet.tcp.inflight.enable=0
        net.inet.tcp.path_mtu_discovery=0
        net.inet.tcp.recvbuf_auto=1
        net.inet.tcp.recvbuf_inc=524288
        net.inet.tcp.recvbuf_max=16777216
        net.inet.tcp.recvspace=65536
        net.inet.tcp.rfc1323=1
        net.inet.tcp.sendbuf_inc=16384
        net.inet.tcp.sendbuf_max=16777216
        net.inet.tcp.sendspace=65536
        net.inet.udp.recvspace=65536
        net.local.stream.recvspace=65536
        net.local.stream.sendspace=65536
        net.inet.tcp.hostcache.expire=1

    From what I can tell, that looks to be a somewhat optimized profile for a typical BSD machine acting as a LAN server. I might be wrong and just wanted to find out from someone who knows BSD better than I do whether that is indeed OK, or whether something is out of tune. Are there other approaches I would find better for P2P file transfers? I honestly do not know what I SHOULD be looking for with respect to throughput between the NAS box and another client when transferring files via FTP, but I am told that what I get on average (40-70 MB/sec) is too low for what it could be. I have thought about adding another NIC in both the FreeNAS box and the Win7 machine and using a crossover cable via a static route, but wanted to check with someone first to see if that might be worth it. I don't know if doing that would bypass the HP GB switch and allow for machine-to-machine transfers anyway. The FTP client I use is FileZilla, and I have tried both active and passive modes with no real gain over each other. The NAS box runs ProFTPD.
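
    Before touching more sysctls, it can help to separate raw TCP throughput from FTP/disk throughput with a tool like iperf (available in FreeBSD ports and on the Windows/Linux clients); a quick sketch:

        # on the FreeNAS box
        iperf -s
        # on a client (address is the server's LAN IP)
        iperf -c 192.168.0.X -t 30

    If iperf shows near-gigabit rates, the bottleneck is in the FTP daemon or the disks rather than the network stack.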


  • Cannot delete files on samba share when authenticated using kerberos

    - by ondra
    I have a Samba server that authenticates users using LDAP; however, it does have Kerberos enabled as well. Unfortunately, users authenticated using Kerberos cannot delete files. I can test this using smbclient: if I use the -k switch, I cannot delete the files; if I don't, I can. The user does have read/write/execute access to the directory from which he is trying to delete the file. Any idea what might be wrong?


  • NTBackup Error: C: is not a valid drive

    - by Chris
    I'm trying to use NTBackup to back up the C: drive on a Microsoft Windows Small Business Server 2003 machine and get the following error in the log file:

        Backup Status
        Operation: Backup
        Active backup destination: 4mm DDS
        Media name: "Media created 04/02/2011 at 21:56"

        Error: The device reported an error on a request to read data from media.
        Error reported: Invalid command.
        There may be a hardware or media problem.
        Please check the system event log for relevant failures.

        Error: C: is not a valid drive, or you do not have access.
        The operation did not successfully complete.

    I'm using a brand-new SATA Quantum DAT-72 drive with a brand-new tape (I have tried a couple of tapes). I carry out the following:

    1. Open NTBackup
    2. Select the Backup tab
    3. Tick the box next to C:
    4. Ensure Destination is 4mm DDS
    5. Set Media to New
    6. Press Start Backup
    7. Choose Replace the data on the media and press Start Backup

    NTBackup tries to mount the media, then the error message shows: "The device reported an error on a request to read data from media. Error reported: Invalid command. There may be a hardware or media problem. Please check the system event log for relevant failures." On checking the log I find the following:

        Event Type:     Information
        Event Source:   NTBackup
        Event Category: None
        Event ID:       8018
        Date:           04/02/2011
        Time:           22:02:02
        User:           N/A
        Computer:       SERVER
        Description:    Begin Operation
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    and then:

        Event Type:     Information
        Event Source:   NTBackup
        Event Category: None
        Event ID:       8019
        Date:           04/02/2011
        Time:           22:02:59
        User:           N/A
        Computer:       SERVER
        Description:    End Operation: The operation was successfully completed. Consult the backup report for more details.
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.


  • publickey authentication only works with existing ssh session

    - by aaron
    Publickey authentication only works for me if I've already got one SSH session open. I am trying to log into a host running Ubuntu 10.10 desktop with publickey authentication, and it fails when I first log in:

        [me@my-laptop:~]$ ssh -vv host
        ...
        debug1: Next authentication method: publickey
        debug1: Offering public key: /Users/me/.ssh/id_rsa
        ...
        debug2: we did not send a packet, disable method
        debug1: Next authentication method: password
        me@host's password:

    And the /var/log/auth.log output:

        Jan 16 09:57:11 host sshd[1957]: reverse mapping checking getaddrinfo for cpe-70-114-155-20.austin.res.rr.com [70.114.155.20] failed - POSSIBLE BREAK-IN ATTEMPT!
        Jan 16 09:57:13 host sshd[1957]: pam_sm_authenticate: Called
        Jan 16 09:57:13 host sshd[1957]: pam_sm_authenticate: username = [astacy]
        Jan 16 09:57:13 host sshd[1959]: Passphrase file wrapped
        Jan 16 09:57:15 host sshd[1959]: Error attempting to add filename encryption key to user session keyring; rc = [1]
        Jan 16 09:57:15 host sshd[1957]: Accepted password for astacy from 70.114.155.20 port 42481 ssh2
        Jan 16 09:57:15 host sshd[1957]: pam_unix(sshd:session): session opened for user astacy by (uid=0)
        Jan 16 09:57:20 host sudo: astacy : TTY=pts/0 ; PWD=/home/astacy ; USER=root ; COMMAND=/usr/bin/tail -f /var/log/auth.log

    The strange thing is that once I've got this first login session, I run the exact same ssh command, and publickey authentication works:

        [me@my-laptop:~]$ ssh -vv host
        ...
        debug1: Server accepts key: pkalg ssh-rsa blen 277
        ...
        [me@host:~]$

    And the /var/log/auth.log output is:

        Jan 16 09:59:11 host sshd[2061]: reverse mapping checking getaddrinfo for cpe-70-114-155-20.austin.res.rr.com [70.114.155.20] failed - POSSIBLE BREAK-IN ATTEMPT!
        Jan 16 09:59:11 host sshd[2061]: Accepted publickey for astacy from 70.114.155.20 port 39982 ssh2
        Jan 16 09:59:11 host sshd[2061]: pam_unix(sshd:session): session opened for user astacy by (uid=0)

    What do I need to do to make publickey authentication work on the first login? NOTE: When I installed Ubuntu 10.10, I checked the 'encrypt home folder' option. I'm wondering if this has something to do with the log message "Error attempting to add filename encryption key to user session keyring".
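
    The encrypted home folder is very likely the culprit: until the user has logged in once, ~/.ssh/authorized_keys sits inside the still-encrypted home and sshd can't read it, so the first attempt falls back to password (which in turn unlocks the home). A common workaround is to keep the keys outside the encrypted area; a sketch (directory name is illustrative):

        sudo mkdir -p /etc/ssh/authorized-keys
        sudo cp ~/.ssh/authorized_keys /etc/ssh/authorized-keys/astacy
        # then in /etc/ssh/sshd_config:
        #   AuthorizedKeysFile /etc/ssh/authorized-keys/%u
        sudo service ssh restart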


  • Why is my global security group being filtered out of my logon token?

    - by Jay Michaud
    While investigating the effects of filtered tokens on my file permissions, I noticed that one of my global security groups is being filtered in addition to the regular system-defined filtered groups. My Active Directory environment is a single-domain forest on the Windows Server 2003 functional level. I'll call the domain "mydomain.example.com". I am logged onto a Windows Server 2008 Enterprise Edition machine (not a domain controller) as a member of the "MYDOMAIN\Domain Admins" group and the "MYDOMAIN\MySecurityGroup" global security group (among others). When I run "whoami /groups" from an elevated command prompt, I see the full list of groups to which my account belongs, as expected. When I run "whoami /groups" from a regular, non-elevated command prompt, I see the same list of groups, but the following groups are described as "Group used for deny only":

    1. BUILTIN\Administrators
    2. MYDOMAIN\Schema Admins
    3. MYDOMAIN\Offer Remote Assistance Helpers
    4. MYDOMAIN\MySecurityGroup

    Numbers 1 through 3 above are expected based on Microsoft documentation; number 4 is not. The "MYDOMAIN\MySecurityGroup" global security group is a group that I created. It contains three non-built-in global security groups, and these security groups contain only non-built-in user accounts. (That is, I created all of the accounts and groups that are members of the "MYDOMAIN\MySecurityGroup" global security group.) There are other, similar groups of which my account is a member that are not being filtered out of my logon token, and this group is not granted any specific user rights in the security settings of this computer or in Group Policy. What would cause this one group to be filtered out of my logon token?


  • How to install Red Hat Enterprise Linux on Apple Macbook Pro MacBookPro4,1

    - by Todd V. Rovito
    I have a one-year-old MacBook Pro that I am trying to get RHEL 5.4 installed on via Boot Camp. No matter what I do, I can't get the installer to boot. I have tried multiple DVDs and even verified the install works on a new MacBook Pro. Most of the time the installer simply locks up. I usually use "linux text all-generic-ide" on the boot line. I also removed the ide parameter and just used "linux text". The result I get is that a bunch of kernel messages appear, then the background turns blue and a thin text box pops up saying it's loading ata... something; it disappears too fast for me to read. Then the machine freezes. I pressed the Alt function keys to see if I could look at the system log; here is what it says:

        Alt-F3: trying to mount CD device hda
        Alt-F4: status error: hda: lastFailedSense
                Hda: Failed opcode was: unknown
                Hda: Lost interrupt
                Hda: Drive not ready for command
                Ide-cd: command 0x3 timed out

    Above this junk, it looks like it found the partition, because it knew it was 20 GB and listed it as /dev/sda3. I think it has something to do with the CD drive; is that possible? Thanks again for the support. PS: I posted in the Apple support forums (Apple.com Support Discussions, Boot Camp, Installation and Storage) and didn't get an answer.


  • Why do my backup fail when I target a network share hosted by a Synology DS211 disk station?

    - by Larry
    My backups are failing when I try to use a network share hosted by a Synology DS211 disk station. They work fine if I target a different network share (i.e. \\server1\data\larry). When I run the following command:

        wbadmin start backup -backupTarget:\\diskstation\backup-larry -include:C:

    this is what I get:

        wbadmin 1.0 - Backup command-line tool
        (C) Copyright 2004 Microsoft Corp.

        Note: The backed up data cannot be securely protected at this destination. Backups stored on a remote shared folder might be accessible by other people on the network. You should only save your backups to a location where you trust the other users who have access to the location or on a network that has additional security precautions in place.

        Retrieving volume information...
        This will back up volume WIN7(C:) to \\diskstation\backup-larry.
        Do you want to start the backup operation?
        [Y] Yes [N] No y

        Note: The list of volumes included for backup does not include all the volumes that contain operating system components. This backup cannot be used to perform a system recovery. However, you can recover other items if the destination media type supports it.

        The backup operation to \\diskstation\backup-larry is starting.
        Creating a shadow copy of the volumes specified for backup...
        Creating a shadow copy of the volumes specified for backup...
        The backup operation stopped before completing.

        Summary of the backup operation:
        ------------------
        The backup operation stopped before completing.
        Detailed error: Access is denied.
        Windows Backup failed to write the file: '<backup location>\WindowsImageBackup\<Computer Name>\MediaId'.
        Access is denied.

    The backup creates the path \\diskstation\backup-larry\WindowsImageBackup\LARRY-MYDOMAIN\ but it's empty. I definitely have read/write access on the target directory (\\diskstation\backup-larry). I have verified this by looking at the permissions and by actually copying files to this location. Any suggestions?
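
    Since the share itself is writable interactively, one thing worth trying is passing explicit credentials to wbadmin rather than relying on the implicit user context (the backup engine may authenticate differently than Explorer does); wbadmin accepts -user and -password for network targets. A sketch (larry/SECRET are placeholders for an account the DS211 accepts):

        wbadmin start backup -backupTarget:\\diskstation\backup-larry -user:larry -password:SECRET -include:C: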

