Search Results

Search found 28350 results on 1134 pages for 'command switch'.


  • Share folder and access this folder on different domain

    - by michel
    I have the following situation: two PCs. My work desktop runs XP and is logged on to the mywork.com domain. This desktop has two network cards: one for logging on to the mywork.com domain and using the intranet etc., and one with access to a switch. The other PC runs Windows 7, is logged on to a workgroup, and also has access to the switch. Now I want to access a shared folder on the XP machine from the Windows 7 machine, but this is not possible because XP is in a different domain. Windows 7 asks for a user and password, but I can't fill in my "mywork.com" login. How can I solve this?
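
    A hedged sketch: Windows usually accepts domain credentials for a share if the username is qualified with the domain explicitly. The host name and drive letter below are hypothetical:

        net use Z: \\xp-desktop\SharedFolder /user:MYWORK\myuser

    If the XP machine cannot validate domain credentials for workgroup clients, a common workaround is to create a matching local account on the XP box and connect as XP-DESKTOP\localuser instead.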

    Read the article

  • My Linux server time and log file timestamps are not the same

    - by Martin
    Hi, I have installed NTP on my Linux server and I am getting my clock from a 6500 core switch; everything is working fine. When I SSH to a switch, I have the session logged to a file on the Linux server, but this log file is not timestamped with the same time as the server. The output of date and hwclock are the same, but the timestamps in my log file are exactly 6 hours behind the date on the server, which is CET. Has anyone had the same problem? Best regards, Martin
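
    A fixed 6-hour offset usually points at a timezone mismatch rather than a clock problem: the process writing the log may have been started with a different TZ, or before the system timezone was set. A few hedged checks (the rsyslog restart assumes that is the logging daemon in use):

        date
        cat /etc/timezone
        echo $TZ
        service rsyslog restart

    Restarting the logging process (or the shell session doing the logging) after correcting the timezone should bring the timestamps into line.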

    Read the article

  • Ubuntu 9.10 RSA authentication: ssh fails, FileZilla runs fine

    - by MariusPontmercy
    This is quite a mystery for me. I usually use passwordless RSA authentication to log in to my remote *nix servers with ssh and sftp. I never had any problem until now: I cannot connect to an Ubuntu 9.10 machine.

        user@myclient$ ssh -i .ssh/Ganymede_key [email protected]
        [...]
        debug1: Host 'ganymede.server.com' is known and matches the RSA host key.
        debug1: Found key in /home/user/.ssh/known_hosts:14
        debug2: bits set: 494/1024
        debug1: ssh_rsa_verify: signature correct
        debug2: kex_derive_keys
        debug2: set_newkeys: mode 1
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug2: set_newkeys: mode 0
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug2: service_accept: ssh-userauth
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug2: key: .ssh/Ganymede_key (0xb96a0ef8)
        debug2: key: .ssh/Ganymede_key ((nil))
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        debug1: Next authentication method: publickey
        debug1: Offering public key: .ssh/Ganymede_key
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        debug1: Trying private key: .ssh/Ganymede_key
        debug1: read PEM private key done: type RSA
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        debug2: we did not send a packet, disable method
        debug1: Next authentication method: keyboard-interactive
        debug2: userauth_kbdint
        debug2: we sent a keyboard-interactive packet, wait for reply
        debug2: input_userauth_info_req
        debug2: input_userauth_info_req: num_prompts 1

    Then it falls back to password authentication. If I disable password authentication on the remote machine, my connection attempt simply fails with "Permission denied (publickey)." Same thing for sftp from the command line. The "funny" thing is that the exact same RSA key works like a charm in a FileZilla sftp session instead:

        12:08:00 Trace: Offered public key from "/home/user/.filezilla/keys/Ganymede_key"
        12:08:00 Trace: Offer of public key accepted, trying to authenticate using it.
        12:08:01 Trace: Access granted
        12:08:01 Trace: Opened channel for session
        12:08:01 Trace: Started a shell/command
        12:08:01 Status: Connected to ganymede.server.com
        12:08:02 Trace: CSftpControlSocket::ConnectParseResponse()
        12:08:02 Trace: CSftpControlSocket::ResetOperation(0)
        12:08:02 Trace: CControlSocket::ResetOperation(0)
        12:08:02 Status: Retrieving directory listing...
        12:08:02 Trace: CSftpControlSocket::SendNextCommand()
        12:08:02 Trace: CSftpControlSocket::ChangeDirSend()
        12:08:02 Command: pwd
        12:08:02 Response: Current directory is: "/root"
        12:08:02 Trace: CSftpControlSocket::ResetOperation(0)
        12:08:02 Trace: CControlSocket::ResetOperation(0)
        12:08:02 Trace: CSftpControlSocket::ParseSubcommandResult(0)
        12:08:02 Trace: CSftpControlSocket::ListSubcommandResult()
        12:08:02 Trace: CSftpControlSocket::ResetOperation(0)
        12:08:02 Trace: CControlSocket::ResetOperation(0)
        12:08:02 Status: Directory listing successful

    Any thoughts? M
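
    Since the server rejects the key offered by OpenSSH while accepting FileZilla's converted copy of it, the server-side log and the permissions that sshd's StrictModes enforces are worth checking first. A hedged diagnostic sketch (paths assume a default Ubuntu sshd):

        # on the server, watch why the key is refused
        tail -f /var/log/auth.log

        # typical permission fixes for the target account
        chmod go-w ~
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys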

    Read the article

  • Telnet does not give a response

    - by floorish
    Some wireless access points are acting a little weird, so I want to reboot them every couple of hours. Luckily there exists a security flaw which lets me log in as root through telnet on port 1111 (without username and password). Now I want to use that to let my QNAP NAS execute the reboot command through telnet every now and then. The problem, however, is that the NAS's telnet version doesn't give any response when I connect to the AP. The telnet I use on OS X works just fine, but the one on the NAS does not.

        BusyBox v1.01 (2012.06.14-18:35+0000) multi-call binary
        Usage: telnet [-a] [-l USER] HOST [PORT]

    When I execute telnet <HOST> 1111, nothing happens. I can send the escape character ^], which gives me the following options:

        Console escape. Commands are:
        l go to line mode
        c go to character mode
        z suspend telnet
        e exit telnet

    The only way to get some commands executed is by suspending telnet with z, followed by some random command which isn't recognized. Then the prompt shows this:

        # telnet 192.168.1.5 1111
        ^]
        Console escape. Commands are:
        l go to line mode
        c go to character mode
        z suspend telnet
        e exit telnet
        z
        continuing...
        asdf
        Illegal command.
        00>

    After that I am able to communicate with the AP, but when I exit the telnet session and try the same again, the AP refuses to connect at all and must be manually rebooted (it looks like the telnet session isn't shut down properly on the AP). So the question is: what commands should I execute in order to communicate with the AP using the BusyBox telnet version on the QNAP? (No, I can't use ssh, unfortunately.)
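
    Since the session is non-interactive anyway, it may be easier to skip telnet's terminal negotiation and pipe the command straight in. A hedged sketch: it assumes the AP's port-1111 shell accepts raw input, and uses the IP from the question; the sleep keeps the connection open long enough for the AP to act:

        ( echo "reboot"; sleep 2 ) | telnet 192.168.1.5 1111

    If the QNAP firmware ships netcat, nc 192.168.1.5 1111 behaves even more predictably for this kind of one-shot command, since it does no option negotiation at all.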

    Read the article

  • Handle filename with spaces inside Bash-script

    - by ifischer
    In my Bash script I have to handle filenames with spaces. These are the important lines inside my script:

        mp3file="/media/d/Music/zz_Hardcore/Sampler/Punk-O-Rama\ Vol.5\ \[MP3PRO\]/01\ -\ Nofx\ -\ Pump\ up\ the\ Valium.mp3"
        echo "Command: mp3info -x `echo $mp3file`"
        mp3info -x `echo $mp3file`

    Unfortunately, the command does not work, because the filename is split:

        mp3info: invalid option -- '\'
        mp3info: invalid option -- '\'
        Error opening MP3: /media/d/Music/zz_Hardcore/Sampler/Punk-O-Rama\: No such file or directory
        Error opening MP3: Vol.5\: No such file or directory
        Error opening MP3: \[MP3PRO\]/01\: No such file or directory
        Error opening MP3: Nofx\: No such file or directory
        Error opening MP3: Pump\: No such file or directory
        Error opening MP3: up\: No such file or directory
        Error opening MP3: the\: No such file or directory
        Error opening MP3: Valium.mp3: No such file or directory

    I also tried to add a custom IFS, as I read on some forums:

        SAVEIFS=$IFS
        IFS=$(echo -en "\n\b")
        # Script like above
        IFS=$SAVEIFS

    But this way I'm getting the error:

        Error opening MP3: /media/d/Music/zz_Hardcore/Sampler/Punk-O-Rama\ Vol.5\ \[MP3PRO\]/01\ -\ Nofx\ -\ Pump\ up\ the\ Valium.mp3: No such file or directory

    I have tried for quite a while now, but I cannot get my script to work. What is strange is that if I run the command my script should create (echoed from inside the script) manually in the shell, it actually works. But not inside my script. Any hints?
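
    The root cause is mixing two quoting mechanisms: the backslashes are stored literally inside the double-quoted string, and the unquoted `echo $mp3file` then word-splits the result. A minimal sketch of the fix, keeping the path plain inside the variable and double-quoting every expansion:

        # store the literal path; no backslash escapes are needed inside quotes
        mp3file="/media/d/Music/zz_Hardcore/Sampler/Punk-O-Rama Vol.5 [MP3PRO]/01 - Nofx - Pump up the Valium.mp3"

        # quote the expansion so the filename stays one word
        mp3info -x "$mp3file"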

    Read the article

  • PHP CLI not respecting memory limit in php.ini

    - by user13743
    I am using drush, a command-line PHP app to manage a Drupal website. I am running a command to import a lot of data, which is causing me to hit PHP's memory limit:

        PHP Fatal error: Allowed memory size of 536870912 bytes exhausted ...

    That is 512 MB if I'm doing the math correctly (536870912 / 1024 / 1024 = 512). I've changed the directive in the php.ini that drush uses:

        $> drush status
        ...
        PHP configuration : /etc/php5/cli/php.ini

        $> grep memory /etc/php5/cli/php.ini
        ; Maximum amount of memory a script may consume (128MB)
        ; http://php.net/memory-limit
        memory_limit = 1024M

    But I'm still hitting the 512 MB limit! I am running in a virtual machine, whose memory I changed from 512 to 1024 MB of RAM to allow drush to run.

        $> free -m
                     total       used       free     shared    buffers     cached
        Mem:          1010        578        431          0         14        392
        -/+ buffers/cache:         172        837
        Swap:          382          0        382

    So it says it has some 431 MB free, now that I've bumped the VM up to 1024 MB. I guess half the memory is being used to run the GUI, but I don't understand how the GUI was running okay when the VM had 512 MB of RAM. Why is the PHP CLI still hitting a 512 MB memory limit? If it were hitting a system memory limit, shouldn't it die around 431 MB, which is what the free command says is available?
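
    A 512 MB failure against a 1024 M ini value suggests something is resetting the limit at runtime; both drush and Drupal's settings.php can call ini_set('memory_limit', ...). Two hedged checks - ask the CLI binary what it actually sees, then force the limit for one run (the drush path is hypothetical; older drush versions ship a drush.php entry point that can be invoked this way):

        # what limit does the CLI really have?
        php -r 'echo ini_get("memory_limit"), "\n";'

        # override it for a single invocation
        php -d memory_limit=1024M /path/to/drush.php your-import-command

    If the override works, grep the site's settings.php and the drush configuration for a hard-coded memory_limit.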

    Read the article

  • MegaCli is killing me, any help appreciated

    - by Stefan
    I run a server with 2 drives in RAID 0, configured through the BIOS. I just added 2 more drives using hotplug (the server is a Dell R610 with RHEL 5.4 64-bit) and I would like to configure a separate RAID 0 partition on these drives. I am getting the following error:

        /opt/MegaRAID/MegaCli/MegaCli64 -CfgLdAdd r0[32:2, 32:3] -a0

        The specified physical disk does not have the appropriate attributes to complete
        the requested command.

        Exit Code: 0x26

    All the parameters are correct and there is just no reason why this command should not work; see this (Fujitsu is the current RAID, Seagate is the new one I want to create):

        /opt/MegaRAID/MegaCli/MegaCli64 -PDList -aALL | egrep 'Adapter|Enclosure|Slot|Inquiry'

        Adapter #0
        Enclosure Device ID: 32
        Slot Number: 0
        Enclosure position: 0
        Inquiry Data: FUJITSU MBD2147RC    D807D0A4PA101174
        Enclosure Device ID: 32
        Slot Number: 1
        Enclosure position: 0
        Inquiry Data: FUJITSU MBD2147RC    D807D0A4PA10115T
        Enclosure Device ID: 32
        Slot Number: 2
        Enclosure position: 0
        Inquiry Data: SEAGATE ST9300603SS  FS033SE0TF5K
        Enclosure Device ID: 32
        Slot Number: 3
        Enclosure position: 0
        Inquiry Data: SEAGATE ST9300603SS  FS023SE070FK

    I also tried to set up the drive as a hot spare, which gave another strange error:

        /opt/MegaRAID/MegaCli/MegaCli64 -PDHSP -Set -physdrv[32:3] -a0

        Adapter: 0: Set Physical Drive at EnclId-32 SlotId-3 as Hot Spare Failed.

        FW error description:
        The specified device is in a state that doesn't support the requested command.

        Exit Code: 0x32

    As you can see, the disk is in an Unconfigured, Good state:

        Enclosure Device ID: 32
        Slot Number: 3
        Enclosure position: 0
        Device Id: 3
        Sequence Number: 1
        Media Error Count: 0
        Other Error Count: 0
        Predictive Failure Count: 0
        Last Predictive Failure Event Seq Number: 0
        PD Type: SAS
        Raw Size: 279.396 GB [0x22ecb25c Sectors]
        Non Coerced Size: 278.896 GB [0x22dcb25c Sectors]
        Coerced Size: 278.875 GB [0x22dc0000 Sectors]
        Firmware state: Unconfigured(good), Spun Up
        SAS Address(0): 0x5000c50005cd20b1
        SAS Address(1): 0x0
        Connected Port Number: 3(path0)
        Inquiry Data: SEAGATE ST9300603SS  FS023SE070FK
        FDE Capable: Not Capable
        FDE Enable: Disable
        Secured: Unsecured
        Locked: Unlocked
        Needs EKM Attention: No
        Foreign State: Foreign
        Foreign Secure: Drive is not secured by a foreign lock key
        Device Speed: Unknown
        Link Speed: Unknown
        Media Type: Hard Disk Device
        Drive Temperature: 30C (86.00 F)
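
    Worth noting: the PDList output above shows "Foreign State: Foreign", and MegaRAID refuses to use a foreign-flagged disk for a new array or hot spare. A hedged sketch of the usual remedy, which clears leftover configuration metadata from the drives' previous life (review the scan output before clearing, since that metadata is discarded):

        # list any foreign configurations the controller sees
        /opt/MegaRAID/MegaCli/MegaCli64 -CfgForeign -Scan -a0

        # discard them so the drives return to a truly unconfigured state
        /opt/MegaRAID/MegaCli/MegaCli64 -CfgForeign -Clear -a0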

    Read the article

  • NIC Teaming HP Server running Win2003

    - by Colm
    Hello everyone. I'm new to server NIC teaming. I have an HP ProLiant DL360 G4p running Windows Server 2003 with 2 NICs, of which only one is currently active. I'd like to activate the 2nd NIC (in an active/passive configuration), connected to a 2nd switch, with only one IP address and ideally only one MAC address. The 1st switch is a Cisco 2960G and the 2nd is a Cisco C3560G. There are VLANs, RSTP and PAgP in use already. Can someone give me an idea, in broad terms, of what technology/protocols I should be investigating (HSRP, SLB/TLB teaming, etc.)? I can provide more info if needed. Thanks, Colm.

    Read the article

  • Remote RIB iLO on Proliant via RIBCL

    - by Wudang
    I'm trying to automate a process for our ops team. The process requires that some Windows servers running on blades are shut down, left down for a few hours, then restarted when some other processes complete. This is currently done by an op logging on to each blade's iLO web interface to stop and start it. I've been trying to automate this with HP's cpqlocfg program, with partial success: I can issue the GET_POWER, GET_USER_INFO, etc. commands, but SET_HOST_POWER fails in a specific way. Using the cpqlocfg GET_EVENTLOG command I can see the XML login event and the power command being issued through the iLO interface, but then nothing happens. Some hints from googling suggest ACPI isn't configured properly, but I can't find anything on how to verify this. Am I even using the right command? There are also a few other options like PRESS_PWR_BUTTON, etc. The problem is I have nowhere to test this; all I can do at the moment is give a script to ops and ask them to try it at 4am on a Sunday when they run the procedure. The shutdown is trivial, as I can use the Windows "shutdown" command; it's the power-on that I need help with. Has anyone done this? I'd tag this "rib ribcl ilo" but lack the rep points, sorry.
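
    For reference, a minimal RIBCL payload for a power-on looks like the sketch below (credentials are placeholders), sent with something like cpqlocfg -s <ilo-address> -f poweron.xml. One hedged consideration: SET_HOST_POWER drives the host power state directly, while PRESS_PWR_BUTTON emulates the physical button, which is sometimes what a fully powered-off blade actually responds to - it may be worth trying both.

        <RIBCL VERSION="2.0">
          <LOGIN USER_LOGIN="Administrator" PASSWORD="password">
            <SERVER_INFO MODE="write">
              <!-- HOST_POWER="Yes" requests power-on -->
              <SET_HOST_POWER HOST_POWER="Yes"/>
            </SERVER_INFO>
          </LOGIN>
        </RIBCL>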

    Read the article

  • WS2008 subst in Logon script does not "stick"

    - by Frans
    I have a terminal server environment exclusively with Windows Server 2008. My problem is that I need to "map" a drive letter to each user's Temp folder. This is due to a legacy app that requires a separate Temp folder for each user but which does not understand %temp%. So, just add "subst t: %temp%" to the logon script, right? The problem is that, even though the command runs, the subst doesn't "stick" and the user doesn't get a T: drive. Here is what I have tried. The simplest version:

        'Mapping a temp drive
        Set WinShell = WScript.CreateObject("WScript.Shell")
        WinShell.Run "subst T: %temp%", 2, True

    That didn't work, so I tried this for more debug information:

        'Mapping a temp drive
        Set WinShell = WScript.CreateObject("WScript.Shell")
        Set procEnv = WinShell.Environment("Process")
        wscript.echo(procEnv("TEMP"))
        tempDir = procEnv("TEMP")
        WinShell.Run "subst T: " & tempDir, 3, True

    This shows me the correct temp path when the user logs in - but still no T: drive. I decided to resort to brute force and put this in my login script:

        'Mapping a temp drive
        Set WinShell = WScript.CreateObject("WScript.Shell")
        WinShell.Run "\\domain\sysvol\esl.hosted\scripts\tempdir.cmd", 3, True

    where \\domain\sysvol\esl.hosted\scripts\tempdir.cmd has this content:

        echo on
        subst t: %temp%
        pause

    When I log in with the above, the command window opens up and I can see the subst command being executed correctly, with the correct path. But still no T: drive. I have tried running all of the above scripts outside of a login script and they always work perfectly - this problem only occurs when doing it from inside a login script. I found a passing reference on an MSFN forum about a similar problem when the user is already logged on to another machine - but I have this problem even without being logged on to another machine. Any suggestion on how to overcome this will be much appreciated.
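
    One hedged possibility on Server 2008: with UAC, drive mappings created under one of the user's two linked tokens (the logon script's context) are not visible to the other token the shell runs under. The commonly cited workaround is the EnableLinkedConnections registry value, which makes mappings propagate between the two tokens:

        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v EnableLinkedConnections /t REG_DWORD /d 1 /f

    This is a sketch, not a guaranteed fix; it is documented mainly for net use mappings, so test whether it also covers subst in your environment.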

    Read the article

  • certutil -ping fails with 30 seconds timeout - what to do?

    - by mark
    The certificate store on my Win7 box is constantly hanging. Observe:

        C:\>1.cmd

        C:\>certutil -? | findstr /i ping
          -ping             -- Ping Active Directory Certificate Services Request interface
          -pingadmin        -- Ping Active Directory Certificate Services Admin interface

        C:\>set PROMPT=$P($t)$G

        C:\(13:04:28.57)>certutil -ping
        CertUtil: -ping command FAILED: 0x80070002 (WIN32: 2)
        CertUtil: The system cannot find the file specified.

        C:\(13:04:58.68)>certutil -pingadmin
        CertUtil: -pingadmin command FAILED: 0x80070002 (WIN32: 2)
        CertUtil: The system cannot find the file specified.

        C:\(13:05:28.79)>set PROMPT=$P$G

        C:\>

    Explanations: the first command shows that certutil has -ping and -pingadmin parameters. Trying either ping parameter fails after a 30-second timeout (the current time is visible in the prompt). This is a serious problem - it breaks all the secure communication in my app. If anyone knows how this can be fixed, please share. Thanks.

    P.S. 1.cmd is simply a batch of these commands:

        certutil -? | findstr /i ping
        set PROMPT=$P($t)$G
        certutil -ping
        certutil -pingadmin
        set PROMPT=$P$G

    EDIT1: I have succeeded in pinning down the single Windows API that causes the problem - DsGetDcName. According to windbg, certutil -ping invokes it like so:

        PDOMAIN_CONTROLLER_INFO pdci;
        DWORD ret = ::DsGetDcName(NULL, NULL, NULL, NULL, DS_DIRECTORY_SERVICE_PREFERRED, &pdci);

    On my workstation it times out after 30 seconds and then returns error code 1355, which is ERROR_NO_SUCH_DOMAIN ("No domain controller is available for the specified domain or the domain does not exist"). On another machine, which happens to be a Windows Server 2003, it returns almost immediately with the correct domain controller name inside the returned DOMAIN_CONTROLLER_INFO structure. Now the question is: what is missing on my workstation for that API to find the correct domain controller?
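
    A hedged way to poke the same code path without windbg: nltest drives DC discovery through the same locator machinery, so comparing its output on the working and failing machines shows whether DNS-based DC discovery is broken (the domain name below is a placeholder):

        nltest /dsgetdc:mydomain.local

    If that also stalls for ~30 seconds and reports ERROR_NO_SUCH_DOMAIN, the workstation's DNS client settings are the usual suspects - which servers it queries, and whether the AD SRV records such as _ldap._tcp.dc._msdcs.<domain> resolve from that box.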

    Read the article

  • Problems configuring an SSH tunnel to a Nexentastor appliance for use with headless Crashplan

    - by Rob Smallshire
    Problem: I am attempting to configure an SSH tunnel to a NexentaStor appliance from either a Windows or Linux computer, so that I can connect a CrashPlan Desktop GUI to a headless CrashPlan server running on the Nexenta box, according to these instructions on the CrashPlan support site: Connect to a Headless CrashPlan Desktop. So far, I've failed to get a working SSH tunnel from either a Windows client (using PuTTY) or a Linux client (using command-line SSH). I'm fairly sure the problem is at the receiving end, with NexentaStor. A blog article - CrashPlan for Backup on Nexenta - indicates that it could be made to work only "after enabling TCP forwarding in Nexenta in /etc/ssh/sshd_config" - although I'm not sure how to go about that or specifically what I need to do.

    Things I have tried. Ensuring the CrashPlan server on the Nexenta box is listening on port 4243:

        $ netstat -na | grep LISTEN | grep 42
        127.0.0.1.4243    *.*    0    0    131072    0    LISTEN
        *.4242            *.*    0    0     65928    0    LISTEN

    Establishing a tunnel from a Linux host:

        $ ssh -L 4200:localhost:4243 admin@10.0.0.56

    and then, from another terminal on the Linux host, using telnet to verify the tunnel:

        $ telnet localhost 4200
        Trying ::1...
        Connected to localhost.
        Escape character is '^]'.

    with nothing more, although the CrashPlan server should respond with something. From Windows, using PuTTY, I have followed the instructions on the CrashPlan support site to establish an equivalent tunnel, but telnet on Windows gives me no response at all and the CrashPlan GUI can't connect either. The PuTTY log for the tunnelled connection shows reasonable output:

        ...
        2011-11-18 21:09:57 Opened channel for session
        2011-11-18 21:09:57 Local port 4200 forwarding to localhost:4243
        2011-11-18 21:09:57 Allocated pty (ospeed 38400bps, ispeed 38400bps)
        2011-11-18 21:09:57 Started a shell/command
        2011-11-18 21:10:09 Opening forwarded connection to localhost:4243

    but the telnet localhost 4200 command from Windows does nothing at all - it just waits with a blank terminal. On the NexentaStor server I've examined the /etc/ssh/sshd_config file and everything seems 'normal' - and I've commented out the ListenAddress entries to ensure that I'm listening on all interfaces. How can I establish a tunnel, and how can I verify that it is working?
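
    A hedged sketch of the sshd_config change the blog article alludes to - on the Nexenta box, make sure this directive is present and uncommented, then restart the SSH service (NexentaStor is Solaris-based, so svcadm is the likely restart mechanism):

        # /etc/ssh/sshd_config
        AllowTcpForwarding yes

        # restart sshd to pick up the change
        svcadm restart svc:/network/ssh:default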

    Read the article

  • Troubles installing/starting Redis via Resque

    - by Craig Flannagan
    Trying to complete the instructions for Resque/Redis installation here: https://github.com/defunkt/resque/blob/master/README.markdown. I am stuck where I try to start up Redis via Resque with the following command:

        Craig:/usr/local/src/resque$ rake redis:start
        (in /usr/local/src/resque)
        Detach with Ctrl+\  Re-attach with rake redis:attach
        ../../bin/dtach -A /tmp/redis.dtach ../../bin/redis-server ../../../etc/redis.conf
        rake aborted!
        Command failed with status (127): [../../bin/dtach -A /tmp/redis.dtach ../../...]
        (See full trace by running task with --trace)

    Rerunning with --trace (showing only part of the trace):

        Craig:/usr/local/src/resque$ rake redis:start --trace
        (in /usr/local/src/resque)
        ** Invoke redis:start (first_time)
        ** Execute redis:start
        Detach with Ctrl+\  Re-attach with rake redis:attach
        ../../bin/dtach -A /tmp/redis.dtach ../../bin/redis-server ../../../etc/redis.conf
        rake aborted!
        Command failed with status (127): [../../bin/dtach -A /tmp/redis.dtach ../../...]
        /Users/craigflannagan/.rvm/gems/ruby-1.9.2-head@foo/gems/rake-0.8.7/lib/rake.rb:995:in `block in sh'

    Not sure what is wrong here. By the way, when I followed those instructions:

        $ git clone git://github.com/defunkt/resque.git
        $ cd resque
        $ PREFIX=<your_prefix> rake redis:install dtach:install
        $ rake redis:start

    I wasn't sure whether I was supposed to run step 1 from within the Rails project, or whether the git clone was supposed to create a new folder outside the Rails project (in this case, I chose to have the folder created outside the project).
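
    Exit status 127 means the shell could not find the program: the rake task builds relative paths like ../../bin/dtach from the PREFIX used at install time, so running it from a different directory breaks them. A hedged check is to see whether the binaries actually landed somewhere on disk and, if so, start Redis directly (the config path below is an example):

        which dtach redis-server
        dtach -A /tmp/redis.dtach redis-server /usr/local/etc/redis.conf

    If dtach was never built, installing it from a package manager and re-running the rake task from the same directory used for PREFIX should line the relative paths back up.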

    Read the article

  • File transfer problems through VPN when Cisco IPS is enabled

    - by Richard West
    We have a Cisco ASA 5510 firewall with the IPS module installed. We have a customer we must connect to via VPN in order to exchange files over FTP. We use the Cisco VPN client (version 5.0.01.0600) on our local workstations, which are behind the firewall and subject to the IPS. The VPN client connects successfully to the remote site. However, when we start the FTP file transfer, we are able to upload only 150-200 KB of data, then everything stops; a minute later the VPN session is dropped.

    I think I have isolated this to an IPS issue by temporarily disabling the service policy on the ASA for the IPS with the following command:

        access-list IPS line 1 extended permit ip 0.0.0.0 0.0.0.0 0.0.0.0 0.0.0.0 inactive

    After this command was issued, I established the VPN to the remote site and successfully transferred the entire file. While still connected to the VPN and FTP session, I issued the command to re-enable the IPS:

        access-list IPS line 1 extended permit ip 0.0.0.0 0.0.0.0 0.0.0.0 0.0.0.0

    The file transfer was tried again and was once again successful, so I closed the FTP session and reopened it while keeping the same VPN session open. This file transfer was also successful. This told me that nothing in the FTP programs was being filtered or causing the problem. Furthermore, we use FTP to exchange files with many sites every day without issue. I then disconnected the original VPN session (which had been established while the access-list was inactive) and reconnected the VPN session, now with the access-list active. After starting the FTP transfer, the file stopped after 150 KB. To me this seems like the IPS is blocking, or somehow interfering with, the initial VPN setup to the remote site. This only started happening last week, after the latest IPS signature updates were applied (sig version 407.0). Our previous sig version was 95 days old because the system was not auto-updating itself. Any ideas on what could be causing this problem?
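
    One hedged way to narrow it down without disabling the IPS globally: traffic matching a deny entry in the class-map's match ACL is not sent to the IPS module, so a deny line for just this VPN peer exempts that one flow. The addresses below are placeholders for the workstation subnet and the customer's VPN endpoint:

        access-list IPS line 1 extended deny ip 10.1.1.0 255.255.255.0 host 203.0.113.10

    If the transfer survives with only that flow exempted, the next step would be checking the IPS event log for which of the new 407.0 signatures fires on the tunnel traffic.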

    Read the article

  • Help: Add Any Application Shortcut to the Desktop Context Menu

    - by blackjack
    I got the info here, but after adding it I didn't get any shortcut in my desktop context menu :( Please help me; I want it only in my desktop context menu.

    Open regedit and go to:

        HKEY_CLASSES_ROOT\Directory\Background\shell

    Now under this key create another key with any name, and in the right-side pane set its (Default) value to the label you want to show in the desktop context menu, like Media Player, Winamp, Firefox, or anything else. Now create another key under this newly created key with the name command, and in the right-side pane set its value to the exact path of the application, like:

        C:\Program Files\Windows Media Player\wmplayer.exe
        C:\Program Files\Winamp\winamp.exe

    etc. That's it. Now you can check your favorite application shortcut in the desktop context menu. You can create as many shortcuts as you want; simply create a separate key for each application. Following is ready-made code:

        Windows Registry Editor Version 5.00

        [HKEY_CLASSES_ROOT\Directory\Background\shell\WMP]
        @="Windows Media Player"

        [HKEY_CLASSES_ROOT\Directory\Background\shell\WMP\command]
        @="C:\Program Files\Windows Media Player\wmplayer.exe"

    Just change the label and path to your desired application, save it as "vishal.reg" (including the quotes) and run it. You can also make the application shortcut show only when you press Shift by adding an "Extended" string value in the right-side pane of the newly created key:

        Windows Registry Editor Version 5.00

        [HKEY_CLASSES_ROOT\Directory\Background\shell\WMP]
        @="Windows Media Player"
        "Extended"=""

        [HKEY_CLASSES_ROOT\Directory\Background\shell\WMP\command]
        @="C:\Program Files\Windows Media Player\wmplayer.exe"
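
    A likely reason the .reg import did nothing: inside a .reg file, backslashes in string data must be escaped as \\, otherwise the command value is mangled on import. A hedged corrected version of the same file (quoting the exe path also protects the spaces in "Program Files"):

        Windows Registry Editor Version 5.00

        [HKEY_CLASSES_ROOT\Directory\Background\shell\WMP]
        @="Windows Media Player"

        [HKEY_CLASSES_ROOT\Directory\Background\shell\WMP\command]
        @="\"C:\\Program Files\\Windows Media Player\\wmplayer.exe\""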

    Read the article

  • Problems during an update of cPanel / WHM

    - by haron
    I ordered a master WHM account with the CentOS/cPanel couple (whm-cpanel.eu.pn). The installation was fresh, and an update of the basic services was necessary (it had: WHM 11.15.0, cPanel 11.17.0, WHM X v3.1.0, Apache 1.3.37, PHP 4.4.7, MySQL 4.1.22).

    1. I started to update cPanel/WHM via the command /scripts/upcp. Everything went well until the middle of the installation, when the server stopped responding (to ping or ssh). The installation appears to have continued on its own to the end, and after some time everything was back to normal (I do not know if there was a reboot) and my interface was updated (cPanel 11.24.4-R36167 - WHM 11.24.2 - X 3.9).

    2. Then I updated MySQL via the Tweak Settings interface in WHM, followed by the command /scripts/mysqlup. Here everything went fine, no problem.

    3. Finally, I wanted to upgrade to Apache 2.2 / PHP 5, and I used the command /scripts/easyapache. After selecting all the packages and modules, the installation started, but the same thing happened as in point 1: the server stopped answering, and this time the installation did not go through. Apache 2.2 did get installed (after the second try), but PHP remained at 4. I tried the same operation several times without success.

    I do not think this is a memory problem; a free -m shortly before losing communications showed nothing alarming. On the other hand, CPU usage seemed to climb. I reinstalled the machine and tried again - same problem! Whether via the WHM interface or by shell, the installation stops short; for 15 minutes the machine does not respond, then everything returns to normal, but no update of PHP is done. Is there a known bug in this version of cPanel/WHM? Has anyone met the same problem? If I compile Apache/PHP manually, without using the easyapache script, might I encounter problems with cPanel later? Thank you!
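
    A hedged suggestion while debugging this: run easyapache inside a detachable session, so a dropped SSH connection cannot kill the build and the session can be inspected after the machine comes back:

        screen -S easyapache
        /scripts/easyapache
        # detach with Ctrl-a d; re-attach later with: screen -r easyapache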

    Read the article

  • Cannot cd to parent directory with cd dirname

    - by Sharjeel Sayed
    I have made a bash command which generates a one-liner for restarting all WebLogic (8, 9, 10) instances on a server:

        /usr/ucb/ps auwwx | grep weblogic | tr ' ' '\n' | grep security.policy | grep domain | awk -F'=' '{print $2}' | sed 's/weblogic.policy//' | sed 's/security\///' | sort | sed 's/^/cd /' | sed 's/$/ ; cd .. ; \/recycle_script_directory_path\/recycle/ ;' | tr '\n' ' '

    To restart a WebLogic instance, the recycle script (/recycle_script_directory_path/recycle) needs to be initiated from within the domain directory, as it pulls some application information from .ini files in the domain directory. The following part of the script generates a line to cd to the parent directory of the app, i.e. the domain directory:

        sed 's/$/ ; cd .. ; \/recycle_script_directory_path\/recycle/ ;' | tr '\n' ' '

    I am sure there is a better way to cd to the parent directory, like cd dirname, but every time I run the following cd command it throws a "Variable syntax" error:

        cd $(dirname '/domain_directory_path/app_name')

    How do I incorporate the cd to the directory name in a better way? Also, are there any enhancements for my bash command?

    Some info on my script: 1) the following part lists the running WebLogic instances along with their full paths:

        /usr/ucb/ps auwwx | grep weblogic | tr ' ' '\n' | grep security.policy | grep domain | awk -F'=' '{print $2}' | sed 's/weblogic.policy//' | sed 's/security\///' | sort

    2) the grep domain part is required since all domain names have domain as the suffix.
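
    A hedged observation: "Variable syntax" is the error csh/tcsh prints for $(...), which suggests the one-liner is being executed by a C shell rather than bash. Backticks work in both shell families:

        cd `dirname '/domain_directory_path/app_name'`

    Alternatively, run the generated one-liner under bash explicitly, e.g. bash -c '...', so that $(dirname ...) is interpreted as intended.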

    Read the article

  • lftp cannot connect to IIS

    - by ruyrocha
    Hello, I cannot connect to IIS using lftp, as you can see here:

        <--- 200 Language is now English, UTF-8 encoding.
        ---> OPTS UTF8 ON
        <--- 200 OPTS UTF8 command successful - UTF8 encoding now ON.
        ---> HOST x.x.x.x
        <--- 504 Server cannot accept argument.
        ---> USER bla
        <--- 331 Password required for hgtrf.
        ---> PASS blabla
        <--- 230 User logged in.
        ---> PWD
        <--- 257 "/" is current directory.
        ---> PBSZ 0
        <--- 200 PBSZ command successful.
        ---> PROT P
        <--- 534 Policy denies SSL.
        ---> PASV
        <--- 227 Entering Passive Mode (x.x.x.x,194,118).
        ---- Connecting data socket to (x.x.x.x) port 49782
        **** Socket error (Connection refused) - reconnecting
        ---> LIST
        ---> ABOR
        ---- Closing aborted data socket
        ---- Closing control socket

    I could connect, list, retrieve and send files using the standard ftp command. Do you have any suggestions?
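
    Two hedged leads from that trace: the server rejects PROT P ("534 Policy denies SSL"), and the passive data connection to port 49782 is refused, likely by a firewall that only allows the control port. Settings worth trying in lftp (these are standard lftp variables; whether they resolve this particular server's behavior is an assumption):

        set ftp:ssl-allow no
        set ftp:passive-mode off

    Disabling SSL negotiation matches what the plain ftp client does, and active mode sidesteps the blocked passive port range; alternatively, the IIS passive port range could be opened on the firewall.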

    Read the article

  • Option 82 and dhcpd. "No free leases" for second computer

    - by SaveTheRbtz
    There is a DHCP server in the network (isc-dhcpd-server-3.0 on FreeBSD 7.2) that gives one IP per switch port to every user via Option 82. The problem appears when a user disconnects one of his computers and connects another (i.e. a notebook with a different MAC address): dhcpd then logs "...network net1: no free leases", because there is a record in the leases file saying that this IP is already owned by another MAC. That second computer will get its IP only after default-lease-time (which is, IIRC, a minimum of 10 minutes - and after 3 minutes the user is usually calling support) or after deletion of the dhcpd.leases file and a restart of dhcpd. Is there a way to turn leases off entirely, given that we have a strict binding between switch port and IP?
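
    A hedged sketch of the usual mitigation when the port-to-IP binding is authoritative anyway: shrink the lease lifetime so a stale MAC-to-IP record expires quickly. These are standard dhcpd.conf directives; the values are examples:

        default-lease-time 120;
        max-lease-time 300;

    With fixed per-port assignments, short leases cost little, since clients simply keep renewing the same address.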

    Read the article

  • network design to segregate public and staff

    - by barb
    My current setup has: a pfSense firewall with 4 NICs and potential for a 5th; one 48-port 3Com switch and one 24-port HP switch (willing to purchase more); subnet 1) edge (Windows Server 2003 for VPN through Routing and Remote Access); and subnet 2) LAN with one WS2003 domain controller/DNS/WINS etc., one WS2008 file server, one WS2003 running Vipre anti-virus and Time Limit Manager (which controls client computer use), and about 50 PCs.

    I am looking for a network design that separates clients and staff. I could do two totally isolated subnets, but I'm wondering if there is anything in between, so that staff and clients could share some resources such as printers and anti-virus servers, and staff could access client resources but not vice versa. I guess what I'm asking is: can you configure subnets and/or VLANs like this?

    1) edge, for VPN
    2) services, available to all other internal networks
    3) staff, which can access services and clients
    4) clients, which can access services but not staff

    By access/non-access, I mean stronger separation than domain usernames and passwords.

    Read the article

  • Migrating from DD-WRT to Tomato

    - by Collin Allen
    Is it possible to switch to the latest version of Tomato on a router that's already running DD-WRT? Using the default Linksys firmware on my WRT54GL v1.1, I had to upload a micro version of DD-WRT first. I imagine that, since I'm now running third-party firmware, I won't have to do that again to make the switch, but I thought I should check so as not to brick it. This router is taking a back seat to a new AirPort Extreme (for the 'n' capability), but I still want to have the soon-to-be-Tomato device sit between the AirPort Extreme and my modem for the superior traffic graphing.

    Read the article

  • frequent "SNMP error" with Cacti

    - by nn4l
    When adding new devices to my Cacti instance, I get frequent "SNMP error" messages in the device screen, but the error is not consistent, not even for the same device. Here's what I have already checked:

    - Sometimes a device shows that "SNMP error" message even when it did not have that error an hour before, and vice versa.
    - I tried this with several different Cacti releases, installed on different OSes (Debian squeeze: 0.8.7g-1+squeeze1, Debian sid: 0.8.7i-3, CentOS 6.0: 0.8.7i-2.el6).
    - I tried both from a local (192.168.1.xy) network and from a different data center, so I don't think it is a network problem.
    - I reinstalled the Cacti database and reran the scripts to install my devices; now different devices have that error.
    - When executing a snmpwalk or snmpgetnext command from the command line, it is always successful.
    - Increasing the timeout to 20000 (20 seconds) and the retry count to 10 does not make a difference.

    The cacti.log says:

        04/14/2012 02:10:19 PM - CMDPHP: Poller[0] WARNING: SNMP GetNext Timeout for Host:'s0026.mydomain.de', and OID:'.1.3.6.1.2.1.1.3.0'
        04/14/2012 02:10:20 PM - CMDPHP: Poller[0] WARNING: SNMP GetNext Timeout for Host:'s0026.mydomain.de', and OID:'.1.3'

    However, when executing snmpget or snmpgetnext with those OIDs from the command line, a proper response is returned immediately.
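
    A hedged way to reproduce exactly what the poller does: query the same OID with the same SNMP version, community, timeout and retries that the device is configured with in Cacti (the flags below are standard net-snmp options; community and version are placeholders):

        snmpgetnext -v 2c -c public -t 20 -r 10 s0026.mydomain.de .1.3.6.1.2.1.1.3.0

    If that never fails while the poller does, the difference is usually in how Cacti queries - e.g. the php-snmp extension versus the external net-snmp binaries - which can be switched in Cacti's poller/path settings and per-device SNMP options.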

    Read the article

  • Remote Scripted Installation of Sun/Oracle JRE

    - by chrisbunney
    I'm attempting to automate the installation of a Debian server (Debian 6.0 squeeze, 64-bit). Part of the installation requires the Sun JRE package, which has a licence agreement that must be accepted. I have a script which uses the following lines to accept the licence and install the JRE:

        echo "sun-java6-bin shared/accepted-sun-dlj-v1-1 boolean true" | debconf-set-selections
        apt-get install -y sun-java6-jre

    This works fine when executing the script locally. However, I need to execute the script remotely using the ssh command, e.g.:

        ssh -i keyFile root@hostname './myScript'

    This doesn't work. In particular, it fails on apt-get install -y sun-java6-jre. It would seem that, in spite of me setting the licence agreement to accepted, when run remotely in this manner it is ignored. Despite setting the value to true, I still get prompted to manually accept the agreement when I run this command:

        ssh -i keyFile root@hostname 'apt-get install -y sun-java6-jre'

    I suspect it is something to do with environment that is taken care of when running a proper terminal session, but I have no idea what to try next to fix it. So, what do I have to do to get this command (and hence my deployment script) to run correctly when executing it remotely? Or is there an alternative way that allows me to install the JRE remotely by another means?

    Edit 0: I have compared the output of env when executed remotely via ssh and when executed via a local terminal session. The only difference between the outputs is that the local terminal session has the additional value TERM=xterm.
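
    A hedged variant worth trying: seed the debconf answer for every package that asks the DLJ question, and force the noninteractive frontend for the remote run so debconf cannot fall back to a prompt on the TERM-less session:

        ssh -i keyFile root@hostname '
          export DEBIAN_FRONTEND=noninteractive
          echo "sun-java6-jre shared/accepted-sun-dlj-v1-1 boolean true" | debconf-set-selections
          echo "sun-java6-bin shared/accepted-sun-dlj-v1-1 boolean true" | debconf-set-selections
          apt-get install -y sun-java6-jre
        '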

    Read the article

  • How to remotely open gedit with SFTP URL in Gnome through SSH?

    - by Álvaro Justen
    My setup is weird and I can't change it now. I have two machines. local-machine: my desktop, running Ubuntu with GNOME. remote-machine: a virtual machine, also running Ubuntu but without X. On both machines I have my private and public SSH keys. I need to SSH from remote-machine to local-machine and run gedit (on local-machine, under the default $DISPLAY), but opening a file on remote-machine through SFTP. Something like this:

        myuser@remote-machine:~$ ssh local-machine "DISPLAY=:0.0 gedit sftp://remote-machine/some/file"

    The command above doesn't work. gedit shows this message:

        Could not open the file sftp://remote-machine/some/file.
        gedit cannot handle sftp: locations.

    Note that: /some/file exists on remote-machine; I can SSH normally from remote-machine to local-machine using my SSH key without any problems; and I can run the command DISPLAY=:0.0 gedit sftp://remote-machine/some/file in a terminal on local-machine, and gedit opens the file on remote-machine without any problems - but the terminal in which I executed the command is running on DISPLAY :0 (really, it's gnome-terminal). I also tried the -t option of the SSH client (to force pseudo-tty allocation), but it didn't work. If I try to run DISPLAY=:0.0 gedit sftp://remote-machine/some/file on local-machine but under a tty (for example tty1, by pressing <Ctrl>+<Alt>+<F1>), it doesn't work either - I get the same error as when running from remote-machine.

    I found that if I pass the environment variable DBUS_SESSION_BUS_ADDRESS with a correct value, it works! So, if I do something like this:

        myuser@local-machine:~$ env | grep DBUS_SESSION_BUS_ADDRESS > env.txt
        myuser@local-machine:~$ scp env.txt remote-machine:

    and then:

        myuser@remote-machine:~$ ssh local-machine "DISPLAY=:0.0 $(cat env.txt) gedit sftp://remote-machine/some/file"

    it works! The problem is that I'm not at local-machine, so I can't get the correct value for this env variable. Is there any other way to make this work?
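
    A hedged sketch that automates the trick already discovered: read DBUS_SESSION_BUS_ADDRESS out of a process in the running GNOME session (gnome-session here, but any process owned by the desktop session works), instead of copying env.txt around:

        ssh local-machine '
          pid=$(pgrep -u "$USER" gnome-session | head -n1)
          export $(tr "\0" "\n" < /proc/$pid/environ | grep ^DBUS_SESSION_BUS_ADDRESS)
          DISPLAY=:0.0 gedit sftp://remote-machine/some/file
        '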

    Read the article

  • Synergy setup broke on upgrade

    - by CoatedMoose
    I had a Synergy setup working fine with version 1.3.7. However, I got a new computer and decided to set it up as well. Because the setup I was working with was Ubuntu (server - dual monitors) and a Mac (client), and the new computer (replacing the Mac) was Windows, I ended up updating everything to 1.4.10.

         ______ ______ ______
        | mac  | ubu1 | ubu2 |
        |______|______|______|

    The problem is that dragging off the left of ubu1 causes the cursor on the Mac to flicker briefly, and then the cursor shows up at the bottom-right corner of ubu2. Here is my .synergy.conf:

        section: screens
            Andrews-Mac-Mini:
                ctrl = ctrl
                alt = meta
                super = alt
            Andrew-Ubuntu:
        end

        section: links
            Andrew-Ubuntu:
                left = Andrews-Mac-Mini
            Andrews-Mac-Mini:
                right = Andrew-Ubuntu
        end

    And the output from synergys -f:

        NOTE: client "Andrews-Mac-Mini" has connected
        INFO: switch from "Andrew-Ubuntu" to "Andrews-Mac-Mini" at 1679,451
        INFO: leaving screen
        INFO: screen "Andrew-Ubuntu" updated clipboard 0
        INFO: screen "Andrew-Ubuntu" updated clipboard 1
        INFO: switch from "Andrews-Mac-Mini" to "Andrew-Ubuntu" at 2398,833
        INFO: entering screen

    Read the article
