Search Results

Search found 3923 results on 157 pages for 'ldn tech exec'.

  • Windows 8 Doesn't Shut Down Properly With Fast Start-Up Enabled

    - by Patrick
    While Fast start-up is enabled, turning the computer off (shutdown) makes it idle for about 5 minutes after logging out and the screen turning off; only then does it power down. On returning to Windows I receive an error message saying Windows didn't shut down properly. Hibernate works fine whether Fast start-up is enabled or disabled, as do restart and sleep; I am told this shouldn't be the case, since if one doesn't work, neither should the other. Windows is installed under UEFI. The UEFI ultra-fast boot option for my motherboard cannot be enabled because my GPU doesn't support the required UEFI GOP feature; as far as I know that is unrelated to Windows Fast start-up, but it seemed worth mentioning. To clarify: if Fast start-up (http://www.eightforums.com/tutorials/6320-fast-startup-turn-off-windows-8-a.html) is enabled, the computer does not shut down properly.

    EDIT: Some more information on the matter:

        Formatting didn't fix the issue; it still fails regardless of which drivers are installed.
        The hardware was purchased about six months ago, and the system runs on a good SSD.
        Event Viewer always shows these two messages in close succession:
            Error (event ID 6008): The previous system shutdown at 7:45:21 PM on 27/10/2012 was unexpected.
            Critical (Kernel-Power, event ID 41): The system has rebooted without cleanly shutting down first. This error could be caused if the system stopped responding, crashed, or lost power unexpectedly.
        Upon installing WPT as suggested below to figure out what was happening during shutdown, and running
            xbootmgr -trace shutdown -noPrepReboot -traceFlags BASE+CSWITCH+DRIVERS+POWER -resultPath C:\TEMP
        Windows Fast start-up is now working consistently, and it still works after uninstalling WPT. This is the only change to occur on the computer; nothing else has been installed or uninstalled, no Windows Updates, nothing. Fast start-up did not work prior to installing WPT and running that command (I made sure I tested).
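    For anyone who wants to flip Fast start-up from the command line while testing, here is a minimal sketch from an elevated prompt. To my knowledge HiberbootEnabled is the registry value behind the Control Panel checkbox on Windows 8, but treat the path as an assumption to verify on your own build:

        :: query whether Fast start-up (hiberboot) is enabled (0x1) or disabled (0x0)
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Power" /v HiberbootEnabled

        :: disable Fast start-up without touching hibernation itself
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Power" /v HiberbootEnabled /t REG_DWORD /d 0 /f

        :: disabling hibernation entirely also disables Fast start-up as a side effect
        powercfg /hibernate off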

  • fcgiwrap listening to a unix socket file: how to change file permissions

    - by user36520
    I have a web server (nginx) and a CGI application (gitweb) that is run with fcgiwrap to enable FastCGI access to it. I want the FastCGI protocol to take place over a unix socket file. To start the fcgiwrap daemon, I run:

        setuidgid git fcgiwrap -s "unix:$PWD/fastcgi.sock"

    (this is a daemontools daemon). The problem is that my web server runs as the user www-data, not the user git, and fcgiwrap creates the socket fastcgi.sock owned by user git, group git, and read-only for everyone else. Thus nginx, running as www-data, can't access the socket. Apparently fcgiwrap is not able to set the permissions of unix socket files, which is quite annoying. Moreover, if I manage to make the socket file exist before I run fcgiwrap (which is quite difficult, given that I did not find any shell command to create a socket file), it quits with the following error:

        Failed to bind: Address already in use

    The only solution I found is to start the server the following way:

        rm -f fastcgi.sock  # ensure that the socket doesn't already exist
        (sleep 5; chgrp www-data fastcgi.sock; chmod g+w fastcgi.sock) &
        exec setuidgid git fcgiwrap -s "unix:$PWD/fastcgi.sock"

    which is far from the most elegant solution. Can you think of anything better? Thanks.
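    One common workaround is to let a spawner create the socket with the right ownership and mode before handing it to fcgiwrap. A minimal sketch using spawn-fcgi, assuming it is installed and that the paths below (which are hypothetical) match your layout:

        # spawn-fcgi creates the socket itself, so it can set owner/group/mode:
        #   -U/-G  owner and group of the socket file
        #   -M     socket mode (0660 lets group www-data connect)
        #   -u/-g  user and group the wrapped process runs as
        spawn-fcgi -s /var/run/gitweb/fastcgi.sock -M 0660 -U git -G www-data \
                   -u git -g git -- /usr/sbin/fcgiwrap

    This avoids the sleep-and-chmod race entirely, since the socket never exists with the wrong permissions.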

  • Optimizing Disk I/O & RAID on Windows SQL Server 2005

    - by David
    I've been monitoring our SQL server for a while, and have noticed via Task Manager and Perfmon that I/O hits 100% every so often. I have normally been able to correlate these spikes with SUSPENDED processes in SQL Server Management Studio when I execute exec sp_who2. The RAID controller is managed by LSI MegaRAID Storage Manager. We have the following setup:

        System drive (Windows) on RAID 1 with two 280GB drives
        SQL on RAID 10 (two mirrored pairs of 280GB drives in two spans)

    This is a database that is hammered during the day but is pretty inactive at night. The DB size is currently about 13GB, and it is used by approximately 200 (and growing) users a day. I have a couple of ideas I'm toying with:

        1. Checking indexes and reindexing some tables.
        2. Adding an additional RAID 1 (with two new, smaller drives) and moving SQL's log data file (LDF) onto the new array.

    For #2, my question is this: would we really be increasing disk I/O performance by moving data off the RAID 10 onto a RAID 1? RAID 10 obviously has better performance than RAID 1, and SQL must write to the transaction log before writing to the database. On the flip side, we'd be reducing the amount of data written to the RAID 10, which is where all of the "meat" is, thereby freeing that array up for read requests. Is there any way to find out what our current limiting factor is (the drives vs. the RAID controller)? If the limiting factor is the drives, then maybe adding the additional RAID 1 makes sense; but if the limiting factor is the controller itself, then I think we're approaching this thing wrong. Finally, are we just wasting our time? Should we instead be focusing our efforts on #1 (reindexing tables, reducing network latency where possible, etc.)?
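    For separating "drives vs. controller", latency counters are usually more telling than % utilization. A hedged sketch using the built-in typeperf (counter names assume the default English locale; thresholds are rules of thumb, not gospel):

        :: sample disk latency and queue depth every 5 seconds, 12 times
        typeperf "\PhysicalDisk(*)\Avg. Disk sec/Read" "\PhysicalDisk(*)\Avg. Disk sec/Write" "\PhysicalDisk(*)\Current Disk Queue Length" -si 5 -sc 12

    Sustained read/write latencies well above ~20ms during the spikes point at the spindles; consistently low latency alongside 100% utilization suggests the bottleneck is elsewhere (controller, or the workload itself).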

  • Dell 2970 - HP 1/8 G2 autoloader keeps falling off LSI 2032 SCSI chain

    - by middaparka
    I've a somewhat irritating problem with a Dell 2970 that has an HP 1/8 G2 autoloader (the Ultrium LTO-2 model) attached to the Dell/LSI 2032 non-RAID SCSI card. In essence, sometimes the autoloader/drive completely fails to appear on the SCSI chain (i.e. neither a media changer nor a tape drive is present in Device Manager), and sometimes it appears but then subsequently disappears at a seemingly random (yet always inconvenient) time, resulting in backup failures.

    On most occasions there are simply no errors logged in the system event log, but I did manage to capture a series of LSI_SCSI event ID 11 errors ("The driver detected a controller error on \Device\RaidPort0") followed by an event ID 129 ("Reset to device, \Device\RaidPort0, was issued") during testing. I've tried two different cables, both with the same effect: sometimes the autoloader appears (for a while), sometimes it's completely absent. There's only one terminator I've been able to try, but as I've since tested the autoloader successfully on multiple occasions (albeit via an Adaptec U160 card on a different machine), my gut feeling is that the issue lies with neither the terminator nor the autoloader itself.

    As such, I'm just wondering if anyone has any ideas? It's most likely not relevant, but this is all under Windows SBS 2008, running Backup Exec 12.5 SBS edition (the Dell version), both fully patched. Additionally, the autoloader is running the latest firmware. It's been a while since I've dealt with anything SCSI, so all suggestions will be gratefully received.

  • Unable to specify parameters to cvlc in a script

    - by VxJasonxV
    I'm creating a script that issues a few curl commands to access a time-protected mms stream link, then sets up a relay using cvlc (VLC's command-line interface) for my own use in an unencumbered player. The curl aspect works: running a browser and curl side by side, I get the same access URL. (It's time-locked, meaning the stream will work forever, but you have to connect quickly or the URL will time out.) The very end of the script prints the command I will run, followed by exec $CMD. When I echo $CMD I get:

        cvlc --sout '#standard{access=http,mux=asf,dst=0.0.0.0:58194}' mms://[...]

    Manually copy/pasting this command in, verbatim, works perfectly fine, but as part of a script, the cvlc execution output says:

        [0x9743d0] main interface error: no suitable interface module
        [0x962120] main libvlc error: interface "globalhotkeys,none" initialization failed
        [0x9743d0] dummy interface: using the dummy interface module...
        [0xb16e30] stream_out_standard stream out error: no mux specified or found by extension
        [0xb16ad0] main stream output error: stream chain failed for `standard{mux="",access="",dst="'#standard{access=http,mux=asf,dst=0.0.0.0:58194}'"}'
        [0xb11cd0] main input error: cannot start stream output instance, aborting
        [0xb11f70] signals interface error: Caught Interrupt signal, exiting...

    Why does --sout behave one way in a script (non-interactive shell?) and another way in the foreground (interactive shell)?
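    The error log itself hints at the cause: the dst= value in the failed chain contains the literal single quotes. Quotes are shell syntax only at parse time; a bare exec $CMD merely word-splits the stored string, so VLC receives '#standard{...}' with the quote characters included and cannot parse the chain. A minimal sketch of the usual fixes (stream details abbreviated):

        # broken: the single quotes inside CMD are passed through literally
        CMD="cvlc --sout '#standard{access=http,mux=asf,dst=0.0.0.0:58194}' mms://..."
        exec $CMD

        # fix 1 (bash): store the words in an array so no re-parsing is needed
        cmd=(cvlc --sout '#standard{access=http,mux=asf,dst=0.0.0.0:58194}' 'mms://...')
        exec "${cmd[@]}"

        # fix 2: have the shell re-parse the stored string
        eval "exec $CMD"

    The interactive/non-interactive distinction is a red herring; pasting the line makes the shell parse the quotes fresh, which is exactly what exec $CMD skips.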

  • How to find process that's using 100% of CPU

    - by Gabriel
    As I'm looking at htop and top I see that my processor usage is always 100%, but I cannot see any process that is using that much CPU: htop shows only one or two processes at around 5% CPU time. Is there a way to find the processes that are using all that CPU time? Here is the output of ps -eo pcpu,pid,user,args | sort -r -k1 | less:

        %CPU   PID  USER  COMMAND
        0.8  20413  root  jsvc.exec -user tomcat -cp ./bootstrap.jar -Djava.endorsed.dirs=../common/endorsed -outfile ../logs/catalina.out -errfile ../logs/catalina.err -verbose org.apache.catalina.startup.Bootstrap -security
        0.3    631  mysql /usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql --user=mysql --pid-file=/var/lib/mysql/mysql.pid --skip-external-locking
        0.2   3380  root  /usr/local/apache/bin/httpd -k restart -DSSL
        0.2  24698  root  tailwatchd
        0.2  22472  root  /usr/local/jdk/bin/java -Djava.util.logging.config.file=/usr/local/jakarta/tomcat/conf/logging.properties -Dfile.encoding=UTF8 -XX:MaxPermSize=128m -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djava.endorsed.dirs=/usr/local/jakarta/tomcat/common/endorsed -classpath /usr/local/jakarta/tomcat/bin/bootstrap.jar -Dcatalina.base=/usr/local/jakarta/tomcat -Dcatalina.home=/usr/local/jakarta/tomcat -Djava.io.tmpdir=/usr/local/jakarta/tomcat/temp org.apache.catalina.startup.Bootstrap start
        0.1  32095  root  cpanellogd - processing bandwidth
        0.0   9733  root  sleep 1m
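    When the per-process numbers don't add up to what top reports, the usual suspects are short-lived processes (spawned and gone between sampling intervals), time spent in the kernel or waiting on I/O, or stolen time on a VM. A hedged sketch of commands that catch these cases (pidstat comes from the sysstat package):

        # per-process CPU sampled every second, ten times; catches spikes ps misses
        pidstat 1 10

        # where is the CPU time going overall? watch the us/sy/wa/st columns
        vmstat 1 5

        # inside top, pressing 1 shows per-core load, which can reveal one pegged core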

  • Tripwire help Required

    - by ramaperumal
    I have created the policy file in Tripwire and added the rules below:

        /opt/jboss/server/gis/conf -> $(SEC_CONFIG) +aipm +c+g+a+i+s+t+u+l+M;
        /usr/local/gtech/eseries/  -> $(SEC_CONFIG) +a+c+g+i+s+t+u+l+M;

    After running the integrity check, the output should report the checked properties: a (access timestamp), c (inode timestamp, create/modify), g (file owner's group ID), i (inode number), s (file size), t (timestamp), u (file owner's user ID), l (file is increasing in size, a "growing file") and M (MD5 hash value). Instead, I am getting the output below:

        [root@xxsi1242 tripwire]# tripwire --check
        Parsing policy file: /etc/tripwire/tw.pol
        *** Processing Unix File System ***
        Performing integrity check...
        Wrote report file: /var/lib/tripwire/report/xxsi1242.gtk.gtech.com-20131106-053812.twr

        Open Source Tripwire(R) 2.4.1 Integrity Check Report

        Report generated by:          root
        Report created on:            Wed 06 Nov 2013 05:38:12 AM EST
        Database last updated on:     Wed 06 Nov 2013 05:31:17 AM EST

        ===============================================================================
        Report Summary:
        ===============================================================================
        Host name:                    xxsi1242.gtk.gtech.com
        Host IP address:              156.24.65.171
        Host ID:                      None
        Policy file used:             /etc/tripwire/tw.pol
        Configuration file used:      /etc/tripwire/tw.cfg
        Database file used:           /var/lib/tripwire/xxsi1242.gtk.gtech.com.twd
        Command line used:            tripwire --check

        ===============================================================================
        Rule Summary:
        ===============================================================================
        Section: Unix File System

        Rule Name                        Severity Level    Added    Removed    Modified
        ---------                        --------------    -----    -------    --------
        Invariant Directories            66                0        0          0
        Temporary directories            33                0        0          0
        * Tripwire Data Files            100               0        0          1
        Tech Stack                       100               0        0          0
        User binaries                    66                0        0          0
        Tripwire Binaries                100               0        0          0
        * CLPS bins                      100               0        0          2
        CLPS Configuration files         100               0        0          0
        ESCommon                         100               0        0          0
        Shell Binaries                   100               0        0          0
        OS executables and libraries     100               0        0          0
        Security Control                 100               0        0          0
        ESCommon Configuration           100               0        0          0
        (/etc/gtech/escommon)

        Total objects scanned:  12358
        Total violations found: 3

        ===============================================================================
        Object Summary:
        ===============================================================================
        Section: Unix File System

        Rule Name: Tripwire Data Files (/etc/tripwire/tw.pol)
        Severity Level: 100
        Modified: "/etc/tripwire/tw.pol"

        Rule Name: CLPS bins (/opt/jboss/server)
        Severity Level: 100
        Modified:
        "/opt/jboss/server/esapps1/data/hypersonic/localDB.lck"
        "/opt/jboss/server/gis/data/hypersonic/localDB.lck"

        ===============================================================================
        Error Report:
        ===============================================================================
        No Errors

        *** End of report ***

    Note: in the output I only get which files were modified. I need the detailed output (which attributes changed on each file), but unfortunately I am not getting what I expected. Please help me to proceed further.
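    Open Source Tripwire keeps the attribute-level detail in the saved .twr report; the console output is just a summary printed at a low report level. A hedged sketch (report levels 0-4 are standard, with 4 the most verbose, showing expected vs. observed values per attribute):

        # re-print the saved report with full per-attribute detail
        twprint --print-report --report-level 4 \
            --twrfile /var/lib/tripwire/report/xxsi1242.gtk.gtech.com-20131106-053812.twr

    If that shows what you need, setting REPORTLEVEL=4 in /etc/tripwire/tw.cfg should, as I understand it, make the verbose form the default for future checks.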

  • Resize Win2003 system+boot partitions to bigger disks & different controller?

    - by ane
    I have an old Win2003 server with one SCSI hard drive partitioned as follows:

        D: boot (includes D:\ntldr, boot.ini, etc.)
        C: system (includes C:\WINDOWS)

    I want to move the whole system to new hardware with bigger drives and different controllers; specifically, C: to a 300GB SAS drive and D: to a 2TB SATA drive. Tried so far:

        1. VMware Converter -> VMware Server -> Diskpart. Result: Diskpart refuses to resize system or boot disks.
        2. VMware Converter -> VMware Server -> GParted. Result: will not boot (see http://serverfault.com/questions/219868/resize-ntfs-system-partitions-with-gparted).
        3. Attaching the original VMware disk to a duplicate VMware install -> Diskpart. Result: will not boot (goes to Directory Services Restore Mode).
        4. Backup Exec System Recovery Server Edition 2010 with Restore Anywhere (tried restoring both to VMware and to the bare system, without VMware). Result: Windows boot error: "Could not read from the selected boot disk. Check boot path." Supposedly this is a boot.ini problem, so I tried bootcfg /rebuild from the Recovery Console; it says it can't find a Windows partition, so it can't rebuild.

    I thought about Ghost, but we're restoring to completely different hardware/controllers, so I doubt it would boot. Reinstalling Windows from scratch is not an option due to critical custom software heavily embedded on the original machine. Has anyone been in a similar situation (with unusual boot/system partitions) and figured out how to resize onto different disks?

  • Why would my wireless cut in and out every minute or so?

    - by Strilanc
    I've been having problems with my wireless. I moved to a new apartment, and the wireless seems incredibly unreliable. Sometimes it will be stable for hours until, all of a sudden, it starts cutting in and out: I'll get 30-90 seconds of normal behavior, then 5-30 seconds of nothing, then repeat. Sometimes the connection will stop working entirely until I power-cycle the router. It is extremely, extremely annoying. Surfing the web isn't too bad, assuming you can stand the random 5-30 second waits, but some connections are sensitive enough to time out, and it certainly makes multiplayer games unplayable. Facts:

        I confirmed the problem using ping google.com -t: normal traffic, interspersed with bursts of "Request timed out."
        I've never had this problem before with this laptop.
        I didn't bring my own router or modem to the apartment; I'm using what the old tenant had.
        Hooking directly to the modem via an ethernet cable results in a stable connection.
        Temporarily cutting power to the router sometimes fixes the problem; sometimes it doesn't.
        I reset the router, but the problem remained.
        Apparently the previous tenant had issues with the internet too, but I don't know what they were specifically.
        The router is a D-Link DIR-615, and their tech support is useless.

  • Is it a good idea to run Redmine using Webrick through Nginx?

    - by Rohit
    The task here is to get Redmine set up for a small (<20 person) team; there may also be a few users who access the setup as business clients. I am familiar with setting up PHP for Apache and, recently, Nginx, but I am not familiar with Ruby or Ruby on Rails. I prefer to use the OS's package manager (Ubuntu Linux LTS) to install the different components, as it takes care of dependencies and updates. I have set up Nginx with PHP-FPM successfully and am struggling with Redmine. As suggested here, I got Redmine running on port 3000:

        # /etc/init/redmine.conf
        # Redmine
        description "Redmine"

        start on runlevel [2345]
        stop on runlevel [!2345]

        expect daemon
        exec ruby /usr/share/redmine/script/server webrick -e production -b 0.0.0.0 -d

    And using the Nginx config on this page, I used Nginx to proxy requests to WEBrick:

        server {
            listen 80;
            server_name myredmine.example.com;

            location / {
                proxy_pass http://127.0.0.1:3000;
            }
        }

    This works well locally. I wanted some opinions before trying it out on the live box (a 256 MB VPS). Further, should I use something like monit to monitor WEBrick for failure?
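    If you do go the monit route, a minimal sketch of a process check follows. The pattern string and the upstart job name are assumptions drawn from the config above; whether your Rails version writes a pid file for WEBrick varies, so matching on the command line and probing the port is the simplest trigger:

        check process redmine matching "script/server webrick"
            start program = "/sbin/start redmine"
            stop program  = "/sbin/stop redmine"
            if failed host 127.0.0.1 port 3000 protocol http then restart
            if 3 restarts within 5 cycles then timeout

    On a 256 MB VPS, the main thing to watch is memory: a single WEBrick worker plus MySQL can already push a box that small into swap.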

  • GitLab post-receive hook not firing

    - by Ben Graham
    Apologies if this isn't the right Stack Exchange. I have a GitLab install. It was installed over the top of a gitolite install that was only a few days old, and I assume this non-standard setup is at the root of my problem, but I cannot pin it down. The problem is straightforward: post-receive hooks are not fired, which prevents 'project activity' from appearing in GitLab. The problem looks like:

        $ git push
        #...
        error: cannot run hooks/post-receive: No such file or directory

    The hook exists. The post-receive hook/symlink is present and executable:

        -rwxr-xr-x 1 git git 470 Oct  3  2012 .gitolite/hooks/common/post-receive
        lrwxrwxrwx 1 git git  45 Oct  3  2012 repositories/project.git/hooks/post-receive -> /home/git/.gitolite/hooks/common/post-receive

    It's executable by GitLab. The gitlab user can execute the script (I removed the /dev/null redirect and fed in blank input to get an 'OK' as output):

        sudo su - gitlab -c /home/git/.gitolite/hooks/common/post-receive
        OK

    GitLab can find it, and is looking for hooks in the correct location:

        $ grep hooks /srv/gitlab/gitlab/config/gitlab.yml
        hooks_path: /home/git/.gitolite/hooks/

    and

        $ bundle exec rake gitlab:app:status RAILS_ENV=production
        # ...
        /home/git/.gitolite/hooks/common/post-receive exists? ............YES

    Environment: the env -i line in the hook is commonly cited as an issue. I think that would occur after this problem, but for completeness, redis-cli is found OK:

        $ env -i redis-cli
        redis>

    I've run out of debugging ideas on this one. Does anybody have any suggestions?
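    One avenue worth checking: "No such file or directory" from a file that plainly exists often means the loader could not find the interpreter named in the script's shebang for the user actually running it, or that the symlink target doesn't resolve in that context. A hedged checklist, run as the user the push executes under:

        # what interpreter does the hook ask for, and is it on that user's PATH?
        head -1 /home/git/.gitolite/hooks/common/post-receive
        # e.g. '#!/usr/bin/env ruby' fails exactly this way if ruby isn't found

        # does the symlink fully resolve?
        readlink -f /home/git/repositories/project.git/hooks/post-receive

        # try executing it exactly where git does
        cd /home/git/repositories/project.git && echo | hooks/post-receive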

  • SSH multi-hop connections with netcat mode proxy

    - by aef
    Since OpenSSH 5.4 there is a new feature called netcat mode, which binds STDIN and STDOUT of the local SSH client to a TCP port accessible through the remote SSH server. The mode is enabled by simply calling

        ssh -W [HOST]:[PORT]

    Theoretically this should be ideal for use in the ProxyCommand setting in per-host SSH configurations, which was previously often used with the nc (netcat) command. ProxyCommand allows you to configure a machine as a proxy between your local machine and the target SSH server, for example if the target SSH server is hidden behind a firewall. The problem now is that instead of working, it throws a cryptic error message in my face:

        Bad packet length 1397966893.
        Disconnecting: Packet corrupt

    Here is an excerpt from my ~/.ssh/config:

        Host *
          Protocol 2
          ControlMaster auto
          ControlPath ~/.ssh/cm_socket/%r@%h:%p
          ControlPersist 4h

        Host proxy-host proxy-host.my-domain.tld
          HostName proxy-host.my-domain.tld
          ForwardAgent yes

        Host target-server target-server.my-domain.tld
          HostName target-server.my-domain.tld
          ProxyCommand ssh -W %h:%p proxy-host
          ForwardAgent yes

    As you can see, I'm using the ControlMaster feature so I don't have to open more than one SSH connection per host. The client machine I tested this with is Ubuntu 11.10 (x86_64); both proxy-host and target-server are Debian Wheezy Beta 3 (x86_64) machines. The error happens when I call ssh target-server. When I call it with the -v flag, here is what I get additionally:

        OpenSSH_5.8p1 Debian-7ubuntu1, OpenSSL 1.0.0e 6 Sep 2011
        debug1: Reading configuration data /home/aef/.ssh/config
        debug1: Applying options for *
        debug1: Applying options for target-server.my-domain.tld
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug1: auto-mux: Trying existing master
        debug1: Control socket "/home/aef/.ssh/cm_socket/[email protected]:22" does not exist
        debug1: Executing proxy command: exec ssh -W target-server.my-domain.tld:22 proxy-host.my-domain.tld
        debug1: identity file /home/aef/.ssh/id_rsa type -1
        debug1: identity file /home/aef/.ssh/id_rsa-cert type -1
        debug1: identity file /home/aef/.ssh/id_dsa type -1
        debug1: identity file /home/aef/.ssh/id_dsa-cert type -1
        debug1: identity file /home/aef/.ssh/id_ecdsa type -1
        debug1: identity file /home/aef/.ssh/id_ecdsa-cert type -1
        debug1: permanently_drop_suid: 1000
        debug1: Remote protocol version 2.0, remote software version OpenSSH_6.0p1 Debian-3
        debug1: match: OpenSSH_6.0p1 Debian-3 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.8p1 Debian-7ubuntu1
        debug1: SSH2_MSG_KEXINIT sent
        Bad packet length 1397966893.
        Disconnecting: Packet corrupt
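    A handy trick with "Bad packet length N" errors is to decode N as ASCII: 1397966893 is 0x5353482D, i.e. the bytes "SSH-", so a second protocol banner is landing in the middle of the already-negotiating stream. That points at the connection-sharing machinery and the -W tunnel getting entangled. A hedged experiment to confirm, disabling multiplexing for the proxy hop only:

        # 1397966893 == 0x5353482d == "SSH-"
        printf '%08x\n' 1397966893

        # in ~/.ssh/config, bypass ControlMaster just for the ProxyCommand hop:
        Host target-server target-server.my-domain.tld
          HostName target-server.my-domain.tld
          ProxyCommand ssh -o ControlMaster=no -o ControlPath=none -W %h:%p proxy-host

    If that makes the error vanish, the Host * multiplexing settings are interfering with the netcat-mode channel.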

  • Permission problem with Git (over SSH) on FreeBSD

    - by vpetersson
    We're having a permission problem with Git on FreeBSD. The setup is fairly straightforward: we have a few different repos on the same server; for simplicity, let's say they reside in /git/repo1 and /git/repo2. Each repo is owned by the user 'git' and a self-titled group (e.g. repo1), and is configured with g+rwX access. Every user who commits to a repository is also a member of its group (e.g. repo1). The Git repositories all have 'sharedRepository = group' set. So far so good: all users can check out the code from the repositories, and the first user can commit without any problem. However, when the next user tries to commit, he receives a permission error. We've been banging our heads against this issue for some time now, and the only way we've managed to resolve it is by running the following script between commits (which is obviously very inconvenient):

        find /git/repo1 -type d -exec chmod g+s {} \;
        chmod -R g+rwX /git/repo1
        chown -R git:repo1 /git/repo1/
        cd /git/repo1
        git gc

    Anyone got a clue as to where the problem lies?
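    A hedged one-time setup per repository, under the assumption that the lingering breakage comes from files created before sharedRepository took effect (git only adjusts permissions on files it creates after the setting is in place) plus a restrictive umask on one of the committers:

        cd /git/repo1
        git config core.sharedRepository group   # make sure it's in the repo's own config
        find . -type d -exec chmod g+ws {} +     # setgid keeps the group on new subdirectories
        chmod -R g+rwX .
        chown -R git:repo1 .

    If it still breaks on the very next push, comparing umask between the first and second committer's login shells is worth a look: a 077 umask can create group-unreadable files in paths git doesn't manage itself.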

  • Unable to PPTP through NAT on Cisco 881

    - by MasterRoot24
    I'm trying to connect to a PPTP server that sits behind a Cisco 881 NAT router. The server is running Ubuntu Server 12.04 with Poptop pptpd as the PPTP daemon listening for connections. As discussed in my other question, I'm trying to set up a Cisco 881 router to replace my old Linksys WAG320N. This same server and WAN connection worked fine with the WAG320N with no special configuration other than allowing 1723 in through the firewall. On the Cisco 881, I'm using the newer ip nat enable (NAT NVI) style to set up static routes in through the firewall for the services running behind the router; my reason is that I can't run another copy of my live DNS domains internally with local IP addresses. For the purposes of this question, though, I have rebuilt the router with ip nat inside/outside style NAT, and the issue is still apparent. HTTP/SMTP/IMAP etc. all work fine from both the WAN and LAN interfaces of the router; I'm only having issues with SIP (see other question) and PPTP. My issue is that GRE doesn't appear to be passing through NAT correctly: one end of the connection is not receiving GRE traffic when it should be, so the server hangs up the connection. Here's an example of /var/log/syslog with debug enabled in /etc/pptpd.conf:

        Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: MGR: Launching /usr/sbin/pptpctrl to handle client
        Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: local address = 192.168.1.50
        Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: remote address = 192.168.1.51
        Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: pppd options file = /etc/ppp/pptpd-options
        Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Client 82.132.248.216 control connection started
        Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Received PPTP Control Message (type: 1)
        Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Made a START CTRL CONN RPLY packet
        Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: I wrote 156 bytes to the client.
        Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Sent packet to client
        Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Received PPTP Control Message (type: 7)
        Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Set parameters to 100000000 maxbps, 64 window size
        Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Made a OUT CALL RPLY packet
        Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Starting call (launching pppd, opening GRE)
        Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: pty_fd = 6
        Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: tty_fd = 7
        Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: I wrote 32 bytes to the client.
        Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Sent packet to client
        Dec 11 21:06:30 <HOSTNAME> pptpd[22627]: CTRL (PPPD Launcher): program binary = /usr/sbin/pppd
        Dec 11 21:06:30 <HOSTNAME> pptpd[22627]: CTRL (PPPD Launcher): local address = 192.168.1.50
        Dec 11 21:06:30 <HOSTNAME> pptpd[22627]: CTRL (PPPD Launcher): remote address = 192.168.1.51
        Dec 11 21:06:30 <HOSTNAME> pppd[22627]: Plugin /usr/lib/pptpd/pptpd-logwtmp.so loaded.
        Dec 11 21:06:30 <HOSTNAME> pppd[22627]: pppd 2.4.5 started by root, uid 0
        Dec 11 21:06:30 <HOSTNAME> pppd[22627]: Using interface ppp0
        Dec 11 21:06:30 <HOSTNAME> pppd[22627]: Connect: ppp0 <--> /dev/pts/3
        Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: GRE: Bad checksum from pppd.
        Dec 11 21:06:31 <HOSTNAME> pptpd[22626]: CTRL: Received PPTP Control Message (type: 15)
        Dec 11 21:06:31 <HOSTNAME> pptpd[22626]: CTRL: Got a SET LINK INFO packet with standard ACCMs
        Dec 11 21:07:00 <HOSTNAME> pppd[22627]: LCP: timeout sending Config-Requests
        Dec 11 21:07:00 <HOSTNAME> pppd[22627]: Connection terminated.
        Dec 11 21:07:00 <HOSTNAME> avahi-daemon[1042]: Withdrawing workstation service for ppp0.
        Dec 11 21:07:00 <HOSTNAME> pppd[22627]: Modem hangup
        Dec 11 21:07:00 <HOSTNAME> pppd[22627]: Exit.
        Dec 11 21:07:00 <HOSTNAME> pptpd[22626]: GRE: read(fd=6,buffer=6075a0,len=8196) from PTY failed: status = -1 error = Input/output error, usually caused by unexpected termination of pppd, check option syntax and pppd logs
        Dec 11 21:07:00 <HOSTNAME> pptpd[22626]: CTRL: PTY read or GRE write failed (pty,gre)=(6,7)
        Dec 11 21:07:00 <HOSTNAME> pptpd[22626]: CTRL: Reaping child PPP[22627]
        Dec 11 21:07:00 <HOSTNAME> pptpd[22626]: CTRL: Client 82.132.248.216 control connection finished
        Dec 11 21:07:00 <HOSTNAME> pptpd[22626]: CTRL: Exiting now
        Dec 11 21:07:00 <HOSTNAME> pptpd[5803]: MGR: Reaped child 22626

    As far as Cisco are concerned, all I need is

        ip nat source static tcp <SERVER LAN IP> 1723 interface FastEthernet4 1723

    but of course this doesn't seem to be helping the GRE traffic through as it should. Trying the connection to the LAN IP of the server from the same LAN (behind the router), the PPTP connection works fine, so I'm confident that the server's config is OK. Furthermore, all I needed on my WAG320N was to open 1723 in the firewall. Here's my current router config:

        ! Last configuration change at 20:20:15 UTC Tue Dec 11 2012 by xxx
        version 15.2
        no service pad
        service timestamps debug datetime msec
        service timestamps log datetime msec
        service password-encryption
        !
        hostname xxx
        !
        boot-start-marker
        boot-end-marker
        !
        enable secret 4 xxxx
        !
        aaa new-model
        !
        aaa authentication login local_auth local
        !
        aaa session-id common
        !
        memory-size iomem 10
        !
        crypto pki trustpoint TP-self-signed-xxx
         enrollment selfsigned
         subject-name cn=IOS-Self-Signed-Certificate-xxx
         revocation-check none
         rsakeypair TP-self-signed-xxx
        !
        crypto pki certificate chain TP-self-signed-xxx
         certificate self-signed 01
          xxx
          quit
        ip gratuitous-arps
        ip auth-proxy max-login-attempts 5
        ip admission max-login-attempts 5
        !
        ip domain list dmz.xxx.local
        ip domain list xxx.local
        ip domain name dmz.xxx.local
        ip name-server 192.168.1.x
        ip cef
        login block-for 3 attempts 3 within 3
        no ipv6 cef
        !
        multilink bundle-name authenticated
        license udi pid CISCO881-SEC-K9 sn xxx
        !
        username admin privilege 15 secret 4 xxx
        username joe secret 4 xxx
        !
        ip ssh time-out 60
        !
        interface FastEthernet0
         no ip address
        !
        interface FastEthernet1
         no ip address
        !
        interface FastEthernet2
         no ip address
        !
        interface FastEthernet3
         switchport access vlan 2
         no ip address
        !
        interface FastEthernet4
         ip address dhcp
         ip nat enable
         duplex auto
         speed auto
        !
        interface Vlan1
         ip address 192.168.1.x 255.255.255.0
         no ip redirects
         no ip unreachables
         no ip proxy-arp
         ip nat enable
        !
        interface Vlan2
         ip address 192.168.0.x 255.255.255.0
        !
        ip forward-protocol nd
        ip http server
        ip http access-class 1
        ip http authentication local
        ip http secure-server
        !
        ip nat source list 1 interface FastEthernet4 overload
        ip nat source list 2 interface FastEthernet4 overload
        ip nat source static tcp 192.168.1.x 1723 interface FastEthernet4 1723
        !
        access-list 1 permit 192.168.0.0 0.0.0.255
        access-list 2 permit 192.168.1.0 0.0.0.255
        !
        control-plane
        !
        banner motd Authorized Access only
        !
        line con 0
         exec-timeout 15 0
         login authentication local_auth
        line aux 0
         exec-timeout 15 0
         login authentication local_auth
        line vty 0 4
         access-class 2 in
         login authentication local_auth
         length 0
         transport input all
        !
        end

    UPDATE 16/12/2012: The only progress I have been able to make is that I'm now confident the issue is caused by the GRE packets (which are required for the PPTP connection to complete) being blocked. When attempting a connection, show ip nat nvi translations shows that both a TCP translation on 1723 and a GRE translation are set up. I can see GRE-related packets on the LAN the server is on, so I'm led to believe the server is sending GRE packets; however, running Wireshark on a client PC during a connection attempt shows absolutely no GRE packets. While there is nothing in my config posted above that I can pinpoint as specifically blocking them, it would appear the GRE packets are not being allowed in/out of the router's firewall, even though a NAT translation entry is set up to the server's LAN address. Can anyone help me ensure GRE packets are not blocked by the router's firewall, so that this can be ruled out as a possible issue?
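    For what it's worth, GRE has no port numbers, so the static TCP mapping for 1723 only covers the control channel; the data channel depends on the router's PPTP application-layer gateway tracking the GRE call IDs to the same inside host. My understanding (an assumption worth verifying against the IOS documentation for your release) is that this ALG works with the classic ip nat inside/outside style rather than the NVI ip nat enable style, so a hedged sketch of the form to test would be:

        interface Vlan1
         ip nat inside
        interface FastEthernet4
         ip nat outside
        ! static 1723 for the control channel; the ALG then follows GRE to the same host
        ip nat inside source static tcp 192.168.1.x 1723 interface FastEthernet4 1723
        ip nat inside source list 2 interface FastEthernet4 overload

    While a client connects, show ip nat translations | include gre should show whether the GRE session is actually being translated.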

  • Dell Dimension 8400 Startup error

    - by Michael
    Hello all, and thanks first for taking the time to read this and possibly help me. I'm a fairly decent computer tech, but not enough for this one. I am having an issue with a computer running Windows XP; as mentioned, it is a Dell Dimension 8400. As soon as I power the system up, the fan goes into hyperdrive (spins like crazy and is very loud), then the Dell startup screen comes up and the loading bar gets stuck at "BIOS Revision A00" and never loads beyond that. I have read a lot about this and think the main problem is that it cannot locate the BIOS file (which does have an updated version, A09 I think). I cannot enter safe mode, the BIOS menu, or anything else. I do have the file on my other computer, and I was wondering whether there is a way to use a USB flash drive to create a bootable MS-DOS disk (as I have read on other sites), but I am failing at creating one. Is this possible, or is there anything else I can do? I tried removing the battery from the system for about 10 minutes while it was completely unplugged, then rebooting and trying to enter the BIOS menu, but the same thing keeps happening. Can anyone help me? :-(

  • Setup Firefox to save .pages as .zip automatically

    - by Mike Dtrick
    What do I want to do? I would like Firefox to save files with the .pages extension as .zip files automatically.

    Scenario: you are browsing through your emails and notice your friend just sent you an email with a file attached (a .pages file in this example). Unfortunately, you have a laptop that runs Windows. Your friend continues to send tons of emails with .pages files attached, and you are tired of manually saving the files as .zip files. Ultimately, you would like Firefox set up so that the download/file manager recognizes the .pages extension and automatically converts it to a .zip file.

    What have I done? I have saved files manually by selecting Save As "All Files" and setting the extension to .zip. I've gone through Firefox and their documentation and have not found anything on how to complete this task.

    Why am I doing this? To save time (only a few seconds; not the main reason), to set up a simple solution that "converts" a file automatically without having to recall the steps for doing it manually (for clients who aren't exactly tech savvy), and so that clients with Windows can access the files. IMPORTANT NOTE: I am not trying to save the web page, but rather an Apple document equivalent to Microsoft Word.

    UPDATE: The really easy method would be to save one file, right-click it, choose Properties, and open all .pages files with WinRAR (or any other program that extracts files from a compressed folder). For the sake of learning, I am going to "neglect" this method and continue to do some research on Firefox add-ons. I would still like Firefox or the download manager to do the bulk of the work in converting the file.

  • Does a mini PCIe SSD fit into a Acer Aspire One?

    - by Narcolapser
    Question: what, if any, mini PCIe SSDs fit into the mini PCIe slot of the Acer Aspire One AOD250?

    Info: I have an Aspire One and I've been considering loading it with an SSD. The mini PCIe drives fascinate me, so I want to try that approach; they also tend to be cheaper and not much slower (at least not on read speed, which matters more for a netbook). But I've heard that computers sometimes don't support certain mini PCIe cards, and I was wondering if anyone knew about the Aspire One. I tried asking Acer tech support, but they didn't know anything and spent the whole time informing me that I would have to support my Ubuntu install on my own, which I was already doing. Anyway, rant aside, I'm looking at this drive: http://www.newegg.com/Product/Product.aspx?Item=N82E16820183252 It states it is exclusively for the Eee PC. Now, does that mean it was designed for the Eee PC but will work in my netbook, or is something going to go wrong? (Right now my main concern is it physically not fitting.) Any information would be appreciated. o7

  • Recommendation on remote access setup for accessing customer systems

    - by gregmac
    I'm looking for a product recommendation (open source or commercial) that will allow remote access to customer sites for tech-support purposes. We need to be able to gain access to help troubleshoot problems on servers. Currently we end up using anything from RDP on a public IP, to whatever VPNs the clients happen to have, to WebEx-type sessions that require lots of interaction from both sides to get working. This often means a problem that could take 10 minutes to solve takes an extra 30+ minutes of messing around trying to get a connection up. There are multiple customer sites, which should NOT have access to each other. At each site there are anywhere from 1 to 8 servers (Windows 2003 or 2008) that need to be accessed. Requirements:

        Support connections to machines even if they're behind a firewall/router with no public IP.
        Be able to selectively allow/deny access from the customer site.
        The customer site should not be able to connect outbound to anywhere else (our systems, or other customer sites).
        Support multiple users from our end.
        If not a VPN connection (where RDP could be used over the top), it should support remote desktop access (including copy/paste) and file transfers.
        Preferably, some way to list all remote systems, showing online/offline status.

    Anyone have any suggestions?

  • Restoring Windows 2008 Server X86 and X64

    - by rihatum
    Restoring Windows 2008 Server (domain controller): we are using Backup Exec System Recovery 2010 to image our DC, and this software has a feature to convert the backup into a VMware or Hyper-V VM. I have also used disk2vhd to convert one of our DCs to a VHD, and when I connected it into Hyper-V it booted fine and I could log in, BUT :-) as soon as I log in, I get the activation error: change product key, this product key isn't good for this machine, etc. The question is: in a real recovery situation, what would the procedure be to restore it, either virtual or onto a physical box, and still be able to log in and change the product key? In this scenario the machine is just locked down and I can't do anything; if this is the case, how would I replicate my production environment via these tools? Any ideas? I would be grateful for some real-world examples here. The same thing happens with our Exchange backup/test restore, whether physical or virtual: I can log in but do nothing else. We don't have the keys, as they are OEM keys, and I'm just wondering what would happen in a real scenario: would we be purchasing another key, or using the OEM key on our new server? This is a test environment I am trying to create by restoring our backups either into Hyper-V or onto physical test machines. Also, if I build up a Server 2008 machine in a Hyper-V VM, how can I restore just the system state backup of my DC into it? Will that give me the activation error too, even though I would be using the trial ISOs provided by Microsoft? Kind regards

  • "Network Error - 53" while trying to mount NFS share in Windows Server 2008 client

    - by Mike B
    CentOS | Windows 2008. I've got a CentOS 5.5 server running nfsd. On the Windows side, I'm running Windows Server 2008 R2 Enterprise with the File Services server role enabled and both Client for NFS and Server for NFS on. I'm able to successfully connect/mount the CentOS NFS share from other Linux systems but am experiencing errors connecting to it from Windows. When I try to connect, I get the following:

        C:\Users\fooadmin>mount -o anon 10.10.10.10:/share/ z:
        Network Error - 53
        Type 'NET HELPMSG 53' for more information.

    (IP and share name have been changed to protect the innocent :-) ) Additional information:

        I've verified low-level network connectivity between the Windows client and the NFS server with telnet (to NFS on TCP/2049), so I know that port is open.
        I've confirmed that inbound and outbound firewall ports are present and enabled.
        I came across a Microsoft tech note that suggested changing the "Provider Order" so "NFS Network" is above other items like Microsoft Windows Network. I changed this and restarted the NFS client; no luck.
        The share folder on the NFS server is readable/writable by all (777).
        I've tried other variations of the mount command, such as mount 10.10.10.10:/share/ z:, mount 10.10.10.10:/share z:, and mount -o anon mtype=hard \\10.10.10.10:/share *; no luck.
        As per the command output, I tried NET HELPMSG 53, but it doesn't tell me much: just "The network path was not found."

    I'm lost on how to proceed with troubleshooting. Any ideas?
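    One thing the telnet test doesn't cover: an NFSv3 mount also needs the portmapper (port 111) and mountd (a dynamic port by default on CentOS 5) in addition to nfsd on 2049, and error 53 is a common symptom when those are filtered. A hedged check from both ends (showmount and rpcinfo ship with Windows Services for NFS; the CentOS port numbers below are conventional choices, not requirements):

        :: from the Windows client -- does the export list come back at all?
        showmount -e 10.10.10.10
        rpcinfo -p 10.10.10.10

        # on the CentOS 5 server -- pin mountd to a fixed port in /etc/sysconfig/nfs
        MOUNTD_PORT=892
        # then restart nfs and allow 111, 892 and 2049 (TCP and UDP) through iptables

    If showmount times out while 2049 answers, the firewall path between portmapper/mountd and the client is the place to look.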

  • SNMP Access on Ubuntu

    - by javano
    I am trying to use SNMP to monitor a machine both locally and remotely. This is the snmpd.conf (Ubuntu 8.04.1):

        #       sec.name   source    comunity
        com2sec readonly   1.2.3.4   nicenandtight
        com2sec readonly   5.6.7.8   reallysafe

        group MyROGroup v1   readonly
        group MyROGroup v2c  readonly
        group MyROGroup usm  readonly

        view all    included  .1
        view system included  .iso.org.dod.internet.mgmt.mib-2.system

        access MyROGroup "" any noauth exact all none none

        syslocation my house
        syscontact me <[email protected]>

        exec .1.3.6.1.4.1.2021.7890.1 distro /usr/bin/distro
        smuxpeer .1.3.6.1.4.1.674.10892.1
        includeAllDisks 95%

    1.2.3.4 is the local machine's IP, and everything is working locally. 5.6.7.8 is the remote machine, and initially I am just trying to touch snmpd with snmpwalk from it:

        snmpwalk -v 2c -c reallysafe 1.2.3.4
        Timeout: No Response from 1.2.3.4

    I have added the following as the very first iptables rule:

        -A INPUT -p udp -m udp --dport 161 -j ACCEPT

    With such a loose rule I can't see why I can't even touch snmpd on that Ubuntu machine. There are more specific rules further down the table, but I added the above because I couldn't connect. tcpdump shows the UDP packets coming in. What could be going wrong here?
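    If tcpdump shows the requests arriving but snmpd never answers, the classic gotcha on Ubuntu of that era is that the daemon is bound to localhost by default via /etc/default/snmpd. A quick check (the SNMPDOPTS line below reflects the stock default as I recall it; verify against your own file):

        # is snmpd listening on all interfaces or only 127.0.0.1?
        netstat -ulnp | grep :161

        # /etc/default/snmpd -- remove the trailing 127.0.0.1 if present:
        SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid 127.0.0.1'
        sudo /etc/init.d/snmpd restart

    Also note that the com2sec source column is matched against the address the request arrives from; if the remote box reaches you through NAT or a different interface, the 5.6.7.8 entry won't match, so a network/prefix form (e.g. 5.6.7.0/24) can be easier while testing.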

  • conflicting info about the running kernel version in FreeBSD

    - by John
    I asked a related question about uname before; now I want to ask from another angle, because the following simple yet obviously conflicting outputs may point to something many people (me included) haven't thought of. I'm running FreeBSD 9 RELEASE. Please see the following commands:

        # sysctl kern.bootfile
        kern.bootfile: /boot/kernel/kernel
        # strings /boot/kernel/kernel | grep RELEASE | grep 9
        @(#)FreeBSD 9.2-RELEASE-p7 #0: Tue Jun  3 11:05:13 UTC 2014
        FreeBSD 9.2-RELEASE-p7 #0: Tue Jun  3 11:05:13 UTC 2014
        9.2-RELEASE-p7

    The kernel file suggests the running kernel is 9.2-RELEASE-p7. But:

        # dmesg
        Copyright (c) 1992-2012 The FreeBSD Project.
        Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
                The Regents of the University of California. All rights reserved.
        FreeBSD is a registered trademark of The FreeBSD Foundation.
        FreeBSD 9.1-RELEASE #0 r243825: Tue Dec  4 09:23:10 UTC 2012
        ...
        # uname -a
        FreeBSD localhost.localdomain 9.1-RELEASE FreeBSD 9.1-RELEASE #0 r243825: Tue Dec  4 09:23:10 UTC 2012 [email protected]:/usr/obj/usr/src/sys/GENERIC amd64

    So dmesg and uname say it's 9.1-RELEASE. I also did an extensive find / -type f -exec grep -l "9.1-RELEASE" {} \; but found no possible kernel file containing 9.1-RELEASE. What could lead to this conflict, and what kernel am I actually running? Please note that I run RELEASE and used freebsd-update for binary updates, so no compiled kernel is involved, and I have rebooted multiple times after freebsd-update. The system is not in a jail or similar; it is the only system on that computer.
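    The version strings uname and dmesg print are baked into the kernel at build time, so the two commands genuinely describe different things: strings inspects the file on disk, while uname reports whatever image was actually booted. A hedged sketch for chasing down where the boot really comes from:

        # any alternate kernels the loader might prefer?
        ls -l /boot                      # kernel.old, GENERIC, ...
        grep -i kernel /boot/loader.conf /boot/loader.conf.local 2>/dev/null

        # is the partition you're inspecting the one you booted from?
        mount
        gpart show

    If kern.bootfile points at the 9.2 file yet uname says 9.1, the usual suspect is booting from a different disk or partition than the filesystem being inspected; checking mount and gpart show against the BIOS boot order should confirm or rule that out.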

  • Massive Memory Leaks?

    - by Mads
    Hi, I seem to have huge memory leaks that are confusing me. I'm running Fusion 3.1 / Windows 7 on Snow Leopard. It's a clean install with all upgrades applied. I've given Fusion 8GB on a 14GB machine and installed VS2008 and Eclipse in Windows 7; nothing unusual. Inside Task Manager in Windows 7, my memory footprint stays reasonable at under 2GB. But in OS X, Activity Monitor shows the footprint of vmware-vmx to be much larger. It starts at 2GB, which seems fine, but whenever I'm actually doing anything in Windows, vmware-vmx's footprint grows at a few MB per second. After 20 minutes or so it's using ~10GB and everything grinds to a halt. Throughout this, Task Manager still says I'm only using 2GB, and whatever I do in Windows seems to increase vmware-vmx's memory footprint; even closing an application seems to make it go up. So is this par for the course in Fusion? I was previously using Parallels 3 / Vista under Leopard, and it worked fine. I'd assumed my new Fusion config would work better, but this makes it completely unusable. (And apparently I can't even ask tech support unless I buy a support package...) Any advice much appreciated. Thanks

  • BackupExec 2012 File System Archiving - Access is denied to Remote Agent

    - by AllisZero
    Gentlemen, I've been struggling with a trial version of Symantec Backup Exec 2012 for about a week now. It was installed as an upgrade to our 12.5 license, and the setup completed with no issues. The reason I upgraded is solely the File System Archiving option, as I'm working to reduce the amount of live data on my servers. Backups work fine, and I have followed the instructions in the admin manual to make sure I meet all the requirements. The account Backup Exec runs under is a member of the local Administrators group, as required, and has been added to the test share that I'm using to evaluate the archiving function. Testing the credentials in the job setup window always succeeds, and I am able to add both regular and Admin$ shares to my archive selection. However, every time I run the archive job, I get the following "access is denied to the remote agent" message: https://dl.dropbox.com/u/59540229/BEXec.png I've already tried troubleshooting DNS resolution issues as suggested in the Symantec KB, to no avail. The only thing I can think of at this point is that a trial license doesn't allow use of the archiving function, although that would seem silly on their part. I'd appreciate any assistance or information. Thanks.

  • Where is '/host' declared for mount in Wubi (Ubuntu 9.10)?

    - by Pedro
    Hi! I'm using Wubi (Ubuntu 9.10), and I couldn't find where the /host mountpoint is declared for mounting. There's no entry in fstab, but it's listed in /proc/mounts and mounted at boot time. Any ideas?

        pedroel@ubuntu:~$ cat /proc/mounts
        rootfs / rootfs rw 0 0
        none /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
        none /proc proc rw,nosuid,nodev,noexec,relatime 0 0
        udev /dev tmpfs rw,relatime,mode=755 0 0
        /dev/sda1 /host fuseblk rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other,blksize=4096 0 0
        /dev/loop0 / ext4 rw,relatime,errors=remount-ro,barrier=1,data=ordered 0 0
        none /sys/kernel/security securityfs rw,relatime 0 0
        none /sys/fs/fuse/connections fusectl rw,relatime 0 0
        none /sys/kernel/debug debugfs rw,relatime 0 0
        none /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
        none /dev/shm tmpfs rw,nosuid,nodev,relatime 0 0
        none /var/run tmpfs rw,nosuid,relatime,mode=755 0 0
        none /var/lock tmpfs rw,nosuid,nodev,noexec,relatime 0 0
        none /lib/init/rw tmpfs rw,nosuid,relatime,mode=755 0 0
        /dev/loop1 /home/pedroel/Downloads ext4 rw,relatime,errors=remount-ro,barrier=1,data=ordered 0 0
        binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,nosuid,nodev,noexec,relatime 0 0
        gvfs-fuse-daemon /home/pedroel/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0
        /dev/mapper/isw_efhafcifi_RAID_Volume01 /media/RAID_D fuseblk rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096 0 0

        pedroel@ubuntu:~$ cat /etc/fstab
        # /etc/fstab: static file system information.
        #
        # Use 'blkid -o value -s UUID' to print the universally unique identifier
        # for a device; this may be used with UUID= as a more robust way to name
        # devices that works even if disks are added and removed. See fstab(5).
        #
        proc                          /proc                   proc  defaults                  0 0
        /host/ubuntu/disks/root.disk  /                       ext4  loop,errors=remount-ro    0 1
        /host/ubuntu/disks/pedro.disk /home/pedroel/Downloads ext4  loop,errors=remount-ro    0 1
        /host/ubuntu/disks/swap.disk  none                    swap  loop,sw                   0 0
        /dev/fd0                      /media/floppy0          auto  rw,user,noauto,exec,utf8  0 0

    Thanks in advance, Pedro
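    Since / itself is a loop file living inside the NTFS partition, /host has to be mounted before fstab is ever read; in Wubi that plumbing lives in the initramfs (the lupin boot scripts), not in /etc/fstab. A hedged way to see it on a stock 9.10 Wubi install (exact script names may differ):

        # the initramfs mounts the host NTFS partition at /host, then loop-mounts root.disk
        grep -r "host" /usr/share/initramfs-tools/scripts/ 2>/dev/null
        dpkg -l | grep lupin      # lupin-support provides the Wubi boot glue
        cat /proc/cmdline         # note the loop=/ubuntu/disks/root.disk argument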
