Search Results

Search found 8790 results on 352 pages for 'known hosts'.


  • Excessive Outbound DNS Traffic

    - by user1318414
    I have a VPS which ran for three years on one host without issue. Recently, the host started sending an extreme amount of outbound DNS traffic to 31.193.132.138. Due to the way Linode responded to this, I have recently left Linode and moved to 6sync. The server was completely rebuilt on 6sync, with the exception of the Postfix mail configuration. The daemons currently running are: sshd, nginx, postfix, dovecot, php5-fpm (localhost only), spampd (localhost only) and clamsmtpd (localhost only). Given that the server was 100% rebuilt, that I can't find any serious exploits against the daemons listed above, that passwords have been changed, that SSH keys don't even exist on the rebuild yet, etc., it seems extremely unlikely that this is a compromise being used to DoS that address. The IP in question is noted online as a known spam source. My initial assumption was that something was attempting to use my Postfix server as a relay, and that the bogus addresses it provided were domains with that IP registered as their nameserver. Given my Postfix configuration, I would imagine DNS queries (for SPF records and the like) would come in at a rate equal to or greater than the number of attempted spam e-mails. Both Linode and 6sync have said the outbound traffic is extremely disproportionate. The following is all the information I received from Linode regarding the outbound traffic:

        21:28:28.647263 IP 97.107.134.33.32775 > 31.193.132.138.53: 28720 op8+% [b2&3=0x4134] [17267a] [30550q] [28773n] [14673au][|domain]
        21:28:28.647264 IP 97.107.134.33 > 31.193.132.138: udp
        21:28:28.647264 IP 97.107.134.33.32775 > 31.193.132.138.53: 28720 op8+% [b2&3=0x4134] [17267a] [30550q] [28773n] [14673au][|domain]
        21:28:28.647265 IP 97.107.134.33 > 31.193.132.138: udp
        21:28:28.647265 IP 97.107.134.33.32775 > 31.193.132.138.53: 28720 op8+% [b2&3=0x4134] [17267a] [30550q] [28773n] [14673au][|domain]
        21:28:28.647266 IP 97.107.134.33 > 31.193.132.138: udp

    6sync cannot confirm whether the recent spike in outbound traffic was to the same IP or over DNS, but I have presumed as much. For now my server is blocking the entire 31.0.0.0/8 subnet to help deter this while I figure it out. Does anyone have any idea what is going on?
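    A narrower stopgap than null-routing all of 31.0.0.0/8 is to capture the suspect traffic for inspection and drop only the offending destination. This is a sketch assuming tcpdump and iptables are available and the public interface is eth0:

        # capture 50 of the outbound queries with full payload for offline inspection
        tcpdump -n -i eth0 -s 0 -c 50 -w dns-out.pcap 'dst host 31.193.132.138 and udp port 53'

        # drop traffic to the single destination instead of the whole /8
        iptables -A OUTPUT -d 31.193.132.138 -j DROP

    Reading the captured payloads (tcpdump -r dns-out.pcap -X) should reveal whether these are real DNS queries at all; the mangled flags in Linode's excerpt suggest they may be arbitrary UDP aimed at port 53.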


  • Out of nowhere, ssh_exchange_identification: Connection closed by remote host (me too)

    - by dgerman
    See the similar question "Out of nowhere, ssh_exchange_identification: Connection closed by remote host". Today (6/19/12), attempting to ssh to the same host as usual, ssh replied:

        ssh_exchange_identification: Connection closed by remote host

    Two additional attempts failed the same way. ssh -v $RWS gave:

        OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011
        debug1: Reading configuration data /etc/ssh_config
        debug1: Applying options for *
        debug1: Connecting to Real-World-Systems.com [174.127.119.33] port 22.
        debug1: Connection established.
        debug1: identity file /Users/dgerman/.ssh/id_rsa type 1
        debug1: identity file /Users/dgerman/.ssh/id_rsa-cert type -1
        debug1: identity file /Users/dgerman/.ssh/id_dsa type -1
        debug1: identity file /Users/dgerman/.ssh/id_dsa-cert type -1
        ssh_exchange_identification: Connection closed by remote host

    Pinging the host was successful, ftp to the host was successful, and ssh is now successful. ssh -v $RWS now gives:

        OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011
        debug1: Reading configuration data /etc/ssh_config
        debug1: Applying options for *
        debug1: Connecting to Real-World-Systems.com [174.127.119.33] port 22.
        debug1: Connection established.
        debug1: identity file /Users/dgerman/.ssh/id_rsa type 1
        debug1: identity file /Users/dgerman/.ssh/id_rsa-cert type -1
        debug1: identity file /Users/dgerman/.ssh/id_dsa type -1
        debug1: identity file /Users/dgerman/.ssh/id_dsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_4.3
        debug1: match: OpenSSH_4.3 pat OpenSSH_4*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.6
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host 'real-world-systems.com' is known and matches the RSA host key.
        debug1: Found key in /Users/dgerman/.ssh/known_hosts:5
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: /Users/dgerman/.ssh/id_rsa
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        debug1: Trying private key: /Users/dgerman/.ssh/id_dsa
        debug1: Next authentication method: password

    What gives?? Environment: Mac OS X 10.4.7, OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011.

        /Users/dgerman/.ssh > ls -la
        total 24
        drwx------    7 dgerman staff   238 Jun 19 15:46 .
        drwxr-xr-x  389 dgerman staff 13226 Jun 19 15:46 ..
        -rw-------    1 dgerman staff  1766 Feb 26 18:25 id_rsa
        -rw-r--r--    1 dgerman staff   400 Feb 26 18:25 id_rsa.pub
        -rw-r--r--    1 dgerman staff    67 Feb 26 18:27 keyfingerprint
        -rw-r--r--    1 dgerman staff  6215 May  1 08:11 known_hosts
        -rw-r--r--    1 dgerman staff   220 Feb 26 18:26 randomart
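    Since the connection died before the SSH version exchange (there is no "Remote protocol version" line in the failing transcript), the usual suspects are on the server side rather than in the client keys. A few things worth checking there, sketched for a typical Linux host (log paths vary by distro):

        # sshd messages around the failure time
        grep sshd /var/log/auth.log | tail -50     # or /var/log/secure on Red Hat-style systems

        # TCP wrappers can drop connections before the banner is sent
        grep -i sshd /etc/hosts.deny /etc/hosts.allow

        # MaxStartups caps concurrent unauthenticated connections and drops the excess
        grep -i maxstartups /etc/ssh/sshd_config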


  • File/printer sharing issues on network with multiple OSes

    - by DanZ
    My workplace consists of computers running a variety of operating systems, and I have been running into problems getting some of them to connect to a shared drive and printer over the network. Here is a brief description of the computers involved and the issues I have encountered:

    1: Dell desktop, Windows Vista Business -- This is the computer I want the others to connect to. It has a USB printer and an eSATA hard drive enclosure that I have set up for sharing, with different accounts for the various users.

    2: Fujitsu laptop, Windows XP Tablet edition -- No problems. Can connect to both the shared printer and hard drive.

    3: Lenovo laptop, Windows Vista Business 64-bit -- No problems. Can connect to both the shared printer and drive.

    4: Apple MacBook, OS 10.4 -- Can connect to the shared drive, but not to the shared printer. I am aware that the printer issue is due to a known Samba incompatibility between Vista and OS 10.4 and earlier. It is not a big problem, however, as this computer can use a network printer.

    5: Sony laptop, Windows Vista Home Premium -- Can connect to the shared printer, but not the shared drive. It can see computer 1 and its shared drive on the network, and appears to log in to user accounts successfully. However, any attempt to access the shared drive is refused with a message that you do not have permission. I have tried both standard and administrator accounts, and none can access the drive from this computer.

    6: MacBook Pro, OS 10.5 (there are two of these) -- Can connect to the shared printer, but not the shared drive. They can't see computer 1 on the network. For that matter, they also can't see each other or the older Mac, but they can see and access shared folders on the XP machine (computer 2) and can see other PCs in the building. I was able to add the shared printer manually by typing in its network location, but was unable to add the shared drive the same way.

    So, what I am looking for is suggestions on how to get computers 5 and 6 to connect to the shared drive. Since they can already connect to the shared printer (which is on the same computer as the shared drive), it seems reasonable that they should be able to access the drive as well. (A possible authentication workaround is sketched below.)
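    For what it's worth, one commonly suggested workaround for both symptoms is the LAN Manager authentication level on the Vista host: Vista defaults to requiring NTLMv2, which older Samba and OS X clients may not negotiate cleanly. The registry change below is a sketch under the assumption that NTLMv2 negotiation is the culprit here; back up the registry first, and note that it weakens authentication on computer 1:

        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel /t REG_DWORD /d 1 /f

    Rebooting computer 1 after the change is the safest way to make sure it takes effect.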


  • Real-time offline folder-to-folder backup application needed (Windows)

    - by niktech
    I recently started using the Intel Matrix Storage RAID solution, which allowed me to use my five 1 TB drives for two RAID volumes: a 1 TB RAID 0 striped across all five drives, and a RAID 5 across the rest of the free space on all drives (around 2.85 TB usable). The RAID 0 volume holds the OS, applications and games, while the RAID 5 volume is more permanent storage (photos, etc.). Now, I do realize that running the OS and applications on a RAID 0 across five drives is very dangerous, which is what brings up the following question: is there a reliable freeware real-time backup application that can back up a set of folders from one drive to another (no online backup needed)? I've already tried a few (Mozy, Yadis, Comodo Backup, GFI Backup, Idoo, CrashPlan) but none meet my requirements:

    - Low CPU and RAM usage.
    - Real-time backups: as soon as a file is modified in the source folder, it is added to a backup queue that is processed at the lowest priority when the CPU is idle. This queue should persist across computer restarts (i.e. the source and destination folders should always contain the same set of files, except for those still waiting in the queue).
    - Incremental backups: if only 10 bytes changed in a 1 GB file, the app should only copy those 10 bytes.
    - Ability to back up locked and open files (some apps, like Yadis, can't back up critical files like browser favorites).
    - Ability to run as a service (no user needs to log in for the app to start).

    Optional requirements:

    - Compression of the destination into a well-known format (RAR, ZIP) that can be read directly without the application.
    - Preset source folders (such as Browser Favorites, Game Saves, Application Settings, etc.).

    The idea is to use the RAID 0 array as "semi-persistent, RAM-like" storage which, in case of a failure, can be quickly rebuilt by reinstalling the OS, apps and games and copying the settings, saves and favorites back from the RAID 5. I'm also thinking of taking this RAID-0-as-RAM idea to the extreme with SSDs (as soon as we get some decent 6 Gb/s SATA III SSDs out there), where a couple of SSDs chained in RAID 0 would work as yet another semi-persistent cache layer sitting between the RAM and the HD. I'm hoping an application that satisfies these requirements already exists... otherwise I'll have to write one myself, which I would prefer not to do. (A partial stopgap using robocopy's monitor mode is sketched below.)
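    None of this meets the byte-level incremental requirement, but as a stopgap, robocopy (built into Vista/7, or from the Resource Kit on older Windows) has a monitor mode that approximates real-time mirroring and runs happily under the Task Scheduler as SYSTEM, so no user has to log in. A sketch, with paths as placeholders:

        robocopy C:\Data D:\Backup /MIR /MON:1 /MOT:1 /R:2 /W:5 /LOG+:C:\Logs\mirror.log

    /MON:1 /MOT:1 reruns the copy whenever at least one change has been seen and at least one minute has passed; /MIR mirrors deletions too, so point it only at folders where that is what you want.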


  • Lustre - issues with simple setup

    - by ethrbunny
    Issue: I'm trying to assess the (possible) use of Lustre for our group. To this end I've been trying to create a simple system to explore the nuances. I can't seem to get past the llmount.sh test with any degree of success.

    What I've done: each system (a throwaway PC with a 70 GB HD and 2 GB RAM) is formatted with CentOS 6.2. I then update everything, install the Lustre kernel from downloads.whamcloud.com, and add the various (appropriate) lustre and e2fs RPM files. Systems are rebooted and tested with llmount.sh (and then cleared with llmountcleanup.sh). All is well to this point.

    First I create an MDS/MDT system via:

        /usr/sbin/mkfs.lustre --mgs --mdt --fsname=lustre --device-size=200000 --param sys.timeout=20 --mountfsoptions=errors=remount-ro,user_xattr,acl --param lov.stripesize=1048576 --param lov.stripecount=0 --param mdt.identity_upcall=/usr/sbin/l_getidentity --backfstype ldiskfs --reformat /tmp/lustre-mdt1

    and then:

        mkdir -p /mnt/mds1
        mount -t lustre -o loop,user_xattr,acl /tmp/lustre-mdt1 /mnt/mds1

    Next I take three systems and create a 2 GB loop mount via:

        /usr/sbin/mkfs.lustre --ost --fsname=lustre --device-size=200000 --param sys.timeout=20 --mgsnode=lustre_MDS0@tcp --backfstype ldiskfs --reformat /tmp/lustre-ost1
        mkdir -p /mnt/ost1
        mount -t lustre -o loop /tmp/lustre-ost1 /mnt/ost1

    The logs on the MDT box show the OSS boxes connecting up. All appears OK. Last, I create a client and attach to the MDT box:

        mkdir -p /mnt/lustre
        mount -t lustre -o user_xattr,acl,flock luster_MDS0@tcp:/lustre /mnt/lustre

    Again, the log on the MDT box shows the client connection. It appears to be successful.

    Here's where the issues (appear to) start. If I do a df -h on the client, it hangs after showing the system drives. If I attempt to create files (via dd) on the Lustre mount, the session hangs and the job can't be killed; rebooting the client is the only solution. If I do an lctl dl from the client, it shows that only two of the three OST boxes are found and 'UP':

        [root@lfsclient0 etc]# lctl dl
        0 UP mgc MGC10.127.24.42@tcp 282d249f-fcb2-b90f-8c4e-2f1415485410 5
        1 UP lov lustre-clilov-ffff880037e4d400 00fc176e-3156-0490-44e1-da911be9f9df 4
        2 UP lmv lustre-clilmv-ffff880037e4d400 00fc176e-3156-0490-44e1-da911be9f9df 4
        3 UP mdc lustre-MDT0000-mdc-ffff880037e4d400 00fc176e-3156-0490-44e1-da911be9f9df 5
        4 UP osc lustre-OST0000-osc-ffff880037e4d400 00fc176e-3156-0490-44e1-da911be9f9df 5
        5 UP osc lustre-OST0003-osc-ffff880037e4d400 00fc176e-3156-0490-44e1-da911be9f9df 5

    Doing an lfs df from the client shows:

        [root@lfsclient0 etc]# lfs df
        UUID                 1K-blocks   Used  Available Use% Mounted on
        lustre-MDT0000_UUID     149944  16900     123044  12% /mnt/lustre[MDT:0]
        OST0000              : inactive device
        OST0001              : Resource temporarily unavailable
        OST0002              : Resource temporarily unavailable
        lustre-OST0003_UUID     187464  24764     152636  14% /mnt/lustre[OST:3]
        filesystem summary:     187464  24764     152636  14% /mnt/lustre

    Given that each OSS box has a 2 GB (loop) mount, I would expect to see this reflected in the available size. There are no errors on the MDS/MDT box to indicate that multiple OSS/OST boxes have been lost.

    EDIT: each system has all other systems defined in /etc/hosts, and iptables entries to provide access.

    SO: I'm clearly making several mistakes. Any pointers as to where to start correcting them?
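    Two things stand out as worth ruling out first. The client mount command says luster_MDS0 (note the spelling) while the OSTs registered with --mgsnode=lustre_MDS0@tcp, so make sure the MGS node name is consistent everywhere. After that, verify LNET connectivity from the client to every OSS, since missing OSTs are usually a networking/NID problem rather than a format problem. A sketch (the NID shown is taken from the lctl dl output above; substitute each OSS's own NID):

        # on each node: which NIDs does LNET think this host owns?
        lctl list_nids

        # from the client: can LNET reach each OSS directly?
        lctl ping 10.127.24.42@tcp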


  • DPM 2010 PowerShell Script to Easily Restore Multiple Files

    - by bmccleary
    I’ve got what I thought would be a simple task with Data Protection Manager 2010 that is turning out to be quite frustrating. I have a file server on one server, and it is the only server in a protection group. This file server is the repository for a document management application which stores the files according to data within a SQL database. Sometimes users inadvertently delete files from within our application and we need to restore them. We have all the information needed to restore a file: the file name, the folder the file was stored in, and the exact date the file was deleted. Restoring the file from within the DPM console is easy, since we have a recovery point created every day: I simply go to the day before the delete, browse to the proper folder and restore the file. The problem is that the DPM console's cumbersome wizard requires about 20 mouse clicks to restore a single file, and it takes 2-4 minutes to get through all the windows. This becomes very irritating when a client needs hundreds of files restored... it takes all day of redundant mouse clicks. Therefore, I want to use a PowerShell script (and I'm a novice at PowerShell) to automate this process. I want a script that I can pass a file name, a folder, a recovery point date (and a protection group/server name if needed), and simply have the file restored back to its original location with some sort of success/failure notification. I thought this was a simple, basic task for a backup solution, but I am having a heck of a time finding the right code. I have seen the sample code at http://social.technet.microsoft.com/wiki/contents/articles/how-to-use-a-windows-powershell-script-to-recover-an-item-in-data-protection-manager.aspx and have tried to follow it, but it doesn't accomplish what I really want to do (it's too simplistic) and there are errors in the sample code. Therefore, I would like some help writing a script to restore these files. An example of the known values needed to restore the data:

        DPM Server:        BACKUP01
        Protection Group:  Document Repository
        Protected Server:  FILER01
        File Path:         R:\DocumentRepository\ToBackup\ClientName\Repository\2010\07\24\filename.pdf
        Date Deleted:      8/2/2010 (last recovery point = 8/1/2010)

    Bonus points: if you can help me not only create this script, but also show me how to automate it by providing a text file that the script loops through -- or, even better, have it query our SQL server for the needed data -- then I would be more than willing to pay for this development. (A heavily hedged starting sketch follows.)
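    As a starting point only: the sketch below strings together the DPM 2010 Management Shell cmdlets as I recall them (Get-ProtectionGroup, Get-Datasource, New-SearchOption, Get-RecoverableItem, New-RecoveryOption, Recover-RecoverableItem). Parameter names and enum values should be verified with Get-Help on BACKUP01 before trusting it, and the CSV layout is an invented convention:

        # restore-files.ps1 -- run in the DPM Management Shell on BACKUP01
        # C:\restore-list.csv columns: FilePath,RecoveryDate (the day of the recovery point to use)
        param([string]$InputCsv = "C:\restore-list.csv")

        $pg = Get-ProtectionGroup -DPMServerName "BACKUP01" |
              Where-Object { $_.FriendlyName -eq "Document Repository" }
        $ds = Get-Datasource -ProtectionGroup $pg |
              Where-Object { $_.Computer -eq "FILER01" }

        foreach ($row in Import-Csv $InputCsv) {
            $rp = Get-RecoveryPoint -Datasource $ds |
                  Where-Object { $_.RepresentedPointInTime.Date -eq ([datetime]$row.RecoveryDate).Date } |
                  Select-Object -First 1

            $so = New-SearchOption -FromRecoveryPoint $rp.RepresentedPointInTime `
                                   -ToRecoveryPoint   $rp.RepresentedPointInTime `
                                   -SearchDetail FilesFolders -SearchType contains `
                                   -Location (Split-Path $row.FilePath -Parent) `
                                   -SearchString (Split-Path $row.FilePath -Leaf)

            $item = Get-RecoverableItem -Datasource $ds -SearchOption $so
            $opt  = New-RecoveryOption -TargetServer "FILER01" -RecoveryLocation OriginalServer `
                                       -FileSystem -OverwriteType Overwrite -RecoveryType Restore

            if ($item) { Recover-RecoverableItem -RecoverableItem $item -RecoveryOption $opt }
            else       { Write-Warning "Not found: $($row.FilePath) @ $($row.RecoveryDate)" }
        }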


  • can't access nginx server from IP

    - by EquinoX
    Two days ago I could see the "Welcome to nginx" page, but now when I try to access it, it says "404 page not found"... Why is this? Inside my sites-enabled folder I have a file named default with the following:

        # You may add here your
        # server {
        #     ...
        # }
        # statements for each of your virtual hosts

        server {
            listen 80;
            server_name 127.0.0.1;
            access_log /var/log/nginx/localhost.access.log;

            location / {
                root /var/www/nginx-default;
                index index.html index.htm;
            }

            location /doc {
                root /usr/share;
                autoindex on;
                allow 127.0.0.1;
                deny all;
            }

            location /images {
                root /usr/share;
                autoindex on;
            }

            #error_page 404 /404.html;

            # redirect server error pages to the static page /50x.html
            #error_page 500 502 503 504 /50x.html;
            #location = /50x.html {
            #    root /var/www/nginx-default;
            #}

            # proxy the PHP scripts to Apache listening on 127.0.0.1:80
            #location ~ \.php$ {
            #    proxy_pass http://127.0.0.1;
            #}

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www/nginx-default$fastcgi_script_name;
                include fastcgi_params;
            }

            # deny access to .htaccess files, if Apache's document root
            # concurs with nginx's one
            location ~ /\.ht {
                deny all;
            }
        }

        # another virtual host using mix of IP-, name-, and port-based configuration
        #server {
        #    listen 8000;
        #    listen somename:8080;
        #    server_name somename alias another.alias;
        #    location / {
        #        root html;
        #        index index.html index.htm;
        #    }
        #}

        # HTTPS server
        #server {
        #    listen 443;
        #    server_name localhost;
        #    ssl on;
        #    ssl_certificate cert.pem;
        #    ssl_certificate_key cert.key;
        #    ssl_session_timeout 5m;
        #    ssl_protocols SSLv2 SSLv3 TLSv1;
        #    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
        #    ssl_prefer_server_ciphers on;
        #    location / {
        #        root html;
        #        index index.html index.htm;
        #    }
        #}

    Here's my nginx.conf file:

        user www-data;
        worker_processes 4;

        error_log /var/log/nginx/error.log;
        pid       /var/run/nginx.pid;

        events {
            worker_connections 1024;
            # multi_accept on;
        }

        http {
            include /etc/nginx/mime.types;
            access_log /var/log/nginx/access.log;

            sendfile on;
            #tcp_nopush on;

            #keepalive_timeout 0;
            keepalive_timeout 65;
            tcp_nodelay on;

            gzip on;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";

            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

        # mail {
        #     # See sample authentication script at:
        #     # http://wiki.nginx.org/NginxImapAuthenticateWithApachePhpScript
        #
        #     # auth_http localhost/auth.php;
        #     # pop3_capabilities "TOP" "USER";
        #     # imap_capabilities "IMAP4rev1" "UIDPLUS";
        #
        #     server {
        #         listen localhost:110;
        #         protocol pop3;
        #         proxy on;
        #     }
        #
        #     server {
        #         listen localhost:143;
        #         protocol imap;
        #         proxy on;
        #     }
        # }

    What am I doing wrong here? I have other virtual hosts set up in sites-enabled as well...

    UPDATE: The server_name directives are:

    - admin.api.frapi
    - api.frapi
    - default
    - example.com
    - php.example.com
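    One thing to keep in mind: when a request arrives by bare IP (or any name that matches no server_name), nginx hands it to the default server for that port, which is simply the first server block it loaded unless one is explicitly marked. With admin.api.frapi, api.frapi and the others now in sites-enabled, one of them has probably become the default and is returning the 404. A sketch of an explicit catch-all (default_server needs nginx 0.8.21 or later; older versions spell it default):

        server {
            listen 80 default_server;
            server_name _;

            location / {
                root /var/www/nginx-default;
                index index.html index.htm;
            }
        }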


  • How to disable or tune filesystem cache sharing for OpenVZ?

    - by gertvdijk
    For OpenVZ, an example of container-based virtualization, it seems that the host and all guests share the filesystem cache. This sounds paradoxical when talking about virtualization, but it is actually a feature of OpenVZ, and it makes sense: because only one kernel is running, it can share the same pages of filesystem cache in memory. While that sounds beneficial, I think my setup actually suffers in performance because of it. Here's why: my machines aren't actually sharing any files on disk, so I can't benefit from this feature in OpenVZ. Several OpenVZ machines are running MySQL with MyISAM tables. MyISAM relies on the system's filesystem cache for caching data files, unlike InnoDB's buffer pool. Also, some virtual machines are known to do heavy, large I/O operations on the same filesystem in the host. For example, when running cat *.MYD > /dev/null on a large database in one machine, I watched (in htop) the filesystem cache shrink in another. This essentially flushes all the useful filesystem cache in the guests (FIFO), and with it the MySQL caches. Now users are complaining that MySQL is very slow -- and it is: some simple SELECT queries take several seconds at times when disk I/O is heavily used by other machines.

    So, simply put: is there a way to avoid the filesystem cache being wiped out by other virtual machines in container-based virtualization?

    Some thoughts:

    - Choosing the algorithm for flushing filesystem cache in the kernel (possible? how?).
    - Reserving a certain number of pages for a single VM (there seems to be no option for filesystem-cache pages, reading man vzctl).
    - Will running MySQL on another filesystem get me anywhere?

    If not, I think my alternatives are:

    - Use KVM for the MySQL/MyISAM VMs. KVM actually assigns memory to the VM and does not allow caches to be swapped out unless a balloon driver is used.
    - Move to InnoDB and tune the buffer pools, dirty pages, etc. This is currently considered 'nice to have' for the long term, as not everyone responsible for administering the system understands InnoDB.

    More suggestions welcome. System software: Proxmox (now 1.9, could be upgraded to 2.x). One big LV is assigned for the VMs. (A cache-friendly way to run the big sequential reads is sketched below.)
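    For the large sequential jobs you control, one low-tech mitigation is to bypass the page cache entirely with O_DIRECT reads, so a file sweep cannot evict the hot cache of the other containers. A sketch using GNU dd (the 1 MiB block size keeps O_DIRECT's alignment requirements satisfied on typical filesystems):

        # instead of: cat *.MYD > /dev/null
        for f in *.MYD; do
            dd if="$f" of=/dev/null bs=1M iflag=direct
        done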


  • Event ID 17890 (A significant part... paged out.) with SQL Server 2008

    - by Godeke
    I have a machine with SQL Server 2008 Standard installed. Periodically (about once an hour) I get Event ID 17890 several times in a row. An example:

        6:28:54  "A significant part of sql server process memory has been paged out. This may result in a performance degradation. Duration: 0 seconds. Working set (KB): 10652, committed (KB): 628428, memory utilization: 1%%."
        6:34:27  "A significant part of sql server process memory has been paged out. This may result in a performance degradation. Duration: 332 seconds. Working set (KB): 169780, committed (KB): 546124, memory utilization: 31%%."
        6:38:55  "A significant part of sql server process memory has been paged out. This may result in a performance degradation. Duration: 600 seconds. Working set (KB): 245068, committed (KB): 546124, memory utilization: 44%%."

    This pattern repeated at 7:26-7:37, 8:26-8:36, 9:24-9:35 and so on, with the same increasing working set and memory utilization. I don't have any (known) background tasks running at these times; backups run at 2:00. The problem subsided from 11:00 at night until it resumed at 4:00 in the morning, and the intermittent ten-minute glitch periods have continued since. As this server has plenty of RAM (the commit charge has peaked at 2,871,564 of 4,194,012 KB physical), I disabled the paging files after reading several items I dug up on Google, none of which changed the situation. The pattern documented above is from after removing the paging files, so I'm not even sure where the SQL process's memory could be paging to. I also set the SQL process memory to a minimum of 500 MB and a maximum of 2 GB of RAM (this is a light-duty database server serving only a small workgroup). Has anyone encountered this? Prior to disabling the page files, this error came with five minutes of disk thrashing that blocked access to the databases, files, IIS webs and so on. Since disabling the page files it just logs these strange messages, but at least I'm not seeing a performance drop. Any suggestions would be welcome. (A memory-cap sketch is below.)
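    Two settings are commonly paired against 17890 on this class of box: cap SQL Server's memory so the OS never has to push it out, and grant the service account "Lock pages in memory" (on 2008 Standard that privilege is only honored from SP1 CU2 onward, together with trace flag 845, as I recall). The cap matches the 2 GB maximum already described and can be set from a command prompt; a sketch:

        sqlcmd -S localhost -Q "EXEC sys.sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sys.sp_configure 'max server memory (MB)', 2048; RECONFIGURE;"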


  • Why doesn't video run smoothly on my laptop anymore?

    - by andygrunt
    This might be an impossible question to answer remotely, but I figure there may be some common causes people can suggest, so I think it's worth asking...

    Video no longer plays smoothly on my laptop. It used to, but not for a while now. Playing a video on YouTube is pretty typical: I press play (making sure it's not on HD or even HQ) and the video buffers a little, then starts to play. At first it plays fine, then the video starts to stutter, turning into a slideshow while the sound continues to play smoothly. If I play the same video on my PlayStation 3 (which is on the same network) it plays smoothly, so it can't be the connection. Another example is streaming DivX videos. Again, I wait while it buffers and it starts, but very soon -- instead of a slideshow this time -- the video just plays slowly while the sound continues as normal, instantly going out of sync. Even if I let the video fully load before pressing play (i.e. it's no longer streaming), it still behaves the same way. I can even let it load 100%, save the file to hard disk and view it in VLC, and the same thing happens.

    I'm using an old laptop running Windows XP. For the past several years it was connected to the router via Wi-Fi, but in the past few days I've changed that to a network cable (like my PS3); that hasn't helped. Yes, I regularly install various bits and pieces of software, but nothing I can identify as the cause. So, are there known causes of this sort of behaviour, and if so, what can I do to fix it? Thanks.

    Update, to answer a few questions... Laptop spec (note: video played back fine for the majority of the time I've had the laptop):

    - Toshiba Satellite 1900-603 (possibly called something else outside the UK)
    - Intel Pentium 4 2.2 GHz processor
    - Originally 512 MB memory, recently doubled to 1 GB
    - Graphics: ATI Mobility Radeon, 16 MB DDR VRAM
    - Windows XP SP3 (Home edition)

    Over the years I've done several things to speed it up (disabling indexing etc.) and am generally happy with the performance. I also regularly clear out old software (if for no other reason than the laptop only has a 40 GB hard disk) and use CCleaner and Glary Utilities to strip out much of the crap from my system. Recently (after doubling the memory) I've also tried a few new things which might be likely candidates for slowing the video down, such as RocketDock, Jingle Keyboard (which gives an old-style 'clacky' typewriter sound when I type -- love it), SugarSync and Taskbar Shuffle. However, the video doesn't play smoothly even when I quit all of these apps. (One hardware-level check worth doing is sketched below.)
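    Since the stutter persists with a fully downloaded file played locally in VLC, the network is exonerated and a local bottleneck is implicated. One classic XP culprit on older IDE laptops is the disk channel silently dropping from DMA to PIO mode after repeated CRC errors, which makes video stutter while audio keeps going. Check Device Manager > IDE ATA/ATAPI controllers > Primary IDE Channel > Advanced Settings > Current Transfer Mode; if it says "PIO Mode", uninstall the channel and reboot to re-detect. The equivalent registry peek is sketched below (the 0001 subkey for the primary channel and the value name are as I recall them on XP; treat both as assumptions):

        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E96A-E325-11CE-BFC1-08002BE10318}\0001" /v MasterDeviceTimingMode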


  • Identifying the cause of my DNS failure (domain not propagating)

    - by thejartender
    I have set up a DNS server with the help of two tutorials:

    - http://linuxconfig.org/linux-dns-server-bind-configuration
    - http://ulyssesonline.com/2007/11/07/how-to-setup-a-dns-server-in-ubuntu/

    I am using Ubuntu and BIND9, and had issues I tried to sort out on my own, thanks to a question I posted here earlier that pointed out my mistake of using RFC 1918 addresses in my previous SOA record:

        $TTL 3D
        @       IN      SOA     ns.thejarbar.org. email. (
                                13112012
                                28800
                                3600
                                604800
                                38400 );

        thejarbar.org.  IN      A       10.0.0.42
        @               IN      NS      ns.thejarbar,org.
        yuccalaptop     IN      A       10.0.0.19
        ns              IN      A       10.0.0.42
        gw              IN      A       10.0.0.138
        www             IN      CNAME   thejarbar.org.

        $TTL 600
        0.0.10.in-addr.arpa.    IN      SOA     ns.thejarbar.org. email. (
                                13112012
                                28800
                                3600
                                604800
                                38400 );

        0.0.10.in-addr.arpa.    IN      NS      ns.thejarbar.org.
        42      IN      PTR     thejarbar.org.
        19      IN      PTR     yuccalaptop.thejarbar.org.
        138     IN      PTR     gw.thejarbar.org.

    I read up on the ranges used by RFC 1918 and modified my router's resource pool to assign LAN devices IPs within the 30.0.0.0 range, and modified my SOA to:

        $TTL 600
        @       IN      SOA     ns.thejarbar.org. email. (
                                13112012
                                28800
                                3600
                                604800
                                38400 );

        thejarbar.org.  IN      A       30.0.0.42
        @               IN      NS      ns.thejarbar,org.
        yuccalaptop     IN      A       10.0.0.19
        ns              IN      A       30.0.0.42
        gw              IN      A       30.0.0.138
        www             IN      CNAME   thejarbar.org.

        $TTL 600
        0.0.10.in-addr.arpa.    IN      SOA     ns.thejarbar.org. email. (
                                13112012
                                28800
                                3600
                                604800
                                38400 );

        0.0.30.in-addr.arpa.    IN      NS      ns.thejarbar.org.
        42      IN      PTR     thejarbar.org.
        19      IN      PTR     yuccalaptop.thejarbar.org.
        138     IN      PTR     gw.thejarbar.org.

    I can ping my nameserver ns.thejarbar.org and it gives me the correct ISP IP address, but my domain never seems to propagate to my nameserver. I have searched for a concise tutorial that covers setting up a DNS server with a nameserver that hosts my own site. I am fully aware that this is not recommended; I am using it for learning purposes. Getting to the question: given the lack of information in the tutorials I looked at (nothing about RFC 1918, and no example of swapping these addresses with the ISP IP), is my router modification going to help me? It does not seem to be. I have also tried, as recommended, using my ISP IP instead of the values posted above; my site still never propagated to my nameserver. What could be causing this? I have run dig thejarbar.org @88.89.190.171 and get an authoritative response. Can anyone assist me with the final steps I may be missing here? (Validation commands are sketched below.)
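    Before chasing delegation, it is worth letting BIND validate the zone files themselves -- named-checkzone would immediately flag syntax slips such as the stray comma in ns.thejarbar,org. above -- and then confirming what the parent zone actually delegates. A sketch (the file paths are assumptions):

        named-checkconf /etc/bind/named.conf
        named-checkzone thejarbar.org /etc/bind/db.thejarbar.org

        # does .org actually delegate thejarbar.org to your nameserver?
        dig +trace NS thejarbar.org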


  • Issue in nginx proxying to apache

    - by Luis Masuelli
    My current nginx configuration is as follows. Specific configuration for (currently two) domains:

        server {
            listen 443 ssl;
            server_name studiotv.service.tebusco.lan phpmyadmin.service.tebusco.lan;

            ssl_certificate     /home/administrador/nginx-confs/ssl/service.tebusco.lan.crt;
            ssl_certificate_key /home/administrador/nginx-confs/ssl/service.tebusco.lan.key;

            location / {
                proxy_pass http://127.0.0.1:8180;
                proxy_set_header Host $http_host:8180;
            }
        }

    Default configuration for unmatched ssl connections:

        server {
            listen 443 default ssl;

            ssl_certificate     /home/administrador/nginx-confs/ssl/service.tebusco.lan.crt;
            ssl_certificate_key /home/administrador/nginx-confs/ssl/service.tebusco.lan.key;

            location / {
                return 403;
            }
        }

    http configuration:

        server {
            listen 80;
            rewrite ^ https://$host$request_uri? permanent;
        }

    The intention is clear:

    - Redirect http traffic to https.
    - Proxy each https:// call for phpmyadmin.service.tebusco.lan and studiotv.service.tebusco.lan to apache2, passing a Host header, which is detected.
    - Each unmatched ssl connection must return a 403 in nginx and never even reach apache2.

    On the apache2 side, I have a default site and a non-default site that should match studiotv.service.tebusco.lan. The 000-default.conf file (available and enabled):

        <VirtualHost 127.0.0.1:8180>
            # The ServerName directive sets the request scheme, hostname and port that
            # the server uses to identify itself. This is used when creating
            # redirection URLs. In the context of virtual hosts, the ServerName
            # specifies what hostname must appear in the request's Host: header to
            # match this virtual host. For the default virtual host (this file) this
            # value is not decisive as it is used as a last resort host regardless.
            # However, you must set it for any further virtual host explicitly.
            ServerName localhost

            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/html

            <Directory /var/www/html>
                Order deny,allow
                Require all granted
            </Directory>
        </VirtualHost>

        # vim: syntax=apache ts=4 sw=4 sts=4 sr noet

    The studiotv.conf file (available and enabled):

        <VirtualHost *:8180>
            ServerName studiotv.service.tebusco.lan
            ServerAdmin [email protected]
            DocumentRoot /var/www/studiotv

            <Directory /var/www/studiotv/>
                Options -Indexes +FollowSymLinks
                AllowOverride None
                Order deny,allow
                Allow from all
                Require all granted
            </Directory>

            # Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
            # error, crit, alert, emerg.
            # It is also possible to configure the loglevel for particular modules, e.g.
            #LogLevel info ssl:warn

            # We don't use ${APACHE_LOG_DIR}; we use /var/log/<host> instead
            ErrorLog  /var/log/apache2/studiotv/error.log
            CustomLog /var/log/apache2/studiotv/access.log combined
        </VirtualHost>

        # vim: syntax=apache ts=4 sw=4 sts=4 sr noet

    However, when I hit http://studiotv.service.tebusco.lan in the browser, the default PHP page is shown instead.

    Question: What am I missing? (Apache 2.4.7, nginx 1.6.0, Ubuntu Server 14.04.) (A likely explanation is sketched below.)
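    One detail that matches the symptom exactly: Apache only performs name-based selection among virtual hosts whose address specification matches the connection equally well. The default vhost is bound to 127.0.0.1:8180 while studiotv.conf uses *:8180; since the proxied connection arrives on 127.0.0.1:8180, the more specific address set wins outright and ServerName is never consulted. The likely fix is simply to give both vhosts the same address form, e.g.:

        <VirtualHost 127.0.0.1:8180>
            ServerName studiotv.service.tebusco.lan
            # ... rest of studiotv.conf unchanged ...
        </VirtualHost>

    Independently, proxy_set_header Host $http_host:8180 can produce a doubled port when the client already sends host:port; proxy_set_header Host $host; is the usual idiom.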


  • Some services refusing to start on Win 7 machine. What could the root cause be?

    - by BombDefused
    When I check msconfig, no services are blocked from starting up. In services.msc, the problem services have a startup type of 'Automatic' but a blank space where others show 'Started'. Attempting to start them manually results in the following pop-up error messages. I have no idea what's causing this; it looks like some sort of cascade effect from another problem service. It's affecting the Task Scheduler, SQL Server Agent and Windows Backup services. How can I resolve this? I don't know how to work out the root cause.

    Task Scheduler service start error:

        "Windows could not start the Task Scheduler service on Local Computer. Error 1068: The dependency service or group failed to start."

    SQL Server service start error:

        "The SQL Server Agent service on Local Computer started and then stopped. Some services stop automatically if they are not in use by other services or programs."

    UPDATE: I've just noticed some other services have a description of "Failed to Read Description. Error Code: 2". They are: NetMsmqActivator, NetPipeActivator, NetTcpActivator, NetTcpPortSharing.

    UPDATE 2: As joeqwerty says, the Event Log service does seem to be the root of the problem. This service will not start either; it fails with 'Error 31 - A device attached to the system is not functioning correctly'. I've tried detaching all devices. I've also followed the advice here, where the same problem is described, but with no luck: http://social.technet.microsoft.com/Forums/en/w7itprosecurity/thread/44479c49-55e6-4bd7-b25e-3f2a6497306e

    UPDATE 3: @Pacey's tip -- "Your problem might also derive from the UpperFilter or LowerFilter settings of the CDROM drive. These are a known cause for error code 31. You can find step-by-step instructions on removing the filters on about.com" -- was a good one, with really clear instructions. However, I found that those registry keys do not exist on my system. I then followed the advice through to checking every component in Device Manager separately, but everything is reported as working correctly!? These services did all work at one point, and the hardware setup hasn't changed much. I guess I'm looking at a repair install, maybe??? (Dependency checks worth running are sketched below.)
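    Given UPDATE 2, the dependency chain is worth walking from the bottom: Task Scheduler depends on the Event Log service, so the error 1068 up top is consistent with Event Log being the one that actually fails. The built-in sc tool can confirm who depends on what; a sketch, run from an elevated prompt:

        rem startup type, binary path and dependencies of the Event Log service
        sc qc eventlog

        rem its current state
        sc query eventlog

        rem services that will fail if it does
        sc enumdepend eventlog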


  • Distributed and/or Parallel SSIS processing

    - by Jeff
    Background: Our company hosts SaaS DSS applications; clients provide us data daily and/or weekly, which we process and merge into their existing databases. During business hours, load on the servers is pretty minimal: mostly users running simple pre-defined queries via the website, or drill-through reports that hit the SSAS OLAP cube.

    I manage the IT operations team, and so far this has presented an interesting "scaling" issue for us. For our daily-refresh clients, a server is only "busy" for about 4-6 hours at night; for our weekly-refresh clients, the server is only "busy" for maybe 8-10 hours per week! We've done our best to distribute the load by spreading the daily clients evenly among the servers, so we're not trying to process daily clients back-to-back overnight. But long-term this scaling strategy creates two notable issues. First, it consumes a pretty immense amount of hardware that sits idle for large periods of time. Second, it takes significant production-support overhead to "schedule" the ETL jobs so they don't overlap, and to move clients and schedules around when they outgrow the resources on a particular server or allocated time slot.

    As the title implies, one option we've tried is running multiple SSIS packages in parallel, but in most cases this has yielded very inconsistent results. The most common failures are DTExec, SQL and SSAS fighting for physical memory and throwing out-of-memory errors, and ETLs running 3-5x longer than expected. So from my practical experience thus far, running multiple ETL packages on the same hardware isn't a good idea -- but I can't be the first person who doesn't want to scale multiple ETLs around manual scheduling and sequential processing.

    One option we've considered is virtualizing the servers, which obviously doesn't give you any additional resources but moves the resource contention onto the hypervisor, which (from my experience) seems to manage simultaneous CPU/RAM/disk I/O a little more gracefully than letting DTExec, SQL and SSAS battle it out within Windows.

    Question to the forum: are we missing something obvious here? Are there tools out there that can help manage running multiple SSIS packages on the same hardware? Would parallel execution be more "efficient" if, instead of running DTExec, SQL and SSAS on the same machine (with every machine running that configuration), we ran in trios of machines, with SSIS on one, SQL on another and SSAS on a third? Obviously that would only make sense if we could then process more than the three ETLs we were able to process on the machines independently. Another option we've considered is completely re-architecting our SSIS packages into one "master" package for all clients that attempts to intelligently choose a server based on how "busy" it already is in terms of CPU/memory/disk utilization, but that would be a herculean effort, and it seems like we're trying to reinvent something you would think someone would sell (although I haven't had any luck finding it).

    So, in summary: are we missing an obvious solution for this, and does anyone know of any tools (free or for purchase, it doesn't matter) that facilitate running multiple SSIS ETL packages in parallel and on multiple servers -- what I would call a "queue & node based" system, though that's not an official term? (A minimal version of the queue half is sketched below.)

    Ultimately, VMware's Distributed Resource Scheduler addresses this: you simply run a consistent number of clients per VM that you know will never conflict scheduling-wise, then leave it up to VMware to move the VMs around to balance out hardware usage. I'm definitely not against using VMware to do this, but since we're a 100% Microsoft app stack, it seems like someone out there would have solved this problem at the application layer instead of the hypervisor layer, by checking resource utilization at the OS, SQL and SSAS levels. I'm open to ANY discussion on this, and remember, no suggestion is too crazy or radical! :-) Right now VMware is the only option we've found to get away from "manually" balancing our resources, so any suggestion that leaves us on a pure Microsoft stack would be great.

    Thanks guys,
    Jeff
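    The queue half of a home-grown "queue & node based" system is surprisingly small; the hard part described above (resource-aware placement) can start as simply as each node only claiming work when it is otherwise idle. A minimal sketch (table and column names are invented for illustration): each node polls with this atomic claim and then shells out to dtexec /F with the package path it received.

        -- one row per pending client load; READPAST lets nodes skip rows other nodes have locked
        UPDATE TOP (1) dbo.EtlQueue WITH (ROWLOCK, READPAST)
        SET    Status    = 'Running',
               ClaimedBy = @@SERVERNAME,
               ClaimedAt = GETDATE()
        OUTPUT inserted.QueueId, inserted.PackagePath
        WHERE  Status = 'Queued';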


  • "Service Unavailable" when browsing to static HTML page in non-application IIS website on Windows 2003 (possibly SharePoint WSS 2.0 related?)

    - by Jordan Rieger
    Background: My client has an old Pentium III Windows 2003 server whose 16/36 GB disks are dying. On it he has a database-driven web site and email application that needs further customization by a developer (me). First we need to get it working on the new server. The original developer is no longer available to provide a system setup guide, so my client got a tech who imaged the old drives over to the new server and managed to get it booting. But the IIS-driven site no longer works; in fact, it seems that IIS itself does not work.

    Problem: "Service Unavailable" when browsing from the server itself to the URL for a local web site called "test", which I set up in IIS to serve a single static index.htm file. I did this to isolate the problem and take the client's application out of the equation. The site is set up on port 80 with the host header "test.myclientsdomain.com", and I used the etc\hosts file to point that host at the local IP. I know the hosts entry took effect because I can ping it. When doing an iisreset, I get:

        Attempting start...
        Restart attempt failed.
        IIS Admin Service or a service dependent on IIS Admin is not active.
        It most likely failed to start, which may mean that it's disabled.

    Despite this message, the services all stay in the Started state. The only relevant System event logs I found are:

        Event Type: Error
        Event Source: W3SVC
        Event ID: 1002
        Date: 11/4/2012  Time: 11:04:47 PM
        Computer: ALPHA1
        Description: Application pool 'DefaultAppPool' is being automatically disabled due to a series of failures in the process(es) serving that application pool.

        Event Type: Error
        Event Source: W3SVC
        Event ID: 1039
        Date: 11/4/2012  Time: 11:13:12 PM
        Computer: ALPHA1
        Description: A process serving application pool 'DefaultAppPool' reported a failure. The process id was '5636'. The data field contains the error number.
        Data: 0000: 7e 00 07 80

    And one Application event log entry:

        Event Type: Error
        Event Source: Windows SharePoint Services 2.0
        Event ID: 1000
        Date: 11/4/2012  Time: 11:34:04 PM
        Computer: ALPHA1
        Description: #50070: Unable to connect to the database STS_Config on ALPHA2\SharePoint. Check the database connection information and make sure that the database server is running.

    That last log tells me the tech may have initially tried to run both the old and the new server at once by renaming the new server from ALPHA1 to ALPHA2 -- and perhaps SharePoint grabbed onto that change and now can't tell that the machine name has been switched back to ALPHA1. But why would SharePoint interfere with a static IIS web site serving a single HTML file? The test site is not even within an application pool (I clicked the Remove button).

    What I have tried/eliminated:

    - No relevant services seem to be disabled: IIS Admin, WWW Publishing, SharePoint Timer.
    - Giving Full Control to All Users/Everyone on the c:\inetpub\test folder serving my test site.
    - I can connect to and query the local SharePoint config database (ALPHA1\SHAREPOINT\STS_CONFIG) from SSMS. But when I try stsadm -o setconfigdb -connect -databaseserver ALPHA1\SHAREPOINT, it tells me "The SharePoint administration port does not exist. Please use stsadm.exe to create it." When I do that, using the port 9487 specified in the IIS SharePoint Admin site config, it tells me the port is already in use. Needless to say, simply browsing to the admin site gives me a similar error about being unable to reach the config database.

    I didn't want to go further down the SharePoint path, as it may be completely unrelated to my IIS issue, and I don't even know yet whether SharePoint is required for this application to work. The app itself is ASP.NET/C#/Silverlight with a little MS Word integration (maybe that's where the SharePoint stuff comes in). (A quick port check is sketched below.)
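    The "port is already in use" dead end can at least be attributed to a specific process with built-in tools. A sketch (substitute the PID that netstat reports):

        netstat -ano | findstr :9487
        tasklist /fi "PID eq 1234"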


  • Win-XP Browsers Hang on page load - (waiting for...)

    - by CHarmon
    Hello, I'm having problems with my browsers hanging on loading pages on my desktop machine. I'm using Windows XP Pro with SP3, fully updated except for IE 8. All three of my browsers -- IE 7, Chrome and Firefox -- have the same problem: pages are not loaded and hang on "waiting for ...". The browsers are waiting for the page being loaded or for ad servers. Sometimes a page will load, but the loading graphic continues to display as if the page were still loading even when it appears fully loaded. The problem is bad enough that I can't really use any of my browsers, though I can eventually get most pages to load by stopping and restarting the page load.

    I have a DSL modem with a wireless router, and I have been able to eliminate the modem and router as the source of the problem: my laptop doesn't have any problems, even when hardwired to the router with the wireless connection disabled. I deleted the NIC and let XP re-install it. I also tried a different network cable, and the same router port used in the laptop test. One clue that may be important: I can't connect to my router's admin page from the desktop machine -- the page hangs while trying to connect -- yet I can ping the router, and I can quickly connect to it from the laptop. I also can't use the Windows Update process: the page never fully loads. The problem affects other user accounts and even happens in safe mode. I am convinced the problem is with part of the OS... some layer able to affect all of the browsers.

    The purpose of this post is to see if anyone has some ideas before I do an XP repair. I have done quite a bit of troubleshooting:

    - Ran a full anti-virus scan with AVG -- no problems.
    - Ran full scans with Spybot, MalwareBytes and Sophos anti-rootkit -- no problems.
    - Ran chkdsk with both options checked.
    - Ran Disk Cleanup.
    - Defragged.
    - Re-installed IE 7.
    - Cleared all the browser caches.
    - Ran CCleaner (registry tool).
    - Ran HijackThis -- nothing unusual (and the problem happens in safe mode too).
    - Ran Process Explorer -- no unusual processes.
    - Used System Restore to fall back several days -- no change.
    - Booted to the last known good configuration -- no change.
    - Ran MicrosoftFixit50199.msi -- no change.

    Any ideas or suggestions would be appreciated... I'm not looking forward to doing a repair on XP. Thanks in advance for any help. (A Winsock-level check is sketched below.)
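    One cause that fits "every browser, every account, even safe mode, but ping works" is a corrupt Winsock catalog -- typically a layered service provider (LSP) left behind by uninstalled software -- since that degrades TCP streams system-wide, below the browser layer. XP SP2 and later can rebuild the catalog; a sketch (reboot afterwards):

        netsh winsock reset
        netsh int ip reset resetlog.txt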


  • Clamdscan scans file in 0 seconds

    - by SupaCoco
    I have to run ClamAV on large files, and I was wondering which command was faster: clamscan or clamdscan. But it seems that clamdscan is not working properly: it claims to scan a file larger than 1 GB in 0 seconds. Could you guys help me find out why clamdscan isn't working? And between clamscan and clamdscan, which one consumes fewer resources? I run ClamAV 0.97.8/18037 on Ubuntu 12.04.3 LTS. Please find below the execution results of both commands.

    clamscan myfile.zip:

        ----------- SCAN SUMMARY -----------
        Known viruses: 2864504
        Engine version: 0.97.8
        Scanned directories: 0
        Scanned files: 1
        Infected files: 0
        Data scanned: 0.00 MB
        Data read: 1024.16 MB (ratio 0.00:1)
        Time: 9.145 sec (0 m 9 s)

    clamdscan myfile.zip:

        /home/ubuntu/workspace/benchmark/myfile.zip: OK

        ----------- SCAN SUMMARY -----------
        Infected files: 0
        Time: 0.000 sec (0 m 0 s)

    And here is the ClamAV log:

        Wed Oct 30 10:26:32 2013 -> Received POLLIN|POLLHUP on fd 4
        Wed Oct 30 10:26:32 2013 -> Got new connection, FD 9
        Wed Oct 30 10:26:32 2013 -> Received POLLIN|POLLHUP on fd 5
        Wed Oct 30 10:26:32 2013 -> fds_poll_recv: timeout after 5 seconds
        Wed Oct 30 10:26:32 2013 -> Received POLLIN|POLLHUP on fd 9
        Wed Oct 30 10:26:32 2013 -> got command CONTSCAN /home/ubuntu/workspace/benchmark/myfile.zip (51, 7), argument: /home/ubuntu/workspace/benchmark/myfile.zip
        Wed Oct 30 10:26:32 2013 -> mode -> MODE_WAITREPLY
        Wed Oct 30 10:26:32 2013 -> Breaking command loop, mode is no longer MODE_COMMAND
        Wed Oct 30 10:26:32 2013 -> Consumed entire command
        Wed Oct 30 10:26:32 2013 -> Number of file descriptors polled: 1 fds
        Wed Oct 30 10:26:32 2013 -> fds_poll_recv: timeout after 3600 seconds
        Wed Oct 30 10:26:32 2013 -> THRMGR: queue (single) crossed low threshold -> signaling
        Wed Oct 30 10:26:32 2013 -> THRMGR: queue (bulk) crossed low threshold -> signaling
        Wed Oct 30 10:26:32 2013 -> /home/ubuntu/workspace/benchmark/myfile.zip: OK
        Wed Oct 30 10:26:32 2013 -> Finished scanthread
        Wed Oct 30 10:26:32 2013 -> Scanthread: connection shut down (FD 9)
        Wed Oct 30 10:26:32 2013 -> THRMGR: queue (single) crossed low threshold -> signaling
        Wed Oct 30 10:26:32 2013 -> THRMGR: queue (bulk) crossed low threshold -> signaling
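    An instant "OK" is what clamd produces when a file exceeds its scan limits: clamd enforces MaxScanSize and MaxFileSize from clamd.conf (the defaults are on the order of 100 MB and 25 MB), and an oversized file is skipped yet still reported clean. Note that clamscan's own summary above also says "Data scanned: 0.00 MB", suggesting its limits were hit too. A sketch of raising the daemon limits (values are illustrative):

        # /etc/clamav/clamd.conf
        MaxScanSize 2000M
        MaxFileSize 2000M
        StreamMaxLength 2000M

    Then restart the daemon (service clamav-daemon restart on Ubuntu) and re-run clamdscan; a multi-second scan time is the sign the limits now cover the file.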


  • arp "who-has tell" on cloned machine

    - by mcmorry
    I have an urgent problem to solve today, but I'm lost. Please help. I've cloned a virtual machine hosted on VMware ESXi 4.1. The OS is now Ubuntu Server 12.04 LTS, but at the time of cloning it was 10.04 LTS. I fixed the MAC address manually inside /etc/udev/rules.d/70-persistent-net.rules -- a known issue on Ubuntu: I had to remove the old MAC address and set the new one as eth0. Everything seems to work fine, except ARP. My provider, OVH, sent me a warning to resolve it today (this is the second day) or they will block my IP! The log contains many lines like this:

        Tue Jun  5 01:04:29 2012 : arp who-has 178.32.136.212 tell 178.32.136.224

    where .224 is the clone that is causing problems, and .212 is the machine it was cloned from. arp -na returns:

        ? (178.33.230.254) at 00:07:b4:00:00:02 [ether] on eth0
        ? (178.32.136.212) at 00:50:56:09:8e:f1 [ether] on eth0

    The first IP is the ESXi machine. The second one should not be there. I'm not an expert and I don't know what else to do to fix this problem. Any help will be very appreciated. Thanks.

    EDIT: ifconfig on .224:

        eth0      Link encap:Ethernet  HWaddr 00:50:56:01:32:c6
                  inet addr:178.32.136.224  Bcast:178.32.136.255  Mask:255.255.255.0
                  inet6 addr: fe80::250:56ff:fe01:32c6/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:399924 errors:0 dropped:465 overruns:0 frame:0
                  TX packets:241884 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:58006071 (58.0 MB)  TX bytes:663603166 (663.6 MB)

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:516216 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:516216 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:236284275 (236.2 MB)  TX bytes:236284275 (236.2 MB)

    ifconfig on .212:

        eth0      Link encap:Ethernet  HWaddr 00:50:56:09:8e:f1
                  inet addr:178.32.136.212  Bcast:178.32.136.255  Mask:255.255.255.0
                  inet6 addr: fe80::250:56ff:fe09:8ef1/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:16014 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:14511 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:15134444 (15.1 MB)  TX bytes:2683025 (2.6 MB)

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:9944 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:9944 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:1139347 (1.1 MB)  TX bytes:1139347 (1.1 MB)

    (Some diagnostics are sketched below.)
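    To narrow down what keeps the stale relationship alive, watch ARP on the wire and flush the neighbour entry on the clone. A sketch, assuming the interface is eth0 as in the ifconfig output:

        # is .212 still cached on .224, and does it come back after a flush?
        ip neigh show 178.32.136.212
        arp -d 178.32.136.212

        # what is generating the who-has traffic, and how often?
        tcpdump -n -i eth0 arp and host 178.32.136.212

    It is also worth grepping the clone for leftovers of its origin: a monitoring agent, /etc/hosts entry or cron job still pointing at 178.32.136.212 would explain constant lookups.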


  • git private server error: "Permission denied (publickey)."

    - by goddfree
    I followed the instructions here in order to set up a private git server on my Amazon EC2 instance. However, I am having problems when trying to SSH into the git account; specifically, I get the error "Permission denied (publickey)."

    Here are the permissions of my files/folders on the EC2 server:

        drwx------ 4 git git 4096 Aug 13 19:52 /home/git/
        drwx------ 2 git git 4096 Aug 13 19:52 /home/git/.ssh
        -rw------- 1 git git  400 Aug 13 19:51 /home/git/.ssh/authorized_keys

    Here are the permissions of my files/folders on my own computer:

        drwx------ 5 CYT staff  170 Aug 13 14:51 .ssh
        -rw------- 1 CYT staff 1679 Aug 13 13:53 .ssh/id_rsa
        -rw-r--r-- 1 CYT staff  400 Aug 13 13:53 .ssh/id_rsa.pub
        -rw-r--r-- 1 CYT staff 1585 Aug 13 13:53 .ssh/known_hosts

    When checking my logs in /var/log/secure, I used to get the following error every time I tried to SSH:

        Authentication refused: bad ownership or modes for file /home/git/.ssh/authorized_keys

    After making a few permission changes, I no longer get this error message. Despite this, I still get "Permission denied (publickey)." every time I try to SSH. The command I am using is ssh -T git@my-ip. Here is the full log from ssh -vT [email protected]:

        OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
        debug1: Reading configuration data /etc/ssh_config
        debug1: /etc/ssh_config line 20: Applying options for *
        debug1: Connecting to my-ip [my-ip] port 22.
        debug1: Connection established.
        debug1: identity file /Users/CYT/.ssh/id_rsa type 1000
        debug1: identity file /Users/CYT/.ssh/id_rsa-cert type -1
        debug1: identity file /Users/CYT/.ssh/id_dsa type -1
        debug1: identity file /Users/CYT/.ssh/id_dsa-cert type -1
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_6.2
        debug1: Remote protocol version 2.0, remote software version OpenSSH_6.2
        debug1: match: OpenSSH_6.2 pat OpenSSH*
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr [email protected] none
        debug1: kex: client->server aes128-ctr [email protected] none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Server host key: RSA 08:ad:8a:bc:ab:4d:5f:73:24:b2:78:69:46:1a:a5:5a
        debug1: Host 'my-ip' is known and matches the RSA host key.
        debug1: Found key in /Users/CYT/.ssh/known_hosts:1
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: /Users/CYT/.ssh/id_rsa
        debug1: Trying private key: /Users/CYT/.ssh/id_dsa
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    I have spent a few hours going through threads on various sites, including SO and SF, looking for a solution. The permissions on my files all seem okay, but I just can't figure out the problem. Any help would be greatly appreciated. (A fingerprint comparison is sketched below.)

    Edit: EEAA, here are the outputs you requested:

        $ getent passwd git
        git:x:503:504::/home/git:/bin/bash

        $ grep ssh ~git/.ssh/authorized_keys | wc -l
        grep: /home/git/.ssh/authorized_keys: Permission denied
        0
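    Since sshd now accepts the key file's permissions but still rejects the key itself, the next thing to prove is that the key being offered is actually the one in authorized_keys. Comparing fingerprints makes that unambiguous (a sketch; sudo is needed on the server because ~git is mode 700, which is also why the grep above failed):

        # on the Mac: fingerprint of the key ssh is offering
        ssh-keygen -lf ~/.ssh/id_rsa.pub

        # on the EC2 host: fingerprint(s) of what sshd will accept for git
        sudo ssh-keygen -lf /home/git/.ssh/authorized_keys

        # and watch sshd's verdict live while reproducing the failure
        sudo tail -f /var/log/secure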


  • Two-Hop SSH connection with two separate public keys

    - by yigit
    We have the following SSH hop setup:

        localhost -> hub -> server

    hubuser@hub accepts the public key of localuser@localhost, and serveruser@server accepts the public key of hubuser@hub, so we issue ssh -t hubuser@hub ssh serveruser@server to connect to server. The problem with this setup is that we cannot scp directly to the server. I tried creating a .ssh/config file like this:

        Host server
            user serveruser
            port 22
            hostname server
            ProxyCommand ssh -q hubuser@hub 'nc %h %p'

    But I am not able to connect (yigit is localuser):

        $ ssh serveruser@server -v
        OpenSSH_6.1p1, OpenSSL 1.0.1c 10 May 2012
        debug1: Reading configuration data /home/yigit/.ssh/config
        debug1: /home/yigit/.ssh/config line 19: Applying options for server
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Executing proxy command: exec ssh -q hubuser@hub 'nc server 22'
        debug1: permanently_drop_suid: 1000
        debug1: identity file /home/yigit/.ssh/id_rsa type 1000
        debug1: identity file /home/yigit/.ssh/id_rsa-cert type -1
        debug1: identity file /home/yigit/.ssh/id_dsa type -1
        debug1: identity file /home/yigit/.ssh/id_dsa-cert type -1
        debug1: identity file /home/yigit/.ssh/id_ecdsa type -1
        debug1: identity file /home/yigit/.ssh/id_ecdsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1
        debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH_5*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_6.1
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: sending SSH2_MSG_KEX_ECDH_INIT
        debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
        debug1: Server host key: ECDSA cb:ee:1f:78:82:1e:b4:39:c6:67:6f:4d:b4:01:f2:9f
        debug1: Host 'server' is known and matches the ECDSA host key.
        debug1: Found key in /home/yigit/.ssh/known_hosts:33
        debug1: ssh_ecdsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: /home/yigit/.ssh/id_rsa
        debug1: Authentications that can continue: publickey
        debug1: Trying private key: /home/yigit/.ssh/id_dsa
        debug1: Trying private key: /home/yigit/.ssh/id_ecdsa
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    Notice that it is trying to use the public key of localuser@localhost to authenticate on server, and fails since that is not the right one. Is it possible to modify the ProxyCommand so that the key of hubuser@hub is used to authenticate on server?
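    With ProxyCommand, the hub only relays bytes (via nc); the SSH session, including public-key authentication, runs end-to-end from localhost to server, so only keys present on localhost can be offered -- hub's key never enters the picture. Two ways out, sketched below: authorize localuser's existing key on server, or keep a dedicated key locally and point the Host stanza at it (the filename is a placeholder):

        # option 1: append ~/.ssh/id_rsa.pub from localhost to serveruser@server's
        # authorized_keys (the old two-hop path still works for copying it over)

        # option 2: in ~/.ssh/config
        Host server
            User serveruser
            HostName server
            ProxyCommand ssh -q hubuser@hub 'nc %h %p'
            IdentityFile ~/.ssh/id_rsa_server   # a local copy of a key server accepts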

    Read the article

  • SQL Server 2000 and SSL Encryption

    - by Angry_IT_Guru
    We are a datacenter that hsots a SQL Server 2000 environment which provides database services for a product we sell that is loaded as a rich-client applicatin at each of our many clients and their workstations. Currently today, the application uses straight ODBC connections from the client site to our datacenter. We need to begin encrypting the credentials -- since everything is clear-text today and the authentication is weakly encrypted -- and I'm trying to determine the best way to implement SSL on the server with minimizing the impact of the client. A few things, however: 1) We have our own Windows domain and all our servers are joined to our private domain. Our clietns no nothing of our domain. 2) Typically, our clients connect to our datacenter servers either by: a) Using TCP/IP address b) Using a DNS name that we publish via internet, zone transfers from our DNS servers to our customers, or the client can add static HOSTS entries. 3) From what I understand from enabling encryption is that I can go to the Network Utility and select the "encryption" option for the protocol that I wish to encrypt. Such as TCP/IP. 4) When the encryption option is selected, I have a choice of installing a third-party certificate or a self-signed. I have tested the self-signed, but do have potential issues. I'll explain in a bit. If I go with a third-party cert, such as Verisign, or Network solutions... what kind of certificate do I request? These aren't IIS certificates? When I go create a self-signed via Microsoft's certificate server, I have to select "Authentication certificate". What does this translate to in the third-party world? 5) If I create a self-signed certificate, I understand that the "issue to" name has to match the FQDN for the server that is running SQL. In my case, I have to use my private domain name. If I use this, what does this do for my clients when trying to connect to my SQL Server? Surely they cannot resolve my private DNS names on their network.... I've also verified that when the self-signed certificate is installed, it has to be in the local personal store for the user account that is running SQL Server. SQL Server will only start if the FQDN matches the "issue to" of the certificate and SQL is running under the account that has the certificate installed. If I use a self-signed certificate, does this mean I have to have every one of my clients install it to verify? 6) If I used a third-party certificate, which sounds like the best option, do all my clients have to have internet access when accessing my private servers of their private WAN connection to use to verify the certificate? What do I do about the FQDN? It sounds like they have to use my private domain name -- which is not published -- and can no longer use the one that I setup for them to use? 7) I plan on upgrading to SQL 2000 soon. Is setup of SSL any easier/better with SQL 2005 than SQL 2000? Any help or guiadance would be appreciated

    Read the article

  • General guidelines / workflow to convert or transfer video "professionally"?

    - by cloneman
    I'm an IT "professional" who sometimes has to deal with small video conversion / video cutting projects, and I'd like to learn "the right way" to do this. Every time I search Google, there's always a disaster for weird, low-maturity trialware, or random forums threads from 3-4 years ago indicating various antiquated method to do it. The big question is the following: What are the "general" guidelines and tools to transcode video into some efficient (lossless?) intermediary, for editing purposes, for the purpose of eventually re-encoding it after? It seems to me like even the simplest of formats and tasks are a disaster of endless trial & error, or expertise only known by hardened experts who have a swiss army kife of weird conversion tools that they use, almost as if mounting an attack against the project. Here are a few cases in point: Simple VOB files extracted from DVD footage can't be imported into Adobe Premiere directly. Virtualdub is an old software people keep recommending but doesn't seem to support newer formats. I don't even know how to tell with certainty which codecs a video has, and weather the image is interlaced or not, and what resolution and codecs I'm dealing with. Problems: Choosing a wrong interlace option which diminishes quality Choosing a wrong pixel aspect ratio (stretches the image) Choosing a wrong "project type" in Premiere causing footage to require scaling Being forced to use some weird program that will have any number of negative effects What I'm looking for: Books or "Real knowledge" on format conversions, recognized tools, etc. that aren't some random forum guides on how to deal with video formats. Workflow guidelines on identifying a format going from one format to another without problems as mentioned above. Documentation on what programs like Adobe Premiere can and can't do with regards to formats, so that I don't use a wrench as a hammer. TL;DR How should you convert or "prepare" a video file to ensure it will be supported by Premiere for editing? Is premiere a suitable program to handle cropping, encoding, or should other tools be used for this, when making a video montage from a variety of source formats? What are some good books to read that specifically deal with converting videos that use any number of codecs?

    Read the article

  • Will spreading your server's load not just consume more resources?

    - by Saif Bechan
    I am running a heavy real-time updating website. The amount of recourses needed per user are quite high, ill give you an example. Setup Every visit The application is php/mysql so on every visit static and dynamic content is loaded. Recourses: apache,php,mysql Every second (no more than a second will just be too long) The website needs to be updated real-time so every second there is an ajax call thats updates the website. Recourses: jQuery,apache,php,mysql Avarage spending for single user (spending one minute and visited 3 pages) Apache: +/- 63 requests / responsess serving static and dynamic content (img,css,js,html) php: +/- 63 requests / responses mysql: +/- 63 requests / responses jquery: +/- 60 requests / responses Optimization I want to optimize this process, but I think that maybe it would be just the same in the end. Before implementing and testing (which will take weeks) I wanted to have some second opinions from you guys. Every visit I want to start off with having nginx in the front and work as a proxy to deliver the static content. Recources: Dynamic: apache,php,mysql Static: nginx This will spread the load on apache a lot. Every Second For the script that loads every second I want to set up Node.js server side javascript with nginx in te front. I want to set it up that jquery makes a request ones a minute, and node.js streams the data to the client every second. Recources: jQuery,nginx,node.js,mysql Avarage spending for single user (spending one minute and visited 3 pages) Nginx: 4 requests / responsess serving mostly static conetent(img,css,js) Apache: 3 requests only the pages php: 3 requests only the pages node.js: 1 request / 60 responses jquery: 1 request / 60 responses mysql: 63 requests / responses Optimization As you can see in the optimisation the load from Apache and PHP are lifted and places on nginx and node.js. These are known for there light footprint and good performance. But I am having my doubts, because there are still 2 programs extra loaded in the memory and they consume cpu. So it it better to have less programs that do the job, or more. Before I am going to spend a lot of time setting this up I would like to know if it will be worth the while.

    Read the article

  • Why isn't passwordless ssh working?

    - by Nelson
    I have two Ubuntu Server machines sitting at home. One is 192.168.1.15 (we'll call this 15), and the other is 192.168.1.25 (we'll call this 25). For some reason, when I want to setup passwordless login from 15 to 25, it works like a champ. When I repeat the steps on 25, so that 25 can login without a password on 15, no dice. I have checked both sshd_config files. Both have: RSAAuthentication yes PubkeyAuthentication yes I have checked permissions on both servers: drwx------ 2 bion2 bion2 4096 Dec 4 12:51 .ssh -rw------- 1 bion2 bion2 398 Dec 4 13:10 authorized_keys On 25. drwx------ 2 shimdidly shimdidly 4096 Dec 4 19:15 .ssh -rw------- 1 shimdidly shimdidly 1018 Dec 4 18:54 authorized_keys On 15. I just don't understand when things would work one way and not the other. I know it's probably something obvious just staring me in the face, but for the life of me, I can't figure out what is going on. Here's what ssh -v says when I try to ssh from 25 to 15: ssh -v -p 51337 192.168.1.15 OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: Applying options for * debug1: Connecting to 192.168.1.15 [192.168.1.15] port 51337. debug1: Connection established. debug1: identity file /home/shimdidly/.ssh/id_rsa type 1 debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048 debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048 debug1: identity file /home/shimdidly/.ssh/id_rsa-cert type -1 debug1: identity file /home/shimdidly/.ssh/id_dsa type 2 debug1: Checking blacklist file /usr/share/ssh/blacklist.DSA-1024 debug1: Checking blacklist file /etc/ssh/blacklist.DSA-1024 debug1: identity file /home/shimdidly/.ssh/id_dsa-cert type -1 debug1: identity file /home/shimdidly/.ssh/id_ecdsa type -1 debug1: identity file /home/shimdidly/.ssh/id_ecdsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1 debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: sending SSH2_MSG_KEX_ECDH_INIT debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug1: Server host key: ECDSA 54:5c:60:80:74:ab:ab:31:36:a1:d3:9b:db:31:2a:ee debug1: Host '[192.168.1.15]:51337' is known and matches the ECDSA host key. debug1: Found key in /home/shimdidly/.ssh/known_hosts:2 debug1: ssh_ecdsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,password debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/shimdidly/.ssh/id_rsa debug1: Authentications that can continue: publickey,password debug1: Offering DSA public key: /home/shimdidly/.ssh/id_dsa debug1: Authentications that can continue: publickey,password debug1: Trying private key: /home/shimdidly/.ssh/id_ecdsa debug1: Next authentication method: password

    Read the article

  • How can I give openvpn clients access to a dns server (bind9) that is located on the same machine as the openvpn server?

    - by lacrosse1991
    I currently have a debian server that is running an openvpn server. I also have a dns server (bind9) that I would like give allow access to by the connected openvpn clients, but I am unsure as of how to do this, I already known how to send dns options to the clients using push "dhcp-option DNS x.x.x.x" but I am just unsure how give the clients access to the dns server that is located on the same machine as the vpn server, so if anyone could point me in the right direction I would really appreciate it. Also in case this would have anything to do with adding rules to iptables, this is my current configuration for iptables # Generated by iptables-save v1.4.14 on Thu Oct 18 22:05:33 2012 *nat :PREROUTING ACCEPT [3831842:462225238] :INPUT ACCEPT [3820049:461550908] :OUTPUT ACCEPT [1885011:139487044] :POSTROUTING ACCEPT [1883834:139415168] -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE COMMIT # Completed on Thu Oct 18 22:05:33 2012 # Generated by iptables-save v1.4.14 on Thu Oct 18 22:05:33 2012 *filter :INPUT ACCEPT [45799:10669929] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [45747:10335026] :fail2ban-apache - [0:0] :fail2ban-apache-myadmin - [0:0] :fail2ban-apache-noscript - [0:0] :fail2ban-ssh - [0:0] :fail2ban-ssh-ddos - [0:0] :fail2ban-webserver-w00tw00t - [0:0] -A INPUT -p tcp -m multiport --dports 80,443 -j fail2ban-apache-myadmin -A INPUT -p tcp -m multiport --dports 80,443 -j fail2ban-webserver-w00tw00t -A INPUT -p tcp -m multiport --dports 80,443 -j fail2ban-apache-noscript -A INPUT -p tcp -m multiport --dports 80,443 -j fail2ban-apache -A INPUT -p tcp -m multiport --dports 22 -j fail2ban-ssh-ddos -A INPUT -p tcp -m multiport --dports 22 -j fail2ban-ssh -A INPUT -i tun+ -j ACCEPT -A INPUT -i eth0 -p tcp -m tcp --dport 3306 -j ACCEPT -A FORWARD -i tun+ -j ACCEPT -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT -A fail2ban-apache -j RETURN -A fail2ban-apache-myadmin -s 211.154.213.122/32 -j DROP -A fail2ban-apache-myadmin -s 201.170.229.96/32 -j DROP -A fail2ban-apache-myadmin -j RETURN -A fail2ban-apache-noscript -j RETURN -A fail2ban-ssh -s 76.9.59.66/32 -j DROP -A fail2ban-ssh -s 64.13.220.73/32 -j DROP -A fail2ban-ssh -s 203.69.139.179/32 -j DROP -A fail2ban-ssh -s 173.10.11.146/32 -j DROP -A fail2ban-ssh -j RETURN -A fail2ban-ssh-ddos -j RETURN -A fail2ban-webserver-w00tw00t -s 217.70.51.154/32 -j DROP -A fail2ban-webserver-w00tw00t -s 86.35.242.58/32 -j DROP -A fail2ban-webserver-w00tw00t -j RETURN COMMIT # Completed on Thu Oct 18 22:05:33 2012 also here is my openvpn server configuration port 1194 proto udp dev tun ca ca.crt cert server.crt key server.key dh dh1024.pem server 10.8.0.0 255.255.255.0 ifconfig-pool-persist ipp.txt keepalive 10 120 comp-lzo user nobody group users persist-key persist-tun status /var/log/openvpn/openvpn-status.log verb 3 push "redirect-gateway def1" push "dhcp-option DNS 213.133.98.98" push "dhcp-option DNS 213.133.99.99" push "dhcp-option DNS 213.133.100.100" client-to-client

    Read the article
