Search Results

Search found 30819 results on 1233 pages for 'software security'.

  • Connecting to Server 2008 shares fails

    - by Chris J
    I'm having problems getting a reliable share working on an x64 Server 2008 R1 SP1 server. All works well after a reboot, but at some point within a day the shares become unavailable to XP and Server 2003 machines. Interestingly, they remain available to other Server 2008 servers. On trying to access \\server\share, Server 2003 returns immediately with the message "The specified network name is no longer available"; XP takes a minute or two to time out before giving the same message. There doesn't seem to be anything in the event logs indicating a problem. Doing some googling over the last day or two I've seen the following blamed:

    - Bad network drivers: I've updated to the latest drivers with no result
    - Symantec anti-virus: we're not using it (currently no AV on the server)
    - Receive window auto-tuning: I've disabled it with netsh int tcp set global autotuninglevel=disabled and netsh int tcp set global rss=disabled

    None of these have had an effect. Windows Firewall is currently disabled. As other Server 2008 boxes (both x86 and x64) can connect, I can only assume there's some new security configuration that's not quite right, or an AD issue I need to trace, but I don't know where to start. Even if no one knows how to resolve this, a pointer to what I should look for with Wireshark would be a help.
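
    As a starting point for that packet capture, a minimal sketch (assuming tshark is available and eth0 is the capture interface on an affected client): record only the SMB conversation with the server, then look at who sends the last packet before the failure; a bare TCP RST from the server, or an SMB error status just before the drop, usually narrows the suspects considerably.

        # run on an XP/2003 client while reproducing the failure
        tshark -i eth0 -f "host <server-ip> and (port 445 or port 139)" -w smb-fail.pcap
        # then inspect in Wireshark with a display filter such as:
        #   smb || smb2 || tcp.flags.reset == 1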

  • Access an external SSH server through a restrictive proxy [on hold]

    - by Cyrille
    I'm a software developer. I wish to access my computer at home through SSH; for example, I sometimes need to access my personal projects' source code to check how I handled specific problems. Unfortunately, I currently work behind an over-restrictive and anti-productive proxy that wastes a hell of a lot of everyone's time (we often have to visit websites from our smartphones, or use a web proxy, just to check very legitimate websites for answers, and don't get me started on the other "security" overkill features we have to cope with...). Well, back to the subject: I can access my home computer from my phone (SSH, ports 22 and 80 both redirected by the router to port 22). It works, but it's quite uncomfortable. From my office computer, this is what I tried so far:

        export http_proxy=http://user:pass@proxyip:8080
        echo "user:pass" > ~/.corkscrew-auth
        echo "ProxyCommand corkscrew proxyip 8080 %h %p /home/me/.corkscrew-auth" > ~/.ssh/config
        ssh 82.23.34.56 -l me -p 80

    which gives:

        Proxy could not open connnection to 82.23.34.56: Forbidden
        ssh_exchange_identification: Connection closed by remote host

    (the same happens without -p 80). Without corkscrew:

        ssh: connect to host 82.23.34.56 port 80: Connection timed out
        ssh: connect to host 82.23.34.56 port 22: Connection timed out

    Any other ideas?
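
    One more thing worth trying, sketched here on the assumption (common with corporate proxies) that HTTP CONNECT is only permitted to port 443: forward port 443 on the home router to port 22 as well, and scope the corkscrew ProxyCommand to a single host alias so it doesn't affect other ssh connections:

        # ~/.ssh/config -- "home" is a hypothetical alias
        Host home
            HostName 82.23.34.56
            Port 443
            User me
            ProxyCommand corkscrew proxyip 8080 %h %p /home/me/.corkscrew-auth

    and then connect with just: ssh home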

  • Slow, choppy video playback with nVidia 8600GT

    - by user5351
    I have an nVidia 8600GT card (made by EVGA) in a machine with Windows Vista (AMD Athlon X2 processor) and four gigs of RAM. It runs pretty well, but I have had slow/choppy/stuttering playback whenever watching Flash videos on YouTube or other sites. The problem is there with both Firefox and IE, but is maybe worse in Firefox. I also tried Linux with nVidia's binary drivers and it was about the same. I downloaded EVGA Precision, which lets me control things like the fan and clock speed. The card's temperature (in both Vista and Linux) is usually 66C when idle (not playing a game or watching anything), and goes up a little when watching a video (maybe 68-72C). Any ideas on how to fix this? UPDATE: The issue occurs with both full-screen and embedded Flash videos. I have Flash 10.0.32.18 (I always make sure I use the most recent version, for security), and the CPU is an AMD Athlon 64 X2 Dual Core Processor 4000+ at 2.11 GHz. The current GPU driver installed is the most recent GeForce one from last July.

  • Single application through OpenVPN tunnel (Debian Lenny)

    - by user14124
    I'm using Debian Lenny and I want to tunnel rtorrent, and only rtorrent, through an OpenVPN tunnel. I have a tunnel running; the config file looks like this:

        client
        dev tun
        proto udp
        remote openvpn.xxx.com 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca /etc/openvpn/xxx/keys/ca.crt
        cert /etc/openvpn/xxx/keys/client.crt
        key /etc/openvpn/xxx/keys/client.key
        tls-auth /etc/openvpn/xxx/keys/tls.key 1
        ns-cert-type server
        comp-lzo
        verb 3
        auth-user-pass
        script-security 3
        reneg-sec 0

    My idea is that I could run a sockd proxy internally that redirects traffic to the OpenVPN tunnel, and use the *nix "proxifier" application tsocks to make it possible for rtorrent to connect through that proxy (as rtorrent doesn't support proxies). The trouble is configuring sockd, because my IP inside the VPN changes every time I connect. This is a config file someone said would help: http://ircpimps.org/sockd.conf. But as my IP changes at each connect, I don't know what to put in it. I have no control over the host-side config. Any help wanted; any other method is very welcome.
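
    One way to cope with the changing address, sketched as an idea rather than a tested recipe: have OpenVPN regenerate the sockd config on every connect via an up script. OpenVPN exports the assigned address to such scripts in the $ifconfig_local environment variable (and the device name in $dev), so a template can be filled in before (re)starting sockd. The template markers and script path here are hypothetical:

        #!/bin/sh
        # /etc/openvpn/update-sockd.sh -- referenced from the client config as:
        #   up /etc/openvpn/update-sockd.sh
        sed -e "s/@VPNIP@/$ifconfig_local/" -e "s/@VPNDEV@/$dev/" \
            /etc/sockd.conf.template > /etc/sockd.conf
        /etc/init.d/danted restart   # or however sockd is started on your system

    Also worth checking: if memory serves, Dante's "external:" directive accepts an interface name (e.g. external: tun0) instead of a literal address, which would sidestep the problem entirely.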

  • DFS Root namespace is RDWR for all users

    - by Patrick
    We have an existing DFS Replication and Namespace setup that we use to serve the company's files. It has been operating fine for some time now, and continues to do so. However, a situation arose yesterday afternoon that has us stumped. Our namespace is presented as:

        \\domain.co.uk\public\[8 or 9 folders that are mapped to users in the business]

    We had a problem this morning when a number of users started mapping their AD home drive directly to \\domain.co.uk\public and we found that they had read/write there. This rapidly became a problem, as at least one director saved some moderately sensitive documents in there and basically anyone could read them. I've tidied up that specific problem with some deft scripting and a slight modification of group policy. However, I would like to make \public read-only; the trouble is I can't work out where the ACLs for that folder are held. All the folders presented as \\domain.co.uk\public\[folder] are 'real' folders on logical volumes on our DFS servers, so they are secured with groups applied via the 'security' tab. I'd like to do the same on \public, but I can't find it. I have looked through, amongst other things, \Sysvol\domain.co.uk without finding it, and after a lot of clicking and a bit of reading I can't see how to lock it down. Any thoughts?
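
    One place to look, offered as an assumption based on default DFS behaviour rather than certain knowledge of your setup: the namespace root is backed by an ordinary shared folder on each namespace server, by default under C:\DFSRoots\<rootname>, and that folder's NTFS ACL is what governs \\domain.co.uk\public itself. If that holds on your servers, something along these lines on each namespace server would make the root read-only for ordinary users:

        rem hypothetical path: check where your namespace root share actually lives
        icacls C:\DFSRoots\public /grant "Domain Users:(OI)(CI)R"
        icacls C:\DFSRoots\public /remove:g Everyone

    Remember to repeat it on every server hosting the namespace, or the effective ACL will depend on which root target a client happens to hit.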

  • Copied a file with WinSCP; only WinSCP can see it

    - by nilbus
    I recently copied a 25.5 GB file from another machine using WinSCP. I copied it to C:\beth.tar.gz, and WinSCP can still see the file. However, no other app (including Explorer) can see it. What might cause this, and how can I fix it? The details that might or might not matter:

    - WinSCP shows the size of the file (C:\beth.tar.gz) correctly as 27,460,124,080 bytes, which matches the file size on the remote host
    - Neither Explorer, cmd (dir C:\), the 7-Zip archive program, nor any other File Open dialog can see beth.tar.gz under C:\
    - I have configured Explorer to show hidden files
    - I can move the file to other directories using WinSCP
    - If I try to move the file to Users\, UAC prompts me for administrative rights, which I grant, and I get this error: "Could not find this item. The item is no longer located in C:\"
    - When I try to transfer the file back to the remote host into a new directory, the transfer starts successfully and transfers data; it had about 30 minutes remaining when I left it for the night
    - The morning after the transfer, I was greeted with a message saying the connection to the server had been lost. I don't think this is relevant, since I did not tell it to disconnect after the file was done transferring, and it likely disconnected after the transfer finished
    - I'm using an old version of WinSCP, v4.1.8 from 2008
    - I can view the file's properties in WinSCP: Type of file: 7zip (.gz); Location: C:\; Attributes: none (not Read-only, Hidden, Archive, or Ready for indexing); Security: SYSTEM, my user, and the Administrators group have full permissions, with everything other than "special permissions" checked under Allow for all three

    What's going on?!

  • Upgrade of Ubuntu 8.10 distribution fails due to missing packages

    - by Tim
    I have a server that I've forgotten to upgrade for ages, which is still running Intrepid (8.10). I'd like to upgrade it to a newer release so that I can get security patches etc. I found some instructions that tell me to install the package update-manager-core, so I tried:

        $ sudo apt-get install update-manager-core

    but this fails, since some of the necessary packages can't be found:

        ...
        Err http://archive.ubuntu.com intrepid/main python-apt 0.7.7.1ubuntu4
          404 Not Found [IP: 91.189.88.40 80]
        Err http://archive.ubuntu.com intrepid-updates/main update-manager-core 1:0.93.34
          404 Not Found [IP: 91.189.88.40 80]
        Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/p/python-apt/python-apt_0.7.7.1ubuntu4_amd64.deb  404 Not Found [IP: 91.189.88.40 80]
        Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/u/update-manager/update-manager-core_0.93.34_amd64.deb  404 Not Found [IP: 91.189.88.40 80]
        ...

    I know that Intrepid is no longer supported, so I guess some of the necessary files may no longer be maintained. But this seems rather unhelpful: I can't upgrade because it's too old, and the only way to fix that would be to upgrade. Is there a way round this? Is something else wrong?
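
    For what it's worth, packages for end-of-life Ubuntu releases are moved off the main mirrors to old-releases.ubuntu.com, so pointing sources.list there usually gets apt working again long enough to upgrade release by release. A minimal sketch (the sed keeps a backup of the original file):

        sudo sed -i.bak 's/\(archive\|security\)\.ubuntu\.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
        sudo apt-get update
        sudo apt-get install update-manager-core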

  • Thunderbird 15.0.1 cannot use Exchange 2003 SMTP

    - by speedreeder
    I'm having the strangest time getting a Thunderbird email client to connect to my Exchange 2003 server. I got the incoming IMAP account set up with no problem, and I can receive mail. However, sending mail will not work no matter what SMTP settings I enter. After checking the server, the proper settings should be port 25 with no authentication or connection security, which is what I have entered. I can ping the hostname of the server from the client machine in question. The Thunderbird error message I get is:

        Sending of message failed. The message could not be sent because the connection
        to SMTP server -hostname omitted- was lost in the middle of the transaction.

    So I went to the server and double-checked the settings for Exchange's SMTP stuff; I have them correct. I tried to telnet (on the server) to localhost 25. It appears to connect and then disconnect immediately: no message, no nothing. When I telnet to other ports (POP3, port 110, for example) I get proper connection messages and a stable connection. There are no firewalls on either the client or the server. There's a firewall on the network, but LAN-to-LAN traffic is unrestricted. I can reproduce the Thunderbird error on a second client, and I can't get any client to telnet in. Anyone have any ideas?
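
    For reference while testing: a healthy telnet to an Exchange SMTP service should greet you with a 220 banner before you type anything, roughly like the sketch below (the exact banner text varies with version and configuration). Dying before the banner even appears tends to implicate something wrapped around the SMTP service itself, such as an SMTP-aware antivirus or transport scanner, rather than the network.

        telnet localhost 25
        220 server.example.local Microsoft ESMTP MAIL Service ...
        EHLO test
        250-server.example.local Hello [127.0.0.1]
        QUIT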

  • Can't access YouTube

    - by Agentleader1
    I cannot seem to connect to YouTube at all. If I connect directly to YouTube (youtube.com) I get an error page, and if I try to open a video directly (youtube.com/watch?v=...) I get another (screenshots not shown). Here's why I believe this isn't a website or wifi issue: I've tried to connect from another computer on my wifi and it worked, so it's a problem local to this machine, and I believe I have malware on this computer. So the question is: how do I get rid of it, or what else could the issue be? I've tried my best to remove malware and have also tried disabling possibly malicious browser extensions; everything came out the same: no help. I'm also unable to find these hiding viruses; I used Malwarebytes and Microsoft Security Essentials. Edit: This is obviously Windows. I am currently running a Lifebook laptop (manufactured by Fujitsu America, Inc.) with Windows 7 Home Premium, 4 GB of RAM, 64-bit OS, and an Intel(R) Core(TM) i3-3110M CPU @ 2.40GHz. nslookup youtube.com: (output not shown)
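
    Given that other machines on the same network work, one cheap check (this family of malware commonly hijacks name resolution rather than the browser itself) is whether the hosts file or DNS answers have been tampered with; a sketch, run from a command prompt:

        rem any youtube/google entries planted in the hosts file?
        type C:\Windows\System32\drivers\etc\hosts | findstr /i "youtube google"
        rem does your configured resolver agree with a known-good public one?
        nslookup youtube.com
        nslookup youtube.com 8.8.8.8

    If the two nslookup answers differ wildly, or the hosts file pins youtube.com to an odd address, that's the thread to pull on.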

  • What Logs / Process Stats to monitor on a Ubuntu FTP server?

    - by Adam Salkin
    I am administering a server running Ubuntu Server and pureFTP. So far all is well, but I would like to know what I should be monitoring so that I can spot any potential stability and security issues. I'm not looking for sophisticated software, more an idea of which logs and process statistics are most useful for checking on the health of the system. I'm thinking that I can look at various parameters output by the "ps" command and compare them over time to spot things like memory leaks, but I would like to know what experienced admins do. Also, how do I do a disk check so that when I reboot I don't get a message like "disk not checked for x days, forcing check", which delays the reboot? I assume there is a command I can run as a cron job late at night; how often should it be run? What should I be looking at to spot intrusion attempts? The only shell access is SSH on a non-standard port through the UFW firewall, and I regularly grep auth.log for "Fail" or "Invalid". Is there anything else I should look at? I was logging the firewall (UFW), but I have very few open ports (FTP and SSH on a non-standard port), so looking at lists of blocked IPs did not seem useful. Many thanks
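
    On the forced-check question specifically, a sketch for ext2/3/4 filesystems (assuming /dev/sda1; adjust to your device): the mount-count and interval thresholds that trigger the boot-time check live in the superblock and are tunable with tune2fs. A full fsck can only run safely on an unmounted filesystem, which is why it happens at boot rather than from cron; what you control is when the thresholds fall due.

        # see the current mount-count and interval thresholds
        sudo tune2fs -l /dev/sda1 | grep -i -e 'mount count' -e 'check'
        # e.g. check every 50 mounts or every 6 months, whichever comes first
        sudo tune2fs -c 50 -i 6m /dev/sda1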

  • Linux file server for an inexperienced admin

    - by Pat
    A charity I volunteer for wants a file server for their mostly-Windows machines (about five XP and 7 machines, with some Mac laptops every now and then). For the server, I have a PC with an Intel Core 2 Duo 3GHz processor, 4GB of DDR2-400 RAM, and a 500GB HDD. (I should point out that they do not currently have any server; they are just using Windows to share a folder on one of the PCs.) What is a Linux distro that is easy to configure for Windows file serving, yet stable and secure enough to protect sensitive data without an expert sysadmin? I'm guessing that a Debian-based distro would probably fit the security bill, but I don't know of any tailored to novice sysadmins. Also, are there any killer apps for making this easy to set up and administer (as a Windows file server in particular; this answer is a good example)? Would FreeNAS be sufficient? Once it's all set up, what are the minimum measures I need to take to keep the data secure? I found this somewhat helpful answer, but it's not specific to my question of just getting a secure file server up, running, and maintained.
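
    Whichever distro is chosen, the component doing the Windows-compatible serving will almost certainly be Samba, and the core of the job is a small config file. A minimal sketch of an smb.conf share, assuming a "staff" group and a data directory at /srv/share (both placeholders):

        [global]
            workgroup = WORKGROUP
            security = user
            map to guest = Bad User

        [shared]
            path = /srv/share
            valid users = @staff
            read only = no
            create mask = 0660
            directory mask = 0770

    Each person then needs a Unix account plus a Samba password (smbpasswd -a username); the XP, 7 and Mac machines can all mount the share natively.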

  • 403 Forbidden on Apache (CentOS) Server

    - by pouya
    This is my VM setup:

        HOST: Windows 7 Ultimate 32-bit
        GUEST: CentOS 6.3 i386
        Virtualization software: Oracle VirtualBox 4.1.22
        Networking: NAT (port forward: HOST:8080 => GUEST:80)
        Shared folder: centos

    All the project files go into the shared folder, and for each project a virtual host conf file is created in /etc/httpd/conf.d/ as /etc/httpd/conf.d/$domain. I wasn't able to see anything in my browser before disabling both the Windows firewall and iptables in CentOS. After that, if I type for example http://www.$domain:8080/, all I see is:

        Forbidden
        You don't have permission to access / on this server.
        Apache/2.2.15 (CentOS) Server at www.$domain.com Port 8080

    A sample virtual host conf file:

        <VirtualHost *:80>
            #General
            DocumentRoot /media/sf_centos/path/to/public_html
            ServerAdmin webmaster@$domain
            ServerName www.$domain
            ServerAlias $domain *.$domain

            #Logging
            ErrorLog /var/log/httpd/$domain-error.log
            CustomLog /var/log/httpd/$domain-access.log combined

            #mod_rewrite
            RewriteEngine On
            RewriteLog /var/log/httpd/$domain-rewrite.log
            RewriteLogLevel 0
        </VirtualHost>

    The centos shared folder is available to the guest at /media/sf_centos. These are the file permissions for sf_centos:

        drwxrwx--- root vboxsf

    The vboxsf group includes apache and root. So these are my questions:

    1. How do I solve the Forbidden problem?
    2. How should I set up both host and guest firewalls?
    3. How can I improve this development environment to simulate a production environment as much as possible, especially security improvements?
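
    Two likely suspects to rule out first, assuming a stock CentOS 6 / Apache 2.2 setup: the DocumentRoot isn't covered by any Allow directive (the default config only opens /var/www), and SELinux denying httpd access to the VirtualBox shared folder. A sketch:

        # inside the <VirtualHost> (Apache 2.2 syntax):
        <Directory /media/sf_centos/path/to/public_html>
            Options FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

        # and, to test whether SELinux is the blocker (temporarily!):
        setenforce 0

    If setenforce 0 makes the site appear, turn enforcement back on and fix the file contexts or the relevant httpd boolean instead of leaving it off; that also speaks to question 3, since a production box would keep both SELinux and iptables enabled with explicit allowances.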

  • Can I restore one of my user's profiles in Vista?

    - by Rod
    My youngest daughter uses my four-year-old laptop, which has Windows Vista installed. Somehow she got a trojan (Vista Internet Security); I'd love to know how that happened, seeing as she is a standard user and I have VIPRE as my AV. Anyway, I ran a deep anti-virus scan using VIPRE, which identified it, and I decided to delete everything it identified. Now she cannot use anything in her profile. If she tries to bring up the browser, a dialog box asking which program to use comes up over and over again. If I try to run any program at all, Windows doesn't know what to do with it; for example, it is totally lost trying to run the command line. If I bring up Windows Explorer, navigate to Windows\System32, and try to run the command line or anything else from there, it goes "Huh?" What on earth has happened? Is it possible to fix this, and if so, how? As an aside, I can log into my account on that machine (an administrator) and it works fine.
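
    The symptoms (every program prompting for something to open it with) look like the classic aftermath of that fake-AV family: it hijacks the .exe file association in the infected user's registry hive, and removing the trojan leaves the association pointing at a program that no longer exists. A hedged sketch of the usual repair, run from your admin account while your daughter's account is logged off (the profile path and hive name are placeholders):

        rem load the affected user's registry hive
        reg load HKU\FixMe "C:\Users\daughter\NTUSER.DAT"
        rem delete the hijacked per-user .exe association
        reg delete "HKU\FixMe\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\.exe" /f
        reg unload HKU\FixMe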

  • Replicated filesystem and EC2 MySQL

    - by El Yobo
    I'm currently investigating migrating our infrastructure over to Amazon EC2 and am trying to figure out the best way to set up a MySQL service. I'm leaning towards running our own MySQL instances rather than going with Amazon's RDS, but am still considering the best approach for performance and cost on the instance itself. In order to have persistent data, the MySQL data needs to be on an EBS volume (with some form of striped RAID, e.g. RAID0 or RAID10). However, EBS I/O is limited by the network interface (gigabit, so a theoretical maximum of 128 MB/s), while the ephemeral volumes have no such problem. I did see a suggestion to run two MySQL servers on an instance, with a master on the (also RAIDed) ephemeral disk and a slave storing changes to an EBS volume, but this has additional overhead and complexity (two servers). What I was imagining is some form of replicated file system, such that I could have:

    - a filesystem on top of a RAID0 of ephemeral volumes to maximise performance
    - all changes from the above immediately replicated to another RAID1 volume backed by multiple EBS volumes, to ensure no data loss

    The advantages of this would be:

    - best possible I/O performance for the DB server, with no network delay in I/O
    - decreased I/O on the EBS volumes (as all read I/O would be done on the ephemeral volumes), so decreased cost
    - good data security, as it's backed onto redundant EBS volumes

    However, I haven't seen an appropriate system to replicate all changes from one volume to the other; is there a filesystem, or any other approach, which will do this? The distributed file systems (GlusterFS, DRBD, etc.) seem to focus on replicating disks between servers; can they be set up to do what I'm interested in here? I also haven't seen anything about others taking this approach. Do I have a solution in need of a problem here (i.e. is EBS performance good enough, making this whole idea redundant)? Is there some flaw in the plan?
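
    One block-level approach that matches this read-fast/write-safe split, offered as a sketch rather than a tested recommendation (device names are placeholders): Linux md RAID1 supports per-device write-mostly and write-behind flags, so an ephemeral RAID0 can be mirrored with an EBS-backed device such that all reads are served from the ephemeral side and writes to the EBS side are allowed to lag slightly:

        # /dev/md0 = RAID0 of ephemeral disks (fast), /dev/xvdf = EBS-backed device
        mdadm --create /dev/md1 --level=1 --raid-devices=2 \
              --bitmap=internal --write-behind=256 \
              /dev/md0 --write-mostly /dev/xvdf
        mkfs.ext4 /dev/md1

    The write-behind window means an ill-timed instance failure can lose the last few in-flight writes, so it trades a sliver of durability for latency; whether that is acceptable under InnoDB's crash-recovery assumptions is the question to answer before relying on it.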

  • RAIDs with a lot of spindles - how to safely put to use the "wasted" space

    - by kubanczyk
    I have a fairly large number of RAID arrays (server controllers as well as midrange SAN storage) that all suffer from the same problem: barely enough spindles to keep up with peak I/O, and tons of unused disk space. I guess it's a universal issue, since vendors offer 300 GB as their smallest drives, but random I/O performance hasn't really grown much since the days when the smallest drives were 36 GB. One example is a database that occupies 300 GB and needs random performance of 3200 IOPS, so it gets 16 disks (4800 GB minus 300 GB leaves 4.5 TB of wasted space). Another common example is redo logs for an OLTP database that is sensitive to response time: the redo logs get their own 300 GB mirror but take up 30 GB, so 270 GB is wasted. What I would like to see is a systematic approach for both Linux and Windows environments. How to set up the space so the sysadmin team is reminded of the risk of hindering the performance of the main db/app? Or, even better, is protected from that risk? The typical situation that comes to my mind is: "oh, I have this very large zip file, where do I uncompress it? Hmm, let's see df -h and we'll figure something out in no time..." I don't put the emphasis on strictness of security (sysadmins are trusted to act in good faith), but on the overall simplicity of the approach. For Linux, it would be great to have a filesystem customized to cap its I/O rate at a very low level: is this possible?
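
    Not quite a filesystem feature, but on reasonably recent Linux kernels the blkio cgroup controller can cap the I/O rate of a process tree against a specific block device, which achieves the same end: carve the leftover space into a scratch filesystem and require anything touching it to run inside a throttled group. A sketch (the cgroup mount point, group name and the 8:16 major:minor pair are placeholders; check yours with ls -l /dev):

        mkdir /sys/fs/cgroup/blkio/scratch
        # cap reads and writes against device 8:16 (sdb) at 5 MB/s each
        echo "8:16 5242880" > /sys/fs/cgroup/blkio/scratch/blkio.throttle.read_bps_device
        echo "8:16 5242880" > /sys/fs/cgroup/blkio/scratch/blkio.throttle.write_bps_device
        # run the space-hungry job inside the group
        echo $$ > /sys/fs/cgroup/blkio/scratch/tasks
        unzip verylarge.zip -d /scratch/tmp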

  • How to recover a Windows password without reinstalling, if you forgot it?

    - by user38908
    Usually, we can recover a Windows admin password in two traditional ways. The first is to change the password from another admin account; the second is to reset the forgotten password with a Windows password reset disk created before you forgot it. Take Windows XP for example:

    1. At the Windows XP login prompt, when the password has been entered incorrectly, click the reset button in the login-failed window.
    2. Insert the password reset diskette into the computer and click Next.
    3. If the diskette is correct, Windows XP will open a window prompting for the new password you wish to use.

    However, we often ignore the importance of security until we have been locked out of the computer. Fortunately, there is still a last way to unlock the computer without reinstalling: erase the Windows password with a Windows password reset CD, which can recover admin passwords for Windows 7/XP/Vista/NT/2000/2003. Taking Windows Password Unlocker as an example, these are the steps to create the reset CD:

    1. Download Windows Password Unlocker from the Password Unlocker official site.
    2. Decompress it and note that there is an .ISO image file. Burn the image onto a blank CD with the burner supplied free with Password Unlocker.
    3. Insert the newly created CD into the locked computer and re-boot it from the CD drive.
    4. After the CD launches, a window pops up with all your account names (if you have several accounts); select the account whose password you have forgotten and reset it. With one press, this software can remove the Windows password instantly.
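
    For completeness, the first method (changing the password from another admin account) needs no extra software at all; from an elevated command prompt it is a single line (account name and password are placeholders):

        net user forgetfuluser NewP@ssw0rd

    Be aware that any administrative reset, whether by this route or by a boot CD, loses access to the user's EFS-encrypted files and stored credentials tied to the old password.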

  • Rsync root files between systems without specifying password

    - by xpt
    This seems very tricky to me. I've set up my two systems so that I can rsync files between them as myself, without specifying a password. The problem now is to rsync files that belong to root. On both of my systems there are no root passwords; the only way to become root is via sudo. So I can neither give a password for sudo rsync local root@remote:, nor use my ssh-agent to supply a pass phrase. I don't want to set up a root password on either system, and I do need the files to be owned by root on both systems.

    EDIT: Files belonging to root are just an example; I need a way for my unprivileged account to read/write system (including root-owned) files easily. One example is copying my configured /root environment into a freshly-installed system. The two systems are actually two VMs under a single host, so copying root-owned files between them is not a big security concern for me.

    EDIT 2: If I only want to copy my configured /root environment into the freshly-installed system, I can use tar:

        sudo tar cvzf - /root | ssh me@remote sudo tar xvzf - -C /

    But I do need rsync to update from time to time. Any easy way to make it happen?

    EDIT 3: To formulate the question formally: it all began with the question of how to rsync files that belong to root between two systems, as a normal unprivileged user, without specifying a password, under these conditions:

    - The root account is locked on both systems, i.e. there are no root passwords; the only way to become root is via sudo (a recommended security practice, see http://help.ubuntu.com/community/RootSudo)
    - I don't want a completely passwordless sudo, but I don't want to be typing passwords all the time either
    - The normal unprivileged user has entered their ssh pass phrase into the ssh agent

    Thanks
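
    The usual pattern for exactly these constraints, sketched with stock Debian/Ubuntu paths: let the remote side elevate only rsync, by granting the unprivileged user passwordless sudo for that one binary, then tell the local rsync to invoke the remote one through sudo:

        # on the remote system, via visudo:
        me ALL=(root) NOPASSWD: /usr/bin/rsync

        # from the local system (local sudo still asks for your own password):
        sudo rsync -av --rsync-path="sudo rsync" /root/ me@remote:/root/

    The obvious caveat: passwordless rsync-as-root is effectively root access for that account, which sounds acceptable for two VMs under one host as described, but is worth stating.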

  • Permissions Issue with Files Generated by PerfMon

    - by SvrGuy
    We are trying to implement data logging to CSV files using a Data Collector Set in PerfMon (on a Windows Server 2008 R2 system). The issue we are running into is that we (seemingly) can't control the permissions set on the log files perfmon creates. What we want is for the log files to have Everyone:F (Full Control for Everyone). We have a directory structure where all logs go into a folder per machine, c:\vms\PerfMonLogs\%MACHINENAME% (e.g. c:\vms\PerfMonLogs\EvaluationG2). In this example, c:\vms\PerfMonLogs\EvaluationG2 has Everyone:F permissions; below is the icacls output for this directory:

        EVALUATIONG2\ Everyone:(OI)(CI)(F)
                      NT AUTHORITY\SYSTEM:(OI)(CI)(F)
                      BUILTIN\Administrators:(OI)(CI)(F)
                      BUILTIN\Performance Log Users:(OI)(R)

    When the data collector set runs, it creates new subfolders and files within c:\vms\PerfMonLogs\EvaluationG2 (e.g. C:\vms\PerfMonLogs\EVALUATIONG2\M11d26y2012N3), and each of these has the following permissions:

        M11d26y2012N3 NT AUTHORITY\SYSTEM:(OI)(CI)(F)
                      BUILTIN\Administrators:(OI)(CI)(F)
                      BUILTIN\Performance Log Users:(OI)(R)

    So the new folders are not simply inheriting permissions from the parent folder (we don't know why). We tried adding Everyone:F using the security tab on the collector set (no dice). Any ideas? How do we control the permissions on the log files generated by a perfmon data collector set?
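
    If the collector set can't be persuaded to inherit ACLs, one blunt but workable fallback (a sketch, not the "proper" fix): a data collector set can run a task when collection stops (the Task setting in the collector set's properties), and that task can re-grant permissions over whatever was just written:

        icacls "C:\vms\PerfMonLogs\%COMPUTERNAME%" /grant Everyone:(OI)(CI)F /T

    The /T switch pushes the grant down onto the newly created subfolders and files.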

  • How to configure iptables to use apt-get in a server?

    - by segaco
    I'm starting to use iptables (newbie) to protect a Linux server (specifically Debian 5.0). Before I configure the iptables settings, I can use apt-get without a problem. But after I configure iptables, apt-get stops working. This is the script I use:

        #!/bin/sh
        IPT=/sbin/iptables

        ## FLUSH
        $IPT -F
        $IPT -X
        $IPT -t nat -F
        $IPT -t nat -X
        $IPT -t mangle -F
        $IPT -t mangle -X

        $IPT -P INPUT DROP
        $IPT -P OUTPUT DROP
        $IPT -P FORWARD DROP

        $IPT -A INPUT -i lo -j ACCEPT
        $IPT -A OUTPUT -o lo -j ACCEPT

        $IPT -A INPUT -p tcp --dport 22 -j ACCEPT
        $IPT -A OUTPUT -p tcp --sport 22 -j ACCEPT
        $IPT -A INPUT -p tcp --dport 80 -j ACCEPT
        $IPT -A OUTPUT -p tcp --sport 80 -j ACCEPT
        $IPT -A INPUT -p tcp --dport 443 -j ACCEPT
        $IPT -A OUTPUT -p tcp --sport 443 -j ACCEPT

        # Allow FTP connections @ port 21
        $IPT -A INPUT -p tcp --sport 21 -m state --state ESTABLISHED -j ACCEPT
        $IPT -A OUTPUT -p tcp --dport 21 -m state --state NEW,ESTABLISHED -j ACCEPT

        # Allow Active FTP Connections
        $IPT -A INPUT -p tcp --sport 20 -m state --state ESTABLISHED,RELATED -j ACCEPT
        $IPT -A OUTPUT -p tcp --dport 20 -m state --state ESTABLISHED -j ACCEPT

        # Allow Passive FTP Connections
        $IPT -A INPUT -p tcp --sport 1024: --dport 1024: -m state --state ESTABLISHED -j ACCEPT
        $IPT -A OUTPUT -p tcp --sport 1024: --dport 1024: -m state --state ESTABLISHED,RELATED -j ACCEPT

        #DNS
        $IPT -A OUTPUT -p udp --dport 53 --sport 1024:65535 -j ACCEPT
        $IPT -A INPUT -p tcp --dport 1:1024
        $IPT -A INPUT -p udp --dport 1:1024
        $IPT -A INPUT -p tcp --dport 3306 -j DROP
        $IPT -A INPUT -p tcp --dport 10000 -j DROP
        $IPT -A INPUT -p udp --dport 10000 -j DROP

    Then when I run apt-get I get:

        core:~# apt-get update
        0% [Connecting to ftp.us.debian.org] [Connecting to security.debian.org] [Conne

    and it stalls. What rules do I need to make it work? Thanks
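
    One likely gap, offered as a diagnosis from reading the script: with an OUTPUT policy of DROP, the rules above let the box act as an HTTP server (dport 80 in, sport 80 out) but never as an HTTP client, which is exactly what apt-get is; and the replies to the permitted outbound DNS queries are never accepted on INPUT either. A sketch of the missing pieces:

        # outbound HTTP as a client (apt-get), plus the replies
        $IPT -A OUTPUT -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
        $IPT -A INPUT  -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT
        # replies to our outbound DNS queries
        $IPT -A INPUT  -p udp --sport 53 -m state --state ESTABLISHED -j ACCEPT

    More generally, a single ESTABLISHED,RELATED accept rule on INPUT (and one on OUTPUT) would replace most of the per-port reply rules and is far less error-prone. Note also that the two INPUT rules for ports 1:1024 near the end have no -j target, so as written they do nothing.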

  • MySQLTuner and query_cache_size dilemma

    - by wbad
    On a busy MySQL server, MySQLTuner 1.2.0 always recommends increasing query_cache_size, no matter how much I raise the value (I've tried up to 512MB). On the other hand, it warns that increasing the query_cache size over 128M may reduce performance. Here are the latest results:

        >> MySQLTuner 1.2.0 - Major Hayden <[email protected]>
        >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >> Run with '--help' for additional options and output filtering

        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.5.25-1~dotdeb.0-log
        [OK] Operating on 64-bit architecture

        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in InnoDB tables: 6G (Tables: 195)
        [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
        [!!] Total fragmented tables: 51

        -------- Security Recommendations -------------------------------------------
        [OK] All database users have passwords assigned

        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 1d 19h 17m 8s (254M q [1K qps], 5M conn, TX: 139B, RX: 32B)
        [--] Reads / Writes: 89% / 11%
        [--] Total buffers: 24.2G global + 92.2M per thread (1200 max threads)
        [!!] Maximum possible memory usage: 132.2G (139% of installed RAM)
        [OK] Slow queries: 0% (2K/254M)
        [OK] Highest usage of available connections: 32% (391/1200)
        [OK] Key buffer size / total MyISAM indexes: 128.0M/92.0K
        [OK] Key buffer hit rate: 100.0% (8B cached / 0 reads)
        [OK] Query cache efficiency: 79.9% (181M cached / 226M selects)
        [!!] Query cache prunes per day: 1033203
        [OK] Sorts requiring temporary tables: 0% (341 temp sorts / 4M sorts)
        [OK] Temporary tables created on disk: 14% (760K on disk / 5M total)
        [OK] Thread cache hit rate: 99% (676 created / 5M connections)
        [OK] Table cache hit rate: 22% (1K open / 8K opened)
        [OK] Open file limit used: 0% (49/13K)
        [OK] Table locks acquired immediately: 99% (64M immediate / 64M locks)
        [OK] InnoDB data size / buffer pool: 6.1G/19.5G

        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            Reduce your overall MySQL memory footprint for system stability
            Increasing the query_cache size over 128M may reduce performance
        Variables to adjust:
          *** MySQL's maximum memory usage is dangerously high ***
          *** Add RAM before increasing MySQL buffer variables ***
            query_cache_size (> 192M) [see warning above]

    The server has 76GB of RAM and dual E5-2650s, and the load is usually below 2. I'd appreciate your hints on how to interpret the recommendation and optimize the database config.
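
    To judge the trade-off directly rather than through MySQLTuner's heuristic, the raw counters are worth watching: a high prune rate alongside a high hit rate usually means the cache is being invalidated faster than it pays off (every write to a table flushes all cached results touching that table), in which case growing the cache mostly buys more mutex contention, which is what the 128M warning is about.

        SHOW GLOBAL STATUS LIKE 'Qcache%';
        -- the interesting columns: Qcache_hits, Qcache_inserts,
        -- Qcache_lowmem_prunes, Qcache_free_memory
        SHOW GLOBAL VARIABLES LIKE 'query_cache%';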

  • Svchost.exe connecting to different IPs with remote port 445

    - by Coll911
    I'm using Windows XP Professional SP2. Whenever I start Windows, svchost.exe starts connecting to all the possible IPs on the LAN, from 192.168.1.2 up to 192.168.1.200; the local port ranges over 1000-1099 and the remote port is 445. After it's done with the local IPs, it starts connecting to other, random IPs. I tried blocking connections to port 445 using the local security policies, but it didn't work. Is there any way to prevent svchost from connecting to these IPs without installing a firewall? My PC slows down due to the load. I scanned my PC with MalwareBytes and found it was infected with a worm; the worm is deleted now, but svchost is still connecting to the IPs. I also found that in my Windows Firewall settings, under Internet Control Message Protocol (ICMP), there's a tick on "allow incoming echo request" (usually disabled) which is locked so that I can't disable it. Its description is as follows: "Messages sent to this computer will be repeated back to the sender. This is used for troubleshooting, e.g. to ping a machine. Requests of this type are automatically allowed if TCP port 445 is enabled." Any solutions? I can't bear going through the Windows reinstall phase again.

  • Passive mode FTP file download hangs from specific machine

    - by chiptuned
    I have a server (an AWS instance) that just cannot download files from one specific FTP server. I can connect to the FTP server fine and run some commands, but when I request a file it just hangs. Here is the debug output of the basic Linux ftp client after login:

        ---> SYST
        215 UNIX Type: Apache FtpServer
        Remote system type is UNIX.
        ftp> get outgoing/catalog.gz catalog.gz
        local: catalog.gz remote: outgoing/catalog.gz
        ---> PASV
        227 Entering Passive Mode (64,156,167,125,135,191)
        ---> RETR outgoing/catalog.gz
        150 File status okay; about to open data connection.

    That's it; it just sits there and nothing transfers. I have verified that a data connection is made, but the client gets no data:

        ? ss -nt dst 64.156.167.125
        State   Recv-Q   Send-Q   Local Address:Port       Peer Address:Port
        ESTAB   0        0        10.185.147.150:41190     64.156.167.125:21
        ESTAB   0        0        10.185.147.150:48871     64.156.167.125:48557

    The FTP server is not under my control, and downloads from other FTP servers in passive mode have worked. Active mode does not work, as the system is behind a firewall. Every FTP client I've tried has the same problem. The download works from other systems, even from other AWS instances I have with the same security group (not necessarily the same distro or config, though). I understand it may be some issue on the server side, but I want to know what it is about this particular machine that makes the transfer hang when it works on every other machine I can get my hands on. Please let me know what the culprit on the client side could be, or ideas on what else to look at.
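
    A hang at exactly this point, with the control connection healthy and the data connection established but silent, is the classic signature of an MTU/path-MTU blackhole: small packets get through, full-size data packets don't. Two quick client-side checks, sketched with a placeholder interface name:

        # watch the data connection: do large packets arrive and then get retransmitted?
        sudo tcpdump -ni eth0 host 64.156.167.125 and not port 21
        # temporarily lower the MTU and retry the download
        sudo ip link set dev eth0 mtu 1400

    If the lower MTU makes the transfer complete, the real fix belongs in the instance's MTU settings or in MSS clamping, not in the FTP client.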

  • Server format & reinstall while keeping server & domain SID

    - by Chris
    Hi Everyone, I want to reinstall my 2008 R2 server from scratch, due to multiple Active Directory issues. I have only one server running AD and a spare machine to use if necessary. Is there a way to save just the user accounts and the domain SID, so that I can start with a clean server that uses the same name as before? I can reassign file security, but I do not want to have to rejoin all the users to a new domain. Also, all users have folders mapped on the server. What I hope to do is a clean install of the server without having to touch the users' machines. Can someone please tell me the procedure to accomplish this? Any help appreciated!

    Thanks guys, but I could be here all day telling you every error I am getting, so can we please keep this to the question of how to do a reinstall while keeping the same SID? I just want to start over without having to rejoin all the clients to a new domain. Is there a tool that can back up the server SID and the AD domain name so that I could restore them, without restoring any other data? I might not be using the correct terminology here, but hopefully you understand what I am asking.

  • Losing SQL connections

    - by john pavelka
    SQL Server 2005 Standard; one dedicated SQL Server (VM); Windows Server 2003; small databases. About once a week we lose all SQL connections; it seems to fix itself after about 5-10 minutes. The error is:

        System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException'
        was thrown. ---> System.Data.SqlClient.SqlException: Timeout expired. The timeout period
        elapsed prior to completion of the operation or the server is not responding.

    We don't have a fully qualified DBA; it's kind of a joint effort here. Can somebody give me some general troubleshooting ideas for the network side and the application side? We already ran a few tuning profiles and went through the Database Tuning Advisor to apply its indexing recommendations. It would sure be nice if there were a way to take a snapshot of what was running on SQL Server when these 100% CPU spikes occurred, because sometimes we're not around. Also, is it common to throttle CPU for certain processes, and can this be done on Windows Server 2003? For example, if a security app was spiking the CPU to 100%, is there a way to limit its CPU usage? Any advice is appreciated. Thanks.
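
    On the "snapshot of what was running" wish: SQL Server 2005's dynamic management views make that straightforward, so a SQL Agent job inserting something like the following into a table every minute gives you a flight recorder to read back after the next spike (dbo.ActivitySnapshot is a hypothetical logging table you would create first):

        INSERT INTO dbo.ActivitySnapshot
        SELECT GETDATE() AS captured_at,
               r.session_id, r.status, r.cpu_time, r.wait_type, r.blocking_session_id,
               t.text AS sql_text
        FROM sys.dm_exec_requests AS r
        CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
        WHERE r.session_id > 50;   -- ignore system sessions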

  • Transfer iptables rules to another server (almost) real time

    - by MrShunz
    I'm running two cPanel servers with the ConfigServer Security & Firewall (CSF) plugin. One of the plugin's functions is to block via iptables (temporarily and/or permanently) IPs which fail various authentications (POP3/IMAP, SMTP, FTP, webmail, mod_security and such). Now I'd like to push those IP blocks to the border router, to drop packets as soon as possible (and in doing so protect the other machines on the network). Keep in mind that after N failed logins an IP is blocked for 5 minutes, then re-allowed; if multiple bans occur within an hour, the IP is blocked permanently and must be unblocked by hand. So I need a near-real-time solution. What I'm looking for is a better way than firing cronjobs on both the cPanel boxes and the border router to:

    1. dump the rules to a file
    2. transfer the file to the border router (via scp/sftp)
    3. load the rules from the file on the border router

    I'm aware that I will need some scripts to parse and modify the rules, as the cPanel boxes have one ethernet interface plus some aliases while the border router has two ethernet interfaces and some loopbacks. All machines involved run Linux.

    EDIT, as per @pjmorse's comment: the plugin consists of a bunch of perl scripts and config files. The part I'm interested in is a process (lfd) which scans log files, installs iptables rules, and sends an alert email. It upgrades quite often (once or twice a week) and is itself 7000 lines of perl, so I'm not comfortable tampering with it.
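
    For the dump/transfer/load loop itself, the native pair iptables-save / iptables-restore pipes cleanly over ssh, so each sync can be a single command; a sketch, assuming CSF's blocks live in its usual DENYIN chain and that a dedicated EDGE_DENY chain has been created once on the router (both names should be verified against your setup, and rules containing quoted comments would need extra care):

        #!/bin/sh
        # mirror the current CSF deny list into the router's EDGE_DENY chain
        iptables-save | grep '^-A DENYIN ' | sed 's/DENYIN/EDGE_DENY/' \
            | ssh router 'iptables -F EDGE_DENY; while read -r rule; do iptables $rule; done'

    Since lfd rewrites its chains on every ban/unban, running this from a cron every minute, or triggering it with inotifywait on /etc/csf/csf.deny, gets you close to real time without touching the plugin's own perl.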
