Search Results

Search found 52729 results on 2110 pages for 'kaiz net'.


  • ssh keys rejected each day

    - by EddyR
    I've had OpenSSH server running on my Debian server for a couple of weeks, and all of a sudden, when I go to log in the next day, it rejects my SSH key and I have to manually add a new one each time. Not only that, but I have the "tunneling with clear-text passwords" option enabled, and the non-root account for that (login with root is disabled) is rejected too. I'm at a loss why this is happening and I can't find any ssh options that would explain it.

    --update-- I just changed the debug level to DEBUG. But before that, I'm seeing a lot of the following in auth.log:

        Feb 1 04:23:01 greenpages CRON[7213]: pam_unix(cron:session): session opened for user root by (uid=0)
        Feb 1 04:23:01 greenpages CRON[7213]: pam_unix(cron:session): session closed for user root
        ...
        Feb 1 04:36:26 greenpages sshd[7217]: reverse mapping checking getaddrinfo for nat-pool-xx-xx-xx-xx.myinternet.net [xx.xx.xx.xx] failed - POSSIBLE BREAK-IN ATTEMPT!
        ...
        Feb 1 04:37:31 greenpages sshd[7223]: Did not receive identification string from xx.xx.xx.xx
        ...

    My sshd_conf file settings are:

        # Package generated configuration file
        # See the sshd(8) manpage for details
        # What ports, IPs and protocols we listen for
        Port xxx
        # Use these options to restrict which interfaces/protocols sshd will bind to
        #ListenAddress ::
        #ListenAddress 0.0.0.0
        Protocol 2
        # HostKeys for protocol version 2
        HostKey /etc/ssh/ssh_host_rsa_key
        HostKey /etc/ssh/ssh_host_dsa_key
        #Privilege Separation is turned on for security
        UsePrivilegeSeparation yes
        # Lifetime and size of ephemeral version 1 server key
        KeyRegenerationInterval 3600
        ServerKeyBits 768
        # Logging
        SyslogFacility AUTH
        LogLevel DEBUG
        # Authentication:
        LoginGraceTime 120
        PermitRootLogin no
        StrictModes yes
        RSAAuthentication yes
        PubkeyAuthentication yes
        #AuthorizedKeysFile %h/.ssh/authorized_keys
        # Don't read the user's ~/.rhosts and ~/.shosts files
        IgnoreRhosts yes
        # For this to work you will also need host keys in /etc/ssh_known_hosts
        RhostsRSAAuthentication no
        # similar for protocol version 2
        HostbasedAuthentication no
        # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
        #IgnoreUserKnownHosts yes
        # To enable empty passwords, change to yes (NOT RECOMMENDED)
        PermitEmptyPasswords no
        # Change to yes to enable challenge-response passwords (beware issues with
        # some PAM modules and threads)
        ChallengeResponseAuthentication no
        # Change to no to disable tunnelled clear text passwords
        PasswordAuthentication yes
        # Kerberos options
        #KerberosAuthentication no
        #KerberosGetAFSToken no
        #KerberosOrLocalPasswd yes
        #KerberosTicketCleanup yes
        # GSSAPI options
        #GSSAPIAuthentication no
        #GSSAPICleanupCredentials yes
        X11Forwarding no
        X11DisplayOffset 10
        PrintMotd no
        PrintLastLog yes
        TCPKeepAlive yes
        #UseLogin no
        #MaxStartups 10:30:60
        #Banner /etc/issue.net
        # Allow client to pass locale environment variables
        AcceptEnv LANG LC_*
        Subsystem sftp /usr/lib/openssh/sftp-server
        UsePAM no
        ClientAliveInterval 60
        AllowUsers myuser
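    A quick diagnostic sketch (this is an assumption, not something confirmed in the question): with StrictModes yes, sshd silently ignores authorized_keys whenever the home directory, ~/.ssh, or the key file is group- or world-writable, which would look exactly like keys being "rejected each day" if something resets those permissions overnight.

        # check the ownership and permissions that StrictModes cares about
        ls -ld ~myuser ~myuser/.ssh ~myuser/.ssh/authorized_keys
        # typical safe values
        chmod 755 ~myuser
        chmod 700 ~myuser/.ssh
        chmod 600 ~myuser/.ssh/authorized_keys
        # with LogLevel DEBUG, the refusal reason should show up here
        grep -i 'authorized_keys\|authentication' /var/log/auth.log | tail -n 20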

    Read the article

  • Unable to start tomcat6: Java error (Exception in thread "main")

    - by Nyxynyx
    After installing tomcat6 on CentOS 6.3, I am unable to start the tomcat6 server.

        root@host [/var/log/tomcat6]# service tomcat6 start
        Starting tomcat6:                                          [  OK  ]

    Although it says OK, I can't access http://mydomain.com:8080. catalina.out shows:

        Exception in thread "main" java.lang.NullPointerException
           at java.lang.VMClassLoader.defineClass(libgcj.so.10)
           at java.lang.ClassLoader.defineClass(libgcj.so.10)
           at java.security.SecureClassLoader.defineClass(libgcj.so.10)
           at java.net.URLClassLoader.findClass(libgcj.so.10)
           at gnu.gcj.runtime.SystemClassLoader.findClass(libgcj.so.10)
           at java.lang.ClassLoader.loadClass(libgcj.so.10)
           at java.lang.ClassLoader.loadClass(libgcj.so.10)
           at gnu.java.lang.MainThread.run(libgcj.so.10)

    Tomcat6 was installed using yum:

        yum -y install java tomcat6 tomcat6-webapps tomcat6-admin-webapps

    When I tried to find the version with tomcat6 version:

        Exception in thread "main" java.lang.NoClassDefFoundError: org.apache.catalina.util.ServerInfo
           at gnu.java.lang.MainThread.run(libgcj.so.10)
        Caused by: java.lang.ClassNotFoundException: org.apache.catalina.util.ServerInfo not found in gnu.gcj.runtime.SystemClassLoader{urls=[], parent=gnu.gcj.runtime.ExtensionClassLoader{urls=[], parent=null}}
           at java.net.URLClassLoader.findClass(libgcj.so.10)
           at gnu.gcj.runtime.SystemClassLoader.findClass(libgcj.so.10)
           at java.lang.ClassLoader.loadClass(libgcj.so.10)
           at java.lang.ClassLoader.loadClass(libgcj.so.10)
           at gnu.java.lang.MainThread.run(libgcj.so.10)

    Any idea what I should do? Thanks!
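    The libgcj.so.10 frames in both traces suggest the JVM actually being used is GCJ rather than a full JDK; a sketch of switching to OpenJDK, on that assumption:

        # assumes the plain "java" package pulled in GCJ; installing OpenJDK and selecting it may be all that's needed
        yum -y install java-1.6.0-openjdk
        alternatives --config java        # pick the OpenJDK entry
        service tomcat6 restart
        tail -f /var/log/tomcat6/catalina.out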

    Read the article

  • How do you monitor SSD wear in Windows when the drives are presented as 'generic' devices?

    - by MikeyB
    Under Linux, we can monitor SSD wear fairly easily with smartmontools, whether the drive is presented as a normal block device or a generic device (which happens when the drive has been hardware-RAIDed by certain controllers, such as the one on the IBM HS22). How can we do the equivalent under Windows? Does anyone actually use smartmontools? Or are there other packages out there? The problem is that SCSI Generic devices just don't show up in Windows. If the drives aren't RAIDed we can see them fine.

    How I'd do it in Linux:

        sles11-live:~ # lsscsi -g
        [1:0:0:0]  disk  SMART     USB-IBM           8989  /dev/sda  /dev/sg0
        [2:0:0:0]  disk  ATA       MTFDDAK256MAR-1K  MA44  -         /dev/sg1
        [2:0:1:0]  disk  ATA       MTFDDAK256MAR-1K  MA44  -         /dev/sg2
        [2:1:8:0]  disk  LSILOGIC  Logical Volume    3000  /dev/sdb  /dev/sg3

        sles11-live:~ # smartctl -l ssd /dev/sg1
        smartctl 5.42 2011-10-20 r3458 [x86_64-linux-2.6.32.49-0.3-default] (local build)
        Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

        Device Statistics (GP Log 0x04)
        Page Offset Size        Value  Description
          7  =====  =           =  ==  Solid State Device Statistics (rev 1) ==
          7  0x008  1             26~  Percentage Used Endurance Indicator
                                   |_ ~ normalized value

        sles11-live:~ # smartctl -l ssd /dev/sg2
        smartctl 5.42 2011-10-20 r3458 [x86_64-linux-2.6.32.49-0.3-default] (local build)
        Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

        Device Statistics (GP Log 0x04)
        Page Offset Size        Value  Description
          7  =====  =           =  ==  Solid State Device Statistics (rev 1) ==
          7  0x008  1              3~  Percentage Used Endurance Indicator
                                   |_ ~ normalized value
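    For what it's worth, the Windows build of smartmontools addresses physical drives as pd0, pd1, and so on; whether drives hidden behind the HS22's controller answer at all depends on that controller exposing a SMART pass-through that smartctl understands, so this is only a sketch of what to try:

        rem a sketch with smartmontools for Windows; the drive numbering is an assumption, adjust to the scan output
        smartctl --scan
        smartctl -i /dev/pd0
        smartctl -l ssd /dev/pd1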

    Read the article

  • CopSSH SFTP -- limit users access to their home directory only

    - by bradvido
    Let me preface this by saying I've read and followed the instructions in the FAQ many times: http://www.itefix.no/i2/node/37. It does not do what the title claims... It allows every user access to every other user's home directory, as well as access to all subfolders below the CopSSH installation path. I'm only using this for SFTP access and I need my users to be sandboxed into only their home directory. If you know a fool-proof way to lock users down so they can see only their home directory and its subfolders, stop reading now and reply with the solution.

    The details: here is exactly what I tried as I followed the FAQ. My CopSSH installation directory is C:\Program Files\CopSSH.

        net localgroup sftp_users /ADD
            (create a user group to hold all my SFTP users)
        cacls c:\ /c /e /t /d sftp_users
            (for that group, deny access at the top level and all levels below)
        cacls "C:\Program Files\CopSSH" /c /e /t /r sftp_users
            (allow my user group access to the CopSSH installation directory and its subdirectories)

    For each SFTP user, I create a new Windows user account, then I:

        net localgroup sftp_users sftp_user_1 /add
            (add my user to the group I've created)

    Then I open the Activate User wizard for CopSSH, choose the user and "/bin/sftponly", and leave these options checked: "Remove copssh home directory if it exists", "Create keys for public key authentication", and "Create link to user's real home directory".

    This works; however, every user has access to every other user's home directory as well as the CopSSH root directory. So I tried denying access for all users to the user home directory:

        cacls "C:\Program Files\CopSSH\home" /c /e /t /d sftp_users
            (deny access for users to the user home directory)

    Then I tried adding permissions on a user-by-user basis for each user's home\username folder. However, these permissions were not allowed by Windows, because the deny rule I created above at the home directory was being inherited and overriding my allow rule. The next step for me would be to remove the deny rule at the home directory and, for each user folder, add a deny rule for every user it doesn't belong to plus an allow rule for the one user it does belong to. However, as my user list gets long, this will become very cumbersome. Thanks for the help!
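    An alternative sketch that sidesteps NTFS ACLs entirely, assuming the OpenSSH bundled with this CopSSH build is 4.8 or newer (not stated in the question): let sshd chroot the SFTP users itself via sshd_config.

        # a sketch for sshd_config, assuming ChrootDirectory/internal-sftp are available in CopSSH's OpenSSH
        Subsystem sftp internal-sftp

        Match Group sftp_users
            ChrootDirectory /home/%u
            ForceCommand internal-sftp
            AllowTcpForwarding no

    Note that sshd insists the chroot path and all of its parents be owned by root/Administrators and not writable by the group or others, so the per-user directories may need their ownership adjusted for this to take effect.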

    Read the article

  • Sign multiple domains with single Domain Key (dk-filter)

    - by Lashae
    Motivation: the private shopping website GILT sends periodic update emails from giltgroupe.bounce.ed10.net, yet all of the mails are signed with the domain keys of giltgroupe.com:

        mailed-by    giltgroupe.bounce.ed10.net
        signed-by    giltgroupe.com

    My story: I couldn't manage to sign x.com with y.com's domain key using dk-filter under Debian Lenny with Postfix. If I start the dk-filter service with the following arguments:

        DAEMON_OPTS="$DAEMON_OPTS -d x.com,y.com -c nofws -k -i /var/dk-filter/internal_hosts -s /etc/dk-keys.conf"

    then dk-filter signs with domain x.com (d=x.com). If I change the daemon arguments to:

        DAEMON_OPTS="$DAEMON_OPTS -d x.com -c nofws -k -i /var/dk-filter/internal_hosts -s /etc/dk-keys.conf"

    then emails sent from y.com are not signed at all. The dk-keys.conf file is as follows:

        *:/var/dk-filter/y.com/mail

    I managed to do the same thing with DKIM, and it works perfectly. However, DK doesn't seem to work. I don't have any problem signing y.com's emails with y.com's key and x.com's emails with x.com's key, which indicates there is no configuration problem. Do you have any experience/advice on how to sign emails from multiple domains with a single chosen domain key?

    Read the article

  • Windows 2008 Standard upgrade to Windows 2008 Enterprise failure

    - by Archit Baweja
    Sidestory: I was in the process of setting up a second Exchange 2010 server for DAG support when I realized that my box needed Windows 2008 Enterprise edition. The box currently has: Windows 2008 Standard, Windows updates including SP2, Exchange 2010 with the CAS, HT and Mailbox roles, the Domain Services role, and the File Services role.

    When I tried to upgrade to Windows 2008 Enterprise, I initially got a "your current version of Windows is more recent than the installation media" error, or something to that effect. My first guess was that it might be SP2-related, so I uninstalled SP2, restarted and tried again. This time it gave me an error to the effect of "Windows could not configure one or more Windows components. Please restart and try the update again." This was at the last stage of the Windows 2008 Enterprise install, when it says "Completing installation". So I removed the Domain Services role (including demoting it as a DC). However, I get the same error again. Has anyone seen something like this before, and do you have any suggestions? Also, is there a log file the Windows upgrade program writes that I can consult to see which component exactly is interfering?

    Update 1: Based on some googling I finally found the setup log file, and it seems that Windows setup had an issue determining whether the .NET 3.0 "feature" was being installed or uninstalled. So, based on a Win7/Vista TechNet article, I'm going to retry the upgrade after removing the .NET 3.0 feature.

    Read the article

  • How to edit files directly on WebDAV in Windows

    - by phazei
    I have a WebDAV setup with the cPanel web disk. I can connect to it through NetHood and I can drag and drop files to/from there. What I can't do is simply edit any of the files directly: I need to copy a file somewhere else, edit it, then copy it back. That's essentially what is needed with FTP too, though smart clients will monitor the file, making it easier than WebDAV in the current state I'm using it in. I was under the impression that WebDAV was supposed to let me work on the files as if they were on a local drive, but nothing can actually open the files. How can I go about bringing more functionality to it? Or is this as good as it gets?

    I have tried:

        net use q:\ https://myserver.com:2078
        net use q:\ \\myserver.com@SSL:2078\

    but neither works; both only throw: "System error 67 has occurred. The network name cannot be found." I ultimately want to use TortoiseSVN with the WebDAV share so I can have my working copy running on the server.
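    Error 67 usually points at either the WebClient service not running or the UNC syntax; a sketch of the form the Windows WebDAV redirector expects (the port placement and the root path are assumptions about what cPanel's web disk serves):

        rem the drive letter takes no trailing backslash, and the SSL port goes after a second @
        net start webclient
        net use Q: \\myserver.com@SSL@2078\ /persistent:yes

    If the mapping succeeds this way, applications should be able to open files in place on Q:, subject to how well they handle WebDAV locking.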

    Read the article

  • Allow access from outside network with dmz and iptables

    - by Ivan
    I'm having a problem with my home network. My setup is like this: on my router (running Ubuntu Desktop 11.04) I installed Squid as a transparent proxy. I would like to use DynDNS for my home network so I can access my server from the internet, and I have also installed a CCTV camera that I would like to be able to watch from the internet. The problem is that I cannot access either of them from outside the network. I have already set the DMZ in my modem to my router's IP. My first guess is that it's because I'm using iptables to redirect the whole inside network through Squid, and not allowing traffic from outside into my inside network. Here is my iptables script:

        #!/bin/sh
        # squid server IP
        SQUID_SERVER="192.168.5.1"
        # Interface connected to Internet
        INTERNET="eth0"
        # Interface connected to LAN
        LAN_IN="eth1"
        # Squid port
        SQUID_PORT="3128"
        # Clean old firewall
        iptables -F
        iptables -X
        iptables -t nat -F
        iptables -t nat -X
        iptables -t mangle -F
        iptables -t mangle -X
        # Load IPTABLES modules for NAT and IP conntrack support
        modprobe ip_conntrack
        modprobe ip_conntrack_ftp
        # For win xp ftp client
        #modprobe ip_nat_ftp
        echo 1 > /proc/sys/net/ipv4/ip_forward
        # Setting default filter policy
        iptables -P INPUT DROP
        iptables -P OUTPUT ACCEPT
        # Unlimited access to loop back
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A OUTPUT -o lo -j ACCEPT
        # Allow UDP, DNS and Passive FTP
        iptables -A INPUT -i $INTERNET -m state --state ESTABLISHED,RELATED -j ACCEPT
        # set this system as a router for Rest of LAN
        iptables --table nat --append POSTROUTING --out-interface $INTERNET -j MASQUERADE
        iptables --append FORWARD --in-interface $LAN_IN -j ACCEPT
        # unlimited access to LAN
        iptables -A INPUT -i $LAN_IN -j ACCEPT
        iptables -A OUTPUT -o $LAN_IN -j ACCEPT
        # DNAT port 80 request comming from LAN systems to squid 3128 ($SQUID_PORT) aka transparent proxy
        iptables -t nat -A PREROUTING -i $LAN_IN -p tcp --dport 80 -j DNAT --to $SQUID_SERVER:$SQUID_PORT
        # if it is same system
        iptables -t nat -A PREROUTING -i $INTERNET -p tcp --dport 80 -j REDIRECT --to-port $SQUID_PORT
        # DROP everything and Log it
        iptables -A INPUT -j LOG
        iptables -A INPUT -j DROP

    If you know where I went wrong, please advise me. Thanks for all your help; I really appreciate it.
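    What the script above never creates is a forward path from the internet interface into the LAN plus a DNAT rule for each published service; a sketch of that missing piece, using a hypothetical camera address and port (adjust both to the real values):

        # hypothetical internal address/port for the CCTV web interface
        CCTV_IP="192.168.5.10"
        CCTV_PORT="8080"
        # let reply traffic and the published service cross the FORWARD chain
        iptables -A FORWARD -i $INTERNET -o $LAN_IN -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -i $INTERNET -o $LAN_IN -p tcp -d $CCTV_IP --dport $CCTV_PORT -j ACCEPT
        # send connections arriving on the router's outside interface to the camera
        iptables -t nat -A PREROUTING -i $INTERNET -p tcp --dport $CCTV_PORT -j DNAT --to-destination $CCTV_IP:$CCTV_PORT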

    Read the article

  • Please advise on VPS config choice (with SQL Server Express)

    - by tjeuten
    Hi all, I might be interested in getting a VPS hosting plan for some small personal sites and .NET projects. I was thinking of the Softsys Bronze plan, as my current shared hosting plan is with them too. The stuff I want to host has grown beyond the capabilities of a shared hosting plan, and I also want more control over the IIS/ASP.NET configuration; that's why I'm considering a VPS. The main configuration details would be: Hyper-V, 30 GB of disk space, 1 GB of RAM. More info here: http://www.softsyshosting.com/Windows-VPS-HyperV.aspx

    Does anyone have experience with this plan (or something similar from another host), and could you maybe answer these questions:

    1. Bronze has a total disk space of 30 GB. Is the OS part of this quota or not? If so, how much disk space does a base configuration with Windows 2008 take up? Would you advise Windows 2008 R2 or the original release? Or would you advise using Windows 2003 with this configuration?
    2. I'm planning on running a SQL Server Express install too. Would 1 GB of RAM be enough for both Windows 2008 (R2) and SQL Express (see the sketch below)? The database load will not be very high (a couple of thousand records returned each day). The database will most likely stay far below the 4 GB limit, which is why I'd go for SQL Express instead of paying extra licensing costs for a SQL Web install. But I'm more concerned about performance.
    3. Would you recommend Softsys as a VPS host? I've been with them for one year on my shared hosting plan and have no complaints so far.
    4. As I have no VPS experience, what are the pitfalls I need to be aware of, mainly in terms of performance, but maybe in other areas too?

    Mathieu
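    On the SQL Express question (point 2 above), one knob that helps on a 1 GB box is capping the engine's memory so IIS and the OS keep the rest; a sketch, where the instance name and the 384 MB figure are assumptions to adjust:

        rem ".\SQLEXPRESS" and the 384 MB cap are placeholders; tune to the real instance and workload
        sqlcmd -S .\SQLEXPRESS -E -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'max server memory (MB)', 384; RECONFIGURE;"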

    Read the article

  • How do I determine whether this email bounce is my fault?

    - by David Zaslavsky
    I use Google Apps to handle email for my personal website, so I have an email address username@ellipsix.net through that, and I also have a Gmail account [email protected]. Now, I've been trying to send emails to a particular recipient who shall be known as [email protected]. When I send the email from my Gmail account with the @gmail.com address, it works fine. However, when I send it from my Google Apps account with the @ellipsix.net address, I get a bounce message which includes the following text: Delivery to the following recipient failed permanently: [email protected] Technical details of permanent failure: Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 554 554 mail server permanently rejected message (#5.3.0) (state 17). The bounce message suggests that it is up to the mail administrator of the recipient domain example.com to fix the problem, whatever it is. But I would like to be as sure as possible that nothing needs to be fixed on my end. I already have DKIM signatures enabled for my domain, and I have published an SPF DNS record. Is there something else I should check or do, or can I be confident that it's up to the recipient to fix this issue? Does the "state 17" in the bounce message mean something relevant? I've included my domain name in the question so people who know more than me about this stuff can independently check the relevant DNS records or other information. This other question seems similar, but I've already investigated everything suggested in the answers there (except for contacting Google, which I don't want to do unless I suspect it's their issue to fix).
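    Before contacting the other provider, the sender-side DNS records can be sanity-checked from a shell; the selector name below is an assumption (Google Apps commonly uses "google" for DKIM):

        # the SPF record should include Google's outbound servers (include:_spf.google.com)
        dig +short TXT ellipsix.net
        # the DKIM public key should be published under the selector; "google" is assumed here
        dig +short TXT google._domainkey.ellipsix.net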

    Read the article

  • Configure Cisco Pix 515 with DMZ and no NAT

    - by Rickard
    I hope that someone can shed some light on my situation, as I am fairly new to PIX configurations. I will be getting a new network for my department, which I am going to configure. At hand I have a Cisco PIX 515 (not the E), a Cisco 2948 switch (and, if needed, I can bring up a 2621XM router, but that one is my private property, not my department's). The network I will be getting is 10.12.33.0/26. The link net between the ISP routers and my network will be 10.12.32.0/29, where the gateway is .1 and the HSRP routers are .2 and .3.

    The ISP has asked me not to NAT the addresses on my side, as they will set it up to give 10.12.33.2 a one-to-one NAT to a public IP. The rest of the IPs will get a many-to-one NAT to another public IP. 10.12.33.2 is supposed to be my server placed in the DMZ; the rest of the IPs will be used for my clients and the AD server (which is currently also acting as a DHCP server in the old network config with another ISP).

    Now, the question is, how would I best configure this? Am I thinking about this wrong? I am expected to put the PIX first, at the ISP outlet, and then the switch, which will connect my clients. But with the ISP routers being on a different network, how will the firewall forward the packets to the other network? It's a firewall, not a router. I have actually never configured a PIX before and, fortunately, this is more of a lab network than a production network, so if something goes wrong it's not the end of the world, though it would be annoying. I am not asking for a full configuration from anyone, just some directions, or possibly some links which will give me some hints. Thank you very much!

    Read the article

  • Mapped network drive lost on logout

    - by Robuust
    I'm using a script to keep a mapped network connection alive, but of course the mapped connection is gone when I log out. The point is that I'm running this on Windows Server 2008 R2, where I use Remote Desktop to log in to the administrator account. However, it should remain logged in and not lose the mapped connection, as this script takes care of not being logged out of MS Office 365 SharePoint. Is there a way to keep the mapped network location (L:) available after logout, so the script can keep the connection alive?

        # Create an IE Object and navigate to my SharePoint Site
        $ie = New-Object -ComObject InternetExplorer.Application
        $ie.navigate('https://xxx.sharepoint.com/')

        # Don't need the object anymore, so let's close it to free up some memory
        $ie.Quit()

        # Just in case there was a problem with the web client service
        # I am going to stop and start it, you could potentially remove this
        # part if you want. I like it just because it takes out a step of
        # troubleshooting if I'm having problems.
        Stop-Service WebClient
        Start-Service WebClient

        # We are going to set the $Drive variable here, this is just
        # going to tell the command what drive letter to map you can
        # change this to whatever you want (if you change it to a
        # drive that is already mapped it will overwrite it, so be careful.
        $Drive = "L:"

        # You can change the drive destination to whatever you want,
        # it has to be a document library or folder of course.
        $DrvDest = "https://xxx.sharepoint.com/files/"

        # Here is where we create the object to map the network drive and
        # then map the network drive
        $net = New-Object -ComObject WScript.Network;
        $net.mapnetworkdrive($Drive,$DrvDest)

        # That is the end of the script, now schedule this with Task
        # Scheduler to run every so often and you should be set.

    Read the article

  • Single Sign On for intranet with Apache and Linux MIT Kerberos

    - by Beerdude26
    Greetings, I am looking for a way to do single sign-on to an intranet in the following manner:

    1. A Linux user logs on via a graphical frontend (for example, GNOME).
    2. He automatically requests a TGT for his username from the MIT Kerberos KDC.
    3. In some way or another, the Apache server (which we'll assume is on the same server as the KDC) is informed that this user has logged in.
    4. When the user accesses the intranet, he is automatically granted access to his web applications.

    I don't think I've seen this kind of functionality while searching the net. I know the following possibilities exist:

    - Using an authentication module such as mod_auth_kerb, a user is presented with a login prompt to enter his username and password, which are then authenticated against the MIT Kerberos server. (I would like this to be automatic.)
    - IIS supports integrated Windows logon via ASP.NET when the user is part of an Active Directory. (I'm looking for the Linux / Apache equivalent.)

    Any suggestions, criticism and ideas are highly appreciated. This is for a school project to show a proof-of-concept, so every handy piece of information is more than welcome. :)
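    For what it's worth, mod_auth_kerb can also do SPNEGO (KrbMethodNegotiate), in which case the browser presents a service ticket derived from the existing TGT instead of prompting for a password; a test sketch from a client shell, with hypothetical names, assuming the Apache side has an HTTP/hostname keytab configured:

        kinit alice                                   # normally done automatically at graphical login via pam_krb5
        klist                                         # confirm the TGT is present
        curl --negotiate -u : http://intranet.example.com/app/   # hypothetical URL; should succeed with no password prompt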

    Read the article

  • Optimum configuration of McAfee for Servers

    - by Wayne Arthurton
    Our corporate standard is McAfee Enterprise; unfortunately, this is non-negotiable. On two types of servers I'm responsible for, SQL and web, we have noticed major performance issues with the corporate standard setup:

    - Max scan time: 45 sec
    - One policy for all processes
    - Scan ALL files on write, read and open for backup
    - Heuristics: find unknown programs, trojans and macros
    - Detect unwanted programs
    - Exclude: EVT, LDF, LOG, MDF, VMD, Windows file protection

    This of course still causes major slowdowns. IIS .NET recompiles are slow, especially with SharePoint, as are SQL backups and restores, SQL Analysis Services, Integration Services, and the temp data they produce. I have looked from time to time for best practices on setting up McAfee for SQL, SQL Analysis Services, SQL Integration Services, Visual Studio, SharePoint, and .NET web servers in general. How do people set up McAfee Enterprise on their corporate servers, keeping security intact but affecting performance as little as possible? Has anyone run across white papers on these setups? Obviously some of it is case by case, but there must be some best practices out there somewhere.

    Read the article

  • how to properly set environment variables

    - by avorum
    I've recently started using Windows (having used Ubuntu up until now) and I find myself unable to properly set environment variables. Whenever I set them, they don't seem to work. I've been going to Start → Edit environment variables for your account and editing the PATH value in the upper half of the GUI. Here's what I've got so far:

        ;C:\Chocolatey\bin;C:\tools\mysql\current\bin;C:\Program Files (x86)\Git\bin;C:\Program Files\MySQL\MySQL Server 5.6\bin\;C:\Python33\Scripts;

    These are each the parent directories of the executables I'd like to be able to run by name from CMD, but mysql, git, and pip aren't being recognized. Am I doing something wrong syntactically, or at a general understanding level? I'd like to be able to run these commands without having to specify the full path to the executables every time.

    EDIT: the full PATH extracted from CMD:

        PATH=C:\Python33\;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\Program Files (x86)\AMD APP\bin\x86_64;C:\Program Files (x86)\AMD APP\bin\x86;C:\Program Files\Common Files\Microsoft Shared\Windows Live;C:\Program Files (x86)\Common Files\Microsoft Shared\Windows Live;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files (x86)\GTK2-Runtime\bin;C:\Program Files\WIDCOMM\Bluetooth Software\;C:\Program Files\WIDCOMM\Bluetooth Software\syswow64;C:\Program Files (x86)\ATI Technologies\ATI.ACE\Core-Static;C:\Program Files (x86)\Microsoft ASP.NET\ASP.NET Web Pages\v1.0\;C:\Program Files\Microsoft SQL Server\110\Tools\Binn\;C:\Program Files (x86)\QuickTime\QTSystem\;C:\Program Files (x86)\Common Files\Acronis\SnapAPI\;C:\Program Files (x86)\Java\jre7\bin;C:\Program Files (x86)\Windows Kits\8.1\Windows Performance Toolkit\;C:\Program Files (x86)\Microsoft SDKs\TypeScript\;C:\Program Files (x86)\MySQL\MySQL Utilities 1.3.4\; ;C:\Chocolatey\bin;C:\tools\mysql\current\bin

    I'm being forced to use Windows by my work environment; I don't enjoy the state of affairs.
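    Two quick checks worth making from a freshly opened console, since edits in that dialog never reach console windows that were already open when the change was made:

        rem open a NEW cmd window first, then confirm the entries arrived and the executables resolve
        echo %PATH%
        where git mysql pip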

    Read the article

  • Easy GUI way to auto scale EC2 and RDS: aws console, scalr, ylastic...?

    - by Zillo
    I am managing all my instances with the AWS Management Console (the GUI web console), but now I want to use Auto Scaling, and it seems that this cannot be done from that console. Yes, there is CloudWatch, but I can only create alarms (e-mail notifications); it seems that CloudWatch needs you to add the auto scaling policy somewhere else (via the command line tools?). I would like to use some easy GUI interface. Ylastic and Scalr seem to be good options. Which one do you think is better? Regarding Scalr, is there any difference between the open source software Scalr and the service Scalr.net? I mean, is the GUI interface the same? I like the idea of Scalr because I do not need to give my Secret Access Key to a third party (as with Ylastic or Scalr.net). One question about the Scalr software: does it have to be installed on the instances, or must it be installed on another machine? And do I need to set up all my security permissions, AMIs, snapshots, etc. again, or can I use the AWS Management Console for everything and Scalr just for auto scaling?
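    For reference, the non-GUI path is only a handful of commands; a sketch with the AWS CLI (a newer tool than the ones the question refers to, and every name below is a placeholder), assuming a launch configuration called "web-lc" already exists:

        aws autoscaling create-auto-scaling-group --auto-scaling-group-name web-asg \
            --launch-configuration-name web-lc --min-size 1 --max-size 4 \
            --availability-zones us-east-1a
        aws autoscaling put-scaling-policy --auto-scaling-group-name web-asg \
            --policy-name scale-up --adjustment-type ChangeInCapacity --scaling-adjustment 1
        aws cloudwatch put-metric-alarm --alarm-name web-high-cpu --namespace AWS/EC2 \
            --metric-name CPUUtilization --statistic Average --period 300 \
            --evaluation-periods 2 --threshold 70 --comparison-operator GreaterThanThreshold \
            --dimensions Name=AutoScalingGroupName,Value=web-asg \
            --alarm-actions <policy ARN returned by put-scaling-policy>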

    Read the article

  • Apache Tomcat Server Error

    - by Sam....
    I'm trying to install Tomcat but I get this error every time, whether I use the binary or the .exe installer:

        SEVERE: Begin event threw exception
        java.lang.ClassNotFoundException: org.apache.catalina.core.AprLifecycleListener
           at java.net.URLClassLoader$1.run(Unknown Source)
           at java.security.AccessController.doPrivileged(Native Method)
           at java.net.URLClassLoader.findClass(Unknown Source)
           at java.lang.ClassLoader.loadClass(Unknown Source)
           at java.lang.ClassLoader.loadClass(Unknown Source)
           at org.apache.commons.digester.ObjectCreateRule.begin(ObjectCreateRule.java:204)
           at org.apache.commons.digester.Rule.begin(Rule.java:152)
           at org.apache.commons.digester.Digester.startElement(Digester.java:1286)
           at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.startElement(Unknown Source)
           at com.sun.org.apache.xerces.internal.parsers.AbstractXMLDocumentParser.emptyElement(Unknown Source)
           at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanStartElement(Unknown Source)
           at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(Unknown Source)
           at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(Unknown Source)
           at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source)
           at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source)
           at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source)
           at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(Unknown Source)
           at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(Unknown Source)
           at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(Unknown Source)
           at org.apache.commons.digester.Digester.parse(Digester.java:1572)
           at org.apache.catalina.startup.Catalina.start(Catalina.java:451)
           at org.apache.catalina.startup.Catalina.execute(Catalina.java:402)
           at org.apache.catalina.startup.Catalina.process(Catalina.java:180)
           at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
           at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
           at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
           at java.lang.reflect.Method.invoke(Unknown Source)
           at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:202)

    Can anyone please solve this? I need help urgently.

    Read the article

  • IIS 7.0 404 Custom Error Page and web.config

    - by Colin
    I am having trouble with a custom 404 error page. I have a domain running a .NET project with its own error handling. I have a web.config for the domain which contains:

        <customErrors mode="RemoteOnly">
          <error statusCode="500" redirect="/Error"/>
          <error statusCode="404" redirect="/404"/>
        </customErrors>

    On a subdirectory of that domain I am ignoring all routes in the .NET project (routes.IgnoreRoute("Assets/{*pathInfo}");) and I want to put a custom 404 error page on that directory and any of its subdirectories. The subdirectory contains static content like images, CSS, JS, etc. So in the Error Pages section of IIS I put a redirect to an absolute URL. The web.config for that directory looks like the following:

        <system.webServer>
          <httpErrors>
            <remove statusCode="404" subStatusCode="-1" />
            <error statusCode="404" prefixLanguageFilePath="" path="http://mydomain.com/404" responseMode="Redirect" />
          </httpErrors>
        </system.webServer>

    But when I navigate to an unknown URL under that directory I still see the default IIS 404 page. I am also seeing an alert in IIS that reads: "You have configured detailed error messages to be returned for both local and remote requests. When this option is selected, custom error configuration is not used." Does this have anything to do with the customErrors mode="RemoteOnly" in the site web.config? I have tried to override the customErrors in the subdirectory web.config but nothing changes. Any help would be appreciated. Thanks.
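    That IIS alert normally means the httpErrors errorMode is set to Detailed at some level, and in that mode the custom error configuration is skipped entirely; a sketch of checking and switching it with appcmd (the site and application path are assumptions, use the real ones):

        rem "Default Web Site/Assets" is a placeholder for the actual site/application path
        %windir%\system32\inetsrv\appcmd list config "Default Web Site/Assets" /section:httpErrors
        %windir%\system32\inetsrv\appcmd set config "Default Web Site/Assets" /section:httpErrors /errorMode:Custom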

    Read the article

  • [SOLVED] Single Sign On for intranet with Apache and Linux MIT Kerberos

    - by Beerdude26
    EDIT: SOLVED! See my answer below.

    Greetings, I am looking for a way to do single sign-on to an intranet in the following manner:

    1. A Linux user logs on via a graphical frontend (for example, GNOME).
    2. He automatically requests a TGT for his username from the MIT Kerberos KDC.
    3. In some way or another, the Apache server (which we'll assume is on the same server as the KDC) is informed that this user has logged in.
    4. When the user accesses the intranet, he is automatically granted access to his web applications.

    I don't think I've seen this kind of functionality while searching the net. I know the following possibilities exist:

    - Using an authentication module such as mod_auth_kerb, a user is presented with a login prompt to enter his username and password, which are then authenticated against the MIT Kerberos server. (I would like this to be automatic.)
    - IIS supports integrated Windows logon via ASP.NET when the user is part of an Active Directory. (I'm looking for the Linux / Apache equivalent.)

    Any suggestions, criticism and ideas are highly appreciated. This is for a school project to show a proof-of-concept, so every handy piece of information is more than welcome. :)

    Read the article

  • Slow network file transfer (under 20KB/s) on newly built x64 Win7

    - by Mangoshake
    I am getting <20 KB/s for local network file transfers. If I transfer a very small file (less than 100 KB) it starts quickly and then slows down to <20 KB/s; all subsequent network file transfers are then slow, and a reboot is needed to reset this. If I transfer a large file, it is stuck on "calculating" for a long time and then begins at <20 KB/s immediately. This is a newly built desktop running Windows 7 x64 SP1 with Realtek gigabit LAN on the motherboard (ASRock Extreme3 Gen3). The problematic speed is observed on the private LAN, both over Ethernet and WiFi. The router is a D-Link DIR-655. Remote Differential Compression is off. Drivers are up to date from ASRock's website. I have tested network file transfers to and from another Windows 7 laptop and a MacBook Pro, so I am fairly certain it is the desktop's problem. The slow speed also happens in only one direction, outbound from the desktop, regardless of whether I initiate the file transfer from the origin or the destination. Inbound network file transfers and internet speeds are fine, so I don't think this is a hardware issue. I am getting 74.8 MB/s internet upload speed from speedtest.net (http://www.speedtest.net/result/1852752479.png), and inbound network file transfers run at around 10-15 MB/s. I am hoping this community has some insight to help me troubleshoot this. I don't see anything obviously related in Event Viewer, and beyond that I just don't know where else to look. Any suggestions are greatly appreciated; thank you in advance.
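    A sketch of the usual first suspects for one-direction slowness: global TCP settings and NIC offloads, all of which are easy to put back if they make no difference.

        rem run from an elevated prompt; revert with autotuninglevel=normal and rss=enabled if nothing changes
        netsh interface tcp set global autotuninglevel=disabled
        netsh interface tcp set global rss=disabled
        netsh interface tcp show global

    It is also often reported that toggling "Large Send Offload (IPv4/IPv6)" off in the Realtek adapter's driver properties clears up exactly this kind of slow outbound behaviour, so that is worth trying alongside the commands above.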

    Read the article

  • IIS Windows Authentication not working in Internet Explorer via host name; works via IP

    - by jkohlhepp
    I'm trying to get a new Windows Server 2003 box working to host an ASP.NET application that uses Windows Authentication. Here's some info:

    - IIS Anonymous Access is disabled
    - IIS Integrated Windows Authentication is enabled
    - I've tried it with and without Digest Authentication and the result is the same
    - Both my machine and the server are in the same Active Directory domain on the same intranet
    - I'm using IE 6

    My symptoms:

    - In Firefox, via either IP or host name, a login box pops up, and if I enter my NT credentials, it works.
    - In IE, via the server IP address, it works perfectly with no login box.
    - In IE, via the server host name, it pops up a login box, but even if I put in the correct credentials, it just pops up the box again. This is the problem.

    Why won't Windows auth work in IE via the host name when it will via the IP address?

    Edit: Here's something else interesting. If I go into my Internet Explorer advanced settings and disable Windows Authentication, it seems to work just fine. And by "work" I mean that my test .NET app sees my NT ID as the current user.
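    One explanation that fits these symptoms (an assumption, since the app pool identity isn't given): for a host name IE attempts Kerberos, which fails when no HTTP SPN is registered for the account running the application pool, while an IP address makes IE fall back to NTLM, which succeeds. It would also explain the Edit above, since unticking Integrated Windows Authentication in IE disables the Kerberos/Negotiate path. A sketch, with the account name as a placeholder:

        rem run with the Windows Support Tools or on a DC; DOMAIN\svcAppPool is a placeholder account
        setspn -L DOMAIN\svcAppPool
        setspn -A HTTP/servername DOMAIN\svcAppPool
        setspn -A HTTP/servername.domain.local DOMAIN\svcAppPool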

    Read the article

  • Cannot 301 redirect with IIS URL Rewrite Module

    - by Justin
    I am trying to troubleshoot my issue with the URL Rewrite Module on IIS 7. I migrated a WordPress blog over to BlogEngine.NET. There were only about 5 entries that I wanted to 301-redirect to the new blog, so I wanted to simply create 5 exact-match redirect rules using the rewrite module. For some reason the exact-match rule never seems to take effect; I always get a 404 error when the original URL is navigated to. I verified that my exact-match pattern matches the existing backlinks, and it does. I then tried a simple test and got the same behavior, no redirection. I created a page, test.html, on my site, and then created a second page, test2.html. So my exact-match pattern is "http://www.mydomain.com/test.html", and the rule is supposed to do a 301 redirect to "http://www.mydomain.com/test2.html". The redirect never happens. I created the steps for the rule based on the instructions on this page: http://learn.iis.net/page.aspx/461/creating-rewrite-rules-for-the-url-rewrite-module/ I don't see that I left out a step. After I apply the rule I've even gone as far as doing an iisreset to make sure it would be in effect, but still no luck. Any thoughts on what I might have left out? (Note: my rewrite rules don't include the quotes around them, but I had to add those because Server Fault thinks I am trying to spam the system with multiple URLs.)
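    One thing worth checking (an assumption about the cause, since the rule itself isn't shown): URL Rewrite matches its pattern against the URL path relative to the site root, not the full absolute URL, so an exact-match pattern of "http://www.mydomain.com/test.html" will never match anything; the pattern should just be test.html. A sketch of what such a rule looks like in web.config:

        <rewrite>
          <rules>
            <rule name="Old blog post 1" stopProcessing="true">
              <match url="^test\.html$" />
              <action type="Redirect" url="/test2.html" redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>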

    Read the article

  • How do I run a stable Windows XP kvm guest on Ubuntu 10.04?

    - by Jean-Paul Calderone
    I have three Windows XP guests running on a recently upgraded 64-bit Ubuntu 10.04 system. Occasionally (on the order of once every few days), one of the guests will become non-responsive and the kvm process on the host which is running that guest will start consuming 100% CPU. It will continue to do so until it is killed. When restarted, it will be fine for a while, and then the issue repeats. The kvm command line used to run all three guests is this:

        /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 1024 -smp 1 -name bigdog21vmxp1 \
          -uuid ea47ff84-125b-16f7-9a4d-a6d0d8bab46a \
          -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/bigdog21vmxp1.monitor,server,nowait \
          -monitor chardev:monitor \
          -localtime \
          -boot c \
          -drive file=/var/lib/libvirt/images/windowsxp-1.qcow2,if=ide,index=0,boot=on,format=qcow2 \
          -net nic,macaddr=54:52:00:02:06:0e,vlan=0,name=nic.0 \
          -net tap,fd=58,vlan=0,name=tap.0 \
          -chardev pty,id=serial0 \
          -serial chardev:serial0 \
          -parallel none \
          -usb \
          -usbdevice tablet \
          -vnc 127.0.0.1:1 \
          -k en-us \
          -vga cirrus \
          -soundhw es1370

    Why do the systems misbehave this way sometimes? And what configuration can I change in order to fix it? Or, if the problem is due to a bug in kvm, what is the process for isolating a kvm failure so that the developers have a chance of fixing it?

    Read the article

  • Unable To Install SQL Server 2008 On Windows 7 & Windows XP

    - by mickburkejnr
    Hi guys, you are my final hope of salvation! For the last five hours I've been trying to install SQL Server 2008, first on Windows 7 and then on Windows XP. Both have had the same issue. Even before the installation starts, an error message appears saying: "Microsoft .NET Framework 3.5 SP1 installation has failed. SQL Server 2008 Setup requires .NET Framework 3.5 SP1 to be installed." I have installed everything in the hope of getting past this; the service pack it refers to has even been installed, and I can see it in the Control Panel! I have installed Framework 2.0, Framework 2.0 SP1, Framework 3.5, Framework 3.5 SP1, Windows Installer 3.5 and Windows Installer 4.0. I even installed the service pack for SQL Server 2008... but nothing works! Does anyone have an idea what I'm doing wrong or what could be wrong? I'm seriously close to the end of my tether; I've even sent Steve Ballmer an email with my frustration. I would seriously appreciate some help with this. Many thanks!
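    On the Windows 7 machine specifically, .NET 3.5 SP1 ships as an optional Windows feature rather than a standalone installer, and the SQL Server setup pre-check can fail when that feature is switched off; a sketch (the XP box would still need the standalone .NET 3.5 SP1 package):

        rem Windows 7 only: enable the built-in .NET 3.5 feature (includes 3.0 and 2.0), then re-run SQL Server setup
        dism /online /enable-feature /featurename:NetFx3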

    Read the article

  • Error in Apache: /var/run/apache2 not found

    - by Julen
    This is more of a self-answered question, but since it drove me crazy I would like to share it with the community, and maybe someone can tell me why it happened or what caused it. The thing is, I wanted to install on my Ubuntu 10.4 machine a CGI app, one of the samples that come with the gSOAP toolkit. My intention was to access it from an ASP.NET machine. Regular Ubuntu does not come with Apache, so I installed it from Synaptic. Pretty easy. I followed this guide: How to Install Apache2 webserver with PHP, CGI and Perl Support in Ubuntu Server. Instead of apache.conf I tweaked httpd.conf, since a colleague here used that file to get his Apache running. What's more, I was able to access his CGI from my ASP.NET machine, but mysteriously I could not access my own; I was always getting "The request failed with HTTP status 503: Service Temporarily Unavailable".

    Checking Apache's error.log I found these messages:

        No such file or directory: unable to connect to cgi daemon after multiple tries: /home/julen/htdocs/cgi-bin/calcserver

    And looking more carefully, whenever I restarted Apache I got this other message:

        No such file or directory: Couldn't bind unix domain socket /var/run/apache2/cgisock. cgid daemon failed to initialize

    I am pretty new to Ubuntu and I didn't think Apache or Synaptic could have made a mistake in the installation process of the server, but it is true that /var/run/apache2 was missing, whereas on my colleague's computer it was not. I tried to find an "elegant" solution, but I only found a post from 2006 with a slight reference to it. Finally I decided to create the folder myself (as root), and then everything worked fine. Hope this helps others if they encounter a similar problem. Still, I am left wondering why the folder was not created in the first place. Best, Julen.
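    A sketch of why the folder can vanish and how to make the manual fix stick, assuming /var/run is mounted as tmpfs on this installation (common on Ubuntu of that era), which would empty it on every boot:

        # check whether /var/run is a tmpfs that gets wiped at boot
        mount | grep /var/run
        # the manual fix
        mkdir -p /var/run/apache2
        # to survive reboots, recreate the directory from something that runs before Apache starts,
        # for example a line added near the top of /etc/rc.local (before its final "exit 0")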

    Read the article
