Search Results

Search found 10931 results on 438 pages for 'struts config'.

  • Configuring https access on HP A5120 Switch

    - by GerryEgan
    I am trying to configure HTTPS management on a HP a5120 switch running Version 5.20.99, Release 2215 and not having much luck. I have followed the manual by creating an SSL policy first and then enabling the HTTPS server with the SSL policy: ssl server-policy sslpol ip https ssl-server-policy sslpol ip https enable When I try and log onto the switch with Google Chrome I get the following error: Error 107 (net::ERR_SSL_PROTOCOL_ERROR): SSL protocol error. When I look this up I have found references to errors due to TLS being used in SSL. I can find no way to specify the SSL version in the server policy. The manual has a configuration example that uses MSCEP to retrieve a certificate but in Windows 2008 R2 that feature is only available in Enterprise and Datacentre editions which I don't have. I have SSH configured and it is using a locally generated certificate so I'm not sure if I can use that but I'd like to if possible. Has anybody been able to setup HTTPS management on HP A series switches without MSCEP? Any and all help appreciated! here is a copy of my config with the interfaces removed: version 5.20.99, Release 2215 # sysname MYSYSNAME # irf domain 10 irf mac-address persistent timer irf auto-update enable undo irf link-delay # domain default enable system # telnet server enable # vlan 1 # vlan 100 description Management # radius scheme system primary authentication 127.0.0.1 1645 primary accounting 127.0.0.1 1646 user-name-format without-domain # domain system access-limit disable state active idle-cut disable self-service-url disable # user-group system group-attribute allow-guest # local-user admin password cipher authorization-attribute level 3 service-type ssh telnet terminal service-type web # stp enable # ssl server-policy sslpol pki-domain MYDOMAIN # interface NULL0 # interface Vlan-interface199 ip address 192.168.199.140 255.255.255.0 # interface GigabitEthernet1/0/1 poe enable stp edged-port enable # interface Ten-GigabitEthernet2/1/2 # dhcp-snooping # ntp-service unicast-server 192.168.1.71 # ssh server enable # ip https ssl-server-policy sslpol ip https enable # load xml-configuration # user-interface aux 0 1 user-interface vty 0 15 authentication-mode scheme
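
     A hedged way to narrow this down is to check, from a client, which SSL/TLS versions the switch's HTTPS service will actually negotiate; Chrome's ERR_SSL_PROTOCOL_ERROR often just means the server only offers a version the browser refuses. This sketch assumes openssl is available on the client (and that your openssl build still accepts the -ssl3 flag) and uses the management address from the config above:
         openssl s_client -connect 192.168.199.140:443 -tls1
         openssl s_client -connect 192.168.199.140:443 -ssl3
         # If only the -ssl3 handshake completes, the firmware is limited to SSLv3 for HTTPS
         # management, and a firmware update (or an older browser) is the realistic workaround.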

  • .NET not processing an XML file in IIS

    - by Stuart McIntosh
    We have 2 servers, 1 already configured with .net which works fine and a new one which appears to be configured the same but when I open an xml page in Internet Explorer it complains about the <% tag. We have IIS on win srvr 2003 SP2. The website is configured with .NET 1.1.4322. In ISAPI extensions have set the .XML extension to use c:\windows\microsoft.net\framework\v1.1.4322\aspnet_isapi.dll But the page: <property name="documentmaxage" value="0"/> <property name="documentmaxstale" value="0"/> <var name="m_Prompt_Path" /> <form id="InitVoiceXmlDoc"> <block> <assign name="m_Prompt_Path" expr="&quot;<% Response.Write(Request.QueryString["m_Prompt_Path"]); %>&quot;"/> </block> </form> gives the error: The XML page cannot be displayed Cannot view XML input using XSL style sheet. Please correct the error and then click the Refresh button, or try again later. The character '<' cannot be used in an attribute value. Error processing resource 'http://localhost:11119/fails.xml'. Lin... &quo... We have the same config on another server which works fine. So are there other options apart from the ISAPI extensions that I need to look at. If I suffix the page .aspx, of course it works fine.

  • SSH freeze when UFW is enabled

    - by Cristian Vrabie
     I have a small Ubuntu 10.10 server and I recently noticed a weird behavior (not sure if it was happening before). If I have ufw enabled (with default deny all in, allow all out, allow all http, allow all on a random port I use for ssh), then when I perform some actions in an ssh session, the ssh console completely freezes. The server continues to work, and if I close the console I can start another ssh session. This happens no matter where I log in from (tried from another Ubuntu box and a Mac). The actions are fairly reproducible, for example vim-ing some config files (though vim-ing other files works), cat-ing some other file, etc. The freeze never happens if ufw is disabled. Any idea what's going on? Thanks! Cristian
     Addition: if you're wondering, yes, I have TcpKeepAlive set to yes, and I doubt it is related (it would happen with ufw disabled too).
     As requested, my ufw config is below. Also, I don't know if it has something to do with it, but the server has 2 IPs: the ssh domain is configured on one, and the other serves http (via apache2).
         Status: active
         Logging: on (low)
         Default: deny (incoming), allow (outgoing)
         New profiles: skip
         To            Action      From
         --            ------      ----
         19922/tcp     ALLOW IN    Anywhere
         9418/tcp      ALLOW IN    Anywhere
         80/tcp        ALLOW IN    Anywhere
         443/tcp       ALLOW IN    Anywhere
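
     A hedged way to see whether the firewall itself is eating the stalled packets is to watch the UFW log and the connection-tracking counters while reproducing the freeze; the log path and sysctl names below are the Ubuntu defaults and may differ on your install:
         sudo tail -f /var/log/ufw.log    # reproduce the freeze and watch for BLOCK lines on the ssh port
         sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max
         # BLOCK entries appearing mid-session mean established traffic is being dropped by a rule;
         # a conntrack count sitting at its max means the state table is overflowing and packets are silently lost.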

  • OpenSolaris livecd, NForce NIC driver, and NTFS USB mounting. Oh My!

    - by Jake Wharton
    I'm attempting to install OpenSolaris 2009.06 on my server. Before I do I would like to test that everything works and am running in to problems. It has an Abit AN-M2 motherboard with an NForce chipset. The driver config utility says that I need a third-party driver and links me to http://homepage2.nifty.com/mrym3/taiyodo/eng/. Scrolling to the bottom, I have downloaded both tgzs just in case. Now the fun part: The only way to get this on to the computer is via a USB drive since I can't access the network. Also, install CD in the drive otherwise I'd just burn them to DVD. Since my USB key is NTFS formatted I cannot mount it since the install CD seems to be lacking NTFS drivers which require more downloaded packages. What should I do? The server will simply be a dumb NAS and I know that there exists other OpenSolaris-based flavors such as Nexenta but from what I read the stock install is likely the best. If this is not the case and pursuing a different flavor is required or better I will also accept that as an answer (but please don't jump straight to it).

  • How to get automatic upgrades to work on Ubuntu Server?

    - by J. Pablo Fernández
    I followed the documentation for enabling automatic upgrades in Ubuntu servers, but it's not really updating anything at all. My /etc/apt/apt.conf.d/50unattended-upgrades looks almost like the default. // Automatically upgrade packages from these (origin, archive) pairs Unattended-Upgrade::Allowed-Origins { "Ubuntu karmic-security"; "Ubuntu karmic-updates"; }; // List of packages to not update Unattended-Upgrade::Package-Blacklist { // "vim"; // "libc6"; // "libc6-dev"; // "libc6-i686"; }; // Send email to this address for problems or packages upgrades // If empty or unset then no email is sent, make sure that you // have a working mail setup on your system. The package 'mailx' // must be installed or anything that provides /usr/bin/mail. Unattended-Upgrade::Mail "[email protected]"; // Automatically reboot *WITHOUT CONFIRMATION* if a // the file /var/run/reboot-required is found after the upgrade //Unattended-Upgrade::Automatic-Reboot "false"; The directory /var/log/unattended-upgrades/ is empty. Running /etc/init.d/unattended-upgrades start is not very nice: root@mozart:~# /etc/init.d/unattended-upgrades start Checking for running unattended-upgrades: root@mozart:~# Something seems to be broken, but I'm not sure why. I have pending updates and they are not being applied: root@mozart:~# aptitude safe-upgrade Reading package lists... Done Building dependency tree Reading state information... Done Reading extended state information Initializing package states... Done The following packages will be upgraded: linux-libc-dev 1 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded. Need to get 0B/743kB of archives. After unpacking 4096B will be used. Do you want to continue? [Y/n/?] In all the servers I have, unattended upgrades seems to have been disabled: root@mozart:~# apt-config shell UnattendedUpgradeInterval APT::Periodic::Unattended-Upgrade root@mozart:~# Any ideas what am I missing?
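
     The empty apt-config output at the end is the usual giveaway: the periodic-run switches that actually trigger unattended-upgrades are missing, independent of the 50unattended-upgrades file. A minimal sketch of /etc/apt/apt.conf.d/20auto-upgrades (the filename is the common convention; any file in apt.conf.d works):
         // /etc/apt/apt.conf.d/20auto-upgrades
         APT::Periodic::Update-Package-Lists "1";
         APT::Periodic::Unattended-Upgrade "1";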

  • Configured Samba to join our domain, but logon fails from Windows machine

    - by jasonh
    I've configured a Fedora 11 installation to join our domain. It seems to join successfully (though it reports a DNS update failure) but when I try to access \\fedoraserver.test.mycompany.com I'm prompted for a password. So I enter adminuser and the password and that fails, so I try test.mycompany.com\adminuser and that too fails. What am I missing? EDIT (Update 9/1/09): I can now connect to the machine and see the shares on it (see my response to djhowell's answer) but when I try to connect, I get an error saying The network path was not found. I checked the log entry on the Fedora computer for the computer I'm connecting from (/var/log/samba/log.ComputerX) and it reads: [2009/09/01 12:02:46, 1] libads/cldap.c:recv_cldap_netlogon(157) no reply received to cldap netlogon [2009/09/01 12:02:46, 1] libads/ldap.c:ads_find_dc(417) ads_find_dc: failed to find a valid DC on our site (Default-First-Site-Name), trying to find another DC Config files as of 9/1/09: smb.conf: [global] Workgroup = TEST realm = TEST.MYCOMPANY.COM password server = DC.TEST.MYCOMPANY.COM security = DOMAIN server string = Test Samba Server log file = /var/log/samba/log.%m max log size = 50 idmap uid = 15000-20000 idmap gid = 15000-20000 windbind use default domain = yes cups options = raw client use spnego = no server signing = auto client signing = auto [share] comment = Test Share path = /mnt/storage1 valid users = adminuser admin users = adminuser read list = adminuser write list = adminuser read only = No I also set the krb5.conf file to look like this: [logging] default = FILE:/var/log/krb5libs.log kdc = FILE:/var/log/krb5kdc.log admin_server = FILE:/var/log/kadmind.log [libdefaults] default_realm = test.mycompany.com dns_lookup_realm = false dns_lookup_kdc = false ticket_lifetime = 24h forwardable = yes [realms] TEST.MYCOMPANY.COM = { kdc = dc.test.mycompany.com admin_server = dc.test.mycompany.com default_domain = test.mycompany.com } [domain_realm] dc.test.mycompany.com = test.mycompany.com .dc.test.mycompany.com = test.mycompany.com [appdefaults] pam = { debug = false ticket_lifetime = 36000 renew_lifetime = 36000 forwardable = true krb4_convert = false } I realize that there might be an issue with EXAMPLE.COM in there, however if I change it to TEST.MYCOMPANY.COM then it fails to join the domain with a preauthentication failure. As of 9/1/09, this is no longer the case.
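
     Before digging into the cldap/DC-lookup errors, it may be worth confirming that the join and winbind lookups are healthy from the Fedora box; these are standard Samba/winbind diagnostics, offered here only as a sketch:
         net rpc testjoin               # validates the machine account (NT4-style join, matching security = DOMAIN)
         wbinfo -t                      # checks the trust secret with the domain
         wbinfo -u | head               # lists domain users through winbind
         getent passwd TEST\\adminuser  # confirms NSS can resolve a domain account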

  • NoMachine NX window closes after establishing connection

    - by blackicecube
     I am trying to use the NoMachine NX server and client, but somehow it doesn't work. What happens is the following:
     Client starts up
     Client authenticates with Server
     The NoMachine window appears for 2-4 seconds
     The NoMachine window exits
     Somehow a "closeEvent" is sent. Here's what I see in the log file:
         [Thu Sep 24 11:20:37 2009]: Starting nxcomp with options: 'NX 299 Switch connection to: NX mode: unencrypted options: nx/nx,options=/home/foo/.nx/S-adnws029-1022-7EEF1367361DB2A7F4D9F76B06F4B434/options:1022'.
         [Thu Sep 24 11:20:38 2009]: NXFileMonitor::readData
         [Thu Sep 24 11:20:38 2009]: NXFileMonitor: opened file: [/home/foo/.nx/S-adnws029-1022-7EEF1367361DB2A7F4D9F76B06F4B434/session]
         [Thu Sep 24 11:20:38 2009]: LoginDialog::ShowConnectionStatus code=[246] str=[Initializing X protocol compression] error=[0]
         [Thu Sep 24 11:20:38 2009]: ProgressDialog::printNxStatus: [Initializing X protocol compression]
         [Thu Sep 24 11:20:38 2009]: LoginDialog::ShowConnectionStatus code=[247] str=[Established the display connection] error=[0]
         [Thu Sep 24 11:20:38 2009]: ProgressDialog::printNxStatus: [Established the display connection]
         [Thu Sep 24 11:20:38 2009]: NXFileMonitor::readData
         [Thu Sep 24 11:20:38 2009]: NXFileMonitor::readData
         [Thu Sep 24 11:20:38 2009]: NXFileMonitor::readData
         [Thu Sep 24 11:20:38 2009]: LoginDialog: slotAgentTimer
         [Thu Sep 24 11:20:38 2009]: NXFileMonitor::readData
         [Thu Sep 24 11:20:38 2009]: NXFileMonitor::readData
         [Thu Sep 24 11:20:38 2009]: NXFileMonitor::readData
         [Thu Sep 24 11:20:38 2009]: NXFileMonitor::readData
         [Thu Sep 24 11:20:38 2009]: QClipboard: Unknown SelectionClear event received.
         [Thu Sep 24 11:20:38 2009]: LoginDialog: slotAgentTimer
         [Thu Sep 24 11:20:38 2009]: LoginDialog: Agent found closing windows...
         [Thu Sep 24 11:20:38 2009]: LoginDialog: setting automatic reconnection to true.
         [Thu Sep 24 11:20:38 2009]: Settings::flush
         [Thu Sep 24 11:20:38 2009]: Settings::flush
         [Thu Sep 24 11:20:38 2009]: LoginDialog: closeEvent received!
         [Thu Sep 24 11:20:38 2009]: NXFileMonitor::readData
         [Thu Sep 24 11:20:38 2009]: NXFileMonitor::readData
         [Thu Sep 24 11:20:38 2009]: LoginDialog::destructor called begin
         [Thu Sep 24 11:20:38 2009]: LoginDialog: stopAllTimers
         [Thu Sep 24 11:20:38 2009]: LoginDialog: stopProgressTimer
         [Thu Sep 24 11:20:38 2009]: Utility::getPreferencesFile: 'nxclient' - '/home/foo/.nx/config/nxclient.cfg'
         [Thu Sep 24 11:20:38 2009]: Settings::flush
         [Thu Sep 24 11:20:38 2009]: Called destructor for protocol class
         [Thu Sep 24 11:20:38 2009]: LoginDialog::destructor called end
     Anyone with a helpful idea?

  • ZFS with L2ARC (SSD) slower for random seeks than without L2ARC

    - by Florian Kruse
    I am currently testing ZFS (Opensolaris 2009.06) in an older fileserver to evaluate its use for our needs. Our current setup is as follows: Dual core (2,4 GHz) with 4 GB RAM 3x SATA controller with 11 HDDs (250 GB) and one SSD (OCZ Vertex 2 100 GB) We want to evaluate the use of a L2ARC, so the current ZPOOL is: $ zpool status pool: tank state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM afstank ONLINE 0 0 0 raidz1 ONLINE 0 0 0 c11t0d0 ONLINE 0 0 0 c11t1d0 ONLINE 0 0 0 c11t2d0 ONLINE 0 0 0 c11t3d0 ONLINE 0 0 0 raidz1 ONLINE 0 0 0 c13t0d0 ONLINE 0 0 0 c13t1d0 ONLINE 0 0 0 c13t2d0 ONLINE 0 0 0 c13t3d0 ONLINE 0 0 0 cache c14t3d0 ONLINE 0 0 0 where c14t3d0 is the SSD (of course). We run IO tests with bonnie++ 1.03d, size is set to 200 GB (-s 200g) so that the test sample will never be completely in ARC/L2ARC. The results without SSD are (average values over several runs which show no differences) write_chr write_blk rewrite read_chr read_blk random seeks 101.998 kB/s 214.258 kB/s 96.673 kB/s 77.702 kB/s 254.695 kB/s 900 /s With SSD it becomes interesting. My assumption was that the results should be in worst case at least the same. While write/read/rewrite rates are not different, the random seek rate differs significantly between individual bonnie++ runs (between 188 /s and 1333 /s so far), average is 548 +- 200 /s, so below the value w/o SSD. So, my questions are mainly: Why do the random seek rates differ so much? If the seeks are really random, they should not differ much (my assumption). So, even if the SSD is impairing the performance it should be the same in each bonnie++ run. Why is the random seek performance worse in most of the bonnie++ runs? I would assume that some part of the bonnie++ data is in the L2ARC and random seeks on this data performs better while random seeks on other data just performs similarly like before.
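
     One factor worth checking between runs is whether the L2ARC has actually warmed up: with a 200 GB working set and a 100 GB cache device, the SSD may still be mostly cold or constantly churning, which would explain the large run-to-run variance. A sketch using stock OpenSolaris tools (pool name taken from the config above):
         zpool iostat -v afstank 5      # per-device read/write load, including the cache device
         kstat -n arcstats | grep l2    # l2_size, l2_hits, l2_misses counters over time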

  • ADFS Relying Party

    - by user49607
    I'm trying to set up an Active Directory Federation Service Relying Party and I get the following error. I've tried modifying the page to allow <pages validateRequest="false"> to web.config and it doesn't make a difference. Can someone help me out? Server Error in '/test' Application. A potentially dangerous Request.Form value was detected from the client (wresult="<t:RequestSecurityTo..."). Description: Request Validation has detected a potentially dangerous client input value, and processing of the request has been aborted. This value may indicate an attempt to compromise the security of your application, such as a cross-site scripting attack. To allow pages to override application request validation settings, set the requestValidationMode attribute in the httpRuntime configuration section to requestValidationMode="2.0". Example: <httpRuntime requestValidationMode="2.0" />. After setting this value, you can then disable request validation by setting validateRequest="false" in the Page directive or in the <pages> configuration section. However, it is strongly recommended that your application explicitly check all inputs in this case. For more information, see http://go.microsoft.com/fwlink/?LinkId=153133. Exception Details: System.Web.HttpRequestValidationException: A potentially dangerous Request.Form value was detected from the client (wresult="<t:RequestSecurityTo..."). Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [HttpRequestValidationException (0x80004005): A potentially dangerous Request.Form value was detected from the client (wresult="<t:RequestSecurityTo...").] System.Web.HttpRequest.ValidateString(String value, String collectionKey, RequestValidationSource requestCollection) +11309476 System.Web.HttpRequest.ValidateNameValueCollection(NameValueCollection nvc, RequestValidationSource requestCollection) +82 System.Web.HttpRequest.get_Form() +186 Microsoft.IdentityModel.Web.WSFederationAuthenticationModule.IsSignInResponse(HttpRequest request) +26 Microsoft.IdentityModel.Web.WSFederationAuthenticationModule.CanReadSignInResponse(HttpRequest request, Boolean onPage) +145 Microsoft.IdentityModel.Web.WSFederationAuthenticationModule.OnAuthenticateRequest(Object sender, EventArgs args) +108 System.Web.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +80 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +266 `
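
     The exception text itself describes the fix for applications running under .NET 4.0 request validation: validateRequest in the pages element is ignored unless requestValidationMode is dropped back to 2.0. A minimal web.config sketch for the relying-party application (scoping it to the sign-in page only would be safer than site-wide):
         <system.web>
           <httpRuntime requestValidationMode="2.0" />
           <pages validateRequest="false" />
         </system.web>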

  • Apache crashes a few seconds after starting

    - by Nacho
     Hi, I've got a problem with Apache. When I try to start it (/etc/init.d/apache2 start) it dies after a few seconds. It shows up in "ps aux" consuming a lot of memory and then dies. I don't know what could be causing Apache to consume this amount of memory:
         USER       PID %CPU %MEM    VSZ   RSS TTY   STAT START   TIME COMMAND
         root     13379  1.0  0.3  14376  3908 ?     Ss   22:31   0:00 /usr/sbin/apache2 -k start
         www-data 13383  0.0  0.4 197316  4196 ?     Sl   22:31   0:00 /usr/sbin/apache2 -k start
         www-data 13390  0.0  0.3 172728  4172 ?     Sl   22:31   0:00 /usr/sbin/apache2 -k start
         www-data 13396  0.0  0.3 156336  4160 ?     Sl   22:31   0:00 /usr/sbin/apache2 -k start
         www-data 13400  0.0  0.3 148140  4156 ?     Sl   22:31   0:00 /usr/sbin/apache2 -k start
         www-data 13403  0.0  0.3 131748  4148 ?     Sl   22:31   0:00 /usr/sbin/apache2 -k start
     Here is an htop screenshot: http://i.imgur.com/N4Chh.png
     It happened suddenly; no change had been made to the server config, so I don't know what's causing it. The error log of my virtual servers shows this:
         [Sun Jan 30 22:19:50 2011] [alert] (11)Resource temporarily unavailable: mod_wsgi (pid=9685): Couldn't create worker thread 11 in daemon process 'fb.ebookmetafinder.com'.
         [Sun Jan 30 22:19:55 2011] [alert] (11)Resource temporarily unavailable: mod_wsgi (pid=9685): Couldn't create worker thread 19 in daemon process 'fb.ebookmetafinder.com'.
         [Sun Jan 30 22:29:40 2011] [alert] (11)Resource temporarily unavailable: mod_wsgi (pid=12009): Couldn't create worker thread 18 in daemon process 'fb.ebookmetafinder.com'.
         [Sun Jan 30 22:31:06 2011] [alert] (11)Resource temporarily unavailable: mod_wsgi (pid=13396): Couldn't create worker thread 15 in daemon process 'fb.ebookmetafinder.com'.
         [Sun Jan 30 22:35:02 2011] [alert] (11)Resource temporarily unavailable: mod_wsgi (pid=14009): Couldn't create worker thread 16 in daemon process 'fb.ebookmetafinder.com'.
         [Sun Jan 30 22:35:07 2011] [alert] (11)Resource temporarily unavailable: mod_wsgi (pid=14009): Couldn't create worker thread 17 in daemon process 'fb.ebookmetafinder.com'.
     I'm on an Ubuntu server VPS and I use mod_wsgi with Django. Thanks.
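
     The "(11)Resource temporarily unavailable ... Couldn't create worker thread" alerts usually mean the VPS runs out of memory or thread slots when mod_wsgi spawns its daemon pool, which matches the high per-process VSZ above. As a hedged experiment, shrinking the daemon group (the directive and options are standard mod_wsgi; the group name is taken from the log) and checking the limits can confirm it:
         WSGIDaemonProcess fb.ebookmetafinder.com processes=1 threads=5
         # then, while Apache starts:
         ulimit -u    # max user processes/threads for the Apache user
         free -m      # available memory on the VPS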

  • How can I both pipe and display output in Windows' command line?

    - by Bob
     I have a process I need to run within a batch file. This process produces some output. I need to both display this output to the screen and send (pipe) it to another program. The bash method uses tee:
         echo 'ee' | tee /dev/tty | foo
     Is there an equivalent for Windows? I am happy to use PowerShell if necessary. There are tee ports for Windows, but there does not appear to be an equivalent for /dev/tty, which complicates matters. The specific use case here: I have a program (launch4j) that I need to run, displaying output to the user. At the same time, I need to be able to detect success or failure in the script. Unfortunately, this program does not set an exit code, and I cannot force it to do so. My current workaround involves piping to find, to search the output (launch4j config.xml | find "Successfully created") - however, that swallows the output I need to display. Therefore, I need some way to both display to the screen and send the output to a command - and this command should be able to set ERRORLEVEL (it cannot run asynchronously).
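
     If PowerShell is an option, Tee-Object can both display the output and keep a copy for the success check; this is only a sketch, assuming "Successfully created" from the question is the marker and that the script is invoked from the batch file with powershell -File so ERRORLEVEL picks up the exit code:
         launch4j config.xml | Tee-Object -Variable out
         if ($out -match 'Successfully created') { exit 0 } else { exit 1 }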

  • gitweb- fatal: not a git repository

    - by Robert Mason
    So I have set up a simple server running debian stable (squeeze), and have configured git. Using gitolite, I have all functionality (at least the basic clone/push/pull/commit) working. Installation of gitweb went without any issues. However, when I access gitweb, I get a gitweb screen without any repos listed. # tail -n 1 /var/log/apache2/error.log [DATE] [error] [client IP_ADDRESS] fatal: Not a git repository: '/var/lib/gitolite/repositories/testrepo.git' # cd /var/lib/gitolite/repositories/testrepo.git # ls branches config HEAD hooks info objects refs Here is what I see in /var/lib/gitolite/projects.list: testrepo.git And in /etc/gitweb.conf: # path to git projects (<project>.git) $projectroot = "/var/lib/gitolite/repositories"; # directory to use for temp files $git_temp = "/tmp"; # target of the home link on top of all pages #$home_link = $my_uri || "/"; # html text to include at home page $home_text = "indextext.html"; # file with project list; by default, simply scan the projectroot dir. $projects_list = "/var/lib/gitolite/projects.list"; # stylesheet to use $stylesheet = "gitweb.css"; # javascript code for gitweb $javascript = "gitweb.js"; # logo to use $logo = "git-logo.png"; # the 'favicon' $favicon = "git-favicon.png"; What is missing?
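
     gitweb runs as the web-server user, so a repo that is fine for gitolite but "Not a git repository" for Apache usually comes down to www-data not being able to descend into the gitolite directories. A hedged check and one common fix (the group name gitolite is an assumption; check the actual owner with ls -l):
         sudo -u www-data ls /var/lib/gitolite/repositories/testrepo.git   # reproduce the permission problem
         sudo adduser www-data gitolite                                    # give Apache group access
         sudo chmod -R g+rX /var/lib/gitolite/repositories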

  • Nginx + PHP-FPM Timeouts, almost zero load consumption?

    - by javipas
    I've got a server running on a Linode with Ubuntu 10.04 LTS, Nginx 0.7.65, MySQL 5.1.41 and PHP 5.3.2 with PHP-FPM. There is a WordPress blog on it, updated to WordPress 3.2.1 recently. I have made no changes to the server (except updating WordPress) and while it was running fine, a couple of days ago I started having downtimes. I tried to solve the problem, and checking the error_log I saw many timeouts and messages that seemed to be related to timeouts. The server is currently logging this kind of errors: 2011/07/14 10:37:35 [warn] 2539#0: *104 an upstream response is buffered to a temporary file /var/lib/nginx/fastcgi/2/00/0000000002 while reading upstream, client: 217.12.16.51, server: www.mydomain.com, request: "GET /page/2/ HTTP/1.0", upstream: "fastcgi://127.0.0.1:9000", host: "www.mydomain.com", referrer: "http://www.mydomain.com/" 2011/07/14 10:40:24 [error] 2539#0: *231 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 46.24.245.181, server: www.mydomain.com, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.mydomain.com", referrer: "http://www.google.es/search?sourceid=chrome&ie=UTF-8&q=mydomain" and even saw this previous serverfault discussion with a possible solution: to edit /etc/php/etc/php-fpm.conf and change request_terminate_timeout=30s instead of ;request_terminate_timeout= 0 The server worked for some hours, and then broke again. I edited the file again to leave it as it was, and restarted again php-fpm (service php-fpm restart) but no luck: the server worked for a few minutes and back to the problem over and over. The strange thing is, although the services are running, htop shows there is no CPU load (see image) and I really don't know how to solve the problem. The config files are on pastebin The php-fpm.conf file is here The /etc/nginx/nginx.conf is here The /etc/nginx/sites-available/www.mydomain.com is here Please help :(
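
     Alongside request_terminate_timeout on the PHP-FPM side, the nginx fastcgi timeout and buffers mentioned in the log can be raised in the location block that proxies to 127.0.0.1:9000; these are stock nginx directives, shown only as a sketch with illustrative values:
         location ~ \.php$ {
             fastcgi_pass         127.0.0.1:9000;
             fastcgi_read_timeout 120;
             fastcgi_buffers      16 16k;
             fastcgi_buffer_size  32k;
             include              fastcgi_params;
         }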

  • ospfd over an OpenVPN link - strange error in logs

    - by Alex
    I am trying to set up Quagga ospfd on two hosts connected by an OpenVPN link. These hosts have VPN IPs 10.31.0.1 and 10.31.0.13. ospfd config is pretty simple: hostname bizon password xxxxxxxxx enable password xxxxxxxxx ! log file /var/log/quagga/ospfd.log ! interface lo ! interface tun0 ip ospf network point-to-point ip ospf mtu-ignore ip ospf cost 10 interface tun1 ip ospf network point-to-point ip ospf mtu-ignore ip ospf cost 10 interface tun2 ip ospf network point-to-point ip ospf mtu-ignore ip ospf cost 10 ! router ospf ospf router-id 10.31.0.1 network 10.31.0.0/16 area 0.0.0.0 network 10.119.2.0/24 area 0.0.0.0 redistribute connected area 0.0.0.0 range 10.0.0.0/8 ! line vty ! debug ospf event debug ospf packet all I am getting the following error in the ospfd.log (the log is from 10.31.0.13): 2012/10/05 01:25:28 OSPF: ip_v 4 2012/10/05 01:25:28 OSPF: ip_hl 5 2012/10/05 01:25:28 OSPF: ip_tos 192 2012/10/05 01:25:28 OSPF: ip_len 64 2012/10/05 01:25:28 OSPF: ip_id 64666 2012/10/05 01:25:28 OSPF: ip_off 0 2012/10/05 01:25:28 OSPF: ip_ttl 1 2012/10/05 01:25:28 OSPF: ip_p 89 2012/10/05 01:25:28 OSPF: ip_sum 0xe5d1 2012/10/05 01:25:28 OSPF: ip_src 10.31.0.1 2012/10/05 01:25:28 OSPF: ip_dst 224.0.0.5 2012/10/05 01:25:28 OSPF: Packet from [10.31.0.1] received on link tun1 but no ospf_interface I'm not sure what to do next. I have set up ospfd over OpenVPN several times but I used Debian and I am on CentOS 6 now. Quagga version is 0.99.15. Should I try to get more recent version?
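
     The "received on link tun1 but no ospf_interface" message means the hello arrives on tun1 but ospfd has no OSPF interface bound to that tunnel's local address, which typically happens when the tun address falls outside every network statement (or OpenVPN assigns point-to-point /32 peer addresses). A hedged way to confirm from the box logging the error:
         vtysh -c 'show ip ospf interface tun1'   # should list an OSPF interface with an address and area
         vtysh -c 'show ip ospf neighbor'
         ip addr show tun1                        # check the local IP really sits inside 10.31.0.0/16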

  • Installing xampp on system that already have mysql

    - by Charith
    I'm rather new to PHP and xampp. I have a computer that has installed MySQL server and MySQL workbench as I was working with Java and NetBeans. Now I want to use my computer for developing PHP and other web stuff too. I installed xampp successfully. But when I'm trying to access phpMyAdmin, it gives me an error saying mysql server rejected its connection Actually I tried stopping my current MySQL service and installing it again. However xampp have a its own mysql server in its installation path too. I tried configuring config.inc.php to use my existing installation of MySQL which is on a separate path. But I failed. Can anyone please instruct me how to configure this xampp to use my existing MySQL server to do everything and ignore the installed one with itself? I don't want two MySQL services to run on my system and clash in future. I'll be glad if anyone can explain to me what is best to use when you're developing Java, PHP, C and all the stuff on the same machine. P.S.: I have been given a password for my existing MySQL sever (user = root) as we do it usually when installing MySQL alone.

  • Server unreachable without www

    - by deamon
     My server is unreachable without the "www." prefix, even when trying it with ping. The DNS entry looks like this:
         $TTL 86400
         @          IN SOA   ns1.first-ns.de. postmaster.robot.first-ns.de. (
                    2011010600   ; serial
                    14400        ; refresh
                    1800         ; retry
                    604800       ; expire
                    86400 )      ; minimum
         @          IN NS    robotns3.second-ns.com.
         @          IN NS    robotns2.second-ns.de.
         @          IN NS    ns1.first-ns.de.
         @          IN A     1.2.3.4
         localhost  IN A     127.0.0.1
         mail       IN A     1.2.3.4
         www        IN A     1.2.3.4
         ftp        IN CNAME www
         imap       IN CNAME www
         loopback   IN CNAME localhost
         pop        IN CNAME www
         relay      IN CNAME www
         smtp       IN CNAME www
         @
     A DNS record of the same type for another domain on the same server is working with and without "www". And the VirtualHost config looks like this:
         <VirtualHost *:80>
             ServerName somewhere.com
             ServerAlias www.somewhere.com
             ServerSignature Off
             ...
         </VirtualHost>
     Any idea what could be wrong?
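
     Since the zone above does carry an apex A record, a useful next step is to compare what the authoritative server answers with what your resolver returns; dig is assumed to be available, and somewhere.com stands in for the real domain:
         dig @ns1.first-ns.de somewhere.com A +short   # what the master actually serves
         dig somewhere.com A +short                    # what your resolver hands back
         # If the first returns 1.2.3.4 and the second does not, the serial was never bumped,
         # the zone was not reloaded, or a stale/negative answer is cached downstream.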

  • Why do lighttpd and fastcgi keep sending me the *.scgi file instead of the website content?

    - by e-satis
    I have the following config: server.modules = ( "mod_compress", "mod_access", "mod_alias", "mod_rewrite", "mod_redirect", "mod_secdownload", "mod_h264_streaming", "mod_flv_streaming", "mod_accesslog", "mod_auth", "mod_status", "mod_expire", "mod_fastcgi" ) [...] fastcgi.server = ( ".php" => (( "bin-path" => "/usr/bin/php-cgi", "socket" => "/var/tmp/lighttpd/php-fastcgi.socket" + var.PID, "max-procs" => 1, "kill-signal" => 9, "idle-timeout" => 10, "bin-environment" => ( "PHP_FCGI_CHILDREN" => "200", "PHP_FCGI_MAX_REQUESTS" => "1000" ), "/pyapps/essai/blondes.fcgi" => ( "main" => ( "socket" => "/var/tmp/lighttpd/django-fastcgi.socket", ), ), "bin-copy-environment" => ( "PATH", "SHELL", "USER" ), "broken-scriptfilename" => "enable" ))) [...] $HTTP["host"] =~ "(^|www\.)cam\.com(\:[0-9]*)?$" { server.document-root = "/home/cam/web/" accesslog.filename = "/home/cam/log/access.log" server.errorlog = "/home/cam/log/error.log" server.follow-symlink = "enable" # files to check for if .../ is requested server.indexfiles = ( "index.php", "index.html", "index.htm", "index.rb") url.rewrite = ( "^(/blondes/.*)$" => "/pyapps/essai/blondes.fcgi$1" ) } I have the following dir tree: /home/tv/web/ `-- pyapps `-- essai `-- __init__.py `-- blondes.fcgi `-- blondes.pid `-- django-fcgi.py `-- manage.py `-- manage.pyo `-- plop `-- settings.py `-- urls.py No error when restarting lighthttpd. The I run: ./manage.py runfcgi method=prefork socket=/var/tmp/lighttpd/django-fastcgi.socket daemonize=false pidfile=blondes.pid No errors neither. I then go to http://cam.com/blondes/. I offers me to download an empty file. I checked permissions but everything is set to the same user and group, and they work for the PHP site. The file /var/tmp/lighttpd/django-fastcgi.socket exists. When I reload the page, I got no output in error logs, nor in the manage.py runfcgi command. I probably missed something obvious, but what ?
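
     One likely culprit is that the blondes.fcgi entry is nested inside the ".php" handler array, so lighttpd never associates the rewritten URL with the Django socket and simply serves the file. A hedged sketch of a separate fastcgi.server entry (check-local disabled because blondes.fcgi is not a real file under the document root):
         fastcgi.server += ( "/pyapps/essai/blondes.fcgi" =>
             (( "socket"      => "/var/tmp/lighttpd/django-fastcgi.socket",
                "check-local" => "disable",
             ))
         )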

  • Samba4/Ubuntu Shares Incorrectly Available to All Users

    - by Dan
     I've got my Ubuntu server working with Samba4 and got it set up as the primary domain controller on my network with AD and all that goodness. However, I'm trying to get my Samba configuration to work with the users and groups I've defined with the Active Directory tools from Windows. For instance, I've got a share X which I want users A and B (as part of the 'management' group, known as LLGrpManager in my setup) to see, but nobody else. However, after making changes to the configuration and restarting Samba, I test by connecting to the share with my Mac over Samba as user 'C', which isn't part of the management group, and I can, incorrectly, see the X share. I've tried all sorts of combinations of specifying the group, with no luck at all. I've got a feeling that my global config might be too lenient, or that it is something to do with file permissions, but being a bit green, I'm without a clue.
     My /etc/samba/smb.conf
         # Global parameters
         [global]
             server role = domain controller
             server string = Office Server
             workgroup = LLDOMAIN
             realm = lldomain.local
             netbios name = DUMBO
             passdb backend = samba4
             logon path = \\%L\profiles\%U
             logon drive = L:
             log file = /var/log/samba/%m.log
             max log size = 50
             security = ads
             domain logons = yes
             domain master = auto
             usershare allow guests = no
             valid users = %S
         [netlogon]
             path = /var/lib/samba/sysvol/lldomain.local/scripts
             read only = no
             guest ok = no
         [sysvol]
             path = /var/lib/samba/sysvol
             read only = No
             guest ok = no
             valid users = @LLDOMAIN\LLGrpManager
         [ShareX]
             path = /data
             comment = Entire Data Volume
             guest ok = no
             comment = Entire Data Volume
             guest ok = no
             valid users = @LLDOMAIN\LLGrpManager
             admin users = @LLDOMAIN\LLGrpManager
             browsable = no
             inherit acls = yes
             inherit permissions = yes
         ...
     My /etc/nsswitch.conf
     I've also instructed the system to use the nss winbind library when searching for users or groups by adding the passwd and group stanzas in /etc/nsswitch.conf:
         passwd: compat winbind
         group:  compat winbind
         shadow: compat
     Permissions on the folder in question
         drwxrwxrwt 8 root root 4.0K Oct 28 19:11 data

  • keepalived issues on xen domU

    - by David Cournapeau
    Hi, I cannot manage to run keepalived correctly on xen domU. I am following this link for configuration, and it works great on some local VM (running with KVM). If I set up the exact same configuration, but on xen domU, it does not work: both servers do not see each other and decide to be master (10.120.100.99 being the virtual IP) $ sudo ip addr sh eth0 # host1 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:16:3e:78:f5:31 brd ff:ff:ff:ff:ff:ff inet 10.120.100.104/24 brd 10.120.100.255 scope global eth0 inet 10.120.100.99/32 scope global eth0 inet6 fe80::216:3eff:fe78:f531/64 scope link valid_lft forever preferred_lft forever $ sudo ip addr sh eth0 # host2 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:16:3e:51:36:20 brd ff:ff:ff:ff:ff:ff inet 10.120.100.105/24 brd 10.120.100.255 scope global eth0 inet 10.120.100.99/32 scope global eth0 inet6 fe80::216:3eff:fe51:3620/64 scope link valid_lft forever preferred_lft forever Is there a way I could debug this - it seems some people are able to use keepalived on xen following some mailing list, but without much info on their config.
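
     VRRP advertisements travel as multicast (224.0.0.18, IP protocol 112), and Xen bridges or dom0 firewalling frequently drop them, which produces exactly this both-nodes-become-master split. A hedged check on each domU while keepalived runs:
         tcpdump -ni eth0 host 224.0.0.18 and ip proto 112
         # Each node should see the other's advertisements. If not, allow protocol 112 across the
         # dom0 bridge/iptables, or use a keepalived build that supports unicast VRRP peers instead.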

  • Can I recover a zpool after it's been exported, given that devices have not been reallocated?

    - by cali-spc
    I had a zpool we'll call 'testpool'. testpool had 3 devices included in it, and a single zfs called 'test'. I needed to move 'test' to a new, smaller pool. I wanted to name the new pool the same name 'testpool'. Basically did the following. zfs send testpool@backup > /tmp/test-dump zpool export -f testpool zpool create -f testpool newdevice zfs receive -F testpool < /tmp/test-dump Unfortunately I found out that the testpool@backup snapshot was the wrong snapshot. Too old. I have yet to reallocate the three devices that were in the OLD testpool. (None of these 3 devices are 'newdevice', they are a separate 3.) Is there any way I can recover data in those devices? I'm thinking since I named the new, smaller pool the same as the old zpool, I'm pretty much SOL. But if not, that would be nice to know. Edit: More info I did a 'zpool import' and got this. bash-3.00# zpool import pool: testpool id: 14781458723915654709 state: ONLINE action: The pool can be imported using its name or numeric identifier. config: testpool ONLINE c5t8d0 ONLINE c5t9d0 ONLINE c5t10d0 ONLINE So I'm guessing I just need the syntax to import this zpool using its numeric identifier, while giving it a new name. S.
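
     Since zpool import still lists the old pool by its numeric identifier, it can be imported under a different name so it does not collide with the new testpool; this is standard zpool syntax, with a placeholder for the new name:
         zpool import 14781458723915654709 testpool-old
         zfs list -r testpool-old    # the old datasets and snapshots should reappear here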

  • Hyper-v on 2012R2 startup gen1 vm causes the host to freeze up

    - by sputnik
     I've searched a lot to resolve the following issue, but nothing has helped me. My problem is that starting up a first-gen VM locks up the whole host. Only a hard reset helps. A second-gen VM starts and runs perfectly. The freezes happened on 3 different VMs: FreeBSD, Ubuntu, and Windows Server 2008 R2, while Windows 8.1 on a second-gen config works perfectly. I'm using this PC mainly as a workstation. No event log errors or dumps are generated. My system:
     Windows Server 2012 R2
     FX-8350, not overclocked
     ASRock 870 Extreme R2 (crappy board imho)
     32 GB DDR3 1866 @ 1600 (despite the claimed 1866 support, my motherboard won't run the RAM at full speed)
     120 GB SSD
     4.5 TB Storage Spaces device
     I don't think it's due to my system, because VMware Workstation was running without problems. Did I forget to configure something? Any help is appreciated.
     P.S.: Even deactivating C1E, C6, and C&Q didn't work.
     P.P.S.: With no virtual network adapter set, the system still locks up. Creating a first-gen VM without any HDDs and network and launching it works. Attaching a boot DVD causes the host to freeze. The host freezes as the gen1 VM begins to boot, no matter whether from DVD or HDD.

  • How to find process that's using 100% of CPU

    - by Gabriel
     As I'm looking at htop and top, I see that my processor usage is always at 100%, but I cannot see any process that is using that much CPU. htop shows me only 1-2 processes that use around 5% CPU time. Is there a way to find the processes that are using all that CPU time? Here is the output of ps -eo pcpu,pid,user,args | sort -r -k1 | less:
         %CPU   PID USER  COMMAND
          0.8 20413 root  jsvc.exec -user tomcat -cp ./bootstrap.jar -Djava.endorsed.dirs=../common/endorsed -outfile ../logs/catalina.out -errfile ../logs/catalina.err -verbose org.apache.catalina.startup.Bootstrap -security
          0.3   631 mysql /usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql --user=mysql --pid-file=/var/lib/mysql/mysql.pid --skip-external-locking
          0.2  3380 root  /usr/local/apache/bin/httpd -k restart -DSSL
          0.2 24698 root  tailwatchd
          0.2 22472 root  /usr/local/jdk/bin/java -Djava.util.logging.config.file=/usr/local/jakarta/tomcat/conf/logging.properties -Dfile.encoding=UTF8 -XX:MaxPermSize=128m -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djava.endorsed.dirs=/usr/local/jakarta/tomcat/common/endorsed -classpath /usr/local/jakarta/tomcat/bin/bootstrap.jar -Dcatalina.base=/usr/local/jakarta/tomcat -Dcatalina.home=/usr/local/jakarta/tomcat -Djava.io.tmpdir=/usr/local/jakarta/tomcat/temp org.apache.catalina.startup.Bootstrap start
          0.1 32095 root  cpanellogd - processing bandwidth
          0.0  9733 root  sleep 1m
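
     Note that %CPU in a one-shot ps is averaged over each process's lifetime, so a recently busy process can still show near zero, and on a VPS the time may not belong to any process at all. A hedged sketch with standard procps/sysstat tools (mpstat needs the sysstat package):
         ps -eo pcpu,pid,user,args --sort=-pcpu | head -15   # snapshot sorted by CPU share
         top -b -n 1 | head -20                              # shows the %us/%sy/%wa/%st breakdown
         mpstat 1 5                                          # is the time user, system, iowait or steal?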

  • Debian/Ubuntu - No network connection

    - by leviathanus
    I have a very weird situation on my Ubuntu 12.04 LTS Server. I can not access (ping) my gateway, although I believe my config is ok - I attach the outputs. Any hints where to look? (I changed the beginning of the IP to something different, just obfuscation) ping 5.9.10.129 PING 5.9.10.129 (5.9.10.129) 56(84) bytes of data. From 5.9.10.129 (5.9.10.129) icmp_seq=2 Destination Host Unreachable From 5.9.10.129 (5.9.10.129) icmp_seq=3 Destination Host Unreachable From 5.9.10.129 (5.9.10.129) icmp_seq=4 Destination Host Unreachable uname -r 3.2.0-29-generic ifconfig eth0 eth0 Link encap:Ethernet HWaddr 3c:97:0e:0e:54:d7 inet addr:5.9.10.142 Bcast:5.9.10.159 Mask:255.255.255.224 inet6 addr: fe80::8e70:5aff:feda:c4ac/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1216 errors:0 dropped:0 overruns:0 frame:0 TX packets:490 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:107470 (107.4 KB) TX bytes:34344 (34.3 KB) Interrupt:17 Memory:d2500000-d2520000 ip route default via 5.9.10.129 dev eth0 metric 100 5.9.10.128/27 via 5.9.10.129 dev eth0 5.9.10.128/27 dev eth0 proto kernel scope link src 5.9.10.142 route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 5.9.10.129 0.0.0.0 UG 1000 0 0 eth0 5.9.10.128 5.9.10.129 255.255.255.224 UG 0 0 0 eth0 5.9.10.128 0.0.0.0 255.255.255.224 U 0 0 0 eth0 iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination UPD: Eric, this is how routing information looks on a working server: 0.0.0.0 78.47.198.49 0.0.0.0 UG 100 0 0 eth0 78.47.198.48 78.47.198.49 255.255.255.240 UG 0 0 0 eth0 78.47.198.48 0.0.0.0 255.255.255.240 U 0 0 0 eth0 As I understand it, Hetzner tries to ensure security by this, so I can not take over an IP by changing my MAC. But this is another server, which has another netmask (255.255.255.240) UPD2: BatchyX, on the working server: 78.47.198.49 dev eth0 src 78.47.198.60 cache on the broken: 5.9.10.129 dev eth0 src 5.9.10.142 cache
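
     Given that a working server uses the same style of routing table, the next thing to rule out is ARP: if the gateway's MAC never resolves, this is usually provider-side MAC filtering on the subnet rather than the local config. A hedged check (arping comes from the iputils/arping package):
         ip neigh show dev eth0           # 5.9.10.129 should be REACHABLE or STALE, not FAILED/incomplete
         arping -I eth0 -c 3 5.9.10.129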

  • Is it a good idea to run Redmine using Webrick through Nginx?

    - by Rohit
    The task here is to get Redmine setup for a small (<20) team. There may be a few users who would access the setup as business clients. I am familiar with setting up PHP for Apache, and recently, Nginx. I am not familiar with Ruby, Ruby-On-Rails, etc. I prefer to use the OS's (Ubuntu Linux LTS) package manager to install the different components as it takes care of dependencies and updates. I have setup Nginx with PHP-FPM successfully and am struggling with Redmine. As suggested here, I got Redmine running on port 3000. # /etc/init/redmine.conf # Redmine description "Redmine" start on runlevel [2345] stop on runlevel [!2345] expect daemon exec ruby /usr/share/redmine/script/server webrick -e production -b 0.0.0.0 -d And using the Nginx config on this page, I used Nginx to proxy requests to Webrick. server { listen 80; server_name myredmine.example.com; location / { proxy_pass http://127.0.0.1:3000; } } This works well locally. I wanted some opinions before trying this out on the live box (a 256 MB VPS). Further, should I use something like monit to monitor webrick for failure?
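
     WEBrick behind nginx will work for a team this size, but it is single-threaded, so a Rack server such as thin or unicorn (or Passenger) is the more common production choice. If you keep the proxy setup, forwarding the original host and client address avoids broken redirects; a sketch of the location block with standard nginx directives:
         location / {
             proxy_pass http://127.0.0.1:3000;
             proxy_set_header Host $host;
             proxy_set_header X-Real-IP $remote_addr;
             proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
         }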

  • Using OSX home directories from linux

    - by Steffen
    I'm running an OSX (Snow Leopard) Server with OpenDirectory, which is nothing else than a modified OpenLDAP with some Apple-specific schemas. However, I want to reuse this directory on some of my Linux (Debian Squeeze) boxes. It's no problem to authenticate against OSXs LDAP Server, this works fine already. What I struggle with is the way the home folders are specified in OSX. If I query the passwd config on one of my linux machines, the OSX imported entries are looking like this myaccount:x:1034:1026:Firstname Lastname:/Network/Servers/hostname.example.com/Volumes/MyShare/Users/myaccount:/bin/bash While those network home folders might be fine for OSX-Clients, I don't want those server based paths on my linux machines. I saw that there is an NFSHomeDirectory Attribute in the OSX User inspector, but if I change this the whole user home path gets changed. Since my users should be able to login on both systems, OSX and Linux, this is not what I want. Does anyone have an idea how I must configure OSX to make my linux machines use home folders like /net/myaccount and leave the configuration for OSX clients untouched?
