Search Results

Search found 9335 results on 374 pages for 'extension modules'.


  • Dovecot Virtual Users Not Authenticating

    - by blankabout
    We have a standard Postfix/Dovecot installation working perfectly with real users, but we cannot work out how to add virtual users: all virtual-user login attempts fail with authentication errors. Following are snippets from the configuration files.

    /etc/postfix/main.cf:

        virtual_mailbox_domains = virtualexample.com
        virtual_mailbox_base = /var/spool/vhosts
        virtual_mailbox_recipients = hash:/etc/postfix/virtual_mailbox_recipients

    /etc/dovecot/dovecot.conf:

        !include conf.d/*.conf

    /etc/dovecot/conf.d/10-auth.conf:

        auth_mechanisms = cram-md5 digest-md5 plain
        passdb {
          driver = passwd-file
          # Path for passwd-file. Also set the default password scheme.
          args = scheme=cram-md5 /etc/cram-md5.pwd
        }

    /etc/cram-md5.pwd:

        [email protected]{MD5}$1$uIMvzy92$9Xt67B/qw4u6txkkxzne80

    This is a snippet from the log when a login attempt is made (a second attempt later in the log is identical apart from the PIDs and lookup IDs):

        auth: Debug: Loading modules from directory: /usr/lib64/dovecot/auth
        auth: Debug: Module loaded: /usr/lib64/dovecot/auth/libauthdb_ldap.so
        auth: Debug: Module loaded: /usr/lib64/dovecot/auth/libdriver_sqlite.so
        auth: Debug: Module loaded: /usr/lib64/dovecot/auth/libmech_gssapi.so
        auth: Debug: passwd-file /etc/cram-md5.pwd: Read 1 users
        auth: Debug: auth client connected (pid=21990)
        auth: Debug: client in: AUTH#0111#011CRAM-MD5#011service=imap#011lip=1.1.1.1#011rip=2.2.2.2#011lport=143#011rport=51774
        auth: Debug: client out: CONT#0111#011PDI1Njc0NjQ1NzQ3MTY0NTkuMTM0MTIxNzkwN0BncDM+
        auth: Debug: client in: CONT
        auth: Debug: passwd-file([email protected],2.2.2.2): lookup: [email protected] file=/etc/cram-md5.pwd
        auth: Debug: client out: OK#0111#[email protected]
        auth: Debug: master in: REQUEST#0111630404609#01121990#0111#011b66b5f46b520a08e1d19d3d249be7073
        auth: Debug: passwd([email protected],2.2.2.2): lookup
        auth: passwd([email protected],2.2.2.2): unknown user
        auth: Error: userdb([email protected],2.2.2.2): user not found from userdb passwd
        auth: Debug: master out: NOTFOUND#0111630404609
        imap: Error: Authenticated user not found from userdb, auth lookup id=1630404609 (client-pid=21990 client-id=1)
        imap-login: Internal login failure (pid=21990 id=1) (auth failed, 1 attempts): user=, method=CRAM-MD5, rip=2.2.2.2, lip=1.1.1.1, mpid=21993

    It would appear that the user lookup is not working, even though the log shows that Dovecot is reading /etc/cram-md5.pwd and the user is configured in that same file.
    There are of course dozens of examples of using virtual users with Dovecot, but all the ones we have found either refer to Dovecot 1.x (we are using 2.x), use only virtual users (we must support real AND virtual users), or want to use a MySQL db, whereas we need to use a text file. Some hints about where we are going wrong would be very much appreciated.
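    One thing the log points at: the CRAM-MD5 passdb lookup succeeds, but the follow-up userdb lookup falls through to the system passwd database, where a virtual user naturally does not exist. Dovecot 2.x needs a userdb that can answer for virtual users too. A minimal sketch, assuming a vmail user/group owns /var/spool/vhosts (the uid/gid and home template here are assumptions, not taken from the question):

        # /etc/dovecot/conf.d/10-auth.conf -- sketch only, untested on this setup
        userdb {
          driver = static
          args = uid=vmail gid=vmail home=/var/spool/vhosts/%d/%n
        }

    Note that the static driver matches every user, so if real users must keep resolving through the system database, list this block after the existing system passwd userdb; Dovecot tries userdb blocks in order.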


  • Getting Classic ASP to work in .js files under IIS 7

    - by Abdullah Ahmed
    I am moving a client's classic ASP webapp to a new IIS7-based server. The site contains some .js files which hold JavaScript but also classic ASP in <% %> tags, containing a bunch of conditional statements designed to spit out pieces of JavaScript based on session state variables. Here's a brief example of what such a file looks like:

        var arrHOFFSET = -1;
        var arrLeft = "<";
        var arrRight = ">";
        <% If ((Session("dashInv") = "True") And ((Session("systemLevelStaff") = "4") Or (Session("systemLevelCompany") = "4"))) Then %>
        addMainItem("/MgmtTools/WelcomeInventory.asp?wherefrom=salesMan","",81,"center","","",0,0,"","","","","");
        <% Else %>
        <% If (Session("dashInv") = "False") And ((Session("systemLevelStaff") = "4") Or (Session("systemLevelCompany") = "4")) Then %>
        <% Else %>
        addMainItem("/calendar/welcome.asp","",81,"center","","",0,0,"","","","","");
        <% End If %>
        <% End If %>
        defineSubmenuProperties(135,"center","center",-3,0,"","","","","","","");

    Currently this file (named custom.js, for example) starts throwing JS errors, because the server doesn't seem to recognize the ASP code in it and therefore does not parse it. I know I need to somehow specify that a .js file should also be treated like an .asp file and run through the ASP parser; however, I am not sure how to go about doing this. Here is what I've tried so far. Under the server node in IIS, under Handler Mappings, I created a new Script Map with the following settings:

        Request Path: *.js
        Executable:   C:\Windows\System32\inetsrv\asp.dll
        Name:         ASPClassicInJSFiles
        Mapping:      Invoke Handler only if request is mapped to: File
        Verbs:        All verbs
        Access:       Script

    I also created a similar handler under the site node itself. Under MIME Types, .js is defined as application/x-javascript. None of these work. If I simply rename the file to have an .asp extension then things work; however, this app is poorly coded and has literally hundreds of files with the .js files included in them under various names and locations, so rename, search and replace is the last option I have.
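    For reference, the same script map can be expressed in the site's web.config, which also makes it easy to see whether something else is overriding it. A sketch (it assumes the Classic ASP role service is installed and asp.dll sits at the usual path):

        <configuration>
          <system.webServer>
            <handlers>
              <add name="ASPClassicInJSFiles" path="*.js" verb="*"
                   modules="IsapiModule"
                   scriptProcessor="C:\Windows\System32\inetsrv\asp.dll"
                   resourceType="File" requireAccess="Script" />
            </handlers>
          </system.webServer>
        </configuration>

    Two things worth checking alongside it: that asp.dll is allowed under ISAPI and CGI Restrictions at the server level, and that the *.js handler sits above the static-file handler in the ordered handler list, otherwise the file is still served verbatim.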


  • Problems when trying to connect to a router wirelessly

    - by Ruud Lenders
    The situation - At my girlfriend's parents' place there are six Windows 7 devices that are wired or wirelessly connected to a router: 3 desktops and 3 laptops. There are also several smartphones using the router. The router is secured with WPA2 (AES).

    The problem - We never had any problems with the router for over a year. But recently, about 3 weeks ago, my girlfriend's laptop (HP) and my laptop (ASUS) started to develop problems while trying to connect to the router. The router has stopped showing up in the network list. Sometimes it comes back and shows up, but then it keeps saying something along the lines of "Could not connect", and not long after that it disappears again. The range of the router is not the problem here, because we experience the same when we sit next to the router. Sometimes, if we are lucky and have waited a long time (10-15 minutes) without using the laptop for anything, the laptop will eventually connect to the router successfully.

    The attempts - Of course, the Windows 7 troubleshooter. We tried troubleshooting the connection problems and the wireless network adapter, but no luck. We also reset the router enough times to know that's not helping either. Here's the full list of things we tried that did not help:

        - Running the Windows 7 troubleshooter
        - Resetting the router (more than once)
        - Setting the router settings to factory defaults
        - Disconnecting all other devices except one laptop
        - Applying a system restore
        - Trying static/dynamic IP/DNS - dynamic is better, right?
        - Enabling/disabling IPv6 - should I keep IPv6 disabled?
        - Running the command: netsh wlan stop hostednetwork
        - Running the command: netsh wlan set hostednetwork mode=disallow
        - Updating/reinstalling the wireless adapter drivers

    The tests - To help find the core of the problem, we tested the following:

        - Plugging an ethernet cable into the router and into our laptops - worked fine
        - Connecting someone else's laptop to the router (wireless) - worked fine
        - Connecting our laptops to someone else's router - worked fine

    The router - This information might be relevant. Router model: Sitecom 300N Wireless Router, hardware version 01. The DHCP server's IPs range from 192.168.0.100 to 192.168.0.200. Router settings:

        - Wireless channel: 12
        - Channel bandwidth: 20/40 MHz
        - Extension channel: 8
        - Preamble type: Long
        - 802.11g protection: Disabled
        - UPnP: Enabled

    The laptops - If you are wondering about our laptops: mine is an ASUS Pro64JQ, my girlfriend's is an HP Pavilion G6. Both run Windows 7 Professional x64 with Service Pack 1. My wireless adapter is an Atheros AR9285, with AdHoc 11n enabled.

    The question - Does anyone have experience with the same problems as I do? Or does someone know how to solve this? Are there more tricks I can try, or settings I should change?

    Note - Our laptops are not slow or old. My laptop is 1.5 years old, and the other laptop is just 5 months old. I know how to keep laptops clean and I'm pretty sure both laptops are not bloated with useless software.


  • Expectations for NTFS file recovery

    - by Fred Hamilton
    Yesterday I booted my XP system, and as I looked up a minute later I saw the light blue screen and tail end of that pre-boot disk check Windows sometimes does if it finds an error (or was previously told to run a disk check during the next boot). I didn't worry about it at the moment... But then I looked at my "scratch" disk, which was a 70%-full 750GB hard disk... and it now looks like it has been freshly formatted. It doesn't have a single file on it, just the hidden "System Volume Information" folder and 750GB of freedom from data. I looked at some of the recovery tools from the Free NTFS partition recovery question and decided to try PC INSPECTOR™ File Recovery 4.x initially. It ran overnight and afterwards returned a list of thousands of files it could recover. The odd thing was that the filenames were lost, but the file extensions were not (WTF?). And all of the files were exactly 1,472kB in size. I recovered a dozen PDFs as a test, and 80% of them displayed OK despite being padded out to 1.5MB (though I assume any files over 1,472kB are hosed).

    My primary question is: is this the best I can expect from any file recovery software when trying to recover NTFS files? Or is there perhaps something better out there? I assume this is as good as it gets, but wanted to check in with the experts first.

    Bonus questions:

        - What might have happened to my drive? I didn't intentionally format it. I've never seen a disk error cause the drive to suddenly become a clean, reformatted drive. Could some malicious/confused software have told my PC to format my disk on reboot? Is that even a function Windows XP has?
        - Why can the file extensions be recovered but not the filenames? Does NTFS really treat them as separate entities? I thought I had 8.3 naming turned off, but maybe that had something to do with it. Or maybe it looks at the data in the file and guesses the extension?


  • Encoding multiple video streams with a single avconv invocation

    - by automatthias
    I played with avconv on Ubuntu and I'm now able to e.g. record the desktop with sound from a soundcard. One thing I wanted to do was record two video inputs at the same time, for instance the desktop and the webcam. I thought about doing something like this:

        avconv \
            -f alsa \
            -i default \
            -acodec flac \
            -f video4linux2 \
            -r 6 \
            -i /dev/video0 \
            -f x11grab \
            -i :0.0 \
            out.mkv

    My thinking was that if you define multiple video inputs, and the .mkv format can handle multiple video streams, avconv will encode 2 video streams and 1 audio stream into one file. But this isn't what happens:

        avconv version 0.8.4-6:0.8.4-0ubuntu0.12.10.1, Copyright (c) 2000-2012 the Libav developers
          built on Nov 6 2012 16:51:11 with gcc 4.7.2
        [alsa @ 0x1091bc0] capture with some ALSA plugins, especially dsnoop, may hang.
        [alsa @ 0x1091bc0] Estimating duration from bitrate, this may be inaccurate
        Input #0, alsa, from 'default':
          Duration: N/A, start: 1354364317.020350, bitrate: N/A
            Stream #0.0: Audio: pcm_s16le, 48000 Hz, 2 channels, s16, 1536 kb/s
        [video4linux2 @ 0x10923e0] Estimating duration from bitrate, this may be inaccurate
        Input #1, video4linux2, from '/dev/video0':
          Duration: N/A, start: 100607.724745, bitrate: 29491 kb/s
            Stream #1.0: Video: rawvideo, yuyv422, 640x480, 29491 kb/s, 6 tbr, 1000k tbn, 6 tbc
        [x11grab @ 0x107b2a0] device: :0.0+83,87 -> display: :0.0 x: 83 y: 87 width: 854 height: 480
        [x11grab @ 0x107b2a0] shared memory extension found
        [x11grab @ 0x107b2a0] Estimating duration from bitrate, this may be inaccurate
        Input #2, x11grab, from ':0.0+83,87':
          Duration: N/A, start: 1354364318.488382, bitrate: 196761 kb/s
            Stream #2.0: Video: rawvideo, bgra, 854x480, 196761 kb/s, 15 tbr, 1000k tbn, 15 tbc
        Incompatible pixel format 'bgra' for codec 'mpeg4', auto-selecting format 'yuv420p'
        [buffer @ 0x107fcc0] w:854 h:480 pixfmt:bgra
        [avsink @ 0x10bdf00] auto-inserting filter 'auto-inserted scaler 0' between the filter 'src' and the filter 'out'
        [scale @ 0x10dc680] w:854 h:480 fmt:bgra -> w:854 h:480 fmt:yuv420p flags:0x4
        Output #0, matroska, to '.../out.mkv':
          Metadata:
            encoder         : Lavf53.21.0
            Stream #0.0: Video: mpeg4, yuv420p, 854x480, q=2-31, 4000 kb/s, 1k tbn, 15 tbc
            Stream #0.1: Audio: libvorbis, 48000 Hz, 2 channels, s16
        Stream mapping:
          Stream #2:0 -> #0:0 (rawvideo -> mpeg4)
          Stream #0:0 -> #0:1 (pcm_s16le -> libvorbis)
        Press ctrl-c to stop encoding
        [mpeg4 @ 0x10bd800] rc buffer underflow
        ^Cframe= 160 fps= 15 q=2.0 Lsize= 3414kB time=10.66 bitrate=2623.0kbits/s
        video:3273kB audio:131kB global headers:4kB muxing overhead 0.165600%
        Received signal 2: terminating.

    I'm not sure if it's a question of mapping (some -map options to add?) or whether avconv just can't encode more than one video stream at a time. So is it an actual avconv limitation, a limitation of the available containers, or me simply not finding the right combination of command line options?
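    Part of the answer is visible in the "Stream mapping" section: by default avconv selects just one video stream for the output (the highest-resolution one, here the x11grab input) plus one audio stream. Explicit -map options override that selection. A sketch with the same three inputs (stream indexes assume the input order above):

        avconv \
            -f alsa -i default \
            -f video4linux2 -r 6 -i /dev/video0 \
            -f x11grab -i :0.0 \
            -map 0:0 -map 1:0 -map 2:0 \
            -acodec flac \
            out.mkv

    Whether the result is playable then depends on the player; Matroska itself has no problem storing two video tracks.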


  • Trying to install Rmagick on Debian

    - by Janak
    One of the things you need to do to get RMagick installed:

        apt-get install libmagick9-dev

    When I try that I get the following errors:

        Reading package lists... Done
        Building dependency tree... Done
        Some packages could not be installed. This may mean that you have requested
        an impossible situation or if you are using the unstable distribution that
        some required packages have not yet been created or been moved out of Incoming.

        Since you only requested a single operation it is extremely likely that the
        package is simply not installable and a bug report against that package
        should be filed.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
          libmagick9-dev: Depends: libjpeg62-dev but it is not going to be installed
                          Depends: libbz2-dev but it is not going to be installed
                          Depends: libtiff4-dev but it is not going to be installed
                          Depends: libwmf-dev (>= 0.2.7-1) but it is not going to be installed
                          Depends: libz-dev
                          Depends: libpng12-dev but it is not going to be installed
                          Depends: libfreetype6-dev but it is not going to be installed
                          Depends: libexif-dev but it is not going to be installed
                          Depends: libdjvulibre-dev but it is not going to be installed
                          Depends: librsvg2-dev but it is not going to be installed
                          Depends: libgraphviz-dev but it is not going to be installed
        E: Broken packages

    I don't know what to do to fix this.

    EDIT 1: I tried sudo apt-get install librmagick-ruby and it worked fine, but then I needed to install the fleximage gem:

        gem1.8 install fleximage

    and I got the following error message:

        Building native extensions. This could take a while...
        ERROR: Error installing fleximage:
               ERROR: Failed to build gem native extension.

        /usr/bin/ruby1.8 extconf.rb
        checking for Ruby version >= 1.8.5... yes
        checking for cc... yes
        checking for Magick-config... no
        Can't install RMagick 2.12.2. Can't find Magick-config in /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/gems/1.8/bin

        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of necessary
        libraries and/or headers. Check the mkmf.log file for more details. You may
        need configuration options.

        Provided configuration options:
            --with-opt-dir
            --without-opt-dir
            --with-opt-include
            --without-opt-include=${opt-dir}/include
            --with-opt-lib
            --without-opt-lib=${opt-dir}/lib
            --with-make-prog
            --without-make-prog
            --srcdir=.
            --curdir
            --ruby=/usr/bin/ruby1.8

        Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/rmagick-2.12.2 for inspection.
        Results logged to /usr/lib/ruby/gems/1.8/gems/rmagick-2.12.2/ext/RMagick/gem_make.out
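    The gem failure is the same problem resurfacing: Magick-config ships with the ImageMagick development package, which never got installed because of the broken libmagick9-dev dependencies. A sketch of the usual way forward (package names are assumptions and vary by Debian release; aptitude's resolver often finds a consistent set where apt-get gives up):

        sudo apt-get update
        sudo aptitude install libmagick9-dev     # or, on newer releases:
        sudo apt-get install libmagickwand-dev
        # then retry the native build
        sudo gem1.8 install rmagick fleximage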


  • PTR Record Troubles

    - by Physikal
    I am having a hell of a time getting our PTR record right. Our current reverse zone looks like this:

        $ttl 38400
        @ IN SOA ns1.domain.com. admin.domain.com. ( 1268669139 10800 3600 604800 38400 )
        xxx.xxx.xxx.in-addr.arpa. IN NS ns2.domain.com.
        xxx.xxx.xxx.in-addr.arpa. IN NS ns1.domain.com.
        97 IN PTR mail.domain.com.
        xxx.xxx.xxx.xxx.in-addr.arpa. IN PTR mail.domain.com.
        97.96/28. IN PTR mail.domain.com

    For some reason the only thing that works is the 97.96/28. line. When this line is in there, intodns.com actually reports that I have a PTR record; if I remove that line, it says I have no PTR. I have followed the instructions at http://www.philchen.com/2007/04/04/configuring-reverse-dns and when I follow those instructions intodns.com says I have no PTR. When it does work with the 97.96/28. line, the PTR comes back (per intodns.com) as:

        97.xxx.xxx.xxx.in-addr.arpa -> mail.domain.com.xxx.xxx.xxx.in-addr.arpa

    which is, to my knowledge, an incorrect PTR. I want it to come back as just mail.domain.com, without the xxx.xxx.xxx.in-addr.arpa extension. I have tried everything I can think of but I can't fix it. I can't help but think it's one of those things that is so stupid and simple I'm going to do the ol' facepalm. Any help is greatly appreciated. Thanks! In the event that the domain zone is needed, here it is:

        $ttl 38400
        @ IN SOA domain.com. [email protected]. ( 1265221037 10800 3600 604800 38400 )
        domain.com. IN A xxx.xxx.xxx.xxx
        www.domain.com. IN A xxx.xxx.xxx.xxx
        ftp.domain.com. IN A xxx.xxx.xxx.xxx
        m.domain.com. IN A xxx.xxx.xxx.xxx
        localhost.domain.com. IN A 127.0.0.1
        webmail.domain.com. IN A xxx.xxx.xxx.xxx
        admin.domain.com. IN A xxx.xxx.xxx.xxx
        mail.domain.com. IN A xxx.xxx.xxx.xxx
        domain.com. IN MX 5 mail.domain.com.
        domain.com. IN TXT "v=spf1 a mx a:domain.com ip4:xxx.xxx.xxx.xxx ?all"
        domain.com. IN NS ns1
        domain.com. IN NS ns2
        ns1 IN A xxx.xxx.xxx.xxx
        ns2 IN A xxx.xxx.xxx.xxx

    Any double entries in different formats were part of my troubleshooting process.
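    The trailing-dot rule explains the mangled answer: in the one line that works, the target mail.domain.com has no trailing dot, so the server appends the zone origin, producing exactly the mail.domain.com.xxx.xxx.xxx.in-addr.arpa that intodns.com reports. A minimal sketch of the fix (the 96/28 label itself is kept from the question and is assumed to match your ISP's RFC 2317-style delegation):

        97.96/28. IN PTR mail.domain.com.    ; note the final dot on the target,
                                             ; which stops $ORIGIN being appended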


  • Why does unpartitioned Hitachi HDS5C3020 drive start consuming 50% more power 15 minutes after boot?

    - by Pro Backup
    In a Debian 6.0.6 system there are 74 2TB Toshiba DT01ABA200 drives. These drives are identified as Hitachi HDS5C3020BLE630 drives running firmware revision MZ4OAAB0. 64 drives are attached via HP SAS expander cards to an LSI 2008 SAS controller, another 5 drives are connected directly to the mainboard, 4 drives are connected to a Sil-based PCI controller, and the last drive is only powered and has no data cable connected. The LSI and Sil cards' onboard BIOS are both disabled, and the mpt2sas and sata_sil modules are removed from the Linux debian 2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012 x86_64 GNU/Linux kernel. The mpt2sas module is loaded after boot using a modprobe command in /etc/rc.local. These 74 drives are not partitioned, not formatted and also not mounted. The system consumes:

        - with 0 drives: 70.6 - 70.9 Watt (also 15 minutes after boot);
        - with 74 drives: 330 - 360 Watt just after boot (equivalent to 3.5 - 3.9 W per drive in idle state);
        - with 74 drives: 420 - 466 Watt, each time in the 15th minute of uptime (equivalent to 4.7 - 5.3 W per drive in idle state).

    The drive specification lists 4.7 W for read/write and 3.3 W for idle power consumption. The increased power consumption is most likely on the 5V line, because after roughly 1 minute an "over current protection" (OCP) of the power supply (PSU) shuts down the power. The PSU in use is a single-rail model with an OCP of 122A on the 12V line and 55A on the 5V line.

    Regression: it doesn't matter whether the drives' APM value is set to disabled or 1 (maximum power saving). The operating system records no read/write activity in /proc/diskstats; the values there are identical (28 read, 0 write operations) to those immediately after the modprobe operation. I can't test what happens when booting into the mainboard's BIOS, to exclude any OS intervention, because the Super Micro X8SI6-F mainboard running firmware 06/27/12 has a bug that incorrectly reads a +74.0 C CPU sensor temperature as "High" in BIOS mode and shuts down the power after 1 minute.

    What might be causing the drive read/write activity on all drives in the 15th minute after boot, and how can I prevent it from happening?


  • configuration required for HIVE to be installed on a node

    - by ????? ????????
    I went through the process of manually installing Ambari (not through SSH, because I couldn't get keyless login to work) and everything installed OK, except for HIVE and GANGLIA. I got this message:

        stderr: None
        stdout:
        warning: Unrecognised escape sequence '\;' in file /var/lib/ambari-agent/puppet/modules/hdp-hive/manifests/hive/service_check.pp at line 32
        warning: Dynamic lookup of $configuration is deprecated. Support will be removed in Puppet 2.8. Use a fully-qualified variable name (e.g., $classname::variable) or parameterized classes.
        notice: /Stage[1]/Hdp::Snappy::Package/Hdp::Snappy::Package::Ln[32]/Hdp::Exec[hdp::snappy::package::ln 32]/Exec[hdp::snappy::package::ln 32]/returns: executed successfully
        notice: /Stage[2]/Hdp-hive::Hive::Service_check/File[/tmp/hiveserver2Smoke.sh]/ensure: defined content as '{md5}7f1d24221266a2330ec55ba620c015a9'
        notice: /Stage[2]/Hdp-hive::Hive::Service_check/File[/tmp/hiveserver2.sql]/ensure: defined content as '{md5}0c429dc9ae0867b5af74ef85b5530d84'
        notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/File[/tmp/hcatSmoke.sh]/ensure: defined content as '{md5}bae7742f7083db968cb6b2bd208874cb'
        notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: 13/06/25 03:11:56 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
        notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: FAILED: SemanticException org.apache.hadoop.hive.ql.parse.SemanticException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
        notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: 13/06/25 03:12:06 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
        notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: FAILED: SemanticException [Error 10001]: Table not found hcatsmokeida8c07401_date102513
        notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: 13/06/25 03:12:15 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
        notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: FAILED: SemanticException o

    (the log is truncated at that point). When I go to the alerts and health checks I'm getting this:

        Hive Metastore status check CRIT for 42 minutes
        CRITICAL: Error accessing hive-metaserver status [13/06/25 03:44:06 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect.

    What am I doing wrong? I have already tried ambari-server reset on the database, without results.
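    The warning that repeats through the log is a reasonable lead: hive.metastore.local is ignored by this Hive version, so the smoke tests need hive.metastore.uris pointing at a running metastore. A sketch of the hive-site.xml property (host and port here are assumptions; 9083 is the usual metastore Thrift port):

        <property>
          <name>hive.metastore.uris</name>
          <value>thrift://your-metastore-host:9083</value>
        </property>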


  • SSH to an ubuntu machine using avahi

    - by tensaiji
    I have an ubuntu box that I connect to using avahi. Connecting to that box works fine for all services (I regularly use AFP, SSH and SMB on it) but I've noticed that whenever I connect to it from a mac using SSH (and using the ".local" dns name provided by avahi - eg. "ssh .local") SSH tries to connect using ipv6, which for some reason times out (after two minutes), then it tries ipv4 which connects immediately. I'd like to avoid this timeout, as it's really annoying for me and other users - if SSH tried ipv4 first or if ssh over ipv6 worked then that would solve the problem. But so far I've been unable to get either to work (the best I've managed is to specify the "-4" option to SSH to stop it from trying ipv6 at all). I'm using Ubuntu 10.04. Any solution has to be on the server (not the client) as there are multiple clients connecting. A possible complication might be that my LAN is set up to allow link-local ipv6 addresses only, but I have other servers (using Mac OS) that I can SSH into using ipv6. I suspect that the problem could be solved by either preventing avahi from broadcasting the ipv6 address, or by enabling ssh over ipv6, but so far as I can tell avahi is already configured not to broadcast the ipv6 address and sshd is configured to allow ipv6 connections!

    Here's my /etc/avahi/avahi-daemon.conf (I don't think I've changed anything from the ubuntu defaults):

        [server]
        #host-name=foo
        #domain-name=local
        #browse-domains=0pointer.de, zeroconf.org
        use-ipv4=yes
        use-ipv6=no
        #allow-interfaces=eth0
        #deny-interfaces=eth1
        #check-response-ttl=no
        #use-iff-running=no
        #enable-dbus=yes
        #disallow-other-stacks=no
        #allow-point-to-point=no

        [wide-area]
        enable-wide-area=yes

        [publish]
        #disable-publishing=no
        #disable-user-service-publishing=no
        #add-service-cookie=no
        #publish-addresses=yes
        #publish-hinfo=yes
        #publish-workstation=yes
        #publish-domain=yes
        #publish-dns-servers=192.168.50.1, 192.168.50.2
        #publish-resolv-conf-dns-servers=yes
        #publish-aaaa-on-ipv4=yes
        #publish-a-on-ipv6=no

        [reflector]
        #enable-reflector=no
        #reflect-ipv=no

        [rlimits]
        #rlimit-as=
        rlimit-core=0
        rlimit-data=4194304
        rlimit-fsize=0
        rlimit-nofile=300
        rlimit-stack=4194304
        rlimit-nproc=3

    and here's my sshd_config (mainly updated to only allow pub/private keys):

        # What ports, IPs and protocols we listen for
        Port 22
        # Use these options to restrict which interfaces/protocols sshd will bind to
        #ListenAddress ::
        #ListenAddress 0.0.0.0
        Protocol 2
        # HostKeys for protocol version 2
        HostKey /etc/ssh/ssh_host_rsa_key
        HostKey /etc/ssh/ssh_host_dsa_key
        # Privilege Separation is turned on for security
        UsePrivilegeSeparation yes
        # Lifetime and size of ephemeral version 1 server key
        KeyRegenerationInterval 3600
        ServerKeyBits 768
        # Logging
        SyslogFacility AUTH
        LogLevel INFO
        # Authentication:
        LoginGraceTime 180
        PermitRootLogin no
        StrictModes yes
        RSAAuthentication yes
        PubkeyAuthentication yes
        #AuthorizedKeysFile %h/.ssh/authorized_keys
        # Don't read the user's ~/.rhosts and ~/.shosts files
        IgnoreRhosts yes
        # For this to work you will also need host keys in /etc/ssh_known_hosts
        RhostsRSAAuthentication no
        # similar for protocol version 2
        HostbasedAuthentication no
        # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
        #IgnoreUserKnownHosts yes
        # To enable empty passwords, change to yes (NOT RECOMMENDED)
        PermitEmptyPasswords no
        # Change to yes to enable challenge-response passwords (beware issues with
        # some PAM modules and threads)
        ChallengeResponseAuthentication no
        # Change to no to disable tunnelled clear text passwords
        PasswordAuthentication no
        AllowGroups sshusers
        # Kerberos options
        #KerberosAuthentication no
        #KerberosGetAFSToken no
        #KerberosOrLocalPasswd yes
        #KerberosTicketCleanup yes
        # GSSAPI options
        #GSSAPIAuthentication no
        #GSSAPICleanupCredentials yes
        X11Forwarding yes
        X11DisplayOffset 10
        PrintMotd no
        PrintLastLog yes
        TCPKeepAlive yes
        #UseLogin no
        MaxStartups 10:30:60
        #Banner /etc/issue.net
        # Allow client to pass locale environment variables
        AcceptEnv LANG LC_*
        Subsystem sftp /usr/lib/openssh/sftp-server
        UsePAM yes

    Does anyone have any ideas that I can try, or has anyone experienced anything similar?
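    Since the Mac is clearly still learning an IPv6 address for the name, one server-side lever worth trying: even with use-ipv6=no, avahi by default publishes AAAA records over its IPv4 mDNS socket (note the commented-out publish-aaaa-on-ipv4=yes default above). A sketch of the change, assuming the 10.04 avahi honours the option (it is listed in the default config):

        # /etc/avahi/avahi-daemon.conf
        [publish]
        publish-aaaa-on-ipv4=no

        # then: sudo service avahi-daemon restart

    With that in place, clients resolving the .local name over IPv4 should no longer receive an IPv6 address to try first.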


  • Windows 2008 IIS 7.0 HTTP to HTTPS Redirect -- Versus IIS 6.0 Mechanism

    - by Dan7el
    This topic, creating a mechanism for redirection from HTTP to HTTPS on a Windows 2008 server running IIS 7.0, is a much written-about topic on the Internet. How this is done is really not so much my issue. My issue is more one of explaining why this can't be done with the standard HTTP Redirect module that ships with Windows 2008 IIS 7.0, and why other, more arduous methods are needed instead.

    First, the IIS 6.0 method requires no externally available modules, nor does it require any additional modifications to the web.config or any other type of development effort. It's outlined here: http://blogs.microsoft.co.il/blogs/dorr/archive/2009/01/13/how-to-force-redirection-from-http-to-https-on-iis-6-0.aspx. As you can see, the basic steps are to run the snap-in, get the properties on the site, and make some modifications. Presto, you have the HTTP-to-HTTPS redirect set up.

    Now, on the IIS 7.0 platform, it doesn't seem this simple. An initial search found the following site, http://www.sslshopper.com/iis7-redirect-http-to-https.html, which has two separate approaches:

        1. Installing a separately available Microsoft module, the URL Rewrite Module, and then adding XML to the web.config.
        2. A custom error page.

    There might be other methods, but these are the basic ones, and the first is listed as the primary method. But wait... there exists on IIS 7.0 an HTTP Redirect module. So... why can't I use the HTTP Redirect module to do this very thing? This is really my big question. I need to know this because my management is going to insist I use the HTTP Redirect module and set up the HTTP to HTTPS redirect in a similar fashion to how we do it in IIS 6.0. Can someone please explain to me, in clean, simple, easy to understand terms that both I and my management can understand, why I need to go get the URL Rewrite Module, install it on the server, and make the web.config changes suggested by the article, instead of simply using the HTTP Redirect module that's already installed on the site? Thanks a bunch.
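    A short answer worth sketching: the built-in HTTP Redirect module redirects unconditionally; it has no way to test whether a request already arrived over HTTPS. Point it at https://yourhost/ on a site serving both bindings and the HTTPS requests get redirected too, producing a loop (the IIS 6.0 trick avoids this because it hangs the redirect off the 403.4 "SSL required" error page instead). URL Rewrite can make the redirect conditional; the rule from the linked article looks roughly like this in web.config:

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="HTTP to HTTPS redirect" stopProcessing="true">
                <match url="(.*)" />
                <conditions>
                  <add input="{HTTPS}" pattern="off" ignoreCase="true" />
                </conditions>
                <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>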


  • Problem deploying GWT application on apache and tomcat using mod_jk

    - by Colin
    I'm trying to deploy a GWT app on Apache using the mod_jk connector. I have compiled the application and tested it on Tomcat at localhost:8080/loginapp, and it works OK. However, when I deploy it to Apache using mod_jk I get the starter page, which gives me a login form, but trying to log in I get this error:

        404 Not Found
        The requested URL /loginapp/loginapp/login was not found on this server

    Looking at the Apache log files I see this:

        [Thu Jan 13 13:43:17 2011] [error] [client 127.0.0.1] client denied by server configuration: /usr/local/tomcat/webapps/loginapp/WEB-INF/
        [Thu Jan 13 13:43:26 2011] [error] [client 127.0.0.1] File does not exist: /usr/local/tomcat/webapps/loginapp/loginapp/login, referer: http://localhost/loginapp/LoginApp.html

    The mod_jk configuration in my apache2.conf file is as follows:

        LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so
        JkWorkersFile /etc/apache2/workers.properties
        JkLogFile /var/log/apache2/mod_jk.log
        JkLogLevel info
        JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "
        JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
        JkRequestLogFormat "%w %V %T"

        <IfModule mod_jk.c>
            Alias /loginapp "/usr/local/tomcat/webapps/loginapp/"
            <Directory "/usr/local/tomcat/webapps/loginapp/">
                Options Indexes +FollowSymLinks
                AllowOverride None
                Allow from all
            </Directory>
            <Location /*/WEB-INF/*>
                AllowOverride None
                deny from all
            </Location>
            JkMount /loginapp/*.html loginapp
        </IfModule>

    My workers.properties file is as follows:

        workers.tomcat_home=/usr/local/tomcat
        workers.java_home=/usr/lib/jvm/java-6-sun
        ps=/
        worker.list=loginapp
        worker.loginapp.type=ajp13
        worker.loginapp.host=localhost
        worker.loginapp.port=8009
        worker.loginapp.cachesize=10
        worker.loginapp.cache_timeout=600
        worker.loginapp.socket_keepalive=1
        worker.loginapp.recycle_timeout=300
        worker.loginapp.lbfactor=1

    And these are the servlet mappings in the application's web.xml:

        <servlet>
            <servlet-name>loginServlet</servlet-name>
            <servlet-class>com.example.loginapp.server.LoginServiceImpl</servlet-class>
        </servlet>
        <servlet-mapping>
            <servlet-name>loginServlet</servlet-name>
            <url-pattern>/loginapp/login</url-pattern>
        </servlet-mapping>
        <servlet>
            <servlet-name>myAppServlet</servlet-name>
            <servlet-class>com.example.loginapp.server.MyAppServiceImpl</servlet-class>
        </servlet>
        <servlet-mapping>
            <servlet-name>myAppServlet</servlet-name>
            <url-pattern>/loginapp/mapdata</url-pattern>
        </servlet-mapping>

    I've tried everything and it still seems to elude me. I even tried changing the deny from all directive on the WEB-INF folder to allow from all, and still it doesn't work. Maybe I'm missing something. Any help will be highly appreciated.
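    One detail stands out in the configuration: the only JkMount line forwards *.html to Tomcat, so the GWT-RPC POST to /loginapp/loginapp/login never reaches the worker; Apache tries to serve it from the Alias directory instead, which matches the "File does not exist: /usr/local/tomcat/webapps/loginapp/loginapp/login" error. A sketch of a broader mapping:

        # forward the whole context to Tomcat (simplest), or
        JkMount /loginapp/* loginapp

        # or keep static files on Apache and forward only the RPC paths
        JkMount /loginapp/*.html loginapp
        JkMount /loginapp/loginapp/* loginapp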


  • debian packages version convention

    - by JackWu
    I'm using Debian/Ubuntu, and I get confused about the versions of packages. When using the dpkg -l command, I get:

        ii  vim              2:7.3.429-2ubuntu2.1     Vi IMproved - enhanced vi editor
        ii  vim-common       2:7.3.429-2ubuntu2.1     Vi IMproved - Common files
        ii  vim-runtime      2:7.3.429-2ubuntu2.1     Vi IMproved - Runtime files
        ii  vim-tiny         2:7.3.429-2ubuntu2.1     Vi IMproved - enhanced vi editor - compact version
        ii  virt-what        1.11-1                   detect if we are running in a virtual machine
        ii  w3m              0.5.3-5ubuntu1           WWW browsable pager with excellent tables/frames support
        ii  watershed        6                        reduce superfluous executions of idempotent command
        ii  wget             1.13.4-2ubuntu1          retrieves files from the web
        ii  whiptail         0.52.11-2ubuntu10        Displays user-friendly dialog boxes from shell scripts
        ii  whoopsie         0.1.33                   Ubuntu crash database submission daemon
        ii  wimlib9          1.5.0-1~webupd8~precise  Library to extract, create, modify, and mount WIM files
        ii  wimtools         1.5.0-1~webupd8~precise  Tools to extract, create, modify, and mount WIM files
        ii  wireless-tools   30~pre9-5ubuntu2         Tools for manipulating Linux Wireless Extensions
        ii  wpasupplicant    0.7.3-6ubuntu2.1         client support for WPA and WPA2 (IEEE 802.11i)
        ii  x11-common       1:7.6+12ubuntu2          X Window System (X.Org) infrastructure
        ii  x11-utils        7.6+4ubuntu0.1           X11 utilities
        ii  xauth            1:1.0.6-1                X authentication utility
        ii  xbitmaps         1.1.1-1                  Base X bitmaps
        ii  xclip            0.12-1                   command line interface to X selections
        ii  xfonts-encodings 1:1.0.4-1ubuntu1         Encodings for X.Org fonts
        ii  xfonts-utils     1:7.6+1                  X Window System font utility programs
        ii  xkb-data         2.5-1ubuntu1.3           X Keyboard Extension (XKB) configuration data
        ii  xml-core         0.13                     XML infrastructure and XML catalog file support
        rc  xpdf             3.02-21build1            Portable Document Format (PDF) reader
        ii  xterm            271-1ubuntu2.1           X terminal emulator
        ii  xz-lzma          5.1.1alpha+20110809-3    XZ-format compression utilities - compatibility commands
        ii  xz-utils         5.1.1alpha+20110809-3    XZ-format compression utilities
        ii  zabbix-agent     1:1.8.11-1               network monitoring solution - agent
        ii  zlib1g           1:1.2.3.4.dfsg-3ubuntu4  compression library - runtime
        ii  zlib1g-dev       1:1.2.3.4.dfsg-3ubuntu4  compression library - development
        ii  zsh              4.3.17-1ubuntu1          shell with lots of features

    The third column is the version, but it all seems messed up in a way I can't understand; different packages use totally different naming schemes. Here are the major questions:

        1. Why do some versions contain "ubuntu" and others not?
        2. What do the special characters -, ~ and + mean?
        3. What are "alpha", "build" and "dfsg"? Can I just use such strings casually?
        4. vim and other packages have "2:" at the front; what does that mean?
        5. How does version comparison work, given that the schemes can be so different?

    Can anyone please explain this to me? Or where can I find an official document? Thanks in advance.
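    As a worked example while waiting for the long answer: a Debian version string decomposes as [epoch:]upstream_version[-debian_revision], so 2:7.3.429-2ubuntu2.1 is epoch 2, upstream 7.3.429, Debian revision 2, with ubuntu2.1 marking Ubuntu's own respin of that revision. dpkg can compare any two such strings directly:

        # epochs dominate everything after the colon
        dpkg --compare-versions 2:7.3.429-2ubuntu2.1 gt 1:9.9-1 && echo "left is newer"

        # tilde sorts *before* the empty string, so ~pre9 is earlier than the release
        dpkg --compare-versions 30~pre9-5ubuntu2 lt 30-1 && echo "pre-release is older"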


  • Can't connect to svnserve on localhost - connection actively refused

    - by RMorrisey
    When I try to connect using Tortoise to my SVN server using svn://localhost/, Tortoise tells me: "Can't connect to host 'localhost'. No connection could be made because the target machine actively refused it." How can I fix this?

    I am trying to set up a Subversion server on my local PC for personal use. I am running Windows Vista, with SlikSVN and TortoiseSVN installed. I previously had everything working correctly, but I found that I couldn't merge(!), apparently due to a version mismatch between the SVN client and server. Anyway... I now have the following setup. I created a repository using svnadmin create; it resides at C:\svnGrove.

    C:\svnGrove\conf\svnserve.conf (# comments omitted):

        [general]
        anon-access=read
        auth-access=write
        password-db=passwd
        #authz-db=authz
        realm=svnGrove

    C:\svnGrove\conf\passwd:

        [users]
        myname=mypass

    My Subversion Server service is pointed to:

        C:\Program Files\SlikSvn\bin\svnserve.exe --service -r C:\svnGrove

    It shows the TCP/IP service as a dependency. I have also tried running svnserve from the command line, with similar results. The below is provided by the 'about' option in TortoiseSVN:

        TortoiseSVN 1.6.10, Build 19898 - 32 Bit , 2010/07/16 15:46:08
        Subversion 1.6.12, apr 1.3.8 apr-utils 1.3.9 neon 0.29.3 OpenSSL 0.9.8o 01 Jun 2010 zlib 1.2.3

    The following is from svn --version on the command line (not sure why it says CollabNet; CollabNet was the previous SVN binary that I had set up, and the uninstaller failed to remove everything gracefully):

        svn, version 1.6.12 (SlikSvn/1.6.12) WIN32
          compiled Jun 22 2010, 20:45:29
        Copyright (C) 2000-2009 CollabNet.
        Subversion is open source software, see http://subversion.tigris.org/
        This product includes software developed by CollabNet (http://www.Collab.Net/).

        The following repository access (RA) modules are available:
        * ra_neon : Module for accessing a repository via WebDAV protocol using Neon.
          - handles 'http' scheme
          - handles 'https' scheme
        * ra_svn : Module for accessing a repository using the svn network protocol.
          - with Cyrus SASL authentication
          - handles 'svn' scheme
        * ra_local : Module for accessing a repository on local disk.
          - handles 'file' scheme
        * ra_serf : Module for accessing a repository via WebDAV protocol using serf.
          - handles 'http' scheme
          - handles 'https' scheme

    I disabled my Windows Firewall and CA Internet Security, without success in resolving the issue.

    Edit: The old version of svnserve was still set up as a service after the uninstall, pointed to C:\Program Files\Subversion\svn-win32-1.4.6\bin. I edited the registry key for the service to point to the new path (shown above). Whether I run svnserve as a service or using -d, I do not see an entry for that port number in the listing generated by netstat -anp tcp.
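    Given that netstat shows nothing listening, the service is probably failing silently at start-up, a classic symptom of a hand-edited ImagePath being wrong or mis-quoted. A quick sequence for confirming this from a command prompt (the service name here is an assumption; use whatever name appears in services.msc):

        sc qc svnserve          & rem inspect the actual command line the service runs
        sc query svnserve       & rem check it reports RUNNING
        netstat -an | findstr 3690

        rem bypass the service entirely and run in the foreground:
        "C:\Program Files\SlikSvn\bin\svnserve.exe" -d -r C:\svnGrove --listen-port 3690

    If the foreground run listens on 3690 and Tortoise connects, the problem is confined to the service definition; recreating it with sc delete / sc create is often cleaner than editing the registry by hand.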


  • Hanging page loads every n loads - SOLVED

    - by Christian
    Hi guys, I recently moved my site to a new server (Apache 2, PHP5, MySQL5). The site is an Invision-based forum. Every few posts/topics it just hangs. The data has been written, because if you stop and reload, the post/thread is there. I thought it was a write issue initially, but no: the data is written but the page load never completes, and it doesn't leave the page where the data has been input. What's the best way to troubleshoot this issue? The only thing I have done recently is reduce my MySQL timeouts, but I can't see that being an issue as the values are still big enough, and there are no mentions of timeouts in the MySQL log. (For the record, there is nothing in PHP's error log either.) Thanks in advance!

    EDIT: I checked my server-status. It all looked OK, but I have a suspicion I was hitting my ServerLimit, so I doubled that. Also enabled my KeepAlives. Will keep an eye on it.

    EDIT 2: It's now been a few days and this is still occurring. I have more info though: Apache is throwing segfaults, but enabling core dumps does not produce them. I have tried disabling the modules in Apache but it just stops things from working. I fear it may actually be DNS-related. If I watch Live Headers in Firefox, absolutely nothing happens during this 'hanging' period. After that, the responses come back fairly promptly.

    UPDATE (05/04): I built the latest versions of Apache and PHP from source; no luck. I then removed those and used the remi repo to update all my packages to the latest stable. The segfaults seem to have stopped, but the hanging is continuing. Configs are at:

        www.skylinesaustralia.com/php.ini
        www.skylinesaustralia.com/my.cnf
        www.skylinesaustralia.com/httpd.conf

    UPDATE - SOLVED! - The issue was having a gigantic query cache size in MySQL. It was 2GB; changing it to 64M sorted it. Thanks for all the help everybody, much appreciated!!
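    For anyone landing here with the same symptoms, the fix amounts to one line in my.cnf (64M is the value that worked here; an oversized query cache makes every cache prune stall the server under write-heavy load):

        [mysqld]
        query_cache_size = 64M

    followed by a MySQL restart.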


  • Problems set-up Single Sign-On using Kerberos authentication

    - by user1124133
    I need my Ruby on Rails application to authenticate against Active Directory using Kerberos. Some technical information: I am using Apache with mod_auth_kerb installed. In httpd.conf I added:

        LoadModule auth_kerb_module modules/mod_auth_kerb.so

    In /etc/krb5.conf I added the following configuration:

        [logging]
        default = FILE:/var/log/krb5libs.log
        kdc = FILE:/var/log/krb5kdc.log
        admin_server = FILE:/var/log/kadmind.log

        [libdefaults]
        default_realm = EU.ORG.COM
        dns_lookup_realm = false
        dns_lookup_kdc = false
        ticket_lifetime = 24h
        forwardable = yes

        [realms]
        EU.ORG.COM = {
          kdc = eudc05.eu.org.com:88
          admin_server = eudc05.eu.org.com:749
          default_domain = eu.org.com
        }

        [domain_realm]
        .eu.org.com = EU.ORG.COM
        eu.org.com = EU.ORG.COM

        [appdefaults]
        pam = {
          debug = true
          ticket_lifetime = 36000
          renew_lifetime = 36000
          forwardable = true
          krb4_convert = false
        }

    When I test kinit validuser and enter the password, authentication is successful. klist returns:

        Ticket cache: FILE:/tmp/krb5cc_600
        Default principal: [email protected]

        Valid starting     Expires            Service principal
        02/08/13 13:46:40  02/08/13 23:46:47  krbtgt/[email protected]
                renew until 02/09/13 13:46:40

        Kerberos 4 ticket cache: /tmp/tkt600
        klist: You have no tickets cached

    In the application's Apache configuration I added:

        <IfModule mod_auth_kerb.c>
          <Location /winlogin>
            AuthType Kerberos
            AuthName "Kerberos Loginsss"
            KrbMethodNegotiate off
            KrbAuthoritative on
            KrbVerifyKDC off
            KrbAuthRealms EU.ORG.COM
            Krb5Keytab /home/crmdata/httpd/apache.keytab
            KrbSaveCredentials off
            Require valid-user
          </Location>
        </IfModule>

    I restarted Apache. Now some tests.

    When I try to access the application from Win7, I get a pop-up message box with the text: "Warning: This server is requesting that your username and password be sent in an insecure manner (basic authentication without a secure connection)". When I enter valid credentials, my application opens successfully and all works fine. Questions: is it OK for users to get such a pop-up? If I use NTLM authentication there is no such pop-up. I checked IE's Internet Options, and 'Enable Integrated Windows Authentication' is checked. Why does IE try to send a username and password to the Apache application? As I understand it, Windows itself should authenticate via Active Directory using the Kerberos protocol.

    When I try to access the application from Win7 and enter incorrect credentials in the pop-up box, the application says authentication failed (this is OK). In the Apache error log I see:

        [error] [client 192.168.56.1] krb5_get_init_creds_password() failed: Client not found in Kerberos database

    But then I get no further chance to enter valid credentials; only when I restart IE do I get the pop-up box again. What could be incorrect or missing in my Kerberos setup? I read in a blog post that something probably needs to be done on the Active Directory side as well. What exactly?
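    The pop-up itself is the clue: with KrbMethodNegotiate off, mod_auth_kerb never offers SPNEGO, so the browser can only fall back to Basic authentication and ship the password to Apache, which is exactly what the warning describes. A sketch of the changes for true single sign-on (keytab path kept from above; the keytab must contain an HTTP/hostname service principal matching the URL, which is the Active Directory-side work, typically done with ktpass or msktutil):

        <Location /winlogin>
          AuthType Kerberos
          AuthName "Kerberos Login"
          KrbMethodNegotiate on
          KrbMethodK5Passwd off        # no Basic fallback; remove if you want one
          KrbAuthRealms EU.ORG.COM
          Krb5Keytab /home/crmdata/httpd/apache.keytab
          Require valid-user
        </Location>

    IE additionally only attempts Negotiate for sites it places in the Local intranet zone, so the hostname may need adding there.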


  • Drupal on an NFS share has terrible performance

    - by Marcus
    We have a setup where a Drupal 7 site runs as follows: a VMware ESXi 4.1 host server running a web VM and an NFS VM. The web VM is using Apache and mod_php. The site is still in development, thus we have to turn off all forms of caching due to the frequently-updated files. Each page request takes around 15-20 seconds to complete. Profiling the PHP code shows that the vast majority of time (normally over 90%) is taken by all the is_dir() and is_file() function calls that load up the modules. I've increased PHP's realpath cache size to several megs, and an strace shows that the lstat calls then drop from over 200 to around 6, and stat() decreases a bit (around 600 calls). However, while this has shaved off quite a bit of time, I am simply unable to break past the 10-second-per-request barrier. Is there a way to get better performance out of this setup that doesn't involve caching?

    Configs and stats:

    VMs:

        web - CentOS 6 64bit, 2.5GB RAM, normal CPU/HD prioritisation
        nfs - CentOS 6 64bit, 2GB RAM, normal CPU priority, high HD priority

    PHP: 32M realpath cache size (it's this high for testing purposes)

    NFS:

        ~]# egrep -v '#|^$' /etc/nfsmount.conf
        [ NFSMount_Global_Options ]
        Defaultvers=4
        Ac=False
        Rsize=32k
        Wsize=32k
        Bsize=32k

    Reading speeds via NFS are not an issue; a dd of a 100M test file using 32k blocks returns:

        3200+0 records in
        3200+0 records out
        104857600 bytes (105 MB) copied, 1.84984 s, 56.7 MB/s

        real    0m1.857s
        user    0m0.007s
        sys     0m0.330s

    strace on an Apache process with an empty realpath cache:

        % time     seconds  usecs/call     calls    errors syscall
        ------ ----------- ----------- --------- --------- ----------------
         50.78    1.157452         337      3434        28 stat
         32.58    0.742656         628      1182       425 open
          9.29    0.211788         762       278         1 lstat
          3.17    0.072322           0    237865           write
          2.45    0.055839         490       114        13 access
          0.45    0.010262          43       237           brk
          0.34    0.007725          10       811        74 read
          0.28    0.006340           9       679           fstat
          0.22    0.005069          18       281           poll
          0.20    0.004533           6       698           getdents
          0.09    0.001960          10       190           mmap
          0.05    0.001065          14        74           accept4
          0.04    0.001000         333         3           chdir
          0.03    0.000750           4       190           munmap
          0.01    0.000339           0       836           close
          0.01    0.000247           3        75           writev
          0.00    0.000068           0       611           fcntl
          0.00    0.000063           1        77           shutdown
          0.00    0.000000           0         1           lseek
          0.00    0.000000           0         5           rt_sigaction
          0.00    0.000000           0         1           rt_sigprocmask
          0.00    0.000000           0         3           setitimer
          0.00    0.000000           0         5           socket
          0.00    0.000000           0         5         5 connect
          0.00    0.000000           0        74           getsockname
          0.00    0.000000           0        15           setsockopt
          0.00    0.000000           0         5           getcwd
          0.00    0.000000           0         1           futex
        ------ ----------- ----------- --------- --------- ----------------

    strace after realpaths are cached:

        % time     seconds  usecs/call     calls    errors syscall
        ------ ----------- ----------- --------- --------- ----------------
         60.14    1.371006         484      2831        28 stat
         31.79    0.724705         627      1155       425 open
          3.53    0.080354           0    237865           write
          2.65    0.060433         530       114        13 access
          0.43    0.009913          99       100           brk
          0.38    0.008730          11       804        74 read
          0.35    0.007910          12       675           fstat
          0.30    0.006775          10       654           getdents
          0.13    0.003065          11       281           poll
          0.09    0.002000         333         6         1 lstat
          0.07    0.001545           2       807           close
          0.05    0.001063          14        74           accept4
          0.04    0.001000           6       179           mmap
          0.02    0.000404           2       179           munmap
          0.01    0.000271           4        75           writev
          0.01    0.000212           0       611           fcntl
          0.01    0.000129           2        77           shutdown
          0.00    0.000022           0        74           getsockname
          0.00    0.000000           0         1           lseek
          0.00    0.000000           0         5           rt_sigaction
          0.00    0.000000           0         1           rt_sigprocmask
          0.00    0.000000           0         3           setitimer
          0.00    0.000000           0         3           socket
          0.00    0.000000           0         3         3 connect
          0.00    0.000000           0        15           setsockopt
          0.00    0.000000           0         5           getcwd
          0.00    0.000000           0         3           chdir
        ------ ----------- ----------- --------- --------- ----------------

    Mount:

        nfs.xxx.xxx.xxx:/path/to/website/files on /path/to/website/files type nfs
        (rw,hard,intr,noac,vers=4,addr=xx.xx.xx.xx,clientaddr=xx.xx.xx.xx)

    Any help is, naturally, appreciated.
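    One very likely culprit is already visible in the configuration: Ac=False in nfsmount.conf (showing up as noac in the mount options) disables the NFS attribute cache, so every one of those thousands of stat()/lstat()/open() calls becomes a round trip to the NFS VM. Re-enabling attribute caching, even with short lifetimes, usually transforms exactly this kind of workload. A sketch (the timeouts are assumptions to tune; updated files can look stale on the web VM for up to that many seconds):

        [ NFSMount_Global_Options ]
        Defaultvers=4
        # Ac=False   <- removed; let attributes cache
        acregmin=3
        acregmax=30
        acdirmin=10
        acdirmax=30
        Rsize=32k
        Wsize=32k
        Bsize=32k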


  • Poor performance of single-processor 32bit Windows XP compared to SMP in VBA+Excel

    - by Adam Ryczkowski
    Welcome! On many computers I experienced poor performance of 32-bit guests running on a 64-bit Linux host (I used only the Debian family). At last I managed to collect benchmark data. I made the benchmark by running a custom VBA macro (which we use in our company) that generates a 284-page-long Word document full of Excel pie charts, tables and comments. The macro is run as a single task (excluding the standard services) on a set of identically configured Windows XP 32-bit systems. I measured the time (in sec.) needed to perform the test.

    The computer (i.e. my notebook Asus P53E) supports both VT-d extensions and native Windows XP. It has a 2-core processor, each core is hyperthreaded, so in total we have 4 mostly independent execution units. I use the latest VirtualBox 4.2 and VMWare Workstation 9.0 for Linux, installed together on the same host (running Mint 13 Maya) but never run simultaneously. The results (in column Time) are no less accurate than ±10%. Here are the results:

        +---------------+-------------+------------------------------------------------------+---------+------------+----------------+------+
        | Host software | # processor | Windows kernel                                       | IO APIC | VT-x/AMD-V | 2D Video Accel | Time |
        +---------------+-------------+------------------------------------------------------+---------+------------+----------------+------+
        | VirtualBox    | 1           | Advanced Configuration and Power Interface (ACPI) PC | 0       | 1          | 0              | 1139 |
        | VirtualBox    | 1           | Advanced Configuration and Power Interface (ACPI) PC | 0       | 1          | 1              | 1050 |
        | VirtualBox    | 1           | Advanced Configuration and Power Interface (ACPI) PC | 0       | 0          | 1              | 1644 |
        | VirtualBox    | 4           | ACPI Multiprocessor PC                               | 1       | 1          | 1              | 6809 |
        | VMWare        | 1           | ACPI Uniprocessor PC                                 |         | 1          | 1              | 1175 |
        | VMWare        | 4           | ACPI Multiprocessor PC                               |         | 1          | 1              | 3412 |
        | Native        | 4           | ACPI Multiprocessor PC                               |         |            |                | 1693 |
        | Native        | 1           | Advanced Configuration and Power Interface (ACPI) PC |         |            |                | 1170 |
        +---------------+-------------+------------------------------------------------------+---------+------------+----------------+------+

    Here are the striking conclusions:

        1. Although I've read in the VirtualBox fora about abysmal performance with 32-bit guests on 64-bit hosts, VMWare also has problems compared to the native run, while still being twice as fast(!) as VBox.
        2. Although VBA is inherently single-threaded, the Excel calculations, which take much more than half of the total computation time, supposedly aren't. So one would expect some speed gain when running on 2+ cores ("+" for hyperthreading). What we see is a speed loss, and quite a big one too.
        3. For VirtualBox the VT-d extension isn't a big deal.

    Can anyone shed some light on why the single-threaded Windows kernel is so much faster than the SMP one?


  • How to configure ASP.NET MVC 3 on IIS 6 (Windows 2003 R2)

    - by Nedcode
    I am getting 403 Directory Listing Denied for the root and 404 for an action that I know should exist.

    Background: I built and deployed an ASP.NET MVC 2 application a long time ago. Later I upgraded it to MVC 3 and it is still working with no configuration changes. Setting it up on a Windows 2003 R2 (Standard) server was initially a pain, but after a couple of days (yes, days) of struggling it started working. Now I have to do the same with the same application on a different server (2003 R2 Standard again) on a different network.

        - .NET 4 is installed and allowed; ASP.NET MVC 3 is also installed.
        - By default IIS is set to use .NET 4. I verified that the aspnet_isapi.dll used in the application extensions is from the version 4.0.30319 .NET assemblies folder.
        - I also added the wildcard mapping to aspnet_isapi.dll and unchecked "verify file exists".
        - Under Directory Security in Authentication Methods I have disabled anonymous access and enabled Integrated Windows authentication (same as on the server where it works).
        - I have copied the same web.config with <authentication mode="Windows" /> and <authorization><deny users="?" /></authorization>.
        - I have set Read & Execute, List Folder Contents, and Read for the NetworkService account (under which the app pool is running). Also I have set the same for the Network account, IIS_WPG, ASPNET and IUSR_MachineName.
        - I do not have an EnableExtensionlessUrls value, but even if I create it and set it to true or false it does not help.
        - I also tried http://haacked.com/archive/2010/12/22/asp-net-mvc-3-extensionless-urls-on-iis-6.aspx and it did not help.

    But I kept getting 403 Directory Listing Denied for the root and 404 for an action that I know should exist. Web Platform Installer was then used to re-install and possibly update .NET, ASP.NET etc. I then noticed IIS was reset to defaults, so I added the wildcard mapping again. No luck, still 403. I exported configuration files from the working server's setup and created a new default app pool and new default website using those configurations. Still I get 403 Directory Listing Denied for / and 404 for any action I try.
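    For what it's worth, EnableExtensionlessUrls is a machine-wide registry value rather than a per-site setting, and IIS has to be restarted after changing it. A sketch of setting it from a command prompt (the key path is from memory for .NET 4 on 32-bit Windows; verify it on your machine before relying on it):

        reg add "HKLM\SOFTWARE\Microsoft\ASP.NET\4.0.30319.0" /v EnableExtensionlessUrls /t REG_DWORD /d 1 /f
        iisreset

    That said, with a working wildcard mapping this value should not be needed at all, which points back at the wildcard map and the app pool's .NET version as the things to re-verify after the Web Platform Installer reset.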


  • how to setup kismet.conf on Ubuntu

    - by Registered User
    I installed Kismet on my Ubuntu 10.04 machine with:

        apt-get install kismet

    Everything seems to work fine, but when I launch it I see the following error:

        kismet
        Launching kismet_server: //usr/bin/kismet_server
        Suid priv-dropping disabled. This may not be secure.
        No specific sources given to be enabled, all will be enabled.
        Non-RFMon VAPs will be destroyed on multi-vap interfaces (ie, madwifi-ng)
        Enabling channel hopping.
        Enabling channel splitting.
        NOTICE: Disabling channel hopping, no enabled sources are able to change channel.
        Source 0 (addme): Opening none source interface none...
        FATAL: Please configure at least one packet source. Kismet will not function if no packet sources are defined in kismet.conf or on the command line. Please read the README for more information about configuring Kismet.
        Kismet exiting.
        Done.

    I followed this guide: http://www.ubuntugeek.com/kismet-an-802-11-wireless-network-detector-sniffer-and-intrusion-detection-system.html#more-1776. However, in kismet.conf I am not clear about the following line and what I should change it to:

        source=none,none,addme

    lspci -vnn shows:

        0c:00.0 Network controller [0280]: Broadcom Corporation BCM4312 802.11b/g [14e4:4315] (rev 01)
            Subsystem: Dell Device [1028:000c]
            Flags: bus master, fast devsel, latency 0, IRQ 17
            Memory at f69fc000 (64-bit, non-prefetchable) [size=16K]
            Capabilities: [40] Power Management version 3
            Capabilities: [58] Vendor Specific Information <?>
            Capabilities: [e8] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable-
            Capabilities: [d0] Express Endpoint, MSI 00
            Capabilities: [100] Advanced Error Reporting <?>
            Capabilities: [13c] Virtual Channel <?>
            Capabilities: [160] Device Serial Number
            Capabilities: [16c] Power Budgeting <?>
            Kernel driver in use: wl
            Kernel modules: wl, ssb

    and iwconfig shows:

        lo        no wireless extensions.
        eth0      no wireless extensions.
        eth1      IEEE 802.11bg  ESSID:"WIKUCD"
                  Mode:Managed  Frequency:2.462 GHz  Access Point: <00:43:92:21:H5:09>
                  Bit Rate=11 Mb/s   Tx-Power:24 dBm
                  Retry min limit:7   RTS thr:off   Fragment thr:off
                  Encryption key:off
                  Power Management mode:All packets received
                  Link Quality=1/5  Signal level=-81 dBm  Noise level=-90 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:169  Invalid misc:0   Missed beacon:0

    So what should I be putting in place of source=none,none,addme given the output above?
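    A hedged pointer rather than a definitive answer: the source= line is type,interface,name, and the wireless interface here is eth1. The catch is the driver: Broadcom's proprietary wl cannot be put into rfmon (monitor) mode, so no Kismet source type will work with it as-is. The usual route is to switch the BCM4312 over to the open b43/bcm43xx driver and then use the matching source type, something like:

        # /etc/kismet/kismet.conf -- an assumption to verify against Kismet's README;
        # requires the open b43/bcm43xx driver, not Broadcom's wl
        source=bcm43xx,eth1,broadcom

    If Kismet rejects the source type, the supported-types list in its README is the authority; the interface name and the rfmon limitation of wl are the dependable parts of this answer.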


  • XCP Project Kronos syslog error: "irq ... : nobody cared" on Dom0 host

    - by Vlad Fedin
    One of our production clusters driven by XCP suddenly went unresponsive. After a restart and some investigation we found the following in the dom0 machine's syslog:

        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659040] irq 339: nobody cared (try booting with the "irqpoll" option)
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659058] Pid: 0, comm: swapper/3 Tainted: G C O 3.2.0-24-generic #37-Ubuntu
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659060] Call Trace:
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659062] <IRQ> [<ffffffff810db37d>] __report_bad_irq+0x3d/0xe0
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659071] [<ffffffff810db605>] note_interrupt+0x135/0x190
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659074] [<ffffffff810d8e69>] handle_irq_event_percpu+0xa9/0x220
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659078] [<ffffffff8130ff3b>] ? radix_tree_lookup+0xb/0x10
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659081] [<ffffffff810d9031>] handle_irq_event+0x51/0x80
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659084] [<ffffffff810dc187>] handle_edge_irq+0x87/0x140
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659089] [<ffffffff813a8829>] __xen_evtchn_do_upcall+0x199/0x250
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659092] [<ffffffff813aa96f>] xen_evtchn_do_upcall+0x2f/0x50
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659096] [<ffffffff81666d3e>] xen_do_hypervisor_callback+0x1e/0x30
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659097] <EOI> [<ffffffff810013aa>] ? hypercall_page+0x3aa/0x1000
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659104] [<ffffffff810013aa>] ? hypercall_page+0x3aa/0x1000
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659107] [<ffffffff8100a1d0>] ? xen_safe_halt+0x10/0x20
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659110] [<fff   (trace truncated in the log)

    IRQ 339 in cat /proc/interrupts:

        339: ... xen-pirq-msi-x eth0

    where eth0 is the hardware NIC. While the host machine seems to hang, the guest machines continue to work, so our tiny internal monitoring on one of the virtual hosts logged something like this:

        [2012-10-26 20:31:51] [OK......] 200 OK : 113159149 ns
        [2012-10-26 20:32:40] [DISASTER] 500 Can't connect to [hostname]:80 (No route to host) : 47763284432 ns
        ...
        [2012-10-26 20:34:40] [DISASTER] 500 Can't connect to [hostname]:80 (No route to host) : 46894835070 ns
        [2012-10-26 20:34:57] [DISASTER] 500 Can't connect to [hostname]:80 (Bad hostname) : 16821741955 ns
        ...
        [2012-10-26 20:38:18] [DISASTER] 500 Can't connect to [hostname]:80 (Bad hostname) : 20103298289 ns
        [2012-10-26 20:38:37] [DISASTER] 500 Can't connect to [hostname]:80 (Bad hostname) : 17895754943 ns

    Host and guest OS: Ubuntu 12.04 LTS. The NIC:

        05:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
            Subsystem: ASUSTeK Computer Inc. Device 8369
            Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
            Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx+
            Latency: 0, Cache Line Size: 64 bytes
            Interrupt: pin A routed to IRQ 17
            Region 0: Memory at fe500000 (32-bit, non-prefetchable) [size=128K]
            Region 2: I/O ports at e000 [size=32]
            Region 3: Memory at fe520000 (32-bit, non-prefetchable) [size=16K]
            Capabilities: <access denied>
            Kernel driver in use: e1000e
            Kernel modules: e1000e

    Any hints on how to debug this?

    Read the article

  • Error sending email to alias with Postfix

    - by Burning the Codeigniter
    I'm on Ubuntu 11.04 64-bit. I'm trying to set up Postfix on my VPS, which has been configured, but when I send an email to an alias, e.g. [email protected], it should be delivered to [email protected]. When I sent such an email from my GMail account, I got this returned:

        Delivery to the following recipient failed permanently:
        [email protected]

        Technical details of permanent failure:
        Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 550 550 #5.1.0 Address rejected [email protected] (state 14).

        ----- Original message -----
        DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
          h=mime-version:date:message-id:subject:from:to:content-type;
          bh=R1WtjVRWywfkWCR2g4QKbSjAfUaU9DAAMKbg9UAWqvs=;
          b=FiSfdhEaV4pEq/76ENlH4tvOgm35Ow3ulRg06kDYrIQTaDf3eOEgfSEgH25PjZuAj/
          7Hg1CL++o6Rt/tl80ZiR2AWekhA0zIn2JkqE7KssMG7WbBmMmbf8V9KDo2jOw+mZv+C/
          KDKsQ65AudBZ/NYLDDpTT7MkKf8DzqeGCKj9MAct6sHDoC0wCciXYxNfTf+MKxrZvRHQ
          oICTkH5LOugKW9wEjPF2AoO8X0qgYmTLYeSUtXxu46VeNKRBGmdRkkpPOoJlQN9ank7i
          SW6kU6M9bk2bYOgKwV/YPsaantmYlu1XdmYx+kWeJkNJAyYOfXfZZ8WUJhbbFFD9bZCi
          m/hw==
        MIME-Version: 1.0
        Received: by 10.101.3.5 with SMTP id f5mr783908ani.86.1334247306547; Thu, 12 Apr 2012 09:15:06 -0700 (PDT)
        Received: by 10.236.73.136 with HTTP; Thu, 12 Apr 2012 09:15:06 -0700 (PDT)
        Date: Thu, 12 Apr 2012 17:15:06 +0100
        Message-ID: <CAN+9S2aB=xjiDxVZx3qYZoBMFD4XuadUyR_3OYWaxw1ecrZmOQ@mail.gmail.com>
        Subject: Test Email
        From: My Name <[email protected]>
        To: [email protected]
        Content-Type: multipart/alternative; boundary=001636c597eabfd21504bd7da8fd

    I don't understand why it isn't working: my aliases are set up correctly, and I see no error messages in /var/log/mail.log or any other mail logs, which makes it harder to debug. This is my Postfix configuration (postconf -n):

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_dot_mydomain = no
        biff = no
        config_directory = /etc/postfix
        inet_interfaces = all
        mailbox_command = procmail -a "$EXTENSION"
        mailbox_size_limit = 0
        mydestination = $mydomain, $myhostname, localhost, localhost.localdomain, localhost
        mydomain = domain.com
        myhostname = localhost
        mynetworks = 192.168.1.0/24 127.0.0.0/8
        readme_directory = no
        recipient_delimiter = +
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes

    Does anyone know how to solve this specific issue?
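
    For reference, a hedged sketch of the usual first checks for a 550 "Address rejected" on an alias, assuming the alias lives in /etc/aliases as alias_maps above suggests; sales and realuser are placeholder names:

        # /etc/aliases entries take the form:
        #   sales:    realuser
        # After editing, rebuild the hash database and reload Postfix:
        newaliases
        postfix reload
        # Query the alias database directly, without sending real mail:
        postalias -q sales hash:/etc/aliases

    It is also worth noting that local aliases are only consulted for domains Postfix considers its own; with myhostname = localhost, checking that domain.com is really covered by the $mydomain entry in mydestination would be a sensible next step.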

    Read the article

  • "log on as a batch job" user rights removed by what GPO?

    - by LarsH
    I am not much of a server administrator, but get my feet wet when I have to. Right now I'm running some COTS software on a Windows 2008 Server machine. The software installer creates a few user accounts for running its processes, and then gives those users the right to "log on as a batch job". Every so often (e.g. yesterday at 2:52pm and this morning at 7:50am), those rights disappear. The software then stops working. I can verify that the user rights are gone by using:

        secedit /export /cfg e:\temp\uraExp.inf /areas USER_RIGHTS

    and I have a script that does this every 30 seconds and logs the results with a timestamp, so I know when the rights disappear. What I see from the export is that in the "good" state, i.e. after I install the software and it's working correctly, the line for SeBatchLogonRight in the secedit export includes the user accounts created by the software. But every few hours (sometimes more often), those user accounts are removed from that line. The same thing can be seen in the GUI tool under Local Security Policy > Security Settings > Local Policies > User Rights Assignment > Log on as a batch job: in the "good" state, that policy includes the needed user accounts, and in the bad state it does not.

    Based on the above-mentioned logging script and the timestamps at which the user rights are being removed, I can see clearly that some GPOs are causing the change. The GPO Operational log shows GPOs being processed at exactly the right times, e.g.:

        Starting Registry Extension Processing.
        List of applicable GPOs: (Changes were detected.)
        Local Group Policy

    I have forced GPO processing on demand using gpupdate /force, and was able to verify that this caused the user rights to be removed. We have looked over local group policies till our eyes are crossed, trying to figure out which one might be stripping the right to "log on as a batch job". We have not configured any local group policies on this machine, that we know of; so is there a default local group policy that might typically do such a thing? Are there typical domain policies that would do this? I have been working with our IT staff colleagues to troubleshoot the problem, but none of them are really GPO experts; they wear many hats, and they do what they need to do in order to keep most things running. Any suggestions would be greatly appreciated!
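
    For reference, a hedged sketch of how the culprit GPO might be caught in the act, assuming an elevated prompt on the affected server; the report path is a placeholder:

        REM Enable auditing of user-right changes; removals of SeBatchLogonRight
        REM then appear in the Security log as Event ID 4705 ("A user right was
        REM removed"), with 4704 for assignments:
        auditpol /set /subcategory:"Authorization Policy Change" /success:enable /failure:enable

        REM Generate a Resultant Set of Policy report showing which GPO is
        REM winning each User Rights Assignment setting:
        gpresult /h C:\temp\rsop.html /f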

    Read the article

  • Getting error while installing mod_wsgi in CentOS

    - by user825904
    I have reinstalled Python with --enable-shared.

        [root@master mod_wsgi-3.4]# make clean
        rm -rf .libs
        rm -f mod_wsgi.o mod_wsgi.la mod_wsgi.lo mod_wsgi.slo mod_wsgi.loT
        rm -f config.log config.status
        rm -rf autom4te.cache
        [root@master mod_wsgi-3.4]# LD_RUN_PATH=/usr/local/lib make
        apxs -c -I/usr/local/include/python2.7 -DNDEBUG mod_wsgi.c -L/usr/local/lib -L/usr/local/lib/python2.7/config -lpython2.7 -lpthread -ldl -lutil -lm
        /usr/lib64/apr-1/build/libtool --silent --mode=compile gcc -prefer-pic -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -Wformat-security -fno-strict-aliasing -DLINUX=2 -D_REENTRANT -D_GNU_SOURCE -pthread -I/usr/include/httpd -I/usr/include/apr-1 -I/usr/include/apr-1 -I/usr/local/include/python2.7 -DNDEBUG -c -o mod_wsgi.lo mod_wsgi.c && touch mod_wsgi.slo
        In file included from /usr/local/include/python2.7/Python.h:8, from mod_wsgi.c:142:
        /usr/local/include/python2.7/pyconfig.h:1161:1: warning: "_POSIX_C_SOURCE" redefined
        In file included from /usr/include/sys/types.h:26, from /usr/include/apr-1/apr-x86_64.h:127, from /usr/include/apr-1/apr.h:19, from /usr/include/httpd/ap_config.h:25, from /usr/include/httpd/httpd.h:43, from mod_wsgi.c:34:
        /usr/include/features.h:162:1: warning: this is the location of the previous definition
        In file included from /usr/local/include/python2.7/Python.h:8, from mod_wsgi.c:142:
        /usr/local/include/python2.7/pyconfig.h:1183:1: warning: "_XOPEN_SOURCE" redefined
        In file included from /usr/include/sys/types.h:26, from /usr/include/apr-1/apr-x86_64.h:127, from /usr/include/apr-1/apr.h:19, from /usr/include/httpd/ap_config.h:25, from /usr/include/httpd/httpd.h:43, from mod_wsgi.c:34:
        /usr/include/features.h:164:1: warning: this is the location of the previous definition
        mod_wsgi.c: In function ‘wsgi_server_group’:
        mod_wsgi.c:991: warning: unused variable ‘value’
        mod_wsgi.c: In function ‘Log_isatty’:
        mod_wsgi.c:1665: warning: unused variable ‘result’
        mod_wsgi.c: In function ‘Log_writelines’:
        mod_wsgi.c:1802: warning: unused variable ‘msg’
        mod_wsgi.c: In function ‘Adapter_output’:
        mod_wsgi.c:3087: warning: unused variable ‘n’
        mod_wsgi.c: In function ‘Adapter_file_wrapper’:
        mod_wsgi.c:4138: warning: unused variable ‘result’
        mod_wsgi.c: In function ‘wsgi_python_term’:
        mod_wsgi.c:5850: warning: unused variable ‘tstate’
        mod_wsgi.c:5849: warning: unused variable ‘interp’
        mod_wsgi.c: In function ‘wsgi_python_child_init’:
        mod_wsgi.c:7050: warning: unused variable ‘l’
        mod_wsgi.c:6948: warning: unused variable ‘interp’
        mod_wsgi.c: In function ‘wsgi_add_import_script’:
        mod_wsgi.c:7701: warning: unused variable ‘error’
        mod_wsgi.c: In function ‘wsgi_add_handler_script’:
        mod_wsgi.c:8179: warning: unused variable ‘dconfig’
        mod_wsgi.c:8178: warning: unused variable ‘sconfig’
        mod_wsgi.c: In function ‘wsgi_hook_handler’:
        mod_wsgi.c:9375: warning: suggest parentheses around assignment used as truth value
        mod_wsgi.c:9377: warning: suggest parentheses around assignment used as truth value
        mod_wsgi.c:9379: warning: suggest parentheses around assignment used as truth value
        mod_wsgi.c:9383: warning: suggest parentheses around assignment used as truth value
        mod_wsgi.c:9403: warning: suggest parentheses around assignment used as truth value
        mod_wsgi.c:9405: warning: suggest parentheses around assignment used as truth value
        mod_wsgi.c:9408: warning: suggest parentheses around assignment used as truth value
        mod_wsgi.c: In function ‘wsgi_daemon_worker’:
        mod_wsgi.c:10819: warning: unused variable ‘duration’
        mod_wsgi.c:10818: warning: unused variable ‘start’
        mod_wsgi.c: In function ‘wsgi_hook_daemon_handler’:
        mod_wsgi.c:13172: warning: unused variable ‘i’
        mod_wsgi.c:13170: warning: unused variable ‘elts’
        mod_wsgi.c:13169: warning: unused variable ‘head’
        mod_wsgi.c: At top level:
        mod_wsgi.c:8142: warning: ‘wsgi_set_user_authoritative’ defined but not used
        mod_wsgi.c:15251: warning: ‘wsgi_hook_check_user_id’ defined but not used
        /usr/lib64/apr-1/build/libtool --silent --mode=link gcc -o mod_wsgi.la -rpath /usr/lib64/httpd/modules -module -avoid-version mod_wsgi.lo -L/usr/local/lib -L/usr/local/lib/python2.7/config -lpython2.7 -lpthread -ldl -lutil -lm
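
    Everything shown above is compiler warnings plus the final link command, so the build itself appears to have succeeded; the failure is likely at install or Apache-load time. For reference, a hedged sketch of follow-up checks, assuming libtool placed the module in .libs/ inside the build directory:

        # Confirm the module links against the shared libpython, which is what
        # --enable-shared plus LD_RUN_PATH are meant to achieve:
        ldd .libs/mod_wsgi.so | grep libpython
        # If libpython2.7.so.1.0 shows as "not found", register /usr/local/lib
        # with the dynamic loader and try again:
        echo /usr/local/lib > /etc/ld.so.conf.d/python27.conf
        ldconfig
        make install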

    Read the article

  • Start a software company offshore

    - by Mascarpone
    Hello everybody, I own a small, very young, EU-based (Italy) company, and among other things we sell IT solutions. I have a degree in applied mathematics, and I mainly deal with user interfaces, embedded systems, automation and web applications. You could say I'm an enlightened entrepreneur because I work only with open source software (OS, IDE, I release under BSD, ... everything is free as in freedom), I give high importance to post-sales service and customer satisfaction, plus I think I'm the best boss someone could desire (LOL), as I have Google in mind when I think about IT workers' rights. And the most beautiful thing is that, although everybody advised us not to use open source, we are quite profitable!!! (for the sixth trimester in a row).

    Now I offshore most of the work to an Indian company. I divide the work into modules and outsource the longer or more trivial ones. I spend a lot of time defining the specifications and leave the hard work to them. Using productivity bonuses, a lot of prototypes and third-party audits, I think my software has reached a very good quality level. I would like to start my own software development company, in order to improve control over the process and cut costs. Obviously I can't afford the cost of labor in the EU, so I thought about opening a company in Asia. What I need is:

    1) Cheap labor - I can afford to give productivity bonuses and higher-than-average wages and stay profitable only because labor is cheap.
    2) Many talents - I need a good level of tertiary education and a good number of graduates, so I can hire junior developers and train and teach them according to my needs and philosophies (e.g. an open source mindset).
    3) Good infrastructure - buildings, transport, internet, ... everything a company might need.

    I thought about 3 possible candidates:

    1) India - I already work with Indian people; I know they are reliable and speak good English. Big cities are too expensive, but maybe a small city like Lucknow (http://en.wikipedia.org/wiki/Lucknow) could suit my needs.
    2) China - They say it's cheaper than India, but every time I worked with a Chinese company the language was a big barrier. They work hard and are somewhat skilled and cheap, but maybe it's a risky path. Plus I feel a little uncomfortable with their lack of human rights.
    3) Philippines - Same as China: cheaper than India, but maybe less educated.

    Where do you think is the best place to start a software company? Any reading or books to recommend? Thank you very much.

    Read the article

< Previous Page | 330 331 332 333 334 335 336 337 338 339 340 341  | Next Page >