Search Results

Search found 22951 results on 919 pages for 'debug build'.


  • Stop squid caching 302 and 307 with deny_info

    - by 0xception
    TL;DR: 302, 307, and error pages are being cached. I need to force a refresh of the content.

    Long version: I've set up a very minimal Squid instance running on a gateway which shouldn't cache ANYTHING but needs to be used solely as a domain-based web filter. I'm using another application which redirects unauthenticated users to the proxy, which then uses the deny_info option to redirect any non-whitelisted request to the login page. After the user has authenticated, a firewall rule gets placed so they no longer get sent to the proxy. The problem is that when a user hits a website (xkcd.com) while unauthenticated, they get redirected to the proxy via the firewall:

        iptables -A unknown-user -t nat -p tcp --dport 80 -j REDIRECT --to-port 39135

    At this point Squid redirects the user to the login page using a 302 (I've also tried 307, and I've also made sure the headers are set to no-cache and/or no-store for Cache-Control and Pragma). Then, when the user logs into the system, they get a firewall rule which no longer directs them to the Squid proxy. But if they go to xkcd.com again, they will have the original redirection page cached and will once again get the login page. Any idea how to force these redirects NOT to be cached by the browser? Perhaps this is a problem with the browsers and not Squid, but I'm not sure how to get around it. Full Squid config below.

        #
        # Recommended minimum configuration:
        #
        acl manager proto cache_object
        acl localhost src 127.0.0.1/32 ::1
        acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
        acl localnet src 192.168.182.0/23  # RFC1918 possible internal network
        acl localnet src fc00::/7          # RFC 4193 local private network range
        acl localnet src fe80::/10         # RFC 4291 link-local (directly plugged) machines
        acl https port 443
        acl http port 80
        acl CONNECT method CONNECT

        #
        # Disable cache
        #
        cache deny all
        via off
        negative_ttl 0 seconds
        refresh_all_ims on
        #error_default_language en

        # Allow manager access only from localhost
        http_access allow manager localhost
        http_access deny manager

        # Deny access to anything other than http
        http_access deny !http

        # Deny CONNECT to other than secure SSL ports
        http_access deny CONNECT !https

        visible_hostname gate.ovatn.net

        # Disable memory pooling
        memory_pools off

        # Never use neigh cache objects for cgi-bin scripts
        hierarchy_stoplist cgi-bin ?

        #
        # URL rewrite test settings
        #
        #acl whitelist dstdomain "/etc/squid/domains-pre.lst"
        #url_rewrite_program /usr/lib/squid/redirector
        #url_rewrite_access allow !whitelist
        #url_rewrite_children 5 startup=0 idle=1 concurrency=0
        #http_access allow all

        #
        # Deny info error test
        #
        acl whitelist dstdomain "/etc/squid/domains-pre.lst"
        deny_info http://login.domain.com/ whitelist
        #deny_info ERR_ACCESS_DENIED whitelist
        http_access deny !whitelist
        http_access allow whitelist

        http_port 39135 transparent

        ## Debug values
        access_log /var/log/squid/access-pre.log
        cache_log /var/log/squid/cache-pre.log

        # Production values
        #access_log /dev/null
        #cache_log /dev/null

        # Set PID file
        pid_filename /var/run/gatekeeper-pre.pid

    SOLUTION: I believe I might have found a solution to this. After days and days of trying to figure it out, only through a random stumble did I find

        client_persistent_connections off
        server_persistent_connections off

    This did the trick. So it wasn't so much the cache as it was a single persistent connection messing things up. W000T!
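    For anyone verifying the same symptom, one generic check (not part of the original post; the gateway address is a placeholder) is to request a non-whitelisted site straight from the proxy port and inspect the redirect's caching headers:

        # Ask the proxy directly for a non-whitelisted host and look at the
        # 302/307 response headers (Cache-Control, Pragma, Location).
        curl -v -H "Host: xkcd.com" http://192.168.182.1:39135/ 2>&1 \
            | grep -iE "HTTP/|cache-control|pragma|location"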

  • Remote server security: handling compiler tools

    - by Gonzolas
    Hello! I was wondering whether to remove compiler tools (gcc, make, ...) from a remote production server, mainly for security purposes.

    Background: The server runs a web application on Linux. Consider Apache jailed. Otherwise, only OpenSSHd faces the public network. Of course there is no compiler stuff within the jail, so this is about the actual OS outside of any jails.

    Here's my personal PRO/CON list (regarding removal) so far:

    PRO: I had been reading some suggestions to remove compiler tools in order to inhibit custom building of trojans etc. from within the host if an attacker attains unprivileged user permissions.

    CON: I can't live without Perl/Python, and a trojan/whatever could be written in a scripting language like that anyway, so why bother removing gcc et al. at all. There is a need to build new Linux kernels as well as some security tools from source directly on the server, because the server runs in 64-bit mode and (to my understanding) I can't (cross-)compile locally/elsewhere due to lack of another 64-bit hardware system.

    OK, so here are my questions for you: (a) Is my PRO/CON assessment correct? (b) Do you know of other PROs/CONs to removing all compiler tools? Do they weigh in more? (c) Which binaries should I consider dangerous if the given PRO statement holds? Only gcc, or also make, or what else? Should I remove the entire software packages they come with? (d) Is it OK to just move those binaries to a root-only accessible directory when they are not needed? Or is there a gain in security if I "scp them in" every time? Thank you!
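    As a sketch of option (d) without shuffling files around (my own illustration; binary paths vary by distribution), the toolchain can simply be made inaccessible to non-root users:

        # Restrict common build tools to root only (adjust paths to your distro).
        for bin in /usr/bin/gcc /usr/bin/cc /usr/bin/make /usr/bin/ld /usr/bin/as; do
            [ -e "$bin" ] && chmod 700 "$bin"
        done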

  • Dell Poweredge 1950 with Perc 5i keeps losing raid config -> "Foreign Configuration Found"

    - by nosage
    The quick and dirty: the machine is a Dell PowerEdge 1950, dual quad-core Xeons, 8GB of RAM, two 2TB Seagate SATA drives in what is supposed to be RAID 1, using a PERC 5i RAID card. They are hot-swappable with a backplane. I can build the RAID fine, and after a little while an install of Server 2008 R2 will blue-screen and restart. When it comes up, the RAID controller says "Foreign Configuration Found." When I go into the RAID configuration panel there is no RAID listed, but I can import the "foreign config", and the OS will boot up fine, until it blue-screens again after a little while. The issue is OS independent. I have tried swapping RAID cards, swapping the RAM module on the RAID card, and swapping the RAID battery, all to no avail. It's almost as if there is a loose connection from the RAID card to the backplane: both disks get lost and the RAID card drops the config, but it sees the disks fine when it boots back up. The RAID card uses a SCSI SAS cable to connect to the backplane, so I guess the next step is to replace that, but... then I might as well replace the backplane with a SCSI SAS to SATA breakout cable, but... then I need a way to power the disks. Sorry for the wall of text, but it would be great to get some thoughts from people who have worked with PERC RAID cards or PowerEdge servers with this type of issue before. Ironically, I want to get this system up and running so I can work on MCITP labs. Thank you for any/all help, and feel free to ask questions!

  • Confused about the Windows 7 Preinstallation Kit

    - by David Brown
    I build custom PCs and would like to use the Windows 7 Preinstallation Kit to make installation go a little quicker and customize the Windows image. However, since each PC is built to a particular customer's specifications, the hardware will rarely be the same. So, I would like to have a single answer file that will work for everything. I'm not sure if that's possible, however. What I mostly want to do for now is add my support information, as well as pre-set anything that I would normally change after each installation completes.

    I have a Windows 7 Professional Upgrade DVD set (both 32-bit and 64-bit), but no OEM disks. I copied the Install.wim file to my local drive and opened it in the Windows System Image Manager, but it asks me to choose a catalog file specifically for each edition of Windows 7. Will this limit the answer file to whichever edition I choose? I would think choosing Starter would give me the most basic settings, which would apply to all other editions, but I'm not entirely sure of this.

    I don't intend to install any extra applications or drivers. I merely want to insert an OEM disk and my OPK USB drive and have it work for whatever edition of Windows 7 I'm installing. If a large number of similarly-configured PCs need to be built, I'll go ahead and create a custom answer file in that case, but for a single machine order, that seems like overkill. In addition, do I need a separate answer file for the 32-bit and 64-bit versions of Windows 7? Or will it work for both, even though I copied the Install.wim file from the 32-bit disk? Thanks!
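    For the support-information piece specifically, a minimal answer-file fragment can set it during setup; this is a generic illustration with placeholder values, not something from the original post. Note that answer-file components are architecture-specific (processorArchitecture x86 vs amd64), which bears on the 32-/64-bit question:

        <!-- unattend.xml fragment: OEM support info (placeholder values). -->
        <component name="Microsoft-Windows-Shell-Setup" processorArchitecture="x86"
                   publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
            <OEMInformation>
                <Manufacturer>Example PC Builder</Manufacturer>
                <SupportHours>9am-5pm M-F</SupportHours>
                <SupportPhone>555-0100</SupportPhone>
                <SupportURL>http://support.example.com</SupportURL>
            </OEMInformation>
        </component>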

  • Deploying Django App with Nginx, Apache, mod_wsgi

    - by JCWong
    I have a Django app which runs locally using the standard development environment. I want to now move this to EC2 for production. The Django documentation suggests running with Apache and mod_wsgi, and using Nginx for loading static files. I am running Ubuntu 12.04 on an EC2 box. My Django app, "ddt", contains a subdirectory "apache" with ddt.wsgi:

        import os, sys
        apache_configuration = os.path.dirname(__file__)
        project = os.path.dirname(apache_configuration)
        workspace = os.path.dirname(project)
        sys.path.append(workspace)
        sys.path.append('/usr/lib/python2.7/site-packages/django/')
        sys.path.append('/home/jeffrey/www/ddt/')
        os.environ['DJANGO_SETTINGS_MODULE'] = 'ddt.settings'
        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()

    I have mod_wsgi installed from apt. My apache/httpd.conf contains:

        NameVirtualHost *:8080
        WSGIScriptAlias / /home/jeffrey/www/ddt/apache/ddt.wsgi
        WSGIPythonPath /home/jeffrey/www/ddt
        <Directory /home/jeffrey/www/ddt/apache/>
            <Files ddt.wsgi>
                Order deny,allow
                Allow from all
            </Files>
        </Directory>

    Under apache2/sites-enabled:

        <VirtualHost *:8080>
            ServerName www.mysite.com
            ServerAlias mysite.com
            <Directory /home/jeffrey/www/ddt/apache/>
                Order deny,allow
                Allow from all
            </Directory>
            LogLevel warn
            ErrorLog /home/jeffrey/www/ddt/logs/apache_error.log
            CustomLog /home/jeffrey/www/ddt/logs/apache_access.log combined
            WSGIDaemonProcess datadriventrading.com user=www-data group=www-data threads=25
            WSGIProcessGroup datadriventrading.com
            WSGIScriptAlias / /home/jeffrey/www/ddt/apache/ddt.wsgi
        </VirtualHost>

    If I am correct, these 3 files above should correctly allow my Django app to run on port 8080. I have the following nginx/proxy.conf file:

        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;

    Under nginx/sites-enabled:

        server {
            listen 80;
            server_name www.mysite.com mysite.com;
            access_log /home/jeffrey/www/ddt/logs/nginx_access.log;
            error_log /home/jeffrey/www/ddt/logs/nginx_error.log;
            location / {
                proxy_pass http://127.0.0.1:8080;
                include /etc/nginx/proxy.conf;
            }
            location /media/ {
                root /home/jeffrey/www/ddt/;
            }
        }

    If I am correct, these two files should set up Nginx to take requests on HTTP port 80, but then direct requests to Apache, which is running the Django app on port 8080. If I go to mysite.com, all I see is "Welcome to Nginx!". Any advice for how to debug this?
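    A generic way to narrow this down (my suggestion, not from the original post) is to test each tier separately from the EC2 box; if Apache answers correctly on 8080 but port 80 shows the welcome page, the default Nginx site is likely still enabled and winning the server_name match:

        # 1. Does Apache + mod_wsgi serve the app directly?
        curl -I -H "Host: mysite.com" http://127.0.0.1:8080/
        # 2. Does Nginx proxy it?
        curl -I -H "Host: mysite.com" http://127.0.0.1:80/
        # 3. If (2) returns the welcome page, look for a lingering default vhost:
        ls /etc/nginx/sites-enabled/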

  • Secure ldap problem

    - by neverland
    Hi there, I have tried to configure my OpenLDAP on Debian 5 to accept secure connections using OpenSSL. However, I ran into trouble with the command below.

        ldap:/etc/ldap# slapd -h 'ldap:// ldaps://' -d1
        >>> slap_listener(ldaps://)
        connection_get(15): got connid=7
        connection_read(15): checking for input on id=7
        connection_get(15): got connid=7
        connection_read(15): checking for input on id=7
        connection_get(15): got connid=7
        connection_read(15): checking for input on id=7
        connection_get(15): got connid=7
        connection_read(15): checking for input on id=7
        connection_read(15): unable to get TLS client DN, error=49 id=7
        connection_get(15): got connid=7
        connection_read(15): checking for input on id=7
        ber_get_next
        ber_get_next on fd 15 failed errno=0 (Success)
        connection_closing: readying conn=7 sd=15 for close
        connection_close: conn=7 sd=15

    I have searched for "unable to get TLS client DN, error=49 id=7", but it seems nowhere has a good solution to this yet. Please help. Thanks.

    Well, I tried to fix something to get it working, but now I get this:

        ldap:~# slapd -d 256 -f /etc/openldap/slapd.conf
        @(#) $OpenLDAP: slapd 2.4.11 (Nov 26 2009 09:17:06) $
            root@SD6-Casa:/tmp/buildd/openldap-2.4.11/debian/build/servers/slapd
        could not stat config file "/etc/openldap/slapd.conf": No such file or directory (2)
        slapd stopped.
        connections_destroy: nothing to destroy.

    What should I do now? Log:

        ldap:~# /etc/init.d/slapd start
        Starting OpenLDAP: slapd - failed.
        The operation failed but no output was produced. For hints on what went wrong
        please refer to the system's logfiles (e.g. /var/log/syslog) or try running
        the daemon in Debug mode like via "slapd -d 16383" (warning: this will create
        copious output).
        Below, you can find the command line options used by this script to run slapd.
        Do not forget to specify those options if you want to look to debugging output:
        slapd -h 'ldaps:///' -g openldap -u openldap -f /etc/ldap/slapd.conf

        ldap:~# tail /var/log/messages
        Feb 8 16:53:27 ldap kernel: [ 123.582757] intel8x0_measure_ac97_clock: measured 57614 usecs
        Feb 8 16:53:27 ldap kernel: [ 123.582801] intel8x0: measured clock 172041 rejected
        Feb 8 16:53:27 ldap kernel: [ 123.582825] intel8x0: clocking to 48000
        Feb 8 16:53:27 ldap kernel: [ 131.469687] Adding 240932k swap on /dev/hda5. Priority:-1 extents:1 across:240932k
        Feb 8 16:53:27 ldap kernel: [ 133.432131] EXT3 FS on hda1, internal journal
        Feb 8 16:53:27 ldap kernel: [ 135.478218] loop: module loaded
        Feb 8 16:53:27 ldap kernel: [ 141.348104] eth0: link up, 100Mbps, full-duplex
        Feb 8 16:53:27 ldap rsyslogd: [origin software="rsyslogd" swVersion="3.18.6" x-pid="1705" x-info="http://www.rsyslog.com"] restart
        Feb 8 16:53:34 ldap kernel: [ 159.217171] NET: Registered protocol family 10
        Feb 8 16:53:34 ldap kernel: [ 159.220083] lo: Disabled Privacy Extensions
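    A generic first check (my suggestion, not part of the original post) is to exercise the TLS listener directly and see what certificate, if any, slapd presents; error=49 on the client DN often just means the client did not send a certificate, which is harmless unless TLSVerifyClient demands one:

        # Inspect the TLS handshake and the server certificate on the ldaps port.
        openssl s_client -connect localhost:636 -showcerts
        # Then try a simple search over TLS, relaxing certificate checks for the test.
        LDAPTLS_REQCERT=never ldapsearch -H ldaps://localhost -x -b '' -s base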

  • DNS and DHCP dies after ~2 days of use on ClearOS

    - by TheLQ
    I'm using ClearOS (based on CentOS, so any info specific to that should apply here) as a gateway, DHCP, and DNS server. I had this server running perfectly for a month or two before replacing it with another server. However, due to DNS and DHCP failing 2 days in, and a host of other performance issues (the box was a little underpowered), I changed back to the original server. However, 2 days in, DHCP and DNS are failing again, and I'm out of ideas on why. In both cases, to my knowledge, no network or server changes occurred after installation.

    Right after installing (and at least a day in) DNS and DHCP were working just fine. However, later (day 2), I get a call saying their internet is down (translation: nobody can get to websites because DNS is down). I've tried to fix the problem by checking if dnsmasq is even running (it is), restarting the service, and restarting the server, to no effect. I do have two internal servers that have static DHCP leases, but one's lease must have expired, as I can't connect to it anymore. I'm hesitant to do any DHCP testing on the last server, as I'll not be able to connect to it anymore. Is there anything anyone can think of on why DNS and DHCP would fail 2 days into running perfectly?

    More info: Running dnsmasq in debug mode. This is all that's displayed, even when running nslookup quackwall. I'm not sure, though, if nslookup commands should show up in the log.

        [root@quackwall ~]# /usr/sbin/dnsmasq -dq
        dnsmasq: started, version 2.49 cachesize 150
        dnsmasq: compile time options: IPv6 GNU-getopt no-DBus no-I18N DHCP TFTP
        dnsmasq-dhcp: DHCP, IP range 10.0.0.100 -- 10.0.0.254, lease time 12h
        dnsmasq: reading /etc/resolv.conf
        dnsmasq: using nameserver 74.128.17.114#53
        dnsmasq: using nameserver 74.128.19.102#53
        dnsmasq: read /etc/hosts - 5 addresses
        dnsmasq-dhcp: read /etc/ethers - 2 addresses

    On the other server, DNS and the gateway are all configured correctly (10.0.0.2 is quackwall):

        lordquackstar@quackgame:~$ netstat -rn
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
        10.0.0.0        0.0.0.0         255.255.240.0   U         0 0          0 eth0
        0.0.0.0         10.0.0.2        0.0.0.0         UG        0 0          0 eth0

        lordquackstar@quackgame:~$ cat /etc/resolv.conf
        nameserver 10.0.0.2
        domain highwow.lan
        search highwow.lan
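    One generic way (my suggestion, not from the post) to tell whether queries are even reaching dnsmasq is to aim them at it explicitly from a client while the debug instance is running; with -q in effect, each arriving query should show up in the output:

        # From a client: force the lookup through the gateway, not the stub resolver.
        dig @10.0.0.2 quackwall
        dig @10.0.0.2 google.com
        # On the gateway: confirm something is still listening on port 53.
        netstat -lnup | grep :53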

  • Best way to mount 3-4 monitors like this?

    - by jasondavis
    I just purchased 2 HP 2009m widescreen monitors. They are not the biggest thing on the block, they are around 19-20" and only around $150-200, so I think they are perfect. I bought 2 of them just to make sure I like them, with the full intention of purchasing more to make either a triple or quad display. Now I am stuck trying to decide: if I purchase 1 more to have a triple display, I would like to just wrap the third monitor around to either the right or left side, and I could most likely do this pretty easily without a mount. If I decide to go with 2 more monitors to make a quad display, then I would like to add the 2 new monitors directly above the 2 that I have now, so it would make a grid 2 wide and 2 high. I have posted a few photos below to show the 2 I have now; you will notice that I have them tilted inwards to make more of a "V" shape instead of them being side by side and STRAIGHT. Now, if I decide to make the grid of 4, I will need to buy or build a stand to hold them all tightly together (no whitespace or gap between the grid of monitors), but I would still like both rows to angle inwards to make the slight "V". Do you know of any existing stands I could purchase that would hold all 4 monitors without forcing them to be STRAIGHT, so I can keep the "V" shape? Any tips appreciated, please; they do have VESA holes in the back. A few photos... (they are from an iPhone and the lighting made them not very good, but you can see what I am working with here)

  • RemoteApp .rdp embed creds?

    - by Chris_K
    Windows 2008 R2 server running Remote Desktop Services (what we used to call Terminal Services back in the olden days). This server is the entry point into a hosted application; you could call it Software as a Service, I suppose. We have 3rd-party clients connecting to use it. We're using RemoteApp Manager to build RemoteApp .rdp shortcuts to distribute to client workstations. These workstations are not in the same domain as the RDS server. There is no trust relationship between domains (nor will there be). There is a tightly controlled site-to-site VPN between workstations and the RDS server; we're quite confident we have access to the server locked down. The RemoteApp being run is an ERP application with its own authentication scheme.

    The issue? I'm trying to avoid the need to create AD logins for every end user when connecting to the RemoteApp server. In fact, since we're doing a RemoteApp and they have to authenticate to that app, I'd rather just not prompt them at all for AD creds. I certainly don't want them caught up in managing AD passwords (and periodic expirations) for accounts they only use to get to their ERP login. However, I can't figure out how to embed AD creds in a RemoteApp .rdp file. I don't really want to turn off all authentication on the RDS server at that level. Any good options? My goal is to make this as seamless as possible for the end-users. Clarifying questions are welcome.
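    For reference, generic .rdp syntax (placeholder values below, not from the original post) can at least pre-fill the account and domain; a reusable plaintext password can't be shipped this way, since an .rdp file stores at most a password encrypted for the user and machine that saved it:

        remoteapplicationmode:i:1
        full address:s:rds.example.com
        username:s:erpuser
        domain:s:RDSDOMAIN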

  • Archive software for big files and fast index

    - by AkiRoss
    I'm currently using tar for archiving some files. Problem: the archives are pretty big, contain a lot of data, and tar is very slow when listing and extracting. I often need to extract single files or folders from the archive, but I don't currently have an external index of files. So, is there an alternative for Linux that allows me to build uncompressed archive files, preserving the file attributes AND having a fast access list table? I'm talking about archives of 10 to 100 GB, and it's pretty impractical to wait several minutes to access a single file. Anyway, any trick to solve this problem is welcome (but single archives are non-optional, so no rsync or similar). Thanks in advance!

    EDIT: I'm not compressing archives, and using tar I think they are too slow. To be precise about "slow", I'd like that:

    - listing archive content should take time linear in the file count inside the archive, but with a very small constant (e.g. if a list of all the files is included at the head of the archive, it could be very fast);
    - extraction of a target file/directory should (filesystem permitting) take time linear in the target size (e.g. if I'm extracting a 2MB PDF file from a 40GB directory, I'd really like it to take less than a few minutes... if not seconds).

    Of course, this is just my idea and not a requirement. I guess such performance could be achievable if the archive contained an index of all the files with their respective offsets, and that index were well organized (e.g. a tree structure).
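    Pending a better tool, a crude workaround (my own generic sketch, not from the post) is to keep a side index of member names, so the listing cost is paid once at creation time; extraction is still a sequential scan, since plain tar has no offset table:

        # Build the archive and a plain-text index of its members in one pass.
        tar -cvf backup.tar /data > backup.tar.idx
        # Fast "listing": grep the index instead of scanning the archive.
        grep 'reports/2010' backup.tar.idx
        # Extract a single member found in the index (still a sequential read).
        tar -xf backup.tar data/reports/2010/summary.pdf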

  • JBoss7 load balancing with mod_proxy_balancer - session not working

    - by Phil P.
    I am trying to set up mod_proxy_balancer to route requests to 2 JBoss 7 servers. For the time being I am testing this setup on my local machine, using the following config in httpd.conf:

        ProxyRequests Off
        <Proxy *>
            Order deny,allow
            Deny from all
        </Proxy>
        ProxyPass / balancer://mycluster/ stickysession=JSESSIONID|jsessionid scolonpathdelim=On
        <Proxy balancer://mycluster>
            BalancerMember http://localhost:8080 route=node1
            BalancerMember http://localhost:8081 route=node2
            Order allow,deny
            Allow from all
        </Proxy>

    and in the standalone.xml file of each JBoss I have defined the jvmRoute system property:

        <system-properties>
            <property name="jvmRoute" value="node1"/>
        </system-properties>

    At http://localhost/myapp the application is accessible, but the Java session is not built up correctly. Consequently, the authentication is not working. The funny thing is that everything works if I turn off one JBoss instance. As I have tried a couple of settings already, I am thankful for any further suggestions.
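    One thing worth checking (an assumption on my part, not something confirmed in the post): in JBoss AS 7 the jvmRoute system property alone may not be picked up by the web subsystem; the route suffix that mod_proxy_balancer matches against comes from the subsystem's instance-id, along these lines:

        <subsystem xmlns="urn:jboss:domain:web:1.1" instance-id="node1"
                   default-virtual-server="default-host" native="false">
            ...
        </subsystem>

    Without a route suffix on JSESSIONID, sticky sessions cannot bind, which would explain why one instance works but two don't.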

  • Issues with duplicating TFS 2005 to a virtual server

    - by Hitchhiker
    We have a problem with our current TFS installation. For some reason, which I won't get into, the SharePoint DBs (sts_content, sts_config) were upgraded to WSS 3 (MOSS). So now none of our team-project sites work, we have no access to our documents, and we can't create new team projects. We can still work with the version control, though. We wanted to "play" with the server and try to fix it without affecting the users, so we used P2V for VMware to duplicate the server to a virtual one. We then changed all of the relevant configuration to point to the new server, as explained in the MSDN article. The step of rebuilding the warehouse (with setupwarehouse) failed. Also, we can't access the VersionControl web service (ourserver:8080/VersionControl/v1.0/repository.asmx). We are seeing errors in the event log:

        TF53002: Unable to obtain registration data for application Build.
        TF53005: Unable to retrieve the Team Foundation Server installed UI culture.
        TF53002: Unable to obtain registration data for application VersionControl.
        TF30040: The database is not correctly configured. Contact your Team Foundation Server administrator.

    The solutions suggested in this blog post did not assist. So now we're kind of stuck. Any assistance will be appreciated.

  • Does any economically-feasible publicly available software compare audio files to determine if they are dupes?

    - by drachenstern
    In the vein of this question (http://unix.stackexchange.com/questions/3037/is-there-an-easy-way-to-replace-duplicate-files-with-hardlinks), is there any software that will automatically parse a library of my songs and find the ones that really are duplicates, so that one of them can be eliminated? Here's an example: my brother used to be a huge fan of remixing CDs. He would take all of his favorite tracks and put them on one. Then he would use my computer to read them in. So now I have something like 6 copies of Californication on my HDD, and they all differ by a few bytes overall. I have hundreds of songs in my library like this. I want to trim them down to uniques. They don't all have correct ID3 tags, so figuring out that Untitled(74).mp3 is the same as californication.mp3 is the same as whowrotethis.mp3 is tricky. I do NOT want a concert album and a studio album rip to be considered the same (if I just did artist/title matching I would end up with this scenario, which doesn't work for me). I use Windows (pick your platform) and will be getting an OS X box later in the year. I'll run Linux if that's what it takes to get it organized. I have unprotected AAC and MP3 files. Bonus points for handling WAV or MIDI, and bonus points for converting from those into MP3 (I can always use Audacity and LAME to convert later if I know they match, or to convert ahead of time if that will make things easier). Are there any suggestions, or do I need to go to Programmers or SO, build a list of requirements for comparing these things, and write the software myself?
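    This is essentially an acoustic-fingerprinting problem. One freely available route (my suggestion, not something named in the post) is Chromaprint's fpcalc tool, which fingerprints the decoded audio rather than the file bytes, so mismatched ID3 tags don't matter:

        # Byte-identical rips give identical fingerprints; near-duplicates
        # (different encodes of the same track) need a similarity comparison
        # of the fingerprint vectors rather than an exact match.
        fpcalc "Untitled(74).mp3"
        fpcalc "californication.mp3"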

  • ffmpeg rotate mp4 90°

    - by shox
    Hi, can I rotate (+save/re-encode) an .mp4 with ffmpeg? The only thing I found was a mailing-list post saying -vfilters "rotate=90", but ffmpeg says there is no vfilters option. I tried -vf; it says there is no rotate filter. If I try to do it in VLC, it simply does not rotate and kills the audio (did the VLC encoding EVER work? Every single freakin' video I throw at it gets fu****d up in some way -_-). I'm on a Mac and don't have iWork. Any ideas? Thanks.

        FFmpeg version git-svn-r23607, Copyright (c) 2000-2010 the FFmpeg developers
        built on Jun 14 2010 23:52:55 with gcc 4.2.1 (Apple Inc. build 5659)
        configuration:
        libavutil     50.19. 0 / 50.19. 0
        libavcodec    52.76. 0 / 52.76. 0
        libavformat   52.68. 0 / 52.68. 0
        libavdevice   52. 2. 0 / 52. 2. 0
        libavfilter    1.20. 0 /  1.20. 0
        libswscale     0.11. 0 /  0.11. 0
        Hyper fast Audio and Video encoder
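    For what it's worth (general ffmpeg knowledge, not from the post): libavfilter exposes rotation as the transpose filter rather than "rotate"; a build of similar vintage or newer can usually do:

        # Rotate 90 degrees clockwise and re-encode video; audio is copied unchanged.
        ffmpeg -i in.mp4 -vf "transpose=1" -acodec copy out.mp4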

  • Networking lost after update from Debian Wheezy to Jessie

    - by Charaf
    I am currently setting up a virtual machine for development purposes. I did a big part of this configuration under Wheezy, but I need some debs that are available only in Jessie. So, I've updated sources.list and did a dist-upgrade. Everything went well, but after the reboot, I noticed that I had lost all networking. Repositories are unreachable, and a simple ping google.fr returns nothing. What can I do to quickly restore networking so that I can continue working? I have a poor connection and cannot afford to download the whole install DVDs.

        root@vm~# ifconfig
        lo    Link encap:Local Loopback
              inet addr:127.0.0.1 Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING MTU:65536 Metric:1
              RX packets:452 errors:0 dropped:0 overruns:0 frame:0
              TX packets:452 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:164238 (160.3 KiB) TX bytes:164238 (160.3 KiB)

    I am running VMware 1.0.1 build 1379776 and the latest update of Jessie (Debian 3.14.4-1). Please help. Thanks.
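    Since only lo shows up, a generic first step (my suggestion, not from the post) is to check whether the kernel still sees the NIC and whether the upgrade renamed it, then bring it up by hand:

        # Does the kernel see the interface at all, and under what name?
        ip link show
        # If it shows e.g. eth0 (or a new name), try bringing it up with DHCP:
        ip link set eth0 up
        dhclient eth0
        # And make sure /etc/network/interfaces still references the right name.
        cat /etc/network/interfaces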

  • Picking a linux compatible motherboard

    - by Chris
    Last time I bought a new computer (I build them myself), I got a motherboard that had really poor Linux support for a long time. Specifically the audio: I had to wait months before the kernel supported the onboard audio chipset. That is exactly the situation I'm trying to avoid this time around.

    I have some specific questions about "server motherboards", actually. I looked at a few models of server motherboards by Intel, and some random models on Newegg. I wasn't able to see much of a difference from regular desktop motherboards, other than that most had two sockets and support for much more RAM. These boards seem more popular with Linux users. Why? AMD and Intel both have server CPUs as well. Same question: what's the difference?

    To make this question more concrete, I was looking at this motherboard. The main questions about it that I can't answer are:

    - Can I get a motherboard without onboard RAID and audio? I wanted to get a hardware RAID controller and a PCI audio card. I thought a server motherboard would be cheaper and not have these "extras", since who wants an audio card on a server?
    - Where can I find out about Linux support for the components on this board? "Intel ICH10R", "Realtek ALC889", "Marvell 88E8056"

    I'm buying this computer to work as a Linux desktop for a lot of compiling, coding and audio/video work, but I don't want to rule out the possibility of installing Windows and playing some games at one point (even if the last game I got has been sitting in its box unopened for almost a year). Is it a good idea to buy a "server motherboard" and play games on it, or are desktop boards better value for this? The ultimate solution for me would be a motherboard that had GPL drivers for onboard LAN, a single CPU socket, lots of PCI Express and PCI, USB 3.0, and no fancy hard disk controllers, since I'll be getting a separate one.

  • Apache, suexec, PHP, suPHP

    - by Chris_K
    While I'm quite comfortable as a Linux user, my Linux admin-fu is a bit weak. Thus, I'm here looking for guidance with a CentOS server I'm about to build. I need to set up an Apache 2 web server for a few of our clients. I want each client's web content to be under their home directory (UserDir in apache.conf, right?) for the static HTML sites. I want Apache to run as the client (suexec?). Some of their stuff will be PHP apps, and I'm under the impression I'll want to look at suPHP as well then. So basically I want to look like a small version of a shared web hosting company. Considering how common those are, I thought I'd easily find a nice, current how-to guide on setting this all up, but so far I've had very little luck. I suspect my search words are off.

    So the questions (feel free to answer any or all):

    - Anyone have some solid links to current/modern guides that would help me set this all up? No, the Apache documentation site is not a guide ;-)
    - Since I have a mix of static sites and PHP apps, do I want/need both suexec and suPHP installed? If so, does that introduce any challenges I should be aware of?
    - Should I be looking at other options instead of suexec and suPHP? I plan to give the end users SSH, SFTP or SCP access to their stuff (if that affects anything).

    Thanks in advance for your help.
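    As a rough per-client sketch (my own illustration with placeholder names, not taken from any guide), mod_suphp lets each vhost run PHP as the owning user while static files stay with plain Apache:

        <VirtualHost *:80>
            ServerName client1.example.com
            DocumentRoot /home/client1/public_html
            suPHP_Engine on
            suPHP_UserGroup client1 client1
            AddHandler x-httpd-php .php
            suPHP_AddHandler x-httpd-php
        </VirtualHost>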

  • JBoss 5 on AIX 5.3

    - by jess
    I am a real newbie with AIX and system monitoring. Our application currently runs in production on JBoss 5.1 on AIX 5.3. Configuration and system settings below.

    AIX system configuration:

        OS level: 5.3.9.0 (oslevel -g)
        Physical memory: 24GB (svmon -G)
        Page space: 4GB (lsps -s)
        Processors: 3 cores, PowerPC_POWER6, clock speed 4704 MHz (prtconf | grep Processor)

    Java version:

        JRE 1.6.0 IBM AIX build pap6460sr10fp1-20120321_01 (SR10 FP1) (java -fullversion)

    JBoss configuration:

        JBoss 5.1 / JBoss ESB 4.11
        HornetQ messaging with consumer flow control
        Java opts: -d64 -Xms2g -Xmx4g -XX:MaxPermSize=1024m

    Sometimes we observe very strange behavior in JBoss: it freezes without any error logs, and the server log stops without any further trace. We are also unable to get a thread dump (kill -3); nothing is generated at that point (kill -3 xxxxx works in normal circumstances). The only option available to us was to restart the JBoss server, and it seems all messages that were in the queues during the freeze are processed after restarting. We tried tweaking some settings in HornetQ, thinking the issue was there ("Hornetq Stuck By Default"), but we haven't had any luck, and we are unable to isolate the issue to any single point. We are looking at a tool like nmon for monitoring this, but have no clue whether that is good enough. Please provide some pointers to investigate this issue. Thanks.
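    On the monitoring question: nmon ships with AIX and can at least record what the box is doing leading up to a freeze; a typical capture (standard nmon usage, not something from the post) is:

        # Snapshot every 60 seconds, 60 times (one hour), written to a file
        # for later analysis with the nmon analyser.
        nmon -f -s 60 -c 60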

  • Karmic iptables missing kernel modules on OpenVZ container

    - by luison
    After an unsuccessful P2V migration of my Ubuntu server to an OpenVZ container, which I am stuck with, I thought I would give a reinstall a try, based on a clean OpenVZ template for Ubuntu 9.10 (from the OpenVZ wiki). When I try to load my iptables rules on the VM, I get errors which I believe are related to kernel modules not being loaded on the VM from the /vz/XXX.conf template model. I've been testing with a few posts I've found, but I was stuck with this error:

        WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
        FATAL: Could not load /lib/modules/2.6.24-10-pve/modules.dep: No such file or directory
        iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw'
        Error occurred at line: 2
        Try `iptables-restore -h' or 'iptables-restore --help' for more information.

    I read about the template not loading all iptables modules, so I added modules to the XXX.conf of the VZ virtual machine like this:

        IPTABLES="ip_tables iptable_filter iptable_mangle ipt_limit ipt_multiport ipt_tos ipt_TOS ipt_REJECT ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_LOG ipt_length ip_conntrack ip_conntrack_ftp ip_conntrack_irc ipt_conntrack ipt_state ipt_helper iptable_nat ip_nat_ftp ip_nat_irc"

    As the error remained, I read that I should rebuild dependencies on the virtual machine:

        depmod -a

    but this returned an error:

        WARNING: Couldn't open directory /lib/modules/2.6.24-10-pve: No such file or directory
        FATAL: Could not open /lib/modules/2.6.24-10-pve/modules.dep.temp for writing: No such file or directory

    So I read again about creating the directory empty and redoing "depmod -a". I now don't get the dependency error, but I get this, and I don't have a clue how to proceed:

        WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
        FATAL: Module ip_tables not found.
        iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw'
        Error occurred at line: 2
        Try `iptables-restore -h' or 'iptables-restore --help' for more information.

    I understand that iptables rules have to be different on the VM, and perhaps some of the rules we are trying to apply (from our physical server) are not compatible, but these are just source IP and destination port checks that I would like to have available. I've heard that on the CentOS template there are no issues with this, so I understand it is to do with the VM config. Any help would be greatly appreciated.
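    Since a container cannot load kernel modules itself, the modules have to be loaded on the hardware node and granted to the container from there; a typical sequence on the host (standard OpenVZ practice; 101 is a placeholder container ID) is:

        # On the hardware node, not inside the container:
        modprobe -a ip_tables iptable_filter iptable_nat ipt_REJECT ipt_state
        vzctl set 101 --iptables "ip_tables iptable_filter iptable_nat ipt_REJECT ipt_state" --save
        vzctl restart 101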

  • Excessive CPU Utilization for Bind 9.8.1 `named` processes

    - by justinzane
    I just noticed that named is eating vast amounts of CPU time on a very small network with only a few domains. Can someone help me determine what is misconfigured, please? Or how to debug this?

    top:

        top - 14:13:08 up 25 days, 14:16, 1 user, load average: 1.04, 1.04, 1.05
        Tasks: 149 total, 1 running, 148 sleeping, 0 stopped, 0 zombie
        %Cpu(s): 17.3 us, 4.3 sy, 0.0 ni, 78.2 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st
        KiB Mem:  2042776 total, 1347916 used,  694860 free, 249396 buffers
        KiB Swap: 3976080 total,   30552 used, 3945528 free, 574164 cached

          PID USER  PR NI VIRT RES SHR  S %CPU %MEM   TIME+  COMMAND
        17445 bind  20  0 244m 42m 3124 S 99.4  2.2 2345:03  named

    rndc stats:

        +++ Statistics Dump +++ (1352931389)
        ++ Incoming Requests ++
        65869 QUERY
        ++ Incoming Queries ++
        31809 A
        241 NS
        3 CNAME
        27455 SOA
        276 PTR
        123 MX
        462 TXT
        5400 AAAA
        7 A6
        1 DS
        14 DNSKEY
        15 SPF
        55 AXFR
        8 ANY
        ++ Outgoing Queries ++
        [View: internal]
        22206 A
        509 NS
        10 SOA
        25 PTR
        12 MX
        524 TXT
        4851 AAAA
        62 DNSKEY
        19 SPF
        3157 DLV
        [View: external]
        87 A
        2 NS
        80 AAAA
        120 DNSKEY
        7 DLV
        [View: _bind]
        ++ Name Server Statistics ++
        65869 IPv4 requests received
        27670 requests with EDNS(0) received
        112 TCP requests received
        65652 responses sent
        20 truncated responses sent
        27670 responses with EDNS(0) sent
        62920 queries resulted in successful answer
        37117 queries resulted in authoritative answer
        28482 queries resulted in non authoritative answer
        7 queries resulted in referral answer
        591 queries resulted in nxrrset
        53 queries resulted in SERVFAIL
        2081 queries resulted in NXDOMAIN
        14530 queries caused recursion
        162 duplicate queries received
        55 requested transfers completed
        ++ Zone Maintenance Statistics ++
        109536 IPv4 notifies sent
        ++ Resolver Statistics ++
        [Common]
        [View: internal]
        29362 IPv4 queries sent
        2013 IPv6 queries sent
        28531 IPv4 responses received
        4209 NXDOMAIN received
        6 SERVFAIL received
        31 FORMERR received
        32 EDNS(0) query failures
        3359 query retries
        836 query timeouts
        5348 IPv4 NS address fetches
        3271 IPv6 NS address fetches
        83 IPv4 NS address fetch failed
        2779 IPv6 NS address fetch failed
        17421 DNSSEC validation attempted
        12731 DNSSEC validation succeeded
        4690 DNSSEC NX validation succeeded
        21104 queries with RTT 10-100ms
        7418 queries with RTT 100-500ms
        3 queries with RTT 500-800ms
        1 queries with RTT 800-1600ms
        [View: external]
        192 IPv4 queries sent
        104 IPv6 queries sent
        192 IPv4 responses received
        2 NXDOMAIN received
        104 query retries
        44 IPv4 NS address fetches
        44 IPv6 NS address fetches
        1 IPv4 NS address fetch failed
        1 IPv6 NS address fetch failed
        4 DNSSEC validation attempted
        3 DNSSEC validation succeeded
        1 DNSSEC NX validation succeeded
        152 queries with RTT 10-100ms
        40 queries with RTT 100-500ms
        [View: _bind]
        ++ Cache DB RRsets ++
        [View: internal (Cache: internal)]
        2007 A
        652 NS
        131 CNAME
        1 MX
        32 TXT
        421 AAAA
        28 DS
        244 RRSIG
        110 NSEC
        3 DNSKEY
        2 !A
        2 !TXT
        89 !AAAA
        2 !SPF
        14 !DLV
        148 NXDOMAIN
        [View: external (Cache: external)]
        55 A
        12 NS
        34 AAAA
        2 DS
        10 RRSIG
        1 DNSKEY
        [View: _bind (Cache: _bind)]
        ++ Socket I/O Statistics ++
        82958 UDP/IPv4 sockets opened
        2118 UDP/IPv6 sockets opened
        4 TCP/IPv4 sockets opened
        1 TCP/IPv6 sockets opened
        82956 UDP/IPv4 sockets closed
        2117 UDP/IPv6 sockets closed
        58 TCP/IPv4 sockets closed
        15 UDP/IPv4 socket bind failures
        2117 UDP/IPv6 socket connect failures
        29554 UDP/IPv4 connections established
        59 TCP/IPv4 connections accepted
        2117 UDP/IPv6 send errors
        5 UDP/IPv4 recv errors
        ++ Per Zone Query Statistics ++
        --- Statistics Dump --- (1352931389)
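    Two generic ways (my suggestions, not from the post) to see what named is actually spending its time on are to raise the debug level at runtime and to log queries for a while:

        # Raise debug logging (written to named.run in named's working directory).
        rndc trace 3
        # Toggle query logging on; watch the log, then run it again to toggle off.
        rndc querylog
        # Drop the debug level back to zero when done.
        rndc notrace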

  • Unusual Apache->Tomcat caching issue.

    - by iftrue
    Right now, I have an Apache setup sitting in front of Tomcat to handle caching. This setup has been given to an external service to manage, and since the transition, I've noticed odd behavior. Specifically, when I request a swf file from the web server, I hit the Apache cache (good), but occasionally I'll receive a truncated file. Once I receive this truncated file, the cache will NOT refresh until I manually delete the cache and let the swf pull down from Tomcat again. The external service claims that the configuration is fine, but I don't see any way this could be happening aside from improper configuration. Now, there are two Apache and two Tomcat servers under a load balancer, and occasionally one Apache cache will break while another does not (leading to 50% of all requests getting bad, truncated data). Where should I start looking to debug this issue? What could POSSIBLY be causing this odd behavior?

    Edit: Inspecting the logs, Tomcat throws this:

        java.io.IOException: Bad file number
            at java.io.FileInputStream.readBytes(Native Method)
            at java.io.FileInputStream.read(FileInputStream.java:199)
            at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
            at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
            at java.io.FilterInputStream.read(FilterInputStream.java:90)
            at org.apache.catalina.servlets.DefaultServlet.copyRange(DefaultServlet.java:1968)
            at org.apache.catalina.servlets.DefaultServlet.copy(DefaultServlet.java:1714)
            at org.apache.catalina.servlets.DefaultServlet.serveResource(DefaultServlet.java:809)
            at org.apache.catalina.servlets.DefaultServlet.doGet(DefaultServlet.java:325)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:690)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:568)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
            at org.apache.catalina.ha.session.JvmRouteBinderValve.invoke(JvmRouteBinderValve.java:209)
            at org.apache.catalina.ha.tcp.ReplicationValve.invoke(ReplicationValve.java:347)
            at org.terracotta.modules.tomcat.tomcat_5_5.SessionValve55.invoke(SessionValve55.java:57)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286)
            at org.apache.jk.server.JkCoyoteHandler.invoke(JkCoyoteHandler.java:190)
            at org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:283)
            at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:767)
            at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:697)
            at org.apache.jk.common.ChannelSocket$SocketConnection.runIt(ChannelSocket.java:889)
            at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:690)
            at java.lang.Thread.run(Thread.java:619)

    followed by:

        access_log.2009-12-14.txt:1.2.3.4 - - [14/Dec/2009:00:27:32 -0500] "GET /myApp/mySwf.swf HTTP/1.1" 304 -
        access_log.2009-12-14.txt:1.2.3.4 - - [14/Dec/2009:01:27:33 -0500] "GET /myApp/mySwf.swf HTTP/1.1" 304 -
        access_log.2009-12-14.txt:1.2.3.4 - - [14/Dec/2009:01:39:53 -0500] "GET /myApp/mySwf.swf HTTP/1.1" 304 -
        access_log.2009-12-14.txt:1.2.3.4 - - [14/Dec/2009:02:27:38 -0500] "GET /myApp/mySwf.swf HTTP/1.1" 304 -

    So Apache is caching the bad file size. What could possibly be causing this, and, possibly separately, how do I ensure that this exception does not get written to cache?
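    A quick generic check (my suggestion, not from the post; the host name is a placeholder) for a truncated cache entry is to compare the advertised Content-Length against the bytes actually delivered on each Apache node in turn:

        # A truncated cached copy delivers fewer bytes than the
        # Content-Length header claims.
        curl -s -D - -o /dev/null -w 'downloaded: %{size_download} bytes\n' \
            http://apache-node-1/myApp/mySwf.swf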

  • Bash Completion Script Help

    - by inxilpro
    So I'm just starting to learn about bash completion scripts, and I started to work on one for a tool I use all the time. First I built the script using a set list of options:

        _zf_comp() {
            local cur prev actions
            COMPREPLY=()
            cur="${COMP_WORDS[COMP_CWORD]}"
            prev="${COMP_WORDS[COMP_CWORD-1]}"
            actions="change configure create disable enable show"
            COMPREPLY=($(compgen -W "${actions}" -- ${cur}))
            return 0
        }
        complete -F _zf_comp zf

    This works fine. Next, I decided to dynamically create the list of available actions. I put together the following command:

        zf | grep "Providers and their actions:" -A 100 | grep -P "^\s*\033\[36m\s*zf" | awk '{gsub(/[[:space:]]*/, "", $3); print $3}' | sort | uniq | awk '{sub("\-", "\\-", $1); print $1}' | tr \\n " " | sed 's/^ *\(.*\) *$/\1/'

    which basically does the following:

    1. Grabs all the text in the zf output after "Providers and their actions:"
    2. Grabs all the lines that start with "zf" (I had to do some fancy work here 'cause the ZF command prints in color)
    3. Grabs the third field of each line and removes any spaces from it (the spaces part is probably not needed any more)
    4. Sorts the list
    5. Gets rid of any duplicates
    6. Escapes dashes (I added this when trying to debug the problem; probably not needed)
    7. Replaces all newlines with spaces
    8. Trims all leading and trailing spaces

    The above command produces:

        $ zf | grep "Providers and their actions:" -A 100 | grep -P "^\s*\033\[36m\s*zf" | awk '{gsub(/[[:space:]]*/, "", $3); print $3}' | sort | uniq | awk '{sub("\-", "\\-", $1); print $1}' | tr \\n " " | sed 's/^ *\(.*\) *$/\1/'
        change configure create disable enable show
        $

    So it looks to me like it's producing the exact same string as I had in my original script. But when I do:

        _zf_comp() {
            local cur prev actions
            COMPREPLY=()
            cur="${COMP_WORDS[COMP_CWORD]}"
            prev="${COMP_WORDS[COMP_CWORD-1]}"
            actions=`zf | grep "Providers and their actions:" -A 100 | grep -P "^\s*\033\[36m\s*zf" | awk '{gsub(/[[:space:]]*/, "", $3); print $3}' | sort | uniq | awk '{sub("\-", "\\-", $1); print $1}' | tr \\n " " | sed 's/^ *\(.*\) *$/\1/'`
            COMPREPLY=($(compgen -W "${actions}" -- ${cur}))
            return 0
        }
        complete -F _zf_comp zf

    my autocompletion starts acting up. First, it won't autocomplete anything with an "n" in it, and second, when it does autocomplete ("zf create" for example), it won't let me backspace over the completed command. The first issue I'm completely stumped on. The second I'm thinking might have to do with escape characters from the colored text. Any ideas? It's driving me crazy!
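    A likely culprit for the "n" symptom (my analysis, not something stated in the original post): inside backticks the shell consumes one level of backslashes, so the embedded tr \\n " " reaches tr as tr \n " ", which means "translate the letter n to a space". A sketch of a safer version, using $(...) instead of backticks, quoting the tr argument, and stripping the ANSI color codes up front (note the awk field may shift once the escape codes are gone, since the original $3 counted the color code as a field):

        actions=$(zf | sed -r 's/\x1b\[[0-9;]*m//g' \
            | grep "Providers and their actions:" -A 100 \
            | awk '/^[[:space:]]*zf/ { print $2 }' \
            | sort -u | tr '\n' ' ')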

  • Slow parity initialization of RAID-5 array on HP Smart Array 411 controller

    - by Rob Nicholson
    On 29th October 2011, I built a RAID-5 array using 4 x 146.8GB Seagate SAS ST3146855SS drives running at 15k, connected to a PowerEdge R515 with an HP Smart Array P411 controller, running Windows 2008 (so nothing particularly unusual). I know that parity initialization of a RAID-5 array can take some time, but it's still running after 2.5 weeks, which seems a little unusual. I'd previously built another array on the same controller using 4 x 2TB SATA-2 drives, and that did take a while to complete, but a) I'm sure it was less than 2.5 weeks, b) that array was ~12 times bigger, and c) during initialization, the percentage slowly increased each day. At the moment, the status display for this new 2nd array simply says "Parity Initialization Status: In Progress", and it has said that since the start. It's this lack of change in the status that worries me the most; it feels like it's not actually doing anything. Do you think something has gone wrong, or am I being impatient and, for some reason, the status not increasing is normal? I kind of expected a much smaller array on faster drives (15k SAS versus 7.5k SATA-2) to build in a few days. This is our primary SAN running StarWind, so my "have a play" options are very limited. This 2nd array is currently in use for one small virtual disk, so I could shut the target machine down, move the virtual disk to another drive, and try rebuilding.
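    If the HP CLI tools are installed on the box, they report more detail than the status screen; a generic check (my suggestion, not from the post) is:

        # Show controllers, arrays, and parity initialization progress in detail.
        hpacucli ctrl all show config detail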

  • Stop Picasa (Mac) from scanning my harddrive

    - by Bodyscanner
    I want to use the Picasa desktop app instead of the tedious & clunky web interface to share a couple of photos. Every time I launch Picasa, it proceeds to open an annoying pop-up/tool-tip which flicks through every file on my HD, using up to 95% of CPU. I don't want this, so I click the X. It appears again. I try to drag it somewhere less annoying onscreen, but it pings back. I look in prefs for an option to turn it off. I give up and quit the app until a new build comes out, which I download, and repeat the above. WTF?! I understand Google can't be as cool as Apple - iPhoto isn't perfect by any means, but at least it looks nice and 'just works'. I want to launch Picasa, not have it go through everything, not have thousands of random pics and HD cruft on display in the list, and then perhaps drag in a few photos and upload them. Any idea if that is possible? </rant>

  • Installing sqlite gem fails on AWS Linux instance with sqlite-devel libraries installed

    - by Scott
    Hi, I'm running an instance built off ami-595a0a1c. I am trying to install the sqlite3 (or sqlite) gem, and it's failing with the below error:

        $ sudo gem install sqlite3
        Building native extensions. This could take a while...
        ERROR: Error installing sqlite3:
        ERROR: Failed to build gem native extension.
        /usr/bin/ruby extconf.rb
        checking for sqlite3.h... no
        sqlite3.h is missing. Try 'port install sqlite3 +universal' or
        'yum install sqlite3-devel' and check your shared library search path (the
        location where your sqlite3 shared library is located).
        extconf.rb failed
        * Could not create Makefile due to some reason, probably lack of necessary
        libraries and/or headers. Check the mkmf.log file for more details. You may
        need configuration options.
        Provided configuration options:
            --with-opt-dir
            --without-opt-dir
            --with-opt-include
            --without-opt-include=${opt-dir}/include
            --with-opt-lib
            --without-opt-lib=${opt-dir}/lib
            --with-make-prog
            --without-make-prog
            --srcdir=.
            --curdir
            --ruby=/usr/bin/ruby
            --with-sqlite3-dir
            --without-sqlite3-dir
            --with-sqlite3-include
            --without-sqlite3-include=${sqlite3-dir}/include
            --with-sqlite3-lib
            --without-sqlite3-lib=${sqlite3-dir}/lib
        Gem files will remain installed in /usr/lib64/ruby/gems/1.8/gems/sqlite3-1.3.3 for inspection.
        Results logged to /usr/lib64/ruby/gems/1.8/gems/sqlite3-1.3.3/ext/sqlite3/gem_make.out

    Typically, this just means you need to install the development libraries and everything is cool. However, I have installed the sqlite-devel packages and still no dice. Since this is the Amazon Linux instance, I'd rather not add more repositories than the ones Amazon provides, if possible. What can I do to get this thing to compile? Thanks for any insight!

    From a brand new instance, here's what I've done:

        $ sudo yum install rubygems ruby-devel
        $ sudo gem update --system
        $ sudo gem install rails
        $ rails new app
        $ cd app
        $ rails server
        Could not find gem 'sqlite3 (= 0)' in any of the gem sources listed in your Gemfile.
        $ sudo yum install sqlite-devel
        $ sudo gem install sqlite (or sqlite3 -- same result)

    See breakage above.
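    If the headers landed somewhere mkmf doesn't look (my suggestion, not from the post; the paths below are the usual Amazon Linux locations, so verify them first), it can help to locate sqlite3.h and pass its directory to the gem build explicitly:

        # Find the header installed by sqlite-devel...
        rpm -ql sqlite-devel | grep sqlite3.h
        # ...then hand that directory to extconf through the gem's build options.
        sudo gem install sqlite3 -- --with-sqlite3-include=/usr/include \
            --with-sqlite3-lib=/usr/lib64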
