Search Results

Search found 9366 results on 375 pages for 'common lisp'.

Page 178/375

  • BIND DNS server on Solaris 10 and Windows XP clients

    - by stevecomptech
    Hi, I added the following to the zone db file; I am running Solaris 10:

        _ldap._tcp.mydomain.com.                SRV 0 0 389 dc.mydomain.com.
        _kerberos._tcp.mydomain.com.            SRV 0 0 88  dc.mydomain.com.
        _ldap._tcp.dc._msdcs.mydomain.com.      SRV 0 0 389 dc.mydomain.com.
        _kerberos._tcp.dc._msdcs.mydomain.com.  SRV 0 0 88  host.mydomain.com.

    Now I get this error when I try to join Windows XP to the domain:

        The query was for the SRV record for _ldap._tcp.dc._msdcs.mydomain.com
        The following domain controllers were identified by the query: host.mydomain.com
        Common causes of this error include:
        - Host (A) records that map the name of the domain controller to its IP addresses are missing or contain incorrect addresses.
        - Domain controllers registered in DNS are not connected to the network or are not running.

    What do I need to change so that my Windows XP client can join the domain?
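    A minimal sketch of the host (A) records the error message points to, assuming both names should resolve to the domain controller (192.0.2.10 is a placeholder address; the _msdcs SRV targets normally all point at the DC's real hostname):

        dc.mydomain.com.    IN A    192.0.2.10
        host.mydomain.com.  IN A    192.0.2.10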

    Read the article

  • Getting an error while starting Tomcat?

    - by ram
    My Tomcat installation process is:
    1. cd /home/mpatil/Downloads/
    2. tar zxvf apache-tomcat-6.0.37.tar.gz
    3. cd apache-tomcat-6.0.37/bin
    4. ./startup.sh
    5. tail -f /home/mpatil/Downloads/apache-tomcat-6.0.37/logs/catalina.out
    Command 5 gives:

        [root@localhost bin]# tail -f /home/mpatil/Downloads/apache-tomcat-6.0.37/logs/catalina.out
        Nov 08, 2013 12:04:04 PM org.apache.catalina.startup.HostConfig deployDirectory
        INFO: Deploying web application directory docs
        Nov 08, 2013 12:04:04 PM org.apache.coyote.http11.Http11Protocol start
        INFO: Starting Coyote HTTP/1.1 on http-8080
        Nov 08, 2013 12:04:04 PM org.apache.jk.common.ChannelSocket init
        INFO: JK: ajp13 listening on /0.0.0.0:8009
        Nov 08, 2013 12:04:04 PM org.apache.jk.server.JkMain start
        INFO: Jk running ID=0 time=0/115 config=null
        Nov 08, 2013 12:04:04 PM org.apache.catalina.startup.Catalina start
        INFO: Server startup in 3036 ms

    I then tried http://locahost:8080/ in the browser, but nothing comes up. What is wrong with my commands?
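    A quick sanity check from the shell, using the connector the log above reports on port 8080 (note the exact spelling of localhost):

        curl -I http://localhost:8080/
        # An HTTP response means Tomcat is serving the welcome page;
        # "connection refused" means the HTTP connector is not actually listening.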

    Read the article

  • Apache Alias / VirtualHost run as different user

    - by inx
    I tried to create an alias or virtual host that runs as a different user. The relevant part of the Apache httpd.conf is below, but it doesn't work. Is this even possible?

        <VirtualHost blah:80>
            user DifferentUser
            group DifferentGroup
            ServerAdmin blah
            DocumentRoot blah
            ServerName blah
            ServerAlias blah
            ScriptAlias /cgi-bin/ blah
            DirectoryIndex index.html index.htm default.htm index.shtml index.php
            ErrorLog logs/blah-error_log
            CustomLog logs/blah-access_log common
            <Directory "/blah/">
                Options Indexes FollowSymLinks MultiViews ExecCGI
                AllowOverride all
                Order Deny,Allow
                Deny from none
                Allow from all
            </Directory>
        </VirtualHost>
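    The stock Apache MPMs (prefork/worker) apply User/Group only server-wide, not per <VirtualHost>. One commonly cited route is the ITK MPM; a minimal sketch, assuming apache2-mpm-itk (or the mpm_itk module) is installed:

        <VirtualHost *:80>
            ServerName blah
            DocumentRoot /blah
            # Provided by mpm-itk; ignored by other MPMs
            AssignUserID DifferentUser DifferentGroup
        </VirtualHost>

    If only CGI scripts need to run as another user, suexec is the other common option.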

    Read the article

  • What kind of proxy acl rules should be applied?

    - by user42891
    I am trying to block sites in Squid based on this article. Assuming you wanted to block access to Yahoo (e.g. http://www.yahoo.co.jp, http://www.yahoo.com, http://www.yahoo.co.in), you would ideally want to block all of the above URLs; if I use a regular expression and match on something like 'yahoo', everything containing that string seems to get blocked. We are interested in the rules most commonly applied across companies: social networking sites (e.g. Facebook, Orkut), porn sites (e.g. the keyword 'sex'), gaming sites ('games'), movie and song download sites, and upload sites (e.g. RapidShare). What would be a common set of effective rules for achieving the above?
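    A minimal squid.conf sketch of the usual pattern (the list contents are examples, not a recommended policy): dstdomain lists catch known sites, a case-insensitive url_regex catches keywords, and the deny lines must come before the existing http_access allow rules.

        acl blocked_domains dstdomain .yahoo.com .yahoo.co.jp .yahoo.co.in .facebook.com .orkut.com .rapidshare.com
        acl blocked_words url_regex -i sex games
        http_access deny blocked_domains
        http_access deny blocked_words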

    Read the article

  • Installing cURL on Ubuntu

    - by davykiash
    I am trying to install cURL on my Ubuntu server using the command sudo apt-get install php5-curl. However, I get the following error:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
          php5-curl: Depends: phpapi-20060613+lfs
                     Depends: php5-common (= 5.2.6.dfsg.1-3ubuntu4.5) but 5.3.2-0.dotdeb.1 is to be installed
        E: Broken packages

    I am running PHP Version 5.3.2-0.dotdeb.1 on my server. What's the issue? I need to get cURL up and running.
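    The dependency line hints at the conflict: Ubuntu's php5-curl is built against php5-common 5.2.6, while the installed PHP 5.3.2 comes from dotdeb, so the curl extension has to come from the same dotdeb repository as the rest of PHP. A quick check, using the package names from the error above:

        apt-cache policy php5-common php5-curl
        # If php5-common is the dotdeb build but the php5-curl candidate is the
        # Ubuntu one, the dotdeb entry in /etc/apt/sources.list is missing or
        # does not provide this package.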

    Read the article

  • Nginx Tornado Combination Causing 502 Bad Gateway Errors

    - by PlaidFan
    We are facing a problem with inconsistent 502 errors, and tracking down the reasons has been a very frustrating exercise. We can reproduce the problem by sending several simultaneous requests quickly. The problem is that "several" is only in the range of 10 to 20 within 5 seconds (not a typo), so clearly this type of load should be handled easily. We really like the Nginx + Tornado approach but are considering going to a more traditional (e.g. threading) approach because this problem has been very difficult to solve. I was wondering if you (a) know how to fix this issue and (b) know how we can track down the culprit(s). The log files simply report that a connection was refused. We have the same problem as this post: How do I debug a HTTP 502 error? But no answer is provided there on how to solve it, so I'm hoping you can help, because this may be a common issue with this type of setup. Thanks in advance, Paul
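    Sporadic 502s in this setup often trace back to a single Tornado process being blocked while new connections pile up; spreading load over several processes is the usual mitigation. A hypothetical nginx upstream sketch, assuming four Tornado instances on ports 8001-8004 (ports and process count are examples):

        upstream tornado_backend {
            server 127.0.0.1:8001;
            server 127.0.0.1:8002;
            server 127.0.0.1:8003;
            server 127.0.0.1:8004;
        }
        server {
            listen 80;
            location / {
                proxy_pass http://tornado_backend;
            }
        }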

    Read the article

  • CentOS 5.5 Package documentation

    - by fthinker
    Usually when I install a common package like PostgreSQL, MySQL or Python using yum, it installs the files from those packages into locations specific to CentOS itself. It may also install scripts specific to CentOS only. These paths may not match the defaults found in the source distributions on the PostgreSQL, MySQL or Python project websites, and the scripts are usually unique to CentOS. Recently, when I installed PostgreSQL under Ubuntu, I found some very nice distribution-specific information about how the install was organized and how to use the package in an Ubuntu way. I found this information in /usr/share/doc/. Is any such information included with CentOS?
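    A sketch of how to locate whatever documentation a package did ship on CentOS, using rpm's query options (the package name is an example):

        rpm -qd postgresql-server    # documentation files only
        rpm -ql postgresql-server    # every file the package installed
        ls /usr/share/doc/ | grep -i postgres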

    Read the article

  • Drupal 7 on Windows - File Module Problems

    - by TimothyP
    I installed Drupal 7 using the Web Platform Installer on Windows 2008. For some reason the file module, when you upload a file, uses the first few letters of the filename as the unique key to store in the database, which of course causes problems very quickly. I'm wondering, does anybody have a workaround for this?

        An AJAX HTTP request terminated abnormally. Debugging information follows.
        Path: /file/ajax/field_file/und/0/form-EBMatHzV5cZXcWvXJtdADSdyw7Id9-GIpFM_NCJg_a4
        StatusText: n/a
        ResponseText: Error message
        PDOException: SQLSTATE[23000]: [Microsoft][SQL Server Native Client 10.0][SQL Server]
        Cannot insert duplicate key row in object 'dbo.file_managed' with unique index 'uri_unique'.
        in drupal_write_record() (line 6776 of ..........\includes\common.inc).
        Error: The website encountered an unexpected error. Please try again later.
        ReadyState: undefined

    (PS: I hope Super User is the right place to ask.)

    Read the article

  • Computer only booting after POWER ON/OFF 10 times or more?

    - by Jan Gressmann
    Hi fellow geeks, recently my computer started to behave like an old car and won't start up anymore unless I flip the power switch repeatedly. What happens when I power it on:
    - The CPU fan spins briefly and very slowly, then stops
    - Same with the GPU fan
    - No BIOS beeps or HDD activity
    - The screen stays black
    After turning it on and off about 10 times, it will eventually boot normally and run smoothly without any problems whatsoever. But I'm worried it might eventually die completely. Does anyone know the most common cause of this? Maybe I should just leave the computer powered on? :)

    Read the article

  • Sharing / replicating EBS across AWS nodes

    - by skrat
    I would like to use a single EBS volume across multiple EC2 nodes (web/app servers). I've read some articles on snapshot sharing, but that doesn't suit what we need. We use the filesystem for storing DB record attachments, so when such an attachment gets created, it needs to be immediately available to all nodes (to serve). So far only NFS seems viable, but it's a pain to configure and maintain. Another option could be storing those attachments on S3 instead, but that would cut us off from doing any analysis on that data. This must be quite a common problem when scaling in AWS; what solutions are there?
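    A minimal NFS sketch for this layout, assuming the EBS volume is attached to one node and exported to the others (paths, hostname and CIDR range are examples):

        # /etc/exports on the node that has the EBS volume mounted at /mnt/attachments
        /mnt/attachments 10.0.0.0/16(rw,sync,no_subtree_check)

        # on each web/app node
        sudo mount -t nfs storage-node.internal:/mnt/attachments /mnt/attachments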

    Read the article

  • How to keep TightVNC client on Windows XP alive when connected to OS X?

    - by craibuc
    I'm using TightVNC on my Windows XP workstations to connect to a remote OS X box (10.5.x) using OS X's built-in VNC support. I've noticed that TightVNC becomes unresponsive after a period of inactivity. Is this a common issue? Restarting TightVNC solves the problem, but it can be a bit annoying. Is there a solution to this? I don't suppose copy & paste between the two systems can be made to work?

    Read the article

  • Broadband Traffic Question

    - by rutherford
    I have a broadband ADSL line with plus.net in the UK. Having checked the modem, there is no firewall or any unusual feature enabled. But since I arrived at the apartment (the broadband was already installed), I cannot log into Twitter, nor can I update any of my WordPress blogs (I can browse them and log in, but cannot save any edits or new posts). It only seems to affect these two sites, each in its own way. If I take the netbook I use here out to, say, a McDonald's or some other wifi access point, these sites work fine again. Does anyone know what could be preventing access to the pages in question? The only thing these pages have in common is that they expect a POST submission, but POST form submission works fine on other sites...

    Read the article

  • MapReduce job is hung after 1 of 5 reducers completed on single-node environment

    - by Marboni
    I have only one data node in my dev environment on EC2. I ran a heavy MR job and after 6 hours noticed that 100% of the mappers and 20% of the reducers had finished (one reducer shows 100% completion, the others 0%). It looks like the job is hung between two reducer runs. I don't see any errors in the log files. What can it be? P.S. Last logs of the successfully finished reducer:

        2012-11-09 11:29:21,576 INFO org.apache.hadoop.mapred.Task: Task:attempt_201211090523_0004_r_000000_0 is done. And is in the process of commiting
        2012-11-09 11:29:22,692 INFO org.apache.hadoop.mapred.Task: Task attempt_201211090523_0004_r_000000_0 is allowed to commit now
        2012-11-09 11:29:22,719 INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Saved output of task 'attempt_201211090523_0004_r_000000_0' to /data/output/1352457275873/20121109-053433-common
        2012-11-09 11:29:22,721 INFO org.apache.hadoop.mapred.Task: Task 'attempt_201211090523_0004_r_000000_0' done.
        2012-11-09 11:29:22,725 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
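    A couple of checks that may narrow it down, using the job ID visible in the log above (on a single node the remaining reducers can also simply be waiting for a free reduce slot); the log file path is an assumption about a default layout:

        hadoop job -status job_201211090523_0004
        # then inspect the TaskTracker log for the stuck reduce attempts, e.g.
        tail -f $HADOOP_HOME/logs/*tasktracker*.log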

    Read the article

  • Why does Exim put emails on hold if there are frozen messages in the queue?

    - by user51932
    I have a CentOS server with cPanel working as an SMTP server, which currently uses 20 different hostnames and IP addresses to deliver email for an email newsletter service. However, it is extremely slow at sending email: around 10 emails per minute, which I check by running the "exim -bpc" command. What could be affecting this? One thing I suspect is that frozen messages in the queue are slowing down sending until they are delivered, and are putting new messages on hold. What are the most common reasons a message can become frozen? Also, would it be more efficient to use 20 different small VPSs to send out email rather than one large VPS with the 20 different hostnames and IPs on it?
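    A sketch of the usual queue inspection on an Exim box (exiqgrep normally ships with Exim):

        exiqgrep -z -c                      # count frozen messages in the queue
        exiqgrep -z -i | xargs exim -Mrm    # remove every frozen message
        exim -bp | head                     # eyeball the oldest queued messages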

    Read the article

  • What to do with old hard drives?

    - by caliban
    I have over 100 old hard drives, ranging from 100 MB Quantums to 200 GB WDs, most of them PATA, some SATA. Most still work. The squirrel mentality runs in my family: hoard everything, discard nothing. Thus the relevant question: any suggestions on how to put these drives to use (anything) instead of them just being deadweights and space takers around the office? Objectives and suggestions to keep in mind when you post an answer:
    - It should showcase your geekiness, be plain fun, serve a social purpose, or benefit the community.
    - You do not need to limit your answer to only one hard drive; if your project needs all 100+, bring it on!
    - Your answer need not be limited to one project per hard drive; if one drive can be used for multiple projects, bring it on!
    - If additional accessories need to be purchased, make sure they are common. Don't tell me to get a moon rock or something.
    - The projects you suggest should serve a utility, not just decoration.

    Read the article

  • Binding services to localhost and using SSH tunnels - can requests be forged?

    - by Martin
    Given a typical web server with Apache 2, common PHP scripts and a DNS server, would it be sufficient from a security perspective to bind administration interfaces like phpMyAdmin to localhost and access them via SSH tunnels? Or could somebody who knew, e.g., that phpMyAdmin (or any other commonly available script) is listening on a certain port on localhost easily forge requests that would be executed if no other authentication were present? In other words: could somebody from somewhere on the internet easily forge a request so that the web server would accept it, thinking it originated from 127.0.0.1, if the server is listening on 127.0.0.1 only? If there were a risk, could it be dealt with on a lower level than the application, e.g. by using iptables? The idea being that if someone found a weakness in a PHP script or Apache, the network would still block the request because it did not arrive via an SSH tunnel.
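    A sketch of the two pieces involved, under common assumptions (modern kernels already drop packets arriving from outside that claim a 127.0.0.0/8 source, so the iptables rule is belt-and-braces; names and ports are examples):

        # refuse loopback-source packets that did not actually arrive on lo
        iptables -A INPUT ! -i lo -s 127.0.0.0/8 -j DROP

        # reach a phpMyAdmin bound to 127.0.0.1:80 on the server from a workstation
        ssh -L 8080:127.0.0.1:80 admin@server.example.com
        # then browse http://localhost:8080/phpmyadmin locally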

    Read the article

  • Is anyone working on an encfs client for Windows?

    - by snth
    I've been looking into encfs as a solution to encrypt my personal data. However I want to access this data both on Linux and Windows on different machines (synced through Dropbox). So far all Google searches have brought up pages which specify that there is no Windows client that reads encfs. Therefore my question is: is anyone working on a Windows client for encfs? It would be really useful if someone was and it seems to be a common enough issue raised that I have a glimmer of hope that someone might be working on it.

    Read the article

  • Is it possible to have tab completion of drop-down lists in web pages in Firefox?

    - by Nick Booker
    Does anyone know of a Firefox plugin that would enable tab-completion (or some other key sequence like Alt-L) of items in drop-down lists in web forms? e.g. ou<TAB>in<TAB>s<TAB> for 'OurCompany - Internal Support' Vimperator's hints mode makes it very ergonomic to focus the drop-down list with a key sequence like f13 but the keyboard interface to the drop-down list still sucks. I very frequently have to pick items from a very long list with very long common prefixes among the entries (e.g. 30-40 starting with OurCompany -), which renders both the built-in keyboard interface and the mouse pretty slow and unergonomic. I basically want readline support for filling webforms!

    Read the article

  • How are suspected DoS attacks handled by webservers?

    - by Jan Kuboschek
    I rent a server, somewhere out in Canada, that I'm using to host a website of mine. That website has close to 400,000 pages that I wanted to index today. For that, I wrote a crawler a while back (see JCrawler on Stackoverflow.com). Now, I'm greedy and didn't want it to take too long, so I ran multiple threads, resulting in some 60+ requests per second from my IP. A couple of minutes later, my server locked me out. I can still FTP into it, but I can't reach it over HTTP. As a server administrator or user, do you have any idea how servers usually handle these situations? Is it common to place a permanent or temporary ban on the IP, or what is typically done? Naturally, I'll re-run my software with fewer requests once I'm back on.
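    For reference, one common pattern is a firewall-level rate limit that drops aggressive clients; a sketch with the iptables recent module (the thresholds are illustrative, not what any particular host uses):

        iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --name http --set
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --name http \
                 --update --seconds 10 --hitcount 100 -j DROP

    Tools such as fail2ban or Apache's mod_evasive automate the same idea, usually with temporary bans.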

    Read the article

  • How to forward HTTP traffic through a specific network adapter?

    - by user18129
    I have the following scenario. Two laptops are connected via a router through their Ethernet ports; these two computers need to be able to communicate with each other. One computer also needs to access the internet through a different adapter (we will be taking these two laptops to various sites, where the most common type of internet access will be wireless). In isolation all of the adapters work fine (the internal network works, and the wireless connects to the internet). However, when we turn on all of the adapters at the same time, the following occurs:
    - If we bridge the two network connections together on the "server", the internet connection doesn't work through the wireless.
    - If we don't bridge the connections, the internet connection doesn't work either.
    It seems like HTTP traffic is being sent through the Ethernet adapter (which of course is not connected to an internet connection). How can we solve this?
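    One way to steer internet traffic is to make the wireless adapter's default route the preferred one; a sketch with the Windows route command (the gateway address and interface index are examples read off the route print output):

        route print
        rem give the wireless adapter's default route the lowest metric
        route add 0.0.0.0 mask 0.0.0.0 192.168.1.1 metric 5 if 11

    Removing the default gateway from the Ethernet adapter's TCP/IP settings achieves much the same thing without bridging.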

    Read the article

  • Debian Squeeze can't install php-pear

    - by Lennier
    I use Debian 6.0.6. sudo apt-get install php-pear results in:

        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
          initscripts : Breaks: console-setup (< 1.74) but 1.68+squeeze2 is to be installed
                        Breaks: initramfs-tools (< 0.104) but 0.98.8 is to be installed
                        Breaks: nfs-common (< 1:1.2.5-3) but 1:1.2.2-4squeeze2 is to be installed
          keyboard-configuration : Breaks: console-setup (< 1.71) but 1.68+squeeze2 is to be installed
          klibc-utils : Breaks: initramfs-tools (< 0.103) but 0.98.8 is to be installed
        E: Broken packages

    How can I solve it?
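    The Breaks lines suggest packages from a release newer than squeeze (e.g. initscripts and klibc-utils from wheezy) have already been pulled in. A couple of checks that usually show where the mix comes from (package names are the ones from the error above):

        grep -r '^deb ' /etc/apt/sources.list /etc/apt/sources.list.d/
        apt-cache policy initscripts console-setup php-pear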

    Read the article

  • Restricting output to only allow localhost using iptables

    - by Dave Forgac
    I would like to restrict outbound traffic to localhost only using iptables. I already have a default DROP policy on OUTPUT and a rule REJECTing all traffic; I need to add a rule above that in the OUTPUT chain. I have seen a couple of different examples for this type of rule, the most common being:

        -A OUTPUT -o lo -j ACCEPT

    and

        -A OUTPUT -o lo -s 127.0.0.1 -d 127.0.0.1 -j ACCEPT

    Is there any reason to use the latter rather than the former? Can packets on lo have an address other than 127.0.0.1?
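    For what it's worth, packets on lo are not limited to 127.0.0.1: traffic addressed to any of the host's own addresses is delivered over lo, so the two rules above are not equivalent. A quick way to see this (the address is an example; use one of the host's real interface addresses):

        ip route get 192.0.2.10
        # local 192.0.2.10 dev lo  src 192.0.2.10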

    Read the article

  • How to mount a iSCSI/SAN storage drive to a stable device name (one that can't change on re-connect)?

    - by jcalfee314
    We need stable device paths for our TwinStrata SAN drives. Many guides for setting up iSCSI connections simply say to use a device path like /dev/sda or /dev/sdb. This is far from correct; I doubt any setup would be happy to have its device name suddenly change (from /dev/sda to /dev/sdb, for example). The fix I found was to install multipath and start multipathd on boot, which then provides a stable mapping from the storage's WWID to a device path like /dev/mapper/firebird_database. This is the method described in the CentOS/Red Hat documentation: http://www.centos.org/docs/5/html/5.1/DM_Multipath/setup_procedure.html. This seems a little complicated, though. We noticed that it is common to see UUIDs in fstab on new installs. So the question is: why do we need an external program (multipathd) running to provide a stable device mount? Should there be a way to put the WWID directly in /etc/fstab?
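    When multipathing itself isn't needed, a filesystem UUID (or a /dev/disk/by-id/... path, which embeds the WWID) gives a stable reference without any daemon; a sketch with placeholder values:

        blkid /dev/sdb1         # prints the filesystem UUID
        ls -l /dev/disk/by-id/  # udev-maintained stable names, including WWIDs

        # /etc/fstab entry (UUID and mount point are examples; _netdev delays the
        # mount until the network, and hence the iSCSI session, is up)
        UUID=0a1b2c3d-1111-2222-3333-444455556666  /var/lib/firebird  ext3  _netdev  0 0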

    Read the article

  • A space-efficient filesystem for grow-as-needed virtual disks ?

    - by Steve Schnepp
    A common practice is to use non-preallocated virtual disks. Since they only grow as needed, they are perfect for fast backups, overallocation and quick creation. Since filesystems are usually designed for physical disks, they tend to use the whole available area [1] in order to increase speed [2] or reliability [3]. I'm searching for a filesystem that does the exact opposite: one that tries to touch the minimum number of blocks needed, through aggressive block reuse. I would happily trade some performance for space usage. There is already a similar question, but it is rather general; I have a very specific goal: space-efficiency.
    [1] Like page caching uses all the free physical memory.
    [2] Canonical example: online defragmentation.
    [3] Canonical example: snapshotting.

    Read the article

  • Windows and domain suffix addition

    - by grawity
    I have a DNS domain and host it on my own server. My desktop PC (Windows XP) is configured with mydomain.tld as its primary DNS suffix. Now, when the system tries to resolve any domain - stackoverflow.com, for example - it tries with the suffix added first, even if the name has periods in it. In other words, it tries stackoverflow.com.mydomain.tld. before stackoverflow.com.. Is this valid according to DNS standards and common sense? Is there anything I can do to prevent it, other than removing the suffix completely? (I still want it to be appended to single-component hostnames. Currently I have the two suffixes . and mydomain.tld. configured, but resolving foohost that way isn't very fast.)

    Read the article
