Search Results

Search found 29101 results on 1165 pages for 'thread local storage'.

Page 303 of 1165

  • PCI Encryption Key Management

    - by Unicorn Bob
    (Full disclosure: I'm already an active participant here and at StackOverflow, but for reasons that should hopefully be obvious, I'm choosing to ask this particular question anonymously.)

    I currently work for a small software shop that produces software that's sold commercially to manage small- to mid-size businesses in a couple of fairly specialized industries. Because these industries are customer-facing, a large portion of the software is related to storing and managing customer information. In particular, the storage (and securing) of customer credit card information. With that, of course, comes PCI compliance. To make a long story short, I'm left with a couple of questions about why certain things were done the way they were, and I'm unfortunately without much of a resource at the moment. This is a very small shop (I report directly to the owner, as does the only other full-time employee), the owner doesn't have an answer to these questions, and the previous developer is...err...unavailable.

    Issue 1: Periodic re-encryption. As of now, the software prompts the user to do a wholesale re-encryption of all of the sensitive information in the database (basically credit card numbers and user passwords) if either of these conditions is true:

    1. There are NON-encrypted pieces of sensitive information in the database (added through a manual database statement instead of through the business object, for example). This should not happen during ordinary use of the software.
    2. The current key has been in use for more than a particular period of time. I believe it's 12 months, but I'm not certain of that. The point here is that the key "expires".

    This is my first foray into commercial solution development that deals with PCI, so I am unfortunately uneducated on the practices involved. Is there some aspect of PCI compliance that mandates (or even just strongly recommends) periodic key updating? This isn't a huge issue for me, other than that I don't currently have a good explanation to give to end users if they ask why they are being prompted to run it.

    Question 1: Is the concept of key expiration standard, and, if so, is it simply industry standard or an element of PCI?

    Issue 2: Key storage. Here's my real issue: the encryption key is stored in the database, just obfuscated. The key is padded on the left and right with a few garbage bytes and some bits are twiddled, but fundamentally there's nothing stopping an enterprising person from examining our (dotfuscated) code, determining the pattern used to turn the stored key into the real key, and then using that key to run amok. This seems like a horrible practice to me, but I want to make sure that this isn't just one of those "grin and bear it" practices that people in this industry have taken to. I have developed an alternative approach that would prevent such an attack, but I'm just looking for a sanity check here.

    Question 2: Is this method of key storage (namely, storing the key in the database using an obfuscation method that exists in client code) normal or crazy?

    Believe me, I know that free advice is worth every penny that I've paid for it, nobody here is an attorney (or at least isn't offering legal advice), caveat emptor, etc., but I'm looking for any input that you all can provide. Thank you in advance!
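
    For comparison, a hedged sketch of one standard alternative, envelope encryption: the data-encryption key (DEK) is stored in the database only in wrapped (encrypted) form, while the key-encryption key (KEK) lives outside the database entirely. All names and paths below are illustrative assumptions, not details from the post:

      # Generate a random 256-bit DEK (this is what actually encrypts card data)
      openssl rand -hex 32 > dek.hex

      # Wrap the DEK under a KEK kept outside the database, e.g. a root-only file
      # (an HSM or OS keystore would be stronger); store only dek.wrapped in the DB.
      # The -pbkdf2 flag assumes OpenSSL 1.1.1 or later.
      openssl enc -aes-256-cbc -pbkdf2 -salt \
          -in dek.hex -out dek.wrapped -pass file:/etc/myapp/kek.bin

      # At runtime, unwrap the DEK in memory, use it, then discard it
      openssl enc -d -aes-256-cbc -pbkdf2 \
          -in dek.wrapped -pass file:/etc/myapp/kek.bin

    A side benefit that bears on Issue 1: rotating the KEK only requires re-wrapping the small DEK, not re-encrypting every card number in the database.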

    Read the article

  • Password not working for sudo ("Authentication failure")

    - by Souta
    Before I mention anything further, DO NOT give me a response saying that the terminal won't show password input. I'm AWARE of that. I'm typing my user password in (not a caps-lock issue), and for some reason it still says 'Authentication failure'. Is there some other password (one I'm not aware of) I'm supposed to be using other than my user password? I've had this Ubuntu before, on another hard drive, and I didn't have this problem. (And it was the same Ubuntu, Ubuntu 12.04 LTS.)

      ai@AiNekoYokai:~$ groups
      ai adm cdrom sudo dip plugdev lpadmin sambashare

      ai@AiNekoYokai:~$ lsb_release -rd
      Description:    Ubuntu 12.04 LTS
      Release:        12.04

      ai@AiNekoYokai:~$ pkexec cat /etc/sudoers
      #
      # This file MUST be edited with the 'visudo' command as root.
      #
      # Please consider adding local content in /etc/sudoers.d/ instead of
      # directly modifying this file.
      #
      # See the man page for details on how to write a sudoers file.
      #
      Defaults        env_reset
      Defaults        secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

      # Host alias specification

      # User alias specification

      # Cmnd alias specification

      # User privilege specification
      root    ALL=(ALL:ALL) ALL

      # Members of the admin group may gain root privileges
      %admin  ALL=(ALL) ALL

      # Allow members of group sudo to execute any command
      %sudo   ALL=(ALL:ALL) ALL

      # See sudoers(5) for more information on "#include" directives:
      #includedir /etc/sudoers.d

    I can log in with my password, but it's not accepted as valid for authentication <-- that is pretty much my issue. (Although I haven't gone into recovery mode.) I've run:

      ai@AiNekoYokai:~$ ls /etc/sudoers.d
      README

    And I also reinstalled sudo with:

      pkexec apt-get update
      pkexec apt-get --purge --reinstall install sudo
      pkexec usermod -a -G admin $USER   <-- says admin does not exist
      su $USER                           <-- worked for me; however, my password
                                             still does not do much (in the sense
                                             of not working for other things)

    I changed my password with pkexec passwd $USER; I was able to change it, no problem. gksudo xclock was something I was able to get into, no problem (the clock showed):

      ai@AiNekoYokai:~$ gksudo xclock
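
    A few hedged diagnostics that fit this situation (assuming pkexec keeps working, as it does in the post); these only inspect state and change nothing:

      # Is the password locked (L), usable (P), or absent (NP)?
      pkexec passwd --status ai

      # Does the shadow entry hold a hash, or a '!'/'*' lock marker?
      pkexec grep '^ai:' /etc/shadow

      # Anything unusual in the PAM stack that sudo authenticates against?
      pkexec cat /etc/pam.d/common-auth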

    Read the article

  • How can I connect my USB (HP) printer in 10.04? It can't be discovered, but worked in 9.x

    - by Brian
    My printer was working under 9.x. It is an HP Photosmart C3100 series. When I open Administration -> Printing, no local printers are found. I try to add it via "Other" (my local choices are Serial and Other). I have tried many URIs: ipp://localhost:631/ipp, http://localhost/ipp, localhost, 127.0.0.1, etc. None have worked. Under "Networked" I have tried JetDirect, using localhost and 127.0.0.1 and port 631. I have tried many options under IPP with different variants in the host, trying to verify a printer. No luck. I tried LPD/LPR with localhost and tried the probe. No luck. I tried the CUPS admin via localhost:631 and that didn't work. On the old version it simply found the local printer; I might have picked the driver, I can't remember, but it was the Photosmart C3100 series that was working. I just can't get 10.04 to print.
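
    A hedged first check from the command line, since the GUI only shows what CUPS detects (the queue name and driver placeholder below are illustrative):

      # Which device URIs does CUPS actually see? A live USB printer
      # should show up as usb://...
      sudo lpinfo -v

      # List installed drivers matching the printer
      sudo lpinfo -m | grep -i photosmart

      # If a usb:// URI appears, add the queue by hand
      sudo lpadmin -p c3100 -E -v 'usb://HP/Photosmart%20C3100%20series' \
          -m <driver-from-previous-step>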

    Read the article

  • Windows Vista Backups?

    - by skaz
    I am trying to configure a Windows Backup on Vista but don't see some capabilities I would expect to be there. For one, it looks like I can only select a local drive or a network share. I want to use a local drive, but I want to use a subfolder of one of the drives. Must I really pick the root? As a workaround, I made a network share to the local drive, thinking I could then pick the network share. However, when I do this, I am prompted for credentials to hit the share, and none work. Yet the share works in Explorer, and it works from other computers, so the access is configured correctly. Is there any way to do what I am trying to do? Thanks.

    Read the article

  • Can't update 12.04: getting package header error

    - by joel
    I originally posted this question and was redirected to another thread where the question had already been asked. I then posted to that thread, had my post deleted by moderator fossfreedom, and was told to post a "new" question... so whatever. I don't care if it's old or new, I just need help here, people! In a nutshell, I can't use sudo apt-get update or the GUI update tool to update my system. Any time I try using either tool, it gives me an error about packages missing headers. I can't send error reports. I have tried all the listed solutions from this post: I can't update my system properly, "no package header" error, and from this post: "Problem with MergeList" error when trying to do an update, and neither one works. I just want a working solution, since I don't have the means of re-installing the OS entirely, and I REALLY don't want to have to go back to using Windows.
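
    For what it's worth, the usual sketch for MergeList/package-header corruption is to throw away apt's downloaded package lists and fetch them again; this is offered as a common suggestion, not a guaranteed fix:

      # Remove the (possibly corrupt) cached package lists
      sudo rm -rf /var/lib/apt/lists/*
      sudo mkdir -p /var/lib/apt/lists/partial

      # Rebuild them from the configured repositories
      sudo apt-get clean
      sudo apt-get update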

    Read the article

  • Android Loading Screen: How do I go about using a stack to load elements and incrementing a size counter?

    - by tom_mai78101
    I have some problems with figuring out what value I should put in the call:

      int value_needed_to_figure_out = X;
      progressBar.incrementProgressBy(value_needed_to_figure_out);

    I've been researching loading screens and how to use them. Some examples I've seen implement Thread.sleep() in a Handler.post(new Runnable()) call. To me, I got most of the concept of using the Handler to update the ProgressBar while pretending to do some heavy crunching work. So I kept looking. I have read this thread here: How do I load chunks of data from an asset manager during a loading screen? It said that I can try using a stack of the elements to be loaded, adding to a size counter as I add elements to the stack. What does that mean? This is the part where I'm totally stumped. If anyone could provide some hints, I'll gladly appreciate it. Thanks in advance.

    Read the article

  • .bat file to disable Ethernet adaptor and then re-enable it after Windows login

    - by jaslr
    When I log into Windows 7, I need to wait 10 seconds and then disable the Local Area Connection (Ethernet adaptor) and then re-enable it. I have looked through the suggested answer (Enable/disable wireless interface in a bat file), but that seems irrelevant, as it just toggles the current state. From what I can tell, I need to include:

      netsh interface set interface "Local Area Connection" DISABLED
      netsh interface set interface "Local Area Connection" ENABLED

    but I'm unsure of the wait time, or how I can have this start after Windows has successfully logged in. What's the best approach here?

    Read the article

  • Remote file copy util (like rsync) that will take account of data already copied (in this session)

    - by Rory McCann
    Let's say I have a directory with 2 files, both identical and quite large (e.g. 2 GB each). I want to rsync that directory to a remote host. As I understand it (and I could be wrong), rsync calculates checksums of files. Surely if it sees 2 files with the same checksum it can just copy the first file, then do a local copy on the remote host for the 2nd file? That would make it faster, no? On a similar note, doesn't rsync hash all the remote files before copying? If it saw a different file with the same hash as a file that was to be transferred, it could do a local copy on the remote host. Does rsync support this sort of thing? Is there some way to turn it on? Is there a tool similar to rsync that will do this sort of 'hash-based' local copy?
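
    As far as I know, rsync has no cross-file deduplication of exactly this kind, but two existing options get close; a hedged sketch with placeholder paths:

      # --copy-dest: if an identical file already exists under the given
      # directory on the receiving side, it is copied locally there instead
      # of being pulled over the wire
      rsync -av --copy-dest=/backups/previous/ localdir/ remote:/backups/current/

      # -H: files that are hard links to each other are transferred once
      # and re-linked on the remote side
      rsync -avH localdir/ remote:/backups/current/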

    Read the article

  • "Nos petits trucs utiles de développeurs", et si on partageait nos astuces et nos trouvailles de programmation ?

    Hello! I hope I'm posting in the right forum; if not, my thanks to the moderators for redirecting this thread. For a long time I've wanted to create something really useful that could serve everyone, aimed above all at beginners, but which could, why not, also serve the rest of us. So here is a thread entirely devoted to those unpretentious little tricks, simple as can be, yet very effective in practice. This is in no way about complicated code or high-flying manipulation; quite the opposite, and accessible to all. For our greater enjoyment as developers, I invite you to add...

    Read the article

  • Upgrading to 12.10 on an external hard drive

    - by Tom Childers
    I did some googling on this and didn't find anything specific for my situation. I currently have 12.04 installed on an external USB hard drive. It's working great. I want to upgrade it to 12.10. My bandwidth is very limited, so I have a friend who will download 12.10 for me and put it on a flash stick; then I can upgrade without having to do the download myself. Which particular version of the 12.10 download file(s) should I get? Are there alternate 12.10 downloads that have all the packages? How do I set it up so that when I upgrade 12.04 I can tell it to look in some local repository for the 12.10 files? Can I just dump the 12.10 files in some local directory, or do I have to go through some complex commands to create a local repository? I'm pretty new to Linux, so a long process of complex terminal commands will probably be a show-stopper for me. Remember that my 12.04 install resides on an external hard drive, and I have a laptop with multiple USB ports. Thanks! Advait
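
    One hedged possibility for exactly this kind of sneakernet upgrade is apt-offline (whether it fully covers a 12.04-to-12.10 release upgrade, as opposed to ordinary package upgrades, is an assumption to verify before relying on it):

      # On the offline machine: describe what is needed
      sudo apt-offline set upgrade.sig --update --upgrade

      # On the friend's connected machine: download everything into one bundle
      apt-offline get upgrade.sig --bundle upgrade.zip

      # Back on the offline machine: feed the bundle to apt
      sudo apt-offline install upgrade.zip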

    Read the article

  • Java @Contented annotation to help reduce false sharing

    - by Dave
    See this posting by Aleksey Shipilev for details -- @Contended is something we've wanted for a long time. The JVM provides automatic layout and placement of fields. Usually it'll (a) sort fields by descending size to improve footprint, and (b) pack reference fields so the garbage collector can process a contiguous run of reference fields when tracing. @Contended gives the program a way to provide more explicit guidance with respect to concurrency and false sharing. Using this facility we can sequester hot, frequently written shared fields away from other mostly read-only or cold fields. The simple rule is that read-sharing is cheap, and write-sharing is very expensive. We can also pack fields together that tend to be written together by the same thread at about the same time.

    More generally, we're trying to influence relative field placement to minimize coherency misses. Fields that are accessed closely together in time should be placed proximally in space to promote cache locality. That is, temporal locality should condition spatial locality: fields accessed together in time should be nearby in space. That having been said, we have to be careful to avoid false sharing and excessive invalidation from coherence traffic. As such, we try to cluster or otherwise sequester fields that tend to be written at approximately the same time by the same thread onto the same cache line. Note that there's a tension at play: if we try too hard to minimize single-threaded capacity misses, then we can end up with excessive coherency misses running in a parallel environment. There's no single optimal layout for both single-threaded and multithreaded environments, and the ideal layout problem itself is NP-hard.

    Ideally, a JVM would employ hardware monitoring facilities to detect sharing behavior and change the layout on the fly. That's a bit difficult, as we don't yet have the right plumbing to provide efficient and expedient information to the JVM. Hint: we need to disintermediate the OS and hypervisor. Another challenge is that raw field offsets are used in the unsafe facility, so we'd need to address that issue, possibly with an extra level of indirection. Finally, I'd like to be able to pack final fields together as well, as those are known to be read-only.
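
    A minimal sketch of trying this on a JDK 8 HotSpot build (the annotation's package and the flags below are HotSpot implementation details, not a supported API, so treat them as assumptions):

      # A field is marked in Java source roughly as:
      #     @sun.misc.Contended long hotCounter;
      # HotSpot ignores the annotation outside the bootclasspath unless asked:
      java -XX:-RestrictContended MyFalseSharingBenchmark

      # The padding inserted around contended fields is tunable (default 128 bytes):
      java -XX:-RestrictContended -XX:ContendedPaddingWidth=128 MyFalseSharingBenchmark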

    Read the article

  • tailwatchd - chkservd on host.domain.com status: hang

    - by Zim3r
      The chkservd sub-process with pid 17420 was running for 602 seconds.
      The sub-process was terminated as it exceeded the time between checks
      of 300 seconds. Please check /var/log/chkservd.log and
      /usr/local/cpanel/logs/tailwatchd_log to discover the

    I was notified of this error by email on the destination server while transferring a server. What does it mean? This also happened:

      ftpd failed @ Wed Aug 8 11:26:38 2012. A restart was attempted automagically.
      Service Check Method: [socket connect]
      Reason: Timeout while trying to get data from service: Died at
      /usr/local/cpanel/Cpanel/TailWatch/ChkServd.pm line 607.
      Number of Restart Attempts: 1
      Startup Log:
      Starting pure-config.pl: Running: /usr/sbin/pure-ftpd -O clf:/var/log/xferlog
      --daemonize -A -c50 -B -C8 -D -fftp -H -I15 -lextauth:/var/run/ftpd.sock
      -L10000:8 -m4 -s -U133:022 -u100 -Oxferlog:/usr/local/apache/domlogs/ftpxferlog
      -k99 -Z -Y1 -JHIGH:MEDIUM:+TLSv1:!SSLv2:+SSLv3
      [ OK ]
      Starting pure-authd:

    Read the article

  • I'm a Subversion geek, why should I consider or not consider Mercurial or Git or any other DVCS?

    - by user2567
    I'm trying to understand the benefits of distributed version control systems (DVCS). I found Subversion Re-education and this article by Martin Fowler very useful.

    Mercurial and other DVCSs promote a new way of working on code, with changesets and local commits. It prevents merging hell and other collaboration issues. We are not affected by this, as I practice continuous integration, and working alone in a private branch is not an option unless we are experimenting. We use a branch for every major version, in which we fix bugs merged from the trunk.

    Mercurial allows you to have lieutenants. I understand this can be useful for very large projects like Linux, but I don't see the value in small and highly collaborative teams (5 to 7 people).

    Mercurial is faster, takes less disk space, and a full local copy allows faster log and diff operations. I'm not concerned by this either, as I didn't notice speed or space problems with SVN, even with the very large projects I'm working on.

    I'm seeking your personal experiences and/or opinions as former SVN geeks, especially regarding the changesets concept and the overall performance boost you measured.

    UPDATE (12th Jan): I'm now convinced that it's worth a try.

    UPDATE (12th Jun): I kissed Mercurial and I liked it. The taste of its cherry local commits. I kissed Mercurial just to try it. I hope my SVN server don't mind it. It felt so wrong. It felt so right. Don't mean I'm in love tonight.

    FINAL UPDATE (29th Jul): I had the privilege of reviewing Eric Sink's next book, called Version Control by Example. He finished convincing me. I'll go with Mercurial.

    Read the article

  • Setting up a LAMP VM server for Development and Testing?

    - by TdotThomas
    Info: I would like to set up a VM server on my local computer which will serve pages in exactly the same way as my current hosting (but only to me, on my local computer). I currently pay a big web hosting company to host my website & web store, and they are doing a great job, but I would like to be able to work on my web site and its corresponding MySQL DB, HTML, and PHP code without being at risk of messing something up completely on the live servers.

    My current plan of action:

    1. Set up a VM web server with Debian, MySQL, PHP, Apache.
    2. Copy web store (PHP/HTML) code to the VM server.
    3. Copy my current MySQL databases from my hosting provider and install them on the VM server.
    4. Modify and test new features on the VM server.
    5. Upload the MySQL DB and HTML/PHP code back to the web host's server, where it should work as before but with the new modifications.

    Questions: Now I'm pretty sure I have steps one and two down correctly, but I can't for the life of me figure out how to proceed next, so here are my questions (see the sketch below). I have my /etc/hosts file set up so www.MySite.test redirects to the IP address of the local VM web server. Once I import my PHP/HTML files and MySQL file, what's the best way to navigate around the fact that all of my files and DBs will reference www.MySite.com? I can export my MySQL DBs, but do I also have to export my MySQL users and passwords to access those DBs, or are those coded into my HTML/PHP code?
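
    A hedged sketch of step 3 and the hostname problem (all names below are placeholders). One relevant fact: MySQL accounts live in the mysql system database, not in the dumped application DB, so they must be recreated on the VM; the credentials the app uses are typically in a PHP config file:

      # On the live host (or via the host's export tool):
      mysqldump -u produser -p storedb > storedb.sql

      # On the VM:
      mysql -u root -p -e 'CREATE DATABASE storedb'
      mysql -u root -p storedb < storedb.sql
      mysql -u root -p -e "CREATE USER 'appuser'@'localhost' IDENTIFIED BY 'secret';
                           GRANT ALL ON storedb.* TO 'appuser'@'localhost';"

      # Make this machine resolve the real production name to the VM, so the
      # hard-coded www.MySite.com references keep working locally:
      echo '192.168.56.10  www.MySite.com' | sudo tee -a /etc/hosts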

    Read the article

  • Mac OS X 10.6 executable not found without full path

    - by Danack
    I just installed Apache via MacPorts. It seems that my Mac was absolutely confused about which version of the Apache executables to run. After moving the Apache executables that ship with the Mac to a directory that is not listed in the PATH variable, trying to run the httpd built by MacPorts fails, even though the correct directory (/opt/local/apache2/bin) is listed in the PATH variable. If I navigate to the directory /opt/local/apache2/bin and type the command httpd, I still get the error message:

      -bash: httpd: command not found

    If I type the command with the full path, /opt/local/apache2/bin/httpd, it works fine. I've run the command alias to see if something was clashing, but the only thing listed is:

      alias wget='curl -O'

    How do I find what is intercepting the command and preventing the executable from being found, even when I'm inside the same directory? By the way, the httpd file is executable:

      -rwxr-xr-x  1 root  admin  442496  9 May  2012 httpd
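
    Two things worth checking here, offered as a hedged sketch: the shell never searches the current directory (so being inside /opt/local/apache2/bin doesn't help unless you type ./httpd), and bash caches the location where a command was first resolved:

      echo $PATH      # is /opt/local/apache2/bin actually in this shell's PATH?
      type -a httpd   # every place the shell can find httpd, cache included
      hash -r         # throw away bash's cached command locations
      type httpd      # re-resolve; should now show the MacPorts binary
      ./httpd -v      # an explicit relative path always works from inside the dir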

    Read the article

  • netsh: set default IP address for an interface with multiple IPs

    - by elarichi.y
    To test a load balancer I need to switch my IP address several times a day, while keeping the other IPs routing through other WANs. I run these commands in a batch script:

      netsh interface ip set address "Connexion au réseau local" static %ipd% 255.255.255.0 192.168.1.1 1
      netsh in ip add address "Connexion au réseau local" %ips1% 255.255.255.0
      netsh in ip add address "Connexion au réseau local" %ips2% 255.255.255.0

    ipd is the default IP I want to set (all traffic should go through it). ips1 and ips2 are the secondary IPs I want to keep. But whatever I do, all traffic goes through one IP (the first one in the range)! Please help me with this issue.

    Read the article

  • Windows XP: svchost.exe connecting to different IPs with remote port 445

    - by Coll911
    I'm using Windows XP Professional SP2. Whenever I start Windows, svchost.exe starts connecting to all the possible IPs on the LAN, from 192.168.1.2 to 192.168.1.200. The local port ranges from 1000-1099, with the remote port being 445. After it's done with the local IPs, it starts connecting to other random IPs. I tried blocking connections to port 445 using the local security policies, but it didn't work. Is there any possible way I could prevent svchost from connecting to these IPs without involving any installed firewall? My PC slows down due to the load. I'd be thankful for any advice.

    Read the article

  • Let CGI-PHP load a non-default shared library.

    - by ralle
    In Apache2 I configured PHP as CGI in a virtual host:

      SetEnv PHPRC "/usr/local/php5.3"
      ScriptAlias /php5.3 "/usr/local/php5.3/bin"
      Action application/php5.3 /php5.3/php-cgi
      AddType application/php5.3 .php

    Everything works fine. Now I have some issues with the standard version of the GD library, because it restricts me in setting several hinting and anti-aliasing options for fonts. Therefore I want to modify the GD source and create a new shared library. Since I don't want a modified library in my system, I want only PHP to use that library. My question now: how can I change the Apache configuration so that PHP uses a certain new version of the library? Something like this does not work:

      ScriptAlias /php5.3 "LD_LIBRARY_PATH=/path/to/my/lib:$LD_LIBRARY_PATH /usr/local/php5.3/bin"
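
    ScriptAlias maps URLs to filesystem paths and cannot inject environment variables, but a small wrapper script can; a sketch under that assumption (the wrapper name is made up). Create /usr/local/php5.3/bin/php-cgi-wrapper containing:

      #!/bin/sh
      # Prepend the custom GD build for this process tree only
      LD_LIBRARY_PATH=/path/to/my/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
      export LD_LIBRARY_PATH
      exec /usr/local/php5.3/bin/php-cgi "$@"

    Then chmod +x it and point the Action directive at /php5.3/php-cgi-wrapper instead of /php5.3/php-cgi.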

    Read the article

  • Do you have to recreate workspaces after upgrading a TFS 2008 server to TFS 2010?

    - by Clara Oscura
    I am just reposting this thread from an MSDN forum, since it seems to be unavailable. It was very useful when I was having trouble with my folder mappings after migrating to TFS 2010.

    Question: I opened VS2008 and connected it to the upgraded 2010 TFS server. Upon clicking any of our Team Projects in Source Control Explorer I get "Team Foundation Error - The workspace MYWORKSPACE;DOMAIN\MYUsername already exists on computer MYPCNAME."

    Answer: The same local paths on your machine are mapped to 2 different workspaces, one on the pre-upgrade server and one on the post-upgrade server. It's not safe to have multiple workspaces on different servers mapped to the same local paths, because you could pend some changes while connected to one server, and the other server would have no idea what you did. You should either delete your conflicting workspaces from one of the servers (if you don't need them on both), or test the new TFS instance from a new workspace (on a different machine). If you want to test an existing production workspace on both servers, then yes, you will have to mess around with the workspace cache. You don't have to delete the entire cache; you just need to run "tf workspaces /remove:* /server:<serverurl>" to clear the cached workspaces from a server (the command won't delete the workspaces), and possibly "tf workspaces /server:<server>" to refresh the workspace cache for a given server. You will also have to back up and restore the workspace before switching servers, or your local files could be inconsistent.

    From the "Microsoft Visual Studio Team Foundation Server 2010 Beta 1" forum (not available anymore?)

    Technorati Tags: TFS 2010, TFS Workspaces, Team System, Team Foundation Server 2010

    Read the article

  • Have You Visited the New Procurement Enhancement Request Community?

    - by LuciaC
    Have you visited the new Procurement Enhancement Request Community yet? If not, we strongly encourage you to visit this site to vote on current Enhancement Requests (ERs) available through the 'Quick Preview of Voting List'. You can also vote on any ER currently displayed. Have an ER that is not listed? Simply add it by creating a thread stating the ER and any detailed information you would like to include. If the ER already exists in the database, we will add the ER # to the thread so that development can provide updates around the requested ERs.

    This community is your one-stop source for all Enhancement information. It is being monitored regularly by development, and soon we will be posting updates around some of the top-voted Enhancement Requests. Know that your vote counts! By voting, you will bring forward those ERs that impact the Procurement Suite's value and usability. Is your request industry-specific? Let us know by posting this information in the body of the thread. We have a team monitoring these ERs and will be happy to highlight industry-specific ERs to ensure they also get equal visibility!

    Coming soon: a list of the top implemented ERs! Development has been working hard to make improvements to the Procurement Suite of Products, and they want you to know about them! Until then, check out the Best Practices section for some key ERs and how they can help your company secure the most value from your implementation!

    What you need to know:

    - The Procurement Enhancement Requests Community is your one-stop shop for the latest information on Enhancements!
    - The Community allows you to vote on ERs, bringing visibility to the collective audience interest in value and usability recommendations.
    - It is your place to submit any new enhancement requests.
    - Get the latest on top Procurement Enhancement Requests (ERs): know when an improvement is PLANNED, COMING SOON, or DELIVERED.
    - This Community is owned and managed by the Oracle Procurement Development team!
    - Let your voice be heard by telling us what you want to see implemented in the Procurement Suite.

    Read the article

  • Cannot get script to run at startup (tried all the simple answers)

    - by Carey Head
    I have Ubuntu Desktop 12.04 LTS running great on an older Acer desktop. I want to use this machine as an in-home server for hosting Minecraft. The command to start the Minecraft server is:

      java -Xmx1024M -Xms1024M -jar minecraft_server.jar nogui

    and that works great when I cd into the correct directory and execute the above. I created a script to do this:

      #!/bin/bash
      cd /home/myuser/minecraft-server1
      java -Xmx1024M -Xms1024M -jar minecraft_server.jar nogui &
      cd /home/myuser/minecraft-server2
      java -Xmx1024M -Xms1024M -jar minecraft_server.jar nogui &
      exit 0

    I made this .sh file executable, and it too runs great when I start it manually from the terminal. The problem I'm having is getting these to execute at startup. I have my user account on this machine set to auto-login. I have tried the following:

    1. Adding the following to Startup Applications: sh /home/myuser/myscript.sh (nothing happens on reboot).
    2. Adding the same to /etc/rc.local (nothing happens on reboot). I even tested this one by running /etc/rc.local from the terminal, and it executed great. Just not at boot/auto-login.
    3. Adding the lines from the script directly to rc.local (nothing happens on reboot).

    I can't help but think that there's something I'm missing. The script executes great when run manually, but will not run at boot/auto-login. Many thanks in advance.
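
    A hedged guess plus two sketches: boot-time contexts such as rc.local and cron run with a minimal PATH, so a bare java command often fails there even though the script works in a terminal. The log path, the java path, and the use of cron's @reboot hook are illustrative assumptions:

      # Inside myscript.sh, use the interpreter's full path and capture output,
      # so a silent failure leaves evidence behind:
      /usr/bin/java -Xmx1024M -Xms1024M -jar minecraft_server.jar nogui \
          >> /home/myuser/minecraft-server1/boot.log 2>&1 &

      # Alternative to rc.local: run the script from the user's crontab at boot
      crontab -e
      # then add the line:
      @reboot /bin/bash /home/myuser/myscript.sh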

    Read the article

  • Installer doesn't recognise Windows 7 install

    - by SirWaffleRaptor
    I will start off by saying I searched a bit for this problem, with little success. You see, I am an utter noob when it comes to Ubuntu and Linux-based OSes (hence why I want to install one and discover it), and the threads I found were too specific for me to do anything with. Therefore, if you find a thread that is noob-friendly and answers my question, please link it below, or please try to answer as simply as possible :) The problem: I want to install Ubuntu 12.04 LTS 64-bit alongside Windows 7 on my main hard drive (to get the choice of which one to start when booting). I pop the CD in, reboot, and get to the installer screen. I know I'm supposed to have 3 options, but I'm only given the option to erase my hard drive or "do something else". Therefore, my question is: how do I make Ubuntu recognise Windows 7, so it may install alongside it?
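
    A hedged pair of checks worth running from the live CD session before touching the installer again; if these tools don't see Windows, the installer won't either:

      sudo fdisk -l    # does the disk show an NTFS partition for Windows 7?
      sudo os-prober   # the detection tool the installer relies on for other OSes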

    Read the article

  • Backing up an rsnapshot directory to a remote device

    - by user123480
    I have a local backup server that uses rsnapshot with hard links, containing about 10 TB of information and adding about 4 to 5 GB per day. It works great. I've been asked to set up and maintain a remote backup of the local rsnapshot directory structure, as a nightly backup. I've tried using rsync with encryption, which takes forever and eats system resources. A previous post says not to use rsync with hard links for that reason. I need a suggestion on how I can keep the local and remote copies of the rsnapshot structures in sync. Thanks
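
    For reference, a sketch of the rsync invocation this usually comes down to (paths are placeholders); -H is what preserves rsnapshot's hard-link structure, and it is also the expensive part, since rsync must track every link in memory, which is what the earlier warnings are about:

      rsync -aH --delete --numeric-ids \
          /backups/rsnapshot/ remote:/backups/rsnapshot/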

    Read the article

  • How to configure DNS BIND to work locally on one computer?

    - by user619656
    I want to make some changes to the BIND source code. In order to test those changes, I want to be able to post queries to my local BIND server and have it use only the local zone files. I know how to make the zone files, and somewhat the named.conf file, but what should I put in /etc/resolv.conf? In resolv.conf there is currently the line nameserver 192.168.0.1, which I guess is my router's IP address, so the queries go through the router to my ISP. I want those queries to go to the local BIND server and to look for answers in the zone files I provided. Is there a way to do this using the resolv.conf file, or should I do something else?
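
    A minimal sketch, assuming the local BIND is listening on the loopback address (the zone name is a placeholder):

      # Send this machine's resolver to the local BIND instance
      # (resolvconf/NetworkManager may rewrite this file on some setups)
      echo 'nameserver 127.0.0.1' | sudo tee /etc/resolv.conf

      # Or bypass resolv.conf entirely and query the server directly:
      dig @127.0.0.1 example.test SOA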

    Read the article

  • Getting some Perl script errors on execution of svnnotify

    - by user2474633
    I installed svnnotify 2.84 and Perl module 5.10 for Subversion 1.7.11 on Red Hat release 6, and I am using this command in the post-commit hook to get notified:

      svnnotify --repos-path "$1" --revision "$2" --from [email protected] \
        --to-regex-map [email protected]="branches/Test_branch12" \
        --smtp xxxxxx.com HTML::ColorDiff >> /tmp/notify.txt 2>&1

    Once the commit is successful, I can see the below-mentioned errors in the output file:

      Use of uninitialized value $_[0] in exec at /usr/local/share/perl5/SVN/Notify.pm line 2332.
      Can't exec "": No such file or directory at /usr/local/share/perl5/SVN/Notify.pm line 2332.
      Use of uninitialized value $_[0] in concatenation (.) or string at /usr/local/share/perl5/SVN/Notify.pm line 2332.
      Cannot exec : No such file or directory
      Child process exited: 512

    Can anyone help with this?
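
    A hedged observation rather than a confirmed fix: in the command above, HTML::ColorDiff is passed as a bare trailing argument, whereas SVN::Notify expects it as the value of --handler; a stray positional argument reaching an exec call would match the Can't exec "" symptom. The corrected shape would be:

      svnnotify --repos-path "$1" --revision "$2" --from [email protected] \
        --to-regex-map [email protected]="branches/Test_branch12" \
        --smtp xxxxxx.com --handler HTML::ColorDiff >> /tmp/notify.txt 2>&1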

    Read the article
