Search Results

Search found 18728 results on 750 pages for 'setup deployment'.


  • Disk / system configuration for log collection / syslog server

    - by Konrads
    I am looking into building a syslog / logging infrastructure and am pondering some architecture best practices. Essentially, I see that a syslog system needs to support two conflicting workloads:

      1. Log collection: potentially massive streams of data need to be written quickly to disk and indexed.
      2. Log querying: logs will be queried both by fixed fields such as date and source, and by text search.

    What is the best disk/system setup, assuming I'd like to keep it to a single server for now? Should I use SSDs or a ramdisk to off-load some processing? Some disks in a stripe and some in RAID 5? I am particularly eyeing Graylog2 with ElasticSearch/MongoDB.

    Read the article

  • SVN: Error validating server certificate for svn hook linux

    - by Dr Casper Black
    Hi, I managed to set up an SVN (over SSL) server and a TortoiseSVN client on Windows. I made a post-commit hook for a test project. The post-commit hook updates the web directory so the PHP app can run the newest version. It all works when done from a shell. The only problem is, when I commit the changes from the client on Windows, the change is committed but the hook throws an error: post-commit hook failed (exit code 1) with output:

      Error validating server certificate for 'https://SERVER_IP:443':
      - The certificate is not issued by a trusted authority. Use the fingerprint to validate the certificate manually!
      - The certificate hostname does not match.
      Certificate information:
      - Hostname: DEVSRVR
      - Valid: from Fri, 28 Jan 2011 09:22:45 GMT until Sat, 28 Jan 2012 09:22:45 GMT
      - Issuer: PHP, SS, SS, SRB
      - Fingerprint: 5f:d0:50:d6:dd:a6:d4:64:a5:ac:3a:4b:7c:7d:33:e3:75:dd:23:9f
      (R)eject, accept (t)emporarily or accept (p)ermanently?
      svn: OPTIONS of 'https://SERVER_IP/svn/myproject/trunk': Server certificate verification failed: certificate issued for a different hostname, issuer is not trusted (https://SERVER_IP)
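
    The prompt in that output is svn waiting for an interactive certificate acceptance that a hook has no way to answer. A common fix, sketched here on the assumption that the hook runs as the web server account (the account name varies by distribution, and that account's home directory must be writable so svn can cache the answer in ~/.subversion/auth): run one interactive svn command as that user and accept the certificate permanently.

      # run as root; 'apache' is an assumed account name (www-data on Debian)
      sudo -u apache svn list https://SERVER_IP/svn/myproject/trunk
      # answer 'p' at the certificate prompt; subsequent non-interactive
      # svn operations from the hook will then trust the certificate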

    Read the article

  • Leveraging .Net 4.0 Framework Tools For Encrypting Web Configuration Sections

    - by Sam Abraham
    I would like to share a few points regarding encrypting web configuration sections in .NET 4.0. This information also applies to .NET 3.5 and 2.0. Two methods work well for encrypting connection strings in a web project's configuration file:

    1. Do it all yourself! In this approach, you implement helper functions for encrypting/decrypting configuration file content. The program explicitly retrieves the appropriate content from the configuration file and then decrypts it. The disadvantages of this implementation are the added overhead of maintaining the encryption/decryption code, as well as the burden of always ensuring sections are decrypted before use and re-encrypted whenever edited.

    2. Leverage the .NET 4.0 Framework (the way to go!) Fortunately, all the tools needed for protecting configuration files are built into .NET 2.0/3.5/4.0, with very little setup needed. To encrypt connection strings, one can use the ASP.NET IIS Registration Tool (aspnet_regiis.exe). Note that a 64-bit version of the tool also exists under the Framework64 folder on 64-bit systems. The command to encrypt our web.config file's connection strings is simply the following (the section name is case-sensitive):

      aspnet_regiis -pe "connectionStrings" -app "/SampleApplication" -prov "RsaProtectedConfigurationProvider"

    To later decrypt this configuration section:

      aspnet_regiis -pd "connectionStrings" -app "/SampleApplication"

    The following is a brief description of the command-line options used in the example above. aspnet_regiis supports many more options, which you can read about in the reference links below.

      -pe    Section name to encrypt
      -pd    Section name to decrypt
      -app   Web application name
      -prov  Encryption/decryption provider

    ASP.NET automatically decrypts the content of the web.config file at runtime, so no programming changes are needed. Another tool, aspnet_setreg.exe, is to be used if certain configuration file sections pertinent to the .NET runtime are to be encrypted. For more information on when and how to use aspnet_setreg, please refer to the references below. Hope this helps!

    Some great references concerning the topic:

      http://msdn.microsoft.com/en-us/library/ff650037.aspx
      http://msdn.microsoft.com/en-us/library/zhhddkxy.aspx
      http://msdn.microsoft.com/en-us/library/dtkwfdky.aspx
      http://msdn.microsoft.com/en-us/library/68ze1hb2.aspx
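
    The article does not spell out where the tool lives, so here is a minimal command-prompt sketch assuming a default 64-bit .NET 4.0 install (adjust the version folder to match your machine; the app name is an example and must match the application's IIS virtual path):

      REM run from an elevated command prompt
      cd /d %WINDIR%\Microsoft.NET\Framework64\v4.0.30319
      aspnet_regiis -pe "connectionStrings" -app "/SampleApplication" -prov "RsaProtectedConfigurationProvider"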

    Read the article

  • Set up linux box for hosting a-z

    - by microchasm
    I am in the process of reinstalling the OS on a machine that will be used to host a couple of apps for our business. The apps will be local only; access from external clients will be via VPN only. The prior setup used a hosting control panel (Plesk) for most of the admin, and I was looking at using another similar piece of software for the reinstall - but I figured I should finally learn how it all works. I can do most of the things the software would do for me, but am unclear on the symbiosis of it all. This is all an attempt to further distance myself from the land of Configuration Programmer/Programmer, if at all possible. I can't find a full walkthrough anywhere for what I'm looking for, so I thought I'd put up this question, and if people can help me on the way I will edit this with the answers and document my progress/pitfalls. Hopefully someday this will help someone down the line.

    The details:

      CentOS 5.5 x86_64
      httpd: Apache/2.2.3
      mysql: 5.0.77 (to be upgraded)
      php: 5.1 (to be upgraded)

    The requirements:

      SECURITY!!
      Secure file transfer
      Secure client access (SSL certs and CA)
      Secure data storage
      Virtual hosts / multiple subdomains
      Local email would be nice, but not critical

    The steps:

    1. Download the latest CentOS DVD ISO (torrent worked great for me).

    2. Install CentOS. While going through the install, I checked the Server Components option, thinking I was going to be using another Plesk-like admin. In hindsight, considering I've decided to try to go my own way, this probably wasn't the best idea.

    3. Basic config: set up users, networking/IP address, etc.

    4. yum update/upgrade.

    5. Upgrade PHP/MySQL. To upgrade PHP and MySQL to the latest versions, I had to look to a repo outside CentOS. IUS looks great and I'm happy I found it! Add the IUS repository to the package manager:

      cd /tmp
      wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/epel-release-1-1.ius.el5.noarch.rpm
      rpm -Uvh epel-release-1-1.ius.el5.noarch.rpm
      wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/ius-release-1-4.ius.el5.noarch.rpm
      rpm -Uvh ius-release-1-4.ius.el5.noarch.rpm
      yum list | grep -w \.ius\.    # list all the packages in the IUS repository; use this to find the PHP/MySQL versions and libraries you want

    Remove the old version of PHP and install the newer version from IUS:

      rpm -qa | grep php                                   # list the installed php packages we want to remove
      yum shell                                            # open an interactive yum shell
      remove php-common php-mysql php-cli                  # remove installed PHP components
      install php53 php53-mysql php53-cli php53-common     # add the packages you want
      transaction solve                                    # important!! checks for dependencies
      transaction run                                      # important!! does the actual installation of packages
      [control+d]                                          # exit yum shell
      php -v
      PHP 5.3.2 (cli) (built: Apr 6 2010 18:13:45)

    Upgrade MySQL from the IUS repository:

      /etc/init.d/mysqld stop
      rpm -qa | grep mysql                                 # see installed mysql packages
      yum shell
      remove mysql mysql-server                            # remove installed MySQL components
      install mysql51 mysql51-server mysql51-devel
      transaction solve                                    # important!! checks for dependencies
      transaction run                                      # important!! does the actual installation of packages
      [control+d]                                          # exit yum shell
      service mysqld start
      mysql -v
      Server version: 5.1.42-ius Distributed by The IUS Community Project

    Upgrade instructions courtesy of the IUS wiki: http://wiki.iuscommunity.org/Doc/ClientUsageGuide

    6. Install rssh (restricted shell) to provide scp and sftp access without allowing ssh login:

      cd /tmp
      wget http://dag.wieers.com/rpm/packages/rssh/rssh-2.3.2-1.2.el5.rf.x86_64.rpm
      rpm -ivh rssh-2.3.2-1.2.el5.rf.x86_64.rpm
      useradd -m -d /home/dev -s /usr/bin/rssh dev
      passwd dev

    Edit /etc/rssh.conf to grant SFTP access to rssh users:

      vi /etc/rssh.conf

    Uncomment or add:

      allowscp
      allowsftp

    This allows me to connect to the machine via the SFTP protocol in Transmit (my FTP program of choice; I'm sure it's similar with other FTP apps). rssh instructions appropriated (with appreciation!) from http://www.cyberciti.biz/tips/linux-unix-restrict-shell-access-with-rssh.html

    7. Set up virtual interfaces:

      ifconfig eth1:1 192.168.1.3 up    # start up the virtual interface
      cd /etc/sysconfig/network-scripts/
      cp ifcfg-eth1 ifcfg-eth1:1        # copy the default script and match the name to our virtual interface
      vi ifcfg-eth1:1                   # modify the eth1:1 script

    Modify ifcfg-eth1:1 so it looks like this:

      DEVICE=eth1:1
      IPADDR=192.168.1.3
      NETMASK=255.255.255.0
      NETWORK=192.168.1.0
      ONBOOT=yes
      NAME=eth1:1

    Add more virtual interfaces as needed by repeating. Because of the ONBOOT=yes line in the ifcfg-eth1:1 file, this interface will be brought up when the system boots or the network starts/restarts:

      service network restart
      Shutting down interface eth0:      [ OK ]
      Shutting down interface eth1:      [ OK ]
      Shutting down loopback interface:  [ OK ]
      Bringing up loopback interface:    [ OK ]
      Bringing up interface eth0:        [ OK ]
      Bringing up interface eth1:        [ OK ]

      ping 192.168.1.3
      64 bytes from 192.168.1.3: icmp_seq=1 ttl=64 time=0.105 ms

    And this is where I'm at. I will keep editing this as I make progress. Any tips on how to configure virtual interfaces / IP-based virtual hosts for SSL, set up a CA, or anything else would be appreciated (see the sketch below for one possible starting point on the SSL virtual host).
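
    Since the question closes by asking about IP-based virtual hosts for SSL, here is a minimal Apache 2.2 sketch that reuses the virtual interface IP configured above; the filename, ServerName, DocumentRoot and certificate paths are examples only, not part of the original walkthrough:

      # /etc/httpd/conf.d/ssl-app1.conf -- hypothetical; one IP per SSL site
      Listen 192.168.1.3:443
      <VirtualHost 192.168.1.3:443>
          ServerName app1.example.local
          DocumentRoot /var/www/app1
          SSLEngine on
          SSLCertificateFile /etc/pki/tls/certs/app1.crt
          SSLCertificateKeyFile /etc/pki/tls/private/app1.key
      </VirtualHost>

    The reason each SSL site traditionally gets its own IP is that the certificate is exchanged before Apache sees the Host header, which is exactly why the virtual interfaces from step 7 are useful here.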

    Read the article

  • BIOS password and hardware clock problems

    - by Slartibartfast
    I have an HP 6730b laptop. I bought it used and installed (Gentoo) Linux on it. The BIOS is protected with a password, and the guy I bought it from said "I've tweaked the BIOS from a Windows program; it never asked me for a password". I've tried to erase the password by removing the battery, but it's still there. What did get erased, obviously, is the hardware clock. This is what happens:

      a) I can leave the laptop in January 1980 and it works.
      b) I can correct the system time, but boot will fail with "superblock mount time in future", after which I need to run fsck manually and continue the boot.
      c) I can correct the system time and sync it with hwclock -w, but then it behaves as in b) and the BIOS time resets to 1.1.1980 00:00.

    So I need either a way to bypass the BIOS password (which after a lot of googling seems impossible), a way to make the clock persist, or a setup that will tolerate a hardware clock in the eighties, a system clock in the present, and a normal boot.
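
    For scenario b), e2fsck has a documented escape hatch for machines whose clock cannot be trusted; a minimal sketch of the config file (this only stops e2fsck treating "last mount time in the future" as corruption, it does not fix the clock itself):

      # /etc/e2fsck.conf -- assume the system clock is broken, so
      # timestamps in the future are not reported as errors
      [options]
          broken_system_clock = true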

    Read the article

  • What is the simplest, open-source webmail frontend available?

    - by josePhoenix
    I am working on a project to create a few extremely stripped-down interfaces for common web/Internet tasks, in order to make computers accessible to my visually impaired grandmother. Currently she uses Mac OS X Mail.app, but I had the idea that I could re-skin a webmail interface running on my own server to make it easier for her to use. The ideal webmail interface to use as a starting point would be free of frames and AJAX, and written in Python, Perl, or PHP 5+, though any setup could work as long as the template and stylesheet files are separate from the application itself. This frontend must also connect to a remote IMAP server, since her email account is with her ISP and not on my server. Can anyone recommend a bare-bones, no-nonsense webmail interface that would work for this?

    Read the article

  • Set Users as chrooted for sftp, but allow user to login in SSH

    - by Eghes
    I have set up an SSH server on Debian 7 for SFTP connections. I chrooted some users with this config:

      Match Group sftpusers
          ChrootDirectory /sftp/%u
          ForceCommand internal-sftp

    But if I try to log in with one of these chrooted users in an SSH console, the login succeeds and then the connection closes immediately. In the logs I see:

      Oct 17 13:39:32 xxxxxx sshd[31100]: Accepted password for yyyyyy from zzz.zzz.zzz.zzz port 7855 ssh2
      Oct 17 13:39:32 xxxxxx[31100]: pam_unix(sshd:session): session opened for user yyyyyyyyyyyy by (uid=0)
      Oct 17 13:39:32 d00hyr-ea1 sshd[31100]: pam_unix(sshd:session): session closed for user yyyyyyyyyyyy

    How can I chroot a user only for SFTP, and use it as a normal user for SSH?
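
    The shell sessions die because ChrootDirectory and ForceCommand apply to every connection the Match block catches, and sshd has no Match criterion for "SFTP vs shell": the restriction can only hang off the account, not the protocol. A common workaround, sketched under the assumption that a dedicated group is acceptable: keep a group purely for sftp-only accounts, and give a person who needs both kinds of access a second, sftp-only account in that group.

      # /etc/ssh/sshd_config -- only members of sftponly are chrooted;
      # dual-use people log in via their normal account for shell access
      # and via a second account (e.g. yyyyyy-sftp) for transfers
      Match Group sftponly
          ChrootDirectory /sftp/%u
          ForceCommand internal-sftp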

    Read the article

  • network topology including many services

    - by mete
    I know this is yet another question on how to set up a network, but I hope you are not bored of such questions yet. The site is also an office, so it includes a Windows DC with AD, Exchange, SQL, file sharing, development app servers and other PCs. In addition to the office (internal) things, there are both test and prod environments consisting of a web server / app server / SQL stack. There is also an FTP service open to the public. I am considering:

      dmz1: web server, Exchange edge, FTP
      dmz2: app server, SQL for the app server
      internal: DC and AD, Exchange hub and transport, internal file sharing, SQL for internal use, app servers for internal use, PCs

      public -> dmz1: only web, FTP and SMTP
      public -> dmz2: not possible
      public -> internal: not possible
      dmz1 -> dmz2: possible from web servers to app servers, using HTTP or AJP
      dmz1 -> internal: only possible for Exchange, otherwise not possible
      dmz2 -> internal: not possible

    Does this sound OK? Any other recommendations? It will be configured using either MS ISA or Juniper SSG. Thank you.

    Read the article

  • how to disable local logins to a mac snow leopard machine?

    - by Wang
    I have a maintenance script that needs to run uninterrupted, so I'd like some way to disable local user logins. Right now, the solution is to send SIGSTOP to the loginwindow process, which is suboptimal for several reasons. The most important of them is that the observed behavior is a login prompt that appears to accept the user's credentials but then hangs on a blank desktop before the menu bar, dock, or desktop icons appear. This has led to users "fixing" the problem by rebooting the machine. Is there a better way to disable local logins? We currently use iHook, so if there's any way to abort a login from within the login hook, that would integrate nicely with our current setup. Unfortunately, Apple doesn't seem to have documented exactly what would cause Mac OS to abort the login.
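
    An untested sketch of the login-hook idea, resting entirely on the assumption that loginwindow aborts the login when the hook exits nonzero; as the question notes, Apple does not document this behavior, so treat it as an experiment. The lock-file path is made up:

      #!/bin/sh
      # hypothetical login hook: refuse logins while maintenance runs
      if [ -f /var/run/maintenance.lock ]; then
          # assumption: a nonzero exit status makes loginwindow abort
          exit 1
      fi
      exit 0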

    Read the article

  • Cloud Backup: Getting the Users' Backs Up

    - by Tony Davis
    On Wednesday last week, Microsoft announced that as of July 1, all data transfers into its Microsoft Azure cloud will be free (though you have to pay for transferring data out). On Thursday last week, SQL Azure in Western Europe went down. It was a relatively short outage, but since SQL Azure currently provides no easy way to take a standard backup of a database and store it locally, many people had no recourse but to wait patiently for their cloud-based app to resume. It seems that Microsoft are very keen to encourage developers to move their data onto their cloud, but are developers ready to do it, given that such basic backup capabilities are lacking?

    Recently on Simple-Talk, Mike Mooney described a perfect use case for the Microsoft Cloud. They had a simple web-based application with a SQL Server backend; they could move the application to Windows Azure and the data into SQL Azure, and in the process free themselves from much of the hassle surrounding management and scaling of the hardware, network and so on. It was a great fit, and yet it nearly didn't happen; lack of support for the BACKUP command almost proved a show-stopper.

    Of course, backups of Azure databases are, and always have been, taken automatically for disaster recovery purposes, but these are strictly on-cloud copies and, as of now, it is not possible to use them to restore a database to a particular point in time. It seems that none of those clever Microsoft people managed to predict the need to perform basic backups of Azure databases so that copies could be stored locally, outside the Azure universe. At the very least, as Mike points out, performing a local backup before a new deployment is more or less mandatory.

    Microsoft did at least note the sound of gnashing teeth and, as a stop-gap measure, offered SQL Azure Database Copy, which basically allows you to create an online clone of your database, but this doesn't allow for storing local archives of the data. To that end MS has provided SQL Azure Import/Export, to package up and export a database and its data using BACPACs. These BACPACs do not guarantee transactional consistency; for example, if a child table is modified after the parent is copied, then the copied database will be in an inconsistent state (meaning, to add to the fun, BACPACs need to be created from a database copy). In any event, widespread problems with BACPAC's evil cousin, the DACPAC, have been well documented, and it seems likely that many will also give BACPAC the bum's rush.

    Finally, in a TechEd 2011 presentation tagged "SQL Azure Advanced Administration", it was announced that "backup and restore" were coming in the next SQL Azure CTP. And yet this still doesn't mean that we'll get simple backups as DBAs know and love them. What it does mean, at least, is the ability to restore any given database to a point in time within a 2-week window.

    For the time being, if you want a local copy of your data and don't want to brave the BACPAC, you are left with SSIS or BCP, creative use of schema and data comparison tools, or SQL Azure Backup (currently in beta) to perform this simple but vital task. Cheers, Tony.
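
    For reference, the Database Copy feature mentioned above is driven by a single T-SQL statement run against the master database; a sketch via sqlcmd, where the server, credentials, and database names are all placeholders:

      sqlcmd -S tcp:yourserver.database.windows.net -d master \
        -U admin@yourserver -P 'YourPassword' \
        -Q "CREATE DATABASE mydb_copy AS COPY OF mydb"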

    Read the article

  • Correct configuration of multiple Analytics trackers per page, spanning domains and subdomains

    - by Eliot Shepard
    My company publishes sites on a somewhat convoluted domain structure, and we're having trouble getting accurate numbers in Analytics when we have multiple trackers on the page. We publish under two brands (A, B). Each brand has a "national" site at A.com, B.com, as well as per-city "local" sites at e.g. ny.A.com, la.A.com, sf.A.com, etc. Right now we're trying to track in these dimensions:

      Full network (A.com, ny.A.com, B.com, la.B.com, etc.)
      All sites in a brand (A.com, ny.A.com, la.A.com, etc.)
      Individual site (ny.A.com)

    Here are the commands we're using on an individual site:

      _gaq.push(
        ['t0._setAccount', 'UA-XXXXXX-1'], // full network
        ['t0._setDomainName', 'none'],
        ['t0._setAllowLinker', true],
        ['t0._trackPageview'],
        ['t1._trackPageLoadTime'],
        ['t1._setAccount', 'UA-XXXXXX-2'], // brand
        ['t1._setDomainName', 'none'],
        ['t1._setAllowLinker', true],
        ['t1._trackPageview'],
        ['t1._trackPageLoadTime'],
        ['t2._setAccount', 'UA-XXXXXX-3'], // individual
        ['t2._setDomainName', 'none'],
        ['t2._setAllowLinker', true],
        ['t2._trackPageview'],
        ['t2._trackPageLoadTime']
      );

    We send the same commands to each account because we've had strange results when trackers were configured differently in the past. However, right now we're seeing inflated unique counts on all three trackers. What is the correct way to configure this setup? Thanks for your time.

    Read the article

  • How do I install kivy?

    - by aspasia
    I was trying to install Kivy (by following the instructions here). I downloaded and installed all packages, and the installation process went through without giving me any errors. However, when I later entered the command below:

      sudo easy_install kivy

    it looked like it was going to work, but it ended with an error, displaying the following lines, which I don't understand:

      Detected compiler is unix
      /tmp/easy_install-BtOA_u/Kivy-1.8.0/kivy/graphics/texture.c:8:22: fatal error: pyconfig.h: No such file or directory
       #include "pyconfig.h"
      compilation terminated.
      error: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1

    I saw a similar question asked as Problem with kivy installation. That didn't work for me, though the question suggests installing libgles-mesa-dev-lts-raring, which I did as below:

      sudo apt-get install libgles-mesa-dev-lts-raring

    which then gave:

      E: Unable to locate package libgles-mesa-dev-lts-raring

    (Sorry for being so specific and perhaps obvious, but I'm in the early stages of learning my way around Linux.) That user was running Ubuntu 12.04, and most other questions related to this I've seen came from people with a different release from mine, which has led me to believe that that is the reason why the suggestions to those didn't solve my problem. I'm using Ubuntu 13.10.
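
    The missing pyconfig.h is the Python C header, which lives in the python-dev package; a sketch of the usual fix on Ubuntu before retrying (package names are the Python 2 ones current for 13.10-era releases):

      # install the Python headers and build tools needed to compile
      # Kivy's C extensions, then retry the install
      sudo apt-get install build-essential python-dev cython
      sudo easy_install kivy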

    Read the article

  • Photo Management Software for OS X That Supports Multiple Library Locations

    - by Lance Rushing
    I'm looking at the possibility of changing our photo management software, and am thinking about Aperture or Lightroom. I want to know if either supports:

      Having folders/libraries on separate hard drives/volumes
      Tolerating network folders that are only occasionally connected
      A snappy interface
      Software solid enough not to crash or behave weirdly [ so the wife doesn't get stuck and have to call me ;) ]

    Background: my wife is the photographer and I'm the computer programmer. Our current setup is Picasa, with a "Recent/Working" folder on the local iMac hard drive, and "Archive" folders on an NFS-mounted Linux server (RAID 5, for redundancy and extra storage capacity; the Linux server also syncs with Amazon's EC2). Picasa is doing OK, but I get annoyed when it doesn't behave properly, usually around issues when the Linux disk isn't mounted. And overall, I wish Picasa seemed a little more polished and snappier. Thanks, Lance

    Read the article

  • How can I find a computer on my network that is doing mass mailings?

    - by Alex Ciarlill
    I was notified by my ISP that one of my machines is sending out spam. This happened about 3 months ago on a Windows machine running Cygwin that was hacked due to an SSH vulnerability. The hackers set up IIS and SMTP. I cleaned out the machine and all the services are disabled, so I think that machine is okay. I am wondering if there is any other way to identify which machine it could be coming from? The ISP has NO useful information such as source port, destination port, destination IP... nothing. I am running DD-WRT on my router, plus a Windows 7 PC and a Windows XP PC.
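
    Since the router runs DD-WRT, one way to catch the culprit is to log outbound SMTP from the LAN at the router itself; a sketch to run from the DD-WRT shell (the log prefix is arbitrary):

      # log every forwarded connection to port 25; the source IP in the
      # resulting log entries identifies the infected LAN machine
      iptables -I FORWARD -p tcp --dport 25 -j LOG --log-prefix "SMTP-OUT: "
      # then watch for hits
      dmesg | grep SMTP-OUT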

    Read the article

  • Set up SSL/HTTPS in zend application via .htaccess

    - by davykiash
    I have been battling with .htaccess rules to get my SSL setup working right for the past few days. I get a "requested URL not found" error whenever I try to access any request that does not go through the index controller. For example, this URL works fine if I enter it manually:

      https://www.example.com/index.php/auth/register

    However, my application has been built in such a way that the URL should be:

      https://www.example.com/auth/register

    and that gives the "requested URL not found" error. My other URLs, such as:

      https://www.example.com/index/faq
      https://www.example.com/index/blog
      https://www.example.com/index/terms

    work just fine. What rule do I need to write in my .htaccess to get the URL https://www.example.com/auth/register working? My .htaccess file looks like this:

      RewriteEngine On
      RewriteCond %{HTTPS} off
      RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [L]
      RewriteCond %{REQUEST_FILENAME} -s [OR]
      RewriteCond %{REQUEST_FILENAME} -l [OR]
      RewriteCond %{REQUEST_FILENAME} -d
      RewriteRule ^.*$ - [NC,L]
      RewriteRule ^.*$ index.php [NC,L]

    I posted an almost similar question on Stack Overflow.
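
    The rules shown are the standard Zend Framework set, so one hedged thing to check is whether the HTTPS virtual host reads the .htaccess at all: if AllowOverride is off in the 443 vhost, every clean URL 404s while index.php URLs keep working, which matches the symptoms exactly. A sketch of the relevant fragment, with an example path:

      # inside the *:443 VirtualHost that serves the site (Apache 2.2 syntax)
      <Directory /var/www/example/public>
          AllowOverride All
          Order allow,deny
          Allow from all
      </Directory>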

    Read the article

  • shared web hosting architecture in a university setting

    - by gaspol
    We're in the process of creating a shared web hosting infrastructure for our university; departments within the university can host their sites on this infrastructure. We're thinking of setting up multiple load-balanced web servers attached to shared storage (for web content and Apache config files), with database servers behind these web servers. Does anyone have any other suggestions about this? Any recommendations for an alternative setup? Would cPanel/WHM or Plesk be a good idea to automate account creation/maintenance?

    Read the article

  • Centos expanding directory

    - by Ansell Cruz
    I currently have a file server, with all the files stored in /usr/local/nginx/html/. The setup is one hard disk with 1 TB of data, and that 1 TB of storage is all used up. I asked the guys to add 2 HDDs with 2 TB each; these new HDDs will be used as storage for new files. Now, if I mount these 2 new HDDs at /usr/local/nginx/html/, the current files in there will be hidden by the mount. My goal is to expand the storage in /usr/local/nginx/html/ without losing the data in it. Would this be possible?
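
    One approach that never touches the existing disk is to mount the new drives on subdirectories under the html root instead of over it; a sketch where the device names and mount points are examples (LVM would be the more flexible alternative if the directory layout must stay flat):

      # format the new disk and mount it as a subdirectory of the html root
      mkfs.ext4 /dev/sdb1
      mkdir /usr/local/nginx/html/archive1
      mount /dev/sdb1 /usr/local/nginx/html/archive1
      # make it permanent across reboots
      echo '/dev/sdb1 /usr/local/nginx/html/archive1 ext4 defaults 0 2' >> /etc/fstab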

    Read the article

  • Wrong Sound Blaster's PCI ID within Windows

    - by pavian
    I own a Sound Blaster X-Fi Titanium; it was working with no problems under Linux or Windows 7. Its original PCI ID is 1102:000b, but now I see a different one within MS Windows:

      BIOS setup: 1102:000b
      GNU/Linux:  1102:000b
      Windows 7:  1102:000d
      Windows 8:  1102:000d

    In the last few days I've been experimenting with IOMMU PCI passthrough in Xen, and I tried to pass this device through to virtual Windows 7 and 8 machines; that is where I found this problem. I don't know if this is just a coincidence or the cause of my problem, but the ID is wrong even on the physical system. Windows detects 1102:000d as a High Definition Audio sound device (I'm guessing at this name since I have localized Windows, but it is the generic name; the same happened with Realtek HDA before drivers were installed). It plays, but it's unstable (the Windows speaker test can crash the testing application) and I can't install the Creative software. The driver used is hdaudio.sys. Booting in BIOS or UEFI mode doesn't change anything, nor does a CMOS reset. Has anyone met the same problem?

    Read the article

  • Routing a url to fetch content from another site

    - by Abhishek
    Environment: IIS 7. I have a default site www.domain.com whose folder is C:\Inetpub\wwwroot\domain. There is a subdomain www.subdomain.domain.com whose folder is C:\Inetpub\wwwroot\domain\subdomain. Now I have set up a new website on an external server. I cannot put the content on the above server for certain reasons. I need the URL www.subdomain.domain.com/blog to fetch content from this external server while the URL remains the same. How could this be achieved in IIS 7?
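
    A sketch of the usual IIS 7 approach: install the URL Rewrite module plus Application Request Routing (ARR, with its proxy functionality enabled), then add a rewrite rule to the subdomain site's web.config; the external hostname below is a placeholder:

      <!-- web.config fragment for www.subdomain.domain.com -->
      <system.webServer>
        <rewrite>
          <rules>
            <rule name="ProxyBlog" stopProcessing="true">
              <match url="^blog(/.*)?$" />
              <action type="Rewrite" url="http://external-server.example.com/blog{R:1}" />
            </rule>
          </rules>
        </rewrite>
      </system.webServer>

    With ARR proxying enabled, a Rewrite action to an absolute URL is forwarded server-side, so the browser's address bar keeps showing www.subdomain.domain.com/blog.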

    Read the article

  • MAAS/JuJu Clarifications

    - by ectoskeleton
    I really love the concept of MAAS underlying an OpenStack implementation, but there are a few questions about MAAS that I am not entirely clear on.

    Should all hosts be set to network boot at all times, or should they boot to disk after they have been registered and allocated as a service?

    After juju bootstrap is executed, I turn on the machine that has been allocated (note: WoL isn't working; I suspect it's being blocked on the network). The machine boots up, and then juju status executes correctly, with the agent running and all that good stuff. If I reboot the machine (testing power failure or whatever), juju status comes back but the agent-state is no longer "running", and so far I have had to destroy the environment and restart.

    In all cases I have never been able to deploy any services to any of the other nodes. I deploy the service with juju, note which node it was assigned to, and then start the system. The system just boots up into a basic node. If I SSH to it, I have to enter a password, so it's not setting up the SSH key or anything.

    This is on Ubuntu 12.04.1 LTS systems and HP DL360 G7 hosts. The MAAS management server is running as a VM, but all on the same network. At this point I am not sure if I am doing something wrong or if there is a problem somewhere else. Is the idea that any time a host is rebooted it should be rebuilt from the ground up, or is something else going on behind the scenes to tell it to boot the local image? If the latter, why doesn't the agent start on a system that has been successfully set up before (the juju bootstrapped system)?

    Read the article

  • PowerDNS: multiple supermasters and transfering domain

    - by blauwblaatje
    Hi, I've got a setup with multiple supermasters (BIND) and multiple superslaves (PowerDNS). It all seems to work just fine; pdns is updated when I add or change a domain. But when I want to migrate a domain from one master to another, pdns doesn't like it. It tells me the new server isn't a master for this domain, although I deleted the domain on the old server. Now, I think part of the problem is that pdns doesn't get an update when a domain is deleted, which would also explain a lot of dead domains in my pdns. It looks like the slave is constantly polling a server and getting RCODE=5 back. The master isn't aware of the domain, and the slave thinks the master still serves that domain. Anyone familiar with this problem?
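
    Superslave provisioning in PowerDNS only ever creates zones; nothing deletes them, so a migrated domain keeps its old master entry until it is removed by hand. A cleanup sketch, assuming the gmysql backend with its standard schema (database name, credentials, and zone are examples):

      # remove the stale zone so the superslave can re-provision it
      # from the new master's NOTIFY
      mysql -u pdns -p pdns <<'SQL'
      DELETE FROM records WHERE domain_id = (SELECT id FROM domains WHERE name = 'example.com');
      DELETE FROM domains WHERE name = 'example.com';
      SQL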

    Read the article

  • Is there a command line two-factor authentication verification code generator?

    - by dan
    I manage a server with two-factor authentication. I have to use the Google Authenticator iPhone app to get the 6-digit verification code to enter after entering the normal server password. The setup is described here: http://www.mnxsolutions.com/security/two-factor-ssh-with-google-authenticator.html I would like a way to get the verification code using just my laptop and not my iPhone. There must be a way to seed a command-line app that generates these verification codes and gives you the code for the current 30-second window. Is there a program that can do this?
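
    There is: Google Authenticator implements standard TOTP (RFC 6238), so any TOTP generator seeded with the same base32 secret produces the same codes. A sketch using oathtool from the oath-toolkit package; the secret below is a placeholder for the one printed when the PAM module was set up:

      # Debian/Ubuntu: sudo apt-get install oathtool
      # print the current 6-digit code for the 30-second window
      oathtool --totp -b JBSWY3DPEHPK3PXP

    Keep in mind the secret on the laptop is as sensitive as a password: anyone who reads it can generate your codes.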

    Read the article

  • We have moved to larger offices

    - by Chris Houston
    First of all we should probably apologise for the complete lack of blogging over the last 6 months! As web developers we are constantly telling our clients that they should keep their blogs up to date, and it seems we have been ignoring our own advice.

    That being said, we have been very busy moving offices and helping our new host QV Offices set up their new business. As well as all the moving we have not been sitting on our hands: we have built the new site for DairyMaster over in Ireland, as well as a separate private website for their global distributor network. As Umbraco Gold Partners we have found more and more that we are working on projects where we are the silent development partners, so although we cannot talk publicly about a lot of the sites we develop, we now have some real beauties in our portfolio :)

    Now that the dust has settled in our new office (and has been hoovered up!) we are ready for the new year and are looking forward to working on some exciting projects that are currently in the pipeline. We are also intending to run some hacking sessions for Umbraco, as we now have lots of space for developers to come and work with us, so if you have any ideas for a theme for an Umbraco hackathon then do let us know.

    And with that it just remains to say Happy Christmas to you all, and see you in the new year!

    Read the article

  • MySQL Database synchronizing with local and remote with c#

    - by Neo
    I've posted this here as it's more of a MySQL question than a C# one. I have written some software that runs a local instance of MySQL when it first starts. Once MySQL is up, I would like to synchronize the data between a particular remote database table and the local database table that the software uses (it shouldn't sync any other databases or tables, as there are a lot of them). I have replication set up to synchronize the entire database to another server, which works unless the server goes down, in which case it never comes back up. Based on that, I don't think replication will work here, since closing the software also shuts down MySQL. So what would be the best method of synchronizing the remote and local databases?
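
    If a one-way refresh of the single table on startup is acceptable, a crude but serviceable sketch is to pull just that table with mysqldump; host, credentials, and names are placeholders, and for true two-way sync a tool like Percona's pt-table-sync would be the next thing to look at:

      # re-pull one table from the remote server into the local instance
      mysqldump -h remote.example.com -u appuser -p'secret' appdb apptable \
        | mysql -h 127.0.0.1 -u root appdb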

    Read the article

  • Is there a way to find what values comes from what file in HAL under Ubuntu?

    - by vava
    I've been playing with multitouch on my ThinkPad and have read a few tutorials on how to set it up. One of them mentioned /usr/share/hal/fdi/policy/20thirdparty/11-x11-synaptics.fdi; I edited it and enabled SHMConfig through it. Later I found out about the /etc/hal/policy/ directory and put some customizations for my touchpad there as well, in a separate fdi file. But now it looks like the touchpad doesn't care about my customizations. I have gsynaptics installed and can configure the touchpad through the GUI, and I can configure it with synclient, but I can't set any values through fdi files. I even turned off SHMConfig, reverting the 11-x11-synaptics.fdi file to its original state, but it seems like SHMConfig is still enabled; otherwise I wouldn't be able to configure properties at runtime. So, I was thinking, maybe there are additional HAL files I don't know about. How can I find them, particularly the ones responsible for turning SHMConfig on?
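
    A quick way to answer "what is HAL actually reading" from a shell: list every fdi file in the system-wide and local trees, then dump the merged properties HAL has computed for the touchpad to see which values actually won (lshal output is verbose, and the grep pattern below is just an example):

      # all fdi policy files HAL can pick up
      find /usr/share/hal /etc/hal -name '*.fdi'
      # the merged device properties for the touchpad
      lshal | grep -i -B2 -A15 synaptics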

    Read the article
