Search Results

Search found 7776 results on 312 pages for 'configure in'.


  • How can MySQL be in GDAL's dependencies when it's already installed?

    - by Julien Fouilhé
    I'm trying to install GDAL on my 64-bit CentOS server to be able to do some GIS operations. I tried a simple:

        # yum install gdal

    First, the GDAL version is 1.4 (the latest release is 1.9). Then, I see mysql in the dependency list, but I already have mysql installed from another repository (remi), with a newer version than the one yum suggests. Is it a problem of architecture (yum suggests i386)? I assumed yes, but it was still impossible to install. Here's the error I get:

        Transaction Check Error:
          package mysql-5.5.28-1.el5.remi.x86_64 (which is newer than mysql-5.0.95-1.el5_7.1.i386) is already installed

    Then I tried to install it from source with the latest available version (1.9.2). I downloaded the GDAL tar.gz, extracted the files and installed it as follows:

        # tar -xzf gdal-1.9.2.tar.gz
        # ./configure --with-static-proj4=/usr/local/lib --with-threads --with-libtiff=internal --with-geotiff=internal --with-jpeg=internal --with-gif=internal --with-png=internal --with-libz=internal
        # make
        # make install

    But during the make, I get some strange errors about RegisterOGRMySQL that I can't understand:

        chmod a+x gdal-config
        /bin/sh /home/benjamin/gdal-1.9.2/libtool --mode=link g++ gdalinfo.lo /home/benjamin/gdal-1.9.2/libgdal.la -o gdalinfo
        libtool: link: g++ .libs/gdalinfo.o -o .libs/gdalinfo /home/benjamin/gdal-1.9.2/.libs/libgdal.so -L/usr/local/lib/lib -L/usr/kerberos/lib64 -lproj -lsqlite3 /usr/lib64/libexpat.so -lpthread -lrt -lcurl -ldl -lgssapi_krb5 -lkrb5 -lk5crypto -lcom_err -lidn -lssl -lcrypto -lz -Wl,-rpath -Wl,/usr/local/lib -Wl,-rpath -Wl,/usr/lib64
        /home/benjamin/gdal-1.9.2/.libs/libgdal.so: undefined reference to `RegisterOGRMySQL'
        collect2: ld returned 1 exit status
        make[1]: *** [gdalinfo] Error 1
        make[1]: Leaving directory `/home/benjamin/gdal-1.9.2/apps'
        make: *** [apps-target] Error 2

    Has anyone a solution? Thanks a lot!
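
    The "undefined reference to RegisterOGRMySQL" failure typically points at the OGR MySQL driver being only half-detected by configure. A minimal sketch of making that choice explicit, assuming GDAL's --with-mysql/--without-mysql configure switches and the 64-bit mysql_config path (both are assumptions, adjust to the actual install):

        # either point configure at the 64-bit mysql_config from the remi packages...
        ./configure --with-mysql=/usr/lib64/mysql/mysql_config --with-static-proj4=/usr/local/lib --with-threads ...

        # ...or leave the MySQL driver out entirely if it is not needed:
        ./configure --without-mysql --with-static-proj4=/usr/local/lib --with-threads ...

        # then rebuild from a clean tree:
        make clean && make && make install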

    Read the article

  • Can't view other computers on Network

    - by Darkart
    Systems:
    My machine: Windows 7 Ultimate, connected to the router through ethernet
    Her machine ("other machine"): Windows 7 Ultimate, connected through wireless
    Router: F5D8236-4 N Wireless Router, Version 1, Firmware 2.01.03 (Apr 28 2009)
    ISP: Comcast

    Problem: I cannot view the other machine on the network at all. I opened a command prompt, ran net view and saw the PC name. I tried pinging the PC and it times out. I went into the router and tried viewing the computer in the DHCP list, and it cannot be seen. I restored the router back to default settings and firmware, completely reset the modem and router, and created a homegroup. I went to the other machine to configure the homegroup settings and made sure that both PCs had identical settings. She was able to see my machine, but I could not see hers. I restarted both machines and now we can't see each other at all. Also, her PC (the "other machine") had an exclamation mark on the wireless icon but was connected just fine. There are no firewalls or anti-virus enabled, and we still cannot see each other. Right now I am checking for updated drivers for the wireless card, but my question is: could it be the router or something hardware related? I have gone through all the settings in the homegroup and visited most FAQs and still no luck. Also, as it stands, I cannot see her machine in the router's DHCP client list.

    Read the article

  • Nokia 5800 v50, getting "Invalid server name" with ad-hoc wlan with Windows 7

    - by Krunal
    After upgrading to the v50 firmware on my Nokia 5800 XpressMusic phone, I am not able to connect to the ad-hoc WLAN network on my laptop (Windows 7). I have a dial-up internet connection and created a wireless ad-hoc network to access the internet on my phone, but I am getting "Invalid server name" on the phone, and my wireless network access type in Windows 7 shows "no internet access". I am able to connect to the infrastructure WLAN network in my office with the 5800, so the phone is working with WLAN and I'm able to access the internet on it. The problem is the combination of "dial-up with ICS enabled" and "ad-hoc WLAN network" in Windows 7.

    Can anyone give me a solution, or a guide/tutorial for connecting the 5800 to an ad-hoc network and dial-up internet in Windows 7?

    System:
    Windows 7
    Dial-up internet connection with ICS enabled
    Ad-hoc wireless network with WEP
    Nokia 5800 XpressMusic phone

    Note: I have read other posts and tried their solutions, but they didn't work for me. The first solution is to bridge the networks, but how can I bridge the dial-up and WLAN networks? The second solution is to manually configure the IP and DNS addresses on the phone, but what should I use as IP and DNS when my dial-up and ad-hoc networks have automatic addresses?

    Read the article

  • external postfix forwarding to zimbra server

    - by Marko
    I want to migrate my domain mydomain.com away from my current mail server (old_server). The old_server setup is Postfix + LDAP + Cyrus. I want to migrate my domain's mail to a Zimbra server (zimbra), but I am considering leaving the current mail server working in the first phase and only having a subset of email addresses forwarded to the zimbra server. Zimbra's documentation seems to refer to this as an "edge MTA".

    Current config:

        mydomain.com MX -> old_server (receives all inbound SMTP)

    New config:

        mydomain.com MX -> old_server (receives all inbound SMTP) -- forward --> zimbra (receives forwarded mail for selected addresses)

    I need the following: old_server receives mail for my domain as before, but some of the email addresses are delivered to the zimbra server instead, and I should be able to determine which addresses are forwarded. I would also like to avoid possible false spam detections for mail from mydomain.com due to this setup.

    Questions:
    - How should I configure postfix on old_server to support this mail forwarding?
    - To avoid false spam detection, should outgoing mail from mydomain.com be sent by zimbra, or should I use old_server?
    - Is there anything extra I need to do to avoid the possibility of my outgoing mail being marked as spam on other servers?
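
    For the per-address forwarding itself, one common Postfix pattern is a transport map on old_server that routes just the migrated mailboxes to the Zimbra host. A minimal sketch, where zimbra.mydomain.com and the user1/user2 addresses are placeholders:

        # /etc/postfix/main.cf on old_server
        transport_maps = hash:/etc/postfix/transport

        # /etc/postfix/transport -- only the migrated addresses are routed to Zimbra
        [email protected]      smtp:[zimbra.mydomain.com]
        [email protected]      smtp:[zimbra.mydomain.com]

        # rebuild the lookup table and reload
        postmap /etc/postfix/transport && postfix reload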

    Read the article

  • Windows 7: Windows Firewall: Logging/Notifying on Outgoing Request Attempts

    - by Maxim Z.
    I'm trying to configure Windows Firewall with Advanced Security to log and tell me when programs attempt outbound requests. I previously used ZoneAlarm, which worked wonders for me with this on XP, but now I'm unable to install ZA on Win7. My question is: if I set all outbound connections to auto-block, is it possible to somehow monitor a log or get notifications when a program tries to make an outbound request, so that I can then create a specific rule for the program and block it? Thanks!

    UPDATE: I've enabled all the logging options available through the Properties window of the Windows Firewall with Advanced Security console, but I only see logs in the %systemroot%\system32\LogFiles\Firewall\pfirewall.log file, not in the Event Viewer as the first answer suggested. However, the entries I can see only tell me the request's or response's destination IP and whether the connection was allowed or blocked; they don't tell me which executable it came from. I want to find out the file path of the executable that each blocked request comes from. So far, I haven't been able to.
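
    One way (not covered in the question) to capture the executable path for blocked outbound attempts is Windows Filtering Platform auditing, which writes blocked-connection events, including the application name, to the Security log in Event Viewer; a sketch from an elevated prompt:

        auditpol /set /subcategory:"Filtering Platform Connection" /failure:enable
        auditpol /set /subcategory:"Filtering Platform Packet Drop" /failure:enable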

    Read the article

  • Bacula v5.0.2 Windows Installation Issues

    - by JohnyD
    First off, I am very new to Bacula, but I'm very intrigued by what I've read. I'm looking to set up Bacula 5.0.2 on a Windows 2008 R2 server. I've run the installer, and at the end it asks me to configure the DIR name, DIR password and DIR address. Windows documentation is somewhat hard to come by and I'm not certain what exactly I'm supposed to enter here. Do I need to create a local account that matches this info? Will the installation process create the account for me? Will this be the account that handles the FD daemon/service? I'm also not certain whether Address means a network location or a local directory. I apologize for my ignorance.

    Currently I'm using the following information:

        Name: john
        Pass: john
        Address: thin1 (the server name, although I have also tried thin1.fqdm.local and 10.0.0.104)

    This info allows the installer to complete successfully. However, when I run the BAT it hangs at "Connecting to Director thin1:9101". The Bacula File Service is currently running under the local system account.

    What am I doing wrong? What do I have yet to do? Once I get this working properly, I assume I will need to install clients on all my Windows boxes? Also, this is a 64-bit CPU but I am installing the 32-bit client. Are there any issues with this? Should I be using the 32-bit client? Thanks very much for the help.
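
    For context, a sketch of where those installer answers end up: they are written into the File Daemon's bacula-fd.conf and have to agree with what the Director's own bacula-dir.conf declares for this client, rather than with any Windows account. The names and password below are placeholders:

        # bacula-fd.conf on the Windows machine
        Director {
          Name = john-dir          # must match the Director resource name on the Director host
          Password = "john"        # must match the Password in that Director's Client resource for this FD
        }

        FileDaemon {
          Name = thin1-fd
          FDport = 9102
        }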

    Read the article

  • Google Play Music Not Adding MP3s On-Demand

    - by J0e3gan
    My recent attempts to add music on-demand to Google Play Music have yielded nothing: no "Processing music..." or "Added __ of __" messages, just nothing. Previously I could add music on-demand, and nothing has changed on the machine from which I successfully added music before and from which I have tried to add music on-demand recently. What could be hampering my ability to add music on-demand?

    WHAT I'VE TRIED: Right after I started using GPM, I briefly found that I could not add music on-demand, but the problem went away after a logout/login. This time a logout/login has not helped. Dragging and dropping, or browsing to folders or files to add, has made no difference either. Nor has waiting ridiculously long for GPM to show signs of life after adding music on-demand. Digging deeper, I read a related Google Play Help article and followed its suggestions:

    - ran the Google Play Music Manager troubleshooter = no errors or warnings
    - double checked my available storage = 8 GB free
    - double checked supported file types = MP3 is still supported (of course)

    ...but the problem remains.

    UPDATE: I found that if I configure GPM to automatically upload music added to specific folders, it strangely does add automatically what it will not add on-demand.

    Read the article

  • Sync desktop Mac environment to laptop

    - by Andrew Vit
    I spend the majority of my time working at my desktop Mac, which I have configured for my web development environment. My spouse has a MacBook for casual use, and I occasionally steal it back when I need to work off-site, or when travelling. The question is how to best synchronize the two so I can switch between them more readily.

    I've solved a few obvious things by using online services:

    - Email is hosted on IMAP.
    - Working files are in Dropbox.
    - Source code is managed in git.

    However, the following are things I always miss when jumping on the laptop:

    - Installed applications (current versions)
    - Installed libraries & utilities (/usr/local)
    - Apache VirtualHosts & other configurations (/etc)
    - Disk image files for VMs

    My current method is to connect the MacBook via FireWire target mode and rsync the /Users/me home directory, and then cherry-pick the other items I need from Applications, /etc and /usr/local. The problem with this method is that it can be very time consuming due to things like my virtual machine image files, cached emails, etc.

    How can I make this faster and easier? Can you recommend a solution for configuration management (so I can repeatably install and configure the same software on both), or synchronization (so I can bring the MacBook up to date nightly, over our home network)?

    Read the article

  • DFSR NTFS Permissions Not Working!??!

    - by megadood
    I have two Windows 2008 Standard servers running DFSR okay: I can create a file on one server and it is replicated to the other, etc. The namespace shared folder on each server is shared with Full Control for Administrators and Change/Read permissions for Everyone.

    I browse to the folder on server 1, e.g. \\server1\namespace\share\folder1. I right-click the folder and configure the NTFS permissions as I would like, for example: Administrators Full Control, one user Read/Write access, no other users in the user list. I save this and then double-check the second server, e.g. \\server2\namespace\share\folder1. I right-click the same folder name as before and can see the NTFS permissions have replicated accordingly. I right-click the folder and go to Properties - Security - Advanced - Effective Permissions and select a user that shouldn't be able to get into that folder, e.g. testuser. It agrees with the NTFS permissions and shows that testuser has no ticks next to any permissions, so it should be denied access.

    I log on to any network PC or the server as testuser and browse to \\server1\namespace\share\folder1. It lets me straight in, with no access-denied messages. The same applies to server2. It seems as though all my NTFS permissions are being ignored. I have one DFS share, and all the subfolders are a mixture of private folders and public folders, so I need the NTFS permissions to work. Any idea what's going on? Is this normal? From my tests, all users can access any DFSR folder under namespace\share, which is quite worrying. Thanks

    Read the article

  • Configuring apache and php to handle many connections

    - by Marc
    My preliminary setup is like this:

    - Two quad-core 8GB servers running Debian 6, with PHP and Apache
    - One quad-core 16GB server running Debian 6, with MySQL

    My plan is to have one 8GB server act as a proxy server, using vert.x (Java) to handle connections. I will let vert.x use HttpClient to send web requests to the second 8GB server. That server has Apache installed and uses PHP to deliver any information that it gets from the MySQL server on the third, 16GB server. The main reason I want this setup is to keep things separated, so the proxy will be the only way to access the system, while the other two servers will only be reachable from the local network.

    I can have the vert.x proxy handle 5000+ concurrent connections, but I don't know how to configure Apache to handle all the requests coming from the proxy. PHP will connect over mysqli with a persistent connection pool of 500-800 connections; the MySQL server seems not to have any issues on this part. In previous projects, the Apache part was always causing issues, no matter how I set it up. I might not fully understand how to set up Apache, since normally Apache should handle many concurrent connections, but it doesn't seem to here.
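
    As a rough illustration (not from the question), handling a few thousand proxied requests on the Apache box is mostly an MPM sizing exercise; a sketch for the prefork MPM on Debian 6, with placeholder numbers that would have to be sized against the 8GB of RAM and the per-process PHP footprint:

        # /etc/apache2/apache2.conf -- prefork MPM sizing; all values are placeholders
        <IfModule mpm_prefork_module>
            StartServers          20
            MinSpareServers       20
            MaxSpareServers       50
            ServerLimit          512
            MaxClients           512    # keep MaxClients x average PHP process size below available RAM
            MaxRequestsPerChild 2000
        </IfModule>

        # make sure the accept queue is not the bottleneck for bursts from the proxy
        ListenBacklog 1024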

    Read the article

  • A complicated nginx/php-fpm chroot setup

    - by Rsaesha
    I'm running nginx and php-fpm, and I want to set up jails for each host. My setup is a little complicated, so following tutorials on the web gets me nowhere.

    Each site has a directory /var/www/domain.name/. Inside that directory, there is a public/ directory which is the website root, a logs/ directory which stores nginx logs for that site specifically, and the chroot filesystem (etc/, usr/, etc.).

    The first problem I've run into is that no matter how I configure it, PHP-FPM cannot find the files that are passed to it via nginx. They result in a "Primary script unknown" error, and to make matters worse, the error messages from PHP-FPM are no more verbose than that, so I can't figure out what path is being passed by nginx.

    A php-fpm pool configuration for a host looks like this ('x' is incremented for each pool):

        [host]
        user = host
        group = www-data
        chroot = /var/www/domain.name
        chdir = /public
        listen = 127.0.0.1:900x

    The nginx config for this host looks like this:

        server {
            listen 80;
            server_name domain.name *.domain.name;

            root /var/www/domain.name/public;
            index index.php index.html index.html;

            location ~ \.php$ {
                expires epoch;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                include fastcgi_params;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass 127.0.0.1:9001;
            }
        }

    I'm guessing that the problem is the SCRIPT_FILENAME parameter, but I've changed it to just $fastcgi_script_name, and various other combinations, but to no avail. Can anyone help?
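
    One frequent cause of "Primary script unknown" with a chrooted pool (not stated in the question, so treat it as an assumption) is that SCRIPT_FILENAME has to be expressed relative to the chroot rather than to the real filesystem. A sketch of the location block under that assumption, with the pool chrooted to /var/www/domain.name:

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_index index.php;
            # php-fpm sees / as /var/www/domain.name, so pass /public/...
            # instead of /var/www/domain.name/public/...
            fastcgi_param SCRIPT_FILENAME /public$fastcgi_script_name;
            fastcgi_pass 127.0.0.1:9001;
        }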

    Read the article

  • Jira access with AJP-Proxy

    - by user60869
    I want to configure Jira access over an AJP proxy. I proceeded as follows (following this howto: http://confluence.atlassian.com/display/JIRA/Configuring+Apache+Reverse+Proxy+Using+the+AJP+Protocol):

    1) In server.xml I activated the AJP connector.

    2) Edited the vhost configuration:

        # Load Proxy-Modules
        LoadModule proxy_module /usr/lib/apache2/modules/mod_proxy.so
        LoadModule proxy_http_module /usr/lib/apache2/modules/mod_proxy_http.so
        # Load AJP-Modules
        LoadModule proxy_ajp_module /usr/lib/apache2/modules/mod_proxy_ajp.so

        # Proxy Configuration
        <IfModule proxy_http_module>
            ProxyRequests Off
            ProxyPreserveHost On

            # Basic AuthType configuration
            <Proxy *>
                AuthType Basic
                AuthName Bamboo-Server
                AuthUserFile /var/www/userdb
                Require valid-user
                AddDefaultCharset off
                Order deny,allow
                Deny from all
                Allow from 192.168.0.1
                satisfy any
            </Proxy>

            ProxyPass /bamboo http://localhost:8085/bamboo
            ProxyPassReverse /bamboo http://localhost:8085/bamboo

            ProxyPass /jira ajp://localhost:8009/
            ProxyPassReverse /jira ajp://localhost:8009/
        </IfModule>

    EDIT: In the logs I found the following:

        //localhost:8080/
        [Fri Nov 19 14:51:13 2010] [debug] proxy_util.c(1819): proxy: worker ajp://localhost:8080/ already initialized
        [Fri Nov 19 14:51:13 2010] [debug] proxy_util.c(1913): proxy: initialized single connection worker 1 in child 5578 for (localhost)
        [Fri Nov 19 14:51:32 2010] [error] ajp_read_header: ajp_ilink_receive failed
        [Fri Nov 19 14:51:32 2010] [error] (120006)APR does not understand this error code: proxy: read response failed from (null) (localhost)
        [Fri Nov 19 14:51:32 2010] [debug] proxy_util.c(2008): proxy: AJP: has released connection for (localhost)
        [Fri Nov 19 14:51:32 2010] [debug] mod_deflate.c(615): [client xx.xx.xx.xx Zlib: Compressed 468 to 320 : URL /jira

    But it doesn't work. Does anybody have an idea?
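
    For reference, the server.xml change the howto describes is just enabling an AJP connector on the port the ProxyPass points at; a minimal sketch, assuming Jira's bundled Tomcat and port 8009 to match the vhost above:

        <!-- conf/server.xml: enable the AJP connector used by mod_proxy_ajp -->
        <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />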

    Read the article

  • How can I erase the traces of Folder Redirection from the Default Domain Policy

    - by bruor
    I've taken over from an IT outsourcer and have hit a struggle now that we're starting a migration to Windows 7. Someone decided to set up folder redirection in the Default Domain Policy. I've since configured redirection in another policy at an OU level, but no matter what I do, the Windows 7 systems pick up only the Default Domain Policy folder redirection settings. I keep getting entries in the event log showing that the previously redirected folders "need to be redirected" with a status of 0x80000004. From what I can tell, this just means that it's redirecting them locally.

    Is there a way I can wipe that section of the GPO clean so it's no longer there? I'm hesitant to try to reset the Default Domain Policy to complete defaults.

    UPDATE 6-26: I found that the following condition occurred and was causing the grief here. I had already implemented the new policies for clients, and for some reason XP was working great while 7 was refusing to process them. The DDP was enforced. Because of this, and because the folder redirection policies were set to redirect back to the local profile upon removal, clients were forced to pick up its "redirect to local" settings.

    Requirements to recreate the issue:

    - Create a new test OU and policy.
    - Create some folder redirection settings; set them to redirect to the local profile upon removal.
    - Remove the settings on that GPO.
    - Refresh your view of the GPO and check the settings. You'll notice that the settings show "not configured" entries for folder redirection.
    - Enforce this GPO.
    - Create another sub-OU.
    - Create a GPO linked to this sub-OU and configure some folder redirection settings.
    - Watch as the enforced GPO's "not configured" setting overrides the policy you just defined.

    I've had to relink the DDP to all OUs that have "block inheritance" enabled, and disable the "enforced" option on the DDP as a workaround. I'd love to re-enable enforcement of the DDP, but until I can erase the traces of folder redirection settings from the DDP, I think I'm stuck.

    Read the article

  • Make dhcp assign same IP and hostname for different interfaces at one machine

    - by Egeshi
    I have a feeling that the question itself looks stupid, but it is not; please let me clarify. I have dynamic DNS with BIND and NIS configured on my LAN, and a laptop which I am using in both wireless and wired mode. Sometimes I have to use the wired interface to achieve higher throughput, but most of the time I don't need it and use wireless mode. Everything works great. The issue is that I want both interfaces to get the same IP from DHCP, just for convenient firewall setup. I add both hosts to dhcp in this manner:

        # bt wireless
        host bt {
          hardware ethernet 00:1f:1f:62:60:28;
          fixed-address 172.16.77.110;
        }

        # bt wired
        host bt {
          hardware ethernet 00:14:22:b7:5a:de;
          fixed-address 172.16.77.110;
        }

    But dhcpd logs the following message:

        dhcpd: Dynamic and static leases present for 172.16.77.110
        dhcpd: Remove host declaration bt-wired or remove 172.16.77.110
        dhcpd: from the dynamic address pool for 172.16/16

    The host records are added outside of any subnet, but it makes no difference if I put them there; the effect is still the same. This is not critical, but neither is it just my whim: even though DHCP seems to work fine for the "bt" host, I can no longer make a connection TO it from a remote machine with this definitely incorrect DHCP config. I'd be thankful if someone spared a minute for advice about how to configure dhcpd correctly.

    UPDATE: I realize there's a solution to assign a different hostname in the DHCP config, but I would like to keep the benefits of short host names.

    Read the article

  • Postfix aliases and duplicate e-mails, how to fix?

    - by macke
    I have aliases set up in Postfix, such as the following:

        [email protected]: [email protected], [email protected] ...

    When an email is sent to [email protected] and any of the recipients in that alias is cc:ed, which is quite common (i.e. "Reply all"), the e-mail is delivered in duplicate. For instance, if an e-mail is sent to [email protected] and [email protected] is cc:ed, it'll get delivered twice.

    According to the Postfix FAQ, this is by design: Postfix sends e-mail in parallel without expanding the groups, which makes it faster than sendmail. Now that's all fine and dandy, but is it possible to configure Postfix to actually remove duplicate recipients before sending the e-mail? I've found a lot of posts from people all over the net who have the same problem, but I have yet to find an answer. If this is not possible to do in Postfix, is it possible to do it somewhere along the way? I've tried educating my users, but it's rather futile, I'm afraid...

    I'm running Postfix on Mac OS X Server 10.6; amavis is set as content_filter and dovecot is set as mailbox_command. I've tried setting up procmail as a content_filter for smtp delivery (as per the suggestion below), but I can't seem to get it right. For various reasons, I can't replace the standard OS X configuration, meaning postfix, amavis and dovecot stay put. I can, however, add to it if I wish.
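
    Since procmail is already in the picture, the classic duplicate-suppression recipe is a Message-ID cache via formail; a sketch of what the procmail rc file would contain, assuming procmail actually ends up in the delivery path:

        # drop any message whose Message-ID has already been seen
        :0 Wh: msgid.lock
        | formail -D 16384 .msgid.cache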

    Read the article

  • How do I use a self encrypting drive?

    - by Unique_Key
    I recently purchased a Micron RealSSD C400 self-encrypting drive, and I am having a few issues getting it recognized by my laptop (an HP EliteBook 8440p running Windows 7 x64; I also tried a custom-built desktop). When I try to initialize the drive from Disk Management, I get a CRC error; also, when attempting to partition it from Windows setup, the program can't create the partitions. I also tried with UBCD: nothing. I assume this is due to drive security, but I haven't been able to find much information about this online. Do I need management software or something? I'm completely stumped here.

    EDIT: As requested: when I try partitioning the device from Windows setup I get a 0x80300024 error; when I try initializing it from Disk Management, I get a "Data error (cyclic redundancy check)" message, and the event log shows the following under System:

        Source: VDS Basic Provider, message: unexpected failure. error code 490@01010004 (2x)
        Source: Virtual Disk Service, message: VDS fails to write boot code on a disk during clean operation. Error code: 80070001@02070008 (1x)
        Source: Disk, message: The device \Device\Harddisk2\DR2 has a bad block (2x)

    The Security logs show nothing related. Also, when attempting to configure it from UBCD (utility: HDAT2), I get an error along the lines of "can't edit partition information" or something to that tune.

    Read the article

  • How can I bridge a VM to a remote network?

    - by asciiphil
    I have a system running QEMU/KVM (via libvirt). One of its VMs needs to have a presence on a subnet that is not local to the VM host. I have a Linux system on the remote subnet. Is there a way to set up some sort of tunneled bridge to make the VM appear present on the remote network? This will be a temporary situation (hopefully just until the VM owner can configure their system), and network performance and long-term maintainability aren't really issues.

    To give some more concrete information:

    - My VM host has IP address 192.168.54.155/24.
    - The VM has IP address 192.168.65.71/24.
    - I have a remote system at 192.168.65.254/24.
    - Both the VM host and the remote system are running Scientific Linux 6.5.
    - I do not control the network or routing in between the VM host and the remote system.
    - I do not have access to the guest OS on the VM.

    I would like traffic to the VM's IP address to end up at the VM even though its host isn't directly connected to the appropriate network. I've tried using iproute2's tunnelling, but Linux won't let me add a tunnel to a bridge. I've considered using some sort of iptables mangling to route traffic over the tunnel and make the VM think it's on the right network, but I'm not sure whether there are better approaches. What's the best way to accomplish this hack?
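
    One approach worth sketching (not from the question): an L2-capable tunnel such as gretap can be enslaved to a bridge, unlike a plain gre/ipip tunnel, so the bridge the VM already sits on can be stretched to the remote box. This sketch assumes the two hosts can reach each other directly, that the EL6 kernel and iproute2 on both ends support gretap, and that the VM is attached to a bridge named br0:

        # on the VM host (192.168.54.155)
        ip link add gretap1 type gretap local 192.168.54.155 remote 192.168.65.254
        ip link set gretap1 up
        brctl addif br0 gretap1

        # on the remote system (192.168.65.254), bridge the tunnel with the 192.168.65.0/24 NIC (eth0 assumed)
        ip link add gretap1 type gretap local 192.168.65.254 remote 192.168.54.155
        ip link set gretap1 up
        brctl addbr br0
        brctl addif br0 gretap1
        brctl addif br0 eth0
        ip link set br0 up
        # note: the remote host's own 192.168.65.254 address then needs to move from eth0 to br0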

    Read the article

  • Understanding what needs to be in place for a server to send outgoing email from a linux box

    - by Matt
    I am attempting to configure an openSUSE 11.1 box to send outgoing email for a domain that the same server is hosting. I don't understand enough about SMTP servers and the like to know what needs to be in place and working.

    The system already had Postfix installed, and I confirmed it was running via:

        > sudo /etc/init.d/postfix status

    I examined the Postfix config file in /etc/postfix/main.cf and configured a couple of items regarding the domain/host name and such, but left it largely at the defaults. I attempted to send an email from the command line with the following command:

        > echo "test 123" | mail -s "test subject" [email protected]

    where differentdomain.com is not the same domain as the one being hosted on the server. However, the email never reaches the target account. Any suggestions?

    EDIT: In the postfix log (/var/log/mail.info; there's nothing in .err) I see that postfix is trying to connect to what appears to be a different smtp server on our network, with the connection refused:

        connect to ourdomain.com.inbound15.mxlogic.net[our ip address]:25: Connection refused

    However, I can't figure out why it is 1) trying to connect to that server and 2) not just sending the messages itself... I mean, isn't postfix an SMTP server? I did a grep -ri on ourdomain from /etc and see no configuration files anywhere telling it to do this. Why is it?
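
    A couple of hedged checks (not from the question) that narrow down why Postfix picks that relay: the mxlogic hostname looks like the result of an MX lookup rather than something in a local file, so it is worth comparing the running configuration with DNS; the domain names below are the question's placeholders:

        # show any relayhost/transport overrides in the active configuration
        postconf -n | egrep 'relayhost|transport'

        # see where DNS (and therefore Postfix) would route mail for each domain
        dig +short MX differentdomain.com
        dig +short MX ourdomain.com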

    Read the article

  • Updating a staging server (from a CI server) in a Vagrant box with Chef

    - by Tomas Brambora
    I'm using Vagrant + Chef (the chef_client provisioner) to create and provision a staging environment for my server, and I have a Jenkins job set up that runs every time I push to my 'develop' branch. In the Jenkins job, I would like to update and rebuild the server's source code on the staging box and restart it. I have already written the cookbooks that install the dependencies, configure the db, etc. But I'm not sure how to run only the update, rebuild and restart steps from the cookbooks.

    I understand I could always tear down the whole box and rebuild it, but provisioning the box is a lengthy process, so I would like to do that as little as possible. I split my server cookbook into 3 recipes: dependencies, db_setup and server. What I want to run in my Jenkins job is the "server" recipe only, but I don't understand how I can do that. If I specify the run_list on my Chef server, then I lose the ability to provision the whole box from scratch. Basically, I would like to be able to tell Vagrant from the command line which recipes Chef should run. Is that possible somehow? Cheers!
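
    One possibility, sketched here rather than taken from the question: chef-client accepts a one-off override run list, so the Jenkins job could reach into the staging box and run only the server recipe without touching the node's saved run_list. The cookbook and host names below are placeholders:

        # run only the "server" recipe; the node's normal run_list stays as-is
        ssh vagrant@staging-box "sudo chef-client --override-runlist 'recipe[my_server_cookbook::server]'"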

    Read the article

  • Openfire on EC2 with Jingle

    - by Bjorn Roche
    I would like to run Openfire (or another XMPP server) on EC2. At the moment this is just for testing, so easy setup and configuration are important, as is low cost. At some point, however, if things go well, it will be important to scale this. Ideally it would be nice not to have to switch software when the scaling happens, but if a switch needs to happen later, it certainly can.

    My requirements are:

    - Basic XMPP services, including MUC and pubsub.
    - Logins controlled from an external API. Preferably, when a user attempts to connect, the XMPP server checks with the API to see if their username and password are correct, but I can also have the API keep the XMPP server up to date on new users, deleted users, password changes and so on. I see Openfire has a "user service" API; not ideal, but it looks workable.
    - Jingle, including relay and STUN. It's not at all clear to me whether the Jingle Nodes plugin takes care of this. I'm a bit confused about what's required to set this up, and I'd rather know in advance than be confused along the way :) e.g. it seems like STUN servers require more than one IP address.

    Can Openfire do all this for me, including STUN and media relay, on a single machine? Is this hard to configure on EC2 with Openfire? What are the basic steps? Would this be easier with something else, like, say, Tigase? What about the database: should I use Amazon's database service, or run a db on the same machine? Would the server be compatible with a service like http://www.siteuptime.com/ ? Thanks!

    Read the article

  • TestLink stops working when configured to use LDAP

    - by YuriAlbuquerque
    I have a TestLink web service running on one server and OpenLDAP running on another server. There are no firewall problems between them (I managed to configure Redmine, on the same server as TestLink, to use LDAP authentication). But whenever I put the LDAP configuration into TestLink, TestLink stops working, and I have no clue what is happening.

    This is where I define LDAP's settings in custom_config.inc.php:

        $tlCfg->authentication['method'] = 'LDAP';
        $tlCfg->authentication['ldap_server'] = 'serverip';
        $tlCfg->authentication['ldap_port'] = '389';
        $tlCfg->authentication['ldap_version'] = '2';
        $tlCfg->authentication['ldap_root_dn'] = 'dc=mycompany,dc=com,dc=br';
        $tlCfg->authentication['ldap_organization'] = '';
        $tlCfg->authentication['ldap_uid_field'] = 'uid';
        $tlCfg->authentication['ldap_bind_dn'] = 'myuser';
        // Not the actual login name and password, for obvious reasons
        $tlCfg->authentication['ldap_bind_passwd'] = 'mypassword';
        $tlCfg->authentication['ldap_tls'] = false;
        $tlCfg->user_self_signup = true;

    I'm certain that OpenLDAP is 2.x. My TestLink version is 1.9.3. What could I be doing wrong?

    Read the article

  • Apache SSL is not working

    - by user1703321
    I have configured virtual hosts on ports 80 and 443 (CentOS 5.6 and Apache 2.2.3). The following is a sample; I wrote the configuration in this same order:

        Listen 80
        Listen 443

        NameVirtualHost *:80
        NameVirtualHost *:443

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName www.abc.be
            ServerAlias abc.be
            ...
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName www.abc.fr
            ServerAlias abc.fr
            ...
        </VirtualHost>

    Then I defined the 443 hosts:

        <VirtualHost *:443>
            ServerAdmin [email protected]
            ServerName www.abc.be
            ServerAlias abc.be
            ...
            SSLEngine on
            SSLCertificateFile /etc/ssl/private/abc.be.crt
            SSLCertificateKeyFile /etc/ssl/private/abc.be.key
            SSLCertificateChainFile /etc/ssl/private/gd_bundle_be.crt
        </VirtualHost>

        <VirtualHost *:443>
            ServerAdmin [email protected]
            ServerName www.abc.fr
            ServerAlias abc.fr
            ...
            SSLEngine on
            SSLCertificateFile /etc/ssl/private/abc.fr.crt
            SSLCertificateKeyFile /etc/ssl/private/abc.fr.key
            SSLCertificateChainFile /etc/ssl/private/gd_bundle_fr.crt
        </VirtualHost>

    The SSL certificate for the first domain, abc.be, is working fine, but the second domain, abc.fr, still gets served the first certificate. The following is the output of apachectl -S:

        VirtualHost configuration:
        wildcard NameVirtualHosts and _default_ servers:
        *:443                  is a NameVirtualHost
                default server www.abc.be (/etc/httpd/conf/httpd.conf:1071)
                port 443 namevhost www.abc.fr (/etc/httpd/conf/httpd.conf:1071)

    Thanks

    Read the article

  • How to write re-usable puppet definitions?

    - by Oliver Probst
    I'd like to write a puppet manifest to install and configure an application on target servers. Parts of this manifest shall be re-usable, so I used define for defining my re-usable functionality. Doing so, I always have the problem that there are parts of the definition which are not re-usable. A simple example is a bunch of configuration files to be created: these files must be placed in the same directory, and this directory must be created only once.

    Example, nodes.pp:

        node 'myNode.in.a.domain' {
          mymodule::addconfig {'configfile1.xml':
            param => 'somevalue',
          }
          mymodule::addconfig {'configfile2.xml':
            param => 'someothervalue',
          }
        }

    mymodule.pp:

        define mymodule::addconfig ($param) {
          $config_dir = "/the/directory/"

          # ensure that the directory exists:
          file { $config_dir:
            ensure => directory,
          }

          # create the configuration file:
          file { $name:
            path    => "${config_dir}/${name}",
            content => template('a_template.erb'),
            require => File[$config_dir],
          }
        }

    This example will fail, because now the resource file { $config_dir: ... } is defined twice. As far as I understood, it is required to extract these parts into a class. Then it looks like this, nodes.pp:

        node 'myNode.in.a.domain' {
          class { 'mymodule::createConfigurationDirectory': }

          mymodule::addconfig {'configfile1.xml':
            param   => 'somevalue',
            require => Class['mymodule::createConfigurationDirectory'],
          }
          mymodule::addconfig {'configfile2.xml':
            param   => 'someothervalue',
            require => Class['mymodule::createConfigurationDirectory'],
          }
        }

    But this makes my interface hard to use. Every user of my module has to know that there is a class which is additionally required. For this simple use case the additional class might be acceptable, but with growing module complexity (lots of definitions) I'm a bit afraid of confusing the module's users. So I'd like to know whether there is a better way to handle these dependencies. Ideally, classes like createConfigurationDirectory are hidden from the user of the module's API. Or are there some other "Best Practices"/patterns for handling such dependencies?

    Read the article

  • Debian/Redmine: Upgrade multiple instances at once

    - by Davey
    I have multiple Redmine instances; let's call them InstanceA and InstanceB. InstanceA and InstanceB share the same Redmine installation on Debian. Suppose I wanted to install Redmine 1.3 on both instances, how would I do that? After upgrading the core files I would have to migrate the databases. What I would like to know is: can I migrate all databases in a single action? Normally I would do something like:

        rake -s db:migrate RAILS_ENV=production X_DEBIAN_SITEID=InstanceA

    for each instance, but this would get tedious if you have 50+ instances. Thanks in advance!

    Edit: The README.Debian file that's in the (Debian) Redmine package states:

        SUPPORTS SETUP AND UPGRADES OF MULTIPLE DATABASE INSTANCES
        This redmine package is designed to automatically configure database BUT NOT the web server. The default database instance is called "default".
        A debconf facility is provided for configuring several redmine instances. Use dpkg-reconfigure to define the instances identifiers.

    But I can't figure out what to do with the "debconf facility".

    Edit2: My environment is a default Debian 6.0 "Squeeze" installation with a default Redmine (aptitude install redmine) installation on a default libapache2-mod-passenger. I have set up two instances with dpkg-reconfigure redmine.
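
    A minimal sketch of scripting the per-instance migration, assuming the same rake invocation works for every Debian site ID (the instance names below are placeholders):

        #!/bin/sh
        # run the Redmine migration once per instance
        for instance in InstanceA InstanceB; do
            rake -s db:migrate RAILS_ENV=production X_DEBIAN_SITEID="$instance"
        done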

    Read the article

  • What is the uninstall procedure for software installed via "make install" on CentOS 6.2?

    - by gkdsp
    I installed OCILIB on my CentOS 6.2 server some time ago, and now I want to install a newer version. The vendor requires an uninstall but doesn't provide instructions; I'm guessing that's because it's trivial for people with a Linux background. http://orclib.sourceforge.net/doc/html/group_g_install.html

    I installed this software using:

        step 1: # ./configure --with-oracle-headers-path=/usr/include/oracle/11.2/client64 --with-oracle-lib-path=/usr/lib/oracle/11.2/client64/lib
        step 2: # make
        step 3: # su root
        step 4: # make install
        step 5: # gcc -g -DOCI_IMPORT_LINKAGE -DOCI_CHARSET_ANSI -L/usr/lib/oracle/11.2/client64/lib -lclntsh -L/usr/local/lib -locilib conn.c -o conn

    How would I go about uninstalling this? I tried following http://www.cyberciti.biz/faq/delete-uninstall-software-linux-commands/ but nothing was found on my disk using rpm -qa *oci* or yum list *oci*. Maybe since it wasn't installed with yum or rpm, I shouldn't expect either of these to find it. Are there generic instructions for uninstalling software on Linux that I could use, or do the instructions really depend on the specific software? Any help is much appreciated.
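
    For an autotools-based package like this, a hedged sketch of the usual options: re-run the uninstall target from an identically configured build tree, or remove the installed files by hand from the default /usr/local prefix. Whether an uninstall target exists at all depends on the package's Makefile:

        # from the original (or a freshly re-extracted, identically ./configure'd) source tree
        ./configure --with-oracle-headers-path=/usr/include/oracle/11.2/client64 \
                    --with-oracle-lib-path=/usr/lib/oracle/11.2/client64/lib
        make uninstall    # only works if the Makefile provides an uninstall target

        # failing that, look for what "make install" placed under /usr/local and remove it manually
        ls /usr/local/lib/libocilib* /usr/local/include/ocilib*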

    Read the article
