Search Results

Search found 23127 results on 926 pages for 'based'.

Page 713 of 926

  • How can I automatically edit an email before auto-forwarding it?

    - by Miss Cellanie
    Is there a way to automatically edit emails before forwarding them? I'm getting email notifications from Foursquare that I want to send to my phone as text messages. I know how to send messages to my number using an email address (I'm in the US and use Verizon), but I don't know how to strip out unnecessary formatting, like HTML, before the email gets sent.
    What I want:
      - Ability to strip out HTML
      - Ability to start forwarding at a specific part of the email based on a search (e.g., I might know that Foursquare starts their messages with "Hey hey!" and only want content after that phrase occurs)
      - Ability to truncate at 160 characters
    Things I've tried:
      - I'm not using Foursquare DM pings through Twitter, because I have two Twitter accounts and Twitter only allows a phone to be linked to one account at a time. I'm not willing to change which account it's linked to.
      - I tried to work around the Twitter limitation using Google Voice, but they don't support SMS short codes.
    I'll compromise on the features I want if I can find a free solution that doesn't require me to set up my own server. I do think this is computer related, because it will happen on my computer, not on my phone.
    Edit: My current setup is Gmail in Firefox 3.0.15 on Windows XP. I use a netbook as my only personal computer. However, if the only way to accomplish this well is to set up my own mail server or something, I would still want to know that.
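    A minimal sketch of the transformation described above (strip the HTML, keep only the text after a marker phrase, truncate to 160 characters), assuming the message body is already in hand as a string; the marker phrase and the choice of Python's standard-library HTML parser are illustrative:

        from html.parser import HTMLParser

        class TextExtractor(HTMLParser):
            """Collects only the text nodes of an HTML document."""
            def __init__(self):
                super().__init__()
                self.chunks = []

            def handle_data(self, data):
                self.chunks.append(data)

        def to_sms(body, marker="Hey hey!", limit=160):
            # Strip HTML tags, keeping only the text content.
            parser = TextExtractor()
            parser.feed(body)
            text = " ".join(" ".join(parser.chunks).split())
            # Keep only the content after the marker phrase, if present.
            pos = text.find(marker)
            if pos != -1:
                text = text[pos + len(marker):].lstrip()
            # Truncate to a single SMS.
            return text[:limit]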

    Read the article

  • Sharing music on NAS with Zune and iPod?

    - by osij2is
    After being a long-time iPod owner, I'm switching to the new Zune with its subscription model. I haven't bought a Zune yet, but I'm planning on doing so within the next month or so. I have approximately 40GB worth of music, and my girlfriend's iPod music library is around 30GB. I've been trying to figure out how to migrate all our music off of our laptops/desktops and centralize everything on my NAS. Sharing iPod music isn't too bad: sharing from one machine to all is fairly easy within the iTunes player. As far as storing all the music on a NAS, again, iPods aren't too bad, and I imagine other systems aren't difficult. But I'm really new to the Zune, and I'm beginning to run into some issues. My questions are: Is it possible to store all music from our iPods and Zune subscriptions and share music between the iPod/Zune within the same file share on my NAS? I'm sure it's possible to store music on a share, but I'm not sure how iTunes and the Zune software differ. Is there third-party software, maybe something like DoubleTwist, that can sync from the NAS to multiple desktops/laptops? I've never used DoubleTwist, but it's something I found that looks close to what I need. I've never quite done this myself, so I'm trying to find a solution that can: a) store music on a network share; b) sync between different devices (Zune/iPod) seamlessly.

    Read the article

  • Connecting a Wi-Fi router to receivers with a cable instead of an antenna?

    - by 31eee384
    This is a very strange question--I'd go so far as to say it's a stupid question. I'm being told that it is possible, to describe it briefly, to use a cable to connect an access point and a receiver directly to one another. This means that I would unscrew the access point's antenna and attach one end of a cable to the port. Then, on the wireless receiver, I would also unscrew the antenna and plug in the other side of the cable. I'm being told the connection would work after this, just as a normal Wi-Fi connection would. Bonus mini-question: if this works, would it still work if a splitter were attached to the access point and multiple receivers plugged in to the network? What would happen if I did this? Based on my surprisingly deficient knowledge of radio transmission, I don't think it would work, but I would like some help understanding why it won't (or will), if possible. This is a somewhat hypothetical question--I realize that Ethernet does this exact job very handily, and I could just throw in a switch instead of the splitter. I simply feel that I should understand this scenario. Thanks for any help you can offer.

    Read the article

  • Managing SharePoint permissions via Active Directory?

    - by rgmatthes
    My company has thousands of employees organized thoroughly via Active Directory. I have confidence in the accuracy of the Department and Title information displayed in the user profiles. I'm helping to put up a brand new SharePoint 2007 site, and I contacted IT about managing the site's permissions through AD groups. The goal is to have the site automatically assign read/write/contribute/whatever permissions based on the information in AD. For example, we could create an AD group called "Managers" that would contain anyone with the "Manager" title in their AD user profile. I would have SharePoint tap into this AD group to mass-assign permissions if I knew all managers would need a certain level of access (read/write/contribute/whatever). Then if a manager joins the company or leaves it, the group is automatically updated (provided AD gets updated, of course). My IT rep called back and said it couldn't be done. This seems like a pretty straightforward business requirement, and one of the huge benefits of having Active Directory, but maybe I'm mistaken. Could anyone shed some light on this?
      A) Is it possible to use dynamically updated AD groups when assigning permissions via SharePoint? (Does anyone know of a guide I could show my doubtful IT rep?)
      B) Is there a "best practice" way to go about this? I've read some debate on whether SharePoint groups or AD groups are the way to go. My main concern is dynamic updating.
      C) If this isn't available out of the box, can someone recommend third-party software that will provide the functionality I'm looking for?
    A big thanks to anyone who can help me out!

    Read the article

  • Log connections to program

    - by Zac
    Besides using iptables to log incoming connections, is there a way to log established inbound connections to a service that you don't have the source to (supposing the service doesn't log such things on its own)? What I want to do is gather some information about who's connecting, to be able to tell things like what times of the day the service is used the most, where in the world the main user base is, etc. I am aware I can use netstat and just hook it up to a cron script, but that might not be accurate, since the script could only run as frequently as once a minute. Here is what I am thinking right now:
      - Write a program that constantly polls netstat, looking for established connections that didn't appear in the previous poll. This idea seems like such a waste of CPU time, though, since there may not be a new connection.
      - Write a wrapper program that accepts inbound connections on whatever port the service runs on, but then I wouldn't know how to pass that connection along to the real service.
    Edit: It just occurred to me that this question might be better for Stack Overflow, though I am not certain. Sorry if this is the wrong place.
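    A minimal sketch of the polling idea above, assuming Linux netstat -tn output in its usual column layout; the five-second interval and print-to-stdout logging are illustrative choices, and connections shorter than the poll interval will still be missed:

        import datetime
        import subprocess
        import time

        def established():
            """Return the set of remote endpoints with ESTABLISHED TCP connections."""
            out = subprocess.run(["netstat", "-tn"],
                                 capture_output=True, text=True).stdout
            conns = set()
            for line in out.splitlines():
                fields = line.split()
                if len(fields) >= 6 and fields[-1] == "ESTABLISHED":
                    conns.add(fields[4])  # foreign address:port column
            return conns

        seen = established()
        while True:
            time.sleep(5)  # poll interval: a trade-off between accuracy and CPU time
            current = established()
            for remote in current - seen:
                print(datetime.datetime.now().isoformat(), "new connection:", remote)
            seen = current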

    Read the article

  • Is Ubuntu a bad distro for a standalone mysql database server?

    - by DhruvPathak
    I read an article here: http://www.mysqlperformanceblog.com/2011/12/08/which-linux-distribution-for-mysql-server/ It says: "On the other end there are Debian and Ubuntu. Both use tool called dpkg for package management. There isn’t a month that I log in to a system based on either distribution where there are no issues with packages consistency. Unfinished installations, unresolved conflicts are so common that it’s just beyond simple negligence. The packaging system is just not robust enough. Another problem is that one broken package may block you from installing or uninstalling anything else. Imagine that someone left system in such shape, you prepared for downtime, stopped MySQL and… error – text editor has not been properly installed, so you cannot upgrade MySQL either until the problem is fixed. In a stressful situation when downtime clock ticks – annoying at best." We prefer Ubuntu Server because of familiarity, and Ubuntu is also our development environment. Questions: Is Ubuntu commonly used in production for a MySQL database server? Is it ever worth the trouble to have one distro, e.g. Ubuntu, on the web server and another, say Red Hat, on the database server? Or is a homogeneous server pool a better choice?

    Read the article

  • Sendmail Configuration for Exchange Server

    - by user119720
    I need help with the sendmail configuration on our Linux machine. Here's the situation: I want to send email to the outside world using our Exchange server as the mail relay. But when sending email through that server, it responds "user unknown" and, to make it worse, bounces all the sent messages back to my localhost. I have already tested the configuration against external mail servers such as Gmail and Yahoo; it works without any issue and the email reaches the recipient. Most of my sendmail configuration is based on here.
    authinfo file:

        AuthInfo:my_exchange_server "U:my_name" "I:my_email" "P:my_passwd" "M:PLAIN LOGIN"
        AuthInfo:my_exchange_server:587 "U:my_name" "I:my_email" "P:my_passwd" "M:PLAIN LOGIN"

    sendmail.mc:

        FEATURE(`authinfo', `hash /etc/mail/authinfo.db')dnl
        define(`SMART_HOST', `my_exchange_server')dnl
        define(`RELAY_MAILER_ARGS', `TCP $h 587')dnl
        define(`ESMTP_MAILER_ARGS', `TCP $h 587')dnl
        define(`confCACERT_PATH', `/usr/share/ssl/certs')dnl
        define(`confCACERT', `/usr/share/ssl/certs/ca-bundle.crt')dnl
        define(`confSERVER_CERT', `/usr/share/ssl/certs/sendmail.pem')dnl
        define(`confSERVER_KEY', `/usr/share/ssl/certs/sendmail.pem')dnl
        define(`confAUTH_MECHANISMS', `EXTERNAL GSSAPI DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')dnl
        TRUST_AUTH_MECH(`EXTERNAL DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')dnl
        define(`confAUTH_OPTIONS', `A')dnl

    My first assumption was that the problem is due to authentication, as the Exchange server needs encrypted authentication (DIGEST-MD5). I have already changed this in the authinfo file (from PLAIN LOGIN to DIGEST-MD5 login), but it is still not working. I can also telnet to our Exchange server, so the port is not being blocked by a firewall. Can someone help me out with this problem? I'm really at wits' end. Thanks.
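    One thing worth checking, since the suspicion above is about authentication: which AUTH mechanisms the Exchange server actually advertises on the submission port. A quick manual test, with the hostname being the placeholder from the question:

        openssl s_client -starttls smtp -connect my_exchange_server:587
        # once connected, type:
        #   EHLO test.example.com
        # and look for a "250-AUTH ..." line listing the mechanisms the server offers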

    Read the article

  • Trying to Set up SMTP Server on Windows Server 2012

    - by datc
    I'm working on a website, and I need to test the functionality of sending email messages from ASP.NET, something like this:

        Dim msg As New MailMessage("email1", "email2")
        msg.Subject = "Subject"
        msg.IsBodyHtml = True
        msg.Body = "Click <a href='site'>here</a>."
        Dim client As SmtpClient = New SmtpClient()
        client.Host = "My-Server"
        client.Port = 25
        client.DeliveryMethod = SmtpDeliveryMethod.Network
        client.Send(msg)

    This is running from a Windows 8 workstation. I've installed the SMTP server on my Windows Server 2012 machine. The mail shows up in the mailroot/Queue folder and sits there, eventually getting deposited into Badmail. Now I have AT&T U-verse at home, and a few devices connected to the gateway, including one we'll call "My-Server". When I run SmtpDiag from say, datc@... to [email protected], I get "SOA serial number match" passed, the Local DNS (99-135-60-233.lightspeed.bcvloh.sbcglobal.net) and Remote DNS (hotmail.com) tests *not* passed, and ultimately "Connecting to the server failed. Error: 10060. Failed to submit mail to mx2.hotmail.com". When I set My-Server's IP to static and equal to the external IP, 99.135.60.233, and again run SmtpDiag, I get the SOA, Local DNS, and Remote DNS tests passed, but the same 10060 error. Same for yahoo.com, gmail.com, and so forth. Is it my ISP's job to fix this? Some PTR record missing somewhere? Is it at all possible to have a home-based SMTP server? All I want is to test my email code. Perhaps my IP address is just not "trusted" somehow. Thanks.
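    Since the question suspects a missing PTR record, a quick way to check what reverse DNS the receiving servers see for the external IP from the question:

        nslookup 99.135.60.233

    Giving nslookup an IP address makes it perform a reverse (PTR) lookup. A residential line will typically return the ISP's generic name (here, something under sbcglobal.net, per the SmtpDiag output above), which many large mail providers score as untrusted.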

    Read the article

  • Best method to redirect internal DNS to external website?

    - by ProfessionalAmateur
    We host several web-based applications outside our intranet. The URLs to these applications are long, complex, and overall not user friendly. Ex:

        http://hostingsite:port/approot/folder/folder/login.aspx    <-- (production)
        http://hostingsite:port22/approot/folder/folder/login.aspx  <-- (dev)
        http://hostingsite:port33/approot/folder/folder/login.aspx  <-- (test)

    I'd like to create internal DNS entries to allow users to access these sites with ease. Ex:

        http://prod --> http://hostingsite:port/approot/folder/folder/login.aspx
        http://dev  --> http://hostingsite:port22/approot/folder/folder/login.aspx

    I'm not familiar with the DNS process and setup; as far as I know, a DNS entry can only point to an IP address, not to a port and directory path as described above. Is this a correct assumption? I am thinking of throwing up an internal webserver that will listen for the internal DNS names and redirect to the external sites:

        http://prod --> [internal webserver] --> redirect --> http://hostingsite:port/approot/folder/folder/login.aspx

    Is there a better way to do this?
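    A minimal sketch of that internal redirect layer, here as an nginx config (any webserver that can issue redirects would do); "prod" and "dev" are the names the internal DNS would point at this box, and the target URLs are the placeholders from the question:

        server {
            listen 80;
            server_name prod;
            return 301 http://hostingsite:port/approot/folder/folder/login.aspx;
        }

        server {
            listen 80;
            server_name dev;
            return 301 http://hostingsite:port22/approot/folder/folder/login.aspx;
        }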

    Read the article

  • Is it better to always copy and delete, rather than move?

    - by nbolton
    Generally speaking, I find myself panicking when I realise that if I cancel a file move, it could cause the target or source to be incomplete. This question applies to Windows and Unix-based platforms. I can never remember exactly how the move command works in either case. For example, if you're moving a directory; does it copy the entire directory, then delete it after, or does it copy then delete each file individually? I always realise after typing something like, mv verybigdir dest that I really should have typed cp -R verybigdir dest && rm verybigdir (where the && operator only moves to the next command if the first was successful) -- or is this pointless? What happens exactly when I press Ctrl+C half way through a move? Likewise, what exactly happens on Windows when I press the cancel button? I can't count the number of times I've moved something (the last time was when using svn) and had two directories, with split contents. I guess the answer is difficult, because not all applications move groups of files in the same way.
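    A small sketch of the copy-verify-delete pattern the question describes, assuming GNU coreutils on the Unix side; the recursive diff as a verification step is an illustrative addition:

        # Copy first (preserving attributes), verify the copy recursively,
        # and delete the source only if both previous steps succeeded.
        cp -a verybigdir dest/ \
          && diff -r verybigdir dest/verybigdir \
          && rm -rf verybigdir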

    Read the article

  • Xen domU mem-set issue

    - by Casper Langemeijer
    I'm running into a problem on my Xen 4.0.1 server (Debian Squeeze). My host has 32G of memory, and Domain-0 has 2048M assigned to it (scaled down with xm mem-set Domain-0 2048); top in Domain-0 confirms this. I created a virtual machine config file (using xen-tools) with the following options:

        memory = '512'
        maxmem = '2048'

    Both host and guest machines are running the standard 2.6.32-5-xen-amd64 Debian kernel. 'xm create' creates a virtual machine with 512MB of memory, as expected. But then 'xm mem-set domU 1024' will not expand the memory to 1024MB. Running 'xm mem-set domU 400' does set the memory to about 400MB, and 'xm mem-set domU 1024' after that expands the memory back to 512MB, but no further. Based on this, you would say that xm ignores maxmem and silently caps it at 512, but in the output of xm top the MAXMEM column reads 2G while the MEM column will not go over 512M. The output of xm list tells another story: it shows 1024 when I run 'xm mem-set domU 1024'. I've googled myself all the way around the internet for this issue and found that most people don't scale back Domain-0. I know I've seen a bug report about the issue I'm experiencing, but I can't find it anymore. Does anyone see what I'm doing wrong here? Update: I just upgraded my kernel to the one provided by Debian backports, and the issue has gone away.

    Read the article

  • Windows: force user to use specific network adapter

    - by Chad
    I'm looking for a configuration/hack to force a particular application, or all traffic from a particular user, to use a specific NIC. I have a legacy client/server app with a "security feature" that limits connections based on IP address. I'm trying to find a way to migrate this app to a terminal server environment. The simple solution would be for the development team to update the code in the application, but in this case that's not an option. I was thinking I might be able to install a VMware NIC for each user on the terminal server and do some type of scripting to force each user account to use a specific NIC. Anybody have any ideas on this?
    EDIT 1: I think I have a hack to work around my specific problem, but I'd love to hear a more elegant solution. I got lucky in that the software reads the server IP address out of a config file, so I'm going to make a config file and a custom program-files directory for each user, then add a VMware NIC for each user and put each server IP address on a different subnet. That will force the traffic for a particular user to a particular IP address, but it's really messy, and all the VM NICs will slow down the terminal server. I'll set up a proof of concept Monday and let the group know how it affects performance.

    Read the article

  • Reverse proxy for mailserver (SMTP + HTTP for web client)

    - by ba
    I'm looking at doing some reverse proxy work for a mail server with a corresponding web client. Both servers are running on the same machine; this is not a server with a high load. :) The solution I've discussed with friends is to have the mail server/web client on our internal network, and to put a reverse proxy in the DMZ to service both SMTP and web-client HTTP traffic to the mail server on the internal network. From what I understand, this is the recommended secure solution? So far, for the SMTP-proxy part I've thought of using Postfix, which will receive mail, apply Spamhaus and similar anti-spam measures, and, if it all checks out, pass the mail to the mail server on the inside. The mail server on the inside will send all outgoing mail to the proxy, which will then send it out onto the Internet. For the web client I'm not sure exactly which software I should be running on the proxy machine. I've been thinking about using Squid -- but that's basically based on the fact that I know Squid is an HTTP proxy. The web client data will be sent out over SSL. Reading around some here on Server Fault, I've seen other people using Apache with mod_proxy + mod_security for similar situations. Am I thinking correctly for this solution? What software would you guys use, and with which modules? Thanks in advance for the help! :)
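    For the web-client half, a minimal sketch of the Apache mod_proxy setup mentioned above, running on the DMZ proxy; the hostnames and certificate paths are hypothetical, and it assumes mod_ssl, mod_proxy and mod_proxy_http are loaded:

        <VirtualHost *:443>
            ServerName webmail.example.com
            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/webmail.pem
            SSLCertificateKeyFile /etc/ssl/private/webmail.key

            # Relay all web-client traffic to the mail host on the internal network.
            ProxyRequests Off
            ProxyPass        / http://mail.internal.example/
            ProxyPassReverse / http://mail.internal.example/
        </VirtualHost>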

    Read the article

  • How to set up Joomla CMS as a backend for an iPhone app

    - by srik
    I would like my iPhone app to get dynamic content off the net, managed through a CMS. I have gone ahead and installed Joomla on my server and will be using the Joomla web interface to create and manage content. I would now like the iPhone app to log in to my server and fetch the content. I do not want complete web pages for my iPhone app; instead, I want the content in XML, JSON or some other serialized format, so that I can use the data in a custom layout native to the app. So I am looking for two things in particular:
      1. How to set up HTTP-based authentication for my iPhone app to access data from my server.
      2. How to access the content in a serialized format (XML, JSON, etc.).
    Are there plugins/extensions/components I can use to achieve this? Any advice on how this can be achieved would be helpful. I am completely new to setting up/using a CMS.

    Read the article

  • What to do with old laptop screens?

    - by Lord Torgamus
    This question is inspired by another SU question I came across earlier today: What to do with old hard drives? It made me think about two long-dead laptops I have with perfectly good screens still inside. One is a Dell Inspiron 5100 and the other is an Averatec E1200, but responses need not be geared towards those particular models' screens. Rules, based heavily on the original question's. Objectives and suggestions to keep in mind when you post an answer:
      - Answers should showcase your geekiness, be plain ol' fun, serve a social purpose or benefit the community.
      - Your answer need not be limited to only one screen. For a really good answer, I'll go out and buy additional leftover screens.
      - Your answer need not be limited to one project per screen.
      - If additional accessories need be purchased, make sure they are common. Don't tell me to get a moon rock or something.
      - The projects you suggest should serve a useful purpose; art is nice, but functional art is way better.
    Thanks in advance, folks.
    EDIT: Found another related question: Fun projects to do with an old 17" LCD monitor.
    EDIT 2: I, for one, am enjoying the new outpouring of creativity here. Best fifty bucks... I mean, rep points... I ever spent.
    EDIT 3: That does it. At the end of the week, there was a tie for most votes between the accepted answer and the game platform answer. The game platform answer was cooler, but less reasonable as a project to actually do; in other words, it was more moon rocky. Unfortunately, I think fencepost had the best comment on the topic, which is that displays on their own have no good interface. Thanks for playing, everyone!

    Read the article

  • Creating a Logical network diagram

    - by user273284
    I'm a student, and I have been assigned the task of creating a logical network diagram for the following scenario: There are two buildings, the first being the head office and the second the branch. The data centre is in the head office; it contains a domain controller, a mail server, a file server and a web server, and it provides wired and wireless access to the staff. The branch building is new and does not yet have a network. The two buildings must be connected using a VPN connection. The branch building will not have any servers, just the network devices that provide connectivity; the users in the branch building will be connected to the head office over the VPN. I created a diagram based on this scenario, but my teacher rejected it, saying that it does not follow the Cisco hierarchical model and that the servers were not placed correctly in the diagram. I just want some help in this matter so that I can create my network diagram correctly. If anyone could upload a picture of how the logical diagram should look for this scenario, that would be helpful; any other resources would also be great.

    Read the article

  • VPS goes slow at more than 20 users online at the same time

    - by hachiari
    I have a 512MB VPS (burstable to 1GB). Somehow, the site goes slow when there are about 10 users, and becomes impossible to load at 20 users online at the same time. I wonder what could be the problem. The bandwidth connection of the VPS is 1Gbps. Here are some settings on my VPS:

        KeepAlive Off
        <IfModule prefork.c>
        StartServers          7
        MinSpareServers       7
        MaxSpareServers      10
        ServerLimit          64
        MaxClients           64
        MaxRequestsPerChild   0
        </IfModule>

    my.cnf settings: calculated max memory 300MB.
    Output from UnixBench:

        TEST                                    BASELINE      RESULT   INDEX
        Dhrystone 2 using register variables    376783.7  13429727.4   356.4
        Double-Precision Whetstone                  83.1      1137.5   136.9
        Execl Throughput                           188.3      1637.4    87.0
        File Copy 1024 bufsize 2000 maxblocks     2672.0    148868.0   557.1
        File Copy 256 bufsize 500 maxblocks       1077.0     79430.0   737.5
        File Read 4096 bufsize 8000 maxblocks    15382.0   1410009.0   916.7
        Pipe Throughput                         111814.6   4419722.0   395.3
        Pipe-based Context Switching             15448.6    561505.1   363.5
        Process Creation                           569.3     10272.7   180.4
        Shell Scripts (8 concurrent)                44.8       514.3   114.8
        System Call Overhead                    114433.5   3537373.8   309.1
        FINAL SCORE                                                    295.0

    I am afraid that the VPS company limits the number of connections to the VPS... is that possible? The server is in Japan, but the site has global traffic (some of it from countries with slow connections). Could this be the problem? This is a serious problem :( my site just can't grow if this keeps on happening... please tell me if you have any idea. Thank You, Bryant
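    One way to narrow this down is to reproduce the load deliberately rather than waiting for visitors; a sketch using ApacheBench (ab, shipped with Apache), where the concurrency flag approximates the 20 simultaneous users from the question and the URL is a placeholder:

        # 500 requests total, 20 at a time; watch the response-time percentiles,
        # and run top/vmstat on the VPS meanwhile to see what saturates first
        ab -n 500 -c 20 http://www.example.com/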

    Read the article

  • Cisco IOS ACL: Don't permit incoming connections just because they are from port 80

    - by cjavapro
    I am going largely from memory here and may not be correct on all of this. On a Cisco 851 (IOS) that uses a BVI or a bridge-route (the servers on the inside are configured with static and public IP addresses), I would apply two access lists (both ending with deny ip any any log) on FastEthernet4 (the WAN port): one for FA4 in and another for FA4 out. FA4 out would have a line like

        access-list 110 permit tcp 98.76.54.0 0.0.0.255 gt 1023 any eq http

    I think this means hosts in 98.76.54.* with a source port of at least 1024 can connect to any other machine on destination port 80. So then I have to allow the response to the HTTP connection; FA4 in would have a line like

        access-list 120 permit tcp any eq http 98.76.54.0 0.0.0.255 gt 1023

    Now the problem with that is that anybody on the outside can set their source port to 80 and then connect to any inside port that is at least 1024. How do we prevent this and require the incoming data to be a response to the outgoing data?

    Read the article

  • What's the best cloud backup solution for a small-scale server environment?

    - by nbv4
    I have a server that runs a postgres database containing about 200MB of data. Currently I have a cron job set up on my home computer which:
      - ssh's into my server
      - runs a remote script which makes a backup of the database
      - scp's that dump over to my local hard drive for storage (each dump is 20MiB)
      - does this every six hours (one month of backups is roughly 2GiB)
    The problem with this setup is that if my local machine goes down for whatever reason, no backups will be made. Also, I can't have the cron run from the server, because I can't scp from my server to my local machine (firewalls and all that crap). My local machine is running Ubuntu 10.04, and my server is Ubuntu 9.10 server edition. I looked into Ubuntu One, but currently it's GUI-only. I also looked into Dropbox, but it's a pain in the ass to get set up in Linux without GUI support. Amazon S3 looks good, but it's not free (though dirt cheap). Is there any other alternative I should look into? I'd prefer something where I can just have my script dump the database into a directory and have the backup service 'watch' that folder and sync accordingly. I could maybe also have my local machine sync to the cloud backup, so I'd have even more redundancy, plus easy access to my backups for use in testing. Edit: My server is a VPS, so whatever solution I end up using has to be 100% software-based.
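    For reference, a minimal sketch of the pull-style dump job described above, assuming key-based ssh login and a database named mydb (both placeholders); whichever machine ends up running it, the same few lines apply:

        #!/bin/sh
        # Dump the remote database, compressing in transit; one file per run.
        STAMP=$(date +%Y%m%d-%H%M)
        ssh user@myserver "pg_dump mydb | gzip" > /backups/mydb-$STAMP.sql.gz

        # Keep roughly one month of history, as in the current setup.
        find /backups -name 'mydb-*.sql.gz' -mtime +31 -delete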

    Read the article

  • Re-packaging commercial software into RPM packages

    - by gac
    The situation is this: I have a small CentOS 5 "cluster" (currently 7 machines, with potential for more) which runs a commercially available software package that's distributed essentially in tarball format (it's actually a zip file with a mixture of Windows/Linux binaries and an installation shell script, with no potential for automation). I'd like to re-package this somehow into an RPM package (ideally one I can throw onto a self-hosted yum repository) in order to keep these "cluster" machines both up to date and consistent. I could do 7 manual installations, but there's scope for error. As I understand it, I'll need to accomplish the following tasks:
      - add a non-privileged user to the target system, for running the daemon without unnecessary root privileges
      - package up the binary files themselves from the final installation location on a separate build machine (probably under /opt/package for sanity's sake); no source is available
      - add a firewall hole so that end users can communicate with the "cluster" nodes
      - add a cron task which can start the daemon on @reboot
    I'm turning up plenty of good packaging resources so far, but all are based on the traditional method (i.e. as if I were the vendor packaging up my source files), rather than re-packaging a ton of binary files from an already-installed instance of the application, which is the only option available to me. Anyone have any good resources they can share for achieving this goal? Thanks!
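    A rough sketch of what a binary-only spec file covering those four tasks could look like; every name below (package, user, port, paths) is a placeholder, the tarball is assumed to be a snapshot of the installed /opt tree, and the firewall/cron steps are deliberately simplistic (the iptables rule, for one, won't survive a reboot as written):

        Name:     vendorapp
        Version:  1.0
        Release:  1
        Summary:  Repackaged vendor application (binaries only)
        License:  Proprietary
        Source0:  vendorapp-files.tar.gz

        %description
        Vendor binaries captured from an installed instance and repackaged.

        %prep
        %setup -q -n vendorapp-files

        %install
        mkdir -p %{buildroot}/opt/vendorapp
        cp -a . %{buildroot}/opt/vendorapp

        %pre
        # Non-privileged account for the daemon.
        getent passwd vendorapp >/dev/null || \
            useradd -r -d /opt/vendorapp -s /sbin/nologin vendorapp

        %post
        # Firewall hole and @reboot cron entry.
        iptables -I INPUT -p tcp --dport 9000 -j ACCEPT
        echo '@reboot vendorapp /opt/vendorapp/bin/daemon' > /etc/cron.d/vendorapp

        %files
        %defattr(-,vendorapp,vendorapp,-)
        /opt/vendorapp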

    Read the article

  • Dead Linux server - need help and options

    - by Choi S.
    All, I have a Dell PE 1950 with 2 SATA drives in a software RAID1. The OS is CentOS 5.5 (2.6.18.x). Starting this afternoon we received hardware errors (something on the bus is bad, E171F) and the machine became unresponsive. We hard-booted it and it came back up for about 5 hours, but then it happened again. I'm trying to figure out our options. Unfortunately we do not have similar hardware, but I do have a small desktop that I can use. I was contemplating putting one of the drives into the desktop and then starting it up. My goal was to then P2V it using VMware Converter, but apparently the free 5.x version doesn't support hot cloning/converting of a RAID volume; only the Enterprise 4.x version of Converter does. My questions are:
      1.) Is putting a single drive out of a RAID1 pair into another piece of hardware safe? Based on my research and understanding it appears to be, but I would like confirmation.
      2.) Is there any workaround for VMware Converter not supporting RAID volumes during a hot clone/convert session?
      3.) Are there other options I'm overlooking?
    Thanks in advance for reading and responding. --Choi S.

    Read the article

  • nginx short URLs for MediaWiki

    - by William
    I am trying to set up short URLs for a MediaWiki site. The wiki is in a subdirectory (http://www.example.com/mywiki). I've already set up rewrites in /etc/nginx/sites-available so that example.com redirects to example.com/mywiki. Currently a URL looks like http://www.example.com/mywiki/index.php?title=Main_Page, and I want to clean it up so that it looks like http://www.example.com/mywiki/Main_Page. I am having quite a bit of trouble doing this; I am not familiar with regular expressions or with the syntax the nginx config files use. This is what I currently have:

        server_name example.com www.example.com;

        location / {
            rewrite ^.+ /mywiki/ permanent;
        }

        location /wiki/ {
            rewrite ^/mywiki/([^?]*)(?:\?(.*))? /mywiki/index.php?title=$1&$2 last;
        }

    The second rewrite is obviously the one that's broken. It is based off of "Page title -- nginx rewrite--root access" in the MediaWiki documentation. When I try to load the site, the browser tells me I get infinite redirects. Does anyone know how I should go about fixing this issue? Or rather, what is the correct way to implement this, and what do all those symbols mean?
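    One mismatch is visible in the config itself: the title rewrite lives in location /wiki/, but the wiki's URLs start with /mywiki/, so those requests fall through to location /, whose unconditional permanent rewrite sends them back to /mywiki/ -- an infinite redirect. A hedged sketch of the same rule with the prefixes aligned (untested against this exact setup):

        location /mywiki/ {
            # Map /mywiki/Page_Title onto index.php internally, without redirecting.
            rewrite ^/mywiki/([^?]*)$ /mywiki/index.php?title=$1 last;
        }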

    Read the article

  • Apache Named Virtual Hosts and HTTPS

    - by Freddie Witherden
    I have an SSL certificate which is valid for multiple (sub)domains. In Apache I have configured this as follows. In /etc/apache2/apache2.conf:

        NameVirtualHost <my ip>:443

    Then for one named virtual host I have:

        <VirtualHost <my ip>:443>
            ServerName ...
            SSLEngine on
            SSLCertificateFile ...
            SSLCertificateKeyFile ...
            SSLCertificateChainFile ...
            SSLCACertificateFile ...
        </VirtualHost>

    Finally, for every other site I want to be accessible over HTTPS I just have:

        <VirtualHost <my ip>:443>
            ServerName ...
        </VirtualHost>

    The good news is that it works. However, when I start Apache I get warning messages:

        [warn] Init: SSL server IP/port conflict: Domain A:443 (...) vs. Domain B:443 (...)
        [warn] Init: SSL server IP/port conflict: Domain C:443 (...) vs. Domain B:443 (...)
        [warn] Init: You should not use name-based virtual hosts in conjunction with SSL!!

    So, my question is: how should I be configuring this? Clearly from the warning messages I am doing something wrong (although it does work!), but the above configuration was the only one I could get to work. It is somewhat annoying, as the configuration files have an explicit dependence on my IP address.

    Read the article

  • How to make Firefox file associations consistent with Ubuntu file associations?

    - by wbharding
    This seems to be a pretty commonly Googled question, but one for which there are no answers:
      http://www.linuxquestions.org/questions/linux-software-2/firefox-download-mime-types-378902
      http://www.birkit.com/content/kubuntu-linux/internet/firefox/fix-file-associations-in-firefox.html
    being two links amongst the many. The gist of what I want to accomplish is to have Firefox understand the file associations of what I download without me having to manually map all of them myself. Gnome knows the file extensions, so I would have expected that Firefox could just use the already-known file mappings there to open the right application (as I presume Chrome does). But it doesn't -- at least not for me, using Firefox 4, and not by default. When I click on a downloaded file right now, Firefox always has to ask me what application should be used to open it. A handful of Google results tell me that I can re-associate my file extensions by deleting ~/.mozilla/firefox/[profile name]/mimeTypes.rdf, but while deleting that file does in fact result in a new mimeTypes file being generated, the new one is just as barren as the old one had been. Based on the number of unanswered questions on the Googlesphere, I know this is a very common problem for Ubuntu users, but it seems to be one for which nobody has chimed in with a good solution. Maybe Super User can finally be the panacea for us all?

    Read the article
