Search Results

Search found 13430 results on 538 pages for 'easy'.


  • Any Recommendations for a Web Based Large File Transfer System?

    - by Glen Richards
    I'm looking for a server software product that:
    - Allows my users to share large files with:
      - the general public
      - securely, with 1 or more people (notification via email, optionally with a token that gives them x period of time to download)
    - Allows anyone in the general public to share files with my users, perhaps by invitation
    - Is user friendly enough to allow my users to use it without having to bug me as the admin
    - Can be installed on our own server (we don't want shared data sitting on anyone else's server)
    - Is a web based solution; using some kind of secure comms channel would be good too, e.g. ssh
    - Handles files that could be over 1 GB
    I found the question below, but WebDAV does not sound user friendly enough: http://serverfault.com/questions/86878/recommendations-for-a-secure-and-simple-dropbox-system
    I've done a lot of searching, but I can't get the search terms right. There are too many services that provide this, but I want something we can install on our own server. A last resort would be to roll my own. Any ideas appreciated. Glen
    EDIT: Sorry Tom and Jeff, but Glen specifically says that he's looking for a 'product', so given that I specialise in this field I thought that my expertise in this area may have been of use to him. I don't see how him writing services is going to be easy for him to maintain going forward (large IT admin overhead) or simple for his users and the general public to work with.


  • Removing forward lookup zone broke our site - why?

    - by user102469
    I'm fairly new to the job and trying to get to grips with the infrastructure here. We've moved a site from being locally hosted on our own network to an external host (1&1). I've transferred the DNS hosting from the previous DNS host to 1&1 to keep things simple. Once everything had gone through, visitors that were external to our network were being directed to the new site on 1&1, but requests from within our network were still going to our own server. I noticed in the DNS server that there was a forward lookup zone for the site still pointing to our own server. My (admittedly simplistic) understanding was that pausing that zone would then cause the DNS server to get the address for the site from our external DNS servers, and our users would start landing on our new site. However, what happened instead was that they were being met with "page not found" type errors. I've resolved it by modifying the forward lookup zone A record to point to the external web server, but I would like to get an understanding as to why pausing the zone didn't work. Would deleting the zone work? I am reluctant to try that as creating it again will not be as easy as simply pressing "start". Many thanks.


  • Swap static public IPs without creating DNS conflicts?

    - by Jakobud
    Our ISP is Comcast and we have 5 static public IPs from them that we use for various services, including customers connecting to our network, VPN, web, DNS, etc... We need more IP addresses from Comcast. Unfortunately, Comcast is telling us that they can't just simply give us 5 more addresses. They only give static IP addresses in blocks of 1, 5 or 13. In order for us to get more static IPs, they have to take away our current 5 static IPs and give us 13 new ones. How do we make this transition without causing all sorts of DNS chaos? We run public DNS servers, so we can make the DNS changes ourselves, but it will take some time obviously for those DNS changes to propagate throughout the internet. Are there any easy ways to make this transition? Like create some type of fallback DNS entry or something? Surely there must be some sort of procedure for this kind of thing. The Comcast support guy was useless.
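
    The usual way to soften a renumbering like this is to shrink the TTLs on the affected records well ahead of the cutover, then swap the A records and raise the TTLs again once the new block is confirmed working. A minimal sketch, assuming a BIND-style zone and made-up addresses (the registrar glue records for the public nameservers have to be updated separately):

      $TTL 300                      ; dropped from 86400 a few days before the swap
      www   IN  A   203.0.113.10    ; current Comcast address
      ns1   IN  A   203.0.113.11    ; public DNS server - don't forget its glue record at the registrar
      ; at cutover: replace the addresses with the new block, bump the SOA serial,
      ; and raise $TTL back to its normal value once the new addresses check out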


  • Recovering a damaged microSDHC

    - by djechelon
    I just bought from eBay a Kingston 32GB microSDHC that was advertised as defective. The seller said that there could be formatting problems or problems with the transfer of large files. Unfortunately, when I got it, it was a total mess:
    - My Nikon camera doesn't read it at all (OK, maybe it doesn't support 32GB)
    - My Linux laptop doesn't mount it: can't read superblock
    - The same laptop refuses to mkfs.msdos because it failed whilst writing the reserved sector
    - The same laptop, under Windows, doesn't read or format the card
    - HTC HD2 mounts the MMC and lets me write to it via USB, but is unable to open the just-written files
    OK, folks, now you would say I should go through a Paypal complaint... that's not that easy. I consciously bought a half-price card that was known to show some defects, and Paypal complaints take time. Obviously, I can't accept that somebody sold me a completely useless computer decoration, so I'll keep that as a last option. My question is: do you know a way, under either Linux or Windows, to thoroughly scan, test and possibly repair memory cards, even if I have to lose some percentage of space because of bad sectors? If I can keep at least half of the card intact it would certainly be fine. I used to do bad-sector marking with hard disks in the past. I almost forgot:
      MONSTR:/home/djechelon # fsck /dev/mmcblk0p1
      fsck from util-linux-ng 2.17.2
      dosfsck 3.0.9, 31 Jan 2010, FAT32, LFN
      Read 512 bytes at 0:Input/output error
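
    For the scan-and-salvage part, the stock Linux tools go a reasonable distance: badblocks can map out the unreadable sectors, and the filesystem tools can check for them again while formatting. A rough sketch, assuming the card still shows up as /dev/mmcblk0p1 - note the write test erases the card, and if the card's controller itself is dying, no amount of remapping will hold:

      badblocks -sv -o bad.txt /dev/mmcblk0p1    # read-only scan of the partition, log unreadable blocks
      badblocks -wsv -o bad.txt /dev/mmcblk0p1   # destructive read/write test (wipes everything on it)
      mkfs.vfat -c /dev/mmcblk0p1                # recreate FAT32, checking for bad blocks as it goes
      mkfs.ext4 -c /dev/mmcblk0p1                # or: ext4 with its own read-only bad-block check (-cc = write test)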


  • Best photo management software?

    - by Niels Basjes
    Hi, what I would like is a single piece of software (or a smart combination of tools) that allows me to manage my photos in a better way than what I've found so far.
    1. Tags: Primarily I need a way of tagging the images, so I can manually tag photos the same way we tag questions here at SO/SF/SU. I want this software to place a lot of the tags automagically (obvious things like date and resolution).
    2. Face recognition: What I would really like is a feature that can recognize faces in images and place tags with the name of the person. So far I've only heard of one online photo system that can do that (Picasa) and not yet of any offline tool.
    3. Version database: I must have some way of having a central GIT/SVN/... that contains all images. I had a hard drive corruption a few years ago and it took me a long time to figure out which images had been damaged. I always want to be able to go back to what the camera produced.
    4. Website: I want to be able to generate a website (a few 'tag' specific websites) based on the actual content.
    5. Easy bulk uploading: Many photo tools have a one-on-one uploading option. I prefer simply 'throwing' my images on a file server under Linux (Samba) and letting the system automagically integrate, tag, recognize, etc. all images.
    OK, I know these are a bit much. Perhaps you guys have some suggestions about existing tools that can make this possible, or even a complete system that does this.
    EDIT: To clarify on the OS: I prefer Linux for any 'server' task and Windows XP for any 'desktop' task.
    Thanks for all your input. Niels Basjes


  • File permissions question

    - by Matthew Robert Keable
    I just switched my site's server from Windows to Linux, and am finally able to control file permissions from my ftp. So, seeing that all permissions were 705 by default (and not wanting just anyone to have permission to execute), I went and changed everything to 744. Now, gif and jpg links don't work, pdf download links don't work, php links don't load, and mov files don't play. Conversely, all html files work perfectly. Setting things back doesn't seem to help. Even setting to 777 gets me nowhere. Any ideas on what might be going wrong? I've been googling file permissions all day (solved that problem with the Windows-Linux switch, which has bred a new problem), and I don't think anything I can find has escaped my attention. The site: absis-minas.com Go easy on a n00b. I took up learning php out of interest, and wound up delving into server management issues due to a very simple line of code not working the way it was supposed to. Thanks!
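
    One pattern worth ruling out (a hedged guess, not a diagnosis of this particular host): on Linux a directory needs the execute bit in order to be entered, so a blanket 744 can let the web server see a directory listing but not open anything inside it. The common baseline on shared hosts is 755 for directories and 644 for files, applied separately - /path/to/site below is a placeholder for the real document root:

      find /path/to/site -type d -exec chmod 755 {} \;   # directories: others need r-x to traverse
      find /path/to/site -type f -exec chmod 644 {} \;   # files: read alone is enough for static files and PHP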


  • How can I set up a local nameserver and modify DNS zones on it?

    - by Joe Hopfgartner
    This is a follow up to this question. I am having an issue with a router that doesn't support hairpinning properly; see the link above for details. Now I want to set up a local DNS server that hosts on our LAN can use to resolve public hostnames (usual web browsing...). Additionally, I want to modify certain zones. On our LAN we have some servers serving resources that are not available in our public DNS zone, and we always have to configure our local LMHosts files accordingly. For example, we have a staging installation with a new feature running on a local web server, and we cannot access it by IP directly because the website runs in a named virtual host container, so we have to configure the LMHosts file to point the domain to the local IP address. And now we also have the hairpinning issue. So my question is: what software can I use? Will BIND do the job? I just need to insert some A entries into the zone, as easily as possible. We have local Linux/Ubuntu servers.
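
    BIND will do both jobs at once: recursion (or forwarding) for ordinary public lookups, plus an authoritative local zone whose A records override what the public zone says. A minimal named.conf sketch - staging.example.com, the forwarder address and the file path are placeholders; dnsmasq with a few extra lines in /etc/hosts is an even lighter option if only a handful of names need overriding:

      options {
          recursion yes;                    // resolve everything else normally
          forwarders { 203.0.113.53; };     // or the ISP/public resolver of your choice
      };
      zone "staging.example.com" {
          type master;                      // authoritative only for the LAN view
          file "/etc/bind/db.staging";      // zone file holding the local A records for the staging hosts
      };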


  • Change font size for "Default Paragraph Font" in Word

    - by Richard Gadsden
    I have a document where the built-in style "Default Paragraph Font" has been set to a particular size. It shouldn't have a size - it should be inheriting from the paragraph style (that's the whole point of the style). If I go through the user interface, I can't modify this style (the modify button / dropdown is greyed out) While I can work around this in most places, it creates problems for the Table of Contents in particular, as that is forced to be in this style and it overrides the font size from the styles like TOC 1 (etc). I can set the font size through VBA - ActiveDocument.Styles("Default Paragraph Font").Font.size = 10 sets it to ten point, but I can't work out how to reset it back to inherit. At the moment, my table of contents is set to be all in the same size, but really TOC 1 should be bigger than TOC 2. Does anyone have any suggestions for how to fix this? One approach is to use the organizer to copy over the style from a working document, but ideally I'd like a way to resolve the problem without doing that - especially as that's not an easy approach to automate.


  • Red Hat server minimal install

    - by chmeee
    In a farm of virtualized Red Hat servers, there's a need to install a minimal system for security reasons. Minimal installs have several advantages (even beyond security):
    - Less exposure to vulnerabilities (if you don't need it, don't install it)
    - Better update process (fewer packages to update, less probability of breaking the system)
    - Better performance (no unneeded daemons or processes)
    - The less software you have, the easier it is to harden the system
    Unfortunately, this is not easy because the "Minimal Installation" option on Red Hat contains lots of unnecessary packages. There is an added challenge as the farm is running Oracle iAS. I've been told that iAS has dependencies on a local graphical environment, so in the end every server in the farm has GNOME, X, etc. I've been searching the web, and one solution seems to be making a kickstart script that will install only the necessary packages, but I find this difficult and have several doubts about how to maintain the system dependencies afterwards. How do you install minimal Red Hat servers? Is it OK to use kickstart, or will I have dependency problems during the installation or in updates? Is there any way to avoid installing the graphical environment for iAS?
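
    Kickstart does let you start from a much smaller package set than the stock minimal option, and dependency solving still happens at install time, so an explicit list does not normally break later updates. A hedged fragment only - the exact groups differ between releases, and whatever iAS actually needs has to be checked against the Oracle documentation:

      %packages --nobase         # skip the @base group; the core group is always installed
      openssh-server             # add back only what the farm actually needs
      -NetworkManager            # a leading '-' excludes an otherwise pulled-in package
      %end                       # required on RHEL 6 kickstarts; RHEL 5 ones omit it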


  • Help me understand Ubuntu user/group permissions.

    - by Bartek
    I'm beginning to deal with more than one user on my system (it's a VPS serving some sites) and I need to make sure I understand how group permissions work. Here's my setup: I have an account named "admin" - it's basically the primary account that is used for serving most of the sites that I control myself. Now, I added a second account named "ville", as one of my users wants to be able to administer that site. So, I can do this the easy way and just chown their domains folder under the ville user and voilà, they have permission to do whatever they need and so forth. However, let's say I want to also give the admin user access to the files (modifying and all) - how can I put both users into the same group and give them both permission? I've tried doing: sudo usermod -a -G admin ville to add ville into the admin group, but ville still cannot edit files owned by admin. Permissions for the primary directory for the ville user are read/write for both owner and group, and the current group for the files is admin:admin, but ville still can't write into the directory. So, what should I be doing here to get this right and secure at the same time? Thank you.
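
    For what it's worth, a sketch of the usual shared-group recipe (the path is made up). One detail that often trips this exact setup: group membership is only picked up at login, so after the usermod, ville has to log out and back in (or use newgrp admin) before the new group applies to his processes.

      sudo usermod -a -G admin ville            # as already done; takes effect at ville's next login
      id ville                                  # shows the database view; existing sessions keep the old groups
      sudo chgrp -R admin /home/ville/domains   # make admin the group owner of the shared tree
      sudo chmod -R g+rwX /home/ville/domains   # group read/write; capital X sets execute only on directories
      sudo chmod g+s /home/ville/domains        # setgid: files created later inherit the admin group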


  • CPU-adaptive compression

    - by liori
    Hello, let's assume I need to send some data from one computer to another, over a pretty fast network... for example a standard 100Mbit connection (~10MB/s). My disk drives are standard HDDs, so their speed is somewhere between 30MB/s and 100MB/s. So I guess that compressing the data on the fly could help. But... I don't want to be limited by CPU. If I choose an algorithm that is intensive on CPU, the transfer will actually go slower than without compression. This is difficult with compressors like GZIP and BZIP2 because you usually set the compression strength once for the whole transfer, and my data streams are sometimes easy, sometimes hard to compress - this makes the process suboptimal because sometimes I do not use the full CPU, and sometimes the bandwidth is underutilized. Is there a compression program that would adapt to the current CPU/bandwidth and hit the sweet spot so that the transfer is optimal? Ideally for Linux, but I am still curious about all solutions. I'd love to see something compatible with GZIP/BZIP2 decompressors, but this is not necessary. So I'd like to optimize total transfer time, not simply the amount of bytes to send. Also, I don't need real-time decompression... real-time compression is enough. The destination host can process the data later in its spare time. I know this doesn't change much (compression is usually much more CPU-intensive than decompression), but if there's a solution that could use this fact, all the better. Each time I am transferring different data, and I really want to make these one-time transfers as quick as possible, so I won't benefit from getting multiple transfers faster due to stronger compression. Thanks,
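
    I'm not aware of a classic tool that retunes gzip/bzip2 levels mid-stream, but two things are easy to measure: a fixed fast level (gzip -1 or lzop) usually keeps the CPU ahead of a 100Mbit link, and newer zstd builds have an --adapt mode that adjusts the compression level to however fast the pipe is draining. A sketch for timing the alternatives - the host name and paths are placeholders:

      time tar -cf - /data | ssh user@dest 'tar -xf - -C /restore'                      # no compression at all
      time tar -cf - /data | gzip -1 | ssh user@dest 'gzip -d | tar -xf - -C /restore'  # fast fixed level
      time tar -cf - /data | zstd --adapt | ssh user@dest 'zstd -d | tar -xf - -C /restore'  # adaptive, if zstd is available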


  • Parsing text files

    - by d03boy
    I encountered a situation tonight where I wanted to parse a text file. I had a very, very long word list that contained English words delimited by lines. I wanted to get rid of every word (or line) that was longer than 7 characters. This would be simple in Linux but I can't seem to find a simple solution in Windows XP. I tried using Notepad++ regular expression search but that was a huge failure. I tried using the expression .{6,} without finding any matches. I'm really at a loss because I thought this sort of thing would be extremely easy and there would be tons of tools to accomplish a task like this. It seems like Notepad++ supports every other feature in the world except the very basic ones that seem the most obvious. Another one of my goals was to put some code before and after the word on each line, so that
      aardvark
      apple
      azolio
    would turn into
      INSERT INTO Words (word) VALUES ('aardvark');
      INSERT INTO Words (word) VALUES ('apple');
      INSERT INTO Words (word) VALUES ('azolio');
    What suggestions/tools/tips do you have to accomplish tasks similar to this in Windows XP?
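
    For what it's worth: in a full regex engine, .{6,} matches any line of six or more characters, so it wouldn't express "longer than 7" anyway, and older Notepad++ releases (before the PCRE-based engine) reportedly don't support the {n,} quantifier at all, which would explain the zero matches. One way to do both steps inside Notepad++, assuming a build whose regex mode supports {n,} and backreferences (a sketch, not the only route):

      Find what:     ^.{8,}$
      Replace with:                    (nothing - over-long words become empty lines)

      Find what:     ^(.+)$
      Replace with:  INSERT INTO Words (word) VALUES ('\1');
                                       (empty lines are skipped, since .+ needs at least one character)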


  • Bind a key to a commandline command in Mac OS X?

    - by Stefan Lasiewski
    I have a Mac Powerbook running Leopard (10.5.8). Does Leopard provide an easy way to bind keys to commands which are typically run on the commandline? For example, I can open up Terminal.app and run the command /System/Library/Frameworks/ScreenSaver.framework/Resources/ScreenSaverEngine.app/Contents/MacOS/ScreenSaverEngine which will activate the screensaver and lock my screen. What if I want to bind 'Apple-key L' to this command and execute this globally, regardless of which application is in use at the moment? Can I do this, or can I only run ScreenSaverEngine from a Terminal window? I tried to set up global keyboard shortcuts, but it seems that this won't allow me to bind a key to an arbitrary shell command: Note: You can create keyboard shortcuts only for existing menu commands. You cannot define keyboard shortcuts for general purpose tasks such as opening an application or switching between applications. I tried to set up a application keyboard shortcut, but commands like ScreenSaverEngine don't seem to be an application. Note that this Screensaver/Lock screen is just one example. I have come across other nifty commands which I might want to bind to a key-combination as well. I can do this in Gnome and Windows (with varying success). How about with Leopard? Should I be looking at doing this with AppleScript? (I haven't used that since the Hypercard days ...)


  • Epson Artisan 800 on Ubuntu/Linux

    - by Tim Lytle
    Update for Ubuntu 10.04: printing should work out of the box; scanning still needs the newer SANE backend. Looking for a known good way to set up an Epson Artisan 800 on Ubuntu specifically, or any Linux box in general. It is a printer/scanner with ethernet/wifi/USB. I'd like to use it as a network printer/scanner, being able to do both from my Windows and Ubuntu machines; however, if it needs to be physically connected to a computer (preferably the Ubuntu machine) that is doable (again, then sharing print/scan functions to the network). Basically, I'm looking for someone who has used this printer/scanner (or similar) in a multi-platform environment to share how they set it up and how well it worked. Updated: A little more information: like most printers (I expect), the documentation basically says "don't use plug-n-play, run our setup CD from your Windows/Mac system" to do anything (even to set it up for network use). I guess that's to make it easy for anyone else to set up, but when you're looking to use it with an OS unsupported by Epson's documentation, you're just stuck on your own. What I was hoping for was someone who could say, "Forget the bundled software, do [this] to set it up on wifi manually, install [this] to connect to the scanner from [os], printing works with [this] driver - at least that's how I set it up." I will (and have so far) use the information here, and post my own setup when I'm done, if there's no one else out there with that experience.


  • Linux/Unix in Windows

    - by Dmitriy Nagirnyak
    Hi, what would be the best way to get a full-blown Unix/Linux bash inside Windows? I don't mean a virtual machine, but rather only the terminal with mounted NTFS drives. This way I could use the power of Unix/Linux while still being on Windows. The things I want to be able to do from the terminal:
    - Package management (apt-get in Debian).
    - SSH.
    - File operations (including grub and similar).
    - Run a web server (Apache, nginx) for testing purposes.
    - Easy to use: start terminal - Linux is on, end terminal - Linux is shut down.
    - Would be nice to be able to copy-paste from Windows into the terminal and vice versa.
    This really feels like a separate OS and I realize that a VM would probably be the best thing, but I guess it should be possible to have a lighter installation. THE NOTE: I cannot just use Linux because I still need to do development on Windows. Also, I am a Linux noobie - just getting started with it - so sorry if I'm asking something obvious/stupid. Thanks, Dmitriy.


  • Can I tell if crashplan has backed up a particular file in a particular state?

    - by Chris Cogdon
    I would like to be able to tell, programmatically, if CrashPlan has backed up a particular file, including the current updates to that file - i.e., that the current contents of the file are backed up. It's relatively easy to tell when CrashPlan last backed up a file: its file name appears in /usr/local/crashplan/log/backup_files.log.0, and with some accuracy I could compare the backup time with the last modification time of the file, but that method appears to be somewhat dubious. A couple of methods I could think of, but I don't know how:
    - Compare the current file to CrashPlan's metadata about that file. This needs knowledge about the format of CrashPlan's "cache" files as well as the hashing system used. This might be achievable through the CLI, but the CLI is just a portal into the GUI, and I need something that's scriptable.
    - Restore the file to a temporary directory and compare it. Unfortunately, there is no CLI to do restores; the GUI is the only way.
    I'll describe what I'm trying to achieve; it would be nice to know how to do the above, even if there are alternative methods for the following. I'm using CrashPlan for continuous backups of my PostgreSQL database, using WAL archives. In the current configuration, the archive command copies the files to an archive directory, which is backed up by CrashPlan. Every so often I manually confirm (or just trust) that a group of WALs are backed up, remove them from the archive directory, and occasionally do a restore through the GUI to ensure I can retrieve current and "deleted" WALs. The xlog directory is backed up too, so I have a good chance of doing a near-full restore even if a particular xlog hasn't been archived by PostgreSQL yet. I'd like to be able to automate this process, which necessitates either confirming the backup status and recency, or automating a restore for comparison purposes. (As a bonus, if the method is trustworthy, I could turn the "archive_command" from "copy to archive directory" into "confirm CrashPlan has backed up the current version", and do away with the archive directory completely.) (And, yes, I'm doing regular pg_dumpall's, in addition to the above.)
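
    For reference, a sketch of the two archive_command arrangements being discussed - the first is the current copy-to-directory setup, the second is the aspiration. check_crashplan_backup.sh is purely hypothetical and stands in for whatever verification method turns out to be scriptable. PostgreSQL treats a non-zero exit status as "not archived yet" and simply retries later, which is what would make the second form safe to experiment with:

      # postgresql.conf -- %p is the path of the WAL segment, %f its file name
      archive_command = 'cp %p /var/backups/wal_archive/%f'

      # hypothetical replacement: exit 0 only once CrashPlan demonstrably holds the segment
      archive_command = '/usr/local/bin/check_crashplan_backup.sh %p %f'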


  • Managing multiple IMAP accounts in Thunderbird

    - by baritoneuk
    I've been using Thunderbird for years without issues with 20+ POP3 accounts. I'm moving over to IMAP, which will enable me to keep copies of the emails locally and on the server whilst keeping everything synchronised. However, I'm looking for the best way to manage multiple IMAP accounts on Thunderbird. Currently I have a filter that copies all the emails into a central inbox and into separate local folders. The reason for this is I go through my inbox daily and delete all emails that don't require any action. I move any emails that require action to my "action" IMAP account folder. This way I can synchronise all the emails that require action across multiple computers (and mobile devices). This technique is my implementation of the GTD or Getting Things Done philosophy. I also copy each email over into separate local folders. The reason I do this is just in case any emails on the IMAP accounts get deleted, or something drastic happens on the server which means I lose all the emails. My business partner has access to some of these emails and still uses POP3 (with "leave copy on server" checked), but I know Thunderbird can sometimes still delete emails off the server. The problem with the above is that Thunderbird gives me the dreaded error dialogue saying that the emails cannot be filtered due to another process. I also find the folder list in Thunderbird hard to manage. Here is a screenshot of part of my folder list - as you can see, it's a bit of a complicated list and not easy to manage. What would be the best way of me managing multiple IMAP accounts whilst allowing me to have copies put in a central folder and emails in local folders? It would be useful to hear whether people think this is even necessary, as perhaps there is a better way. How do people manage multiple IMAP accounts in a way that allows them to keep on top of actionable emails? I'd be interested in how others manage this. I've never used the Thunderbird-based client "Postbox" - does this handle multiple IMAP accounts better?


  • Why won't this script accept any arguments?

    - by Nate Wagar
    I'm trying to write an SVN post-commit hook and, strangely, am getting hung up on what should be the easiest part. The script:

      set REPO="$1"
      set REV="$2"
      set SVNBIN="/opt/CollabNet_Subversion/bin/"
      set SSHBIN="/usr/bin/ssh"
      set HOST="staging.domain.net"
      set timeout=30
      set USERNAME="svn-usr"
      set E_NO_CONNECT=2
      set E_WRONG_PASS=3
      set E_UNKOWN=25
      set CHANGED=`"$SVNBIN"svnlook changed --revision $REV $REPOS`
      echo "Here are changes: $CHANGED" >> /var/svn/repos/www/logs/testing
      echo "Command: $0; Repo: $REPO; Rev: $REV; Total: $#" >> /var/svn/repos/www/logs/testing
      set PROJECT ""

    Yet when I call it, it doesn't seem to be seeing the arguments I pass to it:

      /var/svn/repos/www/logs> sudo ../hooks/post-commit /var/svn/repos/www 33
      svnlook: missing argument: --revision
      Type 'svnlook help' for usage.
      /var/svn/repos/www/logs> cat testing
      Here are changes:
      Command: ../hooks/post-commit; Repo: ; Rev: ; Total: 1

    This is on a Solaris 10 SPARC box. I'm a bit of a script newbie, but shouldn't this be really easy??
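
    For what it's worth, the output ("Total: 1", empty Repo/Rev) is exactly what the csh-style "set NAME=value" lines produce when the hook is interpreted by a Bourne-type shell, where set replaces the positional parameters instead of assigning variables; there is also a $REPOS/$REPO mismatch on the svnlook line. A hedged sh rewrite, untested on Solaris 10:

      #!/bin/sh
      REPO="$1"
      REV="$2"
      SVNBIN="/opt/CollabNet_Subversion/bin"
      LOG="/var/svn/repos/www/logs/testing"

      CHANGED=`"$SVNBIN"/svnlook changed --revision "$REV" "$REPO"`   # $REPO, not $REPOS
      echo "Here are changes: $CHANGED" >> "$LOG"
      echo "Command: $0; Repo: $REPO; Rev: $REV; Total: $#" >> "$LOG"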


  • How (in)secure are cell phones in reality?

    - by Aron Rotteveel
    I was recently re-reading an old Wired article about the Kaminsky DNS vulnerability and the story behind it. In this article there was a quote that came across as a little exaggerated to me: "The first thing I want to say to you," Vixie told Kaminsky, trying to contain the flood of feeling, "is never, ever repeat what you just told me over a cell phone." Vixie knew how easy it was to eavesdrop on a cell signal, and he had heard enough to know that he was facing a problem of global significance. If the information were intercepted by the wrong people, the wired world could be held ransom. Hackers could wreak havoc. Billions of dollars were at stake, and Vixie wasn't going to take any risks. When reading this I could not help but feel like it was a bit blown up and theatrical. Now, I know absolutely nothing about cell phones and the security problems involved, but to my understanding, cell phone security has improved quite a bit over the past few years. So my question is: how insecure are cell phones in reality? Are there any good articles that dig a bit deeper into this matter?


  • Mercurial internal Setup on Windows 7 - Exception happened during processing of request from ...

    - by Sad0w1nL1ght
    Hi, I have 1 central repository and many local ones. On my machine I have a local and a central repository too. I can clone/commit/update/push/pull very easily between the local and central repository on my local machine, but when I want to make a clone from another machine it gets an error:

      listening at http://MyLocalMachine:8000/ (bound to *:8000)
      ----------------------------------------
      Exception happened during processing of request from ('192.168.0.194', 49319)
      Traceback (most recent call last):
        File "SocketServer.pyc", line 558, in process_request_thread
        File "SocketServer.pyc", line 320, in finish_request
        File "mercurial\hgweb\server.pyc", line 47, in __init__
        File "SocketServer.pyc", line 615, in __init__
        File "BaseHTTPServer.pyc", line 329, in handle
        File "BaseHTTPServer.pyc", line 323, in handle_one_request
        File "mercurial\hgweb\server.pyc", line 79, in do_GET
        File "mercurial\hgweb\server.pyc", line 70, in do_POST
        File "mercurial\hgweb\server.pyc", line 63, in do_write
        File "mercurial\hgweb\server.pyc", line 127, in do_hgweb
        File "mercurial\hgweb\hgweb_mod.pyc", line 86, in __call__
        File "mercurial\hgweb\hgweb_mod.pyc", line 118, in run_wsgi
      ErrorResponse
      ----------------------------------------

    The command line which started the central repo:

      hg serve -R TT -n TTZoli

    The command from the remote machine for cloning:

      hg clone --pull http://MyLocalMachine:8000/TT

    Config for the central repo (the test user is the one I'm trying to access the central repo with):

      [ui]
      username = MyLocalUserName
      username = test <[email protected]>
      [web]
      push_ssl = false

    Config for the remote repo:

      [ui]
      username = test <[email protected]>
      [web]
      push_ssl = false

    I'm not sure if it's relevant, but my firewall is turned off on both machines, and the files in the /hg folder are not versioned on the server, except hgignore. Could you please suggest some ideas? What could be the problem? Thanks in advance!


  • How would I setup iMail to forward a user's mail to another service w/o leaving a copy locally?

    - by Scott Mayfield
    I have an iMail 2006 server installation in which I have a particular user that has several aliases that all point to a single user (me, for the record). I've been copying all of my mail to GMail and reading it there, but it annoys me that I have to go back weekly and log into my mail account on iMail and delete between 6 and 10 thousand copies of messages I've already received, in order to keep my mailbox from filling up (yes, I have it set with no quota, but I consider it bad form to just let the box grow indefinitely). I've got the copying setup via an inbound user rule, but I'm wondering how to accomplish a "copy and delete" rule. The manual isn't clear on what happens with multiple matching rules (will they be processed in order, or is it a first match situation?) and there isn't a means to combine multiple actions into a single rule. If I use the "forward" action, I THINK that it's going to screw up all the sender information once the mail reaches my GMail account and show it as coming from me instead of the original senders (can anyone confirm that this is accurate?) An easy answer would be to delete my user account entirely, replace it with an alias that maps to my GMail account, but then I would lose my ability to log into the system for admin duties. So that leads me to creating a second, lesser known account for admin use, but since it's a real account, sooner or later I'm going to get mail sent to it and I'll be back to the same situation of having a user account that doesn't get emptied periodically. I imagine I can set the quota to 0 MB to cause all incoming mail to my admin account to bounce, or setup an inbound rule to bounce everything, but this is starting to sound kludgy to me. Does anyone know of a more direct work around to copying a user's incoming mail to an outside server and then deleting the local copy w/o removing their account entirely? Or is this just wishful thinking? Thanks in advance. Scott


  • Accessing Virtual Host from outside LAN

    - by Ray
    I'm setting up a web development platform that makes things as easy as possible to write and test all code on my local machine, and sync this with my web server. I set up several virtual hosts so that I can access my projects by typing in "project" instead of "localhost/project" as the URL. I also want to set this up so that I can access my projects from any network. I signed up for a DynDNS URL that points to my computer's IP address. This worked great from anywhere before I set up the virtual hosts. Now when I try to access my projects by typing in my DynDNS URL, I get the 403 Forbidden error message, "You don't have permission to access / on this server." To set up my virtual hosts, I edited two files - hosts in the system32/drivers/etc folder, and httpd-vhosts.conf in the Apache folder of my WAMP installation. In the hosts file, I simply added the server name to associate with 127.0.0.1. I added the following to the httpd-vhosts.conf file:

      <VirtualHost *:80>
          ServerAdmin webmaster@localhost
          DocumentRoot "c:/wamp/www/ladybug"
          ServerName ladybug
          ErrorLog "logs/your_own-error.log"
          CustomLog "logs/your_own-access.log" common
      </VirtualHost>

      <VirtualHost *:80>
          ServerAdmin webmaster@localhost
          DocumentRoot "c:/wamp/www"
          ServerName localhost
          ErrorLog "logs/localhost-error.log"
          CustomLog "logs/localhost-access.log" common
      </VirtualHost>

    Any idea why I can't access my projects by typing in my DynDNS URL? Also, is it possible to set up virtual hosts so that when I type in http://projects from a random computer outside of my network, I access url.dyndns.info/projects (a.k.a. my WAMP projects on my home computer)? Help is much appreciated, thanks!
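
    One hedged guess about the 403: name-based virtual hosts are chosen by the Host header, so a request arriving under the DynDNS name matches neither ServerName and falls back to the first <VirtualHost> block, and WAMP's stock configuration typically only allows access from 127.0.0.1. Adding the DynDNS name as a ServerAlias and opening up the directory is one way to test that theory - your-name.dyndns.info is a placeholder:

      NameVirtualHost *:80

      <VirtualHost *:80>
          DocumentRoot "c:/wamp/www"
          ServerName localhost
          ServerAlias your-name.dyndns.info       # external requests under the DynDNS name land here
          <Directory "c:/wamp/www">
              Order Allow,Deny
              Allow from all                      # Apache 2.2 syntax; WAMP defaults are often 127.0.0.1-only
          </Directory>
      </VirtualHost>

    As for typing http://projects from an outside machine: a bare name like that only resolves on computers whose hosts file defines it, so from elsewhere it has to be the full url.dyndns.info/projects (or a real subdomain pointed at the same address).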


  • How to edit known_hosts when several hosts share the same IP and DNS name?

    - by Frédéric Grosshans
    I regularly ssh into a computer which is a dual-boot OS X / Linux machine. The two OS instances do not share the same host key, so they can be seen as two hosts sharing the same IP and DNS name. Let's say the IP is 192.168.0.9, and the names are hostname and hostname.domainname. As far as I understand, the solution for being able to connect to the two hosts is to add them both to the ~/.ssh/known_hosts file. However, that is easier said than done, because the file is hashed and probably has several entries per host (192.168.0.9, hostname, hostname.domainname). As a consequence, I get the following warning:

      Warning: the ECDSA host key for 'hostname' differs from the key for the IP address '192.168.0.9'

    Is there an easy way to edit the known_hosts file while keeping the hashes? For example, how can I find the lines corresponding to a given hostname? How can I generate the hashes for some known hosts? The ideal solution would allow me to connect seamlessly to this computer with ssh, no matter whether I call it 192.168.0.9, hostname or hostname.domainname, nor whether it uses its Linux host key or its OS X host key. However, I still want to receive a warning if there is a real man-in-the-middle attack, i.e. if another key than these two is used.
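
    ssh-keygen can search and edit a hashed known_hosts file directly, and ssh-keyscan can append new hashed entries; for the two-keys-one-address part, one common arrangement is an ~/.ssh/config alias per OS, each pinned to its own stored key via HostKeyAlias. A sketch - the alias names are made up:

      ssh-keygen -F hostname.domainname                 # print the matching (hashed) known_hosts lines
      ssh-keygen -R hostname.domainname                 # remove them; repeat for 'hostname' and 192.168.0.9
      ssh-keyscan -H hostname.domainname >> ~/.ssh/known_hosts   # re-add the current key, hashed

      # ~/.ssh/config -- 'ssh macside' and 'ssh linuxside' each verify against their own stored key
      Host macside
          HostName 192.168.0.9
          HostKeyAlias hostname-osx
      Host linuxside
          HostName 192.168.0.9
          HostKeyAlias hostname-linux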


  • Is there the equivalent of cloud computing for modems?

    - by morpheous
    I asked this question on SF, and someone recommended that I ask it here (I don't think I have enough points to move a question from SF to SO - and in any case, I don't know how to do it - so here is the question again): I am interested in the concept of PaaS (platform as a service). However, all talk about SaaS/PaaS seems to focus on only the computer itself - not its peripherals. Is it possible to 'outsource' modems as a resource - so that an app running remotely can pump data to a modem in the cloud? As a bit of background to the question, a group of us are thinking of starting a company that offers similar services to companies like Twilio etc. - but I want to 'outsource' both the computing hardware (that's PaaS - the easy bit) and the modems (that's what I can't seem to find any info on). Does anyone know if modems can be bundled as part of a PaaS service? Alternatively, is there a way that an application running on one computer can communicate with (i.e. pump data to) a remote modem residing on another machine? I assume I can come up with some protocol over UDP or TCP - but there is no point reinventing the wheel if such a protocol already exists (or if some open source software allows one to do this). Any suggestions on how to solve this problem?


  • Can I disable this Windows (XP) Security Warning?

    - by FumbleFingers
    I recently reformatted my hard drive and reinstalled Windows XP (I know I'll have to take the plunge and commit to Win8 "real soon, now", but I'm just not quite ready for the upheaval yet! :) I used to use WinRar (and later, when I got fed up with the "nag" messages, 7-Zip), but I haven't installed either of them in my new configuration, so I must be using the built-in XP facility when I open *.zip files. For years, I've been opening downloaded *.zip archives and using "drag & drop" to copy to a File Explorer window open on the folder where I want the files to end up (usually, My Documents\Downloads). But now I find that when I "drop" the file(s), I get a pop-up Windows Security Warning saying: "Are you sure you want to copy or move files to this folder? You should only move or copy files from locations that you trust." Can anyone explain why I'm getting this message, and is there any (reasonably easy, please! :) way to suppress it? Since I've already put the *.zip file on my computer, it seems a bit late to ask if I trust it. (Thus far, the files in question have always been plain text, so it's not a matter of dodgy programs, etc.) Apologies for the low quality image - I don't have the appropriate tools or knowledge to do any better, and it doesn't help that my "PrtScr" screen capture has included what would have been on my second monitor (TV) if it had been turned on. If you can't read it, trust me - I have copied the text verbatim.

