Search Results

Search found 31417 results on 1257 pages for 'site structure'.

Page 475/1257

  • IIS 7: One Page Works, All Others Fail With "Error code: ssl_error_rx_record_too_long"

    - by Michael
    On my local machine, I have a second site bound to port 81. Within that site is one page which I can browse from other machines with no problems, but every other page fails with "Error code: ssl_error_rx_record_too_long". Each of the failing pages (as well as the lone working page) works over localhost. So, from any machine, local or remote:

        http://cmwmach01.mydomain.biz:81/RD/SS/SS.aspx (works)
        http://cmwmach01:81/RD/SS/SS.aspx (works)
        http://cmwmach01.mydomain.biz:81/RD/POV/SC.aspx (fails - gets changed to https)
        http://cmwmach01:81/RD/POV/SC.aspx (fails - gets changed to https)

    Everything works with localhost (locally, of course). I've tagged this question with SSL because, at one point, the site would warn about a certificate issue (maybe it was self-signed at one point?), but now it doesn't. While there may be a problem there, I don't see how it could cause what I'm seeing, though I'm out of my depth here. I can't figure out why that one page works (or why the others don't), so that I can make them all work. Any ideas?

    Read the article

  • Apache on CentOS 5.9 VM serves my optimized images corrupted (but my Mac doesn't)

    - by Robert K
    I'm using a Vagrant VM to mirror the client's environment as closely as I can. As part of our build process we do no optimization of assets early on; that comes as we're ready to take a site live. Needless to say, this issue is beginning to worry me as we need to take the site live very soon. I use ImageOptim to automate optimization of image assets, which runs a whole series of tools (Zopfli, PNGOUT, OptiPNG, AdvPNG, PNGCrush), and I always set the optimizations to their maximum setting. After optimization, my PNGs come through corrupted. What's weird is that if I serve the same file through my Mac's copy of Apache, not through Vagrant, the image loads fine. In fact, the only time it's ever corrupt is when the image is served from the Vagrant VM and its install of Drupal. All optimized JPEGs display only the first ~20% of the image, and PNGs, depending on the image, may show either a portion of the image or a "progressive"-style corruption. The browser itself makes no difference: the same browser will serve an uncorrupted image from my Mac's Apache instance and a corrupt image from the VM. Even when I disable all PNG optimizations except PNGCrush and the removal of the PNG metadata, the image is served corrupted. I'm optimizing JPEG images with JPEGmini. The server is running CentOS 5.9, Apache 2.2.3-85, PHP 5.3.3, and Drupal 7. As best as I can tell the error lies somewhere within the VM, either with Apache or perhaps the network stack, and the tools that optimize the compression of the PNGs and JPEGs are what trigger it. I've already determined that the .htaccess file isn't interfering with how the images load. What should I try to troubleshoot this?
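
    One thing worth ruling out in this kind of setup (an assumption on my part, not something confirmed in the question) is Apache's sendfile handling, which is known to serve stale or truncated files from VirtualBox/Vagrant shared folders. A minimal sketch of the change, assuming the site's config on the VM is editable:

        # in httpd.conf or the relevant <VirtualHost>/<Directory> block
        EnableSendfile Off

    After restarting Apache (service httpd restart on CentOS 5), re-test one of the optimized images; if it still arrives truncated, the problem lies elsewhere.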

    Read the article

  • How can I get write permission for the Web (Inetpub) directory on a new Win 7 machine?

    - by marcipollo
    I mirror my Web site on my laptop, and am trying to move the mirror site to a new laptop. I copied the files to the Inetpub directory and can view them perfectly, but they are read-only (the check-mark is grey, not black), and I cannot change the permission. When I un-check the read-only attribute on the Inetpub directory and click "Apply", it displays a dialog box stating that I need administrative permission to change the attributes (I am logged in as an administrator). When I click "Continue", it pops up another dialog box saying access is denied to the attributes of the file c:\inetpub\custerr\en-us\500-100.asp. That dialog box has an "Ignore" button, and if I click that, it appears to work through the directory tree setting the permissions. It leaves all of the files (the leaves) set to read-write, but the directories remain read-only. I am using 64-bit Windows 7, and I stopped the IIS service while doing all of this. Might it have something to do with the fact that I copied the files from a different machine in the workgroup (my old laptop)?
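
    Worth noting, as a general Windows behaviour rather than anything from this question: the read-only box for a folder always shows as filled/grey because it only reflects the state of the files inside, so the directories "staying read-only" may be cosmetic. If the real problem is NTFS permissions picked up during the copy, a hedged sketch of granting modify rights from an elevated command prompt, assuming the site lives under C:\inetpub\wwwroot and the account is called username:

        rem grant Modify on the folder, subfolders and files, recursing through the tree
        icacls "C:\inetpub\wwwroot" /grant username:(OI)(CI)M /T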

    Read the article

  • Where does a "Redirect permanent" rule need to be added?

    - by eli1128
    I want to redirect my web site's HTTP requests to HTTPS. My web site is https://test, my Apache is version 2.4, the SSL configuration (ssl.conf) is in a file separate from httpd.conf, and I am not using a .htaccess file, so where should I append the rule? I have tried it in both files but it didn't work:

        Redirect permanent / https://test

    Should that go in my httpd.conf or in ssl.conf, or did I miss something else? I prefer to use Redirect over Rewrite. Rewrite.log:

        10.10.86.1 - - [05/Apr/2012:15:10:19 --0700] [test/sid#7ce00][rid#277448/initial/redir#1] (2) init rewrite engine with requested uri /error/HTTP_BAD_REQUEST.html.var
        10.10.86.1 - - [05/Apr/2012:15:10:19 --0700] [test/sid#7ce00][rid#277448/initial/redir#1] (3) applying pattern '^(.*)$' to uri '/error/HTTP_BAD_REQUEST.html.var'
        10.10.86.1 - - [05/Apr/2012:15:10:19 --0700] [test/sid#7ce00][rid#277448/initial/redir#1] (4) RewriteCond: input='off' pattern='!=on' => matched
        10.10.86.1 - - [05/Apr/2012:15:10:19 --0700] [test/sid#7ce00][rid#277448/initial/redir#1] (2) rewrite /error/HTTP_BAD_REQUEST.html.var -> https://test/error/HTTP_BAD_REQUEST.html.var[QSA,R=301,L]
        10.10.86.1 - - [05/Apr/2012:15:10:19 --0700] [test/sid#7ce00][rid#277448/initial/redir#1] (2) implicitly forcing redirect (rc=302) with https://test/error/HTTP_BAD_REQUEST.html.var[QSA,R=301,L]
        10.10.86.1 - - [05/Apr/2012:15:10:19 --0700] [test/sid#7ce00][rid#277448/initial/redir#1] (1) escaping https://test/error/HTTP_BAD_REQUEST.html.var[QSA,R=301,L] for redirect
        10.10.86.1 - - [05/Apr/2012:15:10:19 --0700] [test/sid#7ce00][rid#277448/initial/redir#1] (1) redirect to https://test/error/HTTP_BAD_REQUEST.html.var%5bQSA,R=301,L%5d [REDIRECT/302]
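
    Since the question is about where a plain Redirect can live: Redirect (mod_alias) is valid in server config, virtual host, directory and .htaccess context, and the usual pattern is to put it inside the port-80 virtual host so only plain-HTTP requests are redirected. A minimal sketch, assuming the HTTP site is defined as a *:80 VirtualHost in httpd.conf and "test" is the hostname from the question:

        <VirtualHost *:80>
            ServerName test
            # send every plain-HTTP request to the HTTPS site
            Redirect permanent / https://test/
        </VirtualHost>

    Putting the same line in ssl.conf's *:443 VirtualHost would instead create a redirect loop, since requests arriving there are already HTTPS.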

    Read the article

  • Successfully concatenating multiple videos

    - by wiseguydigital
    My mission is to create videos out of old web slideshows. To start with I have JPEGs and audio files that worked as Flash slideshows in an old system, structured like this:

        Audio structure
        my_audio_1.mp3 (this file is a 3 second mp3 of silence)
        my_audio_2.mp3
        my_audio_3.mp3
        my_audio_4.mp3
        etc... roughly 30 mp3s per slideshow

        Image structure
        my_image_1.jpg (this acts as the opening slide)
        my_image_2.jpg
        my_image_3.jpg
        my_image_4.jpg
        etc... roughly 30 images per slideshow

    As there are almost 100 slideshows that must be converted to video, I have created a web-based interface using PHP to automate the process; it sits on a local system and attempts to combine the files using shell_exec(). The process uses the following workflow:

      1. Loop through each slide and make an avi or mpeg. So for instance my_mini_video_2.avi would be a video that consists of my_image_2.jpg and has a soundtrack of my_audio_2.mp3. This slide would last the length of my_audio_2.mp3.
      2. Join / stitch / concat all of the mini videos to create the final video (using a combination of cat and either mencoder or ffmpeg; I have also tried avimerge but to no avail).
      3. Transcode the new 'master' video to various formats such as flv etc.

    I thought this would be simple and have been close on many occasions, but it still won't work. I can't get past stage 2 because I can't get a perfect 'master' video. I have now experimented with mencoder and ffmpeg and seem to have been through every combination I can think of. The problem is that the audio and visuals never sync, no matter what I try. I have even tried creating audio-less mini videos, joining the MP3s into one long MP3 using both cat and mp3wrap and then assigning the new long MP3 as the audio track, but this always produces either a very short file or a badly slowed-down file and makes the female voiceover sound like a male boxer!!! There appears to be no problem at all with the original files. Does anybody have any experience in producing a video successfully from the same kind of starting point? Or any ideas on what I may be doing wrong? As an example: if I create silent mini-videos and stitch them together into 'temp-master.mpg', then join the MP3s into a single MP3 called 'temp-master-audio.mp3', the audio file's duration is 09:10 and the video file's duration is 08:35. They should be the same, and the audio ends up sounding sloooow. I haven't posted code as I have written lots and lots of combinations.
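
    For what it's worth, a minimal sketch of one way this pipeline is often built with ffmpeg alone (the filenames and codec choices here are assumptions, not taken from the question). Each slide becomes its own clip lasting exactly as long as its audio, and the clips are then joined with ffmpeg's concat demuxer rather than cat:

        # one clip per slide: still image looped for the duration of its mp3
        ffmpeg -loop 1 -i my_image_2.jpg -i my_audio_2.mp3 -c:v libx264 -tune stillimage -pix_fmt yuv420p -c:a aac -shortest slide_2.mp4

        # list.txt contains one line per clip, in order:
        #   file 'slide_1.mp4'
        #   file 'slide_2.mp4'
        ffmpeg -f concat -i list.txt -c copy temp-master.mp4

    Encoding each slide together with its own audio and only stream-copying at the concat stage avoids the drift that appears when a separately joined MP3 is laid over a separately joined video track.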

    Read the article

  • Is it possible to modify/rebuild an rpm without the srpm?

    - by warren
    I have an rpm for which I need to change the preinstall scriptlet for testing. However, I do not have the SRPM from which it was built. Is it possible to change the scriptlet and/or rebuild the rpm without having the SRPM? If so, how? I've tried using Midnight Commander (mc) to open the rpm as a directory structure and edit the contents, but even with 444 permissions it won't let me save any changes.
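
    Not an answer from the original thread, but one commonly suggested route for exactly this situation is rpmrebuild, which reconstructs a spec file and package from an existing .rpm and lets you edit the scriptlets along the way. A sketch, assuming the package file is called foo.rpm:

        # inspect the existing scriptlets first
        rpm -qp --scripts foo.rpm

        # open the generated spec (including %pre/%post scriptlets) in $EDITOR,
        # then build a new rpm from the edited copy
        rpmrebuild --edit-whole --package foo.rpm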

    Read the article

  • Copy all text in a LibreOffice Draw drawing

    - by harbichidian
    I have a large flowchart, created in LibreOffice Draw (3.3.1), that I would like to copy all of the text from. I do not need, nor care about, the order or structure; I just need all of the text from within the blocks. I can't seem to find any way to export without turning it into an image, and none of the "Paste Special" options allow me to get unformatted text. Is there a way to do this without retyping everything?
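
    If nothing inside Draw itself works, one workaround (an assumption about the approach, not something from the original question) relies on the fact that an .odg file is a zip archive whose text lives in content.xml, so the text can be pulled out on the command line. A rough sketch, assuming the file is called flowchart.odg:

        # dump content.xml and crudely strip the XML tags, leaving the block text
        unzip -p flowchart.odg content.xml | sed -e 's/<[^>]*>/ /g' | tr -s ' '

    The result has no order or formatting to speak of, which matches the "I just need the text" requirement.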

    Read the article

  • Is my current htaccess setting hurting SEO?

    - by user656002
    I have a site that I redirect to https. I do this to leverage wildcard SSL for my password-protected pages. Everything seems to work fine in testing: whether you type in http or www, you always get redirected to the SSL https URL. That said, I have about 200-300 external backlinks, many high quality, yet Google Webmaster Tools (along with SEOMoz) shows I have just 4... Huh? I'm embarrassed to say I just discovered this. This has led me to hypothesize that maybe my settings in .htaccess are messed up, so Google isn't recognizing a link because it's recorded on another site as http instead of https. Maybe? At any rate, here is my simple .htaccess setting for 301-redirecting www to http (the https redirect must be done inside the virtual host file, I think). I don't have anything in the .htaccess file for https:

        RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
        RewriteRule ^(.*)$ http://example.com/$1 [L,R=301]

    Like I said, everything works fine for the redirect over https, so I'd rather not screw up what works. On the other hand something is very wrong with Google finding all my backlinks, so I need to fix something... I'm just wondering whether Google isn't picking up my backlinks from other websites that record me as http because I'm at https, or whether Google doesn't care and it's some other issue. Am I barking up the right tree? If so, any quick fixes? Thanks as always!
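
    Without claiming this is the cause of the missing backlinks, the kind of chained redirect described (www to http in .htaccess, then http to https in the vhost) is often collapsed into a single hop at the .htaccess level, so every variant lands on the canonical https host in one 301. A sketch, with example.com standing in for the real domain as it does in the question:

        RewriteEngine On
        # anything that is not already https, or that uses the www host, goes to the canonical https URL
        RewriteCond %{HTTPS} !=on [OR]
        RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
        RewriteRule ^(.*)$ https://example.com/$1 [L,R=301]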

    Read the article

  • Domain-joining debate for Outlook 2010 with Exchange 2007 on Windows SBS 2008, for a laptop user who will travel much of the time

    - by user71195
    I'm basically debating whether or not to join the domain on a laptop, and was wondering if anyone has had a similar experience. If the computer were staying in the office, it's a no-brainer: join the domain. In this case I have a user who will come into the office a few days a week and work remotely the rest of the time. There is a working VPN using an OpenVPN client/server, but it's not site-to-site. My knee-jerk reaction is to not join the domain, so that the user can have one profile that they always use. In this configuration, should Outlook work properly with the user's domain account, and should the shared calendar still work (at least once inside the VPN)? My concern with joining the domain would be the inability to log in to it when elsewhere. Is there maybe a way around this with caching or something? Would creating a second local login make sense for a user like this in any way? If so, why not just skip the domain join to begin with? Any thoughts on or experiences with this would be appreciated.

        Laptop OS: Windows 7 (not purchased yet; Pro if domain join is needed)
        Server: SBS 2008, Exchange 2007
        Outlook version: 2010

    Thanks for any help, Mike
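
    On the "way around this with caching" question: by default a domain-joined Windows 7 machine caches the last several domain logons, so the user can log in with the same domain profile while away from the office and off the VPN. This is controlled by the CachedLogonsCount value (default 10); a sketch of checking it, assuming group policy has not changed it:

        rem default is 10 cached logons; 0 would disable offline domain logon
        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v CachedLogonsCount

    This is standard Windows behaviour rather than something stated in the question, but it is usually what tips this debate toward joining the domain.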

    Read the article

  • GIT and Django Projects

    - by Garfonzo
    I have two servers, a Dev server and a Production server. The Production server runs a live Django site, while the Dev server has a copy of the Django project. I use the Dev server to work on the Django site, make improvements, fix bugs, etc. Once I am satisfied with how the Dev version is working, I move the whole Django directory from the Dev server and replace the same directory on the Production server. The two servers are not on the same LAN, so the process is not straightforward. There are a few issues with this so far:

      - Moving the whole directory is laborious and time-consuming.
      - If I only change a few files, it is even more tedious to replace just those files than the whole directory, since the project is getting fairly large and I worry that I'll miss something.
      - I often run into permission issues after I've moved things.
      - It's super inefficient, and, due to lack of time, I haven't bothered figuring out a new method.

    Now it's just getting out of hand and I need to address the situation. I am thinking I need to move to a Git repository for this process, but my question is how I would set this all up. Do I host the repository on the Production server, pull from the Dev server, do work, then commit? Then would I pull from the Production server (the same server the repo is hosted on) to run the current working version? Do I host the repo on the Dev server, pulling from the same server to do work on the repo, then pull a working version onto the Production server? Should I be hosting the repo on a different server than the Production server and the Dev server (a third server)? Are there any special considerations with Django and repos that I need to worry about? Thanks for the help :)
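
    As a rough illustration of one common layout (the host and path names here are hypothetical, not taken from the question): the repository lives in a bare clone that neither machine serves from, the Dev server pushes to it, and the Production server pulls a known-good state from it.

        # on whichever host holds the repo (could be Dev, Production, or a third box)
        git init --bare /srv/git/mysite.git

        # on the Dev server: work, commit, push
        git clone ssh://repohost/srv/git/mysite.git
        cd mysite
        git add -A
        git commit -m "Fix bug in checkout view"
        git push origin master

        # on the Production server (one-time): clone next to where the site runs
        git clone ssh://repohost/srv/git/mysite.git /var/www/mysite
        # then for each release
        cd /var/www/mysite && git pull origin master

    Keeping the bare repo separate from the live checkout means a push never touches Production until you deliberately pull, which also sidesteps most of the permission surprises that come from copying directories around.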

    Read the article

  • Rsync to a WebDAV filesystem on OS X copies all files whether they have changed or not

    - by MarceloR
    I am trying to sync my (Mac) desktop with an iPad and an iPhone. OS X mounts WebDAV as a native filesystem, but every sync results in all files in my directory structure being copied again. This occurs whether I use rsync -a or even a simple rsync -r. Various iPhone OS apps use a WebDAV server on the device to transfer files; this happens with several apps I use, including GoodReader.
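
    A likely factor (my assumption, not confirmed in the question) is that the WebDAV mount cannot faithfully store the timestamps and permissions that rsync -a tries to preserve, so the quick mtime/size check fails and everything looks changed on the next run. Two hedged variants that relax that check (the paths are placeholders for wherever the desktop folder and the mounted WebDAV volume actually live):

        # consider files unchanged when the size matches, ignoring timestamps
        rsync -r --size-only ~/Documents/ /Volumes/GoodReader/Documents/

        # or tolerate small timestamp drift instead of ignoring it entirely
        rsync -r --modify-window=2 ~/Documents/ /Volumes/GoodReader/Documents/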

    Read the article

  • Why should I use a puppet parametrized class?

    - by robbyt
    Generally when working with complex Puppet modules, I will set variables at the node level or inside a class, e.g.:

        node 'foo.com' {
          $file_owner = "larry"
          include bar
        }

        class bar {
          $file_name = "larry.txt"
          include do_stuff
        }

        class do_stuff {
          file { $file_name:
            ensure => file,
            owner  => $file_owner,
          }
        }

    How/when/why do parametrized classes help in this situation? How are you using parametrized classes to structure your Puppet modules?
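
    For comparison, a minimal sketch of the same three classes rewritten with parameters (the names come from the snippet above; the wiring is my own illustration). The values are passed explicitly instead of being looked up through dynamic scope, so each class declares exactly what it needs:

        class do_stuff ($file_name, $file_owner) {
          file { $file_name:
            ensure => file,
            owner  => $file_owner,
          }
        }

        class bar ($file_owner) {
          class { 'do_stuff':
            file_name  => 'larry.txt',
            file_owner => $file_owner,
          }
        }

        node 'foo.com' {
          class { 'bar':
            file_owner => 'larry',
          }
        }

    The payoff is that nothing in do_stuff depends on variables happening to be set further up the node/class chain, which is exactly the fragility the original version has.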

    Read the article

  • Designing a persistent asynchronous TCP protocol

    - by dogglebones
    I have got a collection of web sites that need to send time-sensitive messages to host machines all over my metro area, each on its own, generally dynamic, IP. Until now, I have been doing this the way of the script kiddie:

      - Each host machine runs an (s)FTP server, or an HTTP(s) server, and correspondingly has a certain port opened up by its gateway.
      - Each host machine runs a program that watches a certain folder and automatically opens or prints or exec()s when a new file of a given extension shows up.
      - Dynamic IP addresses are accommodated using a dynamic DNS service.
      - Each web site does cURL or fsockopen or whatever and communicates directly with its recipient as needed.

    This approach has been surprisingly reliable, however obvious issues have come up and the situation needs to be addressed. As stated, these messages are time-sensitive and failures need to be detected within minutes of submission by end users. What I'm doing is building a messaging protocol. It will run on a machine and connection in my control. As far as the service is concerned, there is no distinction between web site and host machine: there is only one device sending a message to another device. So that's where I'm at right now. I've got a skeleton server and a skeleton client. They can negotiate high-quality authentication and encryption. The (TCP) connection is persistent and asynchronous, and can handle delimited (i.e., read until \r\n or whatever) as well as length-prefixed (i.e., read exactly n bytes) messages. Unless somebody gives me a better idea, I think I'll handle messages as byte arrays. So I'm looking for suggestions on how to model the protocol itself, at the application level. I'll mostly be transferring XML and DLM type files, as well as control messages for things like "handshake" and "is so-and-so online?" and so forth. Is there anything really stupid in my train of thought? Or anything I should read about before I get started? Stuff like that -- please and thanks.
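
    Purely as an illustration of the "length-prefixed byte array with a message type" framing the question describes (the field sizes and the choice of Python are my own assumptions), a minimal sketch:

        import struct

        def pack_message(msg_type: int, payload: bytes) -> bytes:
            # 1-byte type, 4-byte big-endian payload length, then the payload itself
            return struct.pack("!BI", msg_type, len(payload)) + payload

        def read_message(recv_exact):
            # recv_exact(n) must return exactly n bytes from the socket
            header = recv_exact(5)
            msg_type, length = struct.unpack("!BI", header)
            return msg_type, recv_exact(length)

    With a fixed header like this, control messages ("handshake", "is so-and-so online?") and file transfers (XML/DLM payloads) can share one connection, and either side can detect a stalled peer by timing out on the header read.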

    Read the article

  • Enabling `mod_rewrite` in Apache, permissions issues

    - by rudolph9
    I am attempting to enable mod_rewrite on the Apache2 web server installed with Mac OS X 10.7.4, following these instructions, ultimately using the configuration to host CakePHP applications. I run into permissions issues accessing the site via a web browser when I change the directory block associated with the CakePHP site in /etc/apache2/users/username.conf from:

        <Directory "/Users/username/Sites/">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride none
            Order allow,deny
            Allow from all
        </Directory>

    to:

        <Directory "/Users/username/Sites/">
            Options Indexes MultiViews
            AllowOverride none
            Order allow,deny
            Allow from all
        </Directory>

        <Directory "/Users/username/Sites/cakephp_app/">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride all
            Order allow,deny
            Allow from all
        </Directory>

    The .htaccess files are the CakePHP 2.2.2 defaults, as follows:

    /Users/username/Sites/cakephp_app/.htaccess

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteRule ^$ app/webroot/ [L]
            RewriteRule (.*) app/webroot/$1 [L]
        </IfModule>

    /Users/username/Sites/cakephp_app/app/.htaccess

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteRule ^$ webroot/ [L]
            RewriteRule (.*) webroot/$1 [L]
        </IfModule>

    /Users/username/Sites/cakephp_app/app/webroot/.htaccess

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^(.*)$ index.php [QSA,L]
        </IfModule>

    When performing the request via a web browser to http://0.0.0.0/~username/cakephp_app/index.php, the content of the response is:

        Not Found
        The requested URL /Users/username/Sites/cakephp_app/app/webroot/ was not found on this server.
        Apache/2.2.21 (Unix) DAV/2 PHP/5.3.10 with Suhosin-Patch Server at 0.0.0.0 Port 80

    Upon requests to http://0.0.0.0/~username/ and http://0.0.0.0/~username/cakephp_app/, the following are added to /var/log/apache2/error_log:

        [Tue Sep 04 22:53:26 2012] [error] [client 127.0.0.1] File does not exist: /Library/WebServer/Documents/Users, referer: http://0.0.0.0/~username/
        [Tue Sep 04 22:53:26 2012] [error] [client 127.0.0.1] File does not exist: /Library/WebServer/Documents/favicon.ico

    What is causing the issue? Is there a server program, ideally available via Homebrew, which would make hosting CakePHP applications for testing purposes more effective and efficient?
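
    Before digging into the directory blocks, two quick hedged checks that are easy to overlook on a stock OS X Apache (general suggestions, not steps from the question):

        # confirm the rewrite module is actually loaded
        apachectl -M | grep rewrite

        # check whether the LoadModule rewrite_module line is present and uncommented
        grep -n "rewrite_module" /etc/apache2/httpd.conf

    If rewrite_module does not appear in the first command's output, uncomment the LoadModule line and restart Apache (sudo apachectl restart) before revisiting the AllowOverride settings.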

    Read the article

  • Referer is passed from HTTPS to HTTP in some cases... How?

    - by ravisorg
    In theory browsers do not pass referer information from HTTPS to HTTP sites, and in my experience this has always been true. But I just found an exception, and I want to understand why it works so I can use it as well. Search for "what is my referer" on https://www.google.ca/, e.g. https://www.google.ca/search?q=what+is+my+referer. There are a few sites that will show your referer, and they all seem to "work" when they shouldn't. For example, click the www.whatismyreferer.com result. I get:

        Your referer: https://www.google.ca/

    Note that sometimes, rarely, I get "no referer" as the result; go back and click the link again and it'll "work" the next time. This should not happen: www.whatismyreferer.com is a non-HTTPS site, so the referer header should not be passed, but it is. What's going on here, and how can I do the same from my HTTPS site to the HTTP sites I'm linking to?
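
    One mechanism that produces exactly this behaviour (my assumption about what the HTTPS page is doing, not something stated in the question) is the meta referrer policy, which lets an HTTPS page opt back in to sending an origin-only referer to HTTP destinations. A sketch of what that looks like in the linking page's <head>:

        <meta name="referrer" content="origin">

    With that policy the downstream site sees only the scheme and host (e.g. https://www.google.ca/) rather than the full search URL, which matches what the whatismyreferer result shows.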

    Read the article

  • Suggestions for transitioning to new GW/private network

    - by Quinten
    I am replacing a private T1 link with a new firewall device with an IPsec tunnel for a branch office. I am trying to figure out the right way to transition folks at the new site over to the new connection, so that they default to using the much faster tunnel.

        Existing network: 192.168.254.0/24, gw 192.168.254.253 (Cisco router plugged in to the private T1)
        Test network used with the IPsec tunnel: 192.168.1.0/24, gw 192.168.1.1 (pfSense firewall plugged in to the public internet), also plugged in to the same switch as the old network

    There are probably ~20-30 network devices in the existing subnet, about 5 with static IPs. The remote endpoint is already the firewall--I can't set up redundant links to the existing subnet. In other words, as soon as I change the tunnel configuration to point to 192.168.254.0/24, all devices in the existing subnet will stop working because they point to the wrong gateway. I'd like some ability to do this slowly, such that I can move over a few clients and verify the stability of the new link before moving critical services or less tolerant users over. What's the right way to do this? Change the netmask on all of the devices to /16 and update the gateway to point to the new device? Could this cause any problems? Also, how should I handle DNS? The pfSense box is not aware of my Active Directory environment, but if I change DNS to use the local servers, it will result in a huge slowdown as DNS queries will still be routed over the private T1. I need some help coming up with a plan that's not too disruptive but will really let me thoroughly test the stability of the IPsec tunnel before I make the final switch. The AD version is 2008 R2, as are the servers; workstations are mostly Windows XP SP3. I have not configured 192.168.1.0/24 as a site in AD Sites and Services.
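
    As an illustration of the "move a few clients first" idea, borrowing the /16 mask the question itself floats (the client IP and DNS server address below are invented placeholders): a test workstation can keep its address, widen its mask so both gateways are on-link, and point its default route at pfSense while still using the local AD DNS servers, then be reverted if the tunnel misbehaves.

        rem on a Windows XP test client, widen the mask and switch the gateway only
        netsh interface ip set address name="Local Area Connection" static 192.168.254.60 255.255.0.0 192.168.1.1 1
        netsh interface ip set dns name="Local Area Connection" static 192.168.254.10

    This only works because the two subnets share the same layer-2 switch as described; with the /16 mask the client reaches 192.168.1.1 directly while everything else stays untouched.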

    Read the article

  • Changing farm account in Sharepoint 2010

    - by user55709
    After changing the farm account to a domain user account, I get the following error when trying to access the Central Administration page: "Cannot connect to configuration database". After I realized the headache might not be worth it, I decided on a reinstall using the following SharePoint user account guidelines: http://technet.microsoft.com/en-us/library/ee662513.aspx. After getting everything up, I get an "Access denied" error when using the designated farm account under Central Admin > Manage Service Accounts. If it is the farm administrator, why would I not be able to manage service accounts? I am able to access the other parts of the admin site. Also, when logging in with the farm account it lists me as a "system account", not the domain account which I used to log in. Am I missing something, or is this normal behavior? Am I not supposed to log in with the farm account? When I log in with the Setup account (also a domain account) I can access everything with no errors on the site. The only difference between the two accounts is that one, the Setup account, has local admin privileges on the SharePoint farm server; as noted, those privileges are not necessary for the farm account according to the article cited.
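
    For reference on the original "Cannot connect to configuration database" path (an aside, not part of the reinstall the poster chose): the supported way to swap the farm account on SharePoint 2010 is from the command line, so that the service identities and the configuration database permissions are updated together. A sketch, with DOMAIN\spfarm standing in for whatever account is used:

        stsadm -o updatefarmcredentials -userlogin DOMAIN\spfarm -password <password>
        iisreset /noforce

    The "System Account" label shown when logged in as the farm account is expected behaviour, and day-to-day administration is normally done with the setup/admin account instead, which is in line with the separation of accounts the linked TechNet article describes.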

    Read the article

  • What method of MySQL mirroring should I use for this?

    - by user45745
    I'm running a web application hosting service (basically hosting forums for free), and I have two remote servers at my disposal. The code for the application is stored on both servers and isn't a problem, but I'm wondering how to deal with the databases. When someone goes to a site *.example-host.com, they are sent to one of the two servers, and both must be capable of loading the forums from a database. The database must also have write access, for when new members register or post topics etc. The main requirement is speed, but uptime is also important (if a server goes out, the site should still work). I have a few options, but I'm inexperienced and not sure which to go with:

      1. [PHP] Split the forum records 50:50 between the two servers. If a server does not have the record for a requested forum, it can request it from the other server by remote MySQL and load it. This idea sounded okay, until I realised that 50% of the time users would be waiting significantly longer for pages to load. I also realised that if one of the servers went down, half the forums would be inaccessible and registrations would have to be disabled.
      2. [MySQL] Dual master replication. This would attempt to mirror the two databases and sounds perfect, but I've heard that it can be very problematic. I don't know how fast this is.
      3. [MySQL] Use standard replication, distribute read-only queries across both nodes and send read/write queries to the master. This sounds like a good option, but again, I'm not sure about speed. I also don't know what would happen if the master server went down.

    If you have any other suggestions, please post them :)
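
    For option 3, a minimal sketch of what the MySQL side usually looks like (the server IDs, replication user, hostname and log position are placeholders I've invented, not values from the question):

        # master my.cnf
        [mysqld]
        server-id = 1
        log-bin   = mysql-bin

        # slave my.cnf
        [mysqld]
        server-id = 2
        read_only = 1

        -- on the slave, point it at the master and start replicating
        -- (the log file/position come from SHOW MASTER STATUS on the master)
        CHANGE MASTER TO MASTER_HOST='master.example-host.com',
                         MASTER_USER='repl', MASTER_PASSWORD='secret',
                         MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4;
        START SLAVE;

    The application (or a proxy in front of it) then sends writes to the master and reads to either node; if the master dies, the slave has to be promoted manually, which is exactly the gap the question worries about.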

    Read the article

  • How to keep multiple servers in sync file wise?

    - by GForceSys
    I'm currently managing a cluster of PHP-FPM servers, all of which tend to get out of sync with each other. The application that I'm using on top of the app servers (Magento) allows admins to modify various files on the system, but now that the site is in a clustered setup, modifying a file only modifies it on a single instance (one of the app servers) of the various machines in the cluster. Is there an open-source application for Linux that would allow me to keep all of these servers in sync? I have no problem with creating a small VM instance that can listen for changes from the machines to sync. In theory, the perfect application would have small clients that run on each machine to be synced, which would talk to a master server, which would then decide how/what to sync from each machine. I have already examined the possibility of running a centralized file server, but unfortunately my app servers are spread out between EC2 and physical machines, which makes this unfeasible. As there are multiple app servers (some of which are created dynamically depending on the load of the site), simply setting up an rsync cron job is not efficient, as the cron job would have to be modified on each machine to send files to every other machine in the cluster, and that would just be a whole bunch of unnecessary data transfers/ssh connections.

    Read the article

  • Free web gallery installation that can use existing directory hierarchy in filesystem?

    - by user1338062
    There are several different free software gallery projects (Gallery, Coppermine, etc.), but as far as I know each of them creates a copy of imported images in its internal storage, be it a directory structure or a database. Is there any gallery software that would allow keeping the existing directory hierarchy of media files (images, videos) as-is, and just store their metadata in a database? I guess at least various NAS solutions ship with software like this.

    Read the article

  • 7-Zip Command Line Maximum Compression

    - by Steve Robathan
    I am writing a batch file to compress a folder using various archiving applications. I currently use 7-Zip as well, but I set up the parameters manually, and I would like to add 7-Zip to my batch file. The folder concerned has many subfolders and I need to take this into account. What is the command line for the following settings, keeping the folder structure?

        Archive Format: 7z
        Compression Level: Ultra
        Compression Method: LZMA
        Dictionary Size: 512MB
        Word Size: 273
        Solid Archive

    Many thanks
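
    As a hedged sketch of how those GUI settings map onto 7-Zip's command-line switches (MyArchive.7z and C:\MyFolder are placeholder names):

        7z a -t7z -m0=lzma -mx=9 -md=512m -mfb=273 -ms=on MyArchive.7z C:\MyFolder

    -t7z selects the 7z format, -mx=9 is Ultra, -m0=lzma sets the method, -md and -mfb are the dictionary and word size, and -ms=on makes the archive solid; passing the folder itself (rather than C:\MyFolder\*) keeps the subfolder structure in the archive.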

    Read the article

  • Set up layer 2 vlan between 2 data centres

    - by user41679
    Hello, our data centre provider operates 2 sites, and we currently have equipment in one and would like to have equipment in the second. They've told me that they operate a layer 2 VLAN between the 2 sites over a 20 Gbit connection, and that they'd just give me an ethernet cable at each end to connect the locations. At the current site, we have Cisco 2960-48TC-L switches, all the machines are on a 192.168.x.x subnet, and we have Cisco firewalls with which we connect to our internet provider. My question is: what would I need to do to connect the 2 sites? Could I just plug the ethernet cables they provide into the Cisco switches, and have the same switches at the other end? Would I need to set up a separate internal network on the other side and connect both through the firewalls? Would the Cisco switches need special configuration? We expect to maintain a number of connections between the 2 sites, and each site would have its own internal DNS name like dc1.xx.com. Sorry if I'm being vague or haven't included enough information; I have a fairly good knowledge of hardware but we're down a netops guy at the moment and I'd like to get both sites online ASAP! Thanks in advance!
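
    If the provider really is handing over a plain layer-2 path, the simplest pattern is to treat the inter-site cable as a trunk on a 2960 at each end, so both sites can share the same 192.168.x.x broadcast domain. A hedged sketch of the switchport config, assuming VLAN 10 is the VLAN the machines live in (that number is my invention):

        interface GigabitEthernet0/1
         description Uplink to provider inter-site link
         switchport mode trunk
         switchport trunk allowed vlan 10

    Whether you actually want one stretched subnet, or a routed/firewalled boundary between the sites instead, is the real design decision; the switch configuration itself is the easy part.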

    Read the article

  • Encrypt windows 8 file history

    - by SnippetSpace
    File History is great, but it saves your files on the external drive without any encryption and stores them using the exact same folder structure as the originals. If a bad guy gets his hands on the drive, it could hardly be easier for him to get to your important files. Is there any way to encrypt the File History backup without breaking its functionality and without having to encrypt the original content itself? Thanks for your input!
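
    One commonly suggested approach (my suggestion, not something from the question) is to encrypt only the backup target with BitLocker To Go; File History keeps writing its usual folder structure inside the encrypted volume once the drive is unlocked, and the originals stay untouched. A sketch, assuming the File History drive is E: and the Windows 8 edition includes BitLocker:

        manage-bde -protectors -add E: -Password
        manage-bde -on E:

    The drive then prompts for the password when attached (or unlocks automatically if auto-unlock is enabled on your own machine), so a stranger with the bare disk sees only ciphertext.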

    Read the article

  • One Apache server, multiple clients - best practices for config files?

    - by OttaSean
    First-time user; please be gentle. :-) (And if you don't like my question I'd be grateful for a comment as to why...) I am doing a contract at a government server shop that provides web services for multiple client groups in other areas of the government. My employer has asked me to look into how other shops, in similar situations, handle configuration files, and whether there are any best practices on the subject. I'm pretty sure there are lots of installations out there running multiple VirtualHosts out of one Apache installation, but surprisingly I couldn't find anything online about how people handle config file layout, so I was hoping some of you wise folks on ServerFault might have some thoughts or pointers for me. The current setup - which seems logical to me - is that each client site has its own directory off the root, so:

        /client/tps-reports/
        /client/silly-walks/
        /client/ministry-of-magic/

    and so on, and each of those directories has a /htdocs, /cgi-bin, and /conf (among others). The main /etc/apache/httpd.conf only contains Include statements (and lots of comments), the last of which is:

        Include /etc/apache/vhosts/*.conf

    The vhosts directory contains symlinks:

        tpsrept.conf -> /client/tps-reports/conf/tpsrept.conf
        sillywk.conf -> /client/silly-walks/conf/sillywk.conf
        mom.conf     -> /client/ministry-of-magic/mom.conf

    Each of those .conf files contains the actual NameVirtualHost definition and a gigantic <VirtualHost 192.168.12.34> stanza, which contains all the stuff about the specific site. The idea is that clients have access to what's in their own /client/xx directory, so they can change stuff in the section of the config that is relevant to them. As I mentioned above, that seems fairly logical to me, but I'm wondering if any of you wise folks are aware of potential gotchas with this sort of layout, or have any other thoughts on why it is or isn't a good idea. In particular, how do other places do it? Is there a "best practice" for this sort of thing? Many thanks in advance for your time and any thoughts you all might have.
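
    For concreteness, a hypothetical skeleton of what one of those per-client files might contain (the hostname and log paths are invented; only the IP and the directory convention come from the question):

        # /client/tps-reports/conf/tpsrept.conf
        NameVirtualHost 192.168.12.34
        <VirtualHost 192.168.12.34>
            ServerName tps-reports.example.gov
            DocumentRoot /client/tps-reports/htdocs
            ScriptAlias /cgi-bin/ /client/tps-reports/cgi-bin/
            ErrorLog /client/tps-reports/logs/error.log
            CustomLog /client/tps-reports/logs/access.log combined
        </VirtualHost>

    One gotcha worth flagging with client-writable includes: anything a client can put in their .conf file is read with Apache's full configuration privileges at (re)start, so a syntax error or a hostile directive in one client's file can take down or affect every other site on the box.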

    Read the article
