Search Results

Search found 37152 results on 1487 pages for 'shared access'.

Page 244 of 1487

  • How to write to a remote domain

    - by user347743
    Hello, I have a domain, www.example.com. On that domain, the directory www.example.com/write is chmodded to 777. Now suppose I want to create a new directory, such as www.example.com/write/new, from www.some-domain.com. What should I do? I know PHP and Python, so please relate any answers to those languages.
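
    Since the poster knows Python, here is a minimal hedged sketch using FTP, which is the usual way to create a directory on a host you control from somewhere else. The hostname and credentials are placeholders, and this assumes example.com runs an FTP service; the 777 permission on /write also suggests a small server-side PHP endpoint could do the mkdir instead.

        from ftplib import FTP

        # placeholders: substitute the real FTP host and account for example.com
        ftp = FTP("ftp.example.com")
        ftp.login("username", "password")
        ftp.cwd("write")   # enter the world-writable directory
        ftp.mkd("new")     # creates www.example.com/write/new
        ftp.quit()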

    Read the article

  • treating parameter as literal

    - by I__
    DoCmd.TransferText acImportDelim, Import-Accounts, "tableImport", _ "C:\Documents and Settings\accounts.txt", True. The second parameter, Import-Accounts, is the actual name of the saved import specification. Supposedly it does NOT need to be in quotes; however, in this case, since the name contains a -, VBA treats it as if I were performing a subtraction. Is there a way I can force it to be treated literally instead of as an operation?
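
    A hedged sketch of the usual fix, assuming the saved import specification really is named Import-Accounts: pass the name as a string literal so the VBA parser cannot read the hyphen as a minus operator.

        ' Quoting the specification name forces it to be treated literally
        DoCmd.TransferText acImportDelim, "Import-Accounts", "tableImport", _
            "C:\Documents and Settings\accounts.txt", True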

    Read the article

  • Why delete and recreate a querydef object when you can just change the .SQL property?

    - by dblE
    Do you remember the venerable old Microsoft Query by Form (QBF) VBA example from back in the day that recommended deleting an existing query and then recreating it dynamically? On Error Resume Next db.QueryDefs.Delete ("qryResults") On Error GoTo 0 Set qdf = db.CreateQueryDef("qryResults", "SELECT p.*... Why not just change the SQL property of the QueryDef object? qdf.SQL = "SELECT p.*... I am wondering if anyone knows why the MS engineers wrote an example that suggests deleting and then recreating a query instead of simply changing the SQL property. I would guess that deleting and recreating objects over time could contribute to corruption and bloat in your front end, not to mention that changing the SQL property is so much simpler. Does anyone have more insight into this?
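
    For comparison, a minimal sketch of the in-place approach, assuming a query named qryResults already exists; the SELECT shown is purely illustrative.

        Dim db As DAO.Database
        Dim qdf As DAO.QueryDef
        Set db = CurrentDb()
        Set qdf = db.QueryDefs("qryResults")
        ' Overwrite the stored SQL instead of deleting and recreating the query
        qdf.SQL = "SELECT p.* FROM tblPeople AS p WHERE p.City = 'Boston';"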

    Read the article

  • How do I deny access to everybody but me in Windows 7?

    - by GregH
    I am trying to set up a file server on my Windows 7 Pro system at home. I set up one common "Share" folder that I have shared/published. Within the share folder I want to have individual folders for me and my wife; that is, only I can read/write to my folder, only my wife can read/write to her folder, and neither of us can read the contents of the other person's folder. Then I want to have a "public" folder where we can both read/write to the contents of the folder, as well as any sub-folders created, but my "kids" account can only read from this folder and its sub-folders. It seems really confusing to set up something like this, and it really shouldn't be. I am really confused by the "allow", "deny", and dimmed check boxes in the security tab. It seems that if I "Deny" access to "Everyone" on my private folder, then I don't even have access to it. Windows security seems backwards from the rest of the world's security models: if I am in two groups and I deny access to one group but allow access to the other, then Windows denies me access, because I am in one of the groups that has access disallowed. Very confusing.
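
    One way to avoid Deny entries entirely is to break inheritance on each private folder and grant only the intended accounts. A hedged command-line sketch, assuming the share lives at C:\Share and the accounts are named Greg, Wife, and Kids (all paths and names hypothetical):

        icacls "C:\Share\Greg" /inheritance:r
        icacls "C:\Share\Greg" /grant "Greg:(OI)(CI)F" "SYSTEM:(OI)(CI)F" "Administrators:(OI)(CI)F"
        icacls "C:\Share\Public" /grant "Greg:(OI)(CI)M" "Wife:(OI)(CI)M" "Kids:(OI)(CI)RX"

    Because a Deny entry always overrides an Allow entry anywhere in your group memberships, removing inheritance and granting narrowly behaves far less surprisingly than denying Everyone.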

    Read the article

  • Since upgrading to Windows 8.1, I can't open any files on a SMB share shared by my OS X Mavericks Mac

    - by Gary
    I have a PC with Windows 8.1 and a Mac with Mavericks. I have a folder on the Mac that is shared with the PC. When I'm on the PC and I try to open a file that is shared by the Mac, such as an ISO file (a disk image), then I get a message saying that I cannot open the file, or the file is in use (it depends on the app/filetype). I have the same problem when I open a video file. Strangely, text files and PDF files are just fine. And if I copy any of the problematic files to the local Windows disk, then I can open them just fine. The specific error messages are: AVI files opened in VLC: "Your input can't be opened. VLC is unable to open the MRL." ISO files opened by Windows Explorer: "Sorry, there was a problem mounting the file." This only started happening after I upgraded to Windows 8.1 on the PC and Mavericks on the Mac. Mavericks upgraded its SMB version from SMB1 to SMB2, so perhaps that is related? Does anyone know what the problem might be, and how I could fix it? Thanks in advance!
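
    Given the SMB2 suspicion, one hedged experiment is to temporarily force the Windows 8.1 client back to SMB1, which Mavericks' older SMB1 code path handled more reliably; whether it cures this specific symptom is an assumption. These are the service-dependency commands Microsoft documents (KB2696547) for disabling the SMB2/3 client; run them in an elevated prompt, reboot afterwards, and note they affect every share the PC connects to, so revert them once the test is done:

        sc.exe config lanmanworkstation depend= bowser/mrxsmb10/nsi
        sc.exe config mrxsmb20 start= disabled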

    Read the article

  • Can MS Services for Unix be deployed and accessed from a shared drive?

    - by Ian C.
    I'm interested in experimenting with replacing our dependency on MKS with MS' Services for Unix toolset. I was wondering if anyone has any experience with deploying SFU on a shared drive? We like to, wherever possible, host our dev tools on one central NAS and call to the NAS to access the tools instead of rolling stuff out to each and every desktop. I'm not interested in the NFS support or ActiveState Perl. Really, none of the daemon technology is required here. I'm looking for replacements for the coreutils/binutils stuff you find in Linux (and MKS on Windows): sed, awk, csh, bash, grep, ls, find -- the meat-and-potatoes command line apps that our build and test scripts are built around. If I limit the install to just the Interix GNU Components (and maybe the Remote Connectivity components), will it run nicely from a shared location? To head off some questions: Yes, I've looked at Cygwin. Unfortunately its performance in our build and test environment is poor. It runs considerably slower than MKS, and it's not a direct drop-in replacement for MKS (thanks to its internal pathing and limitations with commands like 'ps'), so it's a tougher sell. Yes, I'm looking at the MinGW offering in parallel to this.

    Read the article

  • Website has become slower on a VPS, was much faster on a shared host. What's wrong?

    - by Arpit Tambi
    My shared host suspended my website citing system overload, so I moved my website to a VPS which has 4GB RAM. But for some reason the website has become very slow. This is the vmstat output - procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------ r b swpd free buff cache si so bi bo in cs us sy id wa st 1 0 0 3050500 0 0 0 0 0 1 0 0 0 0 100 0 0 Here's the Apache Benchmark output for a STATIC html page I ran on the server itself - Benchmarking www.ask-oracle.com (be patient)...apr_poll: The timeout specified has expired (70007) Total of 20 requests completed Update: Server config: CentOS 5.6, 4 CPU cores, 4 GB RAM, LAMP stack with APC, WordPress, only one website. It takes almost double the time to load now; the same website was much faster on shared hosting. I know I need to tweak some settings but have no clue where to start. I have already tried to optimize Apache, MySQL etc. Update 2: CPU usage is low, see uptime output: 11:09:02 up 7 days, 21:26, 1 user, load average: 0.09, 0.11, 0.09 Update 3: When I load any webpage, the browser shows "Waiting" for a long time and then the page loads quickly. So I suspect the server can accept only a limited number of connections and holds extra connections in a waiting state. How do I check this? Update 4: Following is the output of running netperf TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost.localdomain (127.0.0.1) port 0 AF_INET Recv Send Send Socket Socket Message Elapsed Size Size Size Time Throughput bytes bytes bytes secs. 10^6bits/sec 87380 16384 16384 10.00 9615.40 [root@ip-118-139-177-244 j3ngn5ri6r01t3]# Here are the Apache MPM settings from httpd.conf, do they look okay? <IfModule worker.c> StartServers 5 MaxClients 100 MinSpareThreads 50 MaxSpareThreads 250 ThreadsPerChild 125 MaxRequestsPerChild 10000 ServerLimit 100 </IfModule>
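
    For the "connections stuck in Waiting" suspicion in Update 3, a hedged way to look is to count TCP connection states on the web port and to re-run ApacheBench with explicit concurrency. The test filename is hypothetical; point it at any static file:

        # count connections per TCP state on port 80
        netstat -ant | grep ':80 ' | awk '{print $6}' | sort | uniq -c
        # benchmark a static file with 10 concurrent clients, 200 requests
        ab -n 200 -c 10 http://www.ask-oracle.com/static-test.html

    A pile-up of connections in SYN_RECV or ESTABLISHED while Apache workers sit idle would point at network or firewall settings rather than MPM tuning.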

    Read the article

  • choosing hosting for a custom ecommerce site: shared or dedicated, and what to look for?

    - by spirytus
    Hi, I have (almost) finished developing a website for my client and now need to decide on hosting. Most of the users of the site will be located in Australia, and so am I and my client. Now, I want to consider everything before deciding on a host, and a few questions come to mind: I cannot afford the website being down, and all hosts say something like "99% uptime guaranteed". Should just that be enough, or shall I ask hosts for some stats maybe? Does it make any difference if the servers and the whole hosting company are located in Australia or outside? I've been hosting a few sites with JustHost.com on shared hosting (cheapest plan, servers in the US I believe) and never seen any delays, but could that be an issue? I would prefer an Australian company so I can actually go to them and give them a piece of my mind if something goes wrong, but US servers seem cheaper. Would shared hosting do? It's a custom-built PHP ecommerce application; I know there are security issues with sessions etc. on shared hosting. I will take precautions of course, but could shared hosting be an issue? Would dedicated be a worthwhile option, considering that my knowledge of servers is very limited? I need to run PHP/MySQL, preferably with unlimited bandwidth, as with my experience I cannot tell what amount of traffic would be sufficient. Please let me know if I haven't provided enough information to answer my questions; I will gladly explain further. Thanks in advance for any answers :)

    Read the article

  • 403 Forbidden serving static files from VirtualBox shared folder with nginx (Ubuntu 10.04LTS guest, Windows 7 host)

    - by Chris Pratt
    I'm working on a local development VM and trying to test serving my site with gunicorn, with nginx as a reverse proxy for static resources only. The site loads, minus static resources, with user nginx; set in nginx.conf. Attempting to load a static resource individually reveals a 403 Forbidden error. For background: the static resources are in a shared folder under /media/sf_work. All files are owned by root:vboxsf (the VirtualBox default). My user account on the system has been added to the vboxsf group, and I have full access to the shared folder. For comparison, I tried changing the nginx.conf user to my user account. In that scenario, the static files did load, but then the homepage itself gave a 403 Forbidden error. So I then tried adding the nginx user to the vboxsf group, but then everything gave a 403 Forbidden error. After further investigation, it seems that if the nginx.conf user is in any group, the result is a 403 Forbidden. Any idea what could possibly be going on here?
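
    A hedged workaround that sidesteps group membership entirely: remount the shared folder so the nginx worker owns the files directly. The share name "work" is an assumption (it is whatever produces /media/sf_work), and the numeric IDs are placeholders; find nginx's real ones with id nginx first:

        # run as root; vboxsf supports uid=, gid=, dmode= and fmode= options
        umount /media/sf_work
        mount -t vboxsf -o uid=1001,gid=1001,dmode=0755,fmode=0644 work /media/sf_work

    With ownership set this way, the user directive in nginx.conf no longer depends on supplementary-group handling at all.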

    Read the article

  • Server 2012 - transparent SMB failover without shared discs, possible?

    - by TomTom
    Here is the scenario: there is a small set of data (around 200 GB) that I HAVE to keep available. These are basically shared VHD images that serve as master images for a lot of our VMs; the VMs then run in differencing disks off those. The whole set is "mostly read only". In more detail: a file that IS there and IS used will NEVER change. I may delete files (when absolutely not in use) and add new files, but once a file is there it gets set read-only, and it stays that way until it is retired. Obviously, I need as much uptime as possible. SO FAR we have run this by keeping the directory local on every Hyper-V server. Now I am thinking of moving it into our storage fabric. Due to the "it HAS to be there" requirement, I pretty much want a shared-nothing architecture. DFS would be perfect for this: a file never changes, so replication would work nicely. Folders could be replicated to a number of servers, and all hosts would reference them from there. Now that Hyper-V supports SMB, it could be a good idea to isolate these on a number of servers; we are trying to move to a scenario where storage is more centralized. Server 2012 supports always-on shares, but it seems that this only works with a clustered disk behind them. Is there any way around this for read-only file stores? All documentation points to things like a shared JBOD, but that would leave me open to file system corruption. I really plan to go quite separately here, vertically: 2 servers, both with SSD only for this, both with their own separate 2000W UPS, both with enough bandwidth to handle everything thrown at them (note to everyone thinking this is 10G: that would be SLOW and EXPENSIVE compared to a nice InfiniBand backbone). The real crux is that this is obviously an edge case, as the files are read-only once in use.
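
    If DFS Replication does fit, a hedged PowerShell sketch of wiring two Server 2012 file servers into a replication group follows; the server and path names are placeholders, and note this yields replicated copies behind a namespace, not transparent failover of already-open handles:

        New-DfsReplicationGroup -GroupName "MasterVHDs"
        New-DfsReplicatedFolder -GroupName "MasterVHDs" -FolderName "Images"
        Add-DfsrMember -GroupName "MasterVHDs" -ComputerName "FS1","FS2"
        Add-DfsrConnection -GroupName "MasterVHDs" -SourceComputerName "FS1" -DestinationComputerName "FS2"
        Set-DfsrMembership -GroupName "MasterVHDs" -FolderName "Images" -ComputerName "FS1" -ContentPath "D:\Images" -PrimaryMember $true
        Set-DfsrMembership -GroupName "MasterVHDs" -FolderName "Images" -ComputerName "FS2" -ContentPath "D:\Images"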

    Read the article

  • How can I access one desktop session from another on the same machine?

    - by d3vid
    I want to run a desktop session as user A, and from that session access a different desktop session as user B. This way I can test, screencast or share my screen from session B, while having access to apps/resources in session A that I do not want running/visible in session B. What application can I do this with? I assume some kind of a remote desktop client/server is what I'm looking for. So far I have tried: VNC. Logged in as user A and user B. In session B run Desktop Sharing. Switched to session A. Tried to access share with Remmina. Failed. (Can get image to appear but it's frozen.) x2go. Installed server and client from stable PPA (needed a workaround for installation to succeed). Created a connection which starts then fails instantly. Discovered mailing list post suggesting that accessing localhost is not supported. On the non-remote front: VirtualBox. Created a minimal virtual machine for session B. Too resource heavy. Am I attempting the impossible? Should I be looking for something other than a remote desktop tool?
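
    One more candidate, since nothing here actually needs to leave the machine: a nested X server. Xephyr runs session B inside an ordinary window of session A. A hedged sketch; the display number, resolution, and session command are assumptions (use whatever session binary your desktop provides), and -ac disables X access control so the second user may connect:

        sudo apt-get install xserver-xephyr
        # start a nested X server as a 1280x800 window on display :2
        Xephyr :2 -ac -screen 1280x800 &
        # start user B's session inside the nested display
        sudo -u userB env DISPLAY=:2 gnome-session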

    Read the article

  • Cannot access personal website from home IP. More details inside.

    - by GX67
    This is a recent problem I've been having. My site can be accessed from almost everywhere except my home IP, where I do most of my editing/updating, etc. I've tested my connection from my school's network, a friend's connection from out of state (multiple states), and through a tethered connection on my friend's Android. It works in all those cases: viewing the site, accessing the cPanel, and using FTP. Here's what happens when I try to view it from my home IP: the page times out in Firefox, IE, and Chrome. From the command line I ran tracert and ping; both attempts failed. Log here. downforeveryoneorjustme.com says my site is up. So do the other site checkers. I can't access my cPanel or FTP accounts. I can't access the host's site. (I use perfectz.info for hosting, and I can't access their site either.) System settings: no firewall enabled. Ports are seemingly properly forwarded (e.g. the ports are open in the router settings, and are open everywhere else). I have an email forwarder set up from the cPanel that works just fine (i.e. I can receive emails sent to that address). If any other information is needed, I'll do my best to provide it. UPDATE @ilhan: I use two things: 1) the site cPanel in-browser, and 2) Dreamweaver CS5 FTP. @Matthias: I tested both, and it passes the dual stack with a 10/10. What should I do then?

    Read the article

  • What credentials should I use to access a Windows share?

    - by JMCF125
    Hi, I have installed Samba and CIFS and all that, and followed a bunch of tutorials, but I still can't access a share on the separate Windows 7 machine. Previously I could access a share on Ubuntu from Windows, but now I can't, for whatever reason; the error when attempting to mount the Windows share is always the same: 13, asking for credentials (the computer with Windows is off now, but I can add the exact error message later). In /etc/fstab I have: # ... (help info) ... # <file system> <mount point> <type> <options> <dump> <pass> # ... (mount points that don't matter for the question) ... //192.168.1.2/C\:/Users/Public/Documents /srv/Z\:/ cifs user=guest,password=,uid=1000,iocharset=utf8 0 0 I also tried options such as username=guest,uid=1000,iocharset=utf8 and guest,uid=1000,iocharset=utf8, which, of course, don't work. What user am I supposed to use? (user=user; username=user; my credentials on the Windows and Ubuntu machines do not work, at least with the syntax I tried - similar to this). Even if this worked, it's not actually what I want. I want to set up authentication for anyone trying to access the drive (it's currently 777, for the Linux share as well) and put a limit/quota on the share's use (as I see Z: on Windows, it allows the entire C: drive to be filled). Thank you in advance. I'd be glad if you suggested a way to do this even without the last paragraph.
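
    A hedged sketch of a more conventional fstab entry: mount a named Windows share rather than a drive-letter path, and keep the credentials in a root-only file. The share name PublicDocs is an assumption; CIFS addresses shares by share name, so a path like //host/C:/Users/... will not resolve (the hidden administrative share C$ exists, but it requires an administrator account):

        # /etc/fstab
        //192.168.1.2/PublicDocs  /srv/windows  cifs  credentials=/etc/cifs-creds,uid=1000,iocharset=utf8  0  0

        # /etc/cifs-creds (chmod 600, owned by root)
        username=WindowsUser
        password=WindowsPassword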

    Read the article

  • Coherent access to mainframe files from Win32 application and IBM RDZ/Eclipse?

    - by Ira Baxter
    I have a suite of tools for processing IBM COBOL source code; these tools are built as Win32 applications and talk to Windows (including network) files using traditional Windows file system calls (open, close, read, write) and work just fine, thank you. I'd like to integrate these with Eclipse; we think we understand how to get Eclipse to do UI for us. The problem is that Eclipse/RDZ users access mainframe files through some IBM magic. In How does RDZ access mainframe files I tried to understand how Eclipse accesses files on a mainframe. Apparently Eclipse/RDZ has a secret filesystem access backdoor not available to normal mortals. At issue is how our tools, reading some Windows-accessible file (local disk file, NFS to mainframe, ...), can associate such files with the files that Eclipse can access or is using. Ideally we'd like UI-integrated versions of our tools to take an Eclipse file-name string for a mainframe file, pass it to our Windows application to process, have the Windows application open/read/process the file, and return results associated with that file to the Eclipse UI. Is there a canonical file name path that would be used with mainframe NFS that would be equivalent to the name or access object that Eclipse RDZ uses to access the same file? Are all operations doable internally by Eclipse also doable by the mainframe NFS [for instance, can NFS read/update an element in a partitioned data set? Can Eclipse RDZ? Does it matter?] Is the mainframe file access available to custom Java code running under Eclipse RDZ (e.g., equivalents of open/close/read/write based on filename/path/something)? If so, can somebody steer me towards documentation describing the access methods? Has anybody else already solved this problem, or does anyone have a good suggestion?

    Read the article

  • logrotate isn't rotating a particular log file (and I think it should be)

    - by Max Williams
    Hi all. For a particular app, I have log files in two places. One of the places has just one log file that I want to use with logrotate; for the other location I want to use logrotate on all log files in that folder. I've set up an entry called millionaire-staging in /etc/logrotate.d and have been testing it by calling logrotate -f millionaire-staging. Here's my entry: #/etc/logrotate.d/millionaire-staging compress rotate 1000 dateext missingok sharedscripts copytruncate /var/www/apps/test.millionaire/log/staging.log { weekly } /var/www/apps/test.millionaire/shared/log/*log { size 40M } So, for the first folder, I want to rotate weekly (this seems to have worked fine). For the other, I want to rotate only when the log files get bigger than 40 meg. When I look in that folder (using the same locator as in the logrotate config), I can see a file in there that's 54M and which hasn't been rotated: $ ls -lh /var/www/apps/test.millionaire/shared/log/*log -rw-r--r-- 1 www-data root 33M 2010-12-29 15:00 /var/www/apps/test.millionaire/shared/log/test.millionaire.charanga.com.access-log -rw-r--r-- 1 www-data root 54M 2010-09-10 16:57 /var/www/apps/test.millionaire/shared/log/test.millionaire.charanga.com.debug-log -rw-r--r-- 1 www-data root 53K 2010-12-14 15:48 /var/www/apps/test.millionaire/shared/log/test.millionaire.charanga.com.error-log -rw-r--r-- 1 www-data root 3.8M 2010-12-29 14:30 /var/www/apps/test.millionaire/shared/log/test.millionaire.charanga.com.ssl.access-log -rw-r--r-- 1 www-data root 16K 2010-12-17 15:00 /var/www/apps/test.millionaire/shared/log/test.millionaire.charanga.com.ssl.error-log -rw-r--r-- 1 deploy deploy 0 2010-12-29 14:49 /var/www/apps/test.millionaire/shared/log/unicorn.stderr.log -rw-r--r-- 1 deploy deploy 0 2010-12-29 14:49 /var/www/apps/test.millionaire/shared/log/unicorn.stdout.log Some of the other log files in that folder have been rotated though: $ ls -lh /var/www/apps/test.millionaire/shared/log total 91M -rw-r--r-- 1 www-data root 33M 2010-12-29 15:05 test.millionaire.charanga.com.access-log -rw-r--r-- 1 www-data root 54M 2010-09-10 16:57 test.millionaire.charanga.com.debug-log -rw-r--r-- 1 www-data root 53K 2010-12-14 15:48 test.millionaire.charanga.com.error-log -rw-r--r-- 1 www-data root 3.8M 2010-12-29 14:30 test.millionaire.charanga.com.ssl.access-log -rw-r--r-- 1 www-data root 16K 2010-12-17 15:00 test.millionaire.charanga.com.ssl.error-log -rw-r--r-- 1 deploy deploy 0 2010-12-29 14:49 unicorn.stderr.log -rw-r--r-- 1 deploy deploy 41K 2010-12-29 11:03 unicorn.stderr.log-20101229.gz -rw-r--r-- 1 deploy deploy 0 2010-12-29 14:49 unicorn.stdout.log -rw-r--r-- 1 deploy deploy 1.1K 2010-10-15 11:05 unicorn.stdout.log-20101229.gz I think what might have happened is that I first ran this config with a pattern matching *.log, which meant it only rotated the two files that ended in .log (as opposed to -log). Then, when I changed the config and ran it again, it wouldn't do any more, since it thinks it's already had its weekly run, or something. Can anyone see what I'm doing wrong? Is it to do with those top folders being owned by root rather than deploy, do you think? Thanks, Max
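
    A hedged first step for debugging this: logrotate keeps per-file state, and a forced run can be traced without changing anything. The paths below are the Debian/Ubuntu defaults; adjust for your distribution:

        # -d is debug/dry-run (no files are touched), -f forces the rotation logic
        logrotate -d -f /etc/logrotate.d/millionaire-staging
        # see when logrotate last considered each file
        cat /var/lib/logrotate/status

    If the 54M debug-log never appears in the trace, the glob or the stanza is being skipped; if it appears but is not rotated, the state file or directive ordering is the likely suspect.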

    Read the article

  • Not able to access the server after changing the password?

    - by cyrilsebastian
    While accessing the server, this error comes up: Multiple connections to a server or shared resource by the same user, using more than one user name, are not allowed. Disconnect all previous connections to the server or shared resource and try again. I am logging in as Administrator from an XP machine, and I am able to access the server from other machines. Is there a problem with the Administrator profile?
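
    A hedged sketch of the usual remedy for that exact message: clear the cached SMB sessions on the XP machine, then reconnect with a single, explicit set of credentials. The server and share names are placeholders:

        rem list current connections, then drop all of them
        net use
        net use * /delete /y
        rem reconnect with one explicit identity
        net use \\SERVER\share /user:DOMAIN\Administrator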

    Read the article

  • Sharing directories in Windows 7

    - by CoryR
    I've created a directory named "Shared" in my "My Documents" directory (C:\Users\Cory.MYDOMAIN\Documents\Shared). I right-clicked on the Shared folder and publicly shared this one directory. At least, that's what I wanted to do. In reality, Windows 7 shared the entire C:\Users directory. How can I share this one subdirectory without granting access to the rest of the C:\Users tree?
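
    What likely happened is that the "Share with" wizard shared the parent profile folder. A hedged alternative that shares exactly one subdirectory, run from an elevated command prompt; the share name and permission level are illustrative:

        net share SharedDocs="C:\Users\Cory.MYDOMAIN\Documents\Shared" /grant:Everyone,CHANGE

    NTFS permissions still apply on top of the share permission, so access can be narrowed further on the folder's Security tab.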

    Read the article

  • Redirect all access requests to a domain and subdomain(s) except from specific IP address? [closed]

    - by Christopher
    This is a self-answered question... After much wrangling I found the magic combination of mod_rewrite rules, so I'm posting here. My scenario is that I have two domains - domain1.com and domain2.com - both of which are currently serving identical content (by way of a global 301 redirect from domain1 to domain2). Domain1 was then chosen to be repurposed as a 'portal' domain - with a corporate CMS-based site leading off from the front page, and the existing 'retail' domain (domain2) left to serve the main web site. In addition, a staging subdomain was created on domain1 in order to prepare the new corporate site without impinging on the root domain's existing operation. I contemplated just rewriting all requests to domain2 and setting up the new corporate site 'behind the scenes' without using a staging domain, but I usually use subdomains when setting up new sites. Finally, I required access to the 'actual' contents of the domains and subdomains - i.e., to not be redirected like all other visitors - in order that I could develop the new site and test it in the staging environment on the live server, as I'm not using a separate development webserver in this case. I also have another test subdomain on domain1 which needed to be preserved. The way I eventually set it up was as follows (10.2.2.1 would be my home WAN IP): .htaccess in root of domain1 RewriteEngine On RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1 RewriteCond %{HTTP_HOST} !^staging.domain1.com$ [NC] RewriteCond %{HTTP_HOST} !^staging2.domain1.com$ [NC] RewriteRule ^(.*)$ http://domain2.com/$1 [R=301] .htaccess in staging subdomain on domain1: RewriteEngine On RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1 RewriteCond %{HTTP_HOST} ^staging.revolver.coop$ [NC] RewriteRule ^(.*)$ http://domain2.com/$1 [R=301,L] The multiple .htaccess files and multiple rulesets require more processing overhead and longer iteration, as the visitor is potentially redirected twice; however, I find it a more granular method of control, as I can selectively allow more than one IP address access to individual staging subdomain(s) without automatically granting them access to everything else. It also keeps the rulesets fairly simple and easy to read (or re-interpret, because I'm always forgetting how I put rules together!). If anybody can suggest a more efficient way of merging all these rules and conditions into just one main ruleset in the root of domain1, please post! I'm always keen to learn; this post is more my attempt to preserve this information for those who are looking to redirect entire domains for all visitors except themselves (for design/testing purposes), and not just deny specific file access for maintenance mode (there are many good examples of simple mod_rewrite rules for 'maintenance mode' style operation easily findable via Google). You can also extend the IP address detection - firstly by using wildcards, ^10\.2\.2\..*: the last octet's \..* denotes the usual "." and then "zero or more arbitrary characters", signified by the .* - so you can specify ranges of IPs in a subnet, or entire subnets, if you wish. You can also use square brackets, bearing in mind that a character class matches a single character at a time, so multi-digit octets must be built digit by digit: ^10\.2\.[1-9]?[0-9]\. or ^10\.2\.1[0-1][0-9]\. etc. (something like [1-255] does NOT mean 1 to 255). The third way, if you wish to specify multiple discrete IP addresses, is to bracket them in the style of ^(1\.1\.1\.1|2\.2\.2\.2|3\.3\.3\.3)$, and you can of course use square brackets to substitute octets or single digits again.
    NB: if you're using individual RewriteCond lines to specify multiple IPs / ranges, make sure to put [OR] at the end of each one, otherwise mod_rewrite will interpret them as "if IP address matches 1.1.1.1 AND if IP address matches 2.2.2.2...", which is of course impossible! However, as far as I'm aware, this isn't necessary if you're using the ! negator to specify "and is not...". Kudos also to SE: this older question also came in useful when I was verifying my own knowledge prior to my futzing around with code. This page was helpful, as were the various other links posted below (can't hyperlink them all due to spam protection... other regex checkers are available). The AddedBytes cheat sheet's useful to pin up on your wall. Other referenced URLs: internetofficer.com/seo-tool/regex-tester/ fantomaster.com/faarticles/rewritingurls.txt addedbytes.com/cheat-sheets/mod_rewrite-cheat-sheet/
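
    In answer to the closing question above, a hedged sketch of a merged ruleset for the root of domain1, assuming all three hostnames are served from the same document root (if the staging subdomains have their own docroots, separate .htaccess files remain necessary). It preserves the original semantics, including leaving staging2 un-redirected:

        RewriteEngine On
        # non-home visitors on any host except the staging hosts go to domain2
        RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1$
        RewriteCond %{HTTP_HOST} !^staging2?\.domain1\.com$ [NC]
        RewriteRule ^(.*)$ http://domain2.com/$1 [R=301,L]
        # non-home visitors on staging.domain1.com are also redirected
        RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1$
        RewriteCond %{HTTP_HOST} ^staging\.domain1\.com$ [NC]
        RewriteRule ^(.*)$ http://domain2.com/$1 [R=301,L]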

    Read the article

  • How to get rid of GUI access from a shared library?

    - by Inso Reiges
    Hello, in my project I have a shared library with cross-platform code that provides a very convenient abstraction for a number of its clients. To be more specific, this library provides data access to encrypted files generated by the main application on a number of platforms. There is a great deal of complicated code there that implements cryptographic protocols; as such it is very error-prone and should be shared as much as possible across clients and platforms. However, parsing all this encrypted stuff requires asking the user for a number of different secrets every once in a while. The secret can be a password, a number of shared passwords, or a public key file, and this list is a hot target for extension in the future. I can't really ask the user for any of those secrets beforehand from the main application, because I really don't know what I need to ask for until I start working with the encrypted data directly in the library code. So I will have to create dialogs and call them from the library code. However, I really see this as a bad idea, because (among other things) there is the possibility of a Windows service using it, and services can't have GUI access. The question is: are there any known ways or patterns to get rid of the GUI calls that are suitable for my case? Thank you.
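
    A hedged sketch of the usual pattern here, dependency inversion: the library defines an abstract secret-provider interface and never touches UI; each client injects its own implementation (dialog-based in the GUI application, key-store or configuration based in the service). All names are illustrative:

        #include <string>

        // What the library needs to ask for; extensible without any UI knowledge.
        enum class SecretKind { Password, SharedPasswords, PublicKeyFile };

        struct SecretRequest {
            SecretKind kind;
            std::string prompt;   // context the client may show to a user
        };

        // Implemented by each client: the GUI app pops a dialog, the service reads a vault.
        class SecretProvider {
        public:
            virtual ~SecretProvider() = default;
            virtual std::string provide(const SecretRequest& request) = 0;
        };

        // Library code receives the provider and stays UI-free.
        class EncryptedFileReader {
        public:
            explicit EncryptedFileReader(SecretProvider& provider) : provider_(provider) {}
            void open(const std::string& path) {
                // ...parse the header, discover which secret is required...
                std::string secret = provider_.provide({SecretKind::Password, "Key for " + path});
                // ...decrypt using the secret...
            }
        private:
            SecretProvider& provider_;
        };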

    Read the article

  • C++ design pattern for CoW, inherited classes, and variable shared data?

    - by krunk
    I've designed a copy-on-write base class. The class holds the default set of data needed by all children in a shared-data/CoW model. The derived classes also have data that only pertains to them, but that should be CoW between other instances of that derived class. I'm looking for a clean way to implement this. If I had a base class FooInterface with shared data FooDataPrivate and a derived object FooDerived, I could create a FooDerivedDataPrivate. The underlying data structure would not affect the exposed getters/setters API, so this is not about how a user interfaces with the objects. I'm just wondering if this is a typical MO for such cases, or if there's a better/cleaner way. What piques my interest is that I see the potential for inheritance between the private data classes, e.g. FooDerivedDataPrivate : public FooDataPrivate, but I'm not seeing a way to take advantage of that polymorphism in my derived classes. class FooDataPrivate { public: Ref ref; // atomic reference counting object int a; int b; int c; }; class FooInterface { public: // constructors and such // .... // methods are implemented to be copy on write. void setA(int val); void setB(int val); void setC(int val); // copy constructors, destructors, etc. all CoW friendly private: FooDataPrivate *data; }; class FooDerived : public FooInterface { public: FooDerived() : FooInterface() {} private: // need more shared data for FooDerived // this is the ???, how is this best done cleanly? };
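
    A hedged sketch of one way to use that polymorphism: give the private-data hierarchy a virtual clone(), so the base class's copy-on-write detach copies the full derived payload without knowing its concrete type. The names follow the question's classes; the detach() helper, the Ref API, and making the data pointer protected rather than private are all assumptions about how the CoW machinery is implemented:

        class FooDataPrivate {
        public:
            virtual ~FooDataPrivate() = default;
            // the CoW machinery copies through this, so derived payloads survive
            virtual FooDataPrivate* clone() const { return new FooDataPrivate(*this); }
            Ref ref;      // as in the question; a real Ref must reset to 1 on copy
            int a, b, c;
        };

        class FooDerivedDataPrivate : public FooDataPrivate {
        public:
            FooDataPrivate* clone() const override { return new FooDerivedDataPrivate(*this); }
            int d;        // FooDerived-only shared data
        };

        // In FooInterface (with data made protected rather than private):
        // void FooInterface::detach() {
        //     if (!data->ref.isShared()) return;      // hypothetical Ref API
        //     FooDataPrivate* copy = data->clone();   // virtual: copies the derived part too
        //     data->ref.deref();
        //     data = copy;
        // }
        // FooDerived stores no second pointer; it downcasts when it needs its fields:
        // int FooDerived::getD() const { return static_cast<const FooDerivedDataPrivate*>(data)->d; }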

    Read the article
