Search Results

Search found 41598 results on 1664 pages for 'segmentation fault'.

Page 339/1664 | < Previous Page | 335 336 337 338 339 340 341 342 343 344 345 346  | Next Page >

  • .htaccess working on remote server but does not work on localhost. Getting 404 errors on localhost

    - by Afsheen Khosravian
    MY PROBLEM: When I visit localhost the site does not work. It shows some text from the site, but it seems the server cannot locate any other files. Here is a snippet of the errors from Firebug:

        "NetworkError: 404 Not Found - localhost/css/popup.css"
        "NetworkError: 404 Not Found - localhost/css/style.css"
        "NetworkError: 404 Not Found - localhost/css/player.css"
        "NetworkError: 404 Not Found - localhost/css/ui-lightness/jquery-ui-1.8.11.custom.css"
        "NetworkError: 404 Not Found - localhost/js/jquery.js"

    It seems my server is looking for the files in the wrong places. For example, localhost/css/popup.css is actually located at localhost/app/webroot/css/popup.css. I have my site set up on a remote server with the exact same configuration and it works perfectly fine; I am only having this issue trying to run the site on my laptop at localhost. I edited the DocumentRoot in my VirtualHosts file to /home/user/public_html/site.com/public/app/webroot/ and this reduces some errors, but I feel that this is wrong and sort of a hack, since I didn't use that setting on my production server, which works. The last note I want to make is that the website uses dynamic URLs; I don't know if that has anything to do with it. For example, on the production server the URLs are: site.com/#hello/12321.

    HERE'S WHAT I AM WORKING WITH: I have a LAMP server set up on my laptop running Ubuntu 11.10. I have enabled mod_rewrite:

        sudo a2enmod rewrite

    Then I edited my Virtual Hosts file:

        <VirtualHost *:80>
           ServerName localhost
           DirectoryIndex index.php
           DocumentRoot /home/user/public_html/site.com/public
           <Directory /home/user/public_html/site.com/public/>
              Options Indexes FollowSymLinks MultiViews
              AllowOverride All
              Order allow,deny
              allow from all
           </Directory>
        </VirtualHost>

    Then I restarted Apache. My website is using CakePHP. This is the directory structure of the website: "/home/user/public_html/site.com/public" contains:

        index.php  app  cake  plugins  vendors

    These are my .htaccess files:

    /home/user/public_html/site.com/public/app/.htaccess:

        <IfModule mod_rewrite.c>
           RewriteEngine on
           RewriteRule    ^$    webroot/    [L]
           RewriteRule    (.*)  webroot/$1  [L]
        </IfModule>

    /home/user/public_html/site.com/public/app/webroot/.htaccess:

        <IfModule mod_rewrite.c>
           RewriteEngine On
           RewriteCond %{REQUEST_FILENAME} !-d
           RewriteCond %{REQUEST_FILENAME} !-f
           RewriteRule ^(.*)$ index.php?url=$1 [QSA,L]
        </IfModule>
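
    If the root-level .htaccess is missing on the laptop (hidden files are easy to lose when copying a site), Apache never rewrites /css/... into /app/webroot/css/..., which would produce exactly these 404s. The sketch below is the rewrite file a stock CakePHP 1.x layout expects at /home/user/public_html/site.com/public/.htaccess; treat it as an assumption to verify against the working production copy:

        <IfModule mod_rewrite.c>
           RewriteEngine on
           RewriteRule    ^$    app/webroot/    [L]
           RewriteRule    (.*)  app/webroot/$1  [L]
        </IfModule>

    After adding it, a config check and reload (sudo apache2ctl -t && sudo service apache2 restart) should be enough; AllowOverride All is already set in the vhost, so the file will be honoured.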

    Read the article

  • SQL CLR not properly enabling

    - by dnolan
    We have a SQL server running SQL 2005 Workgroup 64 bit (9.0.4273) on Windows 2003 Server 64 bit. We have run sp_configure and reconfigured the server, which indicates that the CLR is now enabled:

        exec sp_configure 'clr enabled', '1'
        go
        reconfigure
        go

    However, when trying to call CREATE ASSEMBLY the server completely dies on us and we have to do a full reboot of the machine. A little more diagnostic information: even though clr enabled is set to 1 and we have rebooted the full server, running the following statement

        select * from sys.dm_clr_properties

    returns

        directory
        version
        state      locked CLR version with mscoree

    which is what it says when the CLR is not enabled on another machine. On a correctly enabled machine (after reboot) this query reads

        directory  C:\Windows\Microsoft.NET\Framework64\v2.0.50727\
        version    v2.0.50727
        state      CLR is initialized
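
    One thing worth checking before the CREATE ASSEMBLY call is whether the option has actually taken effect on the running instance, not just in the stored configuration; sp_configure reports both values. A minimal check, assuming sqlcmd and the default instance (adjust -S as needed):

        rem Verify the running value, not just the configured value
        rem (config_value and run_value should both read 1 after RECONFIGURE).
        sqlcmd -S . -E -Q "EXEC sp_configure 'clr enabled'"

        rem State of the hosted CLR; a healthy instance reports 'CLR is initialized'.
        sqlcmd -S . -E -Q "SELECT name, value FROM sys.dm_clr_properties"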

    Read the article

  • Simple SQL Server 2005 Replication - "D-1" server used for heavy queries/reports

    - by Ricardo Pardini
    Hello. We have two SQL 2005 machines. One is used for production data, and the other is used for running queries/reports. Every night, the production machine dumps (backs up) its database to disk, and the other one restores it. This is called the D-1 process. I think there must be a more efficient way of doing this, since SQL 2005 has many forms of replication. Some requirements:

        1) No need for instant replication, there can be (some) delay
        2) All changes (including schemas, data, constraints, indexes) need to be replicated without manual intervention
        3) It is used for a single database only
        4) There is a third server available if needed
        5) There is high bandwidth (gigabit ethernet) available between the servers
        6) There isn't shared storage (SAN) available

    What would be a good alternative to this daily backup/restore routine? Thanks!
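
    Log shipping is one common middle ground for this set of requirements: it carries every change (schema, data, constraints, indexes) because it replays the transaction log, it tolerates delay, and it needs no shared storage. A rough sketch of the manual form via sqlcmd; the server names, database name, share and undo-file path are purely illustrative, and an initial full restore WITH STANDBY is needed before the first log restore:

        rem On the production server: periodic log backup to a share.
        sqlcmd -S PRODSRV -E -Q "BACKUP LOG MyDb TO DISK = '\\reports\ship\MyDb.trn' WITH INIT"

        rem On the reporting server: restore in STANDBY so the database stays
        rem readable between restores.
        sqlcmd -S REPTSRV -E -Q "RESTORE LOG MyDb FROM DISK = '\\reports\ship\MyDb.trn' WITH STANDBY = 'D:\ship\MyDb_undo.dat'"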

    Read the article

  • How Can I Configure Selenium grid to test website in parallel?

    - by prakash.panjwani
    Hello friends, I want to use Selenium Grid for my web page testing. I have successfully installed the Selenium Grid demo on my PC and it is running fine; I followed this link to install and run the demo. I am trying to write a Java program using Selenium RC that can run against Selenium Grid to test the web site, but I do not understand what changes I need to make to the existing Selenium Grid demo so that it works for my own web tests. Can somebody provide me a link or example showing how to do that?

    Read the article

  • Add Route for machine in same DC

    - by gary
    My routing table on my machine with IP 46.84.121.243 currently looks like this:

        Network Destination    Netmask            Gateway         Interface       Metric
        0.0.0.0                0.0.0.0            46.84.121.225   46.84.121.243   21
        46.84.121.224          255.255.255.224    On-link         46.84.121.243   276
        46.84.121.239          255.255.255.255    On-link         46.84.121.243   21
        46.84.121.243          255.255.255.255    On-link         46.84.121.243   276
        46.84.121.255          255.255.255.255    On-link         46.84.121.243   276

    I'm trying to access 46.84.121.239, which is my other machine in the same DC, but my guess is the first rule is blocking it as it is trying to go via the gateway and failing:

        Tracing route to [46.84.121.239] over a maximum of 30 hops:
          1  OWNEROR-9O83HBL [46.84.121.243]  reports: Destination host unreachable.
        Trace complete.

    I'm doing all this via RDP and already tried changing the metric on the persistent rule, with devastating consequences! Here's the persistent rule (working):

        Persistent Routes:
        Network Address    Netmask    Gateway Address    Metric
        0.0.0.0            0.0.0.0    46.84.121.225      1

    Any help to be able to access 46.84.121.239 would be very welcome, thanks very much.
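
    Given that a /32 on-link route to .239 already exists in the table above, a quick check of whether the neighbour ever answers ARP helps separate a routing problem from the hosting provider isolating guests on the same subnet (not unusual in datacentres that force all traffic through the gateway). A diagnostic sketch, run from an elevated prompt:

        rem Does the neighbour resolve at layer 2 at all?
        arp -d 46.84.121.239
        ping -n 2 46.84.121.239
        arp -a 46.84.121.239

        rem If no MAC address appears for .239, the two hosts cannot see each other
        rem directly and traffic has to go via the gateway 46.84.121.225 instead of
        rem on-link; that points at the provider's setup rather than this table.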

    Read the article

  • Scripting a Windows 2008 Cluster from Windows 2003

    - by glancep
    Our current environment is all Windows 2003. When we migrate a new version of our service to the cluster, we first stop the service with a command like:

        cluster.exe <clusterName> resource "<serviceName>" /offline

    and do the same after the migration to bring the service back online. Now we are upgrading our environment to new Windows 2008 servers; however, our build/migrate machine will remain on Windows 2003. When issuing the same command from Windows 2003 against Windows 2008, we get:

        System error 1722 has occurred (0x000006ba).
        The RPC server is unavailable.

    We need to be able to remotely administer a Windows 2008 cluster from a Windows 2003 server in an automated fashion (such as with the command-line cluster.exe utility). Is this possible? Thanks, Gideon
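
    The 2003-era cluster.exe talks to an older cluster RPC interface that a 2008 failover cluster no longer accepts, so one workaround that keeps the build box on 2003 is to execute the node's own cluster.exe remotely, for example with Sysinternals PsExec. A hedged sketch; the node name, account and resource name are placeholders, and PsExec will prompt for the password:

        rem Run the 2008 node's native cluster.exe from the 2003 build machine.
        psexec \\clusternode1 -u DOMAIN\deploy cluster.exe resource "MyService" /offline

        rem ... copy the new service binaries ...

        psexec \\clusternode1 -u DOMAIN\deploy cluster.exe resource "MyService" /online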

    Read the article

  • rsAccessDenied - SQL server 2008 reporting services

    - by rboorgapally
    Hi, I am running SQL Server 2008 Developer Edition on Windows Vista Home Premium. I created a Reporting Services project that was built successfully in BIDS. When I try to deploy it, it gives the following error:

        Error rsAccessDenied: The permissions granted to user 'COMP\MYSELF' are insufficient for performing this operation.

    The MYSELF account is the only account on the system and it has administrator rights. The Reporting Services service is running under the LocalSystem service account. If I log in to Report Manager with the MYSELF account, I cannot see the Site Settings tab, and without the Site Settings tab I cannot add or change the roles for the MYSELF account. In summary, please help me get Report Manager to open in the browser with the Site Settings link visible, so that I can change the role of the user account.

    Read the article

  • Can 'Percona MySQL Data Recovery' be used to recover dropped tables if the datadir filesystem is mounted as /?

    - by Tom Geee
    According to Percona, you should unmount the filesystem or make it read-only if:

        - You have filesystem corruption, OR
        - You have dropped tables in innodb_file_per_table format

    If I have innodb_file_per_table enabled and accidentally dropped a table while the datadir is mounted within the / partition, can the data still be recovered? Obviously you can't work with an unmounted root filesystem. Our VPS host has a default filesystem layout which we cannot customize. I am asking in case of any future scenario.

    Edit: would mounting the / filesystem onto another system as read-only via NFS be a workaround? TIA.
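
    If remounting / read-only isn't possible on the VPS, the usual fallback is to stop writes as quickly as possible (stop mysqld, or the whole guest) and take a raw image of the block device to work on elsewhere, since the recovery tooling only needs the bytes, not a mounted filesystem. A rough sketch; the service name, device and destination host are assumptions:

        # Stop anything that might reuse the freed pages (service name varies: mysql or mysqld).
        service mysql stop

        # Worth one try, though a busy root filesystem usually refuses:
        mount -o remount,ro /

        # Copy the raw device that holds the datadir to storage on another host;
        # recovery is then run against the copy, never the live disk.
        dd if=/dev/vda1 bs=1M | ssh user@rescuehost 'cat > /data/vps-root.img'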

    Read the article

  • How to recover deleted NTFS partitions?

    - by Frank
    Last night I made a terrible mistake. I was reinstalling Windows and I accidentally deleted all the partitions on all my drives. I realized my mistake before I had created any partitions, so nothing has been written to any of the disks. I'm currently at my wits' end about what I'll do if I don't manage to recover the data. I have two 1TB drives and a 2TB. One of the 1TB was the drive I was supposed to be reformatting so nothing to be recovered there. I am currently in a Linux livecd. In this article http://support.microsoft.com/kb/245725 Microsoft advises to recreate the exact same partition but choose not to format it, and then recover the backup boot sector from the end of the ntfs volume. But none of the drives I want to recover are bootable drives. So does that mean I do not need to rewrite the boot sector? As in if I simply recreate a partition of the same size it will see all my data? Or would I be better off using the TestDisk utility? http://www.cgsecurity.org/wiki/TestDisk Please help, I'm desperate!!
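
    TestDisk is generally the safer first step here, because it scans for the old NTFS boot sectors and can rewrite just the partition table instead of recreating partitions by hand (rewriting the backup boot sector only matters if the partition's own boot sector is damaged, not because a drive is bootable). A sketch of a read-only first pass from the live CD; device names are examples, so check with fdisk first:

        sudo fdisk -l                    # identify the right disks first
        sudo testdisk /list /dev/sdb     # read-only: list partitions TestDisk can still find

        # Interactive run: pick the disk, partition table type (Intel), then
        # Analyse -> Quick Search; nothing is written until you choose "Write".
        sudo testdisk /dev/sdb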

    Read the article

  • How to configure CruiseControl.Net for Windows Authentication?

    - by balu
    I am using CruiseControl.NET for continuous integration. The dashboard is currently accessed through the login plugin, which authenticates and authorizes users by checking them against a set of users saved as an XML file on the CruiseControl.NET server. Now I need to bring Windows Authentication into the system, so that when the CruiseControl.NET web dashboard is accessed from a client machine (a local machine joined to a common server), the user is authenticated and authorized to use the CruiseControl.NET features based on the authority of the logged-in user. Kindly guide me on how to go ahead with this; I would appreciate any resources that would be helpful for achieving it. Thanks.

    Read the article

  • DNS Server Order Incorrect on Windows 7 via PPTP VPN to Windows 2003 Server

    - by Simon
    Hi there. When I connect a Windows XP laptop via PPTP VPN to our Windows 2003 Server, the DNS server order is correct:

        192.168.8.3
        208.67.222.222
        208.67.220.220

    But when I connect a Windows 7 laptop via PPTP VPN to our Windows 2003 Server, the DNS order is incorrect:

        208.67.222.222
        208.67.220.220
        192.168.8.3

    What do I need to do on our Windows 2003 Server to fix this, so that when I do a ping it will resolve correctly?

    Read the article

  • SQL Server 2005 - Linked Visual Foxpro Authorization

    - by John
    Here's the scenario: we have an existing SQL 2000 server that has a linked server to a share directory (on another server) containing Visual FoxPro tables; all connections work correctly. Porting the SQL 2000 server to a new SQL 2005 server results in questionable behavior: if you connect to the server remotely using Windows Authentication, you receive this error when running a query against the linked server:

        OLE DB provider "MSDASQL" for linked server "[linked server name]" returned message
        "[Microsoft][ODBC Visual FoxPro Driver]File 'MyTable.dbf' does not exist.".
        Msg 7350, Level 16, State 2, Line 2
        Cannot get the column information from OLE DB provider "MSDASQL" for linked server "[linked server name]".

    However, logged in locally, the query works fine. The query also works correctly when logged in remotely but using a SQL login. The only scenario in which I receive the error is when connected remotely using Windows Authentication. As I mentioned before, this works on the SQL 2000 server, and both the old and new servers are running under the same network account (which has access to the folder the FoxPro files are in). Doing a little searching on the internet it looks like others have run into this situation, but I haven't found a resolution. Has anyone run into this before?
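
    This pattern (works locally and with a SQL login, fails remotely with Windows Authentication) usually comes down to the double-hop problem: the remote caller's Windows token cannot be reused by the provider to reach the file share, so it can't open MyTable.dbf. A common workaround, sketched below, is to map logins on the linked server so they do not forward the caller's token, letting the provider run in the SQL Server service account's context instead; the linked server name is a placeholder:

        rem Map all logins to connect without impersonating the caller.
        sqlcmd -S . -E -Q "EXEC sp_addlinkedsrvlogin @rmtsrvname = N'FoxProLinked', @useself = 'FALSE', @locallogin = NULL, @rmtuser = NULL, @rmtpassword = NULL"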

    Read the article

  • How to resolve a "driver failure" error in the Cisco VPN client connecting from a Windows 7 client

    - by JosephStyons
    I have recently upgraded my laptop from Windows Vista SP1 to Windows 7 Professional. After the upgrade, if I try to use the Cisco VPN client to connect to a network, I get this message: Secure VPN Connection terminated locally by the Client. Reason 440: Driver Failure. Prior to the upgrade, I was able to connect with no problems. The version of the client I am using is 5.0.05.0290.

    Read the article

  • Enabling a trace spec on Glassfish v2

    - by Kiran
    Hi guys, I guess this might have been answered previously, but I don't seem to find an answer. Can anyone please let me know how to add a trace spec in GlassFish v2? I am very new to this, so I am not very familiar with it. I need to enable a security trace string and an ORB trace string on GlassFish v2. On GlassFish v3 I see a file called logging.properties where all the trace strings are given; is there any such file on v2 to work with, or do we need to add a property to domain.xml? Thanks in advance.
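
    In GlassFish v2 the per-module log levels live in domain.xml under <module-log-levels>, and they can be flipped with asadmin rather than editing the file by hand. A sketch, assuming the default domain and admin port; output lands in the usual server.log:

        # Raise the security and CORBA/ORB modules to FINEST (trace) on the DAS.
        asadmin set server.log-service.module-log-levels.security=FINEST
        asadmin set server.log-service.module-log-levels.corba=FINEST

        # Confirm what is currently set.
        asadmin get "server.log-service.module-log-levels.*"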

    Read the article

  • tomcat5 HTTP 400 Bad Request

    - by Oneiroi
    OS is CentOS 5.5 x64; the rpms are as follows:

        tomcat5-jsp-2.0-api-5.5.23-0jpp.9.el5_5
        tomcat5-common-lib-5.5.23-0jpp.9.el5_5
        tomcat5-servlet-2.4-api-5.5.23-0jpp.9.el5_5
        tomcat5-server-lib-5.5.23-0jpp.9.el5_5
        tomcat5-5.5.23-0jpp.9.el5_5
        tomcat5-jasper-5.5.23-0jpp.9.el5_5

    Requesting the root page by hand:

        telnet localhost 8080
        Trying 127.0.0.1...
        Connected to localhost.localdomain (127.0.0.1).
        Escape character is '^]'.
        GET / HTTP/1.0
        Host: localhost

        HTTP/1.1 400 Bad Request
        Server: Apache-Coyote/1.1
        Date: Thu, 16 Sep 2010 15:06:21 GMT
        Connection: close

    alternatives --display java output:

        java - status is manual. link currently points to /usr/lib/jvm/jre1.6.0_21/bin/java

        /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/java - priority 16000
            (slaves keytool, orbd, pack200, rmid, rmiregistry, servertool, tnameserv,
             unpack200, jre_exports, jre and the matching man pages all point into the
             openjdk 1.6.0 installation)

        /usr/lib/jvm/jre-1.4.2-gcj/bin/java - priority 1420
            (slaves keytool, rmiregistry, jre_exports and jre point into the 1.4.2 gcj
             installation; the remaining slaves are (null))

        /usr/lib/jvm/jre1.6.0_21/bin/java - priority 2
            (all slaves are (null))

        Current `best' version is /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/java.

    The same occurs when trying HTTP/1.1, and I am at a complete loss as to why.

    Read the article

  • Difference between compiled from source and installed via rpm (zypper)

    - by cherouvim
    On openSUSE 11.1 I download, compile and install ImageMagick via:

        wget ftp://.../pub/graphics/ImageMagick/ImageMagick-6.7.7-0.zip
        unzip ImageMagick-6.7.7-0.zip
        cd ImageMagick-6.7.7-0
        ./configure --prefix=/usr/local/ImageMagick
        make
        make install

    Everything works nicely until I discover that JPG is not supported:

        identify -list format | grep -i jpg
        [nothing related to JPG returned]

    So I reconfigure and recompile using:

        ./configure --prefix=/usr/local/ImageMagick --with-jpeg=yes --with-jp2=yes
        make
        make install

    But that changes nothing. I end up uninstalling:

        make uninstall

    and installing via zypper:

        zypper install ImageMagick

    This installed version 6.4.3, and now it does support JPG:

        identify -list format | grep -i jpg
        JPG* JPEG rw-   Joint Photographic Experts Group JFIF format

    Any idea what is going on here? What is a possible reason that this capability of ImageMagick was not there when compiled from source but was there when installed from the rpm? Note that I don't necessarily care a lot about ImageMagick (since it now works), but generally about this kind of behaviour, because in one way or another I've seen this happen on other occasions as well.
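
    The usual explanation is that the source build found no JPEG headers at configure time, so the delegate was silently skipped, while the distribution rpm is already linked against the distro's libjpeg; --with-jpeg only enables the check, it does not install the library. A sketch of how one might verify and fix that on openSUSE (exact package names may differ between releases):

        # Install the development headers the configure script looks for.
        zypper install libjpeg-devel zlib-devel

        # Re-run configure and inspect its delegate summary / log.
        ./configure --prefix=/usr/local/ImageMagick
        grep -i jpeg config.log | tail

        make && make install
        /usr/local/ImageMagick/bin/identify -list format | grep -i jpeg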

    Read the article

  • Sharing disk volumes across OpenVZ guests to reduce Package Management Overhead

    - by andyortlieb
    Is it feasible to create a single "master" OpenVZ guest that would only be used for package management, and use something like mount --bind on several other OpenVZ guests to, in effect, trick them into using the environment installed by the master guest? The point of this would be that users can maintain their own containers and yet stay in sync with the master development environment, so they'll always have the latest & greatest requirements without worrying too much about system administration. If they need to install their own packages, they could put them in /opt or /usr/local (or set a path to their home directory). To rephrase: I would like several (developers', for example) OpenVZ guests whose /bin, /usr (and so on...) actually refer to the same disk location as that of a master OpenVZ guest, which can be started up to install and update common packages for the environment shared by this group of OpenVZ guests. For what it's worth, we're running Debian 6.

    Edit: I have tried mounting (bind, and read-only) /bin, /lib, /sbin and /usr in this fashion, and it refuses to start the containers, stating that files are already mounted or otherwise in use:

        Starting container ...
        vzquota : (error) Quota on syscall for id 1102: Device or resource busy
        vzquota : (error) Possible reasons:
        vzquota : (error)   - Container's root is already mounted
        vzquota : (error)   - there are opened files inside Container's private area
        vzquota : (error)   - your current working directory is inside Container's private area
        vzquota : (error) Currently used file(s):
        /var/lib/vz/private/1102/sbin
        /var/lib/vz/private/1102/usr
        /var/lib/vz/private/1102/lib
        /var/lib/vz/private/1102/bin
        vzquota on failed [3]

    If I unmount these four volumes and start the guest, and then mount them after the guest has started, the guest never sees them mounted.
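
    Bind-mounting into the private area from the host ahead of time is what trips vzquota; the supported hook is a per-container mount action script, which runs after vzctl has mounted the container root but before the guest starts, so the mounts land in the container's namespace. A rough sketch, assuming the Debian layout and using container ID 1101 as the illustrative "master"; VE_ROOT and VE_CONFFILE are provided by vzctl:

        #!/bin/bash
        # /etc/vz/conf/1102.mount -- executed by vzctl whenever CT 1102 is mounted.
        . /etc/vz/vz.conf          # defines the VE_ROOT path template
        . "${VE_CONFFILE}"         # this container's own settings, exported by vzctl

        # Master guest whose installed packages are shared (illustrative ID).
        MASTER=/var/lib/vz/private/1101

        for d in bin sbin lib usr; do
            mount -n --bind "${MASTER}/${d}" "${VE_ROOT}/${d}"
            mount -n -o remount,ro,bind "${VE_ROOT}/${d}"   # read-only needs the remount
        done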

    Read the article

  • Remote Desktop Session Black after Minimize

    - by TorgoGuy
    PROBLEM: When I minimize a remote desktop session and restore it, the remote desktop screen shows up black. This only happens when connecting to a particular computer.

    DETAILS: If I start clicking around in the black area, portions of the screen will start redrawing and showing up correctly. For example, if I leave a window open in the remote session and click where that window is located on the remote computer, then that window--and only that window--will redraw, and sometimes a portion of that window won't redraw (usually the toolbar). And to clarify: the window only has to be minimized momentarily, so it doesn't seem to be a timeout issue. Clicking or typing in the remote session still causes the remote computer to respond appropriately. Disconnecting from the session and reconnecting restores the whole screen image, as does clicking all over the place in the black image (causing each section to redraw).

    CONFIGURATION: This problem only happens for me when connecting to a particular computer (a W2K Server box configured to allow remote administration) and only with certain client computers. I've tried 7 different client computers with various versions of the Remote Desktop client (the OSes were: Win2K, Server 2003, Server 2008, Windows 7 RC, and three XP machines) and two of them exhibit the problem (one of the XP boxes and the Windows 7 machine). Those same computers can RDP to other computers without problems.

    RESOLUTION ATTEMPTS: I have tried the following:

        1. Disabling the LOCAL screen saver, as mentioned on Technet
        2. Turning off bitmap caching in the client, as mentioned on many forums
        3. Updating to version 6.1 of the Remote Desktop client
        4. Using mRemote (I doubted this would work since it uses MS's code for connecting to RDP servers)
        5. Turning off all video acceleration

    QUESTION: Any ideas on what is causing this?

    Read the article

  • Connecting Snow Leopard 10.6.4 to a Linux shared folder using Samba

    - by Vittorio Vittori
    Hi, I'm trying to connect to a web server running CentOS 5.5, on which I've shared a folder, from a Snow Leopard 10.6.4 client, without success. On CentOS I've started the Samba service and created a Samba user with a password, and then I've tried to connect to the server using smb://10.0.0.7 (the IP of the machine) and entering the username and password I previously created. The server returns the list of shared folders in the Leopard browser, but when I click on the folder I want, it returns this error (translated from Italian):

        Connection failed
        There was an error connecting to "smb://10.0.0.7". Please verify the name or the IP of the server, and try again.

    How can I solve the connection problem?
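
    Since browsing works but the tree connect to the share fails, it's worth confirming the share definition and the Samba account on the CentOS side before blaming the Mac. A minimal sketch of the server-side checks; the share name, path and user name are examples:

        # Example share stanza in /etc/samba/smb.conf:
        #   [webroot]
        #      path = /var/www/html
        #      valid users = vittorio
        #      read only = no

        testparm -s                           # syntax-check smb.conf
        smbpasswd -a vittorio                 # (re)set the Samba password for the user
        service smb restart
        smbclient -L localhost -U vittorio    # list and test shares the same way the Mac will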

    Read the article

  • Allow only certain files to be exposed to the web on Lighttpd?

    - by darkAsPitch
    I just installed it on my Linux desktop, and I only want 1 or 2 files accessible to the outside world. Everything else should only be accessible via http://localhost/, for various privacy/security reasons. It is just a test server and I don't want just anybody accessing my large batch files. How would you go about allowing only certain select files to be reachable from the internet, while making everything else available only via http://localhost/?
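
    lighttpd handles this kind of rule with nested conditionals: deny everything to non-local clients except an explicit whitelist of URLs. A sketch for lighttpd.conf; the two file names are placeholders for whatever should stay public:

        # Anything not coming from the local machine...
        $HTTP["remoteip"] != "127.0.0.1" {
            # ...is denied unless it matches the whitelisted paths.
            $HTTP["url"] !~ "^/(public\.html|files/demo\.zip)$" {
                url.access-deny = ( "" )
            }
        }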

    Read the article

  • Difference between adding MIME types in IIS via Websites vs Local Computer?

    - by Alex Key
    What is the difference between adding MIME types in these 2 different situations? When in IIS 6 manager... Right click on the computer name (local computer) properties mime types Right click on the "Web sites" folder properties http headers mime types I'm guessing that perhaps option 1 adds MIME types for FTP also? However if that were true i'd expect to be able to add MIME types specifically in the properties of FTP (and not just websites). thanks for your help.

    Read the article

  • Mac OS X Server 10.6.6 DNS not responding properly, get a "Truncated, retrying in TCP mode" for subdomain

    - by Eric Arseneau
    If I do an nslookup on youtube.com there is no problem; if I do one for www.youtube.com, it fails. See details below.

        [~] nslookup youtube.com
        Server:     192.168.1.1
        Address:    192.168.1.1#53

        Non-authoritative answer:
        Name:    youtube.com
        Address: 74.125.127.93
        Name:    youtube.com
        Address: 74.125.47.93
        Name:    youtube.com
        Address: 74.125.95.93

        [~] nslookup www.youtube.com
        ;; Truncated, retrying in TCP mode.
        ;; Connection to 192.168.1.1#53(192.168.1.1) for www.youtube.com failed: connection refused.

    If I do the same from a Windows machine it's fine; it's when I do it from a Mac workstation that I get the issue. I have rebooted both server and workstation, and I did a changeip, but nothing is working. Any recommendations?
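
    The symptom (a UDP answer flagged as truncated, then the retry over TCP port 53 refused by 192.168.1.1) points at the DNS forwarder on 192.168.1.1 rather than the Mac itself; dig makes that easy to confirm because it can force TCP and query other servers directly. A sketch of the checks, run from the Mac, using a public resolver (OpenDNS here) purely as an example:

        # UDP lookup through the router, then the same query forced over TCP.
        dig www.youtube.com @192.168.1.1
        dig www.youtube.com @192.168.1.1 +tcp

        # Bypass the router entirely; if this works, point the Mac (or the
        # router's forwarder) at a resolver that accepts TCP fallback.
        dig www.youtube.com @208.67.222.222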

    Read the article

  • Can I completely remove the Windows DNS in favour of BIND9 in an AD network?

    - by Vinícius Ferrão
    I would like to remove the DNS feature from our Windows domain controllers and point the DNS servers to our BIND9 servers. I know it's possible to set up coexistence, but this requires a number of extra Windows DNS servers equal to the number of domain controllers in the network. Active Directory expects the _msdcs zone and other things like _tcp, _udp, etc. The main question is: how do I make BIND9 take care of all this AD-specific data? And with dynamic updates, to make AD even happier. Thanks.

    PS: Making BIND9 point to the Windows DNS servers to resolve the Active Directory-specific zones isn't an option; we already do this...

    EDIT: As of today, I'm running without Windows DNS. I'm writing up a guide on how to do this, and I'll update this topic.
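
    The AD-specific data all lives in ordinary DNS zones (_msdcs.<domain> plus the SRV records under _tcp, _udp, _sites, DomainDnsZones and ForestDnsZones), so BIND can carry it as long as the domain controllers are allowed to register their records dynamically and BIND doesn't reject the underscore names. A heavily simplified named.conf sketch; the domain, file paths and the DC subnet are placeholders, and secure (GSS-TSIG) updates need additional configuration not shown here:

        zone "example.com" {
            type master;
            file "dynamic/example.com.db";
            check-names ignore;              // underscore record names are fine
            allow-update { 10.0.0.0/24; };   // the DCs' subnet; prefer TSIG keys where possible
        };

        zone "_msdcs.example.com" {
            type master;
            file "dynamic/_msdcs.example.com.db";
            check-names ignore;
            allow-update { 10.0.0.0/24; };
        };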

    Read the article

  • Permissions won't cascade more than 1 level

    - by Jovin_
    Running Windows Small Business Server 2011, I have a file structure with a lot of subfolders (sometimes 5-6 levels deep). I have created access groups to grant access to my users, and also deny groups to deny access to others: X Access & X Deny. These allow or deny access to a mapped network drive X:. On the server I put in the groups with Full Control Allow for X Access and Full Control Deny for X Deny; I also tick the box "Apply these permissions to objects and/or containers within this container only" and have ensured that "Apply to:" is "This folder, subfolders and files". But for some reason the permissions will only apply to the next level of folders & files. Example structure:

        X:
            Folder 1
                Folder 1a
            Folder 2
                Folder 2a

    If I apply the permissions to X:, they only reach Folder 1 & 2, not 1a and 2a, and I then need to apply the permissions to those manually. Is this working as intended or am I doing something wrong?
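
    That checkbox is exactly what stops the inheritance: "Apply these permissions to objects and/or containers within this container only" limits each ACE to one level even though "Apply to" says subfolders and files. Clearing it, or setting the ACEs from the command line without that restriction, lets NTFS inheritance flow all the way down. A sketch with icacls, using the group names from the question; test on a copy first, since the reset step replaces explicit permissions below the root:

        rem (OI)(CI) = inherit to files and subfolders; no "this container only" restriction.
        icacls X:\ /grant "X Access":(OI)(CI)F
        icacls X:\ /deny  "X Deny":(OI)(CI)F

        rem Re-apply inheritance to everything already created underneath
        rem (destructive: existing explicit ACEs on children are replaced).
        icacls X:\* /reset /T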

    Read the article
