Search Results

  • How can I tell what user account is being used by a service to access a network share on a Windows 2008 server?

    - by Mike B
    I've got a third-party app/service running on a Windows 2003 SP2 server that is trying to fetch something from a network share on a Windows 2008 box. Both boxes are members of an AD domain. For some reason, the app is complaining about having insufficient permissions to read/write to the store. The app itself doesn't have any special options for acting on the authority of another user account; it just asks for a UNC path. The service is running with a "log on as" setting of Local System account. I'd like to confirm what account it's using when trying to communicate with the network share. Conversely, I'd also like more details on whether and why it's being rejected by the Windows 2008 network share. Are there server-side logs on 2008 that could tell me exactly why a connection attempt to a share was rejected?
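
    A service running as Local System generally authenticates to a remote share as the computer account (DOMAIN\HOSTNAME$), so that is the identity to look for and, if appropriate, grant permissions to. As a hedged sketch, on the 2008 box you could enable logon and file-share auditing and then query the Security log for failures (exact subcategory names and event IDs can vary by OS version):

        rem on the Windows 2008 server, from an elevated prompt
        auditpol /set /subcategory:"Logon" /success:enable /failure:enable
        auditpol /set /subcategory:"File Share" /success:enable /failure:enable

        rem reproduce the failure, then pull recent failed logons (4625)
        rem and share-access events (5140) from the Security log
        wevtutil qe Security /q:"*[System[(EventID=4625 or EventID=5140)]]" /c:10 /f:text /rd:true

    The Account Name field of a 4625 event will show exactly which identity the 2003 box presented.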

  • The dreaded Brightness issue (Fn keys + Max brightness)

    - by Adam
    I'm trying to get some control over the brightness of my Samsung QX411 (integrated Intel and discrete Nvidia, though Ubuntu doesn't see the latter yet; I'll play around with Bumblebee later). Using Fn+up/down only moves the screen brightness between max and one peg down, and back up. If I try to bring the brightness down any more, it just flickers and stays the same. I can lower the brightness in Settings, but that's delicate and gets reverted to max if I open the brightness settings again, or log out. The closest I got was adding acpi_backlight=vendor to a line in /etc/default/grub (source). I could consequently lower the brightness a couple of pegs down to the minimum with Fn+down, but then it's as if the problem got inverted, and I'd get stuck in the bottom tier: I could only increase the brightness by one peg and go back down, and rebooting would revert to max brightness. acpi_osi=, acpi_osi=Linux, acpi_osi=vendor, acpi_osi='!Windows 2012', acpi_backlight=Linux, acpi_backlight='!Windows 2012' don't do anything for me. I've also tried adding echo 2000 > /sys/class/backlight/intel_backlight/brightness to /etc/rc.local, where my max value from cat /sys/class/backlight/intel_backlight/brightness is 4648, which didn't do anything (same result with echo 2000 > /sys/class/backlight/acpi_video0/brightness) (source). Samsung tools also didn't help in this regard. I've spent hours on this, and it's getting quite frustrating. Any help would be greatly appreciated.
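
    For reference, a minimal sketch of the kernel-parameter approach described above (the parameter value is the one already being tested; paths are standard Ubuntu):

        # /etc/default/grub - append the parameter to the existing line
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_backlight=vendor"

        # regenerate the grub config and reboot
        sudo update-grub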

  • What NIS maps are needed for OSX 10.6 to authenticate?

    - by Kyle__
    What NIS maps are necessary for OS X 10.6 to authenticate? I have an ubuntu-server sharing NIS, and from the OS X client, ypcat passwd, ypcat group and (as root) ypcat shadow.byname all work and return the correct info. If I type groups kyle (a user in NIS, but not on the local machine), I get all the correct group information. The only thing that doesn't work is logging in. (And yes, if I point an Ubuntu box at that NIS server, everything authenticates off of it just fine.)

  • front end to linux std mailbox for development purposes

    - by Fabio
    I am actually a software developer, though I've had a fair amount of Linux experience as a user since 1997. I am normally on stackoverflow.com; please excuse me if this question isn't appropriate here. I am working on a web project where we send out emails. I work locally on a Linux box, and when coding I use my local mailboxes to check what's been sent. Emails sent out to valid email addresses are not arriving at my official mailbox; they might be stopped by the providers' mail servers (gmail, yahoo). Now, we are sending out HTML mails too, and I need to check what they look like. Is there a GUI frontend to the standard Linux/BSD mailbox? Or should I install some IMAP/POP server for this? Will such a server get the emails sent to username@localhost? Thanks for any suggestion.
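
    One sketch of the IMAP route, assuming a Debian/Ubuntu box where the local MTA delivers mail for username@localhost to the standard mbox spool (/var/mail/USER); any IMAP client, Thunderbird included, can then render the HTML parts:

        # install a local IMAP server
        sudo apt-get install dovecot-imapd

        # /etc/dovecot/conf.d/10-mail.conf - serve the system mbox spool
        mail_location = mbox:~/mail:INBOX=/var/mail/%u

    Then point the mail client at localhost:143 with your Unix credentials.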

  • OS X Server app will not accept login credentials

    - by user164113
    I cannot log in to the Server app, and I know for sure all the login info is right. Every time I enter my password, the name window shakes to tell me it's invalid. I never post unless I'm out of ideas, and I need this to be up so I can host my splash page tonight. I've spoken with "certified Apple" specialists at some local Mac stores and they are stumped too. Something is going on with the Apache server; that's all I know for sure. I'll be monitoring this closely, so any requests for info, etc. can be filled instantly. Thanks in advance. Here's a few things I've noticed: I can ssh successfully into the server address as both root and admin. I've been fussing with the Workgroup Manager and could use guidance if that's where the problem lies. The console shows this message repeatedly: (org.apache.httpd[1890]) Exited with code: 1. Much appreciated, Harry
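
    Since httpd is exiting with code 1, a reasonable first step is to ask Apache itself why it won't start; as a sketch (log paths can differ between the stock Apache and the Server app's build):

        # validate the apache configuration
        sudo apachectl configtest

        # try a start and watch the error log for the real reason
        sudo apachectl start
        tail -f /var/log/apache2/error_log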

  • Remote Desktop Client Crashes following domain join

    - by Roberto Charlie Ciarleglio
    I recently joined my laptop to our Windows domain, and now the Remote Desktop client crashes when I try to connect to any machine. It works if I run it as administrator, but not ordinarily. The domain join migrated my local profile to the domain profile, which I think is where the problem lies. I'm guessing it's a permission thing, as I had a similar problem with Dropbox and had to delete registry keys and reinstall. I can't figure out how to fix this problem, though. The event viewer shows this:

        Faulting application name: mstsc.exe, version: 6.1.7601.17514, time stamp: 0x4ce7ab44
        Faulting module name: FACredProv2.dll, version: 2.4.95.1, time stamp: 0x4bb8d766
        Exception code: 0xc0000005
        Fault offset: 0x00000000000025b2
        Faulting process id: 0xb24
        Faulting application start time: 0x01cd43fbd3a81fba
        Faulting application path: C:\Windows\System32\mstsc.exe
        Faulting module path: C:\Windows\System32\FACredProv2.dll
        Report Id: 154ee55a-afef-11e1-a443-b8ac6f704c5d

    Any help would be appreciated!
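
    The crash is inside FACredProv2.dll, a third-party credential provider, rather than in mstsc.exe itself. As a hedged diagnostic, you could list the registered credential providers and temporarily disable the one that references that DLL (the {GUID} below is a placeholder; export the key before deleting it):

        rem list registered credential providers
        reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\Credential Providers" /s

        rem back up and remove the entry whose subkey names FACredProv2
        reg export "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\Credential Providers\{GUID}" facredprov2-backup.reg
        reg delete "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\Credential Providers\{GUID}" /f

    If mstsc stops crashing afterwards, reinstalling or updating the software that shipped the provider is the cleaner fix.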

  • Custom dedicated email server combined with Amazon AWS?

    - by Simon
    Hi there. We are considering moving our servers to the Amazon EC2 cloud. The only thing stopping us right now is that their IP ranges are banned by spam blacklists like SORBS. We are considering leaving one dedicated server at our current host, the one we use to send mail (its other duties will move to EC2), so we can send mail from that SMTP server instead of from Amazon. So the idea is to have our sites hosted in EC2 and, when they need to send mail, relay it to our "local" SMTP server. Do you think it's viable? Can you think of a better solution? Thanks in advance, Simon.
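
    This relay setup is viable and common. A minimal sketch, assuming the EC2 instances run Postfix and the dedicated box is configured to accept relay from them (the hostname is a placeholder):

        # /etc/postfix/main.cf on each EC2 instance
        relayhost = [smtp.dedicated.example.com]:25

        sudo postfix reload

    Just make sure the dedicated server only relays for your EC2 addresses, or you will end up on the same blacklists for running an open relay.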

  • Start daemon after specific samba share is mounted

    - by getack
    I have a homebrew headless NAS running 12.04. In it I have a bunch of disks that are presented as a samba share thanks to Greyhole. If I want to do anything to the files within this share, I must do it through Greyhole so that everything is updated properly. Thus, the share must be mounted locally and then accessed from there if I want to work on the files from the local machine. I do this mounting automatically thanks to these instructions. I also have Deluge installed, which takes care of all my torrenting needs. Deluge's default download location is in this share, so that all downloads are immediately available to the rest of the network. Obviously, for everything to work the share must be mounted; otherwise Deluge is going to have a problem downloading to it. The problem is that Deluge seems to start before the shares are mounted when the system boots, so downloading/seeding does not continue automatically after boot. I have to log in and force a manual rescan and start on each torrent; otherwise all the torrents just hang on the error. Is there a way I can make Deluge start after the shares have been properly mounted? I looked into Upstart's emits functionality, but I cannot seem to get it to work properly. Any advice?
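
    On 12.04, mountall emits a mounted event per mount point, so one sketch is an Upstart job (or override) that holds Deluge until the share is up; the mount point, user, and job layout below are assumptions to adapt:

        # /etc/init/deluge.conf - hypothetical job; the key line is "start on"
        description "Deluge daemon, delayed until the Greyhole share is mounted"
        start on mounted MOUNTPOINT=/mnt/greyhole
        stop on runlevel [016]
        exec start-stop-daemon --start --chuid deluge --exec /usr/bin/deluged -- -d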

  • Fedora12, XP and connection sharing via iptables

    - by Paul L
    Just a quick question (I hope) to find out if what I'm trying is even possible. I am trying to share an internet connection, with Fedora 12 as the default gateway and an XP machine hooked up via NIC, using iptables commands as shown in Mark Sobell's book 'A Practical Guide to Fedora and Red Hat Enterprise Linux'. These are the commands as placed in /etc/rc.local:

        iptables -A FORWARD -i eth1 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
        iptables -A FORWARD -j LOG
        iptables -t NAT -A POSTROUTING -o eth1 -j MASQUERADE

    I did flip the in and out parameters to match my NIC configuration (as opposed to the example from the book), but other than that I followed the example. One thing to note is that Sobell did not mention whether this should work with a mix of Linux and XP. One other note (maybe meaningless) is that I do have samba working between the two machines. Thanks for any insights anyone might have. PL
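
    Yes, it is possible, and the client OS doesn't matter to NAT; XP only needs its default gateway and DNS pointed at the Fedora box's LAN address. Two things worth checking in the rules above, assuming eth1 is the internet-facing interface: iptables table names are lowercase, so -t NAT will fail with an error, and kernel forwarding must be enabled:

        # table name must be lowercase
        iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE

        # enable packet forwarding (set in /etc/sysctl.conf to persist)
        sysctl -w net.ipv4.ip_forward=1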

  • HP DL160 vs DL360 for virtualization

    - by Chris
    Hi, we're planning to consolidate our infrastructure (a dozen servers). We'll buy two or three identical new servers that will be set up as a XenServer pool. Load balancing and HA tools will monitor the pool and vMotion VMs in case of failure/overload. I know that the DL100 series of HP servers is cheaper than the DL300 series (in every sense of the word). As we don't need local storage (we have a SAN) and can live with a temporarily down server (provided that XenServer HA tools work as advertised), what are the downsides of going with the DL100 series? Thanks, Chris

  • Make Apache listen on multiple IPs

    - by Enrique Becerra
    Hi, I'm on a big LAN which is behind a proxy/firewall. I'm working on an apache/php/mysql application, which is hosted on a small server beside my workstation. This server is also connected to the LAN and is behind the proxy. The server has a local IP assigned: 10.64.x.x. It also has a public IP assigned (or redirected from within the proxy/firewall), which is 200.41.x.x. I can't access the public IP from the LAN, but I can ping the public IP from outside the building. How should I configure Apache to also listen on the public IP and open port 80 for people accessing from outside the building? It is currently set to Listen 10.64.x.x:80. Thanks a lot in advance,
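
    Apache accepts multiple Listen directives, so a minimal sketch (with the real octets in place of x.x) is:

        # httpd.conf - one Listen per address
        Listen 10.64.x.x:80
        Listen 200.41.x.x:80

        # or simply bind every interface:
        # Listen 80

    Note that if the public address actually lives on the proxy/firewall and is forwarded (DNAT) to the server, the server never owns 200.41.x.x as a local address; in that case Listen 10.64.x.x:80 (or Listen 80) already covers it, and the forwarding rule on the firewall is what needs attention.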

  • RDP and New Accounts

    - by leeand00
    I created a new user account on the domain and added it to the Remote Desktop Users group. I could log in just fine locally, but when I logged in remotely I was basically told that I could not log in from there using that user. I could log in just fine as the administrator or anybody else other than that new account. So I researched it a bit more and found that the Remote Desktop setting on the local machine was on the less restrictive option, so I changed it to "Allow connections only from computers running Remote Desktop with Network Level Authentication (NLA)". When I tried this down at my office, I connected with RDP just fine from another computer. But lo and behold, when I got home and simply tried to connect to the machine, I got a message rejecting the connection. There has to be some kind of in-between setting, or an additional setting that I need to change on the user, that allows me to connect directly via Remote Desktop over the VPN. At the moment I can connect by connecting to another computer on the network and then RDPing from there into my machine, but this is not ideal.
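
    A hedged place to look: NLA requires the client to negotiate CredSSP, which an older or non-domain machine at home may not do over the VPN. You can check whether the host is enforcing NLA, and that the new account really is in the RDP group:

        rem 1 = NLA required, 0 = not required
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v UserAuthentication

        rem confirm the group membership on the target machine
        net localgroup "Remote Desktop Users"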

  • How can I automate FTP downloads based on date without bi-directional syncing?

    - by Bill
    I have a particular FTP-related situation that I'm having trouble finding a solution for. I need an FTP download/syncing application that can operate within the following parameters:

    - It must run under Windows (installing Python to be able to run a script or some such thing is an acceptable solution).
    - It must be able to ignore files before a certain date (I want to start downloading new files, not all the files that exist in this very large FTP directory).
    - I don't want bi-directional syncing (e.g. I don't want changes I make to the local files and directory structure to change the remote FTP server; the FTP server needs to be left completely alone).
    - Automating it in some fashion would be ideal.

    What would you guys suggest? The solutions I'm turning up are all missing the mark in some fashion (e.g. they have bi-directional syncing, or they have no way of starting the syncing today instead of trying to pull down the entire directory).
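
    One sketch that fits all four constraints is a scheduled WinSCP script: "synchronize local" only copies remote-to-local and never touches the server, and a time-constrained file mask skips anything older than the cutoff (host, credentials, paths, and date below are placeholders):

        # sync.txt - run with:  winscp.com /script=sync.txt
        open ftp://user:password@ftp.example.com/
        synchronize local C:\downloads /remote/dir -filemask="*>=2012-06-01"
        exit

    Scheduling that command with Windows Task Scheduler covers the automation requirement.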

  • Amazon EC2 Creating Tunnel with OpenVPN

    - by nocode
    I have followed these instructions: http://aws.amazon.com/articles/0639686206802544. I can ping the VPN endpoints, and I have the corresponding VPC CIDR pointing to the EC2 instance in the route table. Here is my config:

        port 1194
        proto udp
        dev tun
        # Remote peer and network
        remote Elastic_IP
        route 10.0.0.0/16
        # Configure local and remote VPN endpoints
        ifconfig 169.254.255.1 169.254.255.2
        # The pre-shared static key
        secret /etc/openvpn/ovpn.key
        keepalive 10 120
        persist-key
        persist-tun
        log /var/log/openvpn.log
        verb 3

    When I look at my logs, I get this error:

        RESOLVE: Cannot resolve host address: 10.0.0.0/16: Name or service not known
        OpenVPN ROUTE: failed to parse/resolve route for host/network: 10.0.0.0/16

    In VPC1, the CIDR is 172.31.0.0/16, which is targeting the EC2 instance also running OpenVPN. I'm getting the same error from the instance in VPC2 with the corresponding CIDR. Just for testing, I stopped the iptables service. I am running the Amazon Linux AMI image (x64), as specified in the article I linked.
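
    The log lines point at a syntax issue: OpenVPN's route directive takes a network and a netmask as separate arguments rather than CIDR notation, so 10.0.0.0/16 is being parsed as a hostname and fails to resolve. The /16 entries would become:

        # instead of: route 10.0.0.0/16
        route 10.0.0.0 255.255.0.0

        # and for the 172.31.0.0/16 VPC CIDR on the other side:
        route 172.31.0.0 255.255.0.0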

  • How to enable mod_info in Apache?

    - by Amit Nagar
    I went through the Apache guide to enabling mod_info. Per the doc, to configure mod_info, add the following to your httpd.conf file:

        <Location /server-info>
            SetHandler server-info
        </Location>

    You may wish to use mod_access inside the directive to limit access to your server configuration information:

        <Location /server-info>
            SetHandler server-info
            Order deny,allow
            Deny from all
            Allow from yourcompany.com
        </Location>

    Once configured, the server information is obtained by accessing http://your.host.dom/server-info. In my case this link is not giving any info, just an HTTP 404 NOT FOUND error. Is there anything I need to install, such as mod_info.c or something? Is there anything I need to add, an AddModule or something? The error log shows: File does not exist: /usr/local/apache2/htdocs/example1/server-info. I have 3 virtual hosts, one of which is the default and uses example1 as its DocumentRoot. I am not sure where this page (server-info) should be; in the case of server-status, it's working fine.
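
    A hedged reading of that error log line: the request is falling through to the filesystem, which usually means the handler isn't registered, i.e. mod_info isn't actually loaded (in Apache 2.x it's a LoadModule line rather than the old AddModule):

        # httpd.conf - without this, SetHandler server-info is ignored
        # and the request is served from the DocumentRoot, hence the 404
        LoadModule info_module modules/mod_info.so

    If it still 404s after that, try placing the <Location> block inside the <VirtualHost> that actually answers your request, since the default virtual host is the one serving /usr/local/apache2/htdocs/example1.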

  • PHP Composer Not Working On Mac

    - by Richard Knop
    I have installed a Bitnami Mac stack, mainly because I require at least PHP 5.4.7 for my project. However, I have run into an issue with Composer. This is the error I get when I run php composer.phar install --dev:

        Richard-Knops-MacBook-Pro:my-project richardknop$ php composer.phar install --dev
        dyld: Library not loaded: /Applications/MAMP/Library/lib/libiconv.2.dylib
          Referenced from: /opt/local/bin/php
          Reason: Incompatible library version: php requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0
        Trace/BPT trap
        Richard-Knops-MacBook-Pro:my-project richardknop$

    How do I solve it?
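
    The trace itself gives a hedged hint: the php being executed is /opt/local/bin/php, a MacPorts build linked against an old MAMP copy of libiconv, not the Bitnami stack's PHP at all. A sketch of the check and workaround (the Bitnami install path is an assumption; adjust to your version):

        # confirm which php the shell is picking up
        which php        # -> /opt/local/bin/php

        # call the stack's own php explicitly, or put it first in PATH
        /Applications/mampstack-5.4.x-0/php/bin/php composer.phar install --dev
        export PATH=/Applications/mampstack-5.4.x-0/php/bin:$PATH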

  • Dutch ACEs SOA Partner Community Award Celebration

    - by JuergenKress
    When you win, you need to celebrate. This was the line of thinking when I found out that I was part of a group that won the Oracle SOA Community Country Award. Well – thinking about a party is one thing; preparing it and finally having the small party is something completely different. It starts with finding a date that would be suitable for the majority of invited people. As you can imagine, the SOA ACEs and ACE Directors have a busy life that takes them places. Alongside that, they are engaged with customers who want to squeeze every bit of knowledge out of them. So everybody is pretty busy (that's what makes you an ACE). After some deliberation (and checks of international Oracle events, Trip-it, blogs and tweets) a date was chosen. Meeting on a Friday evening for some drinks is probably not a Dutch-only activity, but as some of the ACEs are self-employed, they miss the companies around them to organize such events. Come the day, a turn-out of almost 50% was great – although I expected some more folks. This was mainly due to some illness and work overload. Luckily the mini-party got going, (alcoholic) beverages were consumed, food was appreciated, a decent picture was made (see below) and all had a good chat and hopefully a good time. (Above from left to right: Eric Elzinga, Andreas Chatziantoniou, Mike van Aalst, Edwin Biemond) All in all a nice evening, and certainly a "meeting" which can be repeated. For the full article please visit Andreas's blog. Want to organize a local SOA & BPM community? Let us know; we are more than happy to support you! To receive more information, become a member of the SOA & BPM Partner Community: please register at http://www.oracle.com/goto/emea/soa (OPN account required) Blog Twitter LinkedIn Mix Forum Technorati Tags: Eric Elzinga,Andreas Chatziantoniou,Mike van Aalst,Edwin Biemond,Dutch SOA Community,SOA Community,Oracle,OPN,Jürgen Kress,ACE

  • 127.0.0.1 and vhost not found with localhost Apache on Mac

    - by Charly Palencia
    I had been working with localhost:81 for a long time with vhosts, and all was right. But now I need to work on port 80, so I changed http.conf and http-vhost to use port 80. Now localhost works OK in the browser, but 127.0.0.1 and the vhost cannot find the server. My configuration is:

    - My local machine is Lion OS X
    - MAMP
    - http.conf: ServerName localhost:80
    - http-vhost:

        NameVirtualHost localhost
        <VirtualHost localhost>
            DocumentRoot "/Users/chalien/projects/ownProjects/PHP"
            ServerName example.dev
        </VirtualHost>

    - /private/etc/hosts:

        127.0.0.1 localhost
        255.255.255.255 broadcasthost
        ::1 localhost
        fe80::1%lo0 localhost
        127.0.0.1 example.dev

    - /private/etc/services:

        http 80/udp www www-http # World Wide Web HTTP
        http 80/tcp www www-http # World Wide Web HTTP
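
    A hedged guess at the cause: using the hostname localhost in NameVirtualHost/VirtualHost ties the name-based vhost set to whatever localhost happens to resolve to first (and the hosts file above maps it to both 127.0.0.1 and ::1), so requests arriving as 127.0.0.1 or example.dev never match. The conventional form binds by wildcard and matches on ServerName:

        # http-vhost
        NameVirtualHost *:80

        <VirtualHost *:80>
            DocumentRoot "/Users/chalien/projects/ownProjects/PHP"
            ServerName example.dev
        </VirtualHost>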

  • Windows 2008 R2 File Sharing - 'Access denied' if groups are specified in ACL

    - by John Smith
    I am trying to move our old Windows 2003 file server to Windows 2008 R2. What I have noticed, however, is that entries for groups in the ACL are being ignored. For example, a user is part of a group in Active Directory. If I create a folder and enable full access for this group, then share the folder (and define sharing permissions for this group), users in that group do not get access to it. If I make an entry in the ACL for the user itself, it works perfectly. This even applies to my domain administrator account: if I create a folder and give full control to the local administrators group / domain administrators group, and I physically log on to the server, I still do not get access; I need to explicitly add my name to proceed. I am not sure what the problem is; I tried looking it up on Google to no avail. Any assistance will be greatly appreciated.
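
    Two hedged things to check. First, group membership is stamped into a user's token at logon, so anyone added to a group only picks it up on the file server after logging off and back on (or after their Kerberos tickets are refreshed). Second, for the administrators case specifically, UAC on 2008 R2 filters the Administrators group out of the interactive token, which matches the symptom of being denied while logged on locally. From the affected session you can see what the server actually evaluates:

        rem show the groups present in the current access token
        whoami /groups

        rem flush cached Kerberos tickets so new membership is picked up
        klist purge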

  • Create a Social Community of Trust Along With Your Federal Digital Services Governance

    - by TedMcLaughlan
    The Digital Services Governance Recommendations were recently released, supporting the US Federal Government's Digital Government Strategy Milestone Action #4.2 to establish agency-wide governance structures for developing and delivering digital services.

    Figure 1 - From: "Digital Services Governance Recommendations"

    While extremely important from a policy and procedure perspective within an Agency's information management and communications enterprise, these recommendations only very lightly reference perhaps the most important success enabler - the "Trusted Community" required for ultimate usefulness of the services delivered. By "ultimate usefulness", I mean the collection of public, transparent properties around government information and digital services that include social trust and validation, social reach, expert respect, and comparative, standard measures of relative value. In other words, do the digital services meet expectations of the public, social media ecosystem (people AND machines)?

    A rigid governance framework, controlling by rules, policies and roles the creation and dissemination of digital services, may meet the expectations of direct end-users and most stakeholders - including the agency information stewards and security officers. All others who may share comments about the services, write about them, swap or review extracts, repackage, visualize or otherwise repurpose the output for use in entirely unanticipated, social ways - these "stakeholders" will not be governed, but may observe guidance generated by a "Trusted Community". As recognized members of the trusted community, these stakeholders may ultimately define the right scope and detail of governance that all other users might observe, promoting and refining the usefulness of the government product as the social ecosystem expects.

    So, as part of an agency-centric governance framework, it's advised that a flexible governance model be created for stewarding a "Community of Trust" around the digital services. The first steps follow the approach outlined in the Recommendations:

    Step 1: Gather a Core Team
    In addition to the roles and responsibilities described, perhaps a set of characteristics and responsibilities can be developed for the "Trusted Community Steward/Advocate" - i.e. a person or team who (a) are entirely cognizant of and respected within the external social media communities, and (b) are trusted both within the agency and outside as practical, responsible, non-partisan communicators of useful information. This may seem like a standard Agency PR/Outreach team role - but often an agency or stakeholder subject matter expert with a public, active social persona works even better.

    Step 2: Assess What You Have
    In addition to existing agency or stakeholder decision-making bodies and assets, it's important to take a PR/Marketing view of the social ecosystem. How visible are the services across the social channels utilized by current or desired constituents of your agency? What's the online reputation of your agency and perhaps the service(s)? Is Search Engine Optimization (SEO) a facet of external communications/publishing lifecycles? Who are the public champions, instigators, value-adders for the digital services, or perhaps just influential "communicators" (i.e. with no stake in the game)? You're essentially assessing your market and social presence, and identifying the actors (including your own agency employees) in the existing community of trust.

    Step 3: Determine What You Want
    The evolving Community of Trust will most readily absorb, support and provide feedback regarding "Core Principles" (Element B of the "six essential elements of a digital services governance structure") shared by your Agency, and obviously play a large, though probably very unstructured, part in Element D "Stakeholder Input and Participation". Plan for this, and seek input from the social media community with respect to performance metrics - these should be geared around the outcome and growth of the trusted community's actions. How big and active is this community? What's the influential reach of this community with respect to particular messaging or campaigns generated by the Agency? What's the referral rate TO your digital services, FROM channels owned or operated by members of this community? (This requires governance with respect to content generation inclusive of "markers" or "tags".)

    At this point, while your Agency proceeds with steps 4 ("Build/Validate the Governance Structure") and 5 ("Share, Review, Upgrade"), the Community of Trust might as well just get going, and start adding value and usefulness to the existing conversations and existing data services - loosely though directionally stewarded by your trusted advocate(s).

    Why is this an "Enterprise Architecture" topic? Because it's increasingly apparent that a Public Service "Enterprise" is not wholly contained within Agency facilities, firewalls and job titles - it's also manifested in actual, perceived or representative forms outside the walls, on the social Internet. An Agency's EA model and resulting investments both facilitate and are impacted by the "Social Enterprise". At Oracle, we're very active both within our Enterprise and outside, helping foster social architectures that enable truly useful public services, digital or otherwise.

  • What happens when my domain provider cancels order after domain transfer?

    - by Saifur Rahman Mohsin
    I purchased 2 domains, say xyz.in and abc.com, in October 2012, and I got emails saying they would expire in October 2013. I called my local domain provider and told him I'd like to transfer the domains from Webiq to GoDaddy, to which he said I couldn't unless the domains were active. He asked me to pay for both domains and renew them, and then I could transfer them via the domain panel. When I went to the domain panel I noticed that the order had been made, so I made the transfer, which happened successfully. Just as he mentioned, the remaining period of validity (1 year) for each domain got transferred to GoDaddy as well. Additionally, I added 1 year to both domains via GoDaddy, and GoDaddy also provided an extra free year to both (I paid for the transfer on 11/10/2013 at 9:18 PM MST), so both were stated to be valid till 2016, and that's what a whois lookup showed as well. But now it suddenly shows me that my domains are expiring this year (and whois shows 2015). This is confusing, as I have no idea whom to blame for the missing year. I'm wondering what would have happened if my old domain provider's client who registered my domain had cancelled the order. Since it was no longer under their control, would they still be able to deduct that one year? When I tried submitting a support request to Webiq they replied:

        Your domain "abc.com" has been transferred away from us on 17-11-2013 and the domain "xyz.in" was transferred away from us on 18-01-2014. There are no order cancellation actions placed. If you have any billing related issues kindly contact your parent reseller.

    I need some guidance on explaining what issue might have occurred, or on understanding how this domain control works!

  • nginx static files caching doesn't work

    - by user74344
    Here is my conf file, /usr/local/nginx/sites-available/default:

        server {
            listen 80;
            server_name localhost;

            location / {
                root html;
                index index.php index.html index.htm;
            }

            # redirect server error pages to the static page /50x.html
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root html;
            }

            # serve static files directly
            location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico|swf)$ {
                expires 30d;
            }

    But it doesn't cache static files. How should I fix it? Thanks a lot.
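
    Two hedged checks: first, confirm what the server is actually sending, since a browser reload revalidates regardless of Expires headers; second, tighten the block itself - the dot before the extension list is unescaped, and the location inherits no explicit root (the root html above is scoped to location /):

        # verify the response headers directly
        curl -I http://localhost/style.css

        # tightened static-files block
        location ~* \.(jpg|jpeg|gif|css|png|js|ico|swf)$ {
            root    html;
            expires 30d;
        }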

  • Chrome browser caching

    - by Kyle B.
    I do a lot of development on my local machine and would like to start using Chrome; however, I cannot seem to do a hard refresh (Ctrl+F5) or use any other key combination to get the browser to forcibly refresh all content at http://localhost. I change projects frequently in IIS, and this presents a problem because I see stylesheet and image data from my previous project, with no way to get the page to reload without forcibly dumping all cache data from the settings menu. Is there another key combination I am missing, or is there a place I can (on a site-by-site basis) turn off caching? I prefer not to have to clear out my temporary files in the browser settings as I switch projects frequently. Thanks, Kyle
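
    Two options that may help, though both are workarounds rather than per-site settings: Chrome's developer tools have a "Disable cache" option that applies while the DevTools window is open, and the browser can be launched with its disk cache effectively turned off (flags current as of older Chrome builds; treat as an assumption):

        rem start Chrome with a minimal disk cache (Windows)
        chrome.exe --disk-cache-size=1 --media-cache-size=1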

  • MPI Project Template for VS2010

    If you are developing MS MPI applications with Visual Studio 2010, you are probably tired of following some tedious steps for every new C++ project that you create, similar to the following:

    1. In Solution Explorer, right-click YourProjectName, then click Properties to open the Property Pages dialog box.
    2. Expand Configuration Properties and then under VC++ Directories place the cursor at the beginning of the list that appears in the Include Directories text box and then specify the location of the MS MPI C header files, followed by a semicolon, e.g. C:\Program Files\Microsoft HPC Pack 2008 SDK\Include;
    3. Still under Configuration Properties and under VC++ Directories place the cursor at the beginning of the list that appears in the Library Directories text box and then specify the location of the Microsoft HPC Pack 2008 SDK library file, followed by a semicolon. If you want to build/debug a 32-bit application: C:\Program Files\Microsoft HPC Pack 2008 SDK\Lib\i386; if you want to build/debug a 64-bit application: C:\Program Files\Microsoft HPC Pack 2008 SDK\Lib\amd64;
    4. Under Configuration Properties and then under Linker, select Input and place the cursor at the beginning of the list that appears in the Additional Dependencies text box and then type the name of the MS MPI library, i.e. msmpi.lib;
    5. In the code file, #include "mpi.h"
    6. To debug the MPI project you have just set up, under Configuration Properties select Debugging and then switch the Debugger to launch combo value from Local Windows Debugger to MPI Cluster Debugger.

    Wouldn't it be great if at C++ project creation time you could choose an MPI Project Template that included the steps/configurations above? If you answered "yes", I have good news for you, courtesy of a developer on our team (Qing). Feel free to download the MPI Project Template from the Visual Studio gallery. Comments about this post welcome at the original blog.
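
    For reference, a hedged sketch of what those settings amount to when compiling by hand from a Visual Studio x64 command prompt (paths as in the steps above; the source file name is a placeholder):

        cl /I"C:\Program Files\Microsoft HPC Pack 2008 SDK\Include" hello_mpi.cpp ^
           /link /LIBPATH:"C:\Program Files\Microsoft HPC Pack 2008 SDK\Lib\amd64" msmpi.lib

        rem smoke-test locally with 4 processes
        mpiexec -n 4 hello_mpi.exe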

  • UAC account users can't see their mounted network drives

    - by Daniel
    I wrote a few login batches in Group Policy Management which mount specified devices for specified user groups. The batches work as they should as long as UAC is disabled. My problem is that the UAC-account users can't see their mounted network drives, because the login scripts run in an elevated context. I tried to fix the problem with PsExec (-l), so that the network folders are mapped with limited user rights, but it seems that this won't work. (PsExec is already installed on all computers, so it can run locally.) Has anyone an idea how to fix this problem? I've spent a long time trying to fix it, but I did not find any solutions to THIS problem.
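
    This looks like the classic UAC split-token behavior: drives mapped in the elevated logon-script context are invisible to the user's unelevated session. The documented workaround is the EnableLinkedConnections registry value, which makes mapped drives visible to both halves of the token (deployable via GPO or a one-liner on each machine, followed by a reboot):

        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v EnableLinkedConnections /t REG_DWORD /d 1 /f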
