Search Results

Search found 11953 results on 479 pages for 'functional testing'.

Page 281/479 | < Previous Page | 277 278 279 280 281 282 283 284 285 286 287 288  | Next Page >

  • CSC folder data access AND roaming profiles issues (Vista with Server 2003, then 2008)

    - by Alex Jones
    I'm a junior sysadmin for an IT contractor that helps small local government agencies, like little towns and the like. One of our clients, a public library with roughly 50 staff users, was recently migrated from Server 2003 Standard to Server 2008 R2 Standard in a very short timeframe; our senior employee, the only network engineer, had suddenly put in his two weeks' notice, so management pushed him to do this project before quitting. A bit hasty on management's part? Perhaps. Could we do anything about that? Nope. Do I have to fix this all by myself? Pretty much.

    The network is set up like this:

    a) About 50 staff workstations, all running Vista Business SP2. All staff use MS Outlook, which uses RPC-over-HTTPS ("Outlook Anywhere") for cached Exchange access to an offsite location.
    b) One new (virtualized) Server 2008 R2 Standard instance, running atop a Server 2008 R2 host via Hyper-V. The VM is the domain's DC and also the site's one and only file server. Let's call that VM "NEWBOX".
    c) One old physical Server 2003 Standard server, previously running the same roles. Let's call it "OLDBOX". It's still on the network and accessible, but it's been demoted and its shares have been disabled. No data has been deleted.
    d) Gigabit Ethernet everywhere. The organization has only one domain, and it did not change during the migration.
    e) Most users were set up for a combination of redirected folders + Offline Files, but some older employees who have been with the organization a long time are still on roaming profiles.

    To sum up: the servers in question handle user accounts and files, nothing else (e.g., no TS, no mail, no IIS, etc.). I have two major problems I'm hoping you can help me with:

    1) Even though all domain users have had their redirected folders moved to the new server, and logging in to their workstations and testing confirms that the Documents/Music/whatever folders point to the new paths, it appears some users (and not laptop users, either!) had been working offline from OLDBOX for a long time, and nobody realized it. Here's the ugly implication: a bunch of their data now lives only in their CSC folders, because they can no longer access the share on OLDBOX to finally sync with it. How do I get this data out of those CSC folders and onto NEWBOX?

    2) What's the best way to migrate roaming-profile users to non-roaming ones, without losing vital data like documents, any lingering PSTs, etc.?

    Things I've thought about trying. For problem 1:

    a) Re-enable the documents share on OLDBOX, force an Offline Files sync for ALL domain users, then copy OLDBOX's share data to the equivalent share on NEWBOX. Reinitialize the Offline Files cache for every user. With this: how do I safely force a domain-wide Offline Files sync? Could I lose data by re-enabling the share on OLDBOX and forcing the sync? Afterwards, how can I reinitialize the Offline Files cache for every user without doing it manually, workstation by workstation?

    b) Determine which users have unsynced changes against OLDBOX (again, how?), search each user's CSC folder domain-wide via workstation admin shares, and grab the unsynced data. Reinitialize the Offline Files cache for every user. With this: how can I detect which users have unsynced changes with a script? How can I search each user's CSC folder when the ownership and permissions set on CSC folders are so restrictive? Again, afterwards, how can I reinitialize the Offline Files cache for every user without doing it manually, workstation by workstation?

    c) Manually visit each workstation, copy the contents of the CSC folder, and manually copy that data onto NEWBOX. Reinitialize the Offline Files cache for every user. With this: again, how do I 'break into' the CSC folder and get at its data? As an experiment, I took one workstation's HD offsite, imaged it for safety, and then tried the following on one of our shop PCs after attaching the drive: grant myself full control of the folder (failed), grant myself ownership of the folder (failed), run chkdsk on the whole drive to make sure nothing's messed up (all OK), try to take full control of the entire drive (failed), try to take ownership of the entire drive (failed). MS KB articles and Googling around suggest there's a utility called CSCCMD that's meant for this exact scenario... but it looks like it's available for XP, not Vista, no? Again, afterwards, how can I reinitialize the Offline Files cache for every user without doing it manually, workstation by workstation? (See the sketch after this post.)

    For problem 2:

    a) Figure out which users are on roaming profiles, and where their profiles 'live' on the server. Create new folders for them in the redirected-folders repository, migrate the existing data, and disable the roaming. With this: finding out who's roaming isn't hard. But what's the best way to disable the roaming itself? In AD Users and Computers, or on each user's workstation? Doing it centrally on the server seems more efficient; that said, all of the KB research I've done turns up articles on how to go from local to roaming, not the other way around, so I don't have good documentation on this.

    In closing: we have good backups of NEWBOX and OLDBOX, but not of the workstations themselves, so anything drastic on the client side would need imaging and testing for safety. Thanks for reading along this far! Hopefully you can help me dig us out of this mess.
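    Regarding option c) for problem 1: a rough sketch of one take-ownership-and-copy attempt against the attached, imaged drive (this assumes the imaged disk shows up as E:, you are in an elevated prompt on a Vista/7 machine, and D:\CSC-recovered is a scratch destination - all of those are placeholders):

        :: Take ownership of the offline files cache recursively (E: is the attached drive)
        takeown /F "E:\Windows\CSC" /R /A /D Y

        :: Grant Administrators full control so the files become readable
        icacls "E:\Windows\CSC" /grant Administrators:(OI)(CI)F /T

        :: Copy the data out; /B uses backup-mode semantics, which can read files the ACLs would otherwise block
        robocopy "E:\Windows\CSC" "D:\CSC-recovered" /E /B /R:1 /W:1 /LOG:D:\csc-copy.log

    The recovered tree mirrors the cache's internal server/share layout, so expect to re-sort it by user before copying anything onto NEWBOX.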

    Read the article

  • RDP add domain users broken

    - by Robuust
    I have 3 servers:

    - domain controller with DNS services
    - DHCP/RRAS server
    - file/random server with files stored on it, and nothing special so far

    All servers have static IPs, all servers are in the same domain (SOFTWARE), RDP is enabled on all 3 servers, and all servers are Windows Server 2008 R2. I can connect to the DHCP/RRAS server via RDP, but I cannot connect to the DC or the file server. When I add RDP users (both are domain admins, for testing) to the file server, they show up like this: What is happening that I don't see? And additionally, why don't I even get a login screen for RDP? Thanks in advance.

    Read the article

  • Using Credentials with network scanners

    - by grossmae
    I'm testing out both Tenable's Nessus scanner and eEye's Retina for scanning network devices. I am trying to supply credentials to get deeper, more accurate results; however, there seems to be no difference in the results whether I supply the credentials or not. I've read the documentation and it seems like I've tried all the logical settings in the credential options. I've submitted usernames and passwords for many different accounts and types of accounts (both SSH credentials and Web Application credentials) on the devices, as well as their respective domain names (when applicable). Is there a good test, for either (or both) scanners, to tell whether these credentials are actually being used (if at all) and whether any of them are successfully authenticating?

    Read the article

  • Behaviour of disabling "Allow non-administrators to receive notifications" GPO

    - by Jaymz
    Hi everyone, As the title suggests, I'm trying to figure out the specific behaviour of the following GPO when disabled: Administrative Templates > Windows Components > Allow non-administrators to receive update notifications. We've just started using WSUS and have added a few machines for testing. At the moment, this is set to Enabled. The problem with this setting is that it seems to allow users to opt out of certain updates if they deselect the checkbox after hitting custom install. My main concern with disabling this setting is this: does it stop non-admins from getting the installs deployed to them? My guess would be that it will just install them silently at the set scheduled time, suppressing any prompts and ensuring users don't get the opportunity to cancel them (this is what I want). My worry is that non-admin users will never get updates pushed to them unless an admin goes and logs on to their machine (not what I want, and it seems like a silly situation to be in). Thanks in advance, Jaymz.
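    For reference, a sketch of the client-side registry values involved (assuming the standard WSUS policy keys; worth verifying with rsop.msc on one of the test machines before relying on it):

        :: "Allow non-administrators to receive update notifications" maps to ElevateNonAdmins (1 = enabled, 0 = disabled)
        reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" /v ElevateNonAdmins

        :: Scheduled silent installs are governed separately by AUOptions (4 = auto download and schedule the install)
        reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v AUOptions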

    Read the article

  • NTFS write speed really slow (<15MB/s)

    - by Zulakis
    I got a new Seagate 4 TB hard drive and formatted it with NTFS using:

        parted /dev/sda
        > mklabel gpt
        > mkpart pri 1 -1
        mkfs.ntfs /dev/sda1

    When copying files or testing write speed with dd, the maximum write speed I can get is about 12 MB/s. The hard drive should be capable of at least 100 MB/s. top shows high CPU usage for the mount.ntfs process. The system has an AMD dual-core CPU. This is the output of parted /dev/sda unit s print:

        Model: ATA ST4000DM000-1F21 (scsi)
        Disk /dev/sda: 7814037168s
        Sector size (logical/physical): 512B/4096B
        Partition Table: gpt

        Number  Start  End          Size         File system  Name  Flags
         1      2048s  7814035455s  7814033408s               pri

    The kernel in use is 3.5.0-23-generic. The ntfs-3g versions I tried are ntfs-3g 2012.1.15AR.1 (the Ubuntu 12.04 default) and the newest version, ntfs-3g 2013.1.13AR.2. When the drive is formatted with ext4 I get good write speeds of about 140 MB/s. How can I fix the write speed?
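    For comparison, a quick sequential-write check that forces data to disk before reporting a figure, plus the ntfs-3g big_writes mount option that is often tried when mount.ntfs is CPU-bound (a sketch; /mnt/ntfs is a placeholder mount point):

        # Remount with big_writes to cut per-request FUSE overhead; noatime avoids extra metadata writes
        mount -t ntfs-3g -o big_writes,noatime /dev/sda1 /mnt/ntfs

        # 1 GiB sequential write; conv=fdatasync makes dd flush to disk before printing the speed
        dd if=/dev/zero of=/mnt/ntfs/testfile bs=1M count=1024 conv=fdatasync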

    Read the article

  • Tunneling HTTPS traffic via a PUTTY/SSL tunnel with SOCKS

    - by ripper234
    I have configured a SOCKS ssh tunnel to a remote proxy, and set my Firefox to use localhost:<port> as a SOCKS proxy. My intention is to tunnel outgoing HTTP/S connections from my machine via a specific 3rd-party server I own (on AWS). In my testing, HTTP URLs are forwarded properly (e.g. when I access http://jsonip.com/ from my computer I do get the server's IP). However, whenever I try to reach an HTTPS address, I get this error: The proxy server is refusing connections. How do I debug/fix it? My PUTTY tunnel config is simply (some random source port number + dynamic checked): P.S. I'm aware I might need to manually accept SSL certificates. The reason I'm doing this is to resolve problems using Gmail as an outbound SMTP service.
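    An equivalent tunnel from the command line (a sketch using OpenSSH rather than PuTTY; the port 1080 and host name are placeholders) can help isolate whether the problem is PuTTY's dynamic forwarding or the server side:

        # -D opens a local SOCKS proxy, -N skips the remote shell
        ssh -N -D 1080 user@my-aws-host

        # Test HTTPS through the SOCKS proxy without involving the browser
        curl --socks5-hostname localhost:1080 https://jsonip.com/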

    Read the article

  • Jumbo Frames, ISCSI and ESXi

    - by vlannoob
    I have enabled Jumbo Frames (9000) in ESXi for all my vmNICs, vmKernels, vSwitches, iSCSI bindings etc. - basically, anywhere in ESXi that has an MTU setting, I have put 9000 in it. The ports on the switches (Dell PowerConnects) are all set for Jumbo Frames. I have a Dell MD3200i with 2 controllers, each with 4 ports for iSCSI. Each of these ports is set to Jumbo Frames (9000) as well. So now the questions:

    1. Do I need to log into each Windows Server VM I am running, delve into the NIC properties in Device Manager, and manually set Jumbo Frames there as well?
    2. What's the best way of testing that Jumbo Frames are indeed working as intended?
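    One common end-to-end check is a don't-fragment ping with a payload sized just under the jumbo MTU (a sketch; the target IP is a placeholder). If the large ping fails while a normal ping succeeds, something in the path is not passing jumbo frames:

        # From the ESXi shell: -d sets don't-fragment, -s sets the payload (8972 = 9000 minus 28 bytes of IP/ICMP headers)
        vmkping -d -s 8972 192.168.10.50

    Inside a Windows guest the equivalent is ping -f -l 8972 <target>.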

    Read the article

  • Squid proxy server not forwarding traffic

    - by DilbertDave
    I'm trying to set up a web filtering system (DansGuardian) on my home network but am failing at the first hurdle - configuring the Squid proxy server. No matter what I do I cannot seem to configure it properly, and I just receive the 'Requested URL could not be retrieved' error page. If I remove the proxy setting in the browser, everything works normally. Ultimately I want to run this on something like a Raspberry Pi, but at the moment I'm testing with a virtual machine (although efforts with a netbook were equally unsuccessful). The VM has a clean installation of Linux Mint 15 and I've installed squid via apt-get. I've followed numerous walkthroughs, but this one (https://help.ubuntu.com/lts/serverguide/squid.html) pretty much sums up the process I've been taking. I'm obviously missing something but cannot figure it out - any assistance appreciated.
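    For what it's worth, the most common tripwire on a stock Ubuntu/Mint squid.conf is that the LAN is never allowed before the final deny. A minimal sketch (the 192.168.0.0/24 range and the /etc/squid3/squid.conf path are assumptions - adjust to your subnet and package):

        # /etc/squid3/squid.conf
        acl localnet src 192.168.0.0/24     # your home LAN
        http_access allow localnet
        http_access allow localhost
        http_access deny all                # must come after the allow lines
        http_port 3128

    After restarting the squid service, pointing the browser at the proxy host on port 3128 should work before layering DansGuardian on top.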

    Read the article

  • What constitutes valid justification for more IP addresses?

    - by David
    I host a small website with a well known VPS service. They provided me with one IPv4 address upon registering and said additional addresses would require justification. I requested one additional IPv4 address so as to have one for a production environment and one for a testing/QA environment. They said this was unnecessary as I could just use alternative TCP ports for the test environment. I can live with using a non-standard port for non-production hosting, but it got me thinking, what would be valid justification? (I asked them and they didn't want to answer). Is there an industry standard for what counts as "valid" justification for additional IPv4 addresses?

    Read the article

  • Lenovo Thinkpad SL500 and Keyboard Flex problem.

    - by nitbuntu
    I was testing a Thinkpad SL500 system at a local store and found the flexing of the keyboard to be quite a deal breaker for me; it was flexing quite prominently with very little pressure applied. I find it hard to believe that a business grade laptop should have this problem as a consumer grade laptop that I own (Dell Inspiron 1526) has very little keyboard flex and one needs to apply a lot of pressure to notice it. Is this a common issue with the SL500 or SL510 models of Thinkpad laptops? What about the Thinkpad R500, does this also suffer from similar issues?

    Read the article

  • Best DSL hardware for ADSL Troubleshooting

    - by Jeff Sacksteder
    I have a situation where I need to make the best of a bad DSL situation. The CPE is a black box with no access to DSL diagnostics. My plan is to get some sort of DSL hardware that exposes link-layer state and gives me knobs to tweak. I'd like to be able to mitigate bufferbloat as much as I can while I'm at it. The obvious choice would seem to be a Sangoma card in a Linux system. I have no way of knowing if that will do anything for me without testing it, however. I have no other access to WAN troubleshooting equipment. Are there any other options available to me as a consumer?

    Read the article

  • Is the hosts file ignored in windows if DNS Client service is running?

    - by Mnebuerquo
    I've seen a number of articles about how to edit the hosts file in Windows 7, but they're all about how to open Notepad as administrator, not the actual behavior of the DNS lookups afterward. I've read that the hosts file is ignored in XP SP2 if the DNS Client service is running. I have tried this on my XP machine and it seems to be true. I can see how it is a security danger to have a hosts file that user programs could modify: if they could write to hosts, then any malware could spoof DNS locally with minimal difficulty. I'm trying to use the hosts file for testing stuff on my local network without it going to the live site on the internet. At the same time I want to be able to use DNS for the normal internet. Mostly, though, I just want to understand the rules on the newer Windows systems. Thanks!
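    For testing, a hosts override takes the usual form below (the name and address are placeholders). On Vista/7 the DNS Client service does consult the hosts file, it just caches the results, so flushing the cache after an edit avoids confusion:

        # C:\Windows\System32\drivers\etc\hosts
        192.168.1.50    www.example-live-site.com

    Running ipconfig /flushdns afterwards, then ping www.example-live-site.com, shows quickly whether the override is being honoured.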

    Read the article

  • Windows 7 - You don't have permissions to save in this folder

    - by James
    Huh? I'm getting this message - "You don't have permissions to save in this folder" - even though I am the only user on this machine, and an administrator. How can I set permissions for myself to do everything, everywhere (including saving, deleting, etc.)? Thanks. Edit: Sorry, forgot to say which folder it was. It is a folder in Program Files, where I save my PHP files for local testing. Sorry if I'm a bit daft with all this, but I've upgraded straight from XP to 7, and having never used Vista, I'm used to being allowed to have full control.
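    If the files really need to stay under Program Files, a sketch of granting your own account full control from an elevated command prompt (the folder path and the James account name are placeholders); the simpler route is usually to keep the PHP working folder somewhere under your user profile instead:

        :: Grant the named user full control over the folder and everything beneath it
        icacls "C:\Program Files\MyPHPSite" /grant James:(OI)(CI)F /T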

    Read the article

  • s3cmd run on command line not on cron

    - by Jonar
    Many have said that the problem is with the environment, but I still can't seem to solve this. BTW, I am using Ubuntu 9.10. Logging in as my user, then sudo -s, this command worked:

        s3cmd put file s3://bucket

    Now here is the simple script intended for testing:

        #! /bin/bash
        env >/tmp/cronjob.log
        s3cmd put file s3://bucket

    and the crontab entry (via crontab -e):

        * * * * * /opt/script 2>&1 | logger

    Then tailing the syslog shows:

        Dec 3 23:22:01 ubuntu CRON[10795]: (root) CMD (/opt/script 2&1 | logger)

    But verifying with S3Fox Organizer, the file is not uploaded. (I tried changing the shebang to #! /bin/sh (no effect), putting the cron entry in /etc/crontab (no effect), and setting HOME=/home/user (no effect).) What other options are there to try, or other ways to debug this problem? Thanks
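    One thing worth trying: when the interactive test is done under sudo -s, the configuration s3cmd finds may be /home/user/.s3cfg, while the root crontab runs with a different HOME, so s3cmd silently fails to find any credentials. A sketch of the script with the config path made explicit (the .s3cfg and file paths are placeholders):

        #!/bin/bash
        # Capture the cron environment for comparison with an interactive shell
        env > /tmp/cronjob.log

        # Point s3cmd at its credentials explicitly instead of relying on $HOME/.s3cfg
        /usr/bin/s3cmd -c /home/user/.s3cfg put /opt/file s3://bucket >> /tmp/cronjob.log 2>&1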

    Read the article

  • Can't pipe echo to netcat?

    - by user1641300
    I have the following command:

        echo 'HTTP/1.1 200 OK\r\n' | nc -l -p 8000 -c

    and when I curl localhost:8000 I am not seeing HTTP/1.1 200 .. being printed. I am on Mac OS X with netcat 0.7.1. Any ideas? The full script:

        #!/bin/bash

        trap 'my_exit; exit' SIGINT SIGQUIT

        my_exit()
        {
            echo "you hit Ctrl-C/Ctrl-\, now exiting.."
            # cleanup commands here if any
        }

        if test $# -eq 0 ; then
            echo "Usage: $0 PORT"
            echo ""
            exit 1
        fi

        while true
        do
            echo "HTTP/1.1 200 OK\r\n" | nc -l -p ${1} -c
        done

    and testing with:

        curl localhost:8000
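    A sketch of the same idea using printf, which interprets the \r\n escapes that a plain echo may pass through literally, and which also sends the blank line that terminates the HTTP headers (nc flags differ between GNU netcat 0.7.1 and the BSD nc bundled with OS X, so -l -p 8000 may need to become -l 8000):

        #!/bin/bash
        # Serve one minimal, well-formed HTTP response per connection
        while true; do
            printf 'HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nOK' | nc -l -p 8000
        done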

    Read the article

  • web based source control management software [closed]

    - by tom smith
    Hi. Not sure if this is the right place, but hopefully someone might have thoughts on a solution/vendor. I'm starting to spec out a project that will require multiple (50-100) developers to be able to manipulate source files/scripts for a large-scale project. The idea is to have each app go through a dev/review/test process, where the users can select (or be assigned) the role they're going to have for the given app. I'm looking for web-based version control, issue tracking, user roles/access, workflow functionality, etc. Ideally, the process will also allow the reviewed/validated app to then be exported to a separate system for testing on the test server/environment. This can be hosted on our servers, or we can do the colo process. I've checked out Atlassian/CollabNet, but any thoughts you can provide would be appreciated as well. Thanks

    Read the article

  • Overriding Apache auth directive

    - by Machine
    Hi! I'm trying to allow public access to a method that generates a WSDL file for our API. The rest of the site is behind basic auth protection. Can you guys take a look at the following virtual-host configuration and see why the override does not take place?

        <VirtualHost *:80>
            ServerName xyz.mydomain.com
            DocumentRoot /var/www/dev/public

            <Directory /var/www/dev/public>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
                SetEnv APPLICATION_ENV testing
            </Directory>

            <Location />
                AuthName "XYZ Development Server"
                AuthType Basic
                AuthUserFile /etc/apache2/xyz.passwd
                Require valid-user
            </Location>

            <Location /api/soap/wsdl>
                Satisfy Any
                allow from all
            </Location>
        </VirtualHost>

    Read the article

  • Behaviour of nginx as proxy

    - by HD
    I'm testing nginx with different configurations to replace an architecture built on squid + apache. I know that I can use nginx to manage static requests and do load balancing, but I'm interested in one particular setup that I don't understand clearly: I'm using 2 nginx servers (balanced) with the proxy_pass setting to pass all requests to an Apache server. When one client makes a request to the site, one of the nginx servers processes it and sends it to the Apache server. Now, how could this behaviour be an improvement to my system? It seems that all requests are passing through Apache anyway, and I don't see any benefit at all. What happens when 100 simultaneous connections pass through nginx? Will all 100 connections be passed on to the Apache server, or is there some kind of internal behaviour that limits the impact on Apache?
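    A sketch of the pattern usually behind this layout (addresses, ports and paths are placeholders): nginx answers static requests itself and only proxies dynamic ones, and because nginx buffers responses, a slow client occupies a cheap nginx connection rather than a heavyweight Apache worker:

        upstream apache_backend {
            server 10.0.0.10:8080;
        }

        server {
            listen 80;

            # Static content is served directly by nginx and never reaches Apache
            location /static/ {
                root /var/www;
            }

            # Dynamic requests are proxied; nginx buffers the reply and feeds slow clients itself
            location / {
                proxy_pass http://apache_backend;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }

    With proxy_pass on / and no static offloading, the benefit is mostly the buffering and connection handling, not a reduction in the number of requests Apache sees.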

    Read the article

  • Firefox: how to autocomplete password but not username

    - by Tristan
    I'm part of a team testing a web application that needs to log into hundreds of test accounts every day. The password is always the same, but the usernames constantly change. I can save the password without an accompanying username, but then it won't autocomplete when I next visit the site. I am hoping to get Firefox to autocomplete the password field but not the username field. To make things more difficult, we're unable to use any third-party addons or software thanks to bureaucratic restrictions. We're also unable to modify the login page on the server's side. Does anyone have any ideas?

    Read the article

  • VMWare Workstation Dev Machine Disks: one fast disk or four eco-friendly disks in RAID?

    - by Avi
    I'm building a new dev computer. It will be running a few VMware Workstation virtual machines - a dev machine running VS 2010, a build machine, a version-control machine, a web server for testing, and a "personal" machine running Office etc. I'll be connecting the computer to my stereo, so I'll also be running iTunes (possibly on a dedicated VM), and I want the computer to be a silent one. I'll probably use an Antec P183 case. I was advised on Server Fault to use RAID 10 for performance. RAID 10 uses 4 disks. So, my question is as follows: in terms of heat, noise, reliability, warranty, price, capacity and performance, what would you suggest: a RAID 10 array of 4 eco-friendly disks such as the $94 1TB Western Digital Caviar Green, or one high-performance disk such as the 2TB Western Digital Caviar Black at $280?

    Read the article

  • PGP - MailCloak/Gpg4Win/etc

    - by ericl42
    Hello, I have used Gpg4Win in the past along with FireGPG to have encryption on my emails. I need to roll this out to quite a few more people and was wondering if anyone has products they prefer. The criteria: free, and as user-friendly as possible. I looked at MailCloak, but I have had some issues with it on 1 of the 2 computers I've been testing on. It seems like it might work pretty well as an overall solution, but on my computer I keep getting "Identity cannot be created." and I'm not sure what's going on with that. Thanks for the info.

    Read the article

  • Can a working Tomcat 6 webapp be turned into a usable .war file?

    - by Bill Cole
    Problem: I have a working webapp on a FreeBSD 8.1 Tomcat 6 test server that I need to move to a production system. The developer who last touched it (and had root on that server) has moved on and isn't helpful. The running app seems to have been deployed from a CVS server that is now unavailable. My thinking is that I would like to find a way to wrap the working webapp into a proper .war so that I can deploy it on a pristine host and (after testing) send the existing system to a very deep bitbucket. But I'm not having luck finding a way to do that. I'm a sysadmin not a developer and don't work much with Tomcat systems so I may be (likely am) overlooking something blindingly simple. I gather that I may be able to just tar up the deployed directory and untar it on the new machine, but I have a nagging feeling that there are pitfalls in that.
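    A deployed, exploded webapp can normally be re-packed with the jar tool; a sketch (paths are placeholders), run from inside the exploded directory so that WEB-INF sits at the root of the archive:

        # Re-pack the exploded webapp into a deployable .war
        cd /usr/local/tomcat6/webapps/myapp
        jar -cvf /tmp/myapp.war .

    The usual pitfalls are environment-specific values baked into files under WEB-INF (database URLs, absolute paths) and anything the app writes outside its own directory, so check those against the new host during testing.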

    Read the article

  • backuppc - how to back up remote (over the internet) clients?

    - by Scott
    I am testing out BackupPC, which works great so far backing up Windows clients on a LAN via SMB (no backup client/agent required). However, I have quite a few laptops and desktops in various remote locations - some of which move around. I need some way to have those remote computers create an outgoing connection for backup purposes (Windows XP/7). I know BackupPC supports SMB, rsync and 'tar', but I believe these are all connections going from the server TO the client. So, I either need a way to VPN the client in on a timed basis, or it would be a lot better if the client could somehow connect to the server (ssh?) and initiate its own backup somehow (rsync?). Of course this all needs to be pre-installed by me and require no maintenance by the end user, and no dialogs on their side. What do you think?
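    One common pattern is a client-initiated reverse SSH tunnel, so the laptop dials home on a schedule and BackupPC then reaches it through the forwarded port; a sketch of what the client-side scheduled task could run (host names, account and the 9022 port are placeholders/assumptions):

        # From the remote client: expose the client's own SSH service to the BackupPC server as localhost:9022
        ssh -N -R 9022:localhost:22 backupuser@backuppc.example.com

    BackupPC's rsync-over-ssh transfer can then be pointed at port 9022 on the server for that host, though keeping the tunnel alive and keyed without user interaction is the part that needs the most care.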

    Read the article

  • Why am I getting 'undefined method' exceptions when executing 'run_list add', 'run_list remove' and 'rackspace server delete'?

    - by Peter Groves
    [Originally posted this to the Opscode forum, got no response] I'm testing out a free hosted chef-server account and multiple subcommands are failing with 'Unexpected Errors'. Perhaps my version and the server version are incompatible?

    OS: Ubuntu 12.04 LTS
    Local Chef: 10.12.0 (installed through gem)
    Local Ruby: 1.8.7

    Also, the workstation machine has been manually configured, but the clients I've been experimenting with are launched with the Rackspace plugin (using 'knife rackspace server create…'). The problem commands seem to fail when talking to the hosted chef-server, however, before they ever try to modify the client, so I don't believe that's where the problem exists. The virtual servers that are launched by 'knife rackspace server create' are launched properly, but then deleting them with knife fails. If I include a recipe in the run_list when I create the server, the recipe is properly added to the run_list. If I try to add one later, or remove the one the server was initialized with, those commands fail. Here is the output of a few relevant commands (with stacktraces): https://gist.github.com/7100ada3fd6690113697

    Read the article

  • Apache CPU usage stays at 100% even when there are no requests

    - by Leirith
    Hi, I've been running the Apache HTTP server benchmarking tool (ab) against my new Apache server to test performance. I noticed that with a command like the following:

        ab -n 100000 -c 1000 http://www.mysite.com/

    the CPU is used 100% by the apache2 processes during the testing. When the test concludes, usually with the following error just before the last requests are made:

        apr_poll: The timeout specified has expired (70007)
        Total of 99960 requests completed

    the CPU usage remains at 100%, and it's all being consumed by Apache. I am using the worker MPM and running PHP with mod_fcgid. Any advice as to why this happens, or what can be done to stop it, would be appreciated.
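    One way to see what those workers are stuck on after the test ends is mod_status with extended output (a sketch; the Location path and the allowed address are placeholders, and the module must be enabled, e.g. with a2enmod status on Debian/Ubuntu):

        # Show per-worker state (the scoreboard) at http://server/server-status
        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>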

    Read the article
