Search Results

Search found 13748 results on 550 pages for 'split testing'.


  • How to set up a PC which can be booted from Linux AND Windows?

    - by Martin
    Our PC has been running Windows XP up to now. It has become incredibly slow and I'm considering switching to Linux (Ubuntu?!) as a fresh OS. However, there are some applications we rarely use that run only on Windows, and I also want the possibility of easily going back to the old system if, while testing Linux, I find that anything is missing or unavailable. So the idea is to install Linux on a new (second) hard drive and, during the transition period, use the existing Windows XP installation from a virtual machine (converted with Paragon Drive Backup). We have a lot of data on the PC, tens of GBs of photos (managed by Picasa), ... My questions: What would be the best way to set up (partition) the new hard drive? I assume that I cannot access the Linux data from Windows, but I can access (read/write) Windows drives from Linux? Does anyone know good tutorials for this use case? What other things might I have to consider for the Windows-to-Linux transition?
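    On the read/write question, a minimal sketch of mounting a Windows NTFS partition from Linux with ntfs-3g (the device name and mount point below are assumptions for illustration, not taken from the question):

        # create a mount point and mount the Windows partition read/write
        sudo mkdir -p /mnt/windows
        sudo mount -t ntfs-3g /dev/sdb1 /mnt/windows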

    Read the article

  • Understanding ESXi and Memory Usage

    - by John
    Hi, I am currently testing VMware ESXi on a test machine. My host machine has 4 GB of RAM. I have three guests and each is assigned a memory limit of 1 GB (and only 512 MB reserved). The host summary screen shows a memory capacity of 4082.55 MB and a usage of 2828 MB with two guests running. This seems to make sense: a gig for each of the two running VMs plus an overhead for the host. 800 MB seems high, but that is still reasonable. But on the Resource Allocation screen I see a memory capacity of 2356 MB and an available capacity of 596 MB. Under the Configuration tab, Memory link, I see a physical total of 4082.5 MB, System of 531.5 MB and VM of 3551.0 MB. I have only allocated a gig to each VM, and with two VMs running they are taking up almost twice the amount of RAM allocated. Why is this, and why does the Resource Allocation screen short-change me so much?

    Read the article

  • Sharing Windows Folders on a Network... other PCs see but can't access

    - by John
    I'm soooo tired of network setup issues. All I want to do is share a folder and all its sub-folders so other PCs on my network can view and change this remote location. Why is it that setting a directory to "shared" doesn't actually make it usable in any way? The other PC can see the folder but is unable to actually open it and look inside. It seems every time I want to do this I go through some semi-random process of right-clicking the folder and enabling sharing, then looking in the folder properties to add permissions and other sharing... and then I end up with some folders working but others randomly blocking permission on certain files or sub-dirs. I have 5 PCs in my local testing network and I cannot believe it should be this complicated... where is the simple "make this folder work on the network" option?! I have a mixture of XP, Vista & W7 machines, but this seems common to all of them.
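    For what it's worth, a command-line sketch of publishing a folder with both share-level and NTFS permissions on Vista/7 (the folder path and share name are made-up examples; XP's net share does not support /GRANT):

        rem create the share and give Everyone change access at the share level
        net share Projects=C:\Projects /GRANT:Everyone,CHANGE
        rem grant modify rights on the folder and everything beneath it (NTFS ACL)
        icacls C:\Projects /grant Everyone:(OI)(CI)M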

    Read the article

  • How do I purge or empty Windows Explorer's network username and sharename cache?

    - by Abel
    While troubleshooting a Samba vs Windows network issue, I noticed that Windows Explorer remembers login credentials of remote shares, even if you ask it not to. For instance, after accessing a share using \\servername\sharename plus entering username/password and then closing Windows Explorer, adding the same share as a network drive gives the following message, regardless of whether the username is the same or not: The network folder specified is currently mapped using a different user name and password. To connect using a different user name and password, first disconnect any existing mappings to this network share. Using NET USE does not show the share. After restarting the computer, I have no problems accessing the share using different credentials. But restarting just to test other credentials is annoying, especially while troubleshooting. How can I purge this cache, using Windows Vista? Note: using nbtstat -R[R], ipconfig /renew, killing explorer.exe or disabling / re-enabling the network card didn't help.
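    A sketch of the usual command-line attempt to drop cached SMB sessions and stored credentials on Vista (offered as an assumption about what may be worth checking, not a confirmed fix, since NET USE already shows nothing here):

        rem list and remove any mapped connections to the server
        net use
        net use \\servername\sharename /delete
        net use * /delete
        rem stored credentials can also be inspected and removed with cmdkey
        cmdkey /list
        cmdkey /delete:servername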

    Read the article

  • Is email forwarding to the sender's address usually blocked in Mail servers / MTA ?

    - by codecowboy
    I've noticed that email forwarding to an address seems not to work if I send an email from the address to which mail is being forwarded. This happens with both GMail and Fasthosts mail servers. E.g. I send an email to [email protected] from [email protected], [email protected] is set to forward to [email protected], and the email never arrives. I realise this seems logical, but it is a potential cause of confusion when testing email functionality in a web application (for me, anyway ;-). I would just like to know whether this is standard behaviour for all MTA software so I can avoid confusing myself.

    Read the article

  • Windows Media Player 12 not launching from custom program

    - by Supertrolly
    There is a program we use for testing that calls Windows Media Player and plays a media file. The problem is that Windows Media Player fails to load unless you open and close it once before starting the program. After that, the program will open it every time without a hitch, but after a reboot the behaviour is lost and you must do it again. My question is: what volatile setting could Windows Media Player have that would be lost on a reboot? I have tried programs like Regshot to capture changes to the registry that might be deleted on reboot. The code for the program is very straightforward, simply calling Windows Media Player with a parameter specifying the media to play. Using Process Monitor I have determined that it is crashing shortly after the program executes it. I am at a loss on this problem, as I cannot find what, if anything, it is changing in order to run Windows Media Player.

    Read the article

  • RDP add domain users broken

    - by Robuust
    I have 3 servers:

        - a domain controller with DNS services
        - a DHCP/RRAS server
        - a file/random server with files stored on it and nothing special so far

    All servers have static IPs, all are in the same domain (SOFTWARE), RDP is enabled on all 3, and all are running Windows Server 2008 R2. I can connect to the DHCP/RRAS server via RDP, but I cannot connect to the DC or the file server. When I add RDP users (both are domain admins, for testing) to the file server they show up like this: What is happening that I don't see? And additionally, why don't I even get a login screen for RDP? Thanks in advance.
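    As a point of reference, a command-line sketch of granting RDP access by adding a domain account to the local Remote Desktop Users group on the target server (the account name is a made-up example):

        net localgroup "Remote Desktop Users" SOFTWARE\jsmith /add
        rem verify the group membership afterwards
        net localgroup "Remote Desktop Users"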

    Read the article

  • Is it possible to change error messages for users connecting to network printers?

    - by eric.s
    We are cleaning up our print server (Windows XP). To verify which printers are no longer really in use, we have set up some tests, which have left us with 176 questionable printers. We have now set Print access for Everyone to Deny. Testing shows this gives the user a "5 Access is denied." message. We would like to change this message, so that when users call, our work-study students who answer the phone don't interpret it as a rights-management issue and can route the call properly. Is that possible? Is this error number system-wide, or specific to printing errors? If it's not a system-wide error, where might the string for this error be?

    Read the article

  • Context is Hindi when printing line numbers in Word 2007

    - by Lessan Vaezi
    I'm trying to print a Word 2007 document with line numbering turned on. In Word the document looks fine, but when I print it, the line numbers appear in Hindi script. See screenshots here: http://www.lessanvaezi.com/context-is-hindi-when-printing-line-numbers-in-word-2007/ I tried deleting my Normal template, allowing Word to create a new one, and testing with that, with no change. I also tried using different printers. The problem goes away if I choose Arabic instead of Context under Word Options > Advanced > Show document content > Numeral. However, I would like to keep this setting as Context. The question is: why is the default context of my document Hindi script? Is there a way to change this context?

    Read the article

  • Squid proxy server not forwarding traffic

    - by DilbertDave
    I'm trying to set up a web filtering system (DansGuardian) on my home network but am failing at the first hurdle: configuring the Squid proxy server. No matter what I do I cannot seem to configure it properly, and I just receive the "The requested URL could not be retrieved" error page. If I remove the proxy setting in the browser, everything works normally. Ultimately I want to run this on something like a Raspberry Pi, but at the moment I'm testing with a virtual machine (although efforts with a netbook were equally unsuccessful). The VM has a clean installation of Linux Mint 15 and I've installed squid via apt-get. I've followed numerous walkthroughs, but this one (https://help.ubuntu.com/lts/serverguide/squid.html) pretty much sums up the process I've been taking. I'm obviously missing something but cannot figure it out; any assistance appreciated.
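    For comparison, a minimal squid.conf sketch of the kind the Ubuntu guide describes, allowing only clients on the local subnet (the subnet and port are assumptions; squid needs to be restarted after editing):

        http_port 3128
        acl localnet src 192.168.1.0/24
        http_access allow localnet
        http_access deny all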

    Read the article

  • Using Credentials with network scanners

    - by grossmae
    I'm testing out both Tenable's Nessus scanner and eEye's Retina for scanning network devices. I am trying to supply credentials to get deeper, more accurate results, but there seems to be no difference in the results whether I supply the credentials or not. I've read the documentation and it seems I've tried all the logical settings in the credential options. I've submitted usernames and passwords for many different accounts and account types (both SSH credentials and web application credentials) on the devices, along with their respective domain names (where applicable). Is there a good test, for either (or both) scanners, to tell whether these credentials are actually being supplied (if at all) and whether any of them are successfully authenticating?

    Read the article

  • Tunneling HTTPS traffic via a PuTTY/SSH tunnel with SOCKS

    - by ripper234
    I have configured a SOCKS SSH tunnel to a remote proxy and set Firefox to use localhost:<port> as a SOCKS proxy. My intention is to tunnel outgoing HTTP/S connections from my machine via a specific 3rd-party server I own (on AWS). In my testing, HTTP URLs are forwarded properly (e.g. when I access http://jsonip.com/ from my computer I do get the server's IP). However, whenever I try to reach an HTTPS address, I get this error: "The proxy server is refusing connections". How do I debug/fix it? My PuTTY tunnel config is simply (some random source port number + Dynamic checked): P.S. I'm aware I might need to manually accept SSL certificates. The reason I'm doing this is to resolve problems using Gmail as an outbound SMTP service.
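    For reference, the OpenSSH equivalent of the PuTTY dynamic-forwarding setup described above would look roughly like this (the port and hostname are placeholders):

        # open a SOCKS proxy on local port 1080, forwarding traffic through the AWS host
        ssh -N -D 1080 user@my-aws-host.example.com
        # then point the browser's SOCKS proxy at localhost:1080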

    Read the article

  • Behaviour of disabling "Allow non-administrators to receive notifications" GPO

    - by Jaymz
    Hi everyone. As the title suggests, I'm trying to figure out the specific behaviour of the following GPO when disabled:

        Administrative Templates > Windows Components > Allow non-administrators to receive update notifications

    We've just started using WSUS and have added a few machines for testing. At the moment, this is set to Enabled. The problem with this setting is that it seems to allow users to opt out of certain updates if they deselect the checkbox after hitting Custom Install. My main concern with disabling this setting is this: does it stop non-admins from getting the installs deployed to them? My guess is that it will just install them silently at the set scheduled time, suppressing any prompts and ensuring users don't get the opportunity to cancel them (this is what I want). My worry is that non-admin users will never get updates pushed to them unless an admin goes and logs on to their machine (not what I want, and it seems like a silly situation to be in). Thanks in advance, Jaymz.

    Read the article

  • NTFS write speed really slow (<15MB/s)

    - by Zulakis
    I got a new Seagate 4 TB hard drive and formatted it with NTFS using:

        parted /dev/sda
        > mklabel gpt
        > mkpart pri 1 -1
        mkfs.ntfs /dev/sda1

    When copying files or testing write speed with dd, the maximum write speed I can get is about 12 MB/s. The hard drive should be capable of at least 100 MB/s. top shows high CPU usage for the mount.ntfs process. The system has an AMD dual-core CPU. This is the output of parted /dev/sda unit s print:

        Model: ATA ST4000DM000-1F21 (scsi)
        Disk /dev/sda: 7814037168s
        Sector size (logical/physical): 512B/4096B
        Partition Table: gpt

        Number  Start  End          Size         File system  Name  Flags
         1      2048s  7814035455s  7814033408s               pri

    The kernel in use is 3.5.0-23-generic. The ntfs-3g versions I tried are ntfs-3g 2012.1.15AR.1 (the Ubuntu 12.04 default) and the newest version, ntfs-3g 2013.1.13AR.2. When formatted with ext4 I get good write speeds of about 140 MB/s. How can I fix the write speed?
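    For context, the kind of dd write test referred to above might look like the following (the mount point and file name are assumptions; conv=fdatasync makes dd wait for the data to reach the disk so the figure isn't inflated by the page cache):

        dd if=/dev/zero of=/mnt/ntfs/ddtest.bin bs=1M count=1024 conv=fdatasync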

    Read the article

  • Windows 7 - You don't have permissions to save in this folder

    - by James
    Huh? I'm getting this message - "You don't have permissions to save in this folder" - even though I am the only user on this machine, and an administrator. How can I set permissions for myself to do everything, everywhere (including saving, deleting, etc.)? Thanks. Edit: Sorry, forgot to say which folder it was. It is a folder in Program Files, where I save my PHP files for local testing. Sorry if I'm a bit daft with all this, but I've upgraded straight from XP to 7, and having never used Vista, I'm used to being allowed full control.
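    As an illustration only (the folder path is a made-up example, and saving under Program Files may still be affected by UAC), granting yourself full control over a folder and everything beneath it from an elevated command prompt might look like:

        icacls "C:\Program Files\php-tests" /grant %USERNAME%:(OI)(CI)F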

    Read the article

  • Lenovo ThinkPad SL500 and keyboard flex problem

    - by nitbuntu
    I was testing a ThinkPad SL500 system at a local store and found the flexing of the keyboard to be quite a deal breaker for me; it was flexing quite prominently with very little pressure applied. I find it hard to believe that a business-grade laptop should have this problem, as a consumer-grade laptop that I own (Dell Inspiron 1526) has very little keyboard flex and one needs to apply a lot of pressure to notice it. Is this a common issue with the SL500 or SL510 models of ThinkPad laptops? What about the ThinkPad R500? Does it also suffer from similar issues?

    Read the article

  • Best DSL hardware for ADSL Troubleshooting

    - by Jeff Sacksteder
    I have a situation where I need to make the best of a bad DSL situation. The CPE is a black box with no access to DSL diagnostics. My plan is to get some sort of DSL hardware that exposes link-layer state and gives me knobs to tweak. I'd like to be able to mitigate bufferbloat as much as I can while I'm at it. The obvious choice would seem to be a Sangoma card in a Linux system. I have no way of knowing if that will do anything for me without testing it, however. I have no other access to WAN troubleshooting equipment. Are there any other options available to me as a consumer?

    Read the article

  • Can't pipe echo to netcat?

    - by user1641300
    I have the following command:

        echo 'HTTP/1.1 200 OK\r\n' | nc -l -p 8000 -c

    and when I curl localhost:8000 I am not seeing HTTP/1.1 200 .. being printed. I am on Mac OS X with netcat 0.7.1. Any ideas? The full script:

        #!/bin/bash

        trap 'my_exit; exit' SIGINT SIGQUIT

        my_exit()
        {
            echo "you hit Ctrl-C/Ctrl-\, now exiting.."
            # cleanup commands here if any
        }

        if test $# -eq 0 ; then
            echo "Usage: $0 PORT"
            echo ""
            exit 1
        fi

        while true
        do
            echo "HTTP/1.1 200 OK\r\n" | nc -l -p ${1} -c
        done

    and testing with:

        curl localhost:8000

    Read the article

  • Jumbo Frames, ISCSI and ESXi

    - by vlannoob
    I have enabled Jumbo Frames (9000) in ESXi for all my vmNICs, vmKernels, vSwitches, iSCSI bindings etc. - basically anywhere in ESXi where there is an MTU setting, I have put 9000 in it. The ports on the switches (Dell PowerConnects) are all set for Jumbo Frames. I have a Dell MD3200i with 2 controllers, each with 4 ports for iSCSI. Each of these ports is set to Jumbo Frames (9000) as well. So now the questions: Do I need to log into each Windows Server VM I am running, delve into the NIC properties in Device Manager, and manually set Jumbo Frames there as well? What's the best way of testing that Jumbo Frames are indeed working as intended?
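    On the testing question, a common sanity check (offered as a sketch, with a placeholder target IP) is to send large, non-fragmentable pings end to end; 8972 bytes leaves room for the 28 bytes of IP/ICMP headers within a 9000-byte MTU:

        # from the ESXi shell, through the vmkernel/iSCSI interface
        vmkping -d -s 8972 192.168.10.20
        # from inside a Windows guest
        ping -f -l 8972 192.168.10.20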

    Read the article

  • What constitutes valid justification for more IP addresses?

    - by David
    I host a small website with a well known VPS service. They provided me with one IPv4 address upon registering and said additional addresses would require justification. I requested one additional IPv4 address so as to have one for a production environment and one for a testing/QA environment. They said this was unnecessary as I could just use alternative TCP ports for the test environment. I can live with using a non-standard port for non-production hosting, but it got me thinking, what would be valid justification? (I asked them and they didn't want to answer). Is there an industry standard for what counts as "valid" justification for additional IPv4 addresses?

    Read the article

  • Firefox: how to autocomplete password but not username

    - by Tristan
    I'm part of a team testing a web application that needs to log into hundreds of test accounts every day. The password is always the same, but the usernames constantly change. I can save the password without an accompanying username, but then it won't autocomplete when I next visit the site. I am hoping to get Firefox to autocomplete the password field but not the username field. To make things more difficult, we're unable to use any third-party add-ons or software due to bureaucratic restrictions. We're also unable to modify the login page on the server side. Does anyone have any ideas?

    Read the article

  • s3cmd runs from the command line but not from cron

    - by Jonar
    Many have said that the problem is with the environment, but I still can't seem to solve it. BTW, I am using Ubuntu 9.10. Logging in as a user, then sudo -s, this command:

        s3cmd put file s3://bucket

    worked! Now here is the simple script intended for testing:

        #! /bin/bash
        env >/tmp/cronjob.log
        s3cmd put file s3://bucket

    issuing the command crontab -e:

        * * * * * /opt/script 2>&1 | logger

    Then using tail on syslog:

        Dec 3 23:22:01 ubuntu CRON[10795]: (root) CMD (/opt/script 2&1 | logger)

    But verifying with S3Fox Organizer, the file is not uploaded. (I tried changing the shebang to #! /bin/sh (no effect), putting the cron entry in /etc/crontab (no effect), and setting HOME=/home/user (no effect).) What other options are there to try, or other ways to debug this problem? Thanks
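    A sketch of the kind of crontab entry often suggested in this situation, making the environment explicit (the paths and config file location are assumptions for illustration, not taken from the question):

        # crontab -e
        HOME=/home/user
        PATH=/usr/local/bin:/usr/bin:/bin
        * * * * * /usr/bin/s3cmd -c /home/user/.s3cfg put /opt/file s3://bucket >>/tmp/cronjob.log 2>&1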

    Read the article

  • Web-based source control management software [closed]

    - by tom smith
    Hi. Not sure if this is the right place, but hopefully someone might have thoughts on a solution/vendor. I'm starting to spec out a project that will require multiple (50-100) developers to be able to manipulate source files/scripts for a large-scale project. The idea is to have each app go through a dev/review/test process, where users can select (or be assigned) the role they're going to have for the given app. I'm looking for web-based version control, issue tracking, user roles/access, workflow functionality, etc. Ideally, the process will also allow the reviewed/validated app to then be exported to a separate system for testing on the test server/environment. This can be hosted on our servers, or we can go the colocation route. I've checked out Atlassian/CollabNet, but any thoughts you can provide would be appreciated as well. Thanks

    Read the article

  • Can a working Tomcat 6 webapp be turned into a usable .war file?

    - by Bill Cole
    Problem: I have a working webapp on a FreeBSD 8.1 Tomcat 6 test server that I need to move to a production system. The developer who last touched it (and had root on that server) has moved on and isn't helpful. The running app seems to have been deployed from a CVS server that is now unavailable. My thinking is that I would like to find a way to wrap the working webapp into a proper .war so that I can deploy it on a pristine host and (after testing) send the existing system to a very deep bitbucket. But I'm not having any luck finding a way to do that. I'm a sysadmin, not a developer, and I don't work much with Tomcat systems, so I may be (likely am) overlooking something blindingly simple. I gather that I may be able to just tar up the deployed directory and untar it on the new machine, but I have a nagging feeling that there are pitfalls in that.
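    For what it's worth, a minimal sketch of repackaging an already-deployed directory as a .war with the JDK's jar tool (the paths and app name are made-up examples; a .war is essentially a zip of the webapp directory contents):

        cd /usr/local/tomcat6/webapps/myapp
        jar -cvf /tmp/myapp.war .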

    Read the article

  • Behaviour of nginx as proxy

    - by HD
    I'm testing nginx with different configurations to replace an architecture that currently uses squid + Apache. I know that I can use nginx to serve static requests and do load balancing, but I'm interested in one particular setup that I don't clearly understand: I'm using 2 nginx servers (load balanced) with the proxy_pass setting to pass all requests to an Apache server. When a client makes a request to the site, one of the nginx servers processes it and sends it on to the Apache server. Now, how could this behaviour be an improvement to my system? It seems that all requests still pass through Apache, and I don't see any benefit. What happens when 100 simultaneous connections pass through nginx? Do all 100 connections go straight to the Apache server, or is there some kind of internal behaviour that reduces the impact on Apache?
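    For reference, the proxy_pass arrangement described above might be configured roughly like this (the backend address and port are placeholders, not taken from the question):

        # in nginx.conf (http block): forward everything to the Apache backend
        upstream apache_backend {
            server 10.0.0.10:80;
        }

        server {
            listen 80;
            location / {
                proxy_pass http://apache_backend;
            }
        }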

    Read the article
