Search Results

Search found 10978 results on 440 pages for 'collision testing'.

Page 258/440 | < Previous Page | 254 255 256 257 258 259 260 261 262 263 264 265  | Next Page >

  • What constitutes valid justification for more IP addresses?

    - by David
    I host a small website with a well known VPS service. They provided me with one IPv4 address upon registering and said additional addresses would require justification. I requested one additional IPv4 address so as to have one for a production environment and one for a testing/QA environment. They said this was unnecessary as I could just use alternative TCP ports for the test environment. I can live with using a non-standard port for non-production hosting, but it got me thinking, what would be valid justification? (I asked them and they didn't want to answer). Is there an industry standard for what counts as "valid" justification for additional IPv4 addresses?

    Read the article

  • VMware Workstation Dev Machine Disks: one fast disk or four eco-friendly disks in RAID?

    - by Avi
    I'm building a new dev computer. It will be running a few VMware Workstation virtual machines - a dev machine running VS 2010, a build machine, a version-control machine, a web server for testing, a "personal" machine running Office, etc. I'll be connecting the computer to my stereo, so I'll also be running iTunes (possibly on a dedicated VM), and I want the computer to be a silent one. I'll probably use an Antec P183 case. I was advised on Server Fault to use RAID 10 for performance. RAID 10 uses four disks. So, my question is as follows: in terms of heat, noise, reliability, warranty, price, capacity and performance, what would you suggest: a four-disk RAID 10 array using eco-friendly disks such as the $94 1 TB Western Digital Caviar Green, or one high-performance disk such as the 2 TB Western Digital Caviar Black at $280?

    Read the article

  • Can't pipe echo to netcat?

    - by user1641300
    I have the following command: echo 'HTTP/1.1 200 OK\r\n' | nc -l -p 8000 -c. When I curl localhost:8000, I am not seeing HTTP/1.1 200 .. being printed. I am on Mac OS X with netcat 0.7.1. Any ideas? The full script:

        #!/bin/bash
        trap 'my_exit; exit' SIGINT SIGQUIT
        my_exit()
        {
            echo "you hit Ctrl-C/Ctrl-\, now exiting.."
            # cleanup commands here if any
        }
        if test $# -eq 0 ; then
            echo "Usage: $0 PORT"
            echo ""
            exit 1
        fi
        while true
        do
            echo "HTTP/1.1 200 OK\r\n" | nc -l -p ${1} -c
        done

    I am testing with: curl localhost:8000
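
    An editorial aside, not part of the question: on most shells a bare echo does not expand \r\n, and a valid HTTP response needs a blank line after the headers, so a variant worth trying is sketched below (assuming GNU netcat 0.7.1 as in the question, where -l -p works together):

        # printf emits literal CR/LF bytes, unlike echo without -e;
        # the trailing \r\n\r\n terminates the header block as HTTP requires
        printf 'HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n' | nc -l -p 8000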

    Read the article

  • Can a working Tomcat 6 webapp be turned into a usable .war file?

    - by Bill Cole
    Problem: I have a working webapp on a FreeBSD 8.1 Tomcat 6 test server that I need to move to a production system. The developer who last touched it (and had root on that server) has moved on and isn't helpful. The running app seems to have been deployed from a CVS server that is now unavailable. My thinking is that I would like to find a way to wrap the working webapp into a proper .war so that I can deploy it on a pristine host and (after testing) send the existing system to a very deep bitbucket. But I'm not having luck finding a way to do that. I'm a sysadmin, not a developer, and don't work much with Tomcat systems, so I may be (likely am) overlooking something blindingly simple. I gather that I may be able to just tar up the deployed directory and untar it on the new machine, but I have a nagging feeling that there are pitfalls in that.
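
    For what it's worth, an exploded webapp can usually be repackaged with the JDK's jar tool; a minimal sketch, assuming the app lives under webapps/myapp (a hypothetical name and path):

        # run from inside the deployed application directory
        cd /usr/local/tomcat/webapps/myapp
        jar cvf /tmp/myapp.war .
        # copy myapp.war into the new host's webapps/ directory to deploy it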

    Read the article

  • Solution to Manage and Monitor (Ubuntu) Machines

    - by Elmar Weber
    I'm looking for a tool like Canonical's Landscape (system management and monitoring for Ubuntu) that is open source and free. The goal is to manage a dozen or so KVM machines for private testing purposes. I know of Puppet and Munin or RHQ as separate tools for managing and monitoring, but I'd prefer something integrated. Any tips? Basic requirements would be: system package management and updates (individual selection for each managed node); configuration of basic system services (users, NFS, cron, ideally also Apache); monitoring (charting of system resources: disk, IO, memory, etc.) and alerting, ideally with a default configuration of sensible alert values.

    Read the article

  • Configure an httpd.conf alias/subdirectory to point to another server

    - by azrim
    Hi, I'm running a web server for testing purposes that hosts my domain http://www.domain.com, and it runs perfectly. Below are the server specs: FreeBSD 7.2, MySQL 5.1.33, Apache 2.2.11, PHP 5.2.9. I can define an alias directory in my httpd.conf so that my domain can have subdirectory hosting on the same server, such as http://domain.com/subdomain1, http://domain.com/subdomain2 and so on. All my subdomain1 and subdomain2 directories reside on the same web server, just in different locations. Below is the example block from my httpd.conf for the subdomain1 alias:

        Alias /subdomain1 "/usr/local/www/subdomain1"
        <Directory "/usr/local/www/subdomain1">
            Options +Indexes
            AllowOverride None
            allow from all
        </Directory>

    I'm looking for a way to have the subdomain1 and subdomain2 content served from another server on my LAN while remaining hosted as http://domain.com/subdomain1. I'd really appreciate it if anyone knows how to do this. Thanks,
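
    What the question describes is usually done with a reverse proxy rather than an Alias; a minimal sketch, assuming mod_proxy and mod_proxy_http are loaded and 192.168.1.10 is a hypothetical LAN server holding the content:

        ProxyPass        /subdomain1 http://192.168.1.10/subdomain1
        ProxyPassReverse /subdomain1 http://192.168.1.10/subdomain1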

    Read the article

  • Firefox: how to autocomplete password but not username

    - by Tristan
    I'm part of a team testing a web application that needs to log into hundreds of test accounts every day. The password is always the same, but the usernames constantly change. I can save the password without an accompanying username, but then it won't autocomplete when I next visit the site. I am hoping to get Firefox to autocomplete the password field but not the username field. To make things more difficult, we're unable to use any third-party addons or software thanks to bureaucratic restrictions. We're also unable to modify the login page on the server side. Does anyone have any ideas?

    Read the article

  • Monitor or log directory permission changes?

    - by Myles
    I'm having an issue with a cPanel shared server running CentOS 5 where a few directories under the public_html folder keep getting changed to 777 from 755. The customer says they are not changing it, and I'm wondering if there is a way to monitor these specific directories to find out who or what is changing the permissions. I have looked into using auditctl, and after testing it and changing the permissions myself I don't see anything in the logs, so I'm not sure if I'm doing it right or if it's even possible. Does anybody have any suggestions or ideas on how I could figure out what is changing the permissions? Thanks!!
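
    For reference, a watch rule of the kind the question is attempting might look like this (a sketch, assuming auditd is running; the path is hypothetical, and -p a matches attribute changes such as chmod):

        auditctl -w /home/user/public_html -p a -k permwatch
        # later, query matching events by key:
        ausearch -k permwatch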

    Read the article

  • Maintaining CSS on a pre-built ASP.NET website: how can I run this locally on a home Linux server?

    - by DavidR
    I was in charge of the CSS for a website. I sent my code to a guy who integrates the CSS with the dev site. Later they decided it would be better for me to have a more direct role in dealing with the CSS. I've downloaded everything from the FTP but have no idea how to get this set up. I am running an Apache server on a Fedora Linux install. Is there any way I can set up a local testing server so I can test my changes before FTPing them back to the site?

    Read the article

  • Need an idiot-proof picture resizing program for Windows.

    - by marcusw
    My friend needs to resize some pictures as part of a web publishing job, but he is rather clueless when it comes to computers. I am in charge of teaching him how to do this, but only have Linux (albeit with Wine installed) at my disposal for testing. Could you guys recommend a fast, easy, batch-capable, and hopefully open-source program that will resize pictures to the resolution he wants? It doesn't have to be anything fancy, but it needs to be quick and easy to use. Thanks!
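
    As an aside, batch resizing itself is a one-liner with ImageMagick (which also ships for Windows); a sketch, assuming JPEGs in the current directory and a 1024x768 bounding box:

        # mogrify rewrites the files in place; -resize fits each image
        # within the given box while preserving the aspect ratio
        mogrify -resize 1024x768 *.jpg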

    Read the article

  • Transfer an account from a 'dead' domain

    - by PJC
    So - following from my previous question: How do I stop DFSR replication preventing a Domain Controller from advertising Domain Services?, I lost the FSMO DC, and my only other DC was in an unrecoverable state. I've created a new domain to continue my testing, but now have an issue which I suspect is relevant to any domain suffering a "catastrophe". I have user accounts and client PCs "on the old domain". (Actually 1 client PC and 3 accounts) I can still sign into the client PC as any of those users on the "dead" domain, because that is cached. There are (thankfully) no encrypted files in the "old" domain. What I would now like to do is migrate the full content (files, preferences, etc) from the "dead" domain to the new "live" domain for any/all user accounts, for the "old" PC. Is there anything out there which can assist me in doing so?

    Read the article

  • Oracle redo log performance degradation when inserting

    - by Aldarund
    I have an Oracle 11g database that I'm testing for inserts. The database is running in NOARCHIVELOG mode. I have three redo logs configured, 2 GB each. I'm trying to insert data into a test table. At first it goes fine, at 15k inserts/second; I make a commit after every 200 inserts. But after about 1.3M inserted records it becomes really slow, about 1-2k inserts/second. As I noticed in the resource explorer, at this point all the redo logs have filled, and from that point on the inserts are slow. So my question is: why does it become so slow once the redo logs fill, even though I commit every 200 records? And how can this situation be fixed (other than turning off logging completely for inserts)?
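
    One experiment this setup invites (my assumption, not part of the question) is adding larger redo log groups so the database cycles through them less often; a sketch, with hypothetical file paths:

        sqlplus / as sysdba <<'EOF'
        ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/redo04.log') SIZE 4G;
        ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/redo05.log') SIZE 4G;
        EOF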

    Read the article

  • PDAnet on Android: IP on PC is not the public IP. Where does the NAT take place, PDAnet or Verizon?

    - by lcbrevard
    When using PDAnet on a PC (Win7 Ultimate) to USB-tether a Motorola Droid on Verizon 3G, the IP address of the PC appears to be public - 64.245.171.115 (64-245-171-115.pools.spcsdns.net) - but connections show as coming from another public IP - 97.14.69.212 (212-sub-97.14.69.myvzw.com). Someone is performing Network Address Translation, either PDAnet or the Verizon 3G network. Can someone tell me who is doing the NAT? Is there any possibility of setting up port forwarding, such that connections to the public IP 97.14.69.212 (212-sub-97.14.69.myvzw.com) are forwarded to the PC? We are testing a network protocol that requires either a true public IP or forwarding a range of ports from the public Internet to the system on which the software runs (actually Linux hosted by VMware Player or Workstation on a PC running Windows).

    Read the article

  • Dealing with LDAP failure when using it for PAM/NSS?

    - by Insyte
    I use a redundant pair of OpenLDAP servers for PAM auth and directory services via NSS. It's been 100% reliable so far, but nothing runs flawlessly forever. What steps should I take now so I have a fighting chance of recovering from a failure of the LDAP server(s)? In my informal testing, it appears that even already-authenticated shells are largely useless, as all username/uid lookups hang until the directory server comes back. So far I've come up with only two things: Do not use NSS-LDAP and PAM-LDAP on the LDAP servers themselves. Create a root-level account on all boxes that only accepts publickey authentication from our local subnet, and protect that key well. I'm not sure how much good the latter would do me, as once I'm logged in I suspect I wouldn't be able to accomplish anything, since all the userid lookups would be hanging. Any other suggestions?
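
    The asker's own two steps might look roughly like this (a sketch; the subnet is hypothetical):

        # /etc/nsswitch.conf on the LDAP servers themselves: local files only,
        # so lookups never depend on the directory they serve
        passwd: files
        group:  files
        shadow: files

        # /etc/ssh/sshd_config: key-only root login from the local subnet
        Match Address 192.168.0.0/24
            PermitRootLogin without-password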

    Read the article

  • What are good and bad jitter times for a LAN?

    - by garyb32234234
    I've just run jperf (a frontend to iperf) on our network between 2 workstations; it recorded jitter between 0.033 ms and 0.048 ms. Is this good or bad? Are there more variables I would need to consider to make the decision? EDIT: TCP/IP Ethernet LAN, 43 PCs, 1 server, 100 Mbit main switch, various small 8-port switches; the test was done using UDP; it's a Windows domain. I want to install a few VoIP softphones on the workstations and see how many I can run that work reliably, so I'm testing a few different workstations around the network to see where the best-quality network paths are. I will also change some equipment if I identify bad connections.
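
    For context, iperf only reports jitter in UDP mode; a sketch of the measurement, assuming iperf 2.x on both ends and a hypothetical host name:

        iperf -s -u                           # on the server: listen for UDP
        iperf -c server-host -u -b 1M -t 30   # on the client: 1 Mbit/s of UDP for 30 s
        # the jitter column of the resulting report is what jperf charts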

    Read the article

  • OpenVZ kernel panic

    - by GtoXic
    I recently installed OpenVZ on my VMware box (to do some testing) and I get the following: https://www.dropbox.com/s/p38btkv5j84bvsh/Capture.JPG The GRUB config is as follows:

        # grub.conf generated by anaconda
        #
        # Note that you do not have to rerun grub after making changes to this file
        # NOTICE:  You have a /boot partition.  This means that
        #          all kernel and initrd paths are relative to /boot/, eg.
        #          root (hd0,0)
        #          kernel /vmlinuz-version ro root=/dev/VolGroup00/LogVol00
        #          initrd /initrd-version.img
        #boot=/dev/sda
        default=0
        timeout=5
        splashimage=(hd0,0)/grub/splash.xpm.gz
        hiddenmenu
        title OpenVZ (2.6.32-042stab057.1)
                root (hd0,0)
                kernel /vmlinuz-2.6.32-042stab057.1 ro root=/dev/VolGroup00/LogVol00 sysfs.deprecated=1
                initrd /initrd-2.6.32-042stab057.1.img
        title CentOS (2.6.18-238.el5)
                root (hd0,0)
                kernel /vmlinuz-2.6.18-238.el5 ro root=/dev/VolGroup00/LogVol00
                initrd /initrd-2.6.18-238.el5.img

    Read the article

  • nginx static file buffer

    - by Philip
    I have an NFS share to which several frontend servers are connected, making the files stored on it available for HTTP downloads. It looks like I have problems with the way Apache is serving the files; there seems to be a very small buffer, or no buffer at all, which results in a lot of disk seeks. I did some testing with loading the whole requested file into memory at once and serving it to the client from memory. With this technique I need fewer disk seeks per download stream. Since I don't want to implement this myself for production use, I thought I could maybe use nginx for it, because the documentation says that it uses buffers for static file serving. Is it possible to increase the buffer size to a few MB, and if so, which config parameter do I have to change? Does anyone have experience with large buffers for static file serving? Is there a better way to reduce disk seeks?
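
    A sketch of the nginx directives usually involved (the location and values are illustrative assumptions, not tuned recommendations):

        location /downloads/ {
            sendfile       off;    # read via userland buffers instead of kernel sendfile
            output_buffers 2 4m;   # two 4 MB buffers per connection for disk reads
        }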

    Read the article

  • How to properly start gvfs without gnome?

    - by 9000
    I have a Debian testing box with Xfce (no GNOME, no Nautilus). It has all the gvfs-related stuff installed, including all backends and the FUSE interface. But any attempt to gvfs-mount anything (like sftp://... or smb://...) fails with "error opening file: Operation not supported", and gigolo shows only 'unix device (file)' in the list of supported protocols. My ~/.gvfs has rwx permissions, and I'm a member of the fuse group; other FUSE-related stuff works for me. What do I do? Where should I look?
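
    A hedged place to start (my assumption, not a confirmed diagnosis): gvfs registers its backends on the D-Bus session bus, which a bare Xfce session may not have started. A quick check, assuming the Debian daemon path:

        echo $DBUS_SESSION_BUS_ADDRESS   # empty means no session bus is running
        eval $(dbus-launch --sh-syntax)  # start one for this shell if needed
        /usr/lib/gvfs/gvfsd &            # main gvfs daemon (path varies by distro)
        gvfs-mount sftp://user@host/     # retry the mount (hypothetical host)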

    Read the article

  • FTP Active vs passive mode

    - by Dan
    I have an FTP server, and while doing testing I found an odd issue that I don't understand. I send a RETR command for the file "/Folder1/file.txt" and it works fine. Then I send a RETR command for the file "/Folder1/SubFolder1/file.txt" and it times out transferring the data to the client. This was in active mode. When I switch to passive mode it works fine. I understand the difference between the two modes, but what I don't understand is why it worked for one file in active mode but not the other. I tried it a dozen times and still got the same results. Any ideas? Thanks!

    Read the article

  • Windows 7 x64 Hard Freezing (again)

    - by Lanissum
    A while ago, my computer was randomly freezing a few minutes after booting, and I ended up replacing the CPU and mobo after testing the RAM and hard drive; I also couldn't find anything wrong with the video card. After replacing the presumably faulty hardware, everything worked fine for about a month and a half. All of a sudden, my computer is randomly freezing a few minutes after loading up any intensive application (games, mostly). Most of the time it just freezes on the current frame until I hard-reset, although once it printed a BSOD message stating that dxgmms1.sys was to blame. The only difference between these two episodes I can think of is that I can do word processing/internet/work without issue now, as opposed to the near uselessness my computer was rendered last time. For those of you who want to know, I tested my memory with memtest86 (for 64-bit machines). I can't figure out what could have started this latest round of issues; the event logger just states that a kernel-power event has occurred (like last time), but I think that's just a generic "this machine has rebooted after a sudden shutdown" message.

    Read the article

  • slow disk writes between host and guest

    - by Jure1873
    I've got Ubuntu (server kernel) on an AMD X4 with 4 GB RAM and 2x Seagate SATA 1 TB disks, for testing virtual machines, and the write performance is very slow. The two disks are in a software RAID 1 array, with one small ext3 boot partition, a 10 GB system partition, and the rest an XFS partition (about 980 GB) for data (virtual machines). If I'm copying files from the virtual machine to the host with rsync or scp, the copy frequently stalls or goes at about 1 MB/s. What's wrong? I've tried disabling barriers on XFS and increasing logbufs and allocsize, but it seems nothing helps. The strange thing is that await (for example, during copying) for sda is usually under 100, while for sdb it is around 400. Any ideas on what could be wrong, or what I could do to improve this setup?
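
    The barrier and log-buffer experiments the asker mentions would look something like this (a sketch; the mount point and device are hypothetical):

        # remount the data partition without write barriers and with more log buffers
        mount -o remount,nobarrier,logbufs=8 /var/lib/vms
        # or persistently via /etc/fstab:
        # /dev/md2  /var/lib/vms  xfs  nobarrier,logbufs=8,allocsize=64m  0 0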

    Read the article

  • SQLIO help decipher output

    - by SQL Learner
    When load-testing a SQL Server box using the following (the test file is 25 GB):

        sqlio -kW -t8 -s360 -o8 -frandom -b8 -BH -LS g:\testfile.dat > result.txt
        sqlio -kW -t8 -s360 -o8 -frandom -b64 -BH -LS g:\testfile.dat >> result.txt
        sqlio -kW -t8 -s360 -o8 -frandom -b128 -BH -LS g:\testfile.dat >> result.txt
        sqlio -kW -t8 -s360 -o8 -frandom -b256 -BH -LS g:\testfile.dat >> result.txt

    Can anyone help me decipher the output? I do not understand the latency min and average values. What do these numbers mean?

        IOs/sec: 10968.80
        MBs/sec: 685.55
        latency metrics:
        Min_Latency(ms): 1
        Avg_Latency(ms): 5
        Max_Latency(ms): 21
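
    As a sanity check on the sample shown: 10968.80 IOs/sec x 64 KB per IO / 1024 = 685.55 MBs/sec, so that block comes from the -b64 run; the latency metrics are simply the minimum, average and maximum time in milliseconds that an individual IO took to complete during the 360-second run.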

    Read the article

  • -w test on OS X gives command not found error

    - by RobV
    I'm writing a bash script which I'm testing on OS X, though it will ultimately run on a standard Linux environment, and I'm running into a weird error. I have tests like this in my script:

        if [ ! -w $BP ]; then
            echo "'$1' not writable"
            exit 1
        fi

    Which seems pretty sane to me and works fine under Linux, but when trying to test on OS X I get the following error message:

        startSvr.sh: line 135: [: missing `]'
        startSvr.sh: line 135: -w: command not found

    So is this a case of OS X not supporting the -w test, or is there some other reason this isn't working for me, e.g. the environment?
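
    A hedged guess worth testing (not a confirmed diagnosis): an unset $BP or stray carriage returns from cross-platform editing can both produce exactly these messages. A defensive variant:

        if [ ! -w "$BP" ]; then      # quoting guards against an empty or spaced $BP
            echo "'$1' not writable"
            exit 1
        fi
        # check for CR/LF line endings if the script was edited on another platform:
        grep -c $'\r' startSvr.sh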

    Read the article

  • fastcgi-mono-server with Nginx is much slower than xsp4

    - by marxin
    We started testing our MVC4 app on the xsp4 server compiled with Mono 3.0.3; the speed was sufficient, and we decided to set up a production fastcgi-mono-server4 (version 2.11.0.0) with nginx (1.2.6-r1). A single request that loads some JSON took ~200 ms on xsp4, but nginx serves the same request in about 1.2 s, and I am wondering where such a slowdown could come from. I followed the nginx configuration guide at http://www.mono-project.com/FastCGI_Nginx, and fastcgi-mono-server4 uses a socket for listening to nginx. Do you have any ideas on how to log some timestamps that will help me? Thanks
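
    For reference, the pairing described usually reduces to a location block along these lines (a sketch based on the linked guide; the port and root are assumptions):

        location / {
            root          /var/www/app;
            fastcgi_index Default.aspx;
            fastcgi_pass  127.0.0.1:9000;
            include       /etc/nginx/fastcgi_params;
        }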

    Read the article

  • Are there HP InfiniHost III Ex drivers available for Windows?

    - by Matt
    I picked up some InfiniBand cards off eBay for development/testing purposes. I was hoping there would be some drivers available under Windows 7, but the cards don't seem to be recognised by the OFED software, which is where I'd expect the Windows drivers to come from. They are, however, immediately picked up by Ubuntu, and drivers are loaded. Are these cards supported under Windows 7 at all? mstflint under Linux reports:

        Image type:      Failsafe
        FW Version:      4.8.930
        I.S. Version:    1
        Device ID:       25208
        Chip Revision:   A0
        Description:     Node             Port1            Port2            Sys image
        GUIDs:           001a4bffff0c9374 001a4bffff0c9375 001a4bffff0c9376 001a4bffff0c9377
        Board ID:        (HP_0060000001)
        VSD:
        PSID:            HP_0060000001

    From what I can tell, they are the following: HP IB 4X DDR PCI-e dual-port HCA (HP part number 409376-B21)

    Read the article
