Search Results

Search found 15906 results on 637 pages for 'scott and the dev team'.


  • Firefox: how to autocomplete password but not username

    - by Tristan
    I'm part of a team testing a web application that needs to log into hundreds of test accounts every day. The password is always the same, but the usernames constantly change. I can save the password without an accompanying username, but then it won't autocomplete when I next visit the site. I am hoping to get Firefox to autocomplete the password field but not the username field. To make things more difficult, we're unable to use any third-party add-ons or software thanks to bureaucratic restrictions. We're also unable to modify the login page on the server's side. Does anyone have any ideas?

    Read the article

  • Crontab - stop sending mail, special case ||

    - by 2ge
    Hi all, I need to put a small command into my crontab that checks whether the lighttpd web server is running, because for some reason it hangs up sometimes. So I have this: * * * * * root /bin/pgrep lighttpd || /usr/local/etc/rc.d/lighttpd restart >/dev/null 2>&1 The problem is that this sends me mail every minute; the mail contains the PID of the running lighttpd. For other crontab jobs the redirection works, so I assume the "||" is causing the problem. Maybe it would be better to rewrite the crontab job to use the exit status of pgrep so I can avoid the "||". I am using FreeBSD. Thanks for any help; for now I have disabled this job.
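
    The redirection in that line binds only to the restart command, so pgrep's stdout (the matched PID) still goes to cron's mailer. A minimal sketch of one fix, silencing pgrep as well (same /etc/crontab format as in the question):

        # redirect both commands so cron has nothing to mail
        * * * * * root /bin/pgrep lighttpd >/dev/null 2>&1 || /usr/local/etc/rc.d/lighttpd restart >/dev/null 2>&1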

    Read the article

  • PHP Runs Very Slow on IIS7. Need Help optimizing our config

    - by Kendor
    I'm running a PHP-based web app on our Windows 2008 cloud-based server. The app runs fine outside our environment (e.g. on a different IIS server) but is VERY slow in ours. Based on googling, this is a relatively common situation. I installed PHP and MySQL via the IIS web deployment method... Here's our setup: Windows 2008 Server Enterprise SP2 (32-bit); Microsoft-IIS/7.0; MySQL client version: mysqlnd 5.0.8-dev - 20102224 $Revision: 321634 $; PHP extension: mysqli; Update for IIS 7.0 FastCGI; Windows Cache Extension 1.1 for PHP 5.3. I had read elsewhere that IPv6 might be an issue, so I turned it off on the network adapter. The app is using localhost as its connection. Be easy on me, as I'm a bit green about some of these components... Also, rewriting or modifying the PHP app is NOT an option. I'm reasonably SURE that our config is the issue.
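
    One frequent culprit on Windows is "localhost" resolving to the IPv6 address ::1 before 127.0.0.1, so every MySQL connection waits out an IPv6 connect timeout even with IPv6 disabled on the adapter. A minimal sketch of the hosts-file workaround, assuming the app connects to "localhost" and can't be changed:

        # C:\Windows\System32\drivers\etc\hosts
        127.0.0.1    localhost
        # ::1        localhost    <- comment out the IPv6 mapping if it is present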

    Read the article

  • SAMBA and Linux ACLs -- "Permission denied" on write to share but file written nevertheless

    - by MCH
    I set up a writable share directory "/home/net/share" with an ACL like this: sudo mkdir -p "/home/net/share" sudo setfacl -m "u:localuser:rwx,u:remoteuser:rwx,g:users:rwx" "/home/net/share" My /etc/samba/smb.conf looks like this: [global] workgroup = w server string = server security = user load printers = no log file = /var/log/samba/%m.log max log size = 50 dns proxy = no printing = bsd printcap name = /dev/null disable spoolss = yes encrypt passwords = true invalid users = nobody root follow symlinks = yes wide links = yes [share] comment = Writable by localuser and remoteuser path = /home/net/share valid users = remoteuser read only = no public = no printable = no Locally, localuser and remoteuser have user accounts and smbpasswds and can both read, create and delete files in /home/net/share. But when I log on from a different machine (like this: sudo mount -t cifs //server/share mountpoint/ -o username=remoteuser ), I get "Permission denied" both when trying to create directories and files; oddly, though, it does create files (not directories!) despite these messages! How can I get this working?
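
    Two things worth checking: that every directory above the share is traversable by remoteuser, and that a default ACL exists so entries created through Samba inherit the rwx grants (also note that valid users only lists remoteuser, so localuser will be refused over SMB despite the comment). A minimal sketch, using the paths and users from the question:

        namei -m /home/net/share    # show the permissions on each path component
        sudo setfacl -d -m "u:localuser:rwx,u:remoteuser:rwx,g:users:rwx" /home/net/share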

    Read the article

  • What would cause a query run from SSMS on the local box to be slower than from a remote box

    - by Racter
    When I run a simple query such as "Select Column1, Column2 from Table A" from within SSMS running on my production SQL Server, the results seem to take extremely long (45 min). If I run the same query from my dev system's SSMS connecting to the production SQL Server, the results return within a few seconds (<60 sec). One thing I have noticed is that if the system was just rebooted, performance is good for a while. It is hard to put a number on that time: I have had it start running slow very soon after a reboot, but at most it performed well for 20 min before acting up again. Also, just restarting the SQL service does not resolve the issue or provide a temporary performance boost. Specs for the server are: Windows Server 2003, Enterprise Edition, SP2; 4 x Intel Xeon 3.6GHz; 6GB system memory; active/active cluster; SQL Server 2005 SP2 (9.0.3239).

    Read the article

  • No file sharing between two Server 2008 R2 machines

    - by ProfKaos
    I have just replaced XP with Server 2008 R2 on my test server, and I have been running 2008 R2 on my dev laptop. When my server was still XP, file sharing just worked, but now it just doesn't. I've enabled everything I can find about sharing, and I can ping the server by machine name, but if I try to access a share, I get asked for a password. The password dialog assumes a domain for the user, but neither my laptop admin user nor my server admin user can get past this login. What am I doing wrong?
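
    Since the machines are in a workgroup rather than a domain, the credentials have to name the machine the account lives on, which is what the pre-filled domain in that dialog gets wrong. A minimal sketch from the laptop (SERVERNAME and the share name are hypothetical):

        rem qualify the user with the server's own machine name
        net use \\SERVERNAME\share /user:SERVERNAME\Administrator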

    Read the article

  • How do I start mysqld with options

    - by xiankai
    I need to start up mysqld with command-line options as described here: http://dev.mysql.com/doc/refman/5.1/en/server-options.html#option_mysqld_skip-grant-tables I normally do sudo service mysqld start, but passing the option as sudo service mysqld start --skip-grant-tables does not seem to work. Alternatively, I have tried starting it as a daemon, sudo mysqld_safe --skip-grant-tables & But it seems to terminate too soon: 131101 04:59:57 mysqld_safe Logging to '/var/lib/mysql/vagrant.example.com.err'. 131101 04:59:57 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql 131101 05:00:03 mysqld_safe mysqld from pid file /var/lib/mysql/vagrant.example.com.pid ended My last resort would be specifying the option in /etc/my.cnf instead, but is there any way to do it via the command line?
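
    Any mysqld long option can also be placed in the [mysqld] section of my.cnf without the leading dashes, which is equivalent to passing it on the command line. A minimal sketch (remove it again as soon as you're done, since it disables all privilege checks):

        # /etc/my.cnf
        [mysqld]
        skip-grant-tables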

    Read the article

  • nginx proxy pass redirects ignore port

    - by Paul
    So I'm setting up a virtual path pointing at a node.js app in my nginx conf. The relevant section looks like this: location /app { rewrite /app/(.*) /$1 break; proxy_pass http://localhost:3000; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } This works great, except when my node.js app (an Express app) issues a redirect. As an example, the dev box runs nginx on port 8080, so the URL to the root of the node app looks like: http://localhost:8080/app When I trigger a redirect to '/app' from node, the actual redirect goes to: http://localhost/app
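
    A likely cause: with proxy_redirect off, Express builds its absolute redirect URL from the Host header, and nginx's $host variable is the host name with the port stripped. A minimal sketch of the usual fix, forwarding the client's original Host header (which keeps the :8080):

        proxy_set_header Host $http_host;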

    Read the article

  • Edit exim4 Message-ID for releasing blocked mail by Mailscanner

    - by F12
    Our sysadmin team edits the Message-ID field in exim4 header files (the ones ending with -H) and substitutes the first char after "<", e.g.: 077I Message-ID: <[email protected] -- 077I Message-ID: <[email protected] I'd like to write a script to release the mails. I changed the part between "<" and "@" in the Message-ID field and substituted a hash value, so the Message-ID looks like: 077I Message-ID: <[email protected] Now exim says "format error" in the log and the mail is not released. There was no change except for this one field. Why can't the ID be substituted like that? Does it need to be the exact same length? It's exim4 version 4.69-2ubuntu0.3.
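
    Most likely yes: in an exim spool -H file each header line is prefixed with its byte count (that leading "077"), so a substitution that changes the line's length invalidates the count and exim reports a spool format error. A minimal sketch that keeps the replacement exactly as long as the original (the sample value is hypothetical):

        orig="abcdef0123456789"                                 # original local part of the ID
        sub=$(printf '%s' "$orig" | md5sum | cut -c1-${#orig})  # hash truncated to the same length
        echo "$sub"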

    Read the article

  • Ubuntu PPTP VPN to Microsoft Server Command Line ONLY

    - by supreme
    I am trying to set up a VPN connection from Ubuntu 12.04 LTS to a Microsoft VPN server (Ubuntu is the client in this case), but I only get this error message: .. connection failed! Check the log messages below for information why. Couldn't open the /dev/ppp device: Operation not permitted FATAL: Module ppp_generic not found./usr/sbin/pppd: Sorry - this system lacks PPP kernel support Details you may need: modprobe -v ppp > FATAL: Module ppp not found. uname -r -> 2.6.32-042stab076.8
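
    That "042stab" kernel string suggests an OpenVZ container, where PPP support has to be granted from the host node rather than loaded as a module inside the guest. A minimal sketch run on the host (CTID stands in for the container's actual ID):

        vzctl set CTID --features ppp:on --save     # allow PPP inside the container
        vzctl set CTID --devices c:108:0:rw --save  # expose /dev/ppp (char device, major 108)
        vzctl restart CTID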

    Read the article

  • Is an I/O benchmark made for hardware an accurate assessment of a Windows VM's performance under vSphere 5?

    - by Jeremy
    We support an enterprise application running on Windows Server 2008 R2. One of our customers has chosen to install it on VMware, and what I'm finding is that the VMs are relatively slow compared to hardware. Our product development team has advised that many VMs appear to run particularly slowly on I/O benchmarks, which impacts performance in production. I've tried the AttoSoft I/O benchmark and found that for smaller I/O blocks (1-32K) the VM I'm looking at is 25x slower than hardware, and for larger I/O blocks (1-8MB) it's 10x slower. Is this a fair benchmark? If not, any suggestions for a fair test?

    Read the article

  • Server downtime - are these APC warnings the cause?

    - by DisgruntledGoat
    Yesterday I had a problem with my dedicated server (Ubuntu 10.04, LAMP). It wasn't down per se, but it was running incredibly slowly, as if we had a massive overload of visitors (though I don't think we did). It's running smoothly again now. I've been checking through log files etc. to see if I can find any issues; the only strange thing is a bunch of these errors, occurring at about the same time as the downtime: [apc-warning] Unable to allocate memory for pool. in [file] on line 49. And a bit later on: [apc-warning] GC cache entry '[file1]' (dev=2056 ino=8988092) was on gc-list for 3601 seconds in [file2] on line 746. Could these errors indicate the cause of the server slowdown, or are they simply a result of the server being slow in the first place? What would be the solution?
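
    "Unable to allocate memory for pool" usually means APC's shared-memory segment filled up, after which cache misses force constant recompilation of PHP files, so these warnings are a plausible cause rather than just a symptom. A minimal sketch of the usual php.ini adjustments (the values are assumptions to tune against your code base):

        apc.shm_size = 128M   ; enlarge the shared-memory pool
        apc.ttl = 7200        ; let stale entries expire instead of wedging the cache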

    Read the article

  • How to remove a bad disk from LVM2 with the least data loss on other PVs?

    - by Walkman
    I had an LVM2 volume with two disks. The larger disk became corrupt, so I can't pvmove. What is the best way to remove it from the group to save the most data on the other disk? Here is my pvdisplay output: Couldn't find device with uuid WWeM0m-MLX2-o0da-tf7q-fJJu-eiGl-e7UmM3. --- Physical volume --- PV Name unknown device VG Name media PV Size 1,82 TiB / not usable 1,05 MiB Allocatable yes (but full) PE Size 4,00 MiB Total PE 476932 Free PE 0 Allocated PE 476932 PV UUID WWeM0m-MLX2-o0da-tf7q-fJJu-eiGl-e7UmM3 --- Physical volume --- PV Name /dev/sdb1 VG Name media PV Size 931,51 GiB / not usable 3,19 MiB Allocatable yes (but full) PE Size 4,00 MiB Total PE 238466 Free PE 0 Allocated PE 238466 PV UUID oUhOcR-uYjc-rNTv-LNBm-Z9VY-TJJ5-SYezce So I want to remove the unknown device (not present in the system). Is it possible to do this without a new disk? The filesystem is ext4.
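
    A minimal sketch of the usual salvage path, using the VG name "media" from the output above; this permanently abandons the extents that lived on the missing PV, so expect filesystem damage wherever they were:

        vgreduce --removemissing --force media   # drop the missing PV even though LVs use it
        vgchange -ay media                       # activate what is left
        fsck.ext4 -n /dev/media/LVNAME           # read-only check first (LVNAME is hypothetical)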

    Read the article

  • Google Chrome shows an error message box every time it starts

    - by Benjamin
    I removed the Google Chrome 10.x dev version and installed the 8.x stable version again. After installing 8.x, Chrome always shows this message box every time it starts: Your profile can not be used because it is from a newer version of Google Chrome. Some features may be unavailable. Please specify a different profile directory or use a newer version of Chrome. Which profile does it mean, and how do I fix this? Thanks.
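
    The 10.x build migrated the on-disk profile to a format 8.x can't read, so the usual fix is to move the old profile aside and let 8.x create a fresh one (bookmarks and settings are lost unless exported first). A minimal sketch for Windows Vista/7 (the path is an assumption; on XP the profile lives under Local Settings\Application Data):

        rename "%LOCALAPPDATA%\Google\Chrome\User Data\Default" Default.old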

    Read the article

  • SharePoint Calendar - Start time after a certain hour

    - by KodovaKim
    I am working with a SharePoint calendar list to create a shift schedule for a team (end-user side of things; I am not writing code). I have added a few custom columns to the calendar list item and exported the list to Excel, where I have a pivot table set up so I can see a summary of the different columns: the person's name (from the Title column) and the total hours they are scheduled for (separated into weekdays and weekends based on a custom column I added). What I need is a way to check the start time of the shift to determine whether it is a Day shift (starts at 7am), an Eve shift (starts at 3pm), or a Night shift (starts at 10pm). So, when creating a new calculated column, I would assume the function goes something like "=If([StartDate]...." but I am not sure of the rest. Does anyone know how I would write that function?
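
    A minimal sketch of the calculated-column formula, assuming the event's start column is named [Start Time] and that shifts only ever start at the three listed hours (a start between midnight and 7am would come out as Day):

        =IF(HOUR([Start Time])>=22,"Night",IF(HOUR([Start Time])>=15,"Eve","Day"))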

    Read the article

  • WGet from one site on a server to another site on the same server

    - by JoshReedSchramm
    Hey all, I've recently been asked to administer a couple of Ubuntu boxes running web servers. I'm a dev by trade, so if this question is fairly noob, please forgive me. We have about a dozen sites running on this box, and two of them need to talk back and forth over a RESTful API. Unfortunately we are having issues with the sites connecting to each other via wget. When we run wget manually from the command line on the server, pointing at a site also on that server, it hangs and eventually times out. If we do the same thing from outside the server to the same site, it works. Is there something that could be preventing sites on the same server from communicating with each other? The same thing happens when pinging the site from the server.
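
    A common cause: the sites' DNS names resolve to a public IP behind NAT or a firewall that won't hairpin traffic originating from the server itself, which would also explain the failing ping. A minimal sketch of a hosts-file workaround (the domain names are hypothetical):

        # /etc/hosts -- point the sites at the local interface
        127.0.0.1   sitea.example.com siteb.example.com
        # then verify from the server:
        # wget -O- http://sitea.example.com/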

    Read the article

  • Export 1 year of CVS to another repo?

    - by John Dibling
    We have a CVS repo with many years of history. It has become huge and unwieldy, so we would like to split this single repo into two repos. The main repo would have one year's worth of history, up to and including the present day; this is where all dev work would take place. An archive repo would have the complete history, up to the point where the main repo takes over; this would be read-only and only used to look at historical changes. Given that we are starting with one huge, monolithic CVS repo, is it possible to split it up this way? How can this be accomplished?
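
    A CVS repository is just a tree of RCS ,v files, so the archive half is a plain copy; trimming the working half is the risky part. A minimal sketch under those assumptions (paths, module name and cutoff date are hypothetical, and cvs admin -o destroys revisions, so rehearse on a scratch copy first):

        cp -a /var/cvs/repo /var/cvs/repo-archive                  # full-history archive copy
        cvs -d /var/cvs/repo rtag -D "2011-01-01" CUTOFF module    # tag the one-year boundary
        cvs -d /var/cvs/repo checkout module && cd module
        cvs admin -o::CUTOFF .                                     # prune revisions older than the tag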

    Read the article

  • BIND unstable with DLZ+MySQL on Ubuntu 9.10, any ideas?

    - by Chris
    My BIND server keeps dropping out and I can't work out why. Here is some info from the syslog that I think pertains to the failure(s): Apr 22 21:12:17 dnsdebug named[6613]: mysql driver unable to return result set for lookup query Apr 22 21:12:17 dnsdebug kernel: [285552.573949] type=1503 audit(1271963537.759:53): operation="open" pid=6618 parent=1 profile="/usr/sbin/named" requested_mask="::rw" denied_mask="::rw" fsuid=107 ouid=0 name="/dev/tty" Apr 22 21:12:17 dnsdebug named[6613]: mysql driver unable to return result set for lookup query Apr 22 21:13:17 dnsdebug named[6613]: last message repeated 7 times Any ideas? MySQL segfaulted occasionally, but that no longer seems to be an issue. It's the 64-bit version of Ubuntu, too. Sometimes it returns records just fine; other times it appears to just randomly go down.
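
    The DLZ MySQL driver is documented as not thread-safe (the DLZ docs recommend a named built with --disable-threads), so intermittent "unable to return result set" errors under load fit that picture. A minimal sketch of a quick test on Ubuntu, forcing a single worker thread (the existing OPTIONS content is an assumption):

        # /etc/default/bind9
        OPTIONS="-u bind -n 1"    # -n 1: run named with one worker thread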

    Read the article

  • Backup tape compression

    - by pufferfish
    What things should I check to confirm that compression is actually happening on our tape backup system? Although the tapes are marked as 200G/520G (native/compressed) capacity, they seem to fill up before the 200G mark (some at less than 100G). I'm using: a Sony AIT-4 tape autochanger, Sony SDX4-200C (AIT-4) tapes, Ubuntu Lucid, and Bacula. I've tried checking hardware compression with tapeinfo -f /dev/nst0, which gives Product Type: Tape Drive Vendor ID: 'SONY ' Product ID: 'SDX-900V ' Revision: '0102' Attached Changer API: No SerialNumber: '0001000036' MinBlock: 2 MaxBlock: 8388608 SCSI ID: 1 SCSI LUN: 0 Ready: yes BufferedMode: yes Medium Type: Not Loaded Density Code: 0x33 BlockSize: 0 DataCompEnabled: yes DataCompCapable: yes DataDeCompEnabled: yes CompType: 0x3 DeCompType: 0x3 BOP: yes Block Position: 0 Partition 0 Remaining Kbytes: 201778000 Partition 0 Size in Kbytes: 201779000 ActivePartition: 0 EarlyWarningSize: 0 NumPartitions: 0 MaxPartitions: 0 ... so I presume it's on. Note: the Bacula documentation says hardware compression needs to be enabled with "system tools such as mt".
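
    DataCompEnabled: yes in that output does indicate drive compression is on; mt can confirm or toggle it from the OS side. A minimal sketch using mt-st against the same device; note that data that is already compressed or encrypted compresses poorly and can even grow slightly on tape, which alone can explain filling up before the native mark:

        mt -f /dev/nst0 status         # drive and density status
        mt -f /dev/nst0 compression 1  # explicitly enable drive compression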

    Read the article

  • MapReduce job is hung after 1 of 5 reducers completed on single-node environment

    - by Marboni
    I have only one data node in my dev environment on EC2. I ran a heavy MR job, and after 6 hours I noticed that 100% of the mappers and 20% of the reducers had finished (1 reducer shows 100% completion, the other ones 0%). It looks like the job is hung between two reducer runs. I don't see any errors in the log files. What could it be? P.S. Last logs of the successfully finished reducer: 2012-11-09 11:29:21,576 INFO org.apache.hadoop.mapred.Task: Task:attempt_201211090523_0004_r_000000_0 is done. And is in the process of commiting 2012-11-09 11:29:22,692 INFO org.apache.hadoop.mapred.Task: Task attempt_201211090523_0004_r_000000_0 is allowed to commit now 2012-11-09 11:29:22,719 INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Saved output of task 'attempt_201211090523_0004_r_000000_0' to /data/output/1352457275873/20121109-053433-common 2012-11-09 11:29:22,721 INFO org.apache.hadoop.mapred.Task: Task 'attempt_201211090523_0004_r_000000_0' done. 2012-11-09 11:29:22,725 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
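
    With everything on one node, a common culprit is the remaining reducers waiting on a reduce slot or on shuffle data that never arrives, which produces exactly this silent hang. A minimal sketch of first diagnostics (the job ID comes from the log above; the reducer JVM pid is hypothetical):

        hadoop job -status job_201211090523_0004   # counters and per-phase progress
        jstack REDUCER_JVM_PID > reducer.jstack    # see where a stuck reducer is blocked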

    Read the article

  • How to run a process and completely detach it of its parent shell

    - by Bicou
    I'm running a program on a Linux server that will take days to complete. I'm launching it from my workstation in an SSH terminal, as this program is command-line only. I want to be able to do all of these: launch the program, redirect its standard outputs to files, and exit my SSH session without terminating the process. I thought about $ ./MyProg.csh -params -foo -bar </dev/null 1>~/out.log 2>~/err.log & However, the process is terminated the moment I close my SSH session. My workstation is running Windows XP, and I cannot guarantee its uptime over several days, which is required for processing my data on the Linux server. As you may have noted, my program has to be launched from csh. Is it possible to do this? Thanks.
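
    The job dies because the shell sends SIGHUP to its children when the SSH session closes; nohup makes the process ignore that signal. A minimal sketch, assuming a Bourne-style login shell on the server (under csh itself, use its nohup builtin and drop the 2> redirection, which csh does not support):

        nohup ./MyProg.csh -params -foo -bar </dev/null 1>~/out.log 2>~/err.log &
        # or run it inside screen to be able to reattach later:
        screen -dmS myjob ./MyProg.csh -params -foo -bar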

    Read the article

  • Software RAID 1 broken, how do I fix this?

    - by Edward
    I'm running CentOS 6 x86_64. There is a software RAID 1 being used on the two internal 80GB drives. I got the following e-mail sent to me: A DegradedArray event had been detected on md device /dev/md1. Faithfully yours, etc. P.S. The /proc/mdstat file currently contains the following: Personalities : [raid1] md0 : active raid1 sda1[0] 511988 blocks super 1.0 [2/1] [U_] md1 : active raid1 sda2[0] 8190968 blocks super 1.1 [2/1] [U_] bitmap: 1/1 pages [4KB], 65536KB chunk md4 : active raid1 sdc1[0] sdb1[1] 1953512400 blocks super 1.2 [2/2] [UU] md3 : active raid1 sdd5[1] sda5[0] 61224892 blocks super 1.1 [2/2] [UU] bitmap: 1/1 pages [4KB], 65536KB chunk md2 : active raid1 sdd3[1] sda3[0] 8190968 blocks super 1.1 [2/2] [UU] unused devices: <none> The system appears to have booted fine and is working. The two drives' content did not change at all. I only removed and reinstalled them while I was booted on the CentOS Live DVD. How do I get the array working again?
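
    According to that mdstat, md0 and md1 are each missing their second member, and since the healthy arrays pair /dev/sda with /dev/sdd, the missing halves are most likely sdd1 and sdd2 (verify the partition table with fdisk -l /dev/sdd before touching anything). A minimal sketch of re-adding them:

        mdadm --manage /dev/md0 --add /dev/sdd1
        mdadm --manage /dev/md1 --add /dev/sdd2
        watch cat /proc/mdstat    # follow the resync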

    Read the article

  • Simplification of Apache+Subversion multidirectory configuration

    - by Reinderien
    Hello. With your excellent advice, I've finally pieced together this functional Apache configuration for my Subversion service: # Macro to make an SVN repo set <Macro SVNDir $user> <Location /svn/$user> # Mandatory HTTPS, log in using Active Domain SSLRequireSSL AuthPAM_Enabled on AuthType Basic AuthBasicAuthoritative off AuthName "PAM" Require user AD\$user # Needed to squash spurious error messages AuthUserFile /dev/null # SVN stuff DAV svn SVNParentPath /var/www/svn/$user </Location> </Macro> # List of accounts Use SVNDir user1 Use SVNDir user2 # ... It works, but it isn't optimal. I'd like to somehow redo this so that it can just scan the list of directories in /var/www/svn and automatically do this for each of them. Is that possible? Thanks.
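
    mod_macro has no way to expand a macro per existing directory, but if the layout can be flattened so every repository sits directly under /var/www/svn, one Location with SVNParentPath covers them all and per-user access moves into an authz file. A minimal sketch under that assumption (the authz path is hypothetical and requires mod_authz_svn):

        <Location /svn>
            SSLRequireSSL
            AuthPAM_Enabled on
            AuthType Basic
            AuthBasicAuthoritative off
            AuthName "PAM"
            AuthUserFile /dev/null
            Require valid-user
            DAV svn
            SVNParentPath /var/www/svn
            AuthzSVNAccessFile /etc/apache2/svn-authz
        </Location>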

    Read the article

  • What methods are there to configure puppet to serve resources for multiple environments?

    - by cclark
    I seem to come across two ways of using puppet in multiple environments: 1) Install a puppetmaster in each environment, and only update that environment's recipes from source control when they are ready to deploy there. 2) Use one puppetmaster, set a variable in each client's puppet.conf to specify its environment, and on the puppetmaster specify a different modulepath for each environment, with each path updated to the branch of the recipe repository intended for that environment (e.g. dev, staging, production). Running only one puppetmaster means one less piece of infrastructure to keep running, but there is some additional complexity in the configuration. Are there additional pros or cons to either of these methods, or something I'm missing entirely?
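
    A minimal sketch of option 2 as configured in this era of Puppet, where each environment is a section in the master's puppet.conf (the paths and environment names are assumptions):

        # puppet.conf on the master
        [dev]
        modulepath = /etc/puppet/environments/dev/modules
        [production]
        modulepath = /etc/puppet/environments/production/modules

        # puppet.conf on each agent
        [agent]
        environment = dev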

    Read the article

  • Xen DomU does not have network connectivity

    - by Prakashkumar Thiagarajan
    I am trying to install Xen on my Fedora box. The Dom0 image has network connectivity, but when I create a DomU, it does not. I want to be able to run in bridged mode and have set up the /etc/xend/xend-config.sxp file accordingly. My config file looks like: kernel = "/boot/vmlinuz-2.6.18-xenU" memory = 64 name = "clientA" vif = ['bridge=xenbr0,mac=12.34.56.78.9A.BC'] root = "/dev/sda1 ro" ramdisk = "/boot/initrd-linux.img" extra = "ro selinux=0.3 initcall_debug" features = 'auto_translated_physmap' Am I missing something?
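
    One thing that stands out is the vif line: MAC addresses are colon-separated, and Xen's assigned OUI is 00:16:3e, so a dotted MAC can leave the interface unconfigured. A minimal sketch of a corrected line (the low three octets are arbitrary), plus the bridge directives worth confirming in xend-config.sxp:

        vif = ['bridge=xenbr0,mac=00:16:3e:12:34:56']
        # in /etc/xend/xend-config.sxp:
        # (network-script network-bridge)
        # (vif-script vif-bridge)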

    Read the article
