Search Results

Search found 508 results on 21 pages for 'brad parks'.

Page 5 of 21

  • Windows 2003 IIS FTP Server Migration w/ User Accounts

    - by Brad
    I'm trying to figure out the best way to migrate an FTP server from old hardware to new hardware. The server is on a domain, but not all the users set up on the server (to use FTP) are domain accounts; some are local to the server. For example, I have users both ways: domain\username and machinename\username. The new machine name will be different. So I need to copy all the files, with permissions intact, from the old server to the new server. Then I need to convert all the user accounts from the old server to the new server. Then I need to change the file permissions so that they are no longer oldserver\username but newserver\username. Can this all be accomplished with CACLS? Is there an easy way that perhaps I'm missing?
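
    A minimal sketch of one way to do the copy-and-remap steps, assuming robocopy and subinacl are available (both ship in Microsoft's resource kit / support tools); server names, share paths, and the user name are placeholders:

        rem Copy files together with their ACLs, attributes and timestamps
        robocopy \\OLDSERVER\d$\ftproot \\NEWSERVER\d$\ftproot /E /COPYALL /R:2 /W:5

        rem After recreating the local accounts on NEWSERVER, re-point the ACL entries
        rem from the old machine-local accounts to the new ones (run once per user)
        subinacl /subdirectories \\NEWSERVER\d$\ftproot\*.* /replace=OLDSERVER\someuser=NEWSERVER\someuser

    Note that neither tool recreates the local user accounts themselves; those would have to be created on the new server first.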

    Read the article

  • MySQL 5.5 on Windows server is horribly slow

    - by Brad
    I have had no luck getting MySQL 5.5 to be as fast as 5.1 or MariaDB on the exact same hardware/database/environment under Windows Server 2003 R2 or 2008 R2. My benchmarks from our application:

        MySQL 5.5 + CentOS 5.2 (XenServer virtual) = 28 seconds (box is "busy", not buried)
        MariaDB (5.1) + Windows 2003 (physical box) = 130 seconds (box is 2% busy)
        MySQL 5.1 + Windows 2003 (physical box) = 170 seconds (box is 2% busy)
        MySQL 5.5 + Windows 2003 (physical box) = 305 seconds, as high as 600 seconds (box is 2% busy)

    The only differences between these runs are the removal of skip-locking and running mysql_upgrade.exe to update some tables for stored procs on 5.5. Yes, I know it's a release candidate; I'm feeding that back to MySQL as well. No slow queries are logged; it doesn't think it's being slow, it just is. I'm going to start tearing into the queries themselves to see if the INSERT/SELECT plans have gone buggo on 5.5. Any help would be appreciated! Thanks

    Read the article

  • Memcached Lagging

    - by Brad Dwyer
    Let me preface this by saying that this is a follow-up question to this topic. That was "solved" by switching from Solaris (SmartOS) to Ubuntu for the memcached server. Now we've multiplied load by about 5x and are running into problems again.

    We are running a site that is doing about 1000 requests/minute, and each request hits Memcached with approximately 3 reads and 1 write, so load is approximately 65 requests per second. Total data in the cache is about 37M, and each key contains a very small amount of data (a JSON-encoded array of integers amounting to less than 1K). We have set up a benchmarking script on these pages and fed the data into StatsD for logging.

    The problem is that there are spikes where Memcached takes a very long time to respond. These do not appear to correlate with spikes in traffic. What could be causing these spikes? Why would memcached take over a second to reply? We just booted up a second server to put in the pool and it didn't make any noticeable difference in the frequency or severity of the spikes.

    This is the output of getStats() on the servers:

        Array
        (
            [-----------] => Array
                (
                    [pid] => 1364
                    [uptime] => 3715684
                    [threads] => 4
                    [time] => 1336596719
                    [pointer_size] => 64
                    [rusage_user_seconds] => 7924
                    [rusage_user_microseconds] => 170000
                    [rusage_system_seconds] => 187214
                    [rusage_system_microseconds] => 190000
                    [curr_items] => 12578
                    [total_items] => 53516300
                    [limit_maxbytes] => 943718400
                    [curr_connections] => 14
                    [total_connections] => 72550117
                    [connection_structures] => 165
                    [bytes] => 2616068
                    [cmd_get] => 450388258
                    [cmd_set] => 53493365
                    [get_hits] => 450388258
                    [get_misses] => 2244297
                    [evictions] => 0
                    [bytes_read] => 2138744916
                    [bytes_written] => 745275216
                    [version] => 1.4.2
                )
            [-----------:11211] => Array
                (
                    [pid] => 8099
                    [uptime] => 4687
                    [threads] => 4
                    [time] => 1336596719
                    [pointer_size] => 64
                    [rusage_user_seconds] => 7
                    [rusage_user_microseconds] => 170000
                    [rusage_system_seconds] => 290
                    [rusage_system_microseconds] => 990000
                    [curr_items] => 2384
                    [total_items] => 225964
                    [limit_maxbytes] => 943718400
                    [curr_connections] => 7
                    [total_connections] => 588097
                    [connection_structures] => 91
                    [bytes] => 562641
                    [cmd_get] => 1012562
                    [cmd_set] => 225778
                    [get_hits] => 1012562
                    [get_misses] => 125161
                    [evictions] => 0
                    [bytes_read] => 91270698
                    [bytes_written] => 350071516
                    [version] => 1.4.2
                )
        )

    Edit: Here is the result of a set and retrieve of 10,000 values.

        Normal:
        Stored 10000 values in 5.6118 seconds.  Average: 0.0006  High: 0.1958  Low: 0.0003
        Fetched 10000 values in 5.1215 seconds. Average: 0.0005  High: 0.0141  Low: 0.0003

        When spiking:
        Stored 10000 values in 16.5074 seconds.  Average: 0.0017  High: 0.9288  Low: 0.0003
        Fetched 10000 values in 19.8771 seconds. Average: 0.0020  High: 0.9478  Low: 0.0003
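
    For reference, a minimal sketch of the kind of set/get timing loop that would produce figures like the ones above (assumed, not the poster's actual StatsD-fed script; the server address and key naming are placeholders):

        <?php
        // Time 10,000 memcached set round-trips and report aggregate figures
        $m = new Memcached();
        $m->addServer('127.0.0.1', 11211);   // placeholder address

        $setTimes = array();
        for ($i = 0; $i < 10000; $i++) {
            $t = microtime(true);
            $m->set("bench:$i", json_encode(array($i, $i + 1, $i + 2)));
            $setTimes[] = microtime(true) - $t;
        }
        printf("Stored %d values in %.4f seconds. Average: %.4f High: %.4f Low: %.4f\n",
               count($setTimes), array_sum($setTimes),
               array_sum($setTimes) / count($setTimes), max($setTimes), min($setTimes));

    The fetch loop is analogous, timing $m->get("bench:$i") instead of the set.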

    Read the article

  • What parts should I get for an ASRock x58 Extreme motherboard

    - by Brad Gilbert
    I just received an ASRock X58 Extreme motherboard, for my post on this question. It was a 2009 Tom's Hardware recommended buy. It is a Core i7 motherboard with an X58 Express chipset, and it uses DDR3 RAM. What I want to know is: what parts should I get to finish it off? I'm looking for some good bargains, because of a lack of funds. The most taxing game I will probably play on it is OpenTTD.

    The only parts I currently have that are compatible:

        - A Dynex 400W power supply. It appears to be an ATX 2.1 power supply, with the addition of a -5V rail, apparently designed to be compatible with most ATX-style motherboards.
        - Several PCI add-in cards:
            - mostly 10/100 network cards
            - some sound cards
            - some video cards with a VGA connector
        - Plenty of PATA drives:
            - 8 GB - 80 GB hard drives
            - a dozen or so CD-ROM drives, only a handful of them CD-RW
            - one DVD-ROM drive

    I have one LCD, with a 15-pin VGA connector, which I salvaged from the dump. The only thing wrong with it was some dead capacitors. It also has a stuck pixel.

    Read the article

  • mod_proxy security

    - by brad
    I'm on Debian Lenny using apache2. In my proxy.conf I tried adding "Allow from localhost", as suggested in some other forums, to get proxying to work. That didn't work; it only worked if I said "Allow from all". My question is this: are there any security implications to this "Allow from all" directive? Most people were saying to make this as limited as possible, but "all" refers to the client, right? I want anyone, regardless of their IP, to be forwarded properly. Is there a better way to configure this?
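
    A minimal reverse-proxy sketch for comparison (directive names as in Apache 2.2 on Lenny; the backend URL is a placeholder). The main mod_proxy security concern is running an open forward proxy, which is governed by ProxyRequests rather than by the Allow directive inside the Proxy block:

        # Keep forward proxying off; this is what would make the box an open proxy
        ProxyRequests Off

        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>

        ProxyPass        /app http://127.0.0.1:8080/app
        ProxyPassReverse /app http://127.0.0.1:8080/app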

    Read the article

  • How do I get started with Chef?

    - by Brad Wright
    The Chef documentation is pretty bad, and Google isn't helping me. Can anyone point me at a decent article or something that would help me get started? My specific issues are:

        - How do I get a client to read my configuration? chef-solo seems like the best start (I don't want to run an OpenID server or Merb).
        - How do I configure Apache to serve Django? I already know how to do this via regular server configuration, but I figure an example Chef recipe would be a good start.
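
    For the first issue, a minimal chef-solo sketch based only on what the question mentions; the paths and the cookbook names in the run list are placeholder assumptions, not real cookbooks:

        # solo.rb: tells chef-solo where to find cookbooks (path is a placeholder)
        cookbook_path "/srv/chef/cookbooks"

        # node.json: the run list for this machine (cookbook names are assumptions)
        { "run_list": [ "recipe[apache2]", "recipe[my_django_app]" ] }

        # Run the client against the local configuration, no server required:
        chef-solo -c solo.rb -j node.json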

    Read the article

  • Poor WAMP performance when using SMB/UNC Paths

    - by Brad
    I've configured a WAMP (Windows, Apache, MySQL and PHP) stack which, when configured to use local storage, takes 3-4 seconds to load. When I use an SMB/UNC share it takes 12-15 seconds to load. Here are the relevant lines in my httpd.conf:

        #DocumentRoot "//10.99.108.11/test_htdocs"
        #<Directory "//10.99.108.11/test_htdocs">
        #DocumentRoot "C:/www"
        #<Directory "C:/www">

    Is there performance tuning I can do on Windows Server 2008 R2 to improve performance, or is there another way to improve performance when using SMB?
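
    One tuning sketch worth trying on the Apache side (an assumption to verify, not a confirmed fix): memory-mapping and sendfile are known to cause problems when the DocumentRoot lives on a network filesystem, so they are commonly disabled for SMB-backed directories. Directives as in Apache 2.2:

        DocumentRoot "//10.99.108.11/test_htdocs"
        <Directory "//10.99.108.11/test_htdocs">
            EnableMMAP Off
            EnableSendfile Off
        </Directory>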

    Read the article

  • How do you use environment variables, such as %CommonProgramFiles%, in the PATH and have them recognized by services.exe?

    - by Brad Knowles
    I'm trying to add C:\Program Files\Common Files\xxx\xxx to the system PATH environment variable by appending %CommonProgramFiles%\xxx\xxx to the existing path. After rebooting, I open a command prompt and check the PATH. It expands correctly. However, when using Process Explorer from Sysinternals to view the Environment variables on services.exe, it shows the unexpanded version. Coincidentally, the paths using %SystemRoot% expand and are recognized just fine. I've tried altering the PATH through the Environment Variables window from System Properties and through direct Registry manipulation, neither seems to work. Is it possible to use other environment variables, besides %SystemRoot% in PATH and have services.exe understand it?
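
    One thing worth checking (an assumption to verify, not a confirmed fix) is how the Path value is stored in the registry: embedded variables only expand if the value's type is REG_EXPAND_SZ, and only variables that are already defined when services.exe builds its environment get expanded.

        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment" /v Path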

    Read the article

  • How do I get VDPAU working with Ubuntu 9.10?

    - by Brad Robertson
    What do I need to do to get MKV HD videos playing with VDPAU and also Blu-ray discs? Lots of people say you need to compile the latest MPlayer (which I haven't had luck doing) for VDPAU. I found an mplayer ppa that says it has VDPAU compiled into it so I'd like to use that. What packages do I need for playing MKV files and Blu-ray with the video decoding offloaded to my GPU? So far I haven't had any luck with any of the tutorials I've found. I'm just looking for a quick synopsis that will tell me what I'm looking for as I'm kind of shooting in the dark. (I didn't know what VDPAU was until a few days ago.)
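
    For the MKV part, a sketch of the kind of invocation commonly used once an mplayer build with VDPAU support is installed (for example from the PPA mentioned); the codec list and file name are placeholders:

        # Offload H.264/MPEG-2 decoding to the GPU; the trailing comma lets mplayer
        # fall back to software codecs for anything the VDPAU decoders can't handle
        mplayer -vo vdpau -vc ffh264vdpau,ffmpeg12vdpau, video.mkv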

    Read the article

  • Deploy Rails app from Hudson

    - by brad
    I'm using Hudson as my CI server and it works great: builds run their tests, code metrics, all that good stuff. But at the moment that's it; there's no automated deployment, I have to do that manually afterwards. I haven't found any sort of Capistrano plugin for Hudson, and I can't even see where I could just run my cap deploy after a successful build in Hudson. Does anyone have any idea what I need in order to automate a deployment to a testing server on a successful build? I'd like each commit to force a build and in turn a deploy to testing, so I can see everything right away.
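
    A low-tech sketch of one approach: have the build job trigger a downstream "deploy" job on success, and give that job an "Execute shell" build step that simply runs Capistrano from its workspace. This assumes Capistrano is installed on the Hudson node and the deploy job checks out the same repository:

        # Execute-shell build step in the deploy job
        cd "$WORKSPACE"
        cap deploy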

    Read the article

  • java max heap size, how much is too much

    - by brad
    I'm having issues with a JRuby (Rails) app running in Tomcat. Occasionally page requests can take up to a minute to return (even though the Rails logs show the request was processed in seconds, so it's obviously a Tomcat issue). I'm wondering what settings are optimal for the Java heap size. I know there's no definitive answer, but I thought maybe someone could comment on my setup. I'm on a small EC2 instance which has 1.7 GB of RAM. I have the following JAVA_OPTS:

        -Xmx1536m -Xms256m -XX:MaxPermSize=256m -XX:+CMSClassUnloadingEnabled

    My first thought is that Xmx is too high. If I only have 1.7 GB and I allocate 1.5 GB to Java, I feel like I'll get a lot of paging. Typically my Java process shows (in top) 1.1g resident memory and 2g virtual. I also read somewhere that setting Xms and Xmx to the same size will help, as it eliminates time spent on memory allocation. I'm not a Java person, but I've been tasked with figuring out this problem and I'm trying to find out where to start. Any tips are greatly appreciated!

    Update: I've started analyzing the garbage collection dumps using -XX:+PrintGCDetails. When I notice these occasional long load times, the GC logs go nuts. On the last one I did (which took 25s to complete) I had GC log lines such as:

        1720.267: [GC 1720.267: [DefNew: 27712K->16K(31104K), 0.0068020 secs] 281792K->254096K(444112K), 0.0069440 secs]
        1720.294: [GC 1720.294: [DefNew: 27728K->0K(31104K), 0.0343340 secs] 281808K->254080K(444112K), 0.0344910 secs]

    About 300 of them on a single request! Now, I don't totally understand why it's always GC'ing from ~28m down to 0 over and over.
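
    To make the arithmetic concrete: a 1536m heap plus a 256m permgen already roughly equals the instance's 1.7 GB before counting thread stacks, the JVM's own overhead, and the OS, so paging is plausible. A hypothetical, more conservative sizing for comparison (values are illustrative assumptions, not a recommendation from the post):

        # Illustrative only; Xms=Xmx also avoids resize pauses, as noted above
        JAVA_OPTS="-Xms768m -Xmx768m -XX:MaxPermSize=128m -XX:+CMSClassUnloadingEnabled"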

    Read the article

  • need to delete files owned by apache - unsure how to do that

    - by Brad
    I'm running Apache on a RHEL server, and sometimes I need to delete files that I can't remove with my FTP program, because the FTP account I'm logged into is not the apache user. I'm on a Mac, and there must be a way to accomplish this via the terminal by SSH'ing into the server. What credentials would I need to SSH into the server and delete the files/folders owned by apache? This screen shot shows what I mean when a file is owned by the apache user/group: http://cl.ly/e2192e6aadc8e4688c33 Any help is appreciated.
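
    A sketch of the usual approach, assuming you (or an admin) have a shell account on the server with sudo rights; an FTP-only login will not work. Host and paths are placeholders:

        ssh youruser@yourserver.example.com

        # remove the files as root...
        sudo rm -rf /var/www/html/uploads/old-dir

        # ...or act strictly as the apache user, which only works on files apache owns
        sudo -u apache rm -rf /var/www/html/uploads/old-dir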

    Read the article

  • ubuntu ppa's and apt precedence

    - by Brad Robertson
    I'm trying to get the latest mplayer installed with vdpau support. Haven't had any luck so far. I found a PPA that says it works and I want to install it. I added the PPA to my apt.sources and did an update. I just wanted to know, how do I know that I'm now installing the mplayer package from the PPA, and not Ubuntu's standard repositories? From what I see I just install the same package using apt-get install mplayer. How do I know which mplayer I'm getting? Where do I specify what takes precedence?
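
    A sketch of how this is usually checked and, if needed, forced; the PPA's origin string is a placeholder and should be taken from whatever apt-cache policy actually reports:

        # Shows every available version of mplayer, which repository each comes
        # from, and which one apt will install (the "Candidate")
        apt-cache policy mplayer

        # To pin the PPA above the Ubuntu archive, an /etc/apt/preferences entry
        # along these lines (a priority above 500 beats the standard repositories):
        Package: mplayer
        Pin: release o=LP-PPA-someowner
        Pin-Priority: 700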

    Read the article

  • java.lang.OutOfMemoryError: unable to create new native thread

    - by Brad
    I consistently get this exception when trying to run my JUnit tests on my Mac:

        java.lang.OutOfMemoryError: unable to create new native thread
            at java.lang.Thread.start0(Native Method)
            at java.lang.Thread.start(Thread.java:658)
            at java.util.concurrent.ThreadPoolExecutor.addIfUnderMaximumPoolSize(ThreadPoolExecutor.java:727)
            at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:657)
            at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:92)
            at com.google.appengine.tools.development.ApiProxyLocalImpl$PrivilegedApiAction.run(ApiProxyLocalImpl.java:197)
            at com.google.appengine.tools.development.ApiProxyLocalImpl$PrivilegedApiAction.run(ApiProxyLocalImpl.java:184)
            at java.security.AccessController.doPrivileged(Native Method)
            at com.google.appengine.tools.development.ApiProxyLocalImpl.doAsyncCall(ApiProxyLocalImpl.java:172)
            at com.google.appengine.tools.development.ApiProxyLocalImpl.makeAsyncCall(ApiProxyLocalImpl.java:138)

    The same set of unit tests pass perfectly fine on Ubuntu and Windows. Some information about my system resources on the Mac:

        $ ulimit -a
        core file size        (blocks, -c) 0
        data seg size         (kbytes, -d) unlimited
        file size             (blocks, -f) unlimited
        max locked memory     (kbytes, -l) unlimited
        max memory size       (kbytes, -m) unlimited
        open files                    (-n) 1024
        pipe size          (512 bytes, -p) 1
        stack size            (kbytes, -s) 8192
        cpu time             (seconds, -t) unlimited
        max user processes            (-u) 266
        virtual memory        (kbytes, -v) unlimited

        $ java -version
        java version "1.6.0_24"
        Java(TM) SE Runtime Environment (build 1.6.0_24-b07-334-10M3326)
        Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02-334, mixed mode)

    The reason I don't think this is an application issue is that the same tests pass in different environments. I have tried setting the heap to 1024m and 512m, and setting the stack to 64k and 128k (and each of these combinations), with no luck. My open files limit was originally 256 and I have bumped it to 1024. I have been googling around for a bit and all posts say to decrease heap size and increase stack size, but that doesn't seem to help. Anyone have any more ideas?

    EDIT: Here is some environment information on my Ubuntu box:

        $ ulimit -a
        core file size        (blocks, -c) 0
        data seg size         (kbytes, -d) unlimited
        scheduling priority           (-e) 20
        file size             (blocks, -f) unlimited
        pending signals               (-i) 16382
        max locked memory     (kbytes, -l) 64
        max memory size       (kbytes, -m) unlimited
        open files                    (-n) 1024
        pipe size          (512 bytes, -p) 8
        POSIX message queues   (bytes, -q) 819200
        real-time priority            (-r) 0
        stack size            (kbytes, -s) 8192
        cpu time             (seconds, -t) unlimited
        max user processes            (-u) unlimited
        virtual memory        (kbytes, -v) unlimited
        file locks                    (-x) unlimited

        $ java -version
        java version "1.6.0_24"
        Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
        Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)
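
    One difference that stands out between the two listings above is "max user processes" (266 on the Mac vs unlimited on Ubuntu). As an experiment (an assumption, not a confirmed fix), the limits can be raised for the shell that launches the tests before running them:

        ulimit -u 1024    # max user processes
        ulimit -n 2048    # open files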

    Read the article

  • How does java permgen relate to code size

    - by brad
    I've been reading a lot about Java memory management, garbage collection and the like, and I'm trying to find the best settings for my limited memory (1.7 GB on a small EC2 instance). I'm wondering if there is a direct correlation between my code size and the permgen setting. According to Sun:

        The permanent generation is special because it holds data needed by the virtual machine to describe objects that do not have an equivalence at the Java language level. For example objects describing classes and methods are stored in the permanent generation.

    To me this means that it's literally storing my class definitions, etc. Does this mean there is a direct correlation between my compiled code size and the permgen I should be setting? My whole app is about 40 MB and I noticed we're using a 256 MB permgen. I'm thinking maybe we're using memory that could be better allocated to dynamic data like object instances, etc.
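
    There is no fixed ratio between code size on disk and permgen use, since it depends on how many classes actually get loaded (framework and generated classes included), but the contents can be inspected directly. A sketch, assuming a Sun JDK 6 on the instance; the pid and sizing values are illustrative:

        # Per-classloader statistics of what is sitting in the permanent generation
        jmap -permstat <pid>

        # If 256 MB turns out to be oversized, permgen can be capped lower
        -XX:PermSize=64m -XX:MaxPermSize=128m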

    Read the article

  • Hyper-V Virtual Machine won't respond over network

    - by Brad Gignac
    Recently, one of our Hyper-V virtual machines has periodically stopped responding over the network. It seems to be happening every few days, and occasionally up to several times a day. I am by no means a sysadmin, so any direction you guys could provide would be very welcome. I've included everything I know to include below; if you need any additional information, I'll be glad to include it.

        - I can connect through the Hyper-V console.
        - I can't connect to network shares, IIS web apps, using RDP, or using ping.
        - Memory usage seems to be normal (3 of 4 GB).
        - Processor usage seems low.

    We don't know the exact time the server goes down, but the following error appears consistently around the time it goes down:

        Error 5719, NETLOGON
        This computer was not able to set up a secure session with a domain controller in domain *** due to the following: There are currently no logon servers available to service the logon request. This may lead to authentication problems. Make sure that this computer is connected to the network. If this problem persists, please contact your domain administrator.

    Read the article

  • MySQL Privileges required to GRANT EVENT, EXECUTE, LOCK TABLES, and TRIGGER

    - by Brad
    I have an account, user_a, and I would like to grant all available permissions on some_db to user_b. I have tried the following query:

        GRANT ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES,
              CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES,
              REFERENCES, SELECT, SHOW VIEW, TRIGGER, UPDATE
        ON `some_db`.* TO 'user_b'@'%' WITH GRANT OPTION

    The result:

        Access denied for user 'user_a'@'%' to database 'some_db'

    Some experimentation has shown me that the only permissions my account (user_a) is unable to grant are EVENT, EXECUTE, LOCK TABLES, and TRIGGER. What privileges are required for my account to GRANT these privileges to another user? If I run SHOW GRANTS, I get this output:

        "GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER, SHOW DATABASES, SUPER, CREATE TEMPORARY TABLES, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER ON *.* TO 'user_a'@'%' IDENTIFIED BY PASSWORD '1234567890abcdef' WITH GRANT OPTION"
        "GRANT SELECT, INSERT, UPDATE, DELETE, EXECUTE ON `some_other_unrelated_db`.* TO 'user_a'@'%'"
        "GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, CREATE ROUTINE, ALTER ROUTINE ON `another_unrelated_db`.* TO 'user_a'@'%' WITH GRANT OPTION"
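
    For context, MySQL only lets an account grant privileges that it already holds (with GRANT OPTION) at that scope, and the global grant shown above for user_a is missing exactly EVENT, EXECUTE, LOCK TABLES, and TRIGGER. A sketch of what an account with sufficient rights (for example root) could run so that user_a can pass them on, assuming a grant at the some_db level is acceptable:

        GRANT EVENT, EXECUTE, LOCK TABLES, TRIGGER
            ON `some_db`.* TO 'user_a'@'%' WITH GRANT OPTION;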

    Read the article

  • How do you prevent the dock from switching monitors in OSX Mavericks?

    - by Brad Dwyer
    In Mavericks, Apple introduced a "feature" where if you hover at the bottom of any screen the dock pops up on that screen. This is disrupting my workflow as I am constantly having the dock pop up when I don't want it to and then I have to go to another window and hover at the bottom for several seconds to get it to go away so I can click on what I was trying to in the first place. I don't want the dock to move; I want it to stay on the bottom of my right-most monitor like it always has. How can I adjust this in OSX Mavericks?

    Read the article

  • Hudson deploy specific git revision

    - by brad
    I'm using Hudson to auto-deploy my Rails app to Heroku. In my main build job I pull from a Git repo (hosted using gitosis on the same machine), master branch, with the following:

        URL of repository: /home/git/repositories/my_app.git
        Name of repository: origin
        Refspec: +refs/heads/master:refs/remotes/origin/master
        Branches to build: master

    Then, assuming all tests pass, I want to kick off a new build that is the deploy to Heroku. I can't, however, figure out how to get that deploy build to check out the particular revision that this build was using. I understand there's a parameterized trigger plugin that would allow me to pass this revision number, but I don't know how I can tell Hudson to check out that particular revision in the deploy build. I'm pretty sure this just has to do with my limited knowledge of Git, but where in Hudson's Git config is there an option to check out a particular revision? Otherwise, many commits could happen while a build is running, and when it kicks off a deploy build, that deploy build would just check out the HEAD of the branch, which may not be the same as the code that was pushed to trigger the build.

    I don't fully understand why I have a refspec in Hudson and then also specify a branch to build; I thought these were the same thing. Can the refspec somehow specify the revision number? How would this be referenced if it was passed through with the parameterized trigger plugin? (I've never used that plugin, but someone else recommended it as a way to pass in vars to a new build; if there's another way I'm all ears.)
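
    A sketch of one way this is commonly wired up with the parameterized trigger plugin: have the build job pass a parameter such as REVISION=$GIT_COMMIT to the deploy job (GIT_COMMIT is the variable the Hudson git plugin sets to the exact SHA that was built; REVISION is a hypothetical parameter name), then have the deploy job check that commit out before deploying:

        # Execute-shell step in the deploy job
        git checkout "$REVISION"
        cap deploy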

    Read the article

  • Rails time stamps on images in CSS

    - by brad
    Just posted this on Stack but realized it may be more appropriate here. Rails time stamping is great: I'm using it to add expires headers to all files that end in the 10-digit timestamp. Most of my images, however, are referenced in my CSS. Has anyone come across any method that allows timestamps to be added to CSS-referenced images, or some funky rewrite rule that achieves this? I'd love for ALL images on my site, both inline and in CSS, to have this timestamp so I can tell the browser to cache them but refresh any time the file itself changes. I couldn't find anything on the net regarding this, and I can't believe it isn't a more frequently discussed topic. I don't think my setup will matter, because the actual expiring will hopefully happen the same way (based on the 10-digit timestamp), but I'm using Apache to serve all static content, if that matters.

    Read the article

  • VMWare Server :: VM set to 2gb RAM but vmware process shows 100mb physical, 1900mb virtual

    - by brad
    I've set up a VMware instance to run the CastIron Integration Appliance. I allocated 2 GB of memory to the instance, assuming it would take this as physical memory (my server has 8 GB total). When I view top on the server, however, the vmware-vmx process shows about 100m resident memory and 1900m virtual. Running CastIron, it reports that the appliance often hits 50% memory usage. Does this mean I'm using 900 MB of hard-drive space as memory? I wanted VMware to use 2 GB of physical memory, no swap. Can anyone tell me how to achieve this?

    Setup: Debian Lenny 5.0.3, VMware Server 2.0.2
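
    For what it's worth, these are the .vmx and host-config options commonly cited for keeping guest memory in RAM rather than in a backing file or swap on VMware Server; treat them as assumptions to verify against the VMware Server 2.0 documentation rather than a confirmed recipe:

        # in the guest's .vmx
        sched.mem.pshare.enable = "FALSE"
        MemTrimRate = "0"
        mainMem.useNamedFile = "FALSE"

        # in /etc/vmware/config on the host
        prefvmx.useRecommendedLockedMemSize = "TRUE"
        prefvmx.minVmMemPct = "100"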

    Read the article

  • Chrome drag and drop download links

    - by Brad
    In Chrome, I used to be able to take a link to a file and just drag to a folder on my system. Chrome would then download whatever resource was at the URL for the link and put it into the folder dropped into. This was particularly handy when using Gmail. If there was an attachment, I could just click and drag it into a folder, and Chrome would download it for me to the correct place. Now I have to hit download, and then drag from the download bar when it is finished. Has this feature been removed? Is there any way to bring it back?

    Read the article
