Search Results

Search found 10595 results on 424 pages for 'ab testing'.

Page 327/424 | < Previous Page | 323 324 325 326 327 328 329 330 331 332 333 334  | Next Page >

  • SVN, Samba and Symbolic Links. How to get them all to play together?

    - by Camsoft
    I've got a website project under version control that relies on files from an unversioned directory on the same server, referenced via symbolic links. I'm currently storing the symbolic links in the repository. The idea is that if someone checks out a working copy onto the same server, they can edit and test it before committing it back to the repository. When they check out their working copy, the symlinks are set up correctly, so the entire site works during testing.

    The users who work on the project are Windows users, so I've set up Samba shares on the server and mapped them to network drives in Windows. People can edit their working copies directly on the server via the network shares, test them in a web browser, and then commit their changes back to the repository via TortoiseSVN.

    The problem: Samba resolves the symlinks as expected, but when a user tries to commit their changes back to the repository, TortoiseSVN treats the linked files as part of the project and tries to commit the target files to the repository rather than the symlinks themselves. I tried turning off symlink support in Samba, which means the linked files cannot be resolved; I don't really want people to have access to the linked files, nor do I want to import the linked files into the repository. The problem with this is that I then get:

        Can't stat '\webserver\projects\working\project\symlinked_file.php'. Access is denied

    Apart from the symlink problem, everything else works 100% perfectly. Users can either check out website projects to their own machines and work on them (but can't test), or check them out to their space on the dev web server and work on them and fully test. So I don't want to change the workflow; I just need a solution to the symbolic link issue. Many thanks.

    Originally posted on StackOverflow: http://stackoverflow.com/questions/2400917/svn-samba-and-symbolic-links-how-to-get-them-all-to-play-together
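    For reference, the symlink behaviour Samba exposes is controlled by a handful of options, and they can be set per share rather than globally, so the project share can behave differently from the rest of the server. A minimal smb.conf sketch (the share name and path below are assumptions, not taken from the question):

        [global]
            # With UNIX extensions off, SMB clients cannot see or create
            # symlinks themselves; the server resolves them instead.
            unix extensions = no

        [projects]
            path = /var/www/projects
            read only = no
            # Per share: resolve symlinks on the server side, but refuse
            # links that point outside the share tree.
            follow symlinks = yes
            wide links = no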

    Read the article

  • Firefox not displaying icons in KhanAcademy

    - by ADTC
    If you don't know what Khan Academy is, check it out. It's awesome. (For testing purposes you may view any video on the website.)

    My problem -- it's a minor problem, but annoying -- is that in Firefox (Windows 7), the icons below the video are shown as boxes with hex codes in them. This means the icons come from some font that isn't getting downloaded by Firefox. For comparison, the icons display correctly in Chrome (Windows 7), Safari (Mac OS X) and Stainless (Mac OS X).

    I checked the source and found that the font in question is called "FontAwesome". I found a question on Stack Overflow that may explain why this happens -- the CSS does use single quotes to enclose the font's src location. However, I don't have any write access to the Khan Academy servers, so I can't modify the actual website. I want to know if this can be fixed in Firefox, and how. I can run Greasemonkey scripts if that would help. Also, would manually downloading the font and adding it to Windows' Fonts folder help? I tried this with the TTF font, and it does not help.

    For reference, the CSS that sets this font up (not processed properly by Firefox) is:

        @font-face {
            font-family: 'FontAwesome';
            src: url('./fontawesome-webfont.eot');
            src: url('./fontawesome-webfont.eot?#iefix') format('embedded-opentype'),
                 url('./fontawesome-webfont.woff') format('woff'),
                 url('./fontawesome-webfont.ttf') format('truetype'),
                 url('./fontawesome-webfont.svg#FontAwesome') format('svg');
            font-weight: normal;
            font-style: normal;
        }

        [class^="icon-"]:before,
        [class*=" icon-"]:before {
            font-family: FontAwesome;
            font-weight: normal;
            font-style: normal;
            display: inline-block;
            text-decoration: inherit;
        }

    Read the article

  • VMWare steals IP addresses

    - by Ishan Amin
    I'm having a peculiar problem that I think I have narrowed down to VMware. For the past year, every once in a while we lose our internet connection, and not all users (about 10 users) go down at the same time; it's usually one by one. First someone will call me and say "Internet is down", then we reset the router, modem and switch and it works again for a while, then goes down again without any pattern or repeatable sequence. We repeat the steps to get everyone in the office running again.

    We called our Internet Service Provider and they keep saying, "We see your modem and we see your router; from our end everything is OK." We have replaced our router, switch and modem -- twice!

    Last Friday it dawned on me that every time we power on a VMware machine, this sequence of taking everyone down starts, which also explains the "IP Conflict Found" message that my users get. We do a lot of VMware testing and, lo and behold, it takes my internet down. My Yahoo and Gtalk sessions continue working, but web browsing is down when the VMware machines are started. I use bridged networking for all the VMware machines, but I don't know what else to set it to.

    Now, sorry for the long rambling, but does anyone have any clue how to stop this? Thanks, IA
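    If the guests don't need to be reachable from the rest of the office LAN, one hedged option is switching them from bridged to NAT networking, which keeps their addresses off the office subnet entirely. A sketch of the per-VM setting in the .vmx file (the adapter number is an assumption; this can also be changed in the VM's network settings dialog):

        # Change the first virtual adapter from bridged to NAT: the guest
        # then gets its address from VMware's private DHCP range instead of
        # competing for leases on the office network.
        ethernet0.connectionType = "nat"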

    Read the article

  • .NET 2.0 Application now running slow on IIS 7.5

    - by Valien
    I recently moved (and am still testing) an application from a Windows Server 2003 machine (physical box) running IIS 6.x to a Windows Server 2008 R2 Standard VM running IIS 7.5. The application targets .NET Framework 2.0 and runs under a 2.0 application pool. The site works great except for one thing: it takes forever to get a request back. I've been tracking it with Chrome's Inspect Element; a request to the site can take up to 45 seconds to answer. When it does respond, the pages render instantly -- it's the initial request that's killing it.

    I see no errors in the application logs, Windows Event Viewer or even the IIS logs, so I'm not sure where to look next. One change is that the app previously sat behind a PIX firewall and is now in a DMZ zone in a larger network environment (and I believe NetScaler is also being used to manage the network). I don't have the rights or ability to look at the network itself, but I can ask the data center folks to look deeper into it. First, though, I want to make sure it's not my application or IIS causing the slowdown.

    In summary: the .NET 2.0 application works great on IIS 6.x; after the move to IIS 7.5 it is slow to respond, but once it responds the pages render instantly.

    Edit for solution: Found out that it was the SOAP calls that were slowing the site down. In the new data center my application cannot make its outbound SOAP calls, so they time out after 40-45 seconds or so. Now trying to find out if I can install a proxy server to redirect this...
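    On the proxy idea: if an HTTP proxy is (or becomes) available in the DMZ, outbound calls from a .NET 2.0 application can be routed through it in web.config without code changes. A hedged sketch, with the proxy host and port as placeholders:

        <!-- web.config: route outbound HttpWebRequest/SOAP client traffic
             through a proxy (the proxy address below is hypothetical). -->
        <configuration>
          <system.net>
            <defaultProxy enabled="true">
              <proxy proxyaddress="http://proxy.example.local:8080"
                     bypassonlocal="true" />
            </defaultProxy>
          </system.net>
        </configuration>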

    Read the article

  • mount error 5 = Input/output error

    - by alharaka
    I am running out of ideas. After a long period of testing this morning, I cannot seem to get this to work, and I have no idea why. I want to mount a Windows SMB/CIFS share from a Debian 5.0.4 VM, and it is not cooperating. This is the command I am using:

        debianvm:/home/me# whoami
        root
        debianvm:/home/me# smbclient --version
        Version 3.2.5
        debianvm:/home/me# mount -t cifs //hostname.domain.tld/share /mnt/hostname.domain.tld/share --verbose -o user=SUBADDOMAIN.ADDOMAIN.DOMAIN.TLD/username
        mount.cifs kernel mount options: unc=//hostname.domain.tld\share,ip=10.212.15.53,domain=SUBADDOMAIN.ADDOMAIN.DOMAIN.TLD,ver=1,rw,user=username,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,pass=*********
        mount error 5 = Input/output error
        Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
        debianvm:/home/me#

    The word on the nets has not been very specific, and unfortunately it is almost always environment-specific. I receive no authentication errors. I have tried mount -t smbfs and mount -t cifs, along with smbmount and such, and I get the same error each time. I doubt it is a problem with DNS resolution, because logging shows the correct IP address. dmesg | tail -f no longer shows authentication errors when I format the domain and username accordingly. I have played a little with iocharset=utf8, file_mode, and dir_mode as described here. That did not help either. I have also tried ntlm and ntlmv2, assuming it might be a minimum-auth-method problem, but even without forcing sec=ntlmv2 it can still authenticate without errors.

    smbclient -L hostname.domain.tld -W SUBADDOMAIN.ADDOMAIN.DOMAIN.TLD -U username correctly lists all the shares and shows the following:

        Domain=[SUBADDOMAIN] OS=[Windows 5.0] Server=[Windows 2000 LAN Manager]

            Sharename       Type      Comment
            ---------       ----      -------
            IPC$            IPC       Remote IPC
            ETC$            Disk      Remote Administration
            C$              Disk      Remote Administration
            Share           Disk

        Connection to hostname.domain.tld failed (Error NT_STATUS_CONNECTION_REFUSED)
        NetBIOS over TCP disabled -- no workgroup available

    I find the last line intriguing/alarming. Does anyone have any pointers? Maybe I misread the effin manual.
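    For what it's worth, a hedged variant worth trying spells every option out: the domain passed separately rather than folded into user=, an explicit security mode, TCP port 445 (direct SMB over TCP, since the server reports NetBIOS over TCP disabled), and the IP address to rule out name resolution. A sketch using the names from the output above:

        # Hypothetical retry with everything explicit
        mount -t cifs //10.212.15.53/share /mnt/hostname.domain.tld/share \
            -o username=username,domain=SUBADDOMAIN.ADDOMAIN.DOMAIN.TLD,sec=ntlmv2,port=445,iocharset=utf8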

    Read the article

  • Is this way of using Excel 2007 Pivot tables for BI scalable?

    - by Sim
    Hi all,

    Background:

      - We need to consolidate sales data across the country to do analysis.
      - Our internet connection / IT expertise / IT investment is not very strong, therefore a full BI solution is out of the question.
      - I tried several SaaS BI solutions (GoodData, ZohoReports) and while they're good, they don't seem to fully support what we need.
      - We're looking at about 2 million records for every 2 months.

    My current approach:

      - Our 10 sites currently gather data from all their branches and consolidate it into one Excel file with a pivot table and embedded source data.
      - At HQ, I will ask the 10 sites to send back those Excel files periodically.
      - We will import those Excel files into our MSSQL server.
      - There will be a master Excel file that has the same pivot table (as the ones from the site Excel files), with the MSSQL server as its data source.

    More details:

      - For testing, I currently use MSSQL 2008 Express on my laptop.
      - So far, I have imported our transactions for the past 2 months and there are 2 million+ rows in one table in MSSQL (we just use one table, corresponding to our common pivot table structure). The DB size is ~600 MB.
      - The master Excel file, not including the source data, is < 10 MB. Including the source data increases the size to 60 MB (so I suppose Office 2007 automatically compresses the data?).
      - I tried using the pivot table (drag-and-drop fields) and the performance so far is OK (my laptop specs: C2D T7200, 3 GB RAM, Windows XP).

    So my questions are: if we're looking at a full year of transactions (roughly 15 million rows in MSSQL 2008 Express, 3.6 GB in size):

      - Is there any issue with 15 million rows in one table in SQL Express?
      - Is there any performance issue with the pivot table at that point? Can it still embed the source data? (I googled but didn't find the maximum size of source data Excel 2007 can embed.)
      - Any other suggestions on how we can do this better? Given that we can't afford a full BI solution, is there any lightweight / budget / SaaS BI that you can recommend?

    Thanks
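    One hedged way to keep the workbook small at full-year volume is to let SQL Server do the aggregation and point the pivot table at a pre-summarised view instead of the raw rows. A rough T-SQL sketch -- the table and column names are made up for illustration, not taken from the question:

        -- Hypothetical schema: Sales(SiteId, ProductId, SaleDate, Qty, Amount)
        -- Summarise the ~15M detail rows down to one row per site/product/month,
        -- then use this view (or a materialised copy of it) as the pivot source.
        CREATE VIEW dbo.SalesMonthlySummary AS
        SELECT SiteId,
               ProductId,
               DATEADD(MONTH, DATEDIFF(MONTH, 0, SaleDate), 0) AS SaleMonth,
               SUM(Qty)    AS TotalQty,
               SUM(Amount) AS TotalAmount
        FROM dbo.Sales
        GROUP BY SiteId, ProductId, DATEADD(MONTH, DATEDIFF(MONTH, 0, SaleDate), 0);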

    Read the article

  • Disadvantages of enabling 'Low Fragmentation Heap' LFH on Windows Server 2003?

    - by James Wiseman
    I've been investigating an issue with a production Classic ASP website running on IIS 6 which seems indicative of memory fragmentation. One of the suggestions for how to ameliorate this came from Stack Overflow: "How can I find why some classic asp pages randomly take a real long time to execute?". It suggested flipping a setting in the site's global.asa file to turn on the Low Fragmentation Heap (LFH). The following code (with a registered version of the accompanying DLL) did the trick:

        Set LFHObj = CreateObject("TURNONLFH.ObjTurnOnLFH")
        LFHObj.TurnOnLFH()
        application("TurnOnLFHResult") = CStr(LFHObj.TurnOnLFHResult)

    (Really, the code isn't that important to the question.) The author of a linked post reported a seemingly magic resolution to this issue and, reading around a little more, I discovered that this setting is enabled by default on Windows Server 2008. So, naturally, this left me a little concerned:

      - Why is this setting not enabled by default on 2003, or
      - If it works in 2008, why has Microsoft not issued a patch to enable it by default on 2003?

    I suspect the answer to the above is the same for both (if there is one). Obviously, we're testing it in a non-production environment and doing an array of metrics and comparisons to judge whether it does help us. But aside from this, I'm really just trying to understand if there's any technical reason why we should do this, or if there are any gotchas that we need to be aware of.

    Read the article

  • BackupExec 2012 File System Archiving - Access is denied to Remote Agent

    - by AllisZero
    Gentlemen,

    I've been struggling with a trial version of Symantec Backup Exec 2012 for about a week now. It was installed as an upgrade to our 12.5 license, and the setup completed with no issues. The reason I upgraded is solely for the File System Archiving option, as I'm working to reduce the amount of live data on my servers.

    Backups work A-OK, and I have followed the instructions in the admin manual to make sure I had met all the requirements. The account BE runs under is a member of the local Administrators group, as required, and has been added to the test share that I'm using to evaluate the archiving function. Testing the credentials in the job setup window always works fine, and I am able to add both regular and Admin$ shares to my archive selection. However, every time I run the archive job, I get the following message: https://dl.dropbox.com/u/59540229/BEXec.png

    I've already tried to troubleshoot DNS resolution issues as suggested in the Symantec KB, to no avail. The only thing I can think of at this point is that a trial license doesn't allow me to use the archiving function, although that would seem silly on their part. I'd appreciate any assistance or information. Thanks.

    Read the article

  • Celery daemon run as an Ubuntu service does not consume tasks, but running it from the terminal does

    - by Guy
    On Ubuntu 11.10, I need to issue Python tasks from Django using Celery. I'm currently testing on the same machine, but eventually the Celery worker should run on a remote machine. Django uses the following settings:

        BROKER_HOST = "127.0.0.1"
        BROKER_PORT = 5672
        BROKER_VHOST = "/my_vhost"
        BROKER_USER = "celery"
        BROKER_PASSWORD = "celery"

    I can also see my task queued in http://localhost:55672/#/queues. The Celery daemon uses the following configuration (celeryconfig.py):

        BROKER_HOST = "127.0.0.1"
        BROKER_PORT = 5672
        BROKER_USER = "celery"
        BROKER_PASSWORD = "celery"
        BROKER_VHOST = "/my_vhost"
        CELERY_RESULT_BACKEND = "amqp"

        import os
        import sys
        sys.path.append(os.getcwd())

        CELERY_IMPORTS = ("tasks", )

    Running celeryd -l info works well, and now I want to run it as a service. I've followed the instructions from http://ask.github.com/celery/cookbook/daemonizing.html and now I'm trying to run it using:

        sudo /etc/init.d/celeryd start

    But the message is not being consumed, and there is no error in the celery log either. /etc/default/celeryd:

        CELERYD_NODES="w1"
        CELERYD_CHDIR="/path/to/django/project"
        CELERYD_OPTS="--time-limit=300 --concurrency=1"
        CELERY_CONFIG_MODULE="celeryconfig"

        # %n will be replaced with the nodename.
        CELERYD_LOG_FILE="/var/log/celery/%n.log"
        CELERYD_PID_FILE="/var/run/celery/%n.pid"

        # Workers should run as an unprivileged user.
        CELERYD_USER="celery"
        CELERYD_GROUP="celery"

    I've also created a celery user in Ubuntu; I'm not sure if that's necessary. Any help will be appreciated. Thanks, Guy
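    One hedged sanity check is to reproduce the daemon's environment by hand: run the worker in the foreground as the same unprivileged user, from the same directory, so any import or permission problem the init script hides is printed to the terminal instead. A sketch using the paths from the settings above:

        # Run the worker exactly as the init script would, but in the foreground
        cd /path/to/django/project
        sudo -u celery env CELERY_CONFIG_MODULE=celeryconfig celeryd --loglevel=DEBUG

        # Also confirm the celery user can actually write its log and pid files
        ls -ld /var/log/celery /var/run/celery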

    Read the article

  • Has anyone else experienced page fault crashes with Snow Leopard on MacBook Pro?

    - by BruceMartin
    I bought a MacBook Pro on Sept 3rd from MacMall. As I was using it to learn Snow Leopard (this is my first Mac; I am a long-time Windows developer), it would crash every one or two hours. After calling Apple support, I dropped it off at the Apple store for diagnostic testing and repair.

    When I picked up the computer from Apple, they told me that it did not crash while they had it. They suspected a software problem, so they had done a fresh install of Snow Leopard for me. At home I went through the startup procedure with the newly installed Snow Leopard. Then I downloaded the iPhone SDK, and the computer crashed again while I was away waiting for the download to finish. I was using a USB mouse, which was the only device attached, and no other software was installed. I was presented with a dump that mentions terms like "panic", "kernel trap", and "page fault".

    Does anyone have any idea what this problem might be? I really cannot use this MacBook under these circumstances.

    Read the article

  • Will a 2.4GHz WAP interfere with a 5.0GHz WAP if placed directly next to it?

    - by Dan
    This is mostly a curiosity question for people who know more about radio and Wi-Fi than I do. The 2.4 GHz band is massively overpopulated near my house, to the point of sometimes getting 1000 ms pings to the router from only a few feet away. inSSIDer finds at least 10 broadcasting SSIDs within around 15 seconds of starting, so this isn't a real surprise to me! Sometimes I can get good results by changing the channel to something like 3 or 8, but it's usually temporary, as the others use auto channel selection and hop around.

    Now, the router I have is capable of 5.0 GHz, as is the laptop I type this on. Switching to 5.0 GHz gives superb results: I can download at ~90 Mbps and get consistent 1 ms pings. The problem is that only this laptop supports 5.0 GHz!

    My question: would I still get decent 5.0 GHz performance if I place a 2.4 GHz access point directly next to my router? And, indeed, will 2.4 GHz continue working as normal? Testing would be the obvious step, but I threw out all my superfluous equipment in a recent house move. My understanding is that I should get good performance, certainly in comparison to having two devices using the same frequency range, but I believe there will be some impact simply by virtue of them being directly next to each other. (Cabling is not an option because this is a rented house.)

    Read the article

  • Search behavior of Windows 7 start menu

    - by Kevin Ivarsen
    I'm coming to Windows 7 from XP, and there are aspects of the start menu search that I like. However, some behaviors seem either inconsistent or surprising to me. For example:

      - If I type "Pa" into the search bar, Paint is the first result (under the "Programs" heading), and it is selected for me. I can just hit Enter to start the program.
      - If I have a standalone exe "testing" on my desktop and I type "test", the program comes up as the first item (under the "Files" heading), but it is not selected for me. I have to hit down-down-down-Enter to open it from the keyboard. The same appears to be true for shortcuts and folders.

    What classifies something as a "Program" versus a "File"? Is there any way to configure the start menu so that the first search result is always selected? As a heavy keyboard user, it seems insane for the behavior to be inconsistent, and to require so many keypresses to select the top result.

    Also, are there resources that document the details, limitations, and tricks of the start menu search? (For example, a "Proc Exp" search will match "Process Explorer", but not "ProcessExplorer".)

    EDIT: I've found that instead of hitting down-down-down to select the first item (when no Programs are in the list), you can just hit Tab. This helps a bit, but the inconsistent behavior still makes this search feature more awkward and frustrating than necessary.

    Read the article

  • Correct MySQL username/password, but getting Access Denied error when run from script

    - by Nick
    I'm currently trying to run the following command from within a shell script:

        /usr/bin/mysql -u username -ppassword -h localhost database

    It works perfectly fine when executed manually, but not from within a script. When I try to execute a script that contains that command, I get the following error:

        ERROR 1045 (28000) at line 3: Access denied for user 'username'@'localhost' (using password: YES)

    I literally copied and pasted the working command into the script. Why the error? As a side note: the ultimate intent is to run the script with cron.

    EDIT: Here is a stripped-down version of the script that I'm trying to run. You can ignore most of it up until the point where it connects to MySQL around line 19.

        #!/bin/sh

        # Run download script to download product data
        cd /home/dir/Scripts/Linux
        /bin/sh script1.sh

        # Run import script to import product data to MySQL
        cd /home/dir/Mysql
        /bin/sh script2.sh

        # Download inventory stats spreadsheet and rename it
        cd /home/dir
        /usr/bin/wget http://www.url.com/file1.txt
        mv file1.txt sheet1.csv

        # Remove existing export spreadsheet
        rm /tmp/sheet2.csv

        # Run MySQL queries in "here document" format
        /usr/bin/mysql -u username -ppassword -h localhost database << EOF
        -- Drop old inventory stats table
        truncate table table_name1;

        -- Load new inventory stats into table
        Load data local infile '/home/dir/sheet1.csv' into table table_name1
        fields terminated by ',' optionally enclosed by '"' lines terminated by '\r\n';

        -- MySQL queries to combine product data and inventory stats here

        -- Export combined data in spreadsheet format
        group by p.value
        into outfile '/tmp/sheet2.csv'
        fields terminated by ',' optionally enclosed by '"' lines terminated by '\r\n';
        EOF

    EDIT 2: After some more testing, the issue is with the << EOF at the end of the command, which is there for the "here document". When it is removed, the command works fine. The problem is that I need << EOF there so that the MySQL queries will run.
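    As a hedged alternative for the cron use case, the credentials can be moved out of the command line into an option file, which sidesteps quoting and expansion surprises inside the script and keeps the password out of the process list. A sketch with a placeholder path:

        # Hypothetical option file /home/dir/.my.cnf (chmod 600), containing:
        #   [client]
        #   user=username
        #   password=password

        # --defaults-extra-file must be the first option on the command line
        /usr/bin/mysql --defaults-extra-file=/home/dir/.my.cnf -h localhost database << EOF
        select 1;
        EOF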

    Read the article

  • Looking for a "light" compositing manager for GNOME

    - by detly
    I have an HP Pavilion DM3 (graphics is nVidia GeForce G105M) running Debian Squeeze with GNOME 2.30. My preference for a desktop environment is GNOME + Metacity + Nautilus. I'd like to use Docky, but it requires compositing, so I'm looking for a relatively "light" compositing manager.

    I realise that "light" is ambiguous, but I basically want something that won't chew through my notebook's battery because of CPU or GPU usage. I know that Metacity is capable of compositing, but as far as I'm aware it's still considered experimental. Some people report that it's smooth and lightweight; others claim that it eats up processor time. I've also seen references to a problem with nVidia, but no actual details.

    I'm not averse to Compiz, but I haven't used it before and I don't know what to expect in terms of "weight". And maybe there's something else I haven't heard of. So can anyone recommend anything? Or dispel my idea that Metacity is not the right tool for the job?

    (Originally posted on GNOME forums.)
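    For reference, Metacity's built-in compositor can be toggled on GNOME 2.x with a single gconf key, which makes it cheap to try before committing to Compiz or anything heavier. A hedged sketch (set the key back to false to turn it off again):

        # Enable Metacity's compositing manager for the current user
        gconftool-2 --type bool --set /apps/metacity/general/compositing_manager true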

    Read the article

  • SVN Authentication with LDAP and Active Directory

    - by Alex Holsgrove
    I am having a few problems getting SVN authentication to work with LDAP / Active Directory. My SVN installation works fine, but after enabling LDAP in my Apache vhost, I just can't get my users to authenticate. I can use a selection of LDAP browsers to successfully connect to Active Directory, but I just can't seem to get this to work.

    SVN is set up in /var/local/svn. The server is svn.domain.local. For testing, my repository is /var/local/svn/test. My vhost file is as follows:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerAlias svn.domain.local
            ServerName svn.domain.local
            DocumentRoot /var/www/svn/

            <Location /test>
                DAV svn
                #SVNListParentPath On
                SVNPath /var/local/svn/test
                AuthzSVNAccessFile /var/local/svn/svnaccess
                AuthzLDAPAuthoritative off
                AuthType Basic
                AuthName "SVN Server"
                AuthBasicProvider ldap
                AuthLDAPBindDN "CN=adminuser,OU=SBSAdmin Users,OU=Users,OU=MyBusiness,DC=domain,DC=local"
                AuthLDAPBindPassword "admin password"
                AuthLDAPURL "ldap://192.168.1.6:389/OU=SBSUsers,OU=Users,OU=MyBusiness,DC=domain,DC=local?sAMAccountName?sub?(objectClass=*)"
                Require valid-user
            </Location>

            CustomLog /var/log/apache2/svn/access.log combined
            ErrorLog /var/log/apache2/svn/error.log
        </VirtualHost>

    In my error.log, I don't seem to get any bind errors (should I be looking elsewhere?), just the following:

        [Thu Jun 21 09:51:38 2012] [error] [client 192.168.1.142] user alex: authentication failure for "/test/": Password Mismatch, referer: http://svn.domain.local/test/

    At the end of AuthLDAPURL, I have seen people using TLS and NONE, but neither seems to help in my case. I have the LDAP modules loaded and have checked as much as I know, so any help would be most welcome. Thanks
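    One hedged thing to try is widening the search base and filtering on user objects, in case the failing account actually lives outside OU=SBSUsers; the only change from the config above would be the AuthLDAPURL line (the base DN below is an assumption about the AD layout, not taken from the question):

        # Search the whole domain for user objects, keyed on sAMAccountName,
        # over a plain (non-TLS) connection
        AuthLDAPURL "ldap://192.168.1.6:389/DC=domain,DC=local?sAMAccountName?sub?(objectClass=user)" NONE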

    Read the article

  • Multiple *NIX Accounts with Identical UID

    - by Tim
    I am curious whether there is a standard expected behavior, and whether it is considered bad practice, when creating more than one account on Linux/Unix that have the same UID. I've done some testing on RHEL5 with this and it behaved as I expected, but I don't know if I'm tempting fate using this trick.

    As an example, let's say I have two accounts with the same IDs:

        a1:$1$4zIl1:5000:5000::/home/a1:/bin/bash
        a2:$1$bmh92:5000:5000::/home/a2:/bin/bash

    What this means is:

      - I can log in to each account using its own password.
      - Files I create will have the same UID.
      - Tools such as "ls -l" will list the UID as the first entry in the file (a1 in this case).
      - I avoid any permissions or ownership problems between the two accounts because they are really the same user.
      - I get login auditing for each account, so I have better granularity for tracking what is happening on the system.

    So my questions are:

      - Is this ability designed, or is it just the way it happens to work?
      - Is this going to be consistent across *nix variants?
      - Is this accepted practice?
      - Are there unintended consequences to this practice?

    Note: the idea here is to use this for system accounts and not normal user accounts.
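    For what it's worth, shadow-utils supports creating such duplicate-UID accounts explicitly rather than by editing /etc/passwd by hand; a small sketch (the account names are just examples):

        # Create the first account normally, then a second one sharing its UID;
        # -o (--non-unique) tells useradd to allow the duplicate UID.
        useradd -u 5000 a1
        useradd -o -u 5000 a2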

    Read the article

  • What is the recommended glusterFS configuration for a growing website?

    - by montana
    Hello,

    I have a website that is tracking towards 50 million hits per day on average, and within the next 3 months it should be over 100 million hits per day. We are trying to use GlusterFS v3.0.0 (with the latest patches as of 1-17-2010).

    We've just upgraded to a load-balanced environment that has 3 physical hosts with 6 XenServer 5.5u1 VMs (2 on each host) to serve web page traffic. Each machine has 6 RAID-6 local storage drives (7200 RPM SATA). The old machine we came from had one mirrored 10k SAS drive. We also set up GlusterFS with 3 bricks, one on each host, serving the 6 VMs as clients.

    In testing, everything seemed fine. However, when we went to production, there just didn't seem to be enough I/Os available to serve traffic even upwards of 15 million hits. Weeks prior, our old server was able to handle traffic, maxed out, at 20 million.

    Are there any recommended configurations for such an application, or things to be aware of that aren't apparent in the documentation at gluster.org for a site our size?

    Read the article

  • Why does my computer crash?

    - by chobo2
    Hi, my computer keeps crashing and I don't know why. At first I thought it was crashing because I had my CPU overclocked, so I set the CPU back to its regular speed. This did not help. I then thought it was because 2 sticks of my memory were from a computer that suffered a power surge. However, I just ran the Windows Memory Diagnostic tool (extended) and after about 6 hours of testing it found no errors.

    So now the only thing left is Windows 7 64-bit. I had overclocked my CPU for a couple of months while running XP and never had a problem. I installed the memory and Windows 7 at the same time, but I'm not sure it is the memory, since it passed the diagnostic tests. However, I'm not sure it is Windows 7 either, as I have installed it twice in the last year. I really don't want to go back to XP to find out.

    So here are my blue screens of death (from bluescreen): https://sites.google.com/site/myerrorswin7/errors (I hope you enjoy my great site lol). As you can see, most of them are different:

      - NTFS_FILE_SYSTEM
      - KMODE_EXCEPTION_NOT_HANDLED
      - BAD_POOL_HEADER
      - IRQL_NOT_LESS_OR_EQUAL
      - SYSTEM_THREAD_EXCEPTION_NOT_HANDLED
      - SYSTEM_SERVICE_EXCEPTION

    Read the article

  • Deploying ASP.NET MVC to Windows Server 2003

    - by pete the pagan-gerbil
    Hi, I have a problem with an MVC 2 website on Windows Server 2003 running IIS 6. It is externally hosted, but we have a 2003 server internally for testing. The internal server runs the website fine; the external server gives a 403 ("website declined to show this page") error when navigating to the root of the site, and a 404 if I try to navigate directly to a page resource.

    I have tried the wildcard ISAPI mapping and extension mapping, and a couple of other common checks (I forget exactly which now; most of them were already set correctly), but so far no joy. All the settings can be replicated on our internal server, and there the pages return properly. The IIS logs just show exactly what the browser shows: 404 and 403 errors.

    I've read about a different level of trust required for an MVC application compared to a WebForms application. How can I check permissions and trust levels on the external and internal servers (assuming I am able to check that)? If that could cause these errors, what are the minimum levels that MVC requires? Failing that, what else might be causing this error for me to try out?
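    On the trust-level question, a hedged quick check is to look for (or temporarily add) a trust element in the site's web.config and compare the two servers; MVC 2 is generally reported to run under Medium trust or higher, so making the level explicit takes the guesswork out of the comparison:

        <!-- web.config: make the trust level explicit for comparison between
             the internal and external servers (Full shown here; Medium is the
             usual minimum that hosts grant for MVC). -->
        <system.web>
          <trust level="Full" />
        </system.web>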

    Read the article

  • Need help trying to diagnose Symmetrix SAN performance issues

    - by arcain
    I am helping to benchmark hardware for a new SQL Server instance, and the volume presented to the OS for the data files is carved from a set of spindles on a Symmetrix SAN. The server has yet to have SQL Server installed, so the only activity on the box is our benchmarking.

    Now, our storage engineers say that this volume and its resources are dedicated to our new server (I don't have access to see the actual SAN config); however, the performance benchmarks are troubling. For example, the numbers look good until suddenly, and randomly, our IO benchmarking tool reports wait times of 100 seconds, and perfmon shows disk queue lengths of 255.

    This SAN has an 8 GB cache, and there are other applications besides ours that use the SAN. I'm wondering whether (even though the spindles for our volumes should be dedicated to us) the cache may be getting hammered during the performance testing, or perhaps the spindles our volumes are on aren't really dedicated to us.

    We're not getting much traction from our storage engineers in helping us track down the problem, so if anybody has experience diagnosing a problem like this and would like to share insights and troubleshooting methodologies, I'd appreciate it.

    Read the article

  • Hardware freeze during disk activity

    - by Thomi
    I built myself a Linux-based NAS. It has several drives of various sizes and ages in an LVM configuration, with 800 GB or so of data. The data is served using a simple Samba server. This was working flawlessly, but after physically moving it, it has developed a strange fault: whenever I do something on the server that causes disk activity, the entire machine freezes hard. This kills any open network connections to the box and generally makes it useless. If I leave the machine for a few minutes it seems to come right again, but obviously this isn't really a solution.

    There are no error or warning messages in syslog or the kernel logs. If I power the machine on and leave it, it runs for several days without locking up (after that time I stopped testing). It doesn't freeze instantly -- obviously it doesn't freeze while booting, and I can normally log in via SSH and poke around in a few log files for a couple of minutes before it dies.

    My question is: what diagnostic tests can I run to determine the cause?
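    Since the fault appeared after the box was physically moved, a hedged first pass is to test each member drive and its cabling individually before suspecting LVM or Samba. A sketch (device names are examples; repeat per drive):

        # SMART health and error counters -- rising UDMA_CRC or pending-sector
        # counts often point at a cable knocked loose in transit
        smartctl -H /dev/sda
        smartctl -a /dev/sda

        # Long self-test, one drive at a time
        smartctl -t long /dev/sda

        # Force sequential reads off a single drive and see whether that alone
        # reproduces the freeze (then check dmesg once the box recovers)
        dd if=/dev/sda of=/dev/null bs=1M count=4096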

    Read the article

  • Network speed between a VM and another machine not residing on the same host is 11 MB/s at most

    - by Henno
    Problem: network speed between a VM and another machine which is not residing on the same host is 11 MB/s at most.

    Facts:

      - ESXi 5 version is 5.0.0.504890
      - The VM has the latest VMware Tools installed
      - The VM is using the E1000 network driver
      - The physical box runs Windows Server 2008 R2 as the OS
      - CrystalDiskMark says the drive on the physical box can read/write 100 MB/s
      - vCenter is another VM on the ESX host
      - Both the VM and the physical box show a 1 Gbps link speed
      - Configuration > Networking shows vmnic0 as 1000 Full
      - NTttcp is a client/server tool from Microsoft for measuring pure network throughput

    Here's what I've done so far:

    Test 1: The VM runs Filezilla FTP Server (default settings, one user account created); the physical box runs Filezilla FTP Client (default settings). The physical box uploads a big file to the FTP server: transfer speed (as observed by Windows Task Manager on both machines) is ~11 MB/s (bad). The physical box then downloads that file from the FTP server: still ~11 MB/s (bad). Could it be a disk performance issue?

    Test 2: The physical box runs ntttcpr.exe -a 6 -m 6,0,VM_IP_ADDRESS and the VM runs ntttcps.exe -a 6 -m 6,0,PHY_BOX_IP_ADDRESS. Transfer speed: ~11 MB/s (bad). Could it be a switch performance issue?

    Test 3: The physical box runs the vSphere Client. I open Summary > Storage > datastore > Browse Datastore... from the physical box and upload a file to the datastore. Transfer speed (as observed by Windows Task Manager on the physical box): ~26-36 MB/s (good). Could it be a VM-specific issue?

    Test 4: I installed NTttcp on another VM on the same ESX server and measured network performance between VMs on the same ESX server. Transfer speed: ~90-120 MB/s (excellent :)

    Test 5: I have another ESX server on the same site, connecting to the same datastore and the same switch. Both ESX servers have 2 NICs: one NIC goes to the switch while the other goes directly to the other ESX server. I vMotioned one of the testing VMs off to the other ESX host and measured network performance between VMs on different ESX servers with NTttcp. Transfer speed: ~11 MB/s (bad).

    While I'm aware of these threads, they did not help (or I must have missed something):

      - ESXi 4.1 slow file transfer
      - ESXi 5 network performance is slow
      - Debian Etch and ESXi slow network speeds
      - VMWare ESXi slow file copy to guest
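    One hedged thing to try, since VMware Tools are already installed, is swapping the guest's emulated E1000 adapter for the paravirtualised vmxnet3 device and re-running Test 2. A sketch of the .vmx change (the adapter index is an assumption, the VM must be powered off while editing, and the same change can be made via Edit Settings in the vSphere Client instead):

        # Replace the emulated Intel E1000 with the paravirtual adapter
        ethernet0.virtualDev = "vmxnet3"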

    Read the article

  • How to test/debug bad network wiring?

    - by Jack Lloyd
    I recently bought a place already wired with Cat 5E (8 ports, leading to a central closet). However, attempting to get link, nothing works. On closer examination, it was obvious that the ends in the closet were wired backwards (brown on pin 1, etc). The jacks that I've pulled out of the wall do look to be correctly done. However, testing with a network cable tester shows zero continuity between any of the jacks and any of the ports in the closet -- I had expected to just see a 1/8, 2/7, ... 8/1 mismatch, but instead I get nothing at all.

    The runs are accessible and look neat, though they take some bends that seem quite sharp and are in some cases much longer than they need to be (the person who put this in was a professional electrician, but I suspect this was the first time he ran network cabling). My best guess at this point is that he either bought bad cable or put so much tension on it that he snapped wires, though it seems surprising/unlikely that I wouldn't get at least one active wire on one of the 8 lines.

    So, my question: is there anything else I should try or test before I go ripping everything out and running new cable?

    Read the article

  • How do I upgrade Windows Server 2008 R2 Standard (OEM Key) to Enterprise (MSDN Key) using DISM?

    - by Tom Crane
    (Originally asked as "After upgrading to 2008 R2 Enterprise and installing more RAM, Windows can only see 4.00 GB", but now I know what the question really is...)

    My Dell server came preinstalled with 2008 R2 Standard. I upgraded to Enterprise to take advantage of more than 32 GB of RAM. This server is purely for dev and testing, so I want to use my MSDN product key for the upgrade. I originally tried to upgrade using the MSDN Enterprise key, but it wouldn't have it:

        dism /online /Set-Edition:ServerEnterprise /ProductKey:[MSDN key]

        => Error DISM DISM Transmog Provider: PID=5728 Product key is keyed to [], but user requested transmog to [ServerEnterprise] - CTransmogManager::ValidateTransmogrify

    I tried several things, including changing the current product key to the MSDN one. Eventually I used a generic KMS key, which can be found in several TechNet forum posts:

        dism /online /Set-Edition:ServerEnterprise /ProductKey:[KMS Generic Key]

    ...and this appeared to work. I then changed the product key again (using the control panel) to the MSDN key, thinking that was the end of the matter. Only later, when I tried to start up VMs, did I realise I had only 4 GB of usable RAM. I didn't make the connection with the licensing changes at this point and went off on a wild goose chase of BIOS settings, memory configurations and the like. Only when I saw this thread...

        http://social.technet.microsoft.com/Forums/en/winserverTS/thread/6debc586-0977-4731-b418-ca1edb34fe8b

    ...did I make the connection and reapply the generic KMS key, which gave me all the RAM back.

    But now I have a system that isn't properly licensed, and presumably I won't be able to activate it as it is, so I've got 2 days to enjoy it. Is there a way around this without a) rebuilding the server from scratch with the MSDN key from the start, or b) buying a retail Enterprise license?
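    A hedged sequence that is commonly suggested for this situation: do the edition change with the generic KMS client setup key (as above), then swap in the MSDN key and activate with slmgr once the edition is already Enterprise. Whether activation then succeeds with an MSDN key is the open question, so treat this as a sketch, not a guarantee:

        rem Step 1: edition change with the generic KMS client setup key
        dism /online /Set-Edition:ServerEnterprise /ProductKey:[KMS Generic Key]

        rem Step 2: after the reboot, replace the key with the MSDN one and activate
        slmgr.vbs /ipk [MSDN key]
        slmgr.vbs /ato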

    Read the article

  • Apache going straight to 100% mem usage on localhost

    - by Dennis Pedrie
    Hi, I'm running XAMPP on a OS X testing server... I'm the only person sending requests to the server. I've never messed with Apache config before, so I'm kinda without a paddle here. When I start Apache, I get ~10 httpd processes started, and 95% idle CPU. When I request a WordPress page, the CPU usage goes to 50%, and the page loads in about five seconds. It seems like once the page has finished loading, the CPU usage jumps to 100%, almost all of that httpd. A ton of processes get started, and they don't go away, and their CPU usage stays the same. I've changed the MaxRequestPerChild setting and so forth, but nothing seems to solve the problem. Even now, having not send any requests for about 15 minutes, the CPU usage is at 100%. Here's the applicable settings: Timeout 10 KeepAlive On MaxKeepAliveRequests 0 KeepAliveTimeout 3 <IfModule mpm_prefork_module> StartServers 5 MinSpareServers 0 MaxSpareServers 2 MaxClients 20 MaxRequestsPerChild 50 </IfModule> I had always thought that once the request was made, Apache killed the process. Is there anything I can do to bring down the CPU usage, or is this just something I'll have to deal with? Thanks for helping out an Apache idiot.

    Read the article

< Previous Page | 323 324 325 326 327 328 329 330 331 332 333 334  | Next Page >