Search Results

Search found 19211 results on 769 pages for 'ui automated testing'.

Page 617/769

  • Recovering an mdadm+LVM+ext4 partition with a read error

    - by bitwelder
    One of the disks in my NAS has failed. The NAS is running Linux, and it uses mdadm + LVM for its filesystems. I do have a backup of most of the contents, but not of the very last changes, and if possible I'd like to recover those from the failing disk. The disk (a 'green drive' WD10EARS, 1 TB in size) throws this kind of error:

        Oct 3 12:00:41 kernel: [ 3625.620000] ata5.00: read unc at 9453282
        Oct 3 12:00:41 kernel: [ 3625.620000] lba 9453282 start 9453280 end 1953511007
        Oct 3 12:00:41 kernel: [ 3625.620000] sde5 auto_remap 0
        Oct 3 12:00:41 kernel: [ 3625.630000] ata5.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x6
        Oct 3 12:00:41 kernel: [ 3625.630000] ata5.00: edma_err_cause=00000084 pp_flags=00000003, dev error, EDMA self-disable
        Oct 3 12:00:41 kernel: [ 3625.640000] ata5.00: failed command: READ FPDMA QUEUED
        Oct 3 12:00:41 kernel: [ 3625.650000] ata5.00: cmd 60/40:00:e0:3e:90/00:00:00:00:00/40 tag 0 ncq 32768 in
        Oct 3 12:00:41 kernel: [ 3625.650000] res 41/40:00:e2:3e:90/12:00:00:00:00/40 Emask 0x409 (media error) <F>
        Oct 3 12:00:41 kernel: [ 3625.660000] ata5.00: status: { DRDY ERR }

    However, while testing with 'dd', I noticed that if I skip the first 4 kB, the read seems to be OK, i.e. a command like

        dd if=/dev/sde5 of=/dev/null bs=4k count=1000 skip=1

    doesn't return any read error. Supposing there is no other read failure in the rest of the disk, would I be able to recover this 900 GB partition (as mentioned, it's a 'linux raid autodetect' partition that contains an LVM2 volume that contains an ext4 filesystem) if I copy-clone the partition somewhere else, all but the first 4 kB?
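
    A minimal sketch of one way to attempt such a clone, assuming a destination partition at least as large as /dev/sde5 (the destination /dev/sdX1 is a placeholder, not something from the question); conv=noerror,sync zero-fills the unreadable block instead of aborting on it:

        # copy the bad first 4 kB block (zero-filled on error), then the rest
        dd if=/dev/sde5 of=/dev/sdX1 bs=4k count=1 conv=noerror,sync
        dd if=/dev/sde5 of=/dev/sdX1 bs=4k skip=1 seek=1 conv=noerror,sync

    The copy could then be assembled and checked read-only (mdadm --assemble, vgchange -ay, fsck.ext4 -n) before trusting it; GNU ddrescue is another common choice for the same job.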


  • Is this way of using Excel 2007 Pivot tables for BI scalable?

    - by Sim
    Hi all,

    Background:

    - We need to consolidate sales data across the country to do analysis.
    - Our Internet connection / IT expertise / IT investment is not strong, therefore a full BI solution is out of the question.
    - I tried several SaaS BI solutions (GoodData, ZohoReports) and while they're good, they don't seem to fully support what we need.
    - We're looking at about 2 million records for every 2 months.

    My current approach:

    - Our 10 sites currently gather data from all their branches and consolidate it into one Excel file with a Pivot table and embedded source data.
    - In HQ, I will ask the 10 sites to send back those Excel files periodically.
    - We will import those Excel files into our MSSQL server.
    - There will be a master Excel file that has the same Pivot table (as the ones that came from the site Excel files), with the MSSQL server as its data source.

    More details:

    - For testing, I currently use MSSQL 2008 Express on my laptop.
    - So far I have imported our transactions for the past 2 months and there are 2 million+ rows in 1 table in MSSQL (we just use 1 table, corresponding to our common Pivot table structure). DB size is ~600 MB.
    - The master Excel file is < 10 MB if it does not include the source data. Including the source data increases the size to 60 MB (so I suppose Office 2007 automatically compresses the data?).
    - I tried using the Pivot table (drag-and-drop fields) and the performance so far is OK (my laptop specs: C2D T7200, 3 GB RAM, Windows XP).

    So my questions are: If we're looking at a full year of transactions (roughly 15 million rows in MSSQL 2008 Express, 3.6 GB in size), is there any issue with that 15 million rows in 1 table in SQL Express? Is there any performance issue with the Pivot table at that point? Can it still embed the source data? (I googled but didn't find the maximum size of source data Excel 2007 can embed.) Any other suggestions on how we can do this better? Given that we can't afford a full BI solution, is there any light-weight/budget/SaaS BI that you can recommend? Thanks


  • Has anyone else experienced page fault crashes with Snow Leopard on MacBook Pro?

    - by BruceMartin
    I bought a MacBook Pro on Sept 3rd from MacMall. As I was using it to learn Snow Leopard (this is my first Mac; I am a long-time Windows developer), it would crash every one or two hours. After calling Apple support, I dropped it off at the Apple store for diagnostic testing and repair. When I picked up the computer from Apple, they told me that it did not crash while they had it. They suspected a software problem, so they had done a fresh install of Snow Leopard for me. At home I went through the start-up procedure with the newly installed Snow Leopard. Then I downloaded the iPhone SDK, and the computer crashed again while I was away waiting for the download to finish. I was using a USB mouse, which was the only device attached. No other software was installed. I was presented with a dump that mentions terms like "panic", "kernel trap", and "page fault". Does anyone have any idea what this problem might be? I really cannot use this MacBook under these circumstances.


  • Effective Permissions displays incorrect information

    - by Konrads
    I have a security mystery :) The Effective Permissions tab shows that a few sampled users (IT ops) have any and all rights (all boxes are ticked). The permissions show that the local Administrators group has full access, as do some business user groups, none of which the sampled users are members of. The local Administrators group contains some AD IT-ops-related groups of which the sampled users, again, appear not to be members. The sampled users are not members of Domain Administrators either. I've tried tracing backwards (from permissions to user) and forwards (user to permission) and could not find anything. At this point, there are three options:

    1. I've missed something and they are members of some group.
    2. There's another way of getting full permissions.
    3. Effective Permissions is horribly wrong.

    Is there a way to retrieve the decision logic of Effective Permissions? Any hints, tips, ideas?

    UPDATE: The winning answer is number 3 - Effective Permissions is horribly wrong. Comparing the output when run on the server logged on as admin with the output when run as a regular user from a remote computer shows different results: all boxes ticked (FULL access) versus, on the server, none. Actually testing the access, of course, denies access.


  • Scaling databases with cheap SSD hard drives

    - by Dennis Kashkin
    Hey guys! I hope that many of you are working with high-traffic database-driven websites, and chances are that your main scalability issues are in the database. I noticed a couple of things lately:

    Most large databases require a team of DBAs in order to scale. They constantly struggle with the limitations of hard drives and end up with very expensive solutions (SANs or large RAIDs, frequent maintenance windows for defragging and repartitioning, etc.). The actual annual cost of maintaining such databases is in the $100K-$1M range, which is too steep for me :)

    Finally, we have several companies like Intel, Samsung, FusionIO, etc. that have just started selling extremely fast yet affordable SSDs based on SLC flash technology. These drives are 100 times faster in random reads/writes than the best spinning hard drives on the market (up to 50,000 random writes per second). Their seek time is pretty much zero, so the cost of random I/O is the same as sequential I/O, which is awesome for databases. These SSDs cost around $10-$20 per gigabyte, and they are relatively small (64 GB).

    So there seems to be an opportunity to avoid the HUGE costs of scaling databases the traditional way by simply building a big enough RAID 5 array of SSDs (which would cost only a few thousand dollars). Then we don't care if the database file is fragmented, and we can afford 100 times more disk writes per second without having to spread the database across 100 spindles. Is anybody else interested in this? I've been testing a few SSDs and can share my results. If anybody on this site has already solved their I/O bottleneck with SSDs, I would love to hear your war stories!

    PS. I know that there are plenty of expensive solutions out there that help with scalability, for example the time-proven RAM-based SANs. I want to be clear that even $50K is too expensive for my project. I have to find a solution that costs no more than $10K and does not take much time to implement.
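
    A quick way to sanity-check the random-write claims on a candidate drive is a synthetic benchmark; a minimal sketch with fio, assuming the SSD under test is /dev/sdX (a placeholder) and that overwriting its contents is acceptable:

        # 4 KiB random writes, 32 outstanding I/Os, direct I/O, for 60 seconds
        fio --name=randwrite --filename=/dev/sdX --rw=randwrite --bs=4k \
            --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --group_reporting

    The reported IOPS can then be compared against both the vendor's 50,000 figure and the same test run on a spinning disk.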


  • How do I upgrade Windows Server 2008 R2 Standard (OEM Key) to Enterprise (MSDN Key) using DISM?

    - by Tom Crane
    (Originally asked as "After upgrading to 2008 R2 Enterprise and installing more RAM, Windows can only see 4.00 GB", but now I know what the question really is...) My Dell server came preinstalled with 2008 R2 Standard. I upgraded to Enterprise to take advantage of more than 32 GB of RAM. This server is purely for dev and testing, so I want to use my MSDN product key for the upgrade. I originally tried to upgrade using the MSDN Enterprise key, but it wouldn't have it:

        dism /online /Set-Edition:ServerEnterprise /ProductKey:[MSDN key]

        => Error DISM DISM Transmog Provider: PID=5728 Product key is keyed to [], but user
           requested transmog to [ServerEnterprise] - CTransmogManager::ValidateTransmogrify

    I tried several things, including changing the current product key to the MSDN one. Eventually I used a KMS generic key, which can be found in several TechNet forum posts:

        dism /online /Set-Edition:ServerEnterprise /ProductKey:[KMS Generic Key]

    ...and this appeared to work. I then changed the product key again (using the Control Panel) to the MSDN key, thinking that was the end of the matter. Only later, when I tried to start up VMs, did I realise I only had 4 GB of usable RAM. I didn't make the connection with the licensing changes at this point and went off on a wild goose chase of BIOS settings, memory configurations and the like. Only when I saw this thread...

        http://social.technet.microsoft.com/Forums/en/winserverTS/thread/6debc586-0977-4731-b418-ca1edb34fe8b

    ...did I make the connection and reapply the KMS generic key - which gave me all the RAM back. But now I have a system that isn't properly licensed; presumably I won't be able to activate it as it is, so I've got 2 days to enjoy it. With the MSDN key applied, only 4 GB of RAM is usable. Is there a way round this without a) rebuilding the server from scratch with the MSDN key from the start, or b) buying a retail Enterprise license?


  • Can't Install Win2k8 On KVM - Classic 0x80070013 error

    - by javano
    I am trying to install Win2k8 Std as a KVM guest on Debian Squeeze. As you can see from these screenshots:

    - No drives are detected (I have blanked out a 20 GB image for testing) - screenshot1
    - I am using this driver CD - screenshot2
    - I have signed the Win7 driver (I assume this was the most appropriate one?) - screenshot3
    - I can now see an unpartitioned drive - screenshot4
    - But I can't create a new partition on it; I get the error code 0x80070013 - screenshot5

    I have had this error code before, but only on a physical server. If I remember correctly it was complaining because the disks were partitioned as GPT (because it was a server that was being re-purposed), so repartitioning with an MS-DOS table fixed that. This is a blank disk image, though. What is wrong here, and how can I correct this? Thank you.

    UPDATE: I booted the VM with a GParted live disk and formatted the volume with an MS-DOS partitioning scheme and a single 20 GB NTFS filesystem. Now when I boot the Win2k8 CD and load my drivers, I get a different error. As you can see at the bottom of screenshot6: "Windows cannot be installed on this hard drive space. Windows must be installed to a partition formatted as NTFS". Clicking Format produces the error 0x80004005 on the screen, so I think this is still a driver issue, because Windows can see the drive but cannot interact with it properly. Is that insane thinking?
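
    One way to rule the storage driver in or out is to present the image on the emulated IDE bus instead of virtio; a rough sketch with libvirt's virt-install (paths, names and sizes are placeholders, not taken from the question):

        qemu-img create -f raw /var/lib/libvirt/images/win2k8.img 20G
        virt-install --name win2k8 --ram 2048 \
            --cdrom /path/to/win2k8_x64.iso \
            --disk path=/var/lib/libvirt/images/win2k8.img,bus=ide \
            --os-variant win2k8 --vnc

    If setup then partitions and formats the disk happily, the problem lies with the virtio driver being loaded; the disk can be switched back to virtio once a matching Server 2008 driver is installed in the guest.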


  • VLANing on WNR3500L

    - by ageis23
    When I try connecting to my wireless network it attempts to connect, then gives up. There's something strange going on with the MAC addresses: the Ethernet switch and all the VLAN interfaces have the MAC 00:FF:FF:FF:FF:FF.

        config 'switch' 'eth0'
                option 'vlan0' '2 3 4 8*'
                option 'vlan1' '0 8'
                option 'vlan2' '1 8'

        config 'interface' 'loopback'
                option 'ifname' 'lo'
                option 'proto' 'static'
                option 'ipaddr' '127.0.0.1'
                option 'netmask' '255.0.0.0'

        config 'interface' 'lan'
                option 'type' 'bridge'
                option 'ifname' 'eth0.1'
                option 'proto' 'static'
                option 'netmask' '255.255.255.0'
                option 'ipaddr' '192.168.2.1'
                option 'ip6addr' ''
                option 'gateway' '192.168.1.253'
                option 'ip6gw' ''
                option 'dns' ''

        config 'interface' 'wan'
                option 'ifname' 'eth0'
                option 'proto' 'dhcp'
                option 'ipaddr' '192.168.1.8'
                option 'ip6addr' ''
                option 'netmask' '255.255.255.0'
                option 'gateway' '192.168.1.253'
                option 'ip6gw' ''
                option 'dns' '192.168.1.253'

        config 'interface' 'dmz'
                option 'ifname' 'eth0.2'
                option 'proto' 'static'
                option 'ipaddr' '192.168.0.1'
                option 'netmask' '255.255.255.0'

    Any help on this will be greatly appreciated! When I try setting the MAC using macaddr it does nothing. It works perfectly fine when I turn authentication off. I've also discovered that when WPA2 is switched on I don't receive an association reply from the AP. This is my hostapd.conf:

        interface=eth1
        driver=broadcom
        bridge=br-lan
        ssid=O2BB3
        wpa=2
        wpa_passphrase=prettywoman
        wpa_key_mgmt=WPA-PSK
        rsn_pairwise=CCMP

    By the way, that password is only temporary while I'm testing.
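
    For what it's worth, on stock OpenWrt-style builds the usual way to get WPA2-PSK is to let /etc/config/wireless generate the hostapd/supplicant settings rather than maintaining hostapd.conf by hand; a minimal sketch, assuming this firmware uses that mechanism (an assumption - the exact wireless backend for the Broadcom radio may differ):

        uci set "wireless.@wifi-iface[0].ssid=O2BB3"
        uci set "wireless.@wifi-iface[0].encryption=psk2"
        uci set "wireless.@wifi-iface[0].key=prettywoman"
        uci commit wireless && wifi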


  • ProCurve Primary VLAN

    - by fukawi2
    I'm trying to deprecate the use of VLAN 1 on my ProCurve switches; VLAN 1 is unused. I understand that VLAN 1 must exist, but I want to remove it from all ports, especially the trunks between switches. The problem I have is that stacking does not seem to work without VLAN 1. I have changed the primary VLAN and management VLAN on all the switches:

        (config)# primary-vlan 42
        (config)# management-vlan 42
        (config)# no vlan 1 untagged 25

    Port 25 is the link between the 2 switches I'm testing with (the stack master and a member switch); I only want tagged traffic between the switches, no untagged frames. 'show stacking' on the master shows all members as "UP", but I cannot telnet to any of them: "Telnet failed: Connection timed out." All switches have manually assigned (static) IP addresses on VLAN 42, all in the same /25 subnet, as is my desktop. I can telnet to the switches directly from my desktop using the individual switch IP addresses, just not from the master switch. Do I need to reboot the switches for the primary-vlan change to take effect? Or is there something else I'm missing?


  • Will a 2.4 GHz WAP interfere with a 5.0 GHz WAP if placed directly next to each other?

    - by Dan
    This is mostly a curiosity question for people who know more about radio and Wi-Fi than I do. The 2.4 GHz band is massively overpopulated near my house, to the point of sometimes getting 1000 ms pings to the router from only a few feet away. inSSIDer finds at least 10 broadcasting SSIDs within around 15 seconds of starting, so this isn't a real surprise to me! Sometimes I can get good results by changing the channel to something like 3 or 8, but it's usually temporary as the others use Auto Channel and hop around. Now, the router I have is capable of 5.0 GHz, as is the laptop I type this on. Switching to 5.0 GHz gives superb results: I can download at ~90 Mbps and get consistent 1 ms pings. The problem is that only this laptop supports 5.0 GHz! My question: would I still get decent 5.0 GHz performance if I place a 2.4 GHz access point directly next to my router? And, indeed, will 2.4 GHz continue working as 'normal'? Testing would be an obvious step, but I threw all my superfluous equipment out in a recent house move. My understanding is that I should get good performance, certainly in comparison to having two devices using the same frequency range, but I do believe there will be some impact by virtue of them being directly next to each other. (Cabling is not an option because it is a rented house.)


  • Hardware freeze during disk activity

    - by Thomi
    I built myself a Linux-based NAS. It has several drives of various sizes and ages in an LVM configuration, with 800 GB or so of data. The data is served using a simple Samba server. This was working flawlessly, but after physically moving it, it has developed a strange fault: whenever I do something on the server that causes disk activity, the entire machine freezes hard. This has the effect of killing any open network connections to the box, and generally making it useless. If I leave the machine for a few minutes it seems to come right again, but obviously this isn't really a solution. There are no error or warning messages in syslog or the kernel logs. If I power the machine on and leave it, it runs for several days without locking up; after that time I stopped testing. It doesn't freeze instantly - obviously it doesn't freeze while booting, and I can normally log in via SSH and poke around in a few log files for a couple of minutes before it dies. My question is: what diagnostic tests can I run to determine the cause?
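
    A minimal sketch of the kind of evidence-gathering that can be left running before triggering the disk activity (device names are examples; each member disk of the array would be checked in turn):

        smartctl -a /dev/sda                                # SMART health and error log for each disk
        nohup iostat -x 5 > /var/log/iostat.log 2>&1 &      # per-device I/O latency/utilisation over time
        nohup vmstat 5    > /var/log/vmstat.log 2>&1 &      # memory pressure and I/O wait over time
        tail -f /var/log/kern.log                           # watch for ATA/controller errors as they happen

    A disk or cable unseated during the move often shows up either in the SMART error log or as ATA resets in the kernel log at the moment the freeze begins.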


  • How can I get web pages from sub.a.com using the URL sub.b.com?

    - by Steven
    I have developed www.mysite.com. This site can be "integrated" into my partners' websites. What I do is create partner1.mysite.com and replace my header and footer with my partner's header and footer, and replace some CSS styling. This should make it as transparent as possible for the user, so that they think they are still browsing my partner's website. I see two ways I can accomplish this:

    1. My partner uses an IFrame to show the content from partner1.mysite.com.
    2. My partner creates a subdomain and points it to my subdomain.

    Solution 1 is easy, but I'm not sure how search engines like it, so I will try solution 2.

    QUESTION: Can I use mysite.partner1.com but read content from partner1.mysite.com? I don't want to forward/redirect users to partner1.mysite.com. It's important that the URL stays mysite.partner1.com / mysite.partner1.com/some/page. Is this possible? For testing, I have an Apache configuration more or less like this:

        NameVirtualHost 10.0.0.17

        <VirtualHost 10.0.0.17>
            DocumentRoot D:/wamp/www/mysite/
            ServerName mysite.com
        </VirtualHost>

        <VirtualHost 10.0.0.17>
            DocumentRoot D:/wamp/www/mysite/
            ServerName site1.mysite.com
        </VirtualHost>

        # Since this is on my localhost, I also configure site1 here
        <VirtualHost 10.0.0.17>
            DocumentRoot D:/wamp/www/site1/
            ServerName site1.com
        </VirtualHost>

        <VirtualHost 10.0.0.17>
            ServerName mysite.site1.com
            --> DO SOME SORT OF FORWARDING HERE <--
        </VirtualHost>
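
    A minimal sketch of the kind of vhost that could fill the forwarding placeholder, assuming mod_proxy and mod_proxy_http are loaded; it is written as a shell heredoc only for compactness - on the WAMP box the stanza would go straight into the vhost file:

        cat <<'EOF'
        <VirtualHost 10.0.0.17>
            ServerName mysite.site1.com
            ProxyPreserveHost Off
            ProxyPass        / http://site1.mysite.com/
            ProxyPassReverse / http://site1.mysite.com/
        </VirtualHost>
        EOF

    With ProxyPreserveHost off, the backend sees the request as site1.mysite.com, so the partner-skinned content is served while the browser's address bar keeps the partner URL.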


  • How to kill tasks in Windows 7 when even Task Manager won't open or respond?

    - by endolith
    Occasionally one of my computers will get so bogged down that everything locks up: Ctrl+Alt+Del doesn't work, Task Manager won't open, or they work but are opening so slowly that it would take hours or days to shut down other processes and regain control of the computer, etc. Is there a way to, for instance, force Task Manager to be highest priority so it always opens immediately with Ctrl+Shift+Esc, even when some other process/driver is hogging the CPU? Is there some other program that can run in the background and open immediately like this? This question isn't about fixing "underlying problems". No matter how much memory you have, it's still possible for a rogue process to eat it all up and lock up the computer in page-fault thrashing, hog the CPU, etc. This question is about how to take back control of the computer when that happens. Basically, when these kinds of lock-ups happen, I want to open some kind of task manager that pauses every other process and allows me to kill one of them, and then lets everything resume so I can save my work, etc. Otherwise my only option is to hold down the power button. Antifreeze is supposed to do exactly what I want, pausing all other applications and starting a task manager to kill the offender, but in my testing it actually does neither.


  • IIS 7.5 truncating POST body containing JSON data with ASP.NET MVC 3

    - by Guneet Sahai
    I'm facing a problem which I hope is a configuration issue with IIS, but right now it is causing a lot of trouble. Basically I have a controller that accepts JSON and does some processing. It generally works fine, but every now and then, when the system is under some load, I get an error. After some painful debugging, we figured out that the incoming JSON gets truncated, which causes the deserializer to fail. To narrow down the problem, we wrote a simple controller that accepts JSON and tries to deserialize it; in case it fails, it just logs the error. This works fine, but when I hit it using a load-testing tool (JMeter) it throws the same error (truncation) for a few requests. The number of failures increases when I increase the number of parallel connections; it starts showing up at 150 concurrent requests. We are running IIS 7 on Windows Server 2008 with ASP.NET MVC 3 and a more or less default configuration of IIS. More information is available in my question below:

        http://stackoverflow.com/questions/12662282/content-length-of-http-request-body-size


  • What is the recommended GlusterFS configuration for a growing website?

    - by montana
    Hello, I have a website that is tracking towards 50 million hits per day on average, and within the next 3 months should be over 100 million hits per day. We are trying to use GlusterFS v3.0.0 (with the latest patches as of 1-17-2010). Currently, we've just upgraded to a load-balancer environment that has 3 physical hosts with 6 XenServer 5.5u1 VMs (2 on each host) to serve webpage traffic. Each machine has 6 RAID-6 local storage drives (7200 RPM SATA). The old machine we came from had 1 mirrored 10k SAS drive. We also set up GlusterFS with 3 bricks, one on each host, and it is serving the 6 VMs as clients. In testing, everything seemed fine. However, when we went to production, it seemed that there just wasn't enough I/O available to serve traffic even upwards of 15 million hits. Weeks prior, our old server was able to handle traffic, maxed out, at 20 million. Are there any recommended configurations for such an application, or things to be aware of that aren't apparent from the documentation at gluster.org, for a site our size?


  • Why can't I see all of the client certificates available when I visit my web site locally on Windows 7 IIS 7?

    - by Jay
    My team has recently moved to Windows 7 for our developer machines. We are attempting to configure IIS for application testing. Our application requires SSL and client certificates in order to authenticate.

    What I've done:

    - I have configured IIS to require SSL and require (and also tried accept) certificates under SSL Settings.
    - I have created the https binding and set it to the proper server certificate.
    - I've installed all the root and intermediate chain certificates for the soft certificates properly in the Current User and Local Machine stores.

    The problem: When I browse to the web site, the SSL connection is established and I am prompted to choose a certificate. The issue is that the only certificate offered is one created by my company that would be invalid for use in the application. I am not given the soft certificates that I have installed using MMC and IE. We are able to use the soft certs from our development machines against the Windows 2008 servers that host the application.

    What I did: I have attempted to copy the root CA to every folder location in the Current User and Local Machine stores that the company certificate's root is in.

    My questions:

    - Could I be mishandling the certs anywhere else?
    - Could there be a local/group policy that is blocking the other certs from use?
    - What (if anything) should be done differently on Windows 7 compared with 2008 in regards to IIS?

    Thanks for your help.


  • Windows 7 comments field missing when browsing network

    - by Toymangenie
    I have just purchased three Windows 7 Professional Dell 64-bit PCs for testing prior to upgrading our company's 120+ PCs from Windows XP Professional. The setup is a standard domain with a Windows Server 2003 32-bit server. We name each PC XP1 to XP150 so that when users join or leave, I don't have to rename the PC. We use the Description field to record the user's name for each PC. We also have a share set up on each PC using the user's name. When I browse the network using Windows Explorer in XP, I get a useful display: the left pane shows the PC number and the right pane shows Name and Comments. So, for example, I would see:

        XP01    Fred Bloggs

    (each PC on a new row). The right pane is my main tool for administering the network: I can easily see the PC number and the name of the user. However, in Windows 7 this seems to have been thrown out of the window and replaced with fields that I do not need, and which in my case always display the same info: "Name", "Category", "Workgroup", "Network Location". In my case the Name column gives the PC number (XP10, etc.) and all three other columns display identical, useless information. So I can't see who is using XP10. When I am in "help desk" mode, I would naturally ask the user's name and use my remote desktop client to view their screen. The user isn't aware of their PC name, so I am finding it impossible to match the user name with a PC number. Any ideas how to overcome this "by design" change in Windows 7?


  • Looking for a "light" compositing manager for GNOME

    - by detly
    I have an HP Pavilion DM3 (the graphics chip is an nVidia GeForce G105M), running Debian Squeeze with GNOME 2.30. My preferred DE is GNOME + Metacity + Nautilus. I'd like to use Docky, but it requires compositing. So I'm looking for a relatively "light" compositing manager. I realise that "light" is ambiguous, but I basically want something that won't chew through my notebook's battery because of CPU or GPU usage. I know that Metacity is capable of compositing, but as far as I'm aware it's still in testing. Some people report that it's smooth and lightweight; others claim that it eats up processor time. I've also seen references to a problem with nVidia, but no actual details. I'm not averse to Compiz, but I haven't used it before and I don't know what to expect in terms of "weight". And maybe there's something else I haven't heard of. So can anyone recommend anything? Or dispel my idea that Metacity is not the right tool for the job? (Originally posted on GNOME forums.)
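
    For reference, Metacity's built-in compositor can be toggled per-user without installing anything extra (GNOME 2.x keeps the flag in gconf), which makes it cheap to try before committing to Compiz:

        # enable Metacity compositing
        gconftool-2 -s --type bool /apps/metacity/general/compositing_manager true
        # and turn it back off if it misbehaves with the nVidia driver
        gconftool-2 -s --type bool /apps/metacity/general/compositing_manager false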


  • Production deployment to EC2 with minimal downtime

    - by jensendarren
    I have a simple web application deployed on a large instance with EC2. I now want to deploy the latest code to this server, but I want to do it in a way which minimizes downtime and is as smooth as possible for the end user. Here is my plan:

    1. Fire up another large instance.
    2. Install all the software layers on that instance.
    3. Restore and attach an EBS drive to the instance.
    4. Deploy our latest production-ready code on the new instance.
    5. Run all tests (including manual testing of the application).
    6. (If tests pass) Put a "Site Under Maintenance" notice on the live site.
    7. Back up the EBS volume on the live site.
    8. Detach the EBS volume from the new server and replace it with the latest backup.
    9. Use ec2-associate-address to move the IP address to the new instance.
    10. Sit back and wait for traffic to start flowing through the new instance.
    11. Terminate the old instance.

    Does this seem like a good strategy? Are there any tutorials or books that might cover this topic? I have already read Cloud Application Architectures by George Reese, which is an excellent book, but it does not cover deployment. Additionally, I know that there are tools that can help with this, like RightScale or enStratus, which I will use when I start using more than one instance.
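
    A rough sketch of how steps 7-9 might look with the classic EC2 API tools (every ID, zone and address below is a placeholder; ec2-create-volume prints the new volume ID to use in the attach step):

        ec2-create-snapshot vol-livedata                          # 7: snapshot the live data volume
        ec2-create-volume --snapshot snap-newest -z us-east-1a    #    materialise that snapshot as a volume
        ec2-detach-volume vol-staging                             # 8: swap volumes on the new instance
        ec2-attach-volume vol-fromsnapshot -i i-newinstance -d /dev/sdf
        ec2-associate-address 203.0.113.42 -i i-newinstance       # 9: move the Elastic IP across

    One assumption worth stating: this treats the public address as an Elastic IP; if the site currently uses the instance's default public DNS name instead, an Elastic IP has to be allocated and put into DNS before the cut-over.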


  • Getting 404 error on MVC web-site

    - by RB
    I have an IIS 7.5 web site, on Windows Server 2008, with an ASP.NET MVC 2 web site deployed to it. The site was built in Visual Studio 2008, targeting .NET 3.5, and IIS 5.1 has been successfully configured to run it as well, for local testing. We've installed the world's simplest MVC application (the one which is created when you create a new MVC 2 project in Visual Studio), and we are getting 404s on any page we try to access - e.g. <my_server>/Home/About will generate a 404. I've asked this question on Stack Overflow as well, but that was before I knew it was a server issue. I have checked the following things:

    - There are 404 entries in the IIS log, corresponding to each request.
    - The application pool for the web site is set to use the Integrated pipeline.
    - The "customErrors" mode is set to off.
    - .NET 3.5 SP1 is installed.
    - ASP.NET MVC 2 is installed.
    - I've used MVC Diagnostics to confirm all MVC DLLs are being found.
    - ASP.NET is enabled in IIS, which we've demonstrated by running the MVC Diagnostics page.
    - KB 2023146 did highlight that HTTP Redirection was off, so we've turned it on, but no joy.

    Any ideas will be greatly appreciated! Someone did suggest that there might be problems caused by Windows Server 2008 being 64-bit - does anyone know anything about this?


  • Can I use iptables on my Varnish server to forward HTTPS traffic to a specific server?

    - by Dylan Beattie
    We use Varnish as our front-end web cache and load balancer, so we have a Linux server in our development environment running Varnish with some basic caching and load-balancing rules across a pair of Windows 2008 IIS web servers. We have a wildcard DNS rule that points *.development at this Varnish box, so we can browse http://www.mysite.com.development, http://www.othersite.com.development, etc. The problem is that since Varnish can't handle HTTPS traffic, we can't access https://www.mysite.com.development/. For dev/testing we don't need any acceleration or load balancing - all I need is to tell this box to act as a dumb proxy and forward any incoming requests on port 443 to a specific IIS server. I suspect iptables may offer a solution, but it's been a long while since I wrote an iptables rule. Some initial hacking has got me as far as:

        iptables -F
        iptables -A INPUT -p tcp -m tcp --sport 443 -j ACCEPT
        iptables -A OUTPUT -p tcp -m tcp --dport 443 -j ACCEPT
        iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to 10.0.0.241:443
        iptables -t nat -A POSTROUTING -p tcp -d 10.0.0.241 --dport 443 -j MASQUERADE
        iptables -A INPUT -j LOG --log-level 4 --log-prefix 'PreRouting '
        iptables -A OUTPUT -j LOG --log-level 4 --log-prefix 'PostRouting '
        iptables-save > /etc/iptables.rules

    (where 10.0.0.241 is the IIS box hosting the HTTPS website), but this doesn't appear to be working. To clarify - I realize there are security implications to HTTPS proxying/caching; all I'm looking for is completely transparent IP traffic forwarding. I don't need to decrypt, cache or inspect any of the packets; I just want anything on port 443 to flow through the Linux box to the IIS box behind it as though the Linux box wasn't even there. Any help gratefully received... EDIT: Included the full iptables config script.
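
    A minimal sketch of a pure pass-through; the pieces the rules above appear to be missing are IP forwarding and FORWARD-chain accepts (the address is the one quoted in the question, interface specifics are omitted):

        sysctl -w net.ipv4.ip_forward=1
        iptables -t nat -A PREROUTING  -p tcp --dport 443 -j DNAT --to-destination 10.0.0.241:443
        iptables -A FORWARD -p tcp -d 10.0.0.241 --dport 443 -j ACCEPT
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
        # only needed if 10.0.0.241 would not route its replies back via this box:
        iptables -t nat -A POSTROUTING -p tcp -d 10.0.0.241 --dport 443 -j MASQUERADE

    With MASQUERADE in place the IIS logs will show the Varnish box's address as the client, which is the usual trade-off of this approach.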


  • bind would not work unless allow-query is "any"

    - by adrianTNT
    I have this in /etc/named.conf; I commented out the default values and set my own under them. My domain would not load in a browser unless I set allow-query to "any" - is this OK, and what should I edit? If it is set to localhost, or to 127.0.0.1; 10.0.1.0/24;, the domain would not load. I tried the 127.0.0.1 approach because it is mentioned here: http://wiki.mandriva.com/en/Testing:Bind. The BIND version is 9.7.0-P2-RedHat-9.7.0-5.P2.el6_0.1 and the OS is CentOS 6.0.

        options {
                // listen-on port 53 { 127.0.0.1; };
                listen-on port 53 { any; };
                //listen-on-v6 port 53 { ::1; };
                listen-on-v6 port 53 { any; };
                directory "/var/named";
                dump-file "/var/named/data/cache_dump.db";
                statistics-file "/var/named/data/named_stats.txt";
                memstatistics-file "/var/named/data/named_mem_stats.txt";
                //allow-query { localhost; };
                allow-query { any; };
                recursion yes;

                dnssec-enable yes;
                dnssec-validation yes;
                dnssec-lookaside auto;

                /* Path to ISC DLV key */
                bindkeys-file "/etc/named.iscdlv.key";
        };
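
    If leaving allow-query wide open feels too permissive, a common tightening is to answer authoritative queries for anyone but restrict recursion to known networks; a sketch of the relevant stanzas (networks are placeholders, and it is written as a heredoc only for compactness - the acl goes near the top of named.conf, the two allow-* lines inside options {}):

        cat <<'EOF'
        acl "trusted" { 127.0.0.1; 10.0.1.0/24; };

        options {
                allow-query     { any; };        // public, authoritative answers
                allow-recursion { "trusted"; };  // recursion only for our own networks
        };
        EOF

    After editing, named-checkconf verifies the syntax and rndc reload applies it.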


  • Changing PATH Environment Variable for all Users (Ubuntu)

    - by Wally Glutton
    I recently compiled Ruby Enterprise Edition (REE) on an Ubuntu 8.04 server. I would like to update my PATH to ensure this new version of Ruby (found in /opt/ruby_ee/bin) supersedes the older version in /usr/local/bin. (I still want the old version around, though.) I would like these PATH changes to affect all users and crontabs.

    Attempted solution #1: The REE documentation recommends placing the REE bin folder at the beginning of the global PATH in /etc/environment. I altered the PATH in this file to read:

        PATH="/opt/ruby_ee/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games"

    This did not affect my PATH at all.

    Attempted solution #2: Next I followed these instructions and updated the PATH setting in /etc/login.defs and /etc/crontab. (I did not change /etc/sudoers.) This didn't affect my PATH either, even after logging out and rebooting the server.

    Other information: I seem to be having the same problem described here. I'm testing using the commands "echo $PATH" and "ruby -v". My shell is bash. My .bashrc doesn't override my PATH. Yes, I have heard of the Ruby Version Manager project. ;)
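
    One more approach that is often less fragile than editing /etc/environment is a snippet in /etc/profile.d, which affects all login shells; a sketch, assuming this install's /etc/profile sources /etc/profile.d/*.sh (it deliberately does not touch cron - crontabs need their own PATH= line or explicit /opt/ruby_ee/bin/ruby paths):

        cat > /etc/profile.d/ree.sh <<'EOF'
        export PATH=/opt/ruby_ee/bin:$PATH
        EOF

    Existing sessions won't see the change until the user logs out and back in, which is also a common reason /etc/environment edits appear "not to work".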


  • How to set up a Hyper-V domain with internet access

    - by fynnbob
    First off, let me say that I'm not a network admin or server guy; I know very little about that stuff. What I'm trying to do is set up a virtualized domain using Hyper-V. Here is the configuration:

    Physical server:
    - 4Mb RAM
    - Windows Server 2008 R2 running Hyper-V

    Virtual environment:
    - One domain controller running Windows Server 2008 R2
    - One client running Windows Server 2008 R2

    I have been successful in setting up a virtual domain controller and adding a virtual client to that domain controller, but I'm stuck at trying to give the virtual environment Internet access. I can give the client VM Internet access if I remove it from the virtual domain, but once I add it back to the virtual domain, Internet access is gone. I've read articles describing many different ways this can be done (using RRAS with NAT, using a wireless connection, etc.), but all of those articles only cover a small piece of the setup and also seem to be geared towards people who know their way around networking and servers, which I don't. I'd like to know more, but my thing is software development and I have my hands full trying to keep up with everything in that realm. I simply want to set up a virtual domain with Internet access for testing. Can anyone point me to any "for Dummies" type information on how to set up this type of environment, or can anyone provide this kind of step-by-step help? Any help would be very much appreciated.


  • mod_rewrite help

    - by Benny B
    OK, I don't know regex very well, so I used a generator to help me make a simple mod_rewrite rule that works. Here's my full URL:

        https://www.huttonchase.com/prodDetails.php?id_prd=683

    For testing, to make sure I CAN use this, I used:

        RewriteRule prodDetails/(.*)/$ /prodDetails.php?id_prd=$1

    so I can use the URL http://www.huttonchase.com/prodDetails/683/. If you click it, it works, but it completely messes up the relative paths. There are a few workarounds, but I want something a little different:

        https://www.huttonchase.com/prod_683_stainless-steel-flask

    I want it to see that 'prod' tells it which rule it's matching, 683 is the product number that I'm looking up in the database, and I want it to just IGNORE the last part; it's there only for SEO and to make the link mean something to customers. I'm told that this should work, but it's not working:

        RewriteRule ^prod_([^-]*)_([^-]*)$ /prodDetails.php?id_prd=$1 [L]

    Once I get the first one to work I'll write one for categories:

        https://www.huttonchase.com/cat_11_drinkware

    and database-driven text pages:

        https://www.huttonchase.com/page_44_terms-of-service

    By the way, I can flip around my use of dash and underscore if need be. Also, is it better to end the URLs with a slash or without? Thanks!
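
    For what it's worth, the likely reason the rule never matches is that [^-]* cannot match the hyphens in "stainless-steel-flask", and the trailing $ forces the second group to swallow the whole slug. Keying on the numeric id and simply ignoring the slug sidesteps that; a sketch in .htaccess form (per-directory context, so the matched path has no leading slash), written as a heredoc only for compactness:

        cat <<'EOF'
        RewriteEngine On
        RewriteRule ^prod_(\d+)_ prodDetails.php?id_prd=$1 [L]
        EOF

    The same shape would cover the cat_ and page_ URLs, each pointing at its own script.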

