Search Results


  • Jersey 2 in GlassFish 4 - First Java EE 7 Implementation Now Integrated (TOTD #182)

    - by arungupta
    The JAX-RS 2.0 specification recently released its Early Draft 3. One of my earlier blogs explained the features as they were first introduced in the very first draft of the JAX-RS 2.0 specification. Last week marked another milestone: the first Java EE 7 specification implementation was added to the GlassFish 4 builds. Jakub blogged about the Jersey 2 integration in GlassFish 4 builds. Most of the basic functionality is working, but EJB, CDI, and Validation are still TBD. Here is a simple Tip Of The Day (TOTD) sample to get you started with using that functionality.

    Create a Java EE 6-style Maven project:

        mvn archetype:generate -DarchetypeGroupId=org.codehaus.mojo.archetypes -DarchetypeArtifactId=webapp-javaee6 -DgroupId=example -DartifactId=jersey2-helloworld -DarchetypeVersion=1.5 -DinteractiveMode=false

    Note, this is still a Java EE 6 archetype, at least for now. Open the project in NetBeans IDE as it makes it much easier to edit/add the files. Add the following <repositories>:

        <repositories>
            <repository>
                <id>snapshot-repository.java.net</id>
                <name>Java.net Snapshot Repository for Maven</name>
                <url>https://maven.java.net/content/repositories/snapshots/</url>
                <layout>default</layout>
            </repository>
        </repositories>

    Add the following <dependency>s:

        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.10</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>javax.ws.rs</groupId>
            <artifactId>javax.ws.rs-api</artifactId>
            <version>2.0-m09</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.glassfish.jersey.core</groupId>
            <artifactId>jersey-client</artifactId>
            <version>2.0-m05</version>
            <scope>test</scope>
        </dependency>

    The complete list of Maven coordinates for Jersey 2 is available here. An up-to-date status of Jersey 2 can always be obtained from here. Here is a simple resource class:

        @Path("movies")
        public class MoviesResource {

            @GET
            @Path("list")
            public List<Movie> getMovies() {
                List<Movie> movies = new ArrayList<Movie>();
                movies.add(new Movie("Million Dollar Baby", "Hillary Swank"));
                movies.add(new Movie("Toy Story", "Buzz Light Year"));
                movies.add(new Movie("Hunger Games", "Jennifer Lawrence"));
                return movies;
            }
        }

    This resource publishes a list of movies and is accessible at the "movies/list" path with HTTP GET. The project uses the standard JAX-RS APIs. Of course, you need the trivial "Movie" and "Application" classes as well; they are available in the downloadable project.

    Build the project:

        mvn package

    And deploy to GlassFish 4.0 promoted build 43 (download, unzip, and start with "bin/asadmin start-domain"):

        asadmin deploy --force=true target/jersey2-helloworld.war

    Add a simple test case by right-clicking on the MoviesResource class, selecting "Tools", "Create Tests", and taking the defaults. Replace the generated "testGetMovies" method with:

        @Test
        public void testGetMovies() {
            System.out.println("getMovies");
            Client client = ClientFactory.newClient();
            List<Movie> movieList = client.target("http://localhost:8080/jersey2-helloworld/webresources/movies/list")
                                          .request()
                                          .get(new GenericType<List<Movie>>() {});
            assertEquals(3, movieList.size());
        }

    This test uses the newly defined JAX-RS 2 client APIs to access the RESTful resource.
    Run the test with "mvn test" and see output like:

        -------------------------------------------------------
         T E S T S
        -------------------------------------------------------
        Running example.MoviesResourceTest
        getMovies
        Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.561 sec

        Results :

        Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

    GlassFish 4 contains Jersey 2 as the JAX-RS implementation. If you want to use Jersey 1.x functionality, then Martin's blog provides more details on that. All JAX-RS 1.x functionality will be supported using the standard APIs anyway; the workaround is only required if Jersey 1.x-specific functionality needs to be accessed. The complete source code explained in this project can be downloaded from here. Here are some pointers to follow:

        JAX-RS 2 Specification Early Draft 3
        Latest status on the specification (jax-rs-spec.java.net)
        Latest JAX-RS 2.0 Javadocs
        Latest status on Jersey (Reference Implementation of JAX-RS 2 - jersey.java.net)
        Latest Jersey API Javadocs
        Latest GlassFish 4.0 Promoted Build
        Follow @gf_jersey

    Provide feedback on Jersey 2 to [email protected] and on the JAX-RS specification to [email protected].
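    For completeness, here is a minimal sketch of what the two omitted classes could look like. The field names, the JAXB annotation, and the @ApplicationPath value are assumptions (the path value simply matches the "webresources" segment of the test URL above); the real classes ship only with the downloadable project.

        import javax.xml.bind.annotation.XmlRootElement;

        // Movie.java - a simple POJO so JAX-RS can marshal List<Movie>
        @XmlRootElement
        public class Movie {
            private String name;
            private String actor;

            public Movie() {} // no-arg constructor required by JAXB

            public Movie(String name, String actor) {
                this.name = name;
                this.actor = actor;
            }

            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
            public String getActor() { return actor; }
            public void setActor(String actor) { this.actor = actor; }
        }

        import javax.ws.rs.ApplicationPath;
        import javax.ws.rs.core.Application;

        // MyApplication.java - registers the JAX-RS runtime under /webresources
        @ApplicationPath("webresources")
        public class MyApplication extends Application {}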


  • My own personal use of Oracle Linux

    - by wcoekaer
    It is always easier to explain something with examples... Many people still don't seem to understand some of the convenient things about using Oracle Linux, and since I personally (surprise!) use it at home, let me give you an idea.

    I have quite a few servers at home and I also have 2 hosted servers with a hosting provider. The servers at home I use mostly to play with random Linux-related things, or with Oracle VM, or just to try out various new Oracle products to learn more. I like the technology; it's like a hobby, really. To have a good installation experience and use an officially certified Linux distribution, and not waste time trying to find the right libraries, I, of course, use Oracle Linux. Now, I can get a copy of Oracle Linux for free (even if I were not working for Oracle) and I can/could use it on as many servers at home (or at my company, if I worked elsewhere) as I like, for testing, development and production. I just go to http://edelivery.oracle.com/linux, download the version(s) I want, and off I go.

    I also have the right (and not because I am an employee) to take those images, put them on my own server, and give them to someone else. In fact, I just recently set up my own mirror on my own hosted server. I don't have to remove oracle-logos, I don't have to rebuild the ISO images, I don't have to recompile anything; I can just put the whole binary distribution on my own server, without a contract. Perfectly free to do so. Of course the source code for all of this is there too; I have a copy of the UEK code at home, just cloned from https://oss.oracle.com/git/?p=linux-2.6-unbreakable.git. And as you can see, there's the entire changelog, the checkins, the merges from Linus's tree - a complete overview of everything that changed from kernel to kernel, from patch to patch, erratum to erratum. No obfuscating, no tarballs, no spending time with diff, no reading bug reports to find out what changed (that seems silly to me).

    Some of my servers are on the external network and I need to be current with security errata, but guess what, no problem: my servers are hooked up to http://public-yum.oracle.com, which is open, free, and completely up to date, in a consistent, reliable way, with every security or bugfix erratum. So I have nothing to worry about. Again, not because I am an employee - anyone can. And with this I also can, and have, set up my own mirror site that hosts these RPMs, both binary and source RPMs, because I am free to get them and distribute them. I am quite capable of supporting my servers on my own, so I don't need to rely on the support organization, so I don't need a support subscription :-). So I don't need to pay. Neither would you, at least not with Oracle Linux.

    Another cool thing: the hosted servers came (unfortunately) with CentOS installed. While CentOS works just fine as is, I prefer to be reliably current with my security errata, and I prefer to maintain one yum repository instead of two, so I converted them over to Oracle Linux as well (in place), and they now happily receive and use the exact same RPMs. Since Oracle Linux is exactly the same as RHEL from a user/application point of view - including files like /etc/redhat-release, and with no changes from .el. to .centos. - I know I have nothing to worry about when installing one of the RHEL applications. So, OL everywhere makes my life a lot easier, and why not... Next! Since I run Oracle VM, I have -tons- of VMs on my machines; in some cases, on my big WOPR box, I have 15-20 VMs running.
    Well, no problem: OL is free, and I don't have to worry about counting the number of VMs, whether it's 1, or 4, or more than 10... like some other alternatives have started doing... And finally :) I like to try out new stuff, not 3-year-old stuff. With UEK2 as part of OL6 (and 6.3 in particular) I can play with a 3.0.x-based kernel, and it installs and runs perfectly cleanly on OL6 - quite current stuff, in an environment that I know works, with no need to toy around with an unsupported pre-alpha upstream distribution with libraries and versions that are not compatible with production software (I have nothing against Ubuntu or Fedora or openSUSE... they're just not what I can rely on or use for what I need, and I don't need a desktop). Pretty compelling, I say... And again, it doesn't matter that I work for Oracle; if I worked elsewhere, or not at all, all of the above would still apply. Student, teacher, developer, whatever. Contrast this with $349 per year for 2 sockets, one guest and self-support, just to get the software bits.
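    To make the "hooked up to public-yum" bit concrete, this is roughly all it takes on an OL6 box - a sketch; the repo file name and the channel names are assumptions based on how public-yum.oracle.com was laid out at the time:

        # drop the public repo definition into place (OL6 in this example)
        cd /etc/yum.repos.d
        wget http://public-yum.oracle.com/public-yum-ol6.repo

        # edit the file to set enabled=1 for the channels you want,
        # e.g. [ol6_latest] and the UEK channel, then update as usual
        yum update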


  • SQL Server 2008 R2 Enterprise won't install on Windows 2008 R2 Enterprise

    - by Carlos Paulino
    I've been trying to install SQL Server on a new Windows Server 2008. I have tried everything, but I haven't been able to narrow down the problem. When the installation fails, I get "Exit code (Decimal): -2068643839". The problem with this is that, according to Microsoft, this is a generic error code. I followed their guide to look into the Detail.txt inside C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\, but I can't find anything that specifies the exact error. Any suggestions? Thanks in advance. I uploaded Detail.txt to http://www.megaupload.com/?d=0MV46SZH because it is too big to paste here. Below is the Summary.txt:

        Overall summary:
          Final result:            SQL Server installation failed. To continue, investigate the reason for the
                                   failure, correct the problem, uninstall SQL Server, and then rerun SQL Server Setup.
          Exit code (Decimal):     -2068643839
          Exit facility code:      1203
          Exit error code:         1
          Exit message:            SQL Server installation failed. To continue, investigate the reason for the
                                   failure, correct the problem, uninstall SQL Server, and then rerun SQL Server Setup.
          Start time:              2011-02-28 11:29:56
          End time:                2011-02-28 11:34:45
          Requested action:        Install

        Machine Properties:
          Machine name:            SA-SERVER
          Machine processor count: 8
          OS version:              Windows Server 2008 R2
          OS service pack:         Service Pack 1
          OS region:               United States
          OS language:             English (United States)
          OS architecture:         x64
          Process architecture:    64 Bit
          OS clustered:            No

        Product features discovered:
          Product  Instance  Instance ID  Feature  Language  Edition  Version  Clustered

        Package properties:
          Description:             SQL Server Database Services 2008 R2
          ProductName:             SQL Server 2008 R2
          Type:                    RTM
          Version:                 10
          SPLevel:                 0
          Installation location:   F:\x64\setup\
          Installation edition:    ENTERPRISE

        User Input Settings:
          ACTION: Install
          ADDCURRENTUSERASSQLADMIN: True
          AGTSVCACCOUNT: NT AUTHORITY\SYSTEM
          AGTSVCPASSWORD: *****
          AGTSVCSTARTUPTYPE: Manual
          ASBACKUPDIR: Backup
          ASCOLLATION: Latin1_General_CI_AS
          ASCONFIGDIR: Config
          ASDATADIR: Data
          ASDOMAINGROUP: <empty>
          ASLOGDIR: Log
          ASPROVIDERMSOLAP: 1
          ASSVCACCOUNT: <empty>
          ASSVCPASSWORD: *****
          ASSVCSTARTUPTYPE: Automatic
          ASSYSADMINACCOUNTS: <empty>
          ASTEMPDIR: Temp
          BROWSERSVCSTARTUPTYPE: Disabled
          CONFIGURATIONFILE: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20110228_112601\ConfigurationFile.ini
          CUSOURCE:
          ENABLERANU: False
          ENU: True
          ERRORREPORTING: False
          FARMACCOUNT: <empty>
          FARMADMINPORT: 0
          FARMPASSWORD: *****
          FEATURES: SQLENGINE,BIDS,CONN,IS,BC,SDK,SSMS,ADV_SSMS,SNAC_SDK,OCS
          FILESTREAMLEVEL: 0
          FILESTREAMSHARENAME: <empty>
          FTSVCACCOUNT: <empty>
          FTSVCPASSWORD: *****
          HELP: False
          IACCEPTSQLSERVERLICENSETERMS: False
          INDICATEPROGRESS: False
          INSTALLSHAREDDIR: C:\Program Files\Microsoft SQL Server\
          INSTALLSHAREDWOWDIR: C:\Program Files (x86)\Microsoft SQL Server\
          INSTALLSQLDATADIR: <empty>
          INSTANCEDIR: D:\SQLServer
          INSTANCEID: MSSQLSERVER
          INSTANCENAME: MSSQLSERVER
          ISSVCACCOUNT: NT AUTHORITY\SYSTEM
          ISSVCPASSWORD: *****
          ISSVCSTARTUPTYPE: Automatic
          NPENABLED: 0
          PASSPHRASE: *****
          PCUSOURCE:
          PID: *****
          QUIET: False
          QUIETSIMPLE: False
          ROLE: AllFeatures_WithDefaults
          RSINSTALLMODE: FilesOnlyMode
          RSSVCACCOUNT: NT AUTHORITY\NETWORK SERVICE
          RSSVCPASSWORD: *****
          RSSVCSTARTUPTYPE: Automatic
          SAPWD: *****
          SECURITYMODE: SQL
          SQLBACKUPDIR: <empty>
          SQLCOLLATION: SQL_Latin1_General_CP1_CI_AS
          SQLSVCACCOUNT: NT AUTHORITY\SYSTEM
          SQLSVCPASSWORD: *****
          SQLSVCSTARTUPTYPE: Automatic
          SQLSYSADMINACCOUNTS: SA-SERVER\Administrator
          SQLTEMPDBDIR: <empty>
          SQLTEMPDBLOGDIR: <empty>
          SQLUSERDBDIR: <empty>
          SQLUSERDBLOGDIR: <empty>
          SQMREPORTING: False
          TCPENABLED: 1
          UIMODE: Normal
          X86: False

        Configuration file: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20110228_112601\ConfigurationFile.ini

        Detailed results (every feature reports Status: Failed: see logs for details, MSI status: Passed, Configuration status: Passed):
          Database Engine Services
          SQL Client Connectivity SDK
          Integration Services
          Client Tools Connectivity
          Management Tools - Complete
          Management Tools - Basic
          Client Tools SDK
          Client Tools Backwards Compatibility
          Business Intelligence Development Studio
          Microsoft Sync Framework

        Rules with failures:
          Global rules:
          Scenario specific rules:
        Rules report file: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20110228_112601\SystemConfigurationCheck_Report.htm
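    A log that size is easier to attack by pulling out just the error entries rather than reading it top to bottom - a sketch, using the timestamped log folder from the summary above:

        findstr /i /c:"error" "C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20110228_112601\Detail.txt" > %TEMP%\setup-errors.txt
        notepad %TEMP%\setup-errors.txt

    The first error-level entry in Detail.txt is usually the real failure; everything after it tends to be rollback noise.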


  • Slow login to load-balanced Terminal Server 2008 behind Gateway Server

    - by Frans
    I have a small load-balanced (using Session Broker) Terminal Server 2008 farm behind a Gateway Server which is accessed from the Internet. The problem I have is that there is a delay of 20-30 seconds if the session broker switches the user to another server during login. I think this is related to the fact that I am forcing the security layer to be RDP rather than SSL.

    The background

    The Gateway server has a public routeable IP address and DNS name so it can be accessed from the Internet, and all users come in via this route (the system is used to provide access to hosted applications to external customers). The actual terminal servers only have internal IP addresses. This works really well, except that with a Vista or Windows 7 client, the Remote Desktop client will negotiate with the server to use SSL for the security layer. This then exposes the auto-generated certificate that TS1 or TS2 has - but since they are internal, auto-generated certificates, the client will get a stern warning that the certificate is not valid. I can't give the servers a properly authorised certificate, as the servers do not have a public routeable IP address or DNS name. Instead, I am using Group Policy to force the connections to be over RDP instead of SSL:

        Computer Configuration\Policies\Administrative Templates\Windows Components\Terminal Services\Terminal Server\Security\Require use of specific security layer for remote (RDP) connections

    The Windows 7 user now gets a much less stern warning that "the server's identity cannot be confirmed", which I can live with. I don't have enough control over the end users' machines to ask them to install a new root certificate either.

    TS1 and TS2 are also load-balanced using the Session Broker, which is installed on the Gateway Server. I am using round-robin DNS, so the user's initial connection will go via Gateway1 to either TS1 or TS2. TS1/TS2 will then talk to the session broker and may pass the user to the other server. I.e. the user may get connected to TS2, but after talking to the session broker the user may be passed to TS1, which is where they will run their session. When this switching of servers happens, in my setup, the screen sits with the word "Welcome" for 20-30 seconds, after which it flickers, "Welcome" is shown again, and then it flashes through the normal login screens (i.e. "wait for user profile manager" etc).

    Having done some research, I think what is happening is that the user is being fully logged on to TS2 (while "Welcome" is shown) before being passed to TS1, where they are then logged in again. It is interesting that normally when you see the "Welcome" word, the little circle to the left rotates. However, it does not rotate during this delay - the screen just looks frozen. This blog post leads me to think that this is because CredSSP is not being used, probably because I am disallowing SSL and forcing RDP.

    What I have tried

    I enabled SSL again, which removes the "Welcome" delay. However, it seems to introduce a new delay much earlier in the process: specifically, when the RDP client is saying "initialising connection", it is now much slower - quite apart from the fact that my certificate problem precludes me from using that solution without considerable difficulty. I tried disabling the load balancing (just removing the servers from the session broker farm) and the connections do not have any delay. The problem is also intermittent, in the sense that it only happens when the user gets bumped from one server to another.
    I tested this by trying to connect directly to TS1 (via the Gateway, of course) and then checking which server I actually got connected to. Just to be sure, I also bypassed the round-robin DNS to see if it had any impact, and it doesn't. The setup is essentially in line with MS recommendations here: TS Session Broker Load Balancing Step-by-Step Guide. I also tried changing to a dedicated redirector: basically, rather than using round-robin DNS, I pointed my DNS at the Gateway server and configured it to be a dedicated redirector (disallow logons, add it to the farm). Same problem, alas. Any ideas or suggestions gratefully received.
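    As a quick way to confirm which security layer a given terminal server is actually enforcing, the GPO above lands in a well-known registry value on the RDP listener - a read-only check; the value meanings (0 = RDP, 1 = Negotiate, 2 = SSL/TLS) are the standard documented ones:

        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v SecurityLayer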


  • Cisco Prime NCS not starting

    - by Kwazii
    I received the Cisco Prime OVA file, which we placed onto an Oracle virtual environment. We turn the VM on and the CLI boots, but when we try to start the NCS service we get errors:

        HOSTNAME/USER# ncs start
        Starting Network Control System...
        Exception in thread "main" java.lang.NullPointerException
                at com.cisco.wnbu.udi.impl.UDIManager.isPhysicalAppliance(UDIManager.java:184)
                at com.cisco.packaging.WCSAdmin.start(WCSAdmin.java:335)
                at com.cisco.packaging.WCSAdmin.runMain(WCSAdmin.java:281)
                at com.cisco.packaging.WCSAdmin.main(WCSAdmin.java:901)

    Logs:

        HOSTNAME/USER# show logging
        07/18/13 10:25:38.878 INFO [system] [main] Setting management interface address to 192.168.0.10
        07/18/13 10:25:38.884 INFO [system] [main] Setting peer server interface address to 192.168.0.10
        07/18/13 10:25:38.884 INFO [system] [main] Setting client interface address to 192.168.0.10
        07/18/13 10:25:38.884 INFO [system] [main] Setting local host name to HOSTNAME
        07/18/13 10:25:40.341 ERROR [system] [main] THROW java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
                at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:419)
                at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:536)
                at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:228)
                at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:32)
                at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:521)
                at java.sql.DriverManager.getConnection(Unknown Source)
                at java.sql.DriverManager.getConnection(Unknown Source)
                at com.cisco.server.persistence.util.OracleSchemaUtil.openConnection(OracleSchemaUtil.java:277)
                at com.cisco.server.persistence.util.OracleSchemaUtil.dbServerUp(OracleSchemaUtil.java:836)
                at com.cisco.packaging.DBAdmin.dbServerUp(DBAdmin.java:1429)
                at com.cisco.packaging.WCSAdmin.status(WCSAdmin.java:833)
                at com.cisco.packaging.WCSAdmin.status(WCSAdmin.java:757)
                at com.cisco.packaging.WCSAdmin.wcsServerUp(WCSAdmin.java:637)
                at com.cisco.packaging.WCSAdmin.start(WCSAdmin.java:294)
                at com.cisco.packaging.WCSAdmin.runMain(WCSAdmin.java:281)
                at com.cisco.packaging.WCSAdmin.main(WCSAdmin.java:901)
        Caused by: oracle.net.ns.NetException: The Network Adapter could not establish the connection
                at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:375)
                at oracle.net.resolver.AddrResolution.resolveAndExecute(AddrResolution.java:422)
                at oracle.net.ns.NSProtocol.establishConnection(NSProtocol.java:678)
                at oracle.net.ns.NSProtocol.connect(NSProtocol.java:238)
                at oracle.jdbc.driver.T4CConnection.connect(T4CConnection.java:1054)
                at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:308)
                ... 15 more
        Caused by: java.net.ConnectException: Connection refused
                at java.net.PlainSocketImpl.socketConnect(Native Method)
                at java.net.PlainSocketImpl.doConnect(Unknown Source)
                at java.net.PlainSocketImpl.connectToAddress(Unknown Source)
                at java.net.PlainSocketImpl.connect(Unknown Source)
                at java.net.SocksSocketImpl.connect(Unknown Source)
                at java.net.Socket.connect(Unknown Source)
                at oracle.net.nt.TcpNTAdapter.connect(TcpNTAdapter.java:209)
                at oracle.net.nt.ConnOption.connect(ConnOption.java:123)
                at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:353)
                ... 20 more
        07/18/13 10:25:40.347 INFO [admin] [main]
        07/18/13 10:25:40.347 INFO [admin] [main] Starting Network Control System...
        07/18/13 10:25:40.347 INFO [admin] [main]
        07/18/13 10:25:40.394 ERROR [admin] [main] Problem using CARS API: com.cisco.cars.fnd.CARSException: CARS_FAILURE : -999 : Failed to get UDI configuration. : Failure occurred during request
                at com.cisco.cars.fnd.CARSException.analyzeReturnCode(CARSException.java:118)
                at com.cisco.cars.serviceEngine.impl.EngineAdminServiceImpl.getUDI(EngineAdminServiceImpl.java:66)
                at com.cisco.wnbu.udi.impl.UDIManager.generateUDI(UDIManager.java:69)
                at com.cisco.wnbu.udi.impl.UDIManager.setPersistenceDirectory(UDIManager.java:139)
                at com.cisco.packaging.WCSAdmin.start(WCSAdmin.java:332)
                at com.cisco.packaging.WCSAdmin.runMain(WCSAdmin.java:281)
                at com.cisco.packaging.WCSAdmin.main(WCSAdmin.java:901)
        07/18/13 10:25:40.396 ERROR [admin] [main] Problem using CARS API: com.cisco.cars.fnd.CARSException: CARS_FAILURE : -999 : Failed to get UDI configuration. : Failure occurred during request
                at com.cisco.cars.fnd.CARSException.analyzeReturnCode(CARSException.java:118)
                at com.cisco.cars.serviceEngine.impl.EngineAdminServiceImpl.getUDI(EngineAdminServiceImpl.java:66)
                at com.cisco.wnbu.udi.impl.UDIManager.generateUDI(UDIManager.java:69)
                at com.cisco.wnbu.udi.impl.UDIManager.setVirtualPID(UDIManager.java:169)
                at com.cisco.packaging.WCSAdmin.start(WCSAdmin.java:333)
                at com.cisco.packaging.WCSAdmin.runMain(WCSAdmin.java:281)
                at com.cisco.packaging.WCSAdmin.main(WCSAdmin.java:901)
        07/18/13 10:25:40.397 ERROR [admin] [main] Problem using CARS API: com.cisco.cars.fnd.CARSException: CARS_FAILURE : -999 : Failed to get UDI configuration. : Failure occurred during request
                at com.cisco.cars.fnd.CARSException.analyzeReturnCode(CARSException.java:118)
                at com.cisco.cars.serviceEngine.impl.EngineAdminServiceImpl.getUDI(EngineAdminServiceImpl.java:66)
                at com.cisco.wnbu.udi.impl.UDIManager.generateUDI(UDIManager.java:69)
                at com.cisco.wnbu.udi.impl.UDIManager.setPhysicalPID(UDIManager.java:154)
                at com.cisco.packaging.WCSAdmin.start(WCSAdmin.java:334)
                at com.cisco.packaging.WCSAdmin.runMain(WCSAdmin.java:281)
                at com.cisco.packaging.WCSAdmin.main(WCSAdmin.java:901)
        07/18/13 10:25:40.397 ERROR [admin] [main] Problem using CARS API: com.cisco.cars.fnd.CARSException: CARS_FAILURE : -999 : Failed to get UDI configuration. : Failure occurred during request
                at com.cisco.cars.fnd.CARSException.analyzeReturnCode(CARSException.java:118)
                at com.cisco.cars.serviceEngine.impl.EngineAdminServiceImpl.getUDI(EngineAdminServiceImpl.java:66)
                at com.cisco.wnbu.udi.impl.UDIManager.generateUDI(UDIManager.java:69)
                at com.cisco.wnbu.udi.impl.UDIManager.getUDI(UDIManager.java:112)
                at com.cisco.wnbu.udi.impl.UDIManager.isPhysicalAppliance(UDIManager.java:184)
                at com.cisco.packaging.WCSAdmin.start(WCSAdmin.java:335)
                at com.cisco.packaging.WCSAdmin.runMain(WCSAdmin.java:281)
                at com.cisco.packaging.WCSAdmin.main(WCSAdmin.java:901)

    Any help is appreciated, thanks.


  • Windows Server: specifying the default IP address when NIC has multiple addresses

    - by Cédric Boivin
    I have a Windows Server which has ~10 IP addresses statically bound. The problem is I don't know how to specify the default IP address. Sometimes when I assign a new address to the NIC, the default IP address changes to the last IP entered in the advanced IP configuration on the NIC. This has the effect (since I use NAT) that the outgoing public IP changes too. Even though this problem is currently on Windows Server 2008, I've had the same problem with Windows Server 2003. How can you set the default IP address on a NIC when it has multiple IP addresses bound?

    I noticed something when I check "route print" - is the problem here?

        0.0.0.0    0.0.0.0    192.168.x.x    192.168.99.49    261

    I want the default IP to be, for example, 192.168.99.100. Here is my routing table:

        Network Destination        Netmask          Gateway       Interface      Metric
        0.0.0.0                    0.0.0.0          192.168.99.1  192.168.99.49     261
        10.10.10.0                 255.255.255.0    On-link       10.10.10.10       261
        10.10.10.10                255.255.255.255  On-link       10.10.10.10       261
        10.10.10.255               255.255.255.255  On-link       10.10.10.10       261
        192.168.99.0               255.255.255.0    On-link       192.168.99.49     261
        192.168.99.49              255.255.255.255  On-link       192.168.99.49     261
        192.168.99.51              255.255.255.255  On-link       192.168.99.49     261
        192.168.99.52              255.255.255.255  On-link       192.168.99.49     261
        192.168.99.53              255.255.255.255  On-link       192.168.99.49     261
        192.168.99.54              255.255.255.255  On-link       192.168.99.49     261
        192.168.99.55              255.255.255.255  On-link       192.168.99.49     261
        192.168.99.56              255.255.255.255  On-link       192.168.99.49     261
        192.168.99.57              255.255.255.255  On-link       192.168.99.49     261
        192.168.99.58              255.255.255.255  On-link       192.168.99.49     261
        192.168.99.59              255.255.255.255  On-link       192.168.99.49     261
        192.168.99.60              255.255.255.255  On-link       192.168.99.49     261
        192.168.99.61              255.255.255.255  On-link       192.168.99.49     261
        192.168.99.62              255.255.255.255  On-link       192.168.99.49     261
        192.168.99.64              255.255.255.255  On-link       192.168.99.49     261
        192.168.99.65              255.255.255.255  On-link       192.168.99.49     261
        192.168.99.66              255.255.255.255  On-link       192.168.99.49     261
        192.168.99.67              255.255.255.255  On-link       192.168.99.49     261
        192.168.99.68              255.255.255.255  On-link       192.168.99.49     261
        192.168.99.70              255.255.255.255  On-link       192.168.99.49     261
        192.168.99.71              255.255.255.255  On-link       192.168.99.49     261
        192.168.99.100             255.255.255.255  On-link       192.168.99.49     261
        192.168.99.108             255.255.255.255  On-link       192.168.99.49     261
        192.168.99.109             255.255.255.255  On-link       192.168.99.49     261
        192.168.99.112             255.255.255.255  On-link       192.168.99.49     261
        192.168.99.255             255.255.255.255  On-link       192.168.99.49     261
        224.0.0.0                  240.0.0.0        On-link       192.168.99.49     261
        224.0.0.0                  240.0.0.0        On-link       10.10.10.10       261
        255.255.255.255            255.255.255.255  On-link       192.168.99.49     261
        255.255.255.255            255.255.255.255  On-link       10.10.10.10       261

    I need to be certain that when I go out to the Internet, I go out with 192.168.99.100. How do I do that?
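    For reference, one commonly suggested approach on Windows Server 2008 R2 SP1 and later is to re-add every address except the desired default with the skipassource flag, so the stack never picks them as the source IP - a sketch; the interface name "Local Area Connection" is an assumption, and each extra address has to be removed and re-added:

        rem remove one of the extra addresses, then re-add it flagged skipassource
        netsh interface ipv4 delete address "Local Area Connection" 192.168.99.51
        netsh interface ipv4 add address "Local Area Connection" 192.168.99.51 255.255.255.0 skipassource=true

        rem verify which addresses carry the flag
        netsh interface ipv4 show ipaddresses level=verbose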

  • BIND DNS Master with Zerigo Slaves - BIND won't update the slave servers

    - by Anthony
    I've tried to resolve this myself and have looked through Google and Stack, but haven't found the answer I'm looking for. Currently, on a VPS server, I have BIND installed as a MASTER DNS server. I use Zerigo's DNS service as SLAVE servers for public use. The master doesn't receive queries; its job is simply to create and modify DNS entries locally, which the slaves then serve. Here is an excerpt of the BIND log (I set it to INFO event logging):

        14-Apr-2012 23:00:00.234 general: info: received control channel command 'reload'
        14-Apr-2012 23:00:00.234 general: info: loading configuration from 'C:\DNS\BIND\etc\named.conf'
        14-Apr-2012 23:00:00.234 general: info: using default UDP/IPv4 port range: [1024, 65535]
        14-Apr-2012 23:00:00.234 general: info: using default UDP/IPv6 port range: [1024, 65535]
        14-Apr-2012 23:00:00.250 general: info: reloading configuration succeeded
        14-Apr-2012 23:00:00.250 general: info: reloading zones succeeded
        14-Apr-2012 23:16:22.750 xfer-out: info: client 174.36.24.251#47135: transfer of 'ajmakeup.com/IN': AXFR started
        14-Apr-2012 23:16:22.750 xfer-out: info: client 174.36.24.251#47135: transfer of 'ajmakeup.com/IN': AXFR ended
        14-Apr-2012 23:16:23.015 xfer-out: info: client 68.71.141.22#36212: transfer of 'ajmakeup.com/IN': AXFR started
        14-Apr-2012 23:16:23.031 xfer-out: info: client 68.71.141.22#36212: transfer of 'ajmakeup.com/IN': AXFR ended

    As you can see, there is no problem with Zerigo's DNS servers requesting new DNS data - when I force a reload, that is; I don't believe, given the way they are set up as slaves, that they poll for changes. The problem is the other way around: the master is not updating the slave servers when reload is run (on the master, as a batch job on a 15-minute timer). Below is my named.conf:

        key "rndc-key" {
            algorithm hmac-md5;
            secret "REMOVED FOR SECURITY";
        };

        acl "trusted" {
            174.36.24.251/32;
            68.71.141.22/32;
            localhost;
        };

        options {
            version "not currently available";
            directory "C:\DNS\BIND\etc";
            allow-query { trusted; };
        };

        controls {
            inet 127.0.0.1 port 953 allow { 127.0.0.1; } keys { "rndc-key"; };
        };

        logging {
            channel simple_log {
                file "C:\DNS\BIND\logging\bind.log" versions 3 size 5m;
                severity info;
                print-time yes;
                print-severity yes;
                print-category yes;
            };
            category default { simple_log; };
        };

        zone "ajmakeup.com" in {
            type master;
            file "c:\dns\BIND\zones\db.ajmakeup.com.txt";
            allow-transfer { 174.36.24.251; 68.71.141.22; };
            allow-update { none; };
        };

    Does my problem have something to do with 'allow-query' under options? You will notice that 'allow-transfer' is set explicitly on each DNS zone. In case you need it, here is my rndc.conf:

        key "rndc-key" {
            algorithm hmac-md5;
            secret "REMOVED FOR SECURITY";
        };

        options {
            default-key "rndc-key";
            default-server 127.0.0.1;
            default-port 953;
        };

        server localhost {
            key "rndc-key";
        };

    Note: I am using WebsitePanel as my hosting panel, which is why it creates the zone entries the way it does. Although I know I can change this behaviour, I do not wish to do so, nor do I believe it is the root of the problem. Thanks for your help.
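    For illustration, when a master has to push zone changes to slaves that may not be reachable through the zone's NS records, the usual knobs are BIND's NOTIFY settings - a sketch of what the zone block could look like, with the Zerigo slave IPs taken from the config above; whether this is actually the missing piece here is an assumption:

        zone "ajmakeup.com" in {
            type master;
            file "c:\dns\BIND\zones\db.ajmakeup.com.txt";
            notify yes;                                     // send NOTIFY when the zone (re)loads
            also-notify { 174.36.24.251; 68.71.141.22; };   // push to the Zerigo slaves explicitly
            allow-transfer { 174.36.24.251; 68.71.141.22; };
            allow-update { none; };
        };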


  • Connecting Debian and Windows via IPsec VPN with racoon and ipsec-tools

    - by Michi Qne
    I have some trouble with the IPsec configuration on my Debian server (6, Squeeze). This server should connect via an IPsec VPN to a Windows server, which is protected by a firewall. I've used racoon and ipsec-tools and this tutorial: http://wiki.debian.org/IPsec. However, I am not quite sure the tutorial fits my purpose, because of some differences: my host and my gateway are the same server, so I don't have two different IP addresses (I guess that's not a problem); the other server is a Windows system behind a firewall (hopefully not a problem either); and the subnet of the Windows system is /32, not /24, so I changed it to /32.

    I worked through the tutorial step by step, but I wasn't able to route the IP. The following command didn't work for me:

        ip route add to 172.16.128.100/32 via XXX.XXX.XXX.XXX src XXX.XXX.XXX.XXX

    So I tried the following instead, which obviously did not solve the problem either:

        ip route add to 172.16.128.100 ...

    The next problem is compression. The Windows side doesn't use compression, but 'compression_algorithm none;' doesn't work with my racoon, so the current value is 'compression_algorithm deflate;'.

    So my current result looks like this: when I try to ping the Windows host (ping 172.16.128.100), I receive the following error message from ping:

        ping: sendmsg: Operation not permitted

    And racoon logs:

        racoon: ERROR: failed to get sainfo.

    After googling for a while I came to no conclusion about the solution. Does this error message mean that the first phase of IPsec works? I am thankful for any advice. I guess my configs might be helpful. My racoon.conf looks like this:

        path pre_shared_key "/etc/racoon/psk.txt";

        remote YYY.YYY.YYY.YYY {
            exchange_mode main;
            proposal {
                lifetime time 8 hour;
                encryption_algorithm 3des;
                hash_algorithm sha1;
                authentication_method pre_shared_key;
                dh_group 2;
            }
        }

        sainfo address XXX.XXX.XXX.XXX/32 any address 172.16.128.100/32 any {
            pfs_group 2;
            lifetime time 8 hour;
            encryption_algorithm aes 256;
            authentication_algorithm hmac_sha1;
            compression_algorithm deflate;
        }

    And my ipsec-tools.conf looks like this:

        flush;
        spdflush;

        spdadd XXX.XXX.XXX.XXX/32 172.16.128.100/32 any -P out ipsec
            esp/tunnel/XXX.XXX.XXX.XXX-YYY.YYY.YYY.YYY/require;

        spdadd 172.16.128.100/32 XXX.XXX.XXX.XXX/32 any -P in ipsec
            esp/tunnel/YYY.YYY.YYY.YYY-XXX.XXX.XXX.XXX/require;

    If anyone has any advice, that would be awesome. Thanks in advance. Greets, Michael

    Update: it was a simple copy-and-paste error in an IP address.
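    As a general debugging aid for this kind of setup, the ipsec-tools userland can show whether the kernel policies and SAs actually match what racoon negotiated - a sketch, assuming racoon and ipsec-tools are installed as above:

        # dump the current security associations (phase 2 results, if any)
        setkey -D

        # dump the security policy database written by ipsec-tools.conf
        setkey -DP

        # run racoon in the foreground with maximum debugging to watch both phases
        racoon -F -ddd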


  • Need help with yum, Python and PHP in CentOS (I made a complete mess!)

    - by pek
    A while back I wanted to install some plugins for Trac, but they required Python 2.5. I tried installing it (I don't remember how) and the only thing I managed was to end up with two versions of Python (2.4 and 2.5). Trac still uses the old version, but the console uses 2.5 (python -V = Python 2.5.2). Anyway, the problem is not Python, the problem is yum (which uses Python). I am trying to upgrade my PHP version from 5.1.x to 5.2.x. I tried following this tutorial, but when I reach the step with yum I get this error:

        [root@XXX]# yum update
        Loading "installonlyn" plugin
        Setting up Update Process
        Setting up repositories
        Reading repository metadata in from local files
        Traceback (most recent call last):
          File "/usr/bin/yum", line 29, in ?
            yummain.main(sys.argv[1:])
          File "/usr/share/yum-cli/yummain.py", line 94, in main
            result, resultmsgs = base.doCommands()
          File "/usr/share/yum-cli/cli.py", line 381, in doCommands
            return self.yum_cli_commands[self.basecmd].doCommand(self, self.basecmd, self.extcmds)
          File "/usr/share/yum-cli/yumcommands.py", line 150, in doCommand
            return base.updatePkgs(extcmds)
          File "/usr/share/yum-cli/cli.py", line 672, in updatePkgs
            self.doRepoSetup()
          File "/usr/share/yum-cli/cli.py", line 109, in doRepoSetup
            self.doSackSetup(thisrepo=thisrepo)
          File "/usr/lib/python2.4/site-packages/yum/__init__.py", line 338, in doSackSetup
            self.repos.populateSack(which=repos)
          File "/usr/lib/python2.4/site-packages/yum/repos.py", line 200, in populateSack
            sack.populate(repo, with, callback, cacheonly)
          File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 91, in populate
            dobj = repo.cacheHandler.getPrimary(xml, csum)
          File "/usr/lib/python2.4/site-packages/yum/sqlitecache.py", line 100, in getPrimary
            return self._getbase(location, checksum, 'primary')
          File "/usr/lib/python2.4/site-packages/yum/sqlitecache.py", line 86, in _getbase
            (db, dbchecksum) = self.getDatabase(location, metadatatype)
          File "/usr/lib/python2.4/site-packages/yum/sqlitecache.py", line 82, in getDatabase
            db = self.makeSqliteCacheFile(filename,cachetype)
          File "/usr/lib/python2.4/site-packages/yum/sqlitecache.py", line 245, in makeSqliteCacheFile
            self.createTablesPrimary(db)
          File "/usr/lib/python2.4/site-packages/yum/sqlitecache.py", line 165, in createTablesPrimary
            cur.execute(q)
          File "/usr/lib/python2.4/site-packages/sqlite/main.py", line 244, in execute
            self.rs = self.con.db.execute(SQL)
        _sqlite.DatabaseError: near "release": syntax error

    Any help? Thank you.

    Update: OK, so I've managed to update yum, hoping it would solve my problems, but now I get a slightly different version of the same error:

        [root@XXX]# yum -y update
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * addons: mirror.skiplink.com
         * base: www.gtlib.gatech.edu
         * epel: mirrors.tummy.com
         * extras: yum.singlehop.com
         * updates: centos-distro.cavecreek.net
        (process:30840): GLib-CRITICAL **: g_timer_stop: assertion `timer != NULL' failed
        (process:30840): GLib-CRITICAL **: g_timer_destroy: assertion `timer != NULL' failed
        Traceback (most recent call last):
          File "/usr/bin/yum", line 29, in ?
            yummain.user_main(sys.argv[1:], exit_code=True)
          File "/usr/share/yum-cli/yummain.py", line 309, in user_main
            errcode = main(args)
          File "/usr/share/yum-cli/yummain.py", line 178, in main
            result, resultmsgs = base.doCommands()
          File "/usr/share/yum-cli/cli.py", line 345, in doCommands
            self._getTs(needTsRemove)
          File "/usr/lib/python2.4/site-packages/yum/depsolve.py", line 101, in _getTs
            self._getTsInfo(remove_only)
          File "/usr/lib/python2.4/site-packages/yum/depsolve.py", line 112, in _getTsInfo
            pkgSack = self.pkgSack
          File "/usr/lib/python2.4/site-packages/yum/__init__.py", line 661, in <lambda>
            pkgSack = property(fget=lambda self: self._getSacks(),
          File "/usr/lib/python2.4/site-packages/yum/__init__.py", line 501, in _getSacks
            self.repos.populateSack(which=repos)
          File "/usr/lib/python2.4/site-packages/yum/repos.py", line 260, in populateSack
            sack.populate(repo, mdtype, callback, cacheonly)
          File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 190, in populate
            dobj = repo_cache_function(xml, csum)
          File "/usr/lib/python2.4/site-packages/sqlitecachec.py", line 42, in getPrimary
            self.repoid))
        TypeError: Can not create packages table: near "release": syntax error

    I'm guessing that this "release" thing has something to do with a repository, but I didn't find anything... I went to sqlitecachec.py, line 42, which reads (line numbers added for convenience):

        39:        return self.open_database(_sqlitecache.update_primary(location,
        40:                                                              checksum,
        41:                                                              self.callback,
        42:                                                              self.repoid))

    Update 2: I think I found the problem. This post suggests that the problem is sqlite, not yum. The version of sqlite I have installed is 3.6.10, but I have no idea which version Python 2.4 uses. /etc/ld.so.conf contains the following:

        include ld.so.conf.d/*.conf
        /usr/local/lib

    In the folder /usr/local/lib I find a symbolic link named libsqlite3.so that points to libsqlite3.so.0.8.6. WHAT IS HAPPENING??????? :S
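    To pin down which sqlite library the Python 2.4 binding actually loads, something like this can help - a sketch; the package names are the usual CentOS ones and may differ:

        # which sqlite RPMs are installed (python-sqlite is the binding yum uses)
        rpm -q sqlite python-sqlite

        # locate the C extension the traceback refers to and see what it links against
        python2.4 -c "import _sqlite; print _sqlite.__file__"
        ldd $(python2.4 -c "import _sqlite; print _sqlite.__file__")

    If ldd resolves libsqlite to the hand-built copy in /usr/local/lib rather than the distribution's library, that mismatch would explain the syntax error ("release" became a reserved word in newer sqlite versions).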


  • Server 2003 R2 doesn't allow logon after a few days of uptime

    - by Bryan
    We have a Server 2003 R2 Standard machine (which I'll refer to as SRV01) that's knocking on a bit now, but it still acts as a file, print and SQL server on our company's network. SRV01 hosts user profiles, home directories and pretty much all our business data. Note our AD is currently at the 2008 R2 level. This server is due to be upgraded in the next 12 months, but I've no budget to spend on it just yet.

    A bit of history of this server follows: when SRV01 was first commissioned, it acted as a domain controller (with the same 2003 R2 install it has today), paired with another server that ran Server 2003 R2 SBS. A few years ago, we purchased a pair of dedicated DCs (2008 R2), and at that point we decommissioned the 2003 SBS server and DCPROMOed SRV01 out of the AD. Up until very recently, SRV01 also ran Exchange 2003; however, we recently purchased a dedicated server for Exchange 2010 and upgraded (following the Microsoft recommended upgrade path). Exchange 2003 was then uninstalled - cleanly, to the best of my knowledge.

    Ever since Exchange was removed from SRV01, I'm finding that after a few days of uptime, when I attempt to log on, pressing CTRL-ALT-DEL just hides the "Welcome to Windows Server 2003" banner and never presents the logon dialog. All I see is a movable mouse pointer and a blank background. It's a similar story with an admin TS session: the RDP client connects and gives me a blank background, but no logon dialog is presented, and the RDP session hangs indefinitely until I give up and close it. The only way I've been able to regain access to the server is to pull the plug on it. Whilst the server does have a battery-backed-up RAID 5 controller, I'm unhappy about having to do this, so as a temporary measure I've created a scheduled job to reboot SRV01 each night.

    Not only do I not like the idea of scheduling a reboot of a server like this, but it is also causing problems for users that leave desktop PCs logged on overnight. Users complain of 'Delayed Write Failures', a number of users have started to complain about account lockout problems, and some users cannot connect to shares on SRV01 until they reboot their desktop PCs.

    I've examined event logs on SRV01 and on the DCs looking for clues as to what the problem is, but there really is nothing untoward being logged. How can I begin to investigate this problem when nothing of any relevance is being logged? Is there some additional logging that can be enabled that might give some clues as to what could be causing this? Could Performance Monitor help me out here, and if so, what counters would you consider monitoring?

    It's worth mentioning that whilst the server is unresponsive via the console and TS, it does still respond to clients connecting to shares without problems for several days; after about a week, though, I start to hear users reporting problems accessing shares, but this seems quite sporadic. I've also tried leaving the console logged on (and locked): when I notice I can no longer log on via TS, I can unlock the server console without problem, but it refuses to reboot or shut down, and subsequent attempts to reboot report that a system shutdown is already in progress, after which the system completely hangs. I've tried playing the waiting game for several hours, thinking that a timeout might allow the shutdown to continue, but to no avail.


  • Exchange 2010 OWA - a few questions about using multiple mailboxes

    - by Alexey Smolik
    We have an Exchange 2010 SP2 deployment and we need our users to be able to access multiple mailboxes in OWA. The problem is that a user (e.g. John Smith) needs to access not just somebody else's mailboxes (e.g. Tom Anderson's), but his OWN mailboxes, e.g. in different domains: [email protected], [email protected], [email protected], etc. It is of course preferable for the user to work with all of his mailboxes from a single window. Such mailboxes can be added as multiple Exchange accounts in Outlook, and that works almost fine. But in OWA, there are problems:

    1) In the left pane - as I've learned - we can open only the Inbox folders from other mailboxes. Is there no way to view all folders, like in Outlook?

    2) With Send-As permissions set, when trying to send a message from another address, that message is saved in the Sent Items folder of the mailbox that is opened in OWA, and not in the mailbox the message is sent from. The same thing happens with the trash can. Is there a way to fix that? This problem also exists in desktop Outlook when mailboxes are added automatically via the Auto-Mapping feature, so we need to turn it off and add the accounts manually. Is there a simpler workaround? (See the PowerShell sketch after this list.)

    3) Okay, suppose we only open Inbox folders in the left pane. The problem is that the mailbox names shown there are formed from the Display Name attribute, but those names are all identical! All the mailboxes are owned by John Smith, so they are all named "John Smith" - that way the recipient sees "John Smith" in the "From" field no matter which mailbox the message is sent from. Also, the user knows his own name - no need to tell him; he wants to know which mailbox he is working with. So we need a way to either: a) customize OWA to show the mailbox email address instead of the user Display Name, or b) make Exchange use another attribute to put in the "From" field when sending.

    4) Okay, we can switch between mailboxes using "Open Other Mailbox" in the upper-right corner menu. But: a) to select a mailbox we need to enter its name (or its first letters) - is there a way to show a list of links to the mailboxes the user has full access to, e.g. in the page header? b) If we start entering the first letters, we see a popup list of possible mailboxes to open, but it lists all mailboxes (apparently from the GAL), not only the mailboxes the user has permission to open! How do we filter that popup list? c) The same naming problem as in (3): we can see the opened mailbox's email address ONLY in the page URL, which is insufficient for many users; in the left pane we see "John Smith", which is useless.

    5) Each mailbox is tied to a separate user in AD. If one person has several mailboxes, we need additional dummy AD accounts, additional OUs to store them, etc. That's not very nice; is there any standardized, optimal way to build such a structure?

    We would really appreciate any answers or additional info for any of these questions. Thank you in advance.
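    On the Auto-Mapping side note in point 2: since Exchange 2010 SP1, full access can be granted with auto-mapping disabled in a single step, which avoids removing and re-adding the accounts manually - a sketch with hypothetical mailbox and user names:

        # grant the primary account full access to a secondary mailbox
        # without Outlook auto-mapping it
        Add-MailboxPermission -Identity "jsmith-domainb" -User "CORP\jsmith" `
            -AccessRights FullAccess -AutoMapping $false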


  • SQL Server Issue: Could not allocate space for object ... primary filegroup is full

    - by Luke
    Trying to figure out a problem at an office that has SQL Server 2005 installed on Windows SBS Server 2008. Here's the setup: it's an office, and the person who set this all up is nowhere to be found. I'm the best hope they have...

    One of the programs they use on a workstation gives them an error of "Could not allocate space for object 'Billing' in database 'MyDatabase' because the PRIMARY filegroup is full" when trying to save an entry in their software. I searched around for hours looking for possible solutions. One was to check for available disk space, and another was to defrag. I checked the hard drives on the server and there is plenty of space free. I also defragged, which may have helped the problem somewhat; it's hard to say, because it seems like, with the nature of the error, if you try over and over you might get it to actually save.

    My next step was to try to see if autogrowth was enabled on the database. This would seem to be a likely/possible solution, but I can't access the database! If I run SQL Server Management Studio, I can log in as my Windows user and view the list of databases. However, if I try to do anything (actually view the database, view the properties, add or edit users), I get errors that I don't have permission. For what it's worth, I also tried running Management Studio as Administrator, in case that would help. No difference, though.

    Now, what I'm guessing is going on - from my limited knowledge of SQL and from reading online - is that though I'm logged in as a Windows administrator, that account does NOT have SQL access. I do see a list of SQL users, including sa, but again I don't have permission to add one or to change the password on an existing one. And nobody at the office has any idea what the SQL passwords could be. So... here's my thinking thus far:

    1 - The "Could not allocate" error likely points to a database that needs to be allowed to autogrow, especially since I verified there is plenty of free space and the HD has been defragmented.

    2 - Enabling autogrow would be very easy to do if I had the proper access within SQL Server Management Studio. That leads me to this link: http://blogs.technet.com/b/sqlman/archive/2011/06/14/tips-amp-tricks-you-have-lost-access-to-sql-server-now-what.aspx - it sounds like a step-by-step guide for regaining the access I need to SQL.

    So I guess I have a few questions:

    1 - Would you guys agree with my "diagnosis"? Think I'm barking up the right tree?

    2 - Is there any risk at all of hurting/disabling/wrecking the current SQL database or setup by going through that guide to regain SQL access? I understand that, per the guide, I would have to temporarily shut down SQL, so obviously it wouldn't be accessible during that time. But it wouldn't be worth the risk if there's a chance I could mess anything up... Like I said, the workstations ARE currently accessing the database somehow, but nobody knows with what login info or anything. Basically, it's set up, it works (usually), but if they had to reload the software, nobody would know how.

    Any feedback would be appreciated!! The problem is such that it's not an emergency for them, but an annoyance. If I could fix it, it would be wonderful.
    But if not, I think they'll manage, especially as they are going to eventually stop using this software. Thank you so much for your time! Luke
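    If it does turn out to be autogrowth (or a fixed MAXSIZE) on the PRIMARY filegroup, both the check and the fix are one-liners once sysadmin access is restored - a sketch; the logical file name is an assumption, so read it from sys.database_files first:

        -- inspect size, growth and max size of every file in the database
        USE MyDatabase;
        SELECT name, size, growth, max_size FROM sys.database_files;

        -- enable autogrow on the primary data file (logical name assumed)
        ALTER DATABASE MyDatabase
        MODIFY FILE (NAME = MyDatabase, FILEGROWTH = 256MB, MAXSIZE = UNLIMITED);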


  • PPTP connection fails with errors 800/806

    - by Mark S. Rasmussen
    I've got a client (Server 2008 R2) that won't connect to our production environment's PPTP VPN server (Server 2003, running RRAS). The server is behind a firewall that has TCP 1723 open as well as GRE. Other clients at our office are able to connect just fine. Our office is behind a Juniper SSG5-Serial firewall, but all outgoing traffic is allowed, and multiple other clients are able to connect to VPN servers without issues. I've also set up a completely different VPN server on another network outside of our office; the functioning clients connect just fine, but the Server 2008 R2 machine doesn't. Thus it's definitely a problem with this machine in particular.

    I've rebooted it. I've disabled the firewall. No dice on either. I've run PPTPSRV and PPTPCLNT on the server/client and they're able to communicate perfectly - indicating there's no problem using either TCP 1723 or GRE. The Server 2008 R2 machine is also running as a VPN server itself (incoming connection) and that's working perfectly, and we have the issue whether or not there are active incoming connections. I'm not sure what my next debugging step would be; any suggestions?

    EDIT: The event log on the server has the following warning from RasMan:

        A connection between the VPN server and the VPN client xxx.xxx.xxx.xxx has been established, but the VPN connection cannot be completed. The most common cause for this is that a firewall or router between the VPN server and the VPN client is not configured to allow Generic Routing Encapsulation (GRE) packets (protocol 47). Verify that the firewalls and routers between your VPN server and the Internet allow GRE packets. Make sure the firewalls and routers on the user's network are also configured to allow GRE packets. If the problem persists, have the user contact the Internet service provider (ISP) to determine whether the ISP might be blocking GRE packets.

    Obviously this points to GRE as a potential problem. But seeing as I have other clients connecting without problems, as well as PPTPSRV and PPTPCLNT being able to communicate, I suspect this might be a red herring.

    EDIT: Here are the anonymized events logged by the client, in chronological order:

        CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The user XXX\YYY has started dialing a VPN connection using a per-user connection profile named ZZZ. The connection settings are:
            Dial-in User = XXX\YYY
            VpnStrategy = PPTP
            DataEncryption = Require
            PrerequisiteEntry =
            AutoLogon = No
            UseRasCredentials = Yes
            Authentication Type = CHAP/MS-CHAPv2
            Ipv4DefaultGateway = No
            Ipv4AddressAssignment = By Server
            Ipv4DNSServerAssignment = By Server
            Ipv6DefaultGateway = Yes
            Ipv6AddressAssignment = By Server
            Ipv6DNSServerAssignment = By Server
            IpDnsFlags = Register primary domain suffix
            IpNBTEnabled = Yes
            UseFlags = Private Connection
            ConnectOnWinlogon = No.

        CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The user XXX\YYY is trying to establish a link to the Remote Access Server for the connection named ZZZ using the following device:
            Server address/Phone Number = XXX.YYY.ZZZ.KKK
            Device = WAN Miniport (PPTP)
            Port = VPN3-4
            MediaType = VPN.

        CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The user XXX\YYY has successfully established a link to the Remote Access Server using the following device:
            Server address/Phone Number = XXX.YYY.ZZZ.KKK
            Device = WAN Miniport (PPTP)
            Port = VPN3-4
            MediaType = VPN.

        CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The link to the Remote Access Server has been established by user XXX\YYY.

        CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The user XXX\YYY dialed a connection named ZZZ which has failed. The error code returned on failure is 806.

    Running Wireshark on the client shows it trying and retrying to send a "71 Configuration Request", while the server-side capture shows the incoming client requests but apparently no replies. Given that this is GRE traffic, I think that rules out GRE being blocked. The question is, why doesn't the server reply? [Screenshot: the Configuration Request the server receives from the non-functioning client - no response is sent.] [Screenshot: the Configuration Request the server receives from the working client.] To me they seem identical, except for differing keys and magic numbers, and for the fact that one client receives a response while the other doesn't.


  • Unmountable boot volume blue screen, what should I do?

    - by Josh
    I was trying to install an update from NVIDIA for my GTX 560, but while it was installing, my computer shut off. After a few minutes, I turned it back on. It got to the Windows boot screen and then hit a blue screen error, and if left on it would just keep doing that.

    A few details about my PC: I haven't added any new hardware or software, and I've been running Windows XP Professional 32-bit and Windows XP Professional 64-bit on the same hard drive for about 2 years now. I have 2 other hard drives as well, but neither is large enough to hold everything from my main hard drive, so formatting isn't an option.

    As for what I've done so far: I scanned the RAM with memtest86 v3.4 and it reported the RAM as good. I scanned the hard drive in question with chkdsk /r; it gets to 50% and tells me something along the lines of "the drive has one or more unrepairable problems". I also tried to run chkdsk on the drive I installed the new copy of Windows XP on, and it got to 75%, then jumped back down to 50% and stayed there (I had to reboot the PC).

    After that, I turned off automatic reboot and was able to read the blue screen error code. I looked it up, only to find that nobody seems to have this exact problem, just problems close to it. The error code is 0x000000ED, and I've seen a lot of these online, but none that matched the detailed part of the code: UNMOUNTABLE_BOOT_VOLUME 0x000000ed (0xfffffadf513c19a0, 0xffffffffc0000006, 0, 0)

    So, I installed another copy of Windows XP Professional 32-bit on one of my other hard drives in hopes of accessing the data on the drive in question. When it booted, it asked if I wanted chkdsk to scan the drive in question, and this is what it found: file record segments 12740, 12741, 12742 and 12743 were reported unreadable. Then it says "recovering lost files", but it sits there for a few seconds and then just boots into Windows. As far as I can tell, I can't access the drive in question from Windows; it just says "drive not accessible", and when I go to Properties it says the drive has 100% free space.

    After that failed, I didn't give up; I looked for another way to access the drive. I used an Ubuntu boot disk and was able to access the drive without any problems. However, I can't run the registry editor, because it's a .exe file and that won't run from Ubuntu. I made a copy of the "Windows" folder, put it on one of my other drives, and that's where I'm stuck now.

    I'm fairly sure the drive itself works, I know chkdsk can't fix the problem with it, and I mostly know what caused the problem in the first place, but I don't know what to do about it. I have a laptop that I can use to download and burn disks if needed, and I also have the other copy of Windows XP Professional 32-bit installed on the computer in question (so I know it's not a hardware issue). I'm pretty sure it's either a driver issue, or the update was editing the registry when the machine shut off and left me with a broken registry.

    I've tried accessing C:\Windows\System32\CONFIG, only to find that the Windows XP disc's repair option can't even access the files on the drive in question. It seems I'll need to do everything from Ubuntu unless there is something I haven't tried with the Windows XP disc. I didn't install the update on Windows XP 64-bit, yet it has the same blue screen error (that's where the error code above came from, though I haven't checked whether the two codes are identical). They both stopped working at the same time, so I assume it's one problem causing both to fail.
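    For reference, the standard first-aid sequence for a 0x000000ED stop error runs from the Windows XP Recovery Console (boot from the XP CD and press R). This is a generic sketch, not something from the question, and it assumes the installation lives on C:; note that the asker's chkdsk already reports unrepairable problems, so it may fail the same way here:

        chkdsk c: /r    (locate bad sectors and recover readable information)
        fixboot c:      (write a new boot sector, which 0x000000ED often implicates)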

    Read the article

  • reiserfsck on LVM

    - by DaDaDom
    It seems like my filesystem got corrupted somehow during the last reboot of my server. I can't fsck some logical volumes anymore. The setup:

        root@rescue ~ # cat /mnt/rescue/etc/fstab
        proc                 /proc  proc      defaults                 0 0
        /dev/md0             /boot  ext3      defaults                 0 2
        /dev/md1             /      ext3      defaults,errors=remount-ro 0 1
        /dev/systemlvm/home  /home  reiserfs  defaults                 0 0
        /dev/systemlvm/usr   /usr   reiserfs  defaults                 0 0
        /dev/systemlvm/var   /var   reiserfs  defaults                 0 0
        /dev/systemlvm/tmp   /tmp   reiserfs  noexec,nosuid            0 2
        /dev/sda5            none   swap      defaults,pri=1           0 0
        /dev/sdb5            none   swap      defaults,pri=1           0 0

    [UPDATE] First question: which "part" should I check for bad blocks? The logical volume, the underlying /dev/md device, or the /dev/sdX disks below that? Is what I am doing the right way to go? [/UPDATE]

    The error message when checking /dev/systemlvm/usr:

        root@rescue ~ # reiserfsck /dev/systemlvm/usr
        reiserfsck 3.6.19 (2003 www.namesys.com)
        [...]
        Will read-only check consistency of the filesystem on /dev/systemlvm/usr
        Will put log info to 'stdout'
        Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes
        ########### reiserfsck --check started at Wed Feb 3 07:10:55 2010 ###########
        Replaying journal..
        Reiserfs journal '/dev/systemlvm/usr' in blocks [18..8211]: 0 transactions replayed
        Checking internal tree..
        Bad root block 0. (--rebuild-tree did not complete)
        Aborted

    OK, let's try --rebuild-tree:

        root@rescue ~ # reiserfsck --rebuild-tree /dev/systemlvm/usr
        reiserfsck 3.6.19 (2003 www.namesys.com)
        [...]
        Will rebuild the filesystem (/dev/systemlvm/usr) tree
        Will put log info to 'stdout'
        Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes
        Replaying journal..
        Reiserfs journal '/dev/systemlvm/usr' in blocks [18..8211]: 0 transactions replayed
        ########### reiserfsck --rebuild-tree started at Wed Feb 3 07:12:27 2010 ###########
        Pass 0:
        ####### Pass 0 #######
        Loading on-disk bitmap .. ok, 269716 blocks marked used
        Skipping 8250 blocks (super block, journal, bitmaps) 261466 blocks will be read
        0%....20%....40%....60%....80%....100% left 0, 11368 /sec
        52919 directory entries were hashed with "r5" hash.
        "r5" hash is selected
        Flushing..finished
        Read blocks (but not data blocks) 261466
            Leaves among those 13086
            Objectids found 53697
        Pass 1 (will try to insert 13086 leaves):
        ####### Pass 1 #######
        Looking for allocable blocks .. finished
        0% left 12675, 0 /sec
        The problem has occurred looks like a hardware problem (perhaps memory).
        Send us the bug report only if the second run dies at the same place with the same block number.
        mark_block_used: (39508) used already
        Aborted

    Bad. But let's do it again, as the message suggests:

        [...]
        Flushing..finished
        Read blocks (but not data blocks) 261466
            Leaves among those 13085
            Objectids found 54305
        Pass 1 (will try to insert 13085 leaves):
        ####### Pass 1 #######
        Looking for allocable blocks .. finished
        0%... left 12127, 958 /sec
        The problem has occurred looks like a hardware problem (perhaps memory).
        Send us the bug report only if the second run dies at the same place with the same block number.
        build_the_tree: Nothing but leaves are expected. Block 196736 - internal
        Aborted

    The same thing happens every time; only the actual error message changes. Sometimes I get mark_block_used: (somenumber) used already, other times the block number changes. Seems like something is REALLY broken. Is there any chance I can somehow get the partitions to work again? It's a server to which I don't have direct physical access (hosted server). Thanks in advance!
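    On the [UPDATE] question about which layer to test for bad blocks: the usual order is bottom-up, physical disks first, then the md arrays, then LVM. A minimal read-only sketch, with device names assumed from the fstab above:

        smartctl -a /dev/sda       # SMART health of each physical disk (repeat for /dev/sdb)
        badblocks -sv /dev/sda     # surface scan; without -w this is read-only
        mdadm --detail /dev/md1    # confirm the RAID array is clean and not degraded
        lvs                        # confirm the logical volumes are active

    If the disks and the array check out, the "hardware problem (perhaps memory)" hint from reiserfsck points back at RAM, which memtest86+ can exercise if the hoster's rescue boot offers it.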

    Read the article

  • Varnish VCL - Regular Expression Evaluation

    - by Hugues ALARY
    I have been struggling with this problem for the past few days. Basically, I want to send the client browser a cookie of the form foo[sha1oftheurl]=[randomvalue] if and only if the cookie has not already been set. E.g., if a client browser requests "/page.html", the HTTP response will contain a header like:

        resp.http.Set-Cookie = "foo4c9ae249e9e061dd6e30893e03dc10a58cc40ee6=ABCD;"

    Then, if the same client requests "/index.html", the HTTP response will contain a header:

        resp.http.Set-Cookie = "foo14fe4559026d4c5b5eb530ee70300c52d99e70d7=QWERTY;"

    In the end, the client browser will have 2 cookies:

        foo4c9ae249e9e061dd6e30893e03dc10a58cc40ee6=ABCD
        foo14fe4559026d4c5b5eb530ee70300c52d99e70d7=QWERTY

    Now that, in itself, is not complicated. The following code does it:

        import digest;
        import random; ## This vmod does not exist, it's just for the example.

        sub vcl_recv {
            ## We compute the sha1 of the requested URL and store it in req.http.Url-Sha1
            set req.http.Url-Sha1 = digest.hash_sha1(req.url);
            set req.http.random-value = random.get_rand();
        }

        sub vcl_deliver {
            ## We create a cookie on the client browser by creating a "Set-Cookie" header.
            ## In our case the cookie we create is of the form foo[sha1]=[randomvalue],
            ## e.g. for a URL "/page.html" the cookie will be
            ## foo4c9ae249e9e061dd6e30893e03dc10a58cc40ee6=[randomvalue]
            set resp.http.Set-Cookie = {""} + resp.http.Set-Cookie
                + "foo" + req.http.Url-Sha1 + "=" + req.http.random-value;
        }

    However, this code does not take into account the case where the cookie already exists. I need to check that the cookie does not exist before generating a random value. So I thought about this code:

        import digest;
        import random;

        sub vcl_recv {
            ## We compute the sha1 of the requested URL and store it in req.http.Url-Sha1
            set req.http.Url-Sha1 = digest.hash_sha1(req.url);
            set req.http.regex = "abtest" + req.http.Url-Sha1;
            if (req.http.Cookie !~ req.http.regex) {
                set req.http.random-value = random.get_rand();
            }
        }

    The problem is that Varnish does not compile regular expressions at run time, which leads to this error when I try to compile:

        Message from VCC-compiler:
        Expected CSTR got 'req.http.regex'
        (program line 940), at
        ('input' Line 42 Pos 31)
        if(req.http.Cookie !~ req.http.regex) {
        ------------------------------##############---
        Running VCC-compiler failed, exit 1
        VCL compilation failed

    One could propose to solve my problem by matching on the "abtest" part of the cookie, or even on "abtest[a-fA-F0-9]{40}":

        if (req.http.Cookie !~ "abtest[a-fA-F0-9]{40}") {
            set req.http.random-value = random.get_rand();
        }

    But this code matches any cookie starting with 'abtest' and containing a hexadecimal string of 40 characters. That means that if a client requests "/page.html" first, then "/index.html", the condition will evaluate to true even though the cookie for "/index.html" has not been set.

    I found a bug report in which phk (or someone else) states that compiling regular expressions is extremely expensive, which is why they are evaluated at compile time. Considering this, I believe there is no way of achieving what I want the way I've been trying to. Is there any way of solving this problem, other than writing a vmod? Thanks for your help! -Hugues

    Read the article

  • Web Interfaces not opening even after Port Forwarding is said to be working!

    - by Ahmad
    I'm encountering a strange problem which has baffled me completely, and which I haven't run into even after years of doing port forwarding! I'm hoping somebody here can help me solve this mystery. :)

    My network configuration is as follows: I have a DSL modem (custom made and branded by my ISP) which is receiving a DSL stream. It has an external IP which is visible to the world, say 11.22.33.44. The modem has DHCP enabled, has an internal IP of 192.168.1.1, and is connected to 2 laptops via Ethernet cables: Laptop 1 has IP 192.168.1.2, and Laptop 2 has IP 192.168.1.3.

    On Laptop 1, two applications are running, jDownloader and Media Player Classic, which have their web interfaces on ports 8765 and 13579, respectively. I can access both of these web interfaces from Laptop 2 by opening these addresses: 192.168.1.2:8765 and 192.168.1.2:13579. Both web interfaces open up, meaning they are working fine.

    Moving on, I now want to access these web interfaces from outside my network as well, so I've configured port forwarding in my PTCL modem to forward all traffic on ports between 8000 and 14000 (both TCP and UDP) to IP 192.168.1.2. I have verified that port forwarding is working by testing it with PortForward.com's port checker tool, and with this website: http://www.yougetsignal.com/tools/open-ports/

    When I use the website while the applications are running on Laptop 1, the website reports that the port is open. If I then close the application, the website reports that the port is closed, which makes sense, as nothing is listening on my machine in the latter case. Also, if I disable port forwarding in my modem, the website reports the port as closed. So the website's results seem to be okay, and the same can be said of PortForward.com's port checker tool. So again, everything is okay so far.

    Now, here comes the problem! Despite the above tools reporting that port forwarding is working, I am unable to open the web interfaces from outside my network. For example, if I try to browse to 11.22.33.44:8765 or 11.22.33.44:13579, nothing opens in my browser. But if I access these web servers locally from Laptop 2, by typing 192.168.1.2:8765 or 192.168.1.2:13579, they open.

    So where is the problem here? The tools report unanimously that port forwarding is working, and yet I am unable to open the web interfaces from outside the network. Note that I have disabled the firewall on my computer, and have also made sure that any option in the above programs (whose web interfaces I am trying to open) that says only local connections are to be accepted is disabled. So what's the problem? Any ideas?
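    One hedged testing note (mine, not from the question): a browser test of 11.22.33.44:8765 only exercises the forwarding rule if the request really originates outside the LAN. From a genuinely external host, something like this is a quick check:

        curl -I http://11.22.33.44:8765/

    If that works externally but the same URL fails from Laptop 1 or Laptop 2, the likely culprit is that the modem doesn't support NAT loopback (hairpinning), a common limitation of ISP-branded DSL modems; in that case the port forwarding itself is fine.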

    Read the article

  • "Can't create table" when having to many partitions

    - by Chris
    I am currently having a problem I don't understand. Everywhere I look, it says MySQL (5.5) / InnoDB doesn't have a table limit. I wanted to test InnoDB compression and was about to create an empty copy of an existing table when I ran into the following problem.

    This one works:

        CREATE TABLE `hsc` (
          LOTS OF STUFF
        ) ENGINE=InnoDB CHARSET=utf8
        PARTITION BY RANGE (pid)
        SUBPARTITION BY HASH (cons)
        SUBPARTITIONS 2
        (PARTITION hsc_p0  VALUES LESS THAN (10000),
         PARTITION hsc_p1  VALUES LESS THAN (20000),
         PARTITION hsc_p2  VALUES LESS THAN (30000),
         PARTITION hsc_p3  VALUES LESS THAN (40000),
         PARTITION hsc_p4  VALUES LESS THAN (50000),
         PARTITION hsc_p40 VALUES LESS THAN (4000000)
        );

    This one doesn't:

        CREATE TABLE `hsc` (
          LOTS OF STUFF
        ) ENGINE=InnoDB CHARSET=utf8
        PARTITION BY RANGE (pid)
        SUBPARTITION BY HASH (cons)
        SUBPARTITIONS 2
        (PARTITION hsc_p0  VALUES LESS THAN (10000),
         PARTITION hsc_p1  VALUES LESS THAN (20000),
         PARTITION hsc_p2  VALUES LESS THAN (30000),
         PARTITION hsc_p3  VALUES LESS THAN (40000),
         PARTITION hsc_p4  VALUES LESS THAN (50000),
         PARTITION hsc_p5  VALUES LESS THAN (75000),
         PARTITION hsc_p6  VALUES LESS THAN (100000),
         PARTITION hsc_p7  VALUES LESS THAN (125000),
         PARTITION hsc_p8  VALUES LESS THAN (150000),
         PARTITION hsc_p9  VALUES LESS THAN (175000),
         PARTITION hsc_p40 VALUES LESS THAN (4000000)
        );

        ERROR 1005 (HY000): Can't create table 'hsc' (errno: 1)

    It's reproducible by removing a number of partitions and adding them again, and it does not have anything to do with the name of the table, as I tried various names. There is also enough empty space on the HDD:

        /dev/simfs 230G 26G 192G 12% /var/lib/mysql.mnt

    There should also be no limit on the partitions (http://dev.mysql.com/doc/refman/5.5/en/partitioning-limitations.html): "Maximum number of partitions. The maximum possible number of partitions for a given table (that does not use the NDB storage engine) is 1024. This number includes subpartitions."

    I have increased both open-files settings:

        show variables where variable_name LIKE '%open_files%';
        +-------------------+-------+
        | Variable_name     | Value |
        +-------------------+-------+
        | innodb_open_files | 512   |
        | open_files_limit  | 1536  |
        +-------------------+-------+

    No change. Any clues where I should start looking?

    UPDATE: The whole thing is running in an OpenVZ environment. I saw in /proc/user_beancounters that numflock was a problem, so I increased it, but the problem still persists. Maybe this helps:

        ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 515011
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 10240
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 515011
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

        cat /proc/user_beancounters
        Version: 2.5
        uid  resource       held     maxheld   barrier              limit                failcnt
        200: kmemsize       9309653  13357056  14372700             14790164             0
             lockedpages    0        1008      2048                 2048                 0
             privvmpages    675424   686528    1048576              1572864              0
             shmpages       33       673       21504                21504                0
             dummy          0        0         9223372036854775807  9223372036854775807  0
             numproc        49       90        240                  240                  0
             physpages      243761   246945    0                    9223372036854775807  0
             vmguarpages    0        0         1048576              1048576              0
             oomguarpages   81672    83305     1048576              1048576              0
             numtcpsock     6        8         360                  360                  0
             numflock       175      188       512                  512                  8
             numpty         1        9         16                   16                   0
             numsiginfo     0        48        256                  256                  0
             tcpsndbuf      104640   263912    1720320              2703360              0
             tcprcvbuf      98304    131072    1720320              2703360              0
             othersockbuf   32368    89304     1126080              2097152              0
             dgramrcvbuf    0        2312      262144               262144               0
             numothersock   19       28        360                  360                  0
             dcachesize     2285052  3624426   3409920              3624960              0
             numfile        616      870       9312                 9312                 0
             dummy          0        0         9223372036854775807  9223372036854775807  0
             dummy          0        0         9223372036854775807  9223372036854775807  0
             dummy          0        0         9223372036854775807  9223372036854775807  0
             numiptent      24       24        128                  128                  0
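    Two diagnostics that fit this situation (a sketch; the container ID 200 is taken from the beancounters output above, and the new limits are illustrative). errno 1 is EPERM, which points at the virtualization layer rather than MySQL itself, and the numflock failcnt of 8 shows the file-lock limit has already been hit at least once:

        perror 1
        # OS error code   1:  Operation not permitted

        # On the OpenVZ hardware node, not inside the container:
        vzctl set 200 --numflock 2048:2048 --save
        vzctl restart 200

    With innodb_file_per_table enabled (an assumption here), each partition and subpartition is its own file, so the 22-subpartition CREATE TABLE needs correspondingly more file locks; and if numflock was raised without restarting the container or mysqld, the running process may still be holding the old limit.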

    Read the article

  • ScriptAlias makes requests match too many Location blocks. What is going on?

    - by brain99
    We wish to restrict access on our development server to those users who have a valid SSL client certificate. We are running Apache 2.2.16 on Debian 6. However, for some sections (mainly git-http, set up with gitolite on https://my.server/git/) we need an exception, since many git clients don't support SSL client certificates.

    I have succeeded in requiring client certificate authentication for the server, and in adding exceptions for some locations. However, it seems this does not work for git. The current setup is as follows:

        SSLCACertificateFile ssl-certs/client-ca-certs.crt

        <Location />
            SSLVerifyClient require
            SSLVerifyDepth 2
        </Location>

        # this works
        <Location /foo>
            SSLVerifyClient none
        </Location>

        # this does not
        <Location /git>
            SSLVerifyClient none
        </Location>

    I have also tried an alternative solution, with the same results:

        # require authentication everywhere except /git and /foo
        <LocationMatch "^/(?!git|foo)">
            SSLVerifyClient require
            SSLVerifyDepth 2
        </LocationMatch>

    In both these cases, a user without a client certificate can perfectly well access my.server/foo/, but not my.server/git/ (access is refused because no valid client certificate is given). If I disable SSL client certificate authentication completely, my.server/git/ works fine.

    The ScriptAlias problem

    Gitolite is set up using the ScriptAlias directive. I have found that the problem occurs with any similar ScriptAlias:

        # Gitolite
        ScriptAlias /git/ /path/to/gitolite-shell/
        ScriptAlias /gitmob/ /path/to/gitolite-shell/

        # My test
        ScriptAlias /test/ /path/to/test/script/

    Note that /path/to/test/script is a file, not a directory, and the same goes for /path/to/gitolite-shell/. My test script simply prints out the environment, super simple:

        #!/usr/bin/perl
        print "Content-type:text/plain\n\n";
        print "TEST\n";
        @keys = sort(keys %ENV);
        foreach (@keys) {
            print "$_ => $ENV{$_}\n";
        }

    It seems that if I go to https://my.server/test/someLocation, any SSLVerifyClient directives are applied which sit in Location blocks matching either /test/someLocation or just /someLocation. If I have the following config:

        <LocationMatch "^/f">
            SSLVerifyClient require
            SSLVerifyDepth 2
        </LocationMatch>

    then the following URL requires a client certificate: https://my.server/test/foo. However, the following URL does not: https://my.server/test/somethingElse/foo

    Note that this only seems to apply to the SSL configuration. The following has no effect whatsoever on https://my.server/test/foo:

        <LocationMatch "^/f">
            Order allow,deny
            Deny from all
        </LocationMatch>

    However, it does block access to https://my.server/foo.

    This presents a major problem for cases where I have some project running at https://my.server/project (which has to require SSL client certificate authorization), and there is a git repository for that project at https://my.server/git/project which cannot require an SSL client certificate. Since the /git/project URL also gets matched against /project Location blocks, such a configuration seems impossible given my current findings.

    Question: why is this happening, and how do I solve my problem? In the end, I want to require SSL client certificate authorization for the whole server except for /git and /someLocation, with as minimal configuration as possible (so I don't have to modify the configuration each time something new is deployed or a new git repository is added).

    Note: I rewrote my question (instead of just adding more updates at the bottom) to take my new findings into account and hopefully make this clearer.
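    For reproducing this from the command line, a pair of curl probes makes the behaviour easy to compare (a generic sketch; the certificate file names are placeholders, not from the question):

        # without a client certificate - should be refused wherever SSLVerifyClient require applies
        curl -k -I https://my.server/git/project/info/refs

        # with a client certificate and key
        curl -k -I --cert client.crt --key client.key https://my.server/git/project/info/refs

    /info/refs is the first URL git's smart HTTP protocol requests, so it is a representative endpoint to probe.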

    Read the article

  • Hosed Windows 7 permissions

    - by Anthony
    Here is the most interesting thing I've noticed since the problems started: if I go into a Control Panel/system module (in this case the Resource Monitor) that has a "Check Online" type option, Firefox (my default browser) opens right up without a problem. But if I start Firefox from any shortcut (start menu, desktop, etc.), the Firefox process starts up (and the start menu icon starts glowing), only to end without notice a few seconds later.

    Possibly related: if I start up in Safe Mode (without networking, though I haven't tried with it yet), I can start FF or Chrome just fine, but if I attempt to open Chrome normally, I get a permissions error. Opera and Safari seem to be okay (mostly); Safari crashes when I try to download any files.

    All of the above leads me to believe that some (but clearly not all) core files have messed-up permissions. Or rather, that I no longer have permission; the system still does, based on Firefox opening without fail when the system initiates it. I've run MS Forefront once in normal mode, and Malwarebytes twice in normal mode and once in Safe Mode. One trojan was found and deleted, but the problem persists.

    Two other things worth mentioning. First, I accidentally duplicated my library. I thought I'd try to add the "Internet" folder to my start menu, next to Music and Downloads. The first advanced thing I tried was "create new library". I clearly misunderstood what this means: I thought it was a way to add virtual folders to the library (which I thought, in turn, would allow me to choose it as a link on the start menu), but instead it recreated my already existing user folder, AppData and all. I didn't notice this until today.

    Second, I tried setting permissions for my user folder to full control, recursively. Confused but not giving up, I thought I could maybe create a shortcut to the NetHood folder manually, but instead got hit with an access denied error. So I tried to change the permission levels for all subfolders of my user folder so that I had full control. I got several access denied errors along the way. At this point I gave up, went out, ended up caught in the rain and stuck on a friend's couch, and showed up late for work the next day. Thanks for nothing, Microsoft.

    When I finally got home today (20 hours later), I noticed that Firefox was acting really strange. I tried opening Chrome to see if the problem was client side or server side, and instead got the above-mentioned "you don't have permission to open this program" alert. And I think that's the whole story. Oh, I also did a system restore, but chose a point from this morning (an auto update), and it worked but the problem wasn't fixed. And then all the earlier restore points were gone.

    So the questions are: a) is there a way to set the admin and user privileges back to "default"? b) would this, in anyone's expert opinion, fix the problems I'm having? c) how come being logged in as an admin isn't the same as being logged in with admin privileges? It seems that half the time I have to use "run as administrator" for fairly standard things, because I'm being treated as me-the-user and not me-the-admin. Thanks for reading.
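    On question (a), a hedged note: Windows 7 has no single "reset all ACLs to factory defaults" switch, but icacls can reset a subtree to its inherited permissions, which is the usual first step for a mangled user profile. A sketch of the kind of command involved, run from an elevated prompt (the profile path is an example):

        icacls "C:\Users\Anthony" /reset /T /C

    /reset replaces explicit ACLs with inherited ones, /T recurses, and /C continues past files that error out. Worth researching before running, since it also discards any intentional custom permissions.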

    Read the article

  • Monitor randomly shutting down, computer accepting no input, need to restart to get working

    - by Sebastian Lamerichs
    First off, the spec list:

        OS: Windows 7 Ultimate 64-bit SP1
        CPU: i7-4820k @ 3.7GHz (stock)
        GPU: Two 3GB Radeon HD 7970s @ 1.05GHz
        Mobo: AsRock X79 Extreme6
        HDD: 2TB Seagate Barracuda 7200rpm
        RAM: 16GB quad-channel Kingston 1600MHz
        PSU: Antec HCG 900W
        Monitors: Acer S220HQL 1920x1080 + ViewSonic VA2251 1920x1080, plugged into different GPUs

    My problem is that, on a roughly daily basis, my monitors will turn off and not turn back on. My computer will still be running, with the GPU/CPU/case fans all still going, but the monitors will not turn back on. Additionally, it seems to cease all network activity, and it doesn't log any errors at all. I've verified that this is not a monitor issue: when I press the num/caps/scroll lock keys on my keyboard, the lights don't change, so the computer is clearly not accepting input.

    I have noticed a few other people on the internet with this problem, and some have claimed that it was solved by disabling PCI Express Link State Power Management, but the issue still occurs for me after doing this. Whilst my CPU and GPUs both run at 100% 24/7, the temperatures are certainly not at dangerous levels, with the CPU averaging 65°C and the GPUs averaging 70°C and 78°C. All components are brand new.

    I have tried forcing MSI Afterburner to start with Windows and to force a constant voltage, as this fixed the issue for a few days for another user, but he reported back saying that it had stopped working properly again, so I'm not putting too much faith in this. Many people have said to adjust the display sleep mode settings, but this clearly will not help, as the keyboard lights would still work if the monitors were the issue.

    The closest I can get to a log file for this issue is the following Folding@home log:

        14:45:21:WU01:FS00:0x17:Completed 1120000 out of 2000000 steps (56%)
        14:46:43:WU00:FS01:0x17:Completed 480000 out of 2000000 steps (24%)
        14:46:49:WU01:FS00:0x17:Completed 1140000 out of 2000000 steps (57%)
        14:48:30:WU01:FS00:0x17:Completed 1160000 out of 2000000 steps (58%)
        14:49:55:WU01:FS00:0x17:Completed 1180000 out of 2000000 steps (59%)

    As you can see, the second GPU (FS01) stops computation approximately three and a half minutes before the issue occurs (it should be completing 1% every 80-120 seconds), and the first GPU (FS00) continues for a few minutes more before the logs just end. As far as I can tell, the computer had a network failure around the time the first GPU stopped working; the latest IRC message I received from that window was at 14:47:58. That said, there may simply have been no messages between then and 14:50:00, so I'm going to connect a laptop to the same bouncer to double-check if it happens again.

    The GPUs functioned perfectly well in another computer for a significant period of time, so I'm fairly confident they aren't the issue, which means this is being caused by either software, the motherboard, or possibly RAM. I really hope it's software. I heard on a forum that there was a patch from Microsoft that fixed this problem, but "I've forgot which KB it was or the google search terms I used to find the patch, LOL.", so that's not much help; I haven't seen it mentioned by anyone else on about a dozen threads about this issue either.

    The computer is plugged in via a surge-protected power board, and I've run several other computers and pieces of hardware through it with no issues, so that is not the cause. I have just set the hard disk to never turn off, although I don't believe that will solve the issue.

    Strangely, this has only happened when I'm not at the computer (which is actually a minority of the time). Until today it had only happened after I had not been actively using the computer for 6 hours, but today it happened within 10-30 minutes of my last actively using it. I have enabled file logging in MSI Afterburner, so hopefully that will shed some light on the issue, but I'm not too optimistic. I've heard that it could be a motherboard problem, but I figured I should ask around before RMAing it. Any help?

    Read the article

  • How do I connect my Windows XP laptop to the internet?

    - by rubysiddhi
    Hello fellow super users,

    The Past

    I have an Acer TravelMate 2300 laptop running Windows XP. Six months ago I moved into a new apartment and got a new internet connection set up. After getting the connection installed, I reinstalled Windows XP and at the same time wiped my drive clean, losing all the original Acer software and drivers. Once XP was reinstalled, I had to find all the drivers again to get the TravelMate connected to the internet. So, using my Vista laptop, which was connected fine, I went to the Acer TravelMate Series drivers download page to download the necessary drivers. I transferred them to my Acer XP machine and installed them as best I could (there were no easy instructions, so I just had to find all the executables and run them). I eventually got connected to the internet, but not exactly in the way I had hoped.

    The Present

    To be connected to the internet, I need an Ethernet cord connecting my computer (via the Ethernet port) to my router. This is a problem, since it defeats the purpose of having a wireless LAN card in my Acer laptop. One of the programs I downloaded from the Acer TravelMate Series page was the Acer Wireless LAN Configuration Utility. This program lets me see the network I am currently connected to and all the networks I could potentially connect to. It reminds me of XP's Wireless Network Connection window/utility, where you can see all available wireless networks, refresh the network list, and connect to one of them.

    I should mention that my ISP set up a security-enabled wireless network with WPA. This network requires a network key if you want to connect to it. I guess my Vista computer has the network key entered into it already; the problem is that I do not know what the key is. Now, obviously, you would say just contact my ISP to get the key. And I will, but there is one extra weird issue: I am able to connect to another, unsecured wireless network in the Wireless Network Connection window/utility, but only as long as my Ethernet cable is plugged in. So this is not really wireless, is it? This indicates that even if I do get the network key from my ISP, I will only solve one of my two problems: I will still only be able to get online while connected to the router via the Ethernet cable.

    The Main Questions

    So how do I enable my Acer IPN2220 wireless LAN card so that I can use my Acer laptop from anywhere within my apartment? Or should I first get the network key from my ISP to access my security-enabled wireless network, and then deal with getting the IPN2220 working?

    Hard & Learned VS Easy & Stupid

    Of course, contacting the ISP would be easier: have them just come in here and do their thing. The problem with that is that they do not speak English (yeah, I'm in Poland) and it'd be a hell of a time trying to understand what they are doing (uncomfortable looking over their shoulder). Also, I want to learn how to do this task myself so that I can fix the problem if it ever happens again. You know, be more self-sufficient. I look forward to helpful replies. Thanks, Xaviour

    Read the article

  • HttpContext.Items and Server.Transfer/Execute

    - by Rick Strahl
    A few days ago my buddy Ben Jones pointed out that he ran into a bug in the ScriptContainer control in the West Wind Web and Ajax Toolkit. The problem was basically that when a Server.Transfer call was applied, the script container (and also various ClientScriptProxy script embedding routines) would potentially fail to load the specified scripts. It turns out the problem is due to the fact that the various components in the toolkit use request-specific singletons via a Current property. I use a static Current property tied to a Context.Items[] entry to handle this type of operation, which looks something like this:

        /// <summary>
        /// Current instance of this class which should always be used to
        /// access this object. There are no public constructors to
        /// ensure the reference is used as a Singleton to further
        /// ensure that all scripts are written to the same clientscript
        /// manager.
        /// </summary>
        public static ClientScriptProxy Current
        {
            get
            {
                if (HttpContext.Current == null)
                    return new ClientScriptProxy();

                ClientScriptProxy proxy = null;
                if (HttpContext.Current.Items.Contains(STR_CONTEXTID))
                    proxy = HttpContext.Current.Items[STR_CONTEXTID] as ClientScriptProxy;
                else
                {
                    proxy = new ClientScriptProxy();
                    HttpContext.Current.Items[STR_CONTEXTID] = proxy;
                }
                return proxy;
            }
        }

    The proxy is attached to a Context.Items[] item, which makes the instance request specific. This works perfectly fine in most situations EXCEPT when you're dealing with Server.Transfer/Execute requests. Server.Transfer doesn't cause Context.Items to be cleared, so both the transferred request and the original request's Context.Items collection apply. For the ClientScriptProxy this causes a problem, because script references are tracked per request in Context.Items to check for script duplication. Once a script is rendered, an ID is written into the Context collection and the script is considered 'rendered':

        // No dupes - ref script include only once
        if (HttpContext.Current.Items.Contains(STR_SCRIPTITEM_IDENTITIFIER + fileId))
            return;

        HttpContext.Current.Items.Add(STR_SCRIPTITEM_IDENTITIFIER + fileId, string.Empty);

    where fileId is the script name or unique identifier. The problem is that on the transferred page the item will already exist in Context, and so the script fails to render because the code thinks it has already rendered, based on the Context item. Bummer.

    The workaround for this is simple once you know what's going on, but in this case it was a bitch to track down, because the context items are used in many places throughout this class. The trick is to determine when a request is transferred and then remove the specific keys. The first issue is to determine whether a script is in a Transfer or Execute call:

        if (HttpContext.Current.CurrentHandler != HttpContext.Current.Handler)

    Context.Handler is the original handler and CurrentHandler is the actual currently executing handler that is running when a Transfer/Execute is active. You can also use Context.PreviousHandler to get the last handler, and chain through the whole list of handlers applied if Transfer calls are nested (god help the person debugging that).

    For the ClientScriptProxy, the full logic to check for a transfer and remove the keys looks like this:

        /// <summary>
        /// Clears all the request specific context items which are script references
        /// and the script placement index.
        /// </summary>
        public void ClearContextItemsOnTransfer()
        {
            if (HttpContext.Current != null)
            {
                // Check for Server.Transfer/Execute calls - we need to clear out Context.Items
                if (HttpContext.Current.CurrentHandler != HttpContext.Current.Handler)
                {
                    List<string> Keys = HttpContext.Current.Items.Keys
                        .Cast<string>()
                        .Where(s => s.StartsWith(STR_SCRIPTITEM_IDENTITIFIER) ||
                                    s == STR_ScriptResourceIndex)
                        .ToList();

                    foreach (string key in Keys)
                    {
                        HttpContext.Current.Items.Remove(key);
                    }
                }
            }
        }

    along with a small update to the Current property getter that sets a global flag to indicate whether the request was transferred:

        if (!proxy.IsTransferred &&
            HttpContext.Current.Handler != HttpContext.Current.CurrentHandler)
        {
            proxy.ClearContextItemsOnTransfer();
            proxy.IsTransferred = true;
        }
        return proxy;

    I know this is pretty ugly, but it works, and it's actually minimal fuss without affecting the behavior of the rest of the class. Ben had a different solution that involved explicitly clearing out the Context items and replacing the collection with a manually maintained list of items, which also works but required changes throughout the code. In hindsight, it would have been better to use a single object that encapsulates all the 'persisted' values and store that object in Context, instead of all these individual small morsels. Hindsight is always 20/20 though :-}.

    If possible use Page.Items

    ClientScriptProxy is a generic component that can be used from anywhere in ASP.NET, so there are various methods on this component that are not Page specific, which is why I used Context.Items rather than the Page.Items collection. Page.Items would be a better choice, since it sidesteps the above Server.Transfer nightmares: the Page is reloaded completely, so any new Page gets a new Items collection. No fuss there. So for the ScriptContainer control, which has to live on the page, the behavior is a little different. It is attached to Page.Items (since it's a control):

        /// <summary>
        /// Returns a current instance of this control if an instance
        /// is already loaded on the page. Otherwise a new instance is
        /// created, added to the Form and returned.
        ///
        /// It's important this function is not called too early in the
        /// page cycle - it should not be called before Page.OnInit().
        ///
        /// This property is the preferred way to get a reference to a
        /// ScriptContainer control that is either already on a page
        /// or needs to be created. Controls in particular should always
        /// use this property.
        /// </summary>
        public static ScriptContainer Current
        {
            get
            {
                // We need a context for this to work!
                if (HttpContext.Current == null)
                    return null;

                Page page = HttpContext.Current.CurrentHandler as Page;
                if (page == null)
                    throw new InvalidOperationException(
                        Resources.ERROR_ScriptContainer_OnlyWorks_With_PageBasedHandlers);

                ScriptContainer ctl = null;

                // Retrieve the current instance
                ctl = page.Items[STR_CONTEXTID] as ScriptContainer;
                if (ctl != null)
                    return ctl;

                ctl = new ScriptContainer();
                page.Form.Controls.Add(ctl);
                return ctl;
            }
        }

    The biggest issue with this approach is that you have to explicitly retrieve the Page in the static Current property. Notice again the use of CurrentHandler (rather than Handler, which was my original implementation) to ensure you get the latest page, including the one that Server.Transfer fired.

    Server.Transfer and Server.Execute are Evil

    All that said, this fix is probably for the 2 people who are crazy enough to rely on Server.Transfer/Execute. :-} There are so many weird behavior problems with these commands that I avoid them at all costs. I don't think I have a single application that uses either of these commands…

    Related Resources:
    - Full source of ClientScriptProxy.cs (repository)
    - Part of the West Wind Web Toolkit
    - Static Singletons for ASP.NET Controls

    Post © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET
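    To see the root cause in isolation, here is a minimal sketch (hypothetical page names, not code from the toolkit) showing that Server.Transfer keeps the same HttpContext, and with it the same Items collection:

        // Source.aspx.cs
        protected void Page_Load(object sender, EventArgs e)
        {
            // Mark this request before transferring
            HttpContext.Current.Items["scriptRendered_jquery"] = string.Empty;
            Server.Transfer("Target.aspx");
        }

        // Target.aspx.cs - runs inside the SAME HttpContext
        protected void Page_Load(object sender, EventArgs e)
        {
            // true: the marker set by Source.aspx survived the transfer,
            // which is exactly why the duplicate-script check misfires
            bool alreadyMarked =
                HttpContext.Current.Items.Contains("scriptRendered_jquery");
        }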

    Read the article

  • URL Rewrite – Multiple domains under one site. Part II

    - by OWScott
    I believe I have it … I've been meaning to put together the ultimate outgoing rule for hosting multiple domains under one site. I finally sat down this week, set up a few test cases, and created one rule to rule them all. In Part I of this two-part series, I covered the incoming rule necessary to host a site in a subfolder of a website while making it appear as if it's in the root of the site. Part II won't work without applying Part I first, so if you haven't read it, I encourage you to read it now.

    However, the incoming rule by itself doesn't address everything. Here's the problem. Let's say that we host www.site2.com in a subfolder called site2, off of masterdomain.com; this is the same example I used in Part I. Using an incoming rewrite rule, we are able to make a request to www.site2.com even though the site is really in the /site2 folder. The gotcha comes with any type of path that ASP.NET generates (I'm sure other scripting technologies could do the same too). ASP.NET thinks that the path to the root of the site is /site2, but the URL is /. See the issue? If ASP.NET generates a path or a redirect for us, it will always add /site2 to the URL. That results in a path that looks something like www.site2.com/site2.

    In Part I, I mentioned that you should add a condition where "{PATH_INFO} 'does not match' /site2". That allows www.site2.com/site2 and www.site2.com to both function the same, which means the site always works; but if you want to hide /site2 in the URL, you need to take it one step further.

    One way to address this is in your code, and ultimately this is the best bet. Ruslan Yakushev has a great article on a few considerations that you can address in code; I recommend giving that serious consideration. Additionally, if you have upgraded to ASP.NET 3.5 SP1 or greater, it takes care of some of the references automatically for you.

    However, what if you inherit an existing application, or you can't easily go through your existing site and make the code changes? If this applies to you, read on. That's where URL Rewrite 2.0 comes in. With URL Rewrite 2.0, you can create an outgoing rule that removes the /site2 before the page is sent back to the user. This means that you can take an existing application, host it in a subfolder of your site, and ensure that the URL never reveals that it's in a subfolder.

    Performance Considerations

    Performance overhead is something to be mindful of. These outbound rules aren't simply changing server variables: the first rule I'll cover below needs to parse the HTML body and pull out the path (i.e. /site2) on the way through. This adds overhead, possibly significant if you have large pages and a busy site. In other words, your mileage may vary and you may need to test the impact these rules have. Don't worry too much though; for many sites, the performance impact is negligible.

    So, how do we do it?

    Solving the URL Problem - Outgoing Rule #1

    Let's create a rule that removes /site2 from any URL. We want to remove it from relative URLs like /site2/something, and from absolute URLs like http://www.site2.com/site2/something. Most URLs that ASP.NET creates will be relative, but I figure some applications may piece together a full URL, so we might as well expect that situation too.

    Let's get started. First, create a new outbound rule. You can create the rule within the /site2 folder, which will reduce the performance impact of the rule. Just a reminder that incoming rules for this situation won't work in a subfolder, but outgoing rules will. Give it a name that makes sense to you, for example "Outgoing - URL paths".

    Precondition: if you place the rule in the subfolder, it will only run for that site and folder, so there is no need for a precondition; run it for all requests. If you place it in the root of the site, you may want to create a precondition for HTTP_HOST = ^(www\.)?site2\.com$.

    For the Match section, there are a few things to consider. For performance reasons, it's best to match the fewest elements needed to accomplish the task. For my test cases, I just needed to rewrite the <a /> tag, but you may need to rewrite any number of HTML elements. Note that as long as you have the "{PATH_INFO} does not match /site2" condition in your incoming rule, as I described in Part I, elements that don't show their URL, like your images, will work without removing /site2 from them; that reduces the processing needed for this rule. Leave the "matching scope" at "Response" and choose the elements that you want to change.

    Set the pattern to "^(?:site2|(.*//[_a-zA-Z0-9-\.]*)?/site2)(.*)". Make sure to replace 'site2' with your subfolder name in both places. Yes, I realize this is a pretty messy-looking rule, but it handles a few situations. This rule will handle the following cases correctly:

        Original                                           Rewritten using {R:1}{R:2}
        http://www.site2.com/site2/default.aspx            http://www.site2.com/default.aspx
        http://www.site2.com/folder1/site2/default.aspx    Won't rewrite, since it's a sub-sub folder
        /site2/default.aspx                                /default.aspx
        site2/default.aspx                                 /default.aspx
        /folder1/site2/default.aspx                        Won't rewrite, since it's a sub-sub folder

    For the conditions section, you can leave that be. Finally, for the rule, set the Action Type to "Rewrite" and set the Value to "{R:1}{R:2}". The {R:1} and {R:2} are back references to the sections within parentheses; in other words, in http://domain.com/site2/something, {R:1} will be http://domain.com and {R:2} will be /something.

    If you view your rule from your web.config file (or applicationHost.config if it's a global rule), it should look like this:

        <rule name="Outgoing - URL paths" enabled="true">
            <match filterByTags="A" pattern="^(?:site2|(.*//[_a-zA-Z0-9-\.]*)?/site2)(.*)" />
            <action type="Rewrite" value="{R:1}{R:2}" />
        </rule>

    Solving the Redirect Problem - Outgoing Rule #2

    The second issue we can run into is a client-side redirect. This is triggered by a LOCATION response header that is sent to the client. Forms authentication is a good example of when this occurs: try to password-protect a site running from a subfolder using forms auth, and you'll quickly find that the URL becomes www.site2.com/site2 again. Notice the extra paths in my test case:

        http://site2.com/site2/login.aspx?ReturnUrl=%2fsite2%2fdefault.aspx

    I want to remove /site2 from both the URL and the ReturnUrl querystring value. For semi-readability, let's do this in 2 separate rules, one for the URL and one for the querystring.

    Create a second rule. As with the previous rule, it can be created in the /site2 subfolder. In the URL Rewrite wizard, select Outbound Rules -> "Blank Rule" and fill in the following information:

        Name:                   response_location URL
        Precondition:           Don't set
        Match: Matching Scope   Server Variable
        Match: Variable Name    RESPONSE_LOCATION
        Match: Pattern          ^(?:site2|(.*//[_a-zA-Z0-9-\.]*)?/site2)(.*)
        Conditions:             Don't set
        Action Type:            Rewrite
        Action Properties:      {R:1}{R:2}

    It should end up like so:

        <rule name="response_location URL">
            <match serverVariable="RESPONSE_LOCATION" pattern="^(?:site2|(.*//[_a-zA-Z0-9-\.]*)?/site2)(.*)" />
            <action type="Rewrite" value="{R:1}{R:2}" />
        </rule>

    Outgoing Rule #3

    Outgoing Rule #2 only takes care of the URL path, not the querystring path. Let's create one final rule to take care of the path in the querystring, ensuring that ReturnUrl=%2fsite2%2fdefault.aspx gets rewritten to ReturnUrl=%2fdefault.aspx. (The %2f is the URL encoding for a forward slash.)

    Create a rule like the previous one, but with the following settings:

        Name:                   response_location querystring
        Precondition:           Don't set
        Match: Matching Scope   Server Variable
        Match: Variable Name    RESPONSE_LOCATION
        Match: Pattern          (.*)%2fsite2(.*)
        Conditions:             Don't set
        Action Type:            Rewrite
        Action Properties:      {R:1}{R:2}

    The config should look like this:

        <rule name="response_location querystring">
            <match serverVariable="RESPONSE_LOCATION" pattern="(.*)%2fsite2(.*)" />
            <action type="Rewrite" value="{R:1}{R:2}" />
        </rule>

    It's possible to squeeze the last two rules into one, but it gets kind of confusing, so I felt it was better to show them as two separate rules.

    Summary

    With the rules covered in these two parts, we're able to have a site in a subfolder and make it appear as if it's in the root of the site. Not only that, we can overcome the automatic redirecting caused by ASP.NET, other scripting technologies, and especially existing applications.

    Following is an example of the incoming and outgoing rules necessary for a site called www.site2.com hosted in a subfolder called /site2. Remember that the outgoing rules can be placed in the /site2 folder instead of in the root of the site.

        <rewrite>
            <rules>
                <rule name="site2.com in a subfolder" enabled="true" stopProcessing="true">
                    <match url=".*" />
                    <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                        <add input="{HTTP_HOST}" pattern="^(www\.)?site2\.com$" />
                        <add input="{PATH_INFO}" pattern="^/site2($|/)" negate="true" />
                    </conditions>
                    <action type="Rewrite" url="/site2/{R:0}" />
                </rule>
            </rules>
            <outboundRules>
                <rule name="Outgoing - URL paths" enabled="true">
                    <match filterByTags="A" pattern="^(?:site2|(.*//[_a-zA-Z0-9-\.]*)?/site2)(.*)" />
                    <action type="Rewrite" value="{R:1}{R:2}" />
                </rule>
                <rule name="response_location URL">
                    <match serverVariable="RESPONSE_LOCATION" pattern="^(?:site2|(.*//[_a-zA-Z0-9-\.]*)?/site2)(.*)" />
                    <action type="Rewrite" value="{R:1}{R:2}" />
                </rule>
                <rule name="response_location querystring">
                    <match serverVariable="RESPONSE_LOCATION" pattern="(.*)%2fsite2(.*)" />
                    <action type="Rewrite" value="{R:1}{R:2}" />
                </rule>
            </outboundRules>
        </rewrite>

    If you run into any situations that aren't caught by these rules, please let me know so I can update this post to be as complete as possible. Happy URL rewriting!

    Read the article
