Search Results

Search found 25198 results on 1008 pages for 'failed request tracing'.

Page 292/1008 | < Previous Page | 288 289 290 291 292 293 294 295 296 297 298 299  | Next Page >

  • 401 - Unauthorized On Server 2008 R2 IIS 7.5

    - by mxmissile
    I have a web application deployed to a Server 2008 IIS 7.5 box. From a remote machine it gives this error: 401 - Unauthorized: Access is denied due to invalid credentials. (remote = desktops on the same LAN) I have tried several remote clients using different browsers, all with the same result (IE, FF, and Chrome). Hitting the application from the desktop of the server itself works flawlessly. However, I have not tried Firebug on the server desktop. I would assume it's still issuing a 401 status code yet returning the content anyway. See Update #2. The application is using Anonymous Authentication. The application is written in .NET 4.0 ASP.NET using the MVC framework. Static content works fine, for example: http://server.com/content/image.jpg Sysinternals procmon returns these 2 results for each request: FAST IO DISALLOWED and PATH NOT FOUND. I have 2 other MVC apps running fine on the same server. I have checked the security on the folders and they all match. The app runs fine on a Server 2008 IIS 7.0 box. Nothing shows up in the Event log on the server related to this. Pulling my hair out here, any troubleshooting tips? UPDATE #1: This just gets more WTF as I dig. If I click on the application in IIS Manager - Error Pages - Edit Feature Settings and select Detailed Errors, the app works remotely. I'm not leaving this on, so the problem is not solved yet, it's just more confusing. UPDATE #2: Using Firebug, I see that the Status is still 401 Unauthorized, but the Response is returning the application's correct HTML. UPDATE #3: Playing around with Failed Request Tracing, here is the WARNING Request Trace that is causing the 401: ModuleName ManagedPipelineHandler Notification 128 HttpStatus 401 HttpReason Unauthorized HttpSubStatus 0 ErrorCode 0 ConfigExceptionInfo Notification EXECUTE_REQUEST_HANDLER ErrorCode The operation completed successfully. (0x0) UPDATE #4: The regular IIS log is showing this: #Software: Microsoft Internet Information Services 7.5 #Version: 1.0 #Date: 2010-07-20 19:17:22 #Fields: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) sc-status sc-substatus sc-win32-status time-taken 2010-07-20 19:17:22 10.10.1.10 GET /Purchasing/Home - 80 - 10.10.1.12 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US;+rv:1.9.2.6)+Gecko/20100625+Firefox/3.6.6 401 0 0 4414
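    As a starting point, here is a minimal sketch of the checks I would run next from an elevated command prompt; the site and application names below are placeholders, and the assumption (unconfirmed) is that either the anonymous authentication setting or the ASP.NET 4.0 registration is at fault:

      rem Verify anonymous authentication is actually enabled for this application
      %windir%\system32\inetsrv\appcmd.exe list config "Default Web Site/Purchasing" /section:system.webServer/security/authentication/anonymousAuthentication

      rem Re-register ASP.NET 4.0 with IIS in case the managed handler mappings are incomplete
      %windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -iru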

    Read the article

  • Change /usr/lib to /usr/lib32 for eclipse to look for *.so files

    - by firen
    I am trying to run Eclipse and I am getting: /usr/lib/gio/modules/libgvfsdbus.so: wrong ELF class: ELFCLASS64 Failed to load module: /usr/lib/gio/modules/libgvfsdbus.so I already found out that this is because the library is 64-bit. I found a 32-bit version of it and put it in a subdirectory of /usr/lib32, but Eclipse does not look for it there. How can I make it look for libraries in /usr/lib32?
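    One workaround worth trying, assuming the 32-bit copy ends up under /usr/lib32/gio/modules (that path is an assumption, adjust it to wherever the file was placed), is to point GLib at that module directory when launching Eclipse:

      # tell GLib/GIO to load its modules from the 32-bit directory instead of /usr/lib
      GIO_MODULE_DIR=/usr/lib32/gio/modules ./eclipse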

    Read the article

  • Configuring Apache reverse proxy

    - by Martin
    I have a load balancer server and edge servers. I am trying to configure a reverse proxy in order to hide the backend servers PL1, PL2 and PL3. PL1, PL2 and PL3 are not located in the same subnet; they are located in different locations. PL1 Lb1 -> PL2 PL3 I tried to configure an Apache reverse proxy, but it is not sending requests to PL1, PL2 and PL3. The reverse proxy worked only when I configured Apache to send requests to the local server on another port. ProxyRequests Off <Proxy *> Order deny,allow Allow from all </Proxy> ProxyPass /PL1 http://PL1server.com/ ProxyPassReverse /PL1 http://PL1server.com/ The above configuration did not work. Could you help me solve the issue? Or are there other proxy types, like Squid or SOCKS5, that would solve it? Does the reverse proxy fail if we use an IP address or a domain URL in ProxyPass and ProxyPassReverse?
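    Before going further it may be worth confirming that the proxy modules are loaded, the syntax is valid, and the backend is reachable from the load balancer itself - a quick check, assuming a Debian/Ubuntu-style Apache layout:

      # enable the HTTP reverse proxy modules and validate the configuration
      sudo a2enmod proxy proxy_http
      sudo apachectl configtest
      sudo service apache2 restart

      # confirm the backend answers when contacted directly from Lb1
      curl -I http://PL1server.com/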

    Read the article

  • Taking web sites offline for demonstration

    While working in software development in general, and in web development for a couple of customers it is quite common that it is necessary to provide a test bed where the client is able to get an image, or better said, a feeling for the visions and ideas you are talking about. Usually here at IOS Indian Ocean Software Ltd. we set up a demo web site on one of our staging servers, and provide credentials to the customer to access and review our progress and work ad hoc. This gives us the highest flexibility on both sides, as the test bed is simply online and available 24/7. We can update the structure, the UI and data at any time, and the client is able to view it as it suits best for her/him. Limited or lack of online connectivity But what is going to happen when your client is not capable to be online - no matter for what reasons; here are some more obvious ones: No internet connection (permanently or temporarily) Expensive connection, ie. mobile data package, stay at a hotel, etc. Presentation devices at an exhibition, ie. using tablets or iPads Being abroad for a certain time, and only occasionally online No network coverage, especially on mobile Bad infrastructure, like ie. in Third World countries Providing a catalogue on CD or USB pen drive Anyway, it doesn't matter really. We should be able to provide a solution for the circumstances of our customers. Presentation during an exhibition Recently, we had the following request from a customer: Is it possible to let us have a desktop version of ResortWork.co.uk that we can use for demo purposes at the forthcoming Ski Shows? It would allow us to let stand visitors browse the sites on an iPad to view jobs and training directory course listings. Yes, sure we can do that. Eventually, you might think why don't they simply use 3G enabled iPads for that purpose? As stated above, there might be several reasons for that - low coverage, expensive data packages, etc. Anyway, it is not a question on how to circumvent the request but to deliver a solution to that. Possible solutions... or not? We already did offline websites earlier, and even established complete mirrors of one or two web sites on our systems. There are actually several possibilities to handle this kind of request, and it mainly depends on the system or device where the offline site should be available on. Here, it is clearly expressed that we have to address this on an Apple iPad, well actually, I think that they'd like to use multiple devices during their exhibitions. Following is an overview of possible solutions depending on the technology or device in use, and how it can be done: Replication of source files and database The above mentioned web site is running on ASP.NET, IIS and SQL Server. In case that a laptop or slate runs a Windows OS, the easiest way would be to take a snapshot of the source files and database, and transfer them as local installation to those Windows machines. This approach would be fully operational on the local machine. Saving pages for offline usage This is actually a quite tedious job but still practicable for small web sites Tool based approach to 'harvest' the web site There quite some tools in the wild that could handle this job, namely wget, httrack, web copier, etc. Screenshots bundled as PDF document Not really... ;-) Creating screencast or video Simply navigate through your website and record your desktop session. 
Actually, we are using this kind of approach to track down difficult problems in order to see and understand exactly what the user was doing to cause an error. Of course, this list isn't complete and I'd love to get more of your ideas in the comments section below the article. Preparations for offline browsing The original website is dynamically and data-driven by ASP.NET, and looks like this: As we have to put the result onto iPads we are going to choose the tool-based approach to 'download' the whole web site for offline usage. Again, depending on the complexity of your web site you might have to check which of the applications produces the best results for you. My usual choice is to use wget but in this case, we run into problems related to the rewriting of hyperlinks. As a consequence of that we opted for using HTTrack. HTTrack comes in different flavours, like console application but also as either GUI (WinHTTrack on Windows) or Web client (WebHTTrack on Linux/Unix/BSD). Here's a brief description taken from the original website about HTTrack: HTTrack is a free (GPL, libre/free software) and easy-to-use offline browser utility. It allows you to download a World Wide Web site from the Internet to a local directory, building recursively all directories, getting HTML, images, and other files from the server to your computer. HTTrack arranges the original site's relative link-structure. Simply open a page of the "mirrored" website in your browser, and you can browse the site from link to link, as if you were viewing it online. And there is an extensive documentation for all options and switches online. General recommendation is to go through the HTTrack Users Guide By Fred Cohen. It covers all the initial steps you need to get up and running. Be aware that it will take quite some time to get all the necessary resources down to your machine. Actually, for our customer we run the tool directly on their web server to avoid unnecessary traffic and bandwidth. After a couple of runs and some additional fine-tuning - explicit inclusion or exclusion of various external linked web sites - we finally had a more or less complete offline version available. A very handsome feature of HTTrack is the error/warning log after completing the download. It contains some detailed information about errors that appeared on the pages and the links within the pages that have been processed. Error: "Bad Request" (400) at link www.resortwork.co.uk/job-details_Ski_hire:tech_or_mgr_or_driver_37854.aspx (from www.resortwork.co.uk/Jobs_A_to_Z.aspx)Error: "Not Found" (404) at link www.247recruit.net/images/applynow.png (from www.247recruit.net/css/global.css)Error: "Not Found" (404) at link www.247recruit.net/activate.html (from www.247recruit.net/247recruit_tefl_jobs_network.html) In our situation, we took the records of HTTP 400/404 errors and passed them to the web development department. Improvements are to be expected soon. ;-) Quality assurance on the full-featured desktop Unfortunately, the generated output of HTTrack was still incomplete but luckily there were only images missing. Being directly on the web server we simply copied the missing images from the original source folder into our offline version. After that, we created an archive and transferred the file securely to our local workspace for further review and checks. 
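    As an aside, the HTTrack run described above can also be driven entirely from the command line - roughly like the following, where the output directory and filters are illustrative only:

      # mirror the site into a local folder, staying on the main domain but
      # explicitly allowing the externally hosted assets referenced by the pages
      httrack "http://www.resortwork.co.uk/" -O "/data/offline/resortwork" \
          "+*.resortwork.co.uk/*" "+www.247recruit.net/*" -v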
From that point on, it wasn't necessary to get any more files from the original web server, and we could focus ourselves completely on the process of browsing and navigating through the offline version to isolate visual differences and functional problems. As said, the original web site runs on ASP.NET Web Forms and uses Postback calls for interaction like search, pagination and partly for navigation. This is the main field of improving the offline experience. Of course, same as for standard web development it is advised to test with various browsers, and strangely we discovered that the offline version looked pretty good on Firefox, Chrome and Safari, but not in Internet Explorer. A quick look at the HTML source shed some light on this, and there are conditional CSS inclusions based on the user agent. HTTrack is not acting as Internet Explorer and so we didn't have the necessary overrides for this browser. Not problematic after all in our case, but you might have to pay attention to this and get the IE-specific files explicitly. And while having a view at the source code, we also found out that HTTrack actually modifies the generated HTML output. In several occasions we discovered that <div> elements were converted into <table> constructs for no obvious reason; even nested structures. Search 'e'nd destroy - sed (or Notepad++) to the rescue During our intensive root cause analysis for a couple of HTML/CSS problems that needed some extra attention it is very helpful to be familiar with any editor that allows search and replace over multiple files like, ie. sed - stream editor for filtering and transforming text on Linux or my personal favourite Notepad++ on Windows. This allowed us to quickly fix a lot of anchors with onclick attributes and Javascript code that was addressed to ASP.NET files instead of their generated HTML counterparts, like so: grep -lr -e '.aspx' * | xargs sed -e 's/.aspx/.html?/g' The additional question mark after the HTML extension helps to separate the query string from the actual target and solved all our missing hyperlinks very fast. The same can be done in Notepad++ on Windows, too. Just use the 'Replace in files' feature and you are settled. Especially, in combination with Regular Expressions (regex). Landscape of browsers Okay, after several runs of HTML/CSS code analysis, searching and replacing some strings in a pool of more than 4.000 files, we finally had a very good match of an offline browsing experience in Firefox and Chrome on Linux. Next, we transferred that modified set of files to a Windows 8 machine for review on Firefox, Chrome and Internet Explorer 7 to 10, and a Mac mini running Mac OS X 10.7 to check the output on Safari and again on Chrome. Besides IE, for reasons already mentioned above, the results were identical. And last but not least it was about to check web site on tablets. Please continue to read on the following articles: Taking web sites offline for demonstration on Galaxy Tablet Taking web sites offline for demonstration on iPad

    Read the article

  • EM12c Release 4: Database as a Service Enhancements

    - by Adeesh Fulay
    Oracle Enterprise Manager 12.1.0.4 (or simply put EM12c R4) is the latest update to the product. As previous versions, this release provides tons of enhancements and bug fixes, attributing to improved stability and quality. One of the areas that is most exciting and has seen tremendous growth in the last few years is that of Database as a Service. EM12c R4 provides a significant update to Database as a Service. The key themes are: Comprehensive Database Service Catalog (includes single instance, RAC, and Data Guard) Additional Storage Options for Snap Clone (includes support for Database feature CloneDB) Improved Rapid Start Kits Extensible Metering and Chargeback Miscellaneous Enhancements 1. Comprehensive Database Service Catalog Before we get deep into implementation of a service catalog, lets first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have been widely popular in the space of cloud computing, primarily as the medium to provide standardized and pre-approved service definitions. There is already some good collateral out there that talks about Oracle database service catalogs. The two whitepapers i recommend reading are: Service Catalogs: Defining Standardized Database Service High Availability Best Practices for Database Consolidation: The Foundation for Database as a Service [Oracle MAA] EM12c comes with an out-of-the-box service catalog and self service portal since release 1. For the customers, it provides the following benefits: Present a collection of standardized database service definitions, Define standardized pools of hardware and software for provisioning, Role based access to cater to different class of users, Automated procedures to provision the predefined database definitions, Setup chargeback plans based on service tiers and database configuration sizes, etc Starting Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability - Single Instance (SI) or Real Application Clusters (RAC) databases with multiple data guard based standby databases. Some salient points of the data guard integration: Standby pools can now be defined across different datacenters or within the same datacenter as the primary (this helps in modelling the concept of near and far DR sites) The standby databases can be single instance, RAC, or RAC One Node databases Multiple standby databases can be provisioned, where the maximum limit is determined by the version of database software The standby databases can be in either mount or read only (requires active data guard option) mode All database versions 10g to 12c supported (as certified with EM 12c) All 3 protection modes can be used - Maximum availability, performance, security Log apply can be set to sync or async along with the required apply lag The different service levels or service tiers are popularly represented using metals - Platinum, Gold, Silver, Bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combinations from the table below:  Primary  Standby [1 or more]  EM 12cR4  SI  -  SI  SI  RAC -  RAC SI  RAC RAC  RON -  RON RON where RON = RAC One Node is supported via custom post-scripts in the service template A sample service catalog would look like the image below. 
Here we have defined 4 service levels, which have been deployed across 2 data centers, and have 3 standardized sizes. Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer would come up with their own catalog based on the application requirements, their RTO/RPO goals, and the product licenses they own. In the screenwatch titled 'Build Service Catalog using EM12c DBaaS', I walk through the complete steps required to setup this sample service catalog in EM12c. 2. Additional Storage Options for Snap Clone In my previous blog posts, i have described the snap clone feature in detail. Essentially, it provides a storage agnostic, self service, rapid, and space efficient approach to solving your data cloning problems. The net benefit is that you get incredible amounts of storage savings (on average 90%) all while cloning databases in a matter of minutes. Space and Time, two things enterprises would love to save on. This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in server, storage, and software. With this in mind, we have pursued with the dual solution approach of Hardware and Software. In the hardware approach, we connect directly to your storage appliances and perform all low level actions required to rapidly clone your databases. While in the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low level actions. Thus delivering the benefits of database thin cloning, without requiring you to drastically changing the infrastructure or IT's operating style. In release 4, we expand the scope of options supported by snap clone with the addition of database CloneDB. While CloneDB is not a new feature, it was first introduced in 11.2.0.2 patchset, it has over the years become more stable and mature. CloneDB leverages a combination of Direct NFS (or dNFS) feature of the database, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the snap clone feature. For more information on cloneDB, i highly recommend reading the following sources: Blog by Tim Hall: Direct NFS (DNFS) CloneDB in Oracle Database 11g Release 2 Oracle OpenWorld Presentation by Cern: Efficient Database Cloning using Direct NFS and CloneDB The advantages of the new CloneDB integration with EM12c Snap Clone are: Space and time savings Ease of setup - no additional software is required other than the Oracle database binary Works on all platforms Reduce the dependence on storage administrators Cloning process fully orchestrated by EM12c, and delivered to developers/DBAs/QA Testers via the self service portal Uses dNFS to delivers better performance, availability, and scalability over kernel NFS Complete lifecycle of the clones managed by EM12c - performance, configuration, etc 3. Improved Rapid Start Kits DBaaS deployments tend to be complex and its setup requires a series of steps. These steps are typically performed across different users and different UIs. The Rapid Start Kit provides a single command solution to setup Database as a Service (DBaaS) and Pluggable Database as a Service (PDBaaS). 
One command creates all the Cloud artifacts like Roles, Administrators, Credentials, Database Profiles, PaaS Infrastructure Zone, Database Pools and Service Templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self service portal. Rapid start kit can create complex topologies involving multiple zones, pools and service templates. It also supports standby databases and use of RMAN image backups. The Rapid Start Kit in reality is a simple emcli script which takes a bunch of xml files as input and executes the complete automation in a matter of seconds. On a full rack Exadata, it took only 40 seconds to setup PDBaaS end-to-end. This kit works for both Oracle's engineered systems like Exadata, SuperCluster, etc and also on commodity hardware. One can draw parallel to the Exadata One Command script, which again takes a bunch of inputs from the administrators and then runs a simple script that configures everything from network to provisioning the DB software. Steps to use the kit: The kit can be found under the SSA plug-in directory on the OMS: EM_BASE/oracle/MW/plugins/oracle.sysman.ssa.oms.plugin_12.1.0.8.0/dbaas/setup It can be run from this default location or from any server which has emcli client installed For most scenarios, you would use the script dbaas/setup/database_cloud_setup.py For Exadata, special integration is provided to reduce the number of inputs even further. The script to use for this scenario would be dbaas/setup/exadata_cloud_setup.py The database_cloud_setup.py script takes two inputs: Cloud boundary xml: This file defines the cloud topology in terms of the zones and pools along with host names, oracle home locations or container database names that would be used as infrastructure for provisioning database services. This file is optional in case of Exadata, as the boundary is well know via the Exadata system target available in EM. Input xml: This file captures inputs for users, roles, profiles, service templates, etc. Essentially, all inputs required to define the DB services and other settings of the self service portal. Once all the xml files have been prepared, invoke the script as follows for PDBaaS: emcli @database_cloud_setup.py -pdbaas -cloud_boundary=/tmp/my_boundary.xml -cloud_input=/tmp/pdb_inputs.xml          The script will prompt for passwords a few times for key users like sysman, cloud admin, SSA admin, etc. Once complete, you can simply log into EM as the self service user and request for databases from the portal. More information available in the Rapid Start Kit chapter in Cloud Administration Guide.  4. Extensible Metering and Chargeback  Last but not the least, Metering and Chargeback in release 4 has been made extensible in all possible regards. The new extensibility features allow customer, partners, system integrators, etc to : Extend chargeback to any target type managed in EM Promote any metric in EM as a chargeback entity Extend list of charge items via metric or configuration extensions Model abstract entities like no. of backup requests, job executions, support requests, etc  A slew of emcli verbs have also been added that allows administrators to create, edit, delete, import/export charge plans, and assign cost centers all via the command line. More information available in the Chargeback API chapter in Cloud Administration Guide. 5. Miscellaneous Enhancements There are other miscellaneous, yet important, enhancements that are worth a mention. 
These mostly have been asked by customers like you. These are: Custom naming of DB Services Self service users can provide custom names for DB SID, DB service, schemas, and tablespaces Every custom name is validated for uniqueness in EM 'Create like' of Service Templates Now creating variants of a service template is only a click away. This would be vital when you publish service templates to represent different database sizes or service levels. Profile viewer View the details of a profile like datafile, control files, snapshot ids, export/import files, etc prior to its selection in the service template Cleanup automation - for failed and successful requests Single emcli command to cleanup all remnant artifacts of a failed request Cleanup can be performed on a per request bases or by the entire pool As an extension, you can also delete successful requests Improved delete user workflow Allows administrators to reassign cloud resources to another user or delete all of them Support for multiple tablespaces for schema as a service In addition to multiple schemas, user can also specify multiple tablespaces per request I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback. Good luck! References: Cloud Management Page on OTN Cloud Administration Guide [Documentation] -- Adeesh Fulay (@adeeshf)

    Read the article

  • Installing ntop on Ubuntu 12.04

    - by George Ninan
    When I try to start ntop I get the following error - Secure Connection Failed An error occurred during a connection to 192.168.166.229:3000. SSL received a record that exceeded the maximum permissible length. (Error code: ssl_error_rx_record_too_long) The page you are trying to view cannot be shown because the authenticity of the received data could not be verified. Please contact the website owners to inform them of this problem. Alternatively, use the command found in the help menu to report this broken site. Please advise.
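    That particular error usually means an https:// URL is being pointed at a plain-HTTP port. A sketch of what I would try, assuming ntop's usual split of HTTP and HTTPS onto separate ports (the port numbers are only an example):

      # serve plain HTTP on 3000 and HTTPS on 3001, then browse with the matching scheme
      sudo ntop -w 3000 -W 3001
      # i.e. open http://192.168.166.229:3000 or https://192.168.166.229:3001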

    Read the article

  • Static classes and/or singletons -- How many does it take to become a code smell?

    - by Earlz
    In my projects I use quite a lot of static classes. These are usually classes that naturally seem to fit into a single-instance type of thing. Many times I use static classes, and recently I've started using some singletons. How many of these does it take to become a code smell? For instance, my recent project, which has a lot of static classes, is an authentication library for ASP.NET. I use a static class for a helper class that fixes ASP.NET error codes, so it can be used like CustomErrorsFixer.Fix(Context); And my authentication class itself is a static class //in global.asax's begin_application Authentication.SomeState="blah"; Authentication.SomeOption=true; //etc //in global.asax's begin_request Authentication.Authenticate(); When are static or singleton classes bad to use? Am I doing it wrong, or am I just in a project that by definition has very little per-instance state associated with it? The only per-instance state I have is stored in HttpContext.Current.Items like so: /// <summary> /// The current user logged in for the HTTP request. If there is not a user logged in, this will be null. /// </summary> public static UserData CurrentUser{ get{ return HttpContext.Current.Items["fscauth_currentuser"] as UserData; //use HttpContext.Current as a little place to persist static data for this request } private set{ HttpContext.Current.Items["fscauth_currentuser"]=value; } }

    Read the article

  • DAO/Webservice Consumption in Web Application

    - by Gavin
    I am currently working on converting a "legacy" web-based (ColdFusion) application from a single data source (MSSQL database) to multi-tier OOP. In my current system there is a read/write database with all the usual stuff, plus additional "read-only" databases that are exported daily/hourly from an Enterprise Resource Planning (ERP) system by SSIS jobs, containing business product/item and manufacturing/SCM planning data. The reason I have the opportunity and need to convert to multi-tier OOP is that a newer, more modern ERP system is being implemented business-wide that will be a complete replacement. This newer ERP system offers several interfaces for third-party applications like mine, from direct SQL access to either a .NET web service or a SOAP-like web service. I have found several suitable frameworks I would be happy to use (ColdSpring, FW/1), but I am not sure what design patterns apply to my data access object/component and how to manage the connection/session tokens. With this background, my question has the following three parts: Firstly, I have concerns about moving from the relative safety of an SSIS job, which protects me from the downtime and speed of the ERP system, to directly connecting to one of the web services, which I note seem significantly slower than I expected (simple/small requests often take up to a whole second). Are there any design patterns I can investigate/use to cache/protect my data tier? Secondly, it is my understanding that data access objects (the components that connect directly to the web services and convert the results into the data types I can then work with in my Domain Objects) should be singletons (and will act as an Adapter/Facade) - am I correct? Thirdly, as part of the data access object I have to set up a connection by username/password (I could set up multiple users and/or connect multiple times with this), which responds with a session token that needs to be provided on every subsequent request. Do I do this once and share it across the whole application; do I set up a new "connection" for every user of my application and keep the token in their session scope (which might quickly hit licensing limits); do I set the "connection" up per page request; or is there a design pattern I am missing that can manage multiple "connections", where a request uses the first free "connection"? It is worth noting that if the ERP system dies I will need to reset/invalidate all the connections and start from scratch, and depending on which web service I use I might need to manually close the "connection/session".

    Read the article

  • ITIL Incident Classification - Fault vs SR vs Technical Incident

    - by ExceptionLimeCat
    I am new to ITIL and incident classifications, and I am trying to learn more about them and understand how they could integrate into our organization. I have found it difficult to find a clear definition of Fault vs. Service Request vs. Technical Incident. I am basing my definitions on this article: http://www.itsmsolutions.com/newsletters/DITYvol6iss27.htm As I understand it: Service Request - a service provided by IT as part of the regular administration of a system. Fault - an unexpected error in a system. Technical Incident - an interruption or potential interruption in IT service due to an expected incident caused by some IT policy.

    Read the article

  • 403 in Response to OPTIONS when updating working copy having full access

    - by user23419
    There is an SVN repository (single repository) http://example.net/svn The repository contains several projects (directories): http://example.net/svn/Project1 http://example.net/svn/Project2 The user has full access to the Project1 directory and has no access to either the root or Project2. Everything works fine for a while: the user checks out http://example.net/svn/Project1, commits and updates it successfully. But sometimes trying to update leads to the following error: Command: Update Error: Server sent unexpected return value (403 Forbidden) in response to OPTIONS Error: request for 'http://example.net/svn' Finished! Why does TortoiseSVN request something in the root? I have noticed that this happens after somebody else commits a copy or move operation. Checking out http://example.net/svn/Project1 again helps until the next time... The main question: how do I set up access rights for the user to avoid these errors? Note: it's not an option to grant the user any read or write access on the root directory, for security reasons.
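    For what it's worth, one commonly suggested compromise - assuming mod_authz_svn path-based rules - is a read-only entry at the repository root combined with an explicit deny on the other projects, so the OPTIONS request against /svn can succeed without exposing Project2. A sketch of the authz file (the user and project names are placeholders):

      [/]
      user = r

      [/Project1]
      user = rw

      [/Project2]
      user =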

    Read the article

  • Help with IPTables - Masquerading + Forwarding, 1-to-1?

    - by Artiom Chilaru
    I've got a clean Ubuntu Server 10.10 with OpenSSH, OpenVPN and vsFTPd installed. The server is running as a VM on the Hyper-V server (hypervisor), has two network interfaces mapped to physical adapters (eth0 and eth1), and a virtual interface with a direct connection to the hypervisor (eth2). The VPN will create a tun0 interface when a client connects. What I want is for the remote user, connecting over VPN, to be able to connect to the hypervisor (all ports, ping, etc.). The initial idea was to make the VPN create a tap0 interface and bridge eth2 to tap0, but this didn't work, unfortunately, as it seems that the adapters don't want to go into promiscuous mode (partially confirmed by MS). At the same time, both the hypervisor and the remote client over VPN can successfully ping/connect to the Ubuntu server with no problems. So my plan right now is to try doing some 1-to-1 masquerading, if possible. Basically, I want every request sent from the VPN client to the Ubuntu server to be redirected to the hypervisor instead (with IP translation, of course), and every request from the hypervisor to the Ubuntu machine sent to the VPN client (IP translated too). Only one client will be connected to the VPN at a time, so I can limit it to a single IP at all times, if necessary. Is this the right way to go, and if so, how can this be achieved? It's almost like a special case of port forwarding, except every single port on tun0 is forwarded to a machine on eth2, and every port on the eth2 side forwards to an IP on tun0. I guess it could be done with iptables, but I'm rather new to Linux, so I can't do it myself... help? :(
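    In case it helps to sketch it out, a rough 1-to-1 NAT attempt with iptables might look like the lines below. The addresses are purely illustrative (10.8.0.1 as the server's VPN address, 192.168.2.1 as the hypervisor on eth2) and this is untested:

      # let the kernel forward packets between tun0 and eth2
      echo 1 > /proc/sys/net/ipv4/ip_forward

      # anything the VPN client sends to the server's VPN address goes to the hypervisor
      iptables -t nat -A PREROUTING -i tun0 -d 10.8.0.1 -j DNAT --to-destination 192.168.2.1

      # rewrite the source so the hypervisor's replies come back through this box
      iptables -t nat -A POSTROUTING -o eth2 -d 192.168.2.1 -j MASQUERADE

      # allow the forwarded traffic through the FORWARD chain
      iptables -A FORWARD -i tun0 -o eth2 -j ACCEPT
      iptables -A FORWARD -i eth2 -o tun0 -m state --state ESTABLISHED,RELATED -j ACCEPT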

    Read the article

  • JSP Include: one large bean or bean for each include

    - by shylynx
    I want to refactor a webapp that consists of very convoluted JSPs and servlets. Because we can't switch to a web framework easily, we have to keep JSPs and servlets, and now we are unsure how to include pages in one another and how to set up the useBean directives effectively. As a first step we want to decouple the code for the core actions and the bean creation into servlets. The servlets should forward to their corresponding pages, which should use the bean. The problem here is that each JSP consists of different sub- and sub-sub-JSPs that are included in one another. Here is a shortened extract (because reality is more complex): head header top navigation actionspanel main header actionspanel foot footer Moreover, each JSP (including the header and footer) uses dynamic data. For example, the title and actionspanel can change on each page reload, or have links and labels that depend on the processing by the preceding servlet. I know that JSP include directives should only be used for static content and should be avoided for dynamic content. But here we have very large pages that consist of many parts. Now the core questions: Should I use one big bean for each page, so that each bean also holds the data for the header and footer besides its core data, and each subsequently included JSP uses the same bean directive? For example: DirectoryJSP <- DirectoryBean CompareJSP <- CompareBean Or should I use one bean for each JSP, so that each bean only holds the data for one JSP and its own purpose? For example: DirectoryJSP <- DirectoryBean HeaderJSP <- HeaderBean FooterJSP <- FooterBean CompareJSP <- CompareBean HeaderJSP <- HeaderBean FooterJSP <- FooterBean In the second case: should the subordinate beans be members of the corresponding parent bean, so that only the parent bean is attached as an attribute to the request? Or should each bean be attached to the request?

    Read the article

  • Need help configuring my Tomcat server without any WAR files

    - by gablin
    I just reinstalled my entire server, and now I can't seem to get my JSP-based website to work on Tomcat anymore. I use the same server.xml file, which worked perfectly before the reinstallation, but no longer. Here's the content of the server.xml file which worked before: <!--APR library loader. Documentation at /docs/apr.html --> <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" /> <!--Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html --> <Listener className="org.apache.catalina.core.JasperListener" /> <!-- JMX Support for the Tomcat server. Documentation at /docs/non-existent.html --> <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" /> <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" /> <!-- Global JNDI resources Documentation at /docs/jndi-resources-howto.html --> <GlobalNamingResources> <!-- Editable user database that can also be used by UserDatabaseRealm to authenticate users --> <Resource name="UserDatabase" auth="Container" type="org.apache.catalina.UserDatabase" description="User database that can be updated and saved" factory="org.apache.catalina.users.MemoryUserDatabaseFactory" pathname="conf/tomcat-users.xml" /> </GlobalNamingResources> <!-- A "Service" is a collection of one or more "Connectors" that share a single "Container" Note: A "Service" is not itself a "Container", so you may not define subcomponents such as "Valves" at this level. Documentation at /docs/config/service.html --> <Service name="Catalina"> <!--The connectors can use a shared executor, you can define one or more named thread pools--> <!-- <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="150" minSpareThreads="4"/> --> <!-- A "Connector" represents an endpoint by which requests are received and responses are returned. Documentation at : Java HTTP Connector: /docs/config/http.html (blocking & non-blocking) Java AJP Connector: /docs/config/ajp.html APR (HTTP/AJP) Connector: /docs/apr.html Define a non-SSL HTTP/1.1 Connector on port 8080 --> <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> <!-- A "Connector" using the shared thread pool--> <!-- <Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- Define a SSL HTTP/1.1 Connector on port 8443 This connector uses the JSSE configuration, when using APR, the connector should be using the OpenSSL style configuration described in the APR documentation --> <!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" /> --> <!-- Define an AJP 1.3 Connector on port 8009 --> <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" /> <!-- An Engine represents the entry point (within Catalina) that processes every request. The Engine implementation for Tomcat stand alone analyzes the HTTP headers included with the request, and passes them on to the appropriate Host (virtual host). 
Documentation at /docs/config/engine.html --> <!-- You should set jvmRoute to support load-balancing via AJP ie : <Engine name="Standalone" defaultHost="localhost" jvmRoute="jvm1"> --> <Engine name="Catalina" defaultHost="localhost"> <!--For clustering, please take a look at documentation at: /docs/cluster-howto.html (simple how to) /docs/config/cluster.html (reference documentation) --> <!-- <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> --> <!-- The request dumper valve dumps useful debugging information about the request and response data received and sent by Tomcat. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.RequestDumperValve"/> --> <!-- This Realm uses the UserDatabase configured in the global JNDI resources under the key "UserDatabase". Any edits that are performed against this UserDatabase are immediately available for use by the Realm. --> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> <!-- Define the default virtual host Note: XML Schema validation will not work with Xerces 2.2. --> <!-- <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false"> --> <!-- SingleSignOn valve, share authentication between web applications Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> --> <!-- Access log processes all example. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="common" resolveHosts="false"/> --> <!-- </Host> --> <Host name="www.rebootradio.nu"> <Alias>rebootradio.nu</Alias> <Context path="" docBase="D:/services/http/rebootradio.nu" debug="1" reloadable="true"/> </Host> </Engine> </Service> </Server> The JSP site doesn't use any WAR files or anything like that; there's just a default.jsp in the specified folder D:/services/http/rebootradio.nu which loads the site. As I said, this configuration worked before, but now with the latest verion of XAMPP and Tomcat it doesn't work anymore. All I get is a 404 message saying The requested resource () is not available.

    Read the article

  • Did I lose my RAID again?

    - by BarsMonster
    Hi! A little history: two years ago I was really excited to find out that mdadm is so powerful it can even reshape arrays, so you can start with a smaller array and then grow it as you need. I bought 3x1TB drives and made a RAID-5. It was fine for a year. Then I bought two more drives and tried to reshape to a RAID-6 across 5 drives, and due to some mess with superblock versions I lost all the content. I had to rebuild it from scratch, and 2TB of data were gone. Yesterday I bought two more drives, and this time I had everything: a properly built array, a UPS. I disabled the write-intent bitmap, added the two new drives as spares and ran a command to grow the array to 7 disks. It started working, but the speed was ridiculously slow, ~100KB/sec. After processing the first 37MB at that amazing speed, one of the old HDDs failed. I properly shut down the PC and disconnected the failed drive. After bootup it appeared that it had recreated the intent bitmap, as it was still in the mdadm config, so I removed it from the config and rebooted again. Now all I see is that all the mdadm processes deadlock and don't do anything. PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1937 root 20 0 12992 608 444 D 0 0.1 0:00.00 mdadm 2283 root 20 0 12992 852 704 D 0 0.1 0:00.01 mdadm 2287 root 20 0 0 0 0 D 0 0.0 0:00.01 md0_reshape 2288 root 18 -2 12992 820 676 D 0 0.1 0:00.01 mdadm And all I see in mdstat is: $ cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid6 sdb1[1] sdg1[4] sdf1[7] sde1[6] sdd1[0] sdc1[5] 2929683456 blocks super 1.2 level 6, 1024k chunk, algorithm 2 [7/6] [UU_UUUU] [>....................] reshape = 0.0% (37888/976561152) finish=567604147.2min speed=0K/sec I've already tried mdadm 2.6.7, 3.1.4 and 3.2 - nothing helps. Did I lose my data again? Any suggestions for how I can make it work? The OS is Ubuntu Server 10.04.2... PS. Needless to say the data is inaccessible - I cannot mount /dev/md0 to save the most valuable data. You can see my disappointment - the very specific thing I was excited about has failed twice, taking 5TB of my data with it.
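    Before changing anything else, this is the state I would capture first - the commands below are read-only mdadm/sysfs queries and make no changes to the array:

      # detailed array state, including reshape position and failed members
      sudo mdadm --detail /dev/md0

      # what md thinks it is doing right now, and how far it is allowed to go
      cat /sys/block/md0/md/sync_action
      cat /sys/block/md0/md/sync_max

      # per-device view of the superblocks
      sudo mdadm --examine /dev/sd[b-g]1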

    Read the article

  • How should an API use http basic authentication

    - by user1626384
    When an API requires that a client authenticate to it, I've seen two different scenarios used, and I am wondering which one I should use for my situation. Example 1: an API is offered by a company to allow third parties to authenticate with a token and secret using HTTP Basic. Example 2: an API accepts a username and password via HTTP Basic to authenticate an end user; generally they get a token back for future requests. My setup: I will have a JSON API that I use as the backend for a mobile and web app. It seems like good practice for both the mobile and web app to send along a token and secret so only these two apps can access the API, blocking any other third party. But the mobile and web app allow users to log in and submit posts, view their data, etc. So I would want them to log in via HTTP Basic as well on each request. Do I somehow use a combination of both these methods, or only send the end user credentials (username and token) on each request? If I only send the end user credentials, do I store them in a cookie on the client?
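    To make the two scenarios concrete, here is roughly what each request would look like with curl; the host, paths and credential names are made up purely for illustration:

      # Example 1: the app itself authenticates with a token/secret pair
      curl -u "app_token:app_secret" https://api.example.com/v1/posts

      # Example 2: the end user's credentials are exchanged for a token once...
      curl -u "alice:her_password" -X POST https://api.example.com/v1/sessions

      # ...and the returned token is sent on subsequent requests
      curl -H "Authorization: Bearer USER_TOKEN" https://api.example.com/v1/posts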

    Read the article

  • "Fails to get size of gamma" error when trying to set resolution

    - by Max Payne
    On 11.10 my maximum allowed resolution is 1024x768, while my monitor supports 1280x800 on Windows. I've seen a method to solve this via xrandr, but I always get a message saying it fails to get the size of gamma. xrandr: Failed to get size of gamma for output default Screen 0: minimum 640 x 480, current 1024 x 768, maximum 1024 x 768 default connected 1024x768+0+0 0mm x 0mm 1024x768 61.0* 800x600 61.0 640x480 60.0 Is there any other way to add the 1280x800 resolution to my laptop, or any workaround for this? Thanks in advance.
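    The gamma message itself is usually harmless; the missing mode can often be added by hand. A sketch using the output name "default" from the listing above, with a modeline generated by cvt (the numbers below are what cvt 1280 800 60 typically produces - regenerate them on your own machine):

      # generate a CVT modeline for 1280x800 @ 60 Hz
      cvt 1280 800 60

      # register the mode and attach it to the output reported by xrandr
      xrandr --newmode "1280x800_60.00" 83.50 1280 1352 1480 1680 800 803 809 831 -hsync +vsync
      xrandr --addmode default 1280x800_60.00
      xrandr --output default --mode 1280x800_60.00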

    Read the article

  • ACPI _OSC control

    - by nmc
    I've noticed an error in the dmesg output showing that Ubuntu cannot enable ASPM: [ 0.192722] pci0000:00: ACPI _OSC support notification failed, disabling PCIe ASPM [ 0.192728] pci0000:00: Unable to request _OSC control (_OSC support mask: 0x08) While I know that this is not a bug, I'd like to ask if there is a way to correct it and enable ASPM, the reason being that it should increase battery life (powertop shows that PCIe is always 100% on). Edit: I am running on an ASUS Eee PC 1015PX with the latest BIOS version (1401).
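    If the firmware simply refuses to hand over _OSC control, ASPM can be forced from the kernel command line - with the caveat that forcing it on hardware the BIOS did not validate can cause hangs, so treat this strictly as a try-at-your-own-risk sketch (and edit /etc/default/grub by hand if your GRUB_CMDLINE line differs from the default shown):

      # add pcie_aspm=force to the kernel command line, then rebuild the GRUB config
      sudo sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"/GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pcie_aspm=force"/' /etc/default/grub
      sudo update-grub

      # after a reboot, check whether ASPM is now enabled
      dmesg | grep -i aspm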

    Read the article

  • Ubuntu 11.04 Broadcom BCM4312 Not Working

    - by ptran221
    I have an HP Mini 210-1010NR, just installed Ubuntu 11.04, and I can't get my wireless to work. I have checked multiple Q&As throughout this FAQ and tried them all. When I hover over the wireless indicator at the top it says "Wireless Networks device not ready (firmware missing)." Okay, now here is my ~$ lspci -vvnn | grep 14e4 02:00.0 Network controller [0280]: Broadcom Corporation BCM4312 802.11b/g LP-PHY [14e4:4315] (rev 01) Also, when I try to open Additional Drivers it says "Downloading package indexes failed, please check your network status."
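    The "firmware missing" message on a BCM4312 LP-PHY usually means the b43 firmware package never got installed. With a temporary wired connection, something along these lines is the usual route (the package name is what I recall for this Ubuntu era, so treat it as an assumption):

      sudo apt-get update
      sudo apt-get install firmware-b43-lpphy-installer

      # reload the driver so it picks up the freshly installed firmware
      sudo modprobe -r b43
      sudo modprobe b43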

    Read the article

  • Configuring SQL Server Audit Logging with Powershell

    - by Jonathan Kehayias
    One of the standard configuration options that I set on all SQL Server installs is to log Failed Login Attempts to the SQL Server Error Log.  I recently inherited an environment where this option wasn't standardized across all of the servers and needed to configure it for multiple servers in a scripted manner.  There are a couple of ways to handle this kind of task.  First, I could log on to every server in SSMS, open the Server Properties, and set the option on the Security sheet for...(read more)

    Read the article

  • Error Copying Source File in Audio Spectrum Visualizer [closed]

    - by David Dimalanta
    I'm testing this code using LibGDX, Java, and Eclipse to test the music player that detects the frequency. I saw this one on this website plus the link on GitHub: http://gtomee.com/2012/07/28/audio-spectrum-visualizer-with-libgdx/ It works when running on desktop project folder but not on Android project folder and the result is this: 10-10 13:57:45.320: E/AndroidRuntime(9421): FATAL EXCEPTION: GLThread 16845 10-10 13:57:45.320: E/AndroidRuntime(9421): com.badlogic.gdx.utils.GdxRuntimeException: Error copying source file: soundtrack 1 bioman.mp3 (Internal) 10-10 13:57:45.320: E/AndroidRuntime(9421): To destination: tmp/audio-spectrum.mp3 (External) 10-10 13:57:45.320: E/AndroidRuntime(9421): at com.badlogic.gdx.files.FileHandle.copyFile(FileHandle.java:625) 10-10 13:57:45.320: E/AndroidRuntime(9421): at com.badlogic.gdx.files.FileHandle.copyTo(FileHandle.java:534) 10-10 13:57:45.320: E/AndroidRuntime(9421): at com.bodapps.rhythm.Drop.create(Drop.java:393) 10-10 13:57:45.320: E/AndroidRuntime(9421): at com.badlogic.gdx.backends.android.AndroidGraphics.onSurfaceChanged(AndroidGraphics.java:292) 10-10 13:57:45.320: E/AndroidRuntime(9421): at android.opengl.GLSurfaceView$GLThread.guardedRun(GLSurfaceView.java:1505) 10-10 13:57:45.320: E/AndroidRuntime(9421): at android.opengl.GLSurfaceView$GLThread.run(GLSurfaceView.java:1240) 10-10 13:57:45.320: E/AndroidRuntime(9421): Caused by: com.badlogic.gdx.utils.GdxRuntimeException: Error stream writing to file: tmp/audio-spectrum.mp3 (External) 10-10 13:57:45.320: E/AndroidRuntime(9421): at com.badlogic.gdx.files.FileHandle.write(FileHandle.java:313) 10-10 13:57:45.320: E/AndroidRuntime(9421): at com.badlogic.gdx.files.FileHandle.copyFile(FileHandle.java:623) 10-10 13:57:45.320: E/AndroidRuntime(9421): ... 5 more 10-10 13:57:45.320: E/AndroidRuntime(9421): Caused by: com.badlogic.gdx.utils.GdxRuntimeException: Error writing file: tmp/audio-spectrum.mp3 (External) 10-10 13:57:45.320: E/AndroidRuntime(9421): at com.badlogic.gdx.files.FileHandle.write(FileHandle.java:293) 10-10 13:57:45.320: E/AndroidRuntime(9421): at com.badlogic.gdx.files.FileHandle.write(FileHandle.java:305) 10-10 13:57:45.320: E/AndroidRuntime(9421): ... 6 more 10-10 13:57:45.320: E/AndroidRuntime(9421): Caused by: java.io.FileNotFoundException: /storage/sdcard0/tmp/audio-spectrum.mp3: open failed: EACCES (Permission denied) 10-10 13:57:45.320: E/AndroidRuntime(9421): at libcore.io.IoBridge.open(IoBridge.java:416) 10-10 13:57:45.320: E/AndroidRuntime(9421): at java.io.FileOutputStream.<init>(FileOutputStream.java:88) 10-10 13:57:45.320: E/AndroidRuntime(9421): at com.badlogic.gdx.files.FileHandle.write(FileHandle.java:289) 10-10 13:57:45.320: E/AndroidRuntime(9421): ... 7 more 10-10 13:57:45.320: E/AndroidRuntime(9421): Caused by: libcore.io.ErrnoException: open failed: EACCES (Permission denied) 10-10 13:57:45.320: E/AndroidRuntime(9421): at libcore.io.Posix.open(Native Method) 10-10 13:57:45.320: E/AndroidRuntime(9421): at libcore.io.BlockGuardOs.open(BlockGuardOs.java:110) 10-10 13:57:45.320: E/AndroidRuntime(9421): at libcore.io.IoBridge.open(IoBridge.java:400) 10-10 13:57:45.320: E/AndroidRuntime(9421): ... 9 more I'm not sure if I come this to the right place for help and suggestions.
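    The EACCES at the bottom of that trace suggests the Android app is simply not allowed to write to external storage. If that is the cause, the missing piece would be a permission entry in the Android project's AndroidManifest.xml, roughly:

      <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />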

    Read the article

  • ERROR running Bumblebee on 13.10

    - by paul
    I'm trying to get Bumblebee working again after an upgrade to Saucy. Running software with Optirun gives the following output: optirun nvidia-settings [ 45.697126] [ERROR]Cannot access secondary GPU - error: [XORG] (EE) Failed to load /usr/lib/xorg/modules/libglamoregl.so: /usr/lib/xorg/modules/libglamoregl.so: undefined symbol: _glapi_tls_Context [ 45.697179] [ERROR]Aborting because fallback start is disabled. Does anyone know how to fix this? Thanks! :)

    Read the article

  • magento on Zend Server (Win7) installation error

    - by czerasz
    I am trying to install Magento for the first time. I've created a database with the name "project". In my C:\Zend\Apache2\conf\httpd.conf I added at the end: <Directory "C:\Zend\Apche2\htdocs\project"> Options Indexes FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory> In my ZendServer/Server Setup/Extensions: PDO_MySQL, simplexml, mcrypt, hash, GD, DOM, iconv, curl, SOAP are on. In C:\Zend\ZendServer\etc\php.ini I set: safe_mode = Off ;<-- was set to off ... memory_limit = 512M; Maximum amount of memory a script may consume (128MB) After the "Configuration" step of the Magento installation (with Use Web Server (Apache) Rewrites enabled) I get: Internal Server Error. My database is full of tables (that should be OK). My Zend Server shows: 27-Oct 06:55 6 Severe Slow Request Execution (Absolute) http://localhost/project/index.php/install/wizard/installDb/ Critical Open 27-Oct 06:55 4 Fatal PHP Error C:\Zend\Apache2\htdocs\project\lib\Varien\Db\Adapter\Pdo\Mysql.php Critical Open 27-Oct 06:55 5 Slow Function Execution curl_exec Warning Open 27-Oct 06:55 5 Slow Request Execution (Absolute) http://localhost/project/index.php/install/wizard/configPost/ What can be wrong?
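    Since the installer dies right after the rewrite-enabled step, two quick things worth checking (paths assume the default Zend Server layout on Windows) are whether mod_rewrite is actually loaded and what the Apache error log says about the 500:

      rem confirm the rewrite_module LoadModule line exists and is not commented out
      findstr /i "rewrite_module" C:\Zend\Apache2\conf\httpd.conf

      rem read the Apache error log for the real cause of the Internal Server Error
      type C:\Zend\Apache2\logs\error.log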

    Read the article

  • 404 Not Found for a PL script that exists!

    - by Abs
    Hello all, I make a GET request to a CGI script and I get a 404 error. However, I am 100% sure that the script is present and it has permissions: -rwxr-xr-x 1 apache apache 6520 Sep 7 03:01 uu_ini_status_audios.pl The request URL is: http://mysite.com/cgi-bin/uu_ini_status_audios.pl?tmp_sid=893facacc5dc392ad0f4c91e6a9e8d40&rnd_id=0.12266222834382812 The error I get: The requested URL /cgi-bin/uu_ini_status_audios.pl was not found on this server. This used to work for me before, but I think it stopped working after I restarted Apache, so maybe it's caused by a configuration change I made? I checked the error logs for Apache and PHP and found nothing useful to help with my problem! I appreciate any help on this!
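    A couple of quick checks I would run on the server, on the assumption that the cgi-bin alias or its handler configuration was lost in the restart (paths are the usual Red Hat-style defaults, adjust as needed):

      # is there still a ScriptAlias mapping /cgi-bin/ to the directory holding the script?
      grep -Rni "ScriptAlias" /etc/httpd/conf /etc/httpd/conf.d

      # what does Apache itself log for the failing request?
      tail -n 50 /var/log/httpd/error_log

      # if SELinux is enforcing, a wrong file context can also surface as a 404
      ls -Z /var/www/cgi-bin/uu_ini_status_audios.pl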

    Read the article

  • Apache2 syntax error, can't access 000-default

    - by enrique2334
    I have been using Apache2 and Webmin with my Raspberry Pi. After a restart and reinstallations, Apache won't start. > sudo /etc/init.d/apache2 restart apache2: Syntax error on line 268 of /etc/apache2/apache2.conf: Could not open configuration file /etc/apache2/sites-enabled/000-default: No such file or directory Action 'configtest' failed. The Apache error log may have more information. failed! The file 000-default is there, but it cannot be opened and its permissions are root-root. My apache2.conf file looks like this (bottom half): # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. # ErrorLog ${APACHE_LOG_DIR}/error.log # # LogLevel: Control the number of messages logged to the error_log. # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. # LogLevel debug # Include module configuration: Include mods-enabled/*.load Include mods-enabled/*.conf # Include list of ports to listen on and which to use for name based vhosts Include ports.conf # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # If you are behind a reverse proxy, you might want to change %h into %{X-Forwarded-For}i # LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %O" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent # Include of directories ignores editors' and dpkg's backup files, # see the comments above for details. # Include generic snippets of statements Include conf.d/ # Include the virtual host configurations: Include sites-enabled/ <VirtualHost *:80> DocumentRoot /var/www <Directory /var/www> allow from all Options +Indexes </Directory> ServerName IMASERVER </VirtualHost> Does anyone know what the cause of this is?
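    If the file really is present in sites-enabled, it may be a dangling or wrongly-owned symlink; removing it and letting a2ensite recreate it is the least error-prone fix. A sketch (the site name "default" assumes the stock Debian/Raspbian layout of that era):

      # check whether sites-enabled/000-default is a valid symlink into sites-available
      ls -l /etc/apache2/sites-enabled/

      # remove the broken entry and let Debian's tooling recreate it
      sudo rm /etc/apache2/sites-enabled/000-default
      sudo a2ensite default
      sudo apache2ctl configtest
      sudo service apache2 restart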

    Read the article
