Search Results

Search found 16467 results on 659 pages for 'request filtering'.

Page 407 of 659

  • JSR Updates - Multiple JSRs migrate to latest JCP version

    - by Heather VanCura
    As part of the JCP.Next reform effort, many JSRs have migrated to the latest version of the JCP program in the last month. The Spec Leads and Expert Groups of these JSRs are contributing to the strides the JCP has been making toward greater community transparency, participation and agility in JSR development. Any other JSR Spec Leads interested in migrating to the latest JCP version (JCP 2.9 as of 13 November, which incorporates the merged Executive Committee, or EC) should see the Spec Lead Guide for instructions. JCP 2.8 JSRs are effectively already operating under JCP 2.9, since there are no longer two ECs; a merged EC is the only difference between the two versions. To make the migration official, just inform your Expert Group on a public channel and email your request to admin at jcp.org. The JSRs that migrated recently:
      JSR 310, Date and Time API, led by Stephen Colebourne and Michael Nascimento and Oracle (Roger Riggs)
      JSR 349, Bean Validation 1.1, led by RedHat (Emmanuel Bernard)
      JSR 350, Java State Management, led by Oracle (Mitch Upton)
      JSR 339, JAX-RS 2.0: The Java API for RESTful Web Services, led by Oracle (Santiago Pericas-Geertsen and Marek Potociar)
      JSR 347, Data Grids for the Java Platform, led by RedHat (Manik Surtani)

    Read the article

  • Dummy HTTP server for debugging

    - by Andrea
    This is more or less the inverse of my previous question. I need to debug some HTTP requests that I am making. Since these requests arise from the use of some external libraries, sometimes I am not sure what data I am actually sending. Is there some dummy server (for Linux) that accepts HTTP requests and just prints them somewhere so that I can inspect them? I would like to be able to see the full request in plain text, like:

        POST /foo HTTP/1.1
        Host: www.example.com
        Accept: text/xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
        Accept-Language: en-gb,en;q=0.5
        Content-Type: text/plain
        Content-Length: 11

        Hello world
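
    A minimal sketch of such a dummy listener, assuming Python 3 is acceptable (the bind address, port and buffer size are illustrative, and a single recv() is only good enough for small test requests):

        import socket

        HOST, PORT = "0.0.0.0", 8080  # illustrative bind address

        # Accept one connection at a time, dump the raw request to stdout and
        # answer with an empty 200 so client libraries do not hang or retry.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind((HOST, PORT))
            srv.listen(1)
            while True:
                conn, addr = srv.accept()
                with conn:
                    data = conn.recv(65536)
                    print(f"--- request from {addr[0]}:{addr[1]} ---")
                    print(data.decode("utf-8", errors="replace"))
                    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")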

    Read the article

  • Nodejs for processing js and Nginx for handling everything else

    - by Kevin Parker
    I have node.js running on port 8000 and nginx on port 80 on the same server. I want nginx to handle all requests (images, CSS, etc.) and forward only the .js requests to the node.js server on port 8000. Is it possible to achieve this? I have configured nginx as a reverse proxy, but it is forwarding every request to node.js; I want nginx to process everything except .js. Current nginx/sites-enabled/default:

        upstream nodejs {
            server localhost:8000;  # nodejs
        }

        location / {
            proxy_pass http://192.168.2.21:8000;
            proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
            proxy_redirect off;
            proxy_buffering off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
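
    One possible shape for that split, sketched against the snippet above (the document root and the surrounding server block are assumptions, not taken from the question; only requests ending in .js are proxied to the upstream, everything else is served from disk):

        upstream nodejs {
            server localhost:8000;
        }

        server {
            listen 80;
            root /var/www/example;   # assumed location of the static files

            # Static assets (images, CSS, HTML, ...) are served by nginx itself.
            location / {
                try_files $uri $uri/ =404;
            }

            # Only JavaScript requests are handed to the node.js backend.
            location ~* \.js$ {
                proxy_pass http://nodejs;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }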

    Read the article

  • EBS Customer Relationship Manager (CRM) Product Family Webcasts

    - by user793044
    Oracle's Advisor Webcasts are live presentations given by subject matter experts who deliver knowledge and information about services, products, technologies, best practices and more. Delivered through WebEx, the Oracle Advisor Webcast Program brings interactive expertise straight to your desktop, at no cost. Each session is usually followed by a live Q&A where you can have your questions answered. If you miss any of the live webcasts, you can replay the recording or download the PDF of the presentation. Doc Id 740966.1 gives you access to all the scheduled webcasts as well as the archived recordings and presentations. Just select the product family you are interested in to access the latest webcasts in that area. Below is a listing of the currently scheduled and archived webcasts for the EBS CRM and Industries product family (webcast topic, link, and date or status):
      Upcoming: Oracle E-Business Suite - Service: Oracle Service Charges - Introduction/Overview (Register, Dec 6, 2012)
      EBS CRM - Service: R12: How to debug Email Center Auto Service Request Creation Failures (Recording | .pdf, Archived)
      XCALC: Failed Calculations when Using OIC (Recording | .pdf, Archived)
      XPOP: Failed Population When Using Oracle Incentive, August 30, 2012 (Recording | .pdf, Archived)
      XROLL: Failed Roll Up When Using Oracle Incentive Compensation, August 16, 2012 (Recording | .pdf, Archived)
      Common Problems Associated with Product Catalog in Sales (Recording | .pdf, Archived)
      Oracle Incentive Compensation - Troubleshooting Payment Issues (Recording | .pdf, Archived)
      R12 Renewing Service Contracts - Overview (Recording | .pdf, Archived)
      11i and R12 Oracle CRM Service Basics and Troubleshooting - an Overview (Recording | .pdf, Archived)
      11i and R12 Transaction Error Troubleshooting Overview (Recording | .pdf, Archived)

    Read the article

  • Belkin router issue

    - by walr1
    Hi, my cousin and I bought a wireless Belkin router for testing purposes. Please keep in mind that for all of our tests there is no ethernet cable plugged in, just the router's power cord. We have been trying to "flood" it with PING requests on its default address 192.168.2.1, but it isn't reacting at all; it isn't even logging the excessive requests. I've disabled the firewall, disabled PING request blocking, etc. Any idea why this thing isn't being affected? We have sent 4 million packets and nothing has happened. Quite odd! Thanks.

    Read the article

  • Apache 2.4 and PHP 5.4 getting connection reset errors in the browser

    - by zuallauz
    Over the weekend I upgraded my development web server to Apache 2.4 and PHP 5.4. My web application, which previously worked great on Apache 2.2 and PHP 5.3, now intermittently shows messages saying "the connection was reset" in Firefox (see screenshot). I am connecting to the Linux machine over the local LAN. I'm assuming it might be something to do with the new version of Apache or PHP, or the new LAMP stack which I downloaded from BitNami. It seems to happen every 5-10 requests, and sending a POST request from a page seems more likely to trigger it. Is it timing out the script or something? These are just basic dynamic pages I'm loading and they worked perfectly on Apache 2.2 and PHP 5.3. Here are my httpd.conf and php.ini if that has any clues. Any ideas? Any help much appreciated.

    Read the article

  • Difference between two kinds of Bing URL Referers

    - by joshuahedlund
    Most of the referral URLs that I get from Bing have the following syntax: http://www.bing.com/search?q=keywords+keywords&[some other variables] However I just noticed that maybe 10-20% of them are coming in like this: http://www.bing.com/url?source=search&[some other variables]&url=http%3A%2F%2Fwww.example.com/user-landing-page-on-my-site&yrktarget=_top&q=keywords+keywords&[some other variables] The first syntax gives me the keywords the user typed in, but the second gives me both the keywords the user typed in and their landing page on my site. I was originally unaware of this second kind altogether because I have a customized referral report that filters out URLs containing my domain. But now that I have noticed them, I want to know why they occur and whether I can get more of them, because the second syntax contains more valuable information. If I go to one of the first URLs, it gives me a typical Bing query page. The second URLs seem to just redirect me to the Bing home page. I'm not sure if it has to do with the kind of search being performed (I also get a few http://www.bing.com/shopping/search?q= referers) or some other metric. Does anyone know what causes some referral URLs from Bing to have the /search?q syntax and others to have the /url?source syntax? P.S. I have verified that I am getting both kinds of URLs from non-advertising clicks. P.P.S. I am not talking about data in Google Analytics or similar software but the raw $_SERVER['HTTP_REFERER'] value coming from the client's original request.
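
    For what it's worth, both formats can be parsed the same way; a small sketch (Python here purely for illustration, since the question is about the raw PHP referrer value, and the example URLs below are made up):

        from urllib.parse import urlparse, parse_qs

        def bing_referrer_info(referrer):
            """Pull the search keywords (and the landing page, when present) out of a
            Bing referrer in either the /search?q= or the /url?...&q=...&url= form."""
            params = parse_qs(urlparse(referrer).query)
            keywords = params.get("q", [""])[0]        # parse_qs already URL-decodes
            landing = params.get("url", [None])[0]     # only present in the /url form
            return keywords, landing

        # Illustrative values, not real traffic:
        print(bing_referrer_info("http://www.bing.com/search?q=blue+widgets"))
        print(bing_referrer_info("http://www.bing.com/url?source=search"
                                 "&url=http%3A%2F%2Fwww.example.com%2Fpage&q=blue+widgets"))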

    Read the article

  • squid running out of sockets

    - by drscroogemcduck
    I have a setup where Squid sits in front of a Java server and acts as a reverse proxy. Recently I load tested the site: if I fire 100 threads at it, each making a request using JMeter, I start getting errors in my load test tool like 'no route to host', even though the load test tool and the server are on the same machine. If I run the following command, where port 82 is the port my Squid server is running on:

        netstat -ann | grep 82 | wc -l

    I get 22000 or so, and most of them are in TIME_WAIT. I'm thinking that maybe the huge number of sockets in the TIME_WAIT state is starving the box of resources.
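
    If TIME_WAIT sockets really are the limiting factor, one usual first step is persistent connections between Squid and the backend (server_persistent_connections), which reduces socket churn; if the kernel side still runs short, sysctl tuning can widen the headroom. A sketch of the sysctl side only (values are illustrative, not a recommendation):

        # /etc/sysctl.conf
        net.ipv4.ip_local_port_range = 1024 65535   # more ephemeral ports available
        net.ipv4.tcp_fin_timeout = 30               # shorter FIN-WAIT-2 timeout
        net.ipv4.tcp_tw_reuse = 1                   # reuse TIME_WAIT sockets for new outgoing connections when safe

    Apply with sysctl -p and re-run the load test while watching netstat again.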

    Read the article

  • Hack a Linksys Router into an Ambient Data Monitor

    - by Jason Fitzpatrick
    If you have a data source (like a weather report, bus schedule, or other changing data set) you can pull it and display it with an ambient data monitor; this fun build combines a hacked Linksys router and a modified toy bus to display transit arrival times. John Graham-Cumming wanted to keep an eye on the current bus arrival times without constantly visiting the web site to check them. His workaround turns a hacked Linksys router, a display, a modified London city bus (you could hack apart a more project-specific enclosure, of course), and a simple bit of code that polls the bus schedule's API into a cool ambient data monitor that displays the arrival time, in minutes, of the next two buses that will pass by his stop. The whole thing could easily be adapted to another API to display anything from stock prices to weather temps. Hit up the link below for more information on the project. Ambient Bus Arrival Monitor Hacked from Linksys Router [via Make]

    Read the article

  • Connection Reset by Peer error with Apache and JBoss 7.1.1

    - by vikingz
    We are seeing errors in some of our QA testing scripts that intermittently throw Connection Reset By Peer errors. The test scripts submit requests via an F5, which forwards requests to Apache (2.2.21) with a mod_jk load balancer, with the following settings for each worker in worker.properties:

        worker.worker1.type=ajp13
        worker.worker1.port=8109
        worker.worker1.lbfactor=1
        worker.worker1.host=skunkhost1.com
        worker.worker1.connection_pool_timeout=30

    And here is what is in the JBoss 7.1.1 domain.xml for the AJP thread pool:

        <unbounded-queue-thread-pool name="SKUNKY.APP.AJP">
            <max-threads count="300"/>
            <keepalive-time time="3" unit="minutes"/>
        </unbounded-queue-thread-pool>

    Here is httpd.conf:

        Timeout 300
        KeepAlive On
        KeepAliveTimeout 15
        MaxKeepAliveRequests 100
        TraceEnable Off

    My question is: is it possible that Apache times out and closes the connection while JBoss is still ready and working on the request? What might be causing the Connection Reset By Peer error? What am I missing here? Any help is majorly appreciated!! Sincerely, KK
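
    One direction worth checking is keeping the mod_jk connection pool timeout and the JBoss-side AJP keep-alive aligned, and letting mod_jk probe pooled connections before reuse, so that a half-closed socket is detected inside mod_jk instead of surfacing to the client as a reset. A sketch of the worker side only (values are illustrative, and ping_mode needs mod_jk 1.2.27 or later):

        # worker.properties (sketch, not a drop-in fix)
        worker.worker1.type=ajp13
        worker.worker1.host=skunkhost1.com
        worker.worker1.port=8109
        worker.worker1.lbfactor=1
        # Drop idle pooled connections after 30s; the AJP connector on the
        # JBoss side should keep idle connections at least this long.
        worker.worker1.connection_pool_timeout=30
        # CPing/CPong probes before and after requests and on idle connections.
        worker.worker1.ping_mode=A
        worker.worker1.ping_timeout=10000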

    Read the article

  • How Service Component Architecture (SCA) Can Be Incorporated Into Existing Enterprise Systems

    After viewing Rob High’s presentation “The SOA Component Model” hosted on InfoQ.com, I can foresee how Service Component Architecture (SCA) can be incorporated into an existing enterprise. According to IBM’s DeveloperWorks website, SCA is a set of specifications which outline a model for constructing applications/systems using a Service-Oriented Architecture (SOA). In addition, SCA builds on open standards such as Web services. In the future, I can easily see how some large IT shops could potentially divide development teams or work groups into Component/Data Object Groups and Standard Development Groups. The Component/Data Object Group would only work on creating and maintaining components that are reused throughout the entire enterprise. The Standard Development Group would work on new and existing projects that incorporate the use of various components to accomplish various business tasks. In my opinion the incorporation of SCA into any IT department will initially slow down the number of new features developed, due to the time needed to create the new, loosely coupled components. However, once a company becomes more mature in its SCA process, the number of program features developed will greatly increase. I feel this is because the loosely coupled components needed to add new features will already be built and ready to incorporate into any new development feature request. References: BEA Systems, Cape Clear Software, IBM, Interface21, IONA Technologies PLC, Oracle, Primeton Technologies Ltd, Progress Software, Red Hat Inc., Rogue Wave Software, SAP AG, Siebel Systems, Software AG, Sun Microsystems, Sybase, TIBCO Software Inc. (2006). Service Component Architecture. Retrieved 11 27, 2011, from DeveloperWorks: http://www.ibm.com/developerworks/library/specification/ws-sca/ High, R. (2007). The SOA Component Model. Retrieved 11 26, 2011, from InfoQ: http://www.infoq.com/presentations/rob-high-sca-sdo-soa-programming-model

    Read the article

  • DotNetNuke + PayPal

    - by Nuri Halperin
    A DotNetNuke site I'm supporting has had a PayPal "buy now" button, and other variations with custom fields, for a while now. About 2 weeks ago (somewhere in March 2010) they all stopped working. The problem manifested such that once you clicked the "buy now" button, the PayPal site would throw a scary error page to the effect of:

        Internal Server Error
        The server encountered an internal error or misconfiguration and was unable to
        complete your request. Please contact the server administrator, [email protected]
        and inform them of the time the error occurred, and anything you might have done
        that may have caused the error. More information about this error may be
        available in the server error log.

    Once I verified no cheeky content editor changed the page, I went digging for answers. The main source of incompatibility with PayPal's simple HTML forms is that DNN includes a form on every page, and nested forms are not really supported. As blogged here and lamented here, the solution I came up with is simply to modify the form enctype to 'application/x-www-form-urlencoded', as illustrated below:

        <input type="image" border="0"
               src="https://www.paypal.com/en_US/i/btn/btn_buynowCC_LG.gif"
               name="submit"
               alt="PayPal - The safer, easier way to pay online!"
               onClick="this.form.action='https://www.paypal.com/cgi-bin/webscr'; this.form.enctype='application/x-www-form-urlencoded';this.form.submit();" />

    One would think that PayPal would want the masses submitting HTML in all manners of enctype, but I guess every company has its quirks. At least my favorite non-profit can now continue to accept payments. Sigh.

    Read the article

  • How to manage long running background threads and report progress with DDD

    - by Mr Happy
    Title says most of it. I have found surprisingly little information about this. I have a long-running operation of which the user wants to see the progress (as in, item x of y processed). I also need to be able to pause and stop the operation. (Stopping doesn't roll back the items already processed.) The thing is, it's not that each item takes a long time to get processed, it's that there are usually a lot of items. What I've read about so far is that it's somewhat of an anti-pattern to put something like a queue in the DB. I currently don't have any messaging system in place, and I've never worked with one either. Another thing I read somewhere is that progress reporting is something that belongs in the application layer, but it didn't go into the details. So having said all this, what I have in mind is the following:
      - A user request with a list of items enters the application layer.
      - The application layer gets some information from the domain needed to process the items.
      - The application layer passes the items and the information off to some domain service (should the implementation of this service belong in the infrastructure layer?).
      - This service spins up a worker thread with callbacks for both progress reporting and pausing/stopping it.
      - The worker thread processes each item in its own UoW. This means the domain information from earlier needs to be stored in some DTO.
      - Since nothing is really persisted, the service should be a singleton and thread safe.
      - Whenever the user requests a progress report or wants to pause/stop the operation, the application layer asks the service.
    Would this be a correct solution? Or am I at least on the right track with this? Especially the singleton and thread-safe part makes the whole thing feel icky.
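
    The pause/stop/progress mechanics themselves are not DDD-specific; a minimal sketch (Python here purely for brevity, with process_item standing in for the per-item unit of work, and all names being illustrative):

        import threading

        class BatchWorker:
            """Pausable, stoppable background worker that reports progress
            as "item x of y" through a callback."""

            def __init__(self, items, process_item, on_progress):
                self._items = items
                self._process_item = process_item
                self._on_progress = on_progress
                self._resume = threading.Event()
                self._resume.set()                 # start unpaused
                self._stop = threading.Event()
                self._thread = threading.Thread(target=self._run, daemon=True)

            def start(self):
                self._thread.start()

            def pause(self):
                self._resume.clear()

            def resume(self):
                self._resume.set()

            def stop(self):
                self._stop.set()
                self._resume.set()                 # wake a paused worker so it can exit

            def _run(self):
                total = len(self._items)
                for index, item in enumerate(self._items, start=1):
                    self._resume.wait()            # blocks here while paused
                    if self._stop.is_set():
                        break                      # already-processed items stay processed
                    self._process_item(item)       # one unit of work (UoW) per item
                    self._on_progress(index, total)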

    Read the article

  • redirect all youtube video requests to a specific one

    - by iTayb
    I'm on an IT team in my company and I would like to block YouTube for users. I don't want to just deny access to the whole YouTube domain, but only to replace the .flv/.mp4 request with one that I choose. That way, if someone tries to watch YouTube videos on the network, he'll get a video explaining why using our expensive bandwidth for pleasure is a no-no. I thought about using a packet manipulation program and just replacing the video ID with something that I want, but I didn't manage to do it right.

    Read the article

  • Cost effective way to provide static media content

    - by james
    I'd like to be able to deliver around 50MB of static content, either as about 30 individual files of up to 10MB each or grouped into 3 compressed files, around 5k to 20k times a day. Ideally I'd like to put some sort of very basic security around providing the data, to ensure that a request is from the expected source, but dropping the security for a big reduction in price is an option. Does anyone have any suggestions other than what I've found?
      - Google AppEngine is $0.12/GB and I believe has a file size limit of 10MB, so I'd have to break the data up a bit. A rough calculation suggests this would cost me about $30 to $120 a day.
      - Something like Usenet.nl, which seems to be just public static content delivery with no logic capabilities, at what I think works out to about $0.025/GB, which would cost me about $6 to $25 a day.
    Any idea if I'm going about these calculations right, and whether there might be a better option for static content at a decently high delivery volume? Again, some basic security would be great, but if cost is greatly reduced without it then I'm up for that.
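
    The rough numbers above do check out; a quick back-of-the-envelope calculation (Python, using only the figures quoted in the question):

        payload_gb = 50 / 1024                     # ~0.049 GB per full 50MB download
        for downloads_per_day in (5_000, 20_000):
            gb = payload_gb * downloads_per_day    # daily transfer volume
            print(f"{downloads_per_day} downloads/day: {gb:.0f} GB/day, "
                  f"${gb * 0.12:.0f}/day at $0.12/GB, ${gb * 0.025:.0f}/day at $0.025/GB")

    which prints roughly 244 GB/day ($29 or $6) at the low end and 977 GB/day ($117 or $24) at the high end, matching the $30-$120 and $6-$25 ranges in the question.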

    Read the article

  • Squid/Kerberos authentication with only Linux

    - by user28362
    Hi, I would like to know if it is possible to let a Windows XP machine authenticate to Squid (Linux) using Kerberos without the need for an Active Directory domain. I only want to create a Kerberos ticket on the client side, which should give the client access to Squid (using IE). I have only found tutorials about configuring A.D. with Squid, not an environment with only Linux servers. Thanks. Update: The Kerberos setup is correctly done; the proxy and client can get tickets. As for the browser (FF/IE), I get:

        ERROR Cache Access Denied
        While trying to retrieve the URL: http://www.google.com/
        The following error was encountered:
        * Cache Access Denied.
        Sorry, you are not currently allowed to request: http://www.google.com/
        from this cache until you have authenticated yourself.

    From the squid_kerb_auth helper, I get:

        squid_kerb_auth: Got 'YR ElRNTVMTUABBAABAB4IIogAAAAAAAAAAAAAAAAAAAAAFASgDAAAADw==' from squid (length: 59).
        squid_kerb_auth: parseNegTokenInit failed with rc=101
        squid_kerb_auth: received type 1 NTLM token

    This message is strange, as I didn't configure NTLM. It looks like the browser uses the wrong authentication method.
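
    The "received type 1 NTLM token" line usually means the browser answered the Negotiate challenge with NTLM wrapped in SPNEGO because it could not obtain a Kerberos service ticket for the proxy, which often happens when the proxy is addressed by a name (or IP) that does not match the HTTP/ service principal. A sketch of the Negotiate-only Squid side (helper path, principal and realm below are illustrative):

        # squid.conf: Kerberos/Negotiate only, no NTLM helper configured
        auth_param negotiate program /usr/lib/squid/squid_kerb_auth -s HTTP/proxy.example.com@EXAMPLE.COM
        auth_param negotiate children 10
        auth_param negotiate keep_alive on

        acl kerb_users proxy_auth REQUIRED
        http_access allow kerb_users
        http_access deny all

    The browser must reach the proxy by exactly that hostname (proxy.example.com here) so the client requests the matching HTTP/ ticket.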

    Read the article

  • What's the name of this pattern?

    - by Wes
    I see this a lot in frameworks. You have a master class which other classes register with. The master class then decides which of the registered classes to delegate the request to. An example based on the passed-in class may be something like this:

        public interface Processor {
            public boolean canHandle(Object objectToHandle);
            public void handle(Object objectToHandle);
        }

        public class EvenNumberProcessor implements Processor {
            public boolean canHandle(Object objectToHandle) {
                if (!isNumeric(objectToHandle)) {
                    return false;
                }
                return isEven(objectToHandle);
            }

            public void handle(Object objectToHandle) {
                // Optionally call canHandle again to ensure the calling class is fulfilling its contract
                doSomething();
            }
        }

        public class OddNumberProcessor implements Processor {
            public boolean canHandle(Object objectToHandle) {
                if (!isNumeric(objectToHandle)) {
                    return false;
                }
                return isOdd(objectToHandle);
            }

            public void handle(Object objectToHandle) {
                // Optionally call canHandle again to ensure the calling class is fulfilling its contract
                doSomething();
            }
        }

        // Can optionally implement the Processor interface itself
        public class ProcessorDelegator {
            private List<Processor> processors = new ArrayList<>();

            public void addProcessor(Processor processor) {
                processors.add(processor);
            }

            public void process(Object objectToProcess) {
                // Look up the relevant processor, either by keeping a list of what they
                // can process or by querying each one to see if it can process the object.
                Processor chosenProcessor = chooseProcessor(objectToProcess);
                chosenProcessor.handle(objectToProcess);
            }
        }

    Note there are a few variations I see on this. In one variation the subclasses provide a list of things they can process, which the ProcessorDelegator understands. The other variation, which is listed above in fake code, is where each is queried in turn. This is similar to chain of command, but I don't think it's the same, as chain of command means that the processor needs to pass to other processors. The other variation is where the ProcessorDelegator itself implements the interface, which means you can get trees of ProcessorDelegators which specialise further. In the above example you could have a numeric ProcessorDelegator which delegates to an even/odd processor, and a string ProcessorDelegator which delegates to different string processors. My question is: does this pattern have a name?

    Read the article

  • LAMP Server: (104) Connection reset by peer

    - by StephenM
    A user reported the following problem when attempting to visit www.airlinemogul.com:

        The requested URL could not be retrieved

        While trying to retrieve the URL: http://www.airlinemogul.com/airlinemogul/index.php

        The following error was encountered:
        * Read Error

        The system returned: (104) Connection reset by peer

        An error condition occurred while reading data from the network.
        Please retry your request.

    There are no other issues reported by any other users, so it may be an isolated issue. Could anyone give me any suggestions as to how I could investigate the problem further or find a solution? Thanks.

    Read the article

  • Does this BSD-like license achieve what I want it to?

    - by Joseph Szymborski
    I was wondering if this license is:
      - self-defeating
      - just a clone of an existing, better established license
      - practical
      - any more "corporate-friendly" than the GPL
      - too vague/open ended
    and finally, if there is a better license that achieves a similar effect. I wanted a license that would (in simple terms):
      - be as flexible/simple as the "Simplified BSD" license (which is essentially the MIT license)
      - allow anyone to make modifications as long as I'm attributed
      - require that I get a notification that such a derived work exists
      - require that I have access to the source code and be given license to use the code
      - not oblige the author of the derivative work to have to release the source code to the general public
      - not oblige the author of the derivative work to license the derivative work under a specific license
    Here is the proposed license, which is just the Simplified BSD with a couple of additional clauses (all of which are bolded):

        Copyright (c) (year), (author) (email)
        All rights reserved.

        Redistribution and use in source and binary forms, with or without modification,
        are permitted provided that the following conditions are met:

        1. Redistributions of source code must retain the above copyright notice, this
           list of conditions and the following disclaimer.
        2. Redistributions in binary form must reproduce the above copyright notice,
           this list of conditions and the following disclaimer in the documentation
           and/or other materials provided with the distribution.
        3. The copyright holder(s) must be notified of any redistributions of source code.
        4. The copyright holder(s) must be notified of any redistributions in binary form.
        5. The copyright holder(s) must be granted access to the source code and/or the
           binary form of any redistribution upon the copyright holder's request.

        THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
        ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
        WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
        IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
        INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
        BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
        DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
        LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
        OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
        OF THE POSSIBILITY OF SUCH DAMAGE.

    Read the article

  • How do MaxSpareServers work in Apache?

    - by John Hunt
    I've scoured the web but I can't find out what MaxSpareServers does in the Apache prefork MPM. The documentation says: "The MaxSpareServers directive sets the desired maximum number of idle child server processes. An idle process is one which is not handling a request. If there are more than MaxSpareServers idle, then the parent process will kill off the excess processes." Great, but what causes a spare server to be created? More importantly, when does a spare server go away? I understand that MinSpareServers are created gradually after the server is started. How does MaxSpareServers relate to MaxClients? Basically I'm at a bit of a loss on how best to configure Apache; there's a lot of documentation out there but it isn't that clear. Thanks, John.
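
    For what it's worth, a spare server is simply an idle child process: whenever fewer than MinSpareServers children are idle, the parent forks new ones, and whenever more than MaxSpareServers are idle, it kills the surplus; MaxClients caps the total number of children (busy plus idle) regardless of the spare settings. A sketch of how the prefork directives usually sit together (values are illustrative, not a recommendation):

        <IfModule mpm_prefork_module>
            StartServers          5    # children forked immediately at startup
            MinSpareServers       5    # below this many idle children, fork more
            MaxSpareServers      10    # above this many idle children, kill the excess
            MaxClients          150    # hard cap on simultaneous children, busy or idle
            MaxRequestsPerChild   0    # 0 = never recycle a child by request count
        </IfModule>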

    Read the article

  • Is it bad practice for a module to contain more information than it needs?

    - by gekod
    I just wanted to ask for your opinion on a situation that occurs sometimes and for which I don't know the most elegant solution. Here it goes: We have module A, which reads an entry from a database and sends a request to module B containing ONLY the information from the entry that module B needs to accomplish its job (to keep modularity I just give it the information it needs; module B has nothing to do with the rest of the information in the DB entry). Now, after finishing its job, module B has to reply to a module C whether it succeeded or failed. To do this, module B replies with the information it got from module A plus some variable meaning success or failure. Now here comes the problem: module C needs to find that entry again, BUT the information it got from module B is not enough to uniquely find the exact same entry. I don't think it would be good practice for module A to give module B extra information it doesn't need for its job, just so module B can hand it back to module C, because that would mean giving a module information it doesn't really need. What do you think?

    Read the article

  • Hyper-V Ubuntu 10.04, Filesystem suddenly becomes Read-Only?

    - by Daniel Upton
    We are running an Ubuntu 10.04 VM on a Hyper-V system. The VM is dedicated to running one of our web applications. We have enabled the Hyper-V drivers in /etc/initramfs-tools/modules like so:

        hv_vmbus
        hv_storvsc
        hv_blkvsc
        hv_netvsc

    and updated the kernel image like so:

        $ update-initramfs -u

    And all was good... until this morning, when I got a support request that our web application was throwing an error 500. I checked the logs and nothing was there. Then I remembered that I had seen this on another of our Ubuntu servers, so I ran:

        $ touch foo.txt

    and my suspicions were confirmed:

        touch: cannot touch `foo.txt': Read-only file system

    Why is the filesystem randomly becoming read-only? Is this only an issue with Ubuntu on Hyper-V? Is it a problem on Red Hat or CentOS as well?

    Read the article

  • High availability for Windows Service under Windows Server 2003

    - by empi
    Hi. I have the following situation: I need to deploy a Windows service that listens for incoming requests on a TCP port (basically a WCF service). I have a high availability requirement: the service must be deployed on two servers, and if the service stops (only the service, not the whole server) on one server, all requests must be redirected to the second one. To me it looks like a basic failover scenario. How can I achieve this on Windows Server 2003? Should I use Microsoft Cluster Service or Network Load Balancing? The important part is that the process of swapping the servers should not concern the clients (the client must see only a single address / single host or domain name). Thanks in advance for the help.

    Read the article

  • How do I remove a LOT of indexed pages from Google?

    - by Thierry
    A few weeks ago we figured out that Google had indexed some information, in the form of individual PDF files, that we would rather keep confidential. Our assumption was that this was a problem with our robots.txt that we had overlooked. Even though we are not sure whether that is the case, we are certain that the robots.txt file is in a valid format and is, according to Google's Webmaster Tools, blocking the files. However, even after this adjustment was made weeks ago, Google still has the PDF files indexed, though it does tell us that further information cannot be provided due to the robots.txt file being present. As you can hopefully understand, this is unwanted behaviour given the nature of the documents. I am aware that there is a removal request page provided by Google for this purpose, but there are a lot of files. Is there an easier way to get Google to remove all of the files from its search engine? If not, is there anything else you could advise us to do besides manually requesting Google to remove every single page? Thanks in advance.
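
    One bulk-friendly option, assuming the site runs Apache with mod_headers enabled (neither is stated in the question), is to serve every PDF with a noindex directive via the X-Robots-Tag response header. Note that Google has to be able to recrawl the files to see the header, so the robots.txt block would need to be lifted for this to take effect:

        <FilesMatch "\.pdf$">
            Header set X-Robots-Tag "noindex, noarchive"
        </FilesMatch>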

    Read the article

  • Steps to send patch to Launchpad

    - by Alois Mahdal
    With a Git/GitHub background and knowing very little about the Bazaar VCS, I would like to occasionally report a bug to Launchpad and even send a patch. I'd like to do it in a "proper" way so that it's ready for merging or improvement while not getting in the way. I can't seem to find a decent, simple how-to suited to my needs. So what I did so far: I have created a Launchpad account, reported the bug, installed Bazaar and set up SSH keys, etc. Now if it were GitHub, I'd fork the repo, clone the forked repo, create a sanely named branch, do the work, commit and push, and create a pull request using the GitHub web UI. But it's not GitHub, and both the Launchpad and Bazaar architectures seem quite different from their GitHub/Git counterparts. So could a kind soul save me from drowning in tons of documents and compile a straightforward sequence of steps suited to my needs, mainly the second part? Possibly including the relevant CLI commands where they are needed? Edit: It seems that I should clarify whether I'm asking specifically about Ubuntu packages (whatever that means) or Launchpad packages. I don't really care much about the distinction between Ubuntu packages and non-Ubuntu packages. Any software could be in Ubuntu today and out of it tomorrow, or vice versa. The development is what matters much more than the distribution. So I was assuming that not every single package distributed in Ubuntu is hosted on Launchpad, and that an "official" or "default" workflow for Launchpad exists (well, if all devs can agree on using Bazaar, why couldn't most of them agree on a patching workflow?), so I'm asking about the Launchpad way, not the Ubuntu way. And I chose AU because the intersection is vast, so I guess it's pretty on topic here.
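
    For the command-line part, the usual Launchpad/Bazaar flow looks roughly like the sketch below (project name, branch name and bug number are placeholders; the merge proposal itself is filed from the branch's page on Launchpad after the push):

        bzr whoami "Your Name <you@example.com>"
        bzr launchpad-login your-launchpad-id
        bzr branch lp:someproject fix-lp-1234567     # grab trunk into a local branch
        cd fix-lp-1234567
        # ...edit, test, then:
        bzr commit -m "Fix the frobnicator crash (LP: #1234567)"
        bzr push lp:~your-launchpad-id/someproject/fix-lp-1234567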

    Read the article
