Search Results

Search found 24401 results on 977 pages for 'duplicate content'.

Page 673/977

  • Filtering the download of a file

    - by Ozgun Sunal
    I know there are several types of firewalls operating at different layers of the OSI model: ACLs (layer 3 firewalls that filter on port numbers and IP addresses), SPI (which examines data patterns at layer 3 to decide whether the content is malicious), and application-layer firewalls, which can understand the data at that level. With that in mind, here is an example of what I need to do. Say we have a computer with access to the Internet. I want to download a file or display a web page from one website, but block access to another website or block downloads from it. I can't simply block the browser in a third-party firewall, because that would shut down all access, and ACLs alone won't do it. So which kind of firewall makes it possible to filter this specific traffic, and how?
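    A proxy or application-layer firewall can make this kind of distinction because it sees hostnames and URLs rather than only addresses and ports. The toy Python sketch below illustrates the per-request decision such a filter makes; the hostnames are hypothetical and this is a concept demo, not a real firewall:

        # Toy illustration of application-layer filtering: the decision is based on
        # the HTTP Host header (a hostname), not on IP addresses or port numbers.
        BLOCKED_HOSTS = {"blocked-site.example"}   # hypothetical blocklist

        def allow_request(raw_http_request: bytes) -> bool:
            """Return False if the request is addressed to a blocked hostname."""
            for line in raw_http_request.decode("ascii", "ignore").splitlines():
                if line.lower().startswith("host:"):
                    host = line.split(":", 1)[1].strip().split(":")[0].lower()
                    return host not in BLOCKED_HOSTS
            return True  # no Host header: leave it to a stricter rule

        # A filtering proxy would run this check for every request it forwards.
        print(allow_request(b"GET /file.zip HTTP/1.1\r\nHost: blocked-site.example\r\n\r\n"))  # False
        print(allow_request(b"GET / HTTP/1.1\r\nHost: allowed-site.example\r\n\r\n"))          # True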

    Read the article

  • News applications' internal workings [on hold]

    - by Vijay
    How do news applications work, other than those based purely on RSS feeds? I know some of them take the RSS content from the source site, but those applications often show a title, description, date, image, video and so on, even when the original site's RSS feed contains no image or video. How do they get that content into their applications? Some even show feeds from magazine and newspaper sites. How do these applications work? I am creating an application that links to different news sites' feeds, organised by category (top news, technology, games, articles, etc.). The front page will show the site names; selecting a site fetches its feed and shows it to the user. So I would like to know: should all data be fetched when the user makes a selection, or should it be prefetched? And how do I fetch the detailed information from the original article beyond what the RSS data provides?
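    In practice most of these apps read the RSS feed for the title, date and summary, then fetch each linked article page and scrape the extra fields (most commonly the Open Graph image tag) from the HTML. The Python sketch below shows that approach; the feed URL is a placeholder, feedparser is a third-party library (pip install feedparser), and the og:image regex assumes the usual attribute order:

        import re
        import urllib.request

        import feedparser  # third-party: pip install feedparser

        FEED_URL = "https://example.com/news/rss"  # placeholder feed

        def og_image(article_url):
            """Fetch the article page and pull the Open Graph image URL, if any."""
            try:
                html = urllib.request.urlopen(article_url, timeout=10).read().decode("utf-8", "ignore")
            except OSError:
                return None
            match = re.search(r'<meta[^>]+property=["\']og:image["\'][^>]+content=["\']([^"\']+)', html)
            return match.group(1) if match else None

        feed = feedparser.parse(FEED_URL)
        for entry in feed.entries:
            print(entry.get("title"))
            print(entry.get("published", "no date"))
            print(entry.get("summary", "")[:120])
            # Images and videos rarely live in the RSS item itself; scrape the page.
            link = entry.get("link")
            print(og_image(link) if link else None)

    Whether to prefetch is a trade-off: prefetching on a schedule makes selection instant and works offline, while fetching on selection always shows the latest items; many apps prefetch the feeds and only scrape article pages on demand.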

    Read the article

  • LVM vs RAID0 vs RAID "linear" - Combine 2 disks as one, data recovery?

    - by leto
    Hi there, given two 2TB USB external disks that have to be combined into one 4TB volume and formatted with one big filesystem (XFS), I have a small question. Does LVM provide better data recovery should one disk be unplugged or damaged, i.e. can the data on the still-working disk be recovered, or is everything lost? I would prefer a setup where only the data on one disk is lost and I can recover the content of the other with the usual filesystem/LVM/RAID tools. Is that possible with LVM or RAID "linear"? This is for storing unimportant files that can be retrieved from backup, but I want to save time :) Thank you in advance.

    Read the article

  • Download size of streaming videos

    - by Excalibur2000
    If a website advertises a streaming download as, say, 100MB, will the download to my computer be 100MB? Would there be streaming control packets that a service provider would charge for over and above the 100MB of content? Assume the latest RealPlayer viewer. The rub for me is that I have downloaded MIT lectures and, according to my file manager, the file sizes matched the download sizes shown on YouTube. However, my ISP seems to think the streams were larger and charged me for more than the file size of the download. I am left wondering where the extra data came from.

    Read the article

  • ADF Hands on Training – Prerequisites for 22nd March 2011

    - by Grant Ronald
    For those of you coming to the ADF hands-on training on the 22nd March in London, there was a link to the prerequisites.  Unfortunately, in a reshuffle of content on OTN, this page was removed.  So, over the next day or so I'm hoping to pull together the relevant information into this blog post.  So keep checking back! Firstly, you need to bring your laptop with you to do the hands-on exercises.  No laptop, no hands on. Recommended:
    - 2GB RAM
    - Microsoft Windows XP SP2, 2003 Server SP2, Vista (32-bit only), Windows 7, Linux or Mac
    - 2GHz processor (less will be acceptable but slower)
    - Mozilla Firefox 2.0 or higher, Internet Explorer 7 or higher, Safari 3.0 or higher, or Google Chrome 1.0 or higher
    - WinZip or other extraction software
    - Adobe Acrobat Reader
    - Flash (if you want to see dynamic graphs in your application)
    As for software, you will need to have installed JDeveloper 11g.  The hands-on instructions are based on 11.1.1.2 (or is it 11.1.1.3?); anyway, either of those or 11.1.1.4 would be fine. You also need an Oracle database on your machine and access to the HR schema (which should be unlocked).  Don't expect to have access to a network and a VPN to a database. A simple test: unplug your laptop from your corporate network, run up JDev, select File -> New -> Database Connection and make sure you can connect to the HR database and see the Emp/Dept etc. tables.  If you can do that, you should be good to go. I would strongly recommend ensuring you have this in place before you arrive on Tuesday. Look forward to seeing you there.

    Read the article

  • Postfix - How to process incoming emails?

    - by Borivojevic
    Hello. Does anybody know how to process incoming emails for virtual mailboxes in Postfix? I am building a web application where users add new content by sending emails to the application. The email address used for each user is custom (e.g. [email protected]) and is dynamically created as a Postfix virtual mailbox. A user needs to be able to send email to his custom mailbox address ([email protected]), and I want to process each incoming email, parse its contents and populate my database with the data from the email. I tried using a Postfix after-queue filter, but what I really want is to process emails once they are saved in the user's virtual mailbox folder.
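    One common way to do that post-delivery processing is to read the Maildir that Postfix delivers into and parse each message with Python's standard library (an after-queue pipe to a script is the usual alternative when processing must happen before delivery). Below is a minimal sketch under those assumptions; the Maildir path is a placeholder and the database insert is left as a comment:

        import mailbox
        from email.header import decode_header

        MAILDIR = "/var/mail/vhosts/mydomain.com/user"  # placeholder virtual mailbox path

        def decoded(value):
            """Decode an RFC 2047 encoded header into plain text."""
            return "".join(
                chunk.decode(enc or "utf-8", "replace") if isinstance(chunk, bytes) else chunk
                for chunk, enc in decode_header(value or "")
            )

        box = mailbox.Maildir(MAILDIR, factory=None)
        for key, msg in box.items():
            sender = decoded(msg["From"])
            subject = decoded(msg["Subject"])
            # Walk the MIME tree and keep the first text/plain part as the body.
            body = ""
            for part in msg.walk():
                if part.get_content_type() == "text/plain":
                    payload = part.get_payload(decode=True) or b""
                    body = payload.decode(part.get_content_charset() or "utf-8", "replace")
                    break
            print(key, sender, subject, body[:80])
            # Insert (sender, subject, body) into the database here, then
            # move or delete the processed message so it is not handled twice.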

    Read the article

  • Nginx issue with two web nodes

    - by HTF
    I'm running a Wordpress website with Nginx and Memcached. I have simple DNS round-robin balancing, with A records pointing to both web servers. I've noticed the following entries in both web servers' access logs:

        192.168.1.10 example.com - [07/Jun/2012:22:43:58 +0100] "-" 400 0 "-" "-" - 0.000
        192.168.1.10 example.com - [07/Jun/2012:22:43:58 +0100] "-" 400 0 "-" "-" - 0.000
        192.168.1.10 example.com - [07/Jun/2012:22:43:58 +0100] "-" 400 0 "-" "-" - 0.000
        192.168.1.10 example.com - [07/Jun/2012:22:43:58 +0100] "-" 400 0 "-" "-" - 0.000
        192.168.1.10 example.com - [07/Jun/2012:22:43:58 +0100] "-" 400 0 "-" "-" - 0.000

    I've configured the W3 Total Cache plugin for Wordpress to point to the loopback address (127.0.0.1:11211) on each Wordpress installation. Is this because the web server is trying to access content that is cached on the other web server? Should I add the IPs of both web servers to the W3 plugin on each site (192.168.1.:11211, 192.168.1.2:11211)? I'm not sure whether this is related to Memcached or is some configuration issue on the server itself. Regards

    Read the article

  • /manual/cache folder on my server?

    - by MrZombie
    Hi all. On our site's server, once managed by someone who's no longer with us, there's a folder named "/manual/cache" which contains txt files named+like+this, mostly using pornography-related keywords. The content is mainly spam-like gibberish. My assumption is that it's somehow used to spam search engines, but I might be wrong, which is why I'm asking here. Any idea what it might mean or contain? As an additional note, the person's hiring period oddly corresponds to the dates of the files, which seem to have automagically stopped being generated after the date we parted ways.

    Read the article

  • Lighttpd based server issues crop up when port forwarding

    - by michael
    I have four host computers running lighttpd web servers. They are sitting behind an HSPA modem, each occupying an HTTP port between 81 and 84; port 80 is taken by the modem itself. The port forwarding is set up correctly; however, only a portion of any web page I request from any of the hosts comes through (they all fail after about 20% of the page). If I put the host on port 81 into the DMZ, it serves pages fine. The others do not respond to the DMZ treatment. Is it possible that the web content on the hosts somehow requires ports other than their respective HTTP port? Or is it possible that even though server.port is set in the lighttpd_ssl.conf file, the individual hosts are still expecting to serve on port 80? I am not familiar with lighttpd, nor did I set them up; they are running on video encoders I purchased. I can grab any files from them required for further information on the problem.

    Read the article

  • How does the performance of pure Nginx compare to cpNginx?

    - by jb510
    There is now a Cpanel plugin to fairly easily set up Nginx as a reverse proxy on a Cpanel/Apache server. I've been simultaneously interested in setting up my first unmanaged VPS and my first Nginx server, and as a masochist figured why not combine the two. I'm wondering, however, whether it's worth setting up a pure Nginx server vs trying out cpNginx on Apache. My goal is solely to host WordPress sites, and while what I've read raves about Nginx's exceptional ability to serve static content, at least as a reverse proxy, I am unclear whether there is a substantial benefit to running pure Nginx with eAccelerator over cpNginx on Apache for dynamic sites. Regardless, I'll be running W3TC on all sites to cache content, but I'm still interested in whether there are big CPU reductions when running PHP scripts under pure Nginx rather than cpNginx.

    Read the article

  • Which are the best ways to organize view hierarchies in GUI interfaces?

    - by none
    I'm currently trying to figure out the best techniques for organizing GUI view hierarchies, that is, dividing a window into several panels which are in turn divided into other components. I've had a look at the Composite design pattern, but I don't know if there are better alternatives, so I'd appreciate knowing whether using Composite is a good idea or whether I should look for other techniques. I'm currently developing in Java Swing, but I don't think the framework or the language has a great impact on this. Any help will be appreciated. EDIT: I was developing a frame containing three labels, one button and a text field. When the button is pressed, the content of the text field is searched for and the results are written into the three labels. A typical structure of mine would be the following:

        MainWindow
          | Main panel
              | Panel with text field and labels
              | Panel with search button

    Now, as the title explains, I was looking for a suitable way of organizing both the main panel and the other two panels. But here problems arose, since I'm not sure whether to keep them as attributes or store them in some data structure (e.g. a LinkedList or something like that). Anyway, I don't really think either of my solutions is really good, so I'm wondering whether there are better approaches to this kind of problem. Hope this helps.
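    One straightforward way to realise this composite structure is to make each panel a class that owns its children as plain attributes, with the top-level panel wiring them together; a list only helps when the children are uniform and unbounded. The sketch below uses Python/Tkinter purely for brevity (an assumption on my part, not the asker's setup), but the same shape maps directly onto Swing JPanels:

        import tkinter as tk

        class SearchPanel(tk.Frame):
            """Child panel: the text field plus the labels that display results."""
            def __init__(self, parent):
                super().__init__(parent)
                self.entry = tk.Entry(self)
                self.entry.pack(fill="x")
                self.labels = [tk.Label(self, text="") for _ in range(3)]
                for lbl in self.labels:
                    lbl.pack(anchor="w")

            def show_results(self, results):
                for lbl, text in zip(self.labels, results):
                    lbl.config(text=text)

        class ButtonPanel(tk.Frame):
            """Child panel: just the search button; the action is delegated upward."""
            def __init__(self, parent, on_search):
                super().__init__(parent)
                tk.Button(self, text="Search", command=on_search).pack()

        class MainPanel(tk.Frame):
            """Composite: owns its sub-panels as attributes and wires them together."""
            def __init__(self, parent):
                super().__init__(parent)
                self.search_panel = SearchPanel(self)
                self.button_panel = ButtonPanel(self, self.do_search)
                self.search_panel.pack(fill="both", expand=True)
                self.button_panel.pack()

            def do_search(self):
                term = self.search_panel.entry.get()
                # Placeholder: a real search would go here.
                self.search_panel.show_results(
                    ["Result %d for '%s'" % (i, term) for i in range(1, 4)])

        if __name__ == "__main__":
            root = tk.Tk()
            MainPanel(root).pack(fill="both", expand=True)
            root.mainloop()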

    Read the article

  • need a different backup solution

    - by DigitalJedi
    I just built a new media/backup server using Ubuntu 12.04 64-bit. I installed a hard drive to be used only for music, pictures and videos, and formatted it FAT32 so my one and only Windows PC could map those folders as network shares. My laptop, also running Ubuntu 12.04, is what I use the most, so new media is downloaded to my laptop first. I already have the music, videos and pictures folders from my server mounting as shares on my laptop at boot, thanks to some fstab edits and sshfs. Now I want either an app or a script that could back up any new files I add to my local media folders to the mounted folders on my server. I've been Googling all day and found a few tools like rsync, but they seem to have issues with ext4-to-vfat backups. I thought maybe a script would be best, but I'm new to scripting on Linux and don't want to mess anything up. Basically I am looking for something that will back up only newly added files to the server; I figure I could schedule it once a week. There are some stipulations. For example, my local music folder has over 700 folders, one for each artist/band, with subfolders inside those for albums. I want something smart enough to copy only newly added content, so I'm guessing the modified date would probably be a good condition if I were scripting. I'm rambling. Any suggestions would be GREATLY appreciated. I'm not finding anything to suit my needs. I'm almost to the point of just learning bash scripting so I can write something, but then it would be a couple of weeks or so before I have a possible solution, and I'd like something in place sooner.
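    A short script that walks the local folders and copies only files that are missing or newer on the mounted share is one way to do this, and it can be run weekly from cron. Below is a minimal Python sketch under those assumptions; the paths are placeholders, and FAT32's coarse timestamps and lack of Unix permissions are worked around rather than copied:

        import os
        import shutil

        # Placeholder paths: the local library and the sshfs-mounted share on the server.
        SRC = os.path.expanduser("~/Music")
        DST = os.path.expanduser("~/server/Music")

        copied = 0
        for root, dirs, files in os.walk(SRC):
            rel = os.path.relpath(root, SRC)
            target_dir = os.path.join(DST, rel)
            os.makedirs(target_dir, exist_ok=True)
            for name in files:
                src_file = os.path.join(root, name)
                dst_file = os.path.join(target_dir, name)
                # FAT32 stores mtimes with roughly 2-second resolution and has no
                # owner/permission bits, so copy only when the file is missing,
                # differs in size, or is clearly newer.
                if (not os.path.exists(dst_file)
                        or os.path.getsize(src_file) != os.path.getsize(dst_file)
                        or os.path.getmtime(src_file) > os.path.getmtime(dst_file) + 2):
                    shutil.copyfile(src_file, dst_file)
                    try:
                        shutil.copystat(src_file, dst_file)
                    except OSError:
                        pass  # vfat cannot hold Unix permissions; the data is what matters
                    copied += 1

        print("%d new or changed files copied" % copied)

    A weekly crontab entry pointing at the script (for example, 0 3 * * 0 python3 /home/user/sync_media.py, with the path adjusted to wherever you save it) would cover the once-a-week schedule mentioned above.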

    Read the article

  • I found two USB sticks on the ground. Now what?

    - by Stefano Borini
    As per the subject: I want to see what's inside. I am seriously interested in finding the owner if possible and returning them, but I am worried it could be an attempt at social engineering. I own an Intel MacBook with OS X 10.6, and it is a very important install. What would you do in my situation if you wanted to see the content without risk? Any proposal is welcome. Edit: I decided not to plug them in and brought them to the hotel reception. They will forward them to the police.

    Read the article

  • Customer Reviews on Company Listings [closed]

    - by GSTAR
    I'm not sure if this is the right place, but I am after some general advice on a feature I am looking to implement on my website. The website is a wedding directory where companies can advertise in the form of a directory listing. A listing contains the company details, such as what they do and how you can contact them. There are three packages available: a Basic package, which is free, and Silver and Gold packages, which are charged for. Now, in order to further enhance the directory, I want to enable customer reviews for each listing. This is basically where customers who have used a particular company's service can write a review of their experience (and rate it out of 5). The dilemma I face is that if companies are paying for their listings, then surely they will expect not to have content on their listing that would tarnish their reputation (i.e. negative reviews). At the same time, I want to be impartial and help my site visitors make informed choices about which companies to book when arranging their weddings. Of course I will exercise my ability to remove offensive or fake reviews, but I do not want to remove reviews just to satisfy my clients. Suppose a client pays me a premium fee to have their listing on my front page and at the top of the listings. Now suppose that client gets lots of negative reviews: the fact that they have increased visibility means they will also get increased bad publicity. This in turn means they won't renew their package and will most likely request that the listing be taken down. So, how can I get the right balance here and keep everyone happy?

    Read the article

  • Stream computer screen to TV via network instead of a USB wireless link

    - by user24559
    I want to stream my computer screen (not just video or a limited amount of content) to my TV via the network. I know there are wireless devices that use USB to transfer the screen to the TV; however, these are limited to a short distance. What I want to do is stream the data over the network so I can be anywhere within the network and have the data shown on the TV. I am looking to transfer both video and sound: I want the entire computer screen to transfer, just as when you connect the computer to the TV via VGA or HDMI and take the sound out through the 3.5mm plug. I have been unable to find a unit that allows the entire computer screen to transfer via the network; I can only find the ability to stream video files. I am using Windows 7 Ultimate with a quad-core processor and 16 GB of memory, so I have the power to handle the transfer. My TV is an HDTV.

    Read the article

  • Using mod_rewrite for a Virtual Filesystem vs. Real Filesystem

    - by philtune
    I started working in a department that uses a CMS in which the entire "filesystem" works like this:
    1. Create a named file or folder - the file is given a unique node (e.g. 2345) as well as a default "filename" (e.g. /WelcomeToOurProductsPage), and a template is applied.
    2. Assign one or more aliases to the file for a URL redirect (e.g. /home-page-products - it can also be accessed by /home-page-products.aspx).
    3. A new Rewrite command is written to the .htaccess file for each and every alias.
    4. The server accesses either /WelcomeToOurProductsPage or /home-page-products and redirects to something like /template.aspx?tmp=2&node=2345 (here I'm guessing what it does - I only have front-end access for now - but I have enough clues to strongly assume this).
    5. Node 2345 grabs content stored in a SQL DB and applies it to the template.
    Note: there are no actual files being created on the filesystem; it's entirely virtual. This is probably a very common thing, but since I had never run across this kind of system before two months ago, I wanted to explain it in case it isn't common. I'm not a fan at all of ASP or closed-source systems, so it may be that this is common practice for ASP developers. My question, which has taken far too long to ask, is: what are the benefits of this kind of system, as opposed to creating an actual file hierarchy? Are there any drawbacks to having every single file request redirected, or to having the .htaccess file hold rewrite rules for every single alias?

    Read the article

  • SharePoint managed properties

    - by paulie
    Originally posted on StackOverflow, and edited for clarity. I have a custom Content Type inside a list that has over 30 items (which were uploaded via DocKIt), and I have added several "managed properties" to the "crawled properties" in the SSP. All of them work except one. The column "Synopsis" is a multiline field with no limit on its length. It appears as a crawled property "Synopsis" and is mapped to a managed property 'asynop'. On the 'Advanced Search' page it is added as a property and is searchable; however, it only returns some matching records (if any). I manually created an entry, ran the crawl and was able to search for it. I edited an existing entry, ran the crawl (full and incremental), and it still only returned the manually entered entry. If I enter the search term in the search box directly, "asynop:fatigue", then all the correct results appear. Why is this happening? And could it please stop?

    Read the article

  • node.js server not running

    - by CMDadabo
    I am trying to learn node.js, but I'm having trouble getting the simple server to run on localhost:8888. Here is the code for server.js:

        var http = require("http");
        http.createServer(function(request, response) {
          response.writeHead(200, {"Content-Type": "text/plain"});
          response.write("Hello World");
          response.end();
        }).listen(8888);

    server.js runs without errors, and running netstat -an | grep 8888 from the terminal returns

        tcp4  0  0  *.8888  *.*  LISTEN

    However, when I go to localhost:8888 in a browser, it says that it cannot be found. I've looked at all the related questions and nothing has worked so far. I've tried different ports, etc. I know that my router blocks incoming traffic on port 8888, but shouldn't that not matter if I'm trying to access it locally? I've run Tomcat servers on this port before, for example. Thanks so much for your help! node.js version: v0.6.15 OS: Mac OS 10.6.8

    Read the article

  • Windows file Sync

    - by Deane Venske
    So I have a big problem at the moment: trying to find a reliable solution for syncing two Windows IIS servers. I need to keep the web content mirrored on both. I have been trying to use rsync up to this point, but unfortunately file permission errors are a nightmare to manage this way. I'm testing out Dropbox, but the performance sucks. I'm more familiar with Linux and I've used rsync in the past, but isn't there a native Windows solution that will work?

    Read the article

  • Apache HTTP Not Working When SSL Enabled

    - by dominic7il
    I've got a very bizarre problem in that after enabling SSL support in Apache I'm only able to access my site via SSL and not through HTTP as well. I can confirm that Apache is definitely listening on both ports 80 and 443 (according to netstat). Additionally, the Apache access logs are showing the requests; it's just that going in through HTTP results in a timeout and I'm never actually able to reach the content. Like I said, going through HTTPS works. Here is my httpd.conf: http://pastebin.com/kG2dPjJ2 and here is my httpd-ssl.conf: http://pastebin.com/thqvjgGJ Can anyone spot any issues with those configurations, or have any suggestions at all? I've searched and searched, but there appear to be very few people who have experienced the same. Also worth mentioning: I did a comparison between those configurations and those of a working setup and couldn't spot anything.

    Read the article

  • Make nginx avoid cache if response contains Vary Accept-Language

    - by gioele
    The cache module of nginx version 1.1.19 does not take the Vary header into account. This means that nginx will serve the same cached response even if the content of one of the fields specified in the Vary header has changed. In my case I only care about the Accept-Language header; all the others have been taken care of. How can I make nginx cache everything except responses whose Vary header contains Accept-Language? I suppose I should have something like

        location / {
            proxy_cache cache;
            proxy_cache_valid 10m;
            proxy_cache_valid 404 1m;
            if ($some_header ~ "Accept-Language") {   # WHAT IS THE HEADER TO USE?
                set $contains_accept_language         # HOW SHOULD THIS VARIABLE BE SET?
            }
            proxy_no_cache $contains_accept_language;
            proxy_http_version 1.1;
            proxy_pass http://localhost:8001;
        }

    but I do not know what the variable name is for "the Vary header received from the backend".

    Read the article

  • Show symbolic links AND their targets in web directory listing (apache)

    - by Erwan Queffélec
    Listing a directory's content with ls -l shows this output:

        total 12
        drwxr-xr-x 3 root root 4096 Dec 11 16:38 2.3
        drwxr-xr-x 5 root root 4096 Dec 11 16:38 2.4
        drwxr-xr-x 2 root root 4096 Dec 11 16:38 archive
        lrwxrwxrwx 1 root root   10 Dec 11 16:38 current -> 2.4/2.4.1/
        lrwxrwxrwx 1 root root   10 Dec 11 16:38 next -> 2.4/2.4.2/
        lrwxrwxrwx 1 root root   10 Dec 11 16:38 previous -> 2.4/2.4.0/

    Notice how it shows the symbolic links and their respective targets. I need to know whether there is a way of getting the same behaviour in Apache directory browsing. If Apache is not capable of it, as I suspect, is there an application (FLOSS) providing that kind of behaviour?

    Read the article

  • Can I proxy my no-ip domain using a .htaccess file on my hosted domain?

    - by Dean
    I have a domain http://www.example.com which has a hosting package and website on it. I also have a http://example.no-ip.org domain which contains some content I would like to appear under the same domain. Can I set up a .htaccess file at http://www.example.com/proxy/ which proxies the files at http://www.example.no-ip.org/files/? Similarly, could I host an entire domain in the same way, e.g. http://www.example2.com/ proxying http://example.no-ip.org/files2/? Alternatively, if someone were to say "That's stupid, use this free (or super-cheap) dynamic DNS host:" I would probably accept that answer.

    Read the article

  • Appcache and jquery mobile on a CMS powered site?

    - by user793011
    Has anyone used the cache manifest to make a CMS site work offline? I've made a demo with static HTML files which seems to work fine, so I'm assuming it wouldn't be too hard to achieve the same thing with a CMS. The way you tell browsers that files have changed (and so need to be downloaded again) is by adding a comment to the cache manifest file so its content changes. I'm not quite sure how to do this with a CMS, but maybe some sort of server cron job could run periodically? Personally I'm more interested in having a site that works offline than in achieving ideal performance, so if the file were modified every hour rather than only when content actually changed, that would be fine for me. If anyone has used appcache with a CMS, has anyone done so with jQuery Mobile at the same time? What I'm after is a fully native feel for a site that's accessible offline; in other words, I want to mimic a native app. My static demo does this perfectly with jQuery Mobile, so again I would have thought this would be achievable in a CMS.
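    One way to handle that invalidation from a CMS is a small script, run hourly from cron, that rewrites the manifest with a hash of the cached files, so the manifest's content changes whenever they do and the browser re-downloads the cache. A minimal Python sketch with placeholder paths and file names:

        import hashlib
        import pathlib

        SITE_ROOT = pathlib.Path("/var/www/site")              # placeholder document root
        ASSETS = ["index.html", "css/site.css", "js/app.js"]    # placeholder files to cache offline

        # Hash the cached files so the manifest text changes whenever their content
        # changes; browsers refresh the application cache only when the manifest differs.
        digest = hashlib.sha1()
        for name in ASSETS:
            digest.update((SITE_ROOT / name).read_bytes())

        manifest = "CACHE MANIFEST\n# version: %s\n\n%s\n\nNETWORK:\n*\n" % (
            digest.hexdigest(), "\n".join(ASSETS))
        (SITE_ROOT / "site.appcache").write_text(manifest)

    The manifest still needs to be served with the text/cache-manifest MIME type, just as with the static demo.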

    Read the article

  • Squid as a reverse proxy always gives TCP_MISS

    - by JaakL
    I installed the latest squid3 in front of Apache as a reverse proxy. The problem is that it always gives TCP_MISS; in fact, I have not yet found a single TCP_HIT message in the log file, even though most of the content is static. The relevant config values for cache_dir and refresh_pattern are the defaults, and the directory /var/spool/squid3 exists and contains some files/folders. I have 100+GB of free storage, but reconfiguring gives the warning "WARNING cache_mem is larger than total disk cache space!", which does not make any sense to me. I have googled a lot and seen others with similar problems, but none of the suggestions has helped.

    Read the article
